Helping front office teams grow better

Be your AI's editor - #409

In a demonstration video of HubSpot's in-CRM AI, called, like most AI things in software, a copilot, the voiceover reminded us to always verify what the AI produced. Whether you ask it to summarize a CRM record, write an introductory email, or answer a 'how does this work' question, the trainer wanted to make sure viewers verified what their copilot produces. The message seemed to be the old Reagan line, "trust, but verify," although it didn't leave me feeling too trusting.

This need to "make sure it's actually right" is a reason that the people hyping AI online seem to be hyping its least valuable uses. It seems like the only thing you can find on LinkedIn is videos of people making LinkedIn posts using AI. The vapidity of most writing on LinkedIn makes it pretty easy to spoof; the low stakes of a social post mean that the cost of being wrong approaches zero.

In these wide-open scenarios, AIs tend to focus on the wrong things, provide oddball answers, and need a lot of coaching. The article linked below helpfully analogizes AI to a new, inexperienced coworker. Like a new employee, it needs a lot of training and coaching to get things right. The coaching the article advises is better prompt writing. The writer says this will take about ten hours, which seems about right. But you still end up with a pile of text that you'll need to edit before it's useful. While it's a start, I'm doubtful this does all that much for us.

The first time I saw an AI produce anything remotely useful was when I gave an OpenAI model a training file and specific instructions, then asked it real questions. The file was a fairly intricate matrix of territories and service types. My instructions were basically to use the file to categorize inquiries into the appropriate territory and service. I then passed it a variety of hard-to-categorize requests and it gave me back the categorizations. It was reasonably accurate and promised to save approximately a boatload of sorting time per incoming request. That time saved on sorting would let people focus on actually solving the request.
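
For the curious, here's a minimal sketch of what that kind of setup might look like. It is not the exact configuration described above: the file name, the model choice, and the prompt wording are all invented for illustration, and it assumes the OpenAI Python SDK with an API key already configured.

```python
# A hedged sketch, not the newsletter's actual setup. Assumes a made-up CSV
# ("routing_matrix.csv") mapping territories to service types, and the
# OpenAI chat completions API (pip install openai, OPENAI_API_KEY set).
from pathlib import Path

from openai import OpenAI

client = OpenAI()

# Load the routing matrix so the model grounds its answer in it rather than guessing.
matrix = Path("routing_matrix.csv").read_text()

SYSTEM_PROMPT = (
    "You route incoming customer inquiries. Using ONLY the matrix below, "
    "return the territory and service type that best fit the inquiry, "
    "formatted as 'Territory: ... | Service: ...'. If nothing fits, say so.\n\n"
    f"{matrix}"
)

def categorize(inquiry: str) -> str:
    """Map one free-text inquiry onto the territory/service matrix."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat-capable model would do here
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": inquiry},
        ],
        temperature=0,  # keep the routing as repeatable as possible
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(categorize("We need on-site repair for three units at our Lyon office."))
```

The point of the structure is the same as in the anecdote: the specific file plus specific instructions do the heavy lifting, and a person still reviews the output before acting on it.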

I think it's that layer that'll make this tool really useful: completing a hard task based on specific information. In CRM contexts, the hard task is making unstructured data useful. There are any number of BI tools to summarize and visualize structured data. But with unstructured stuff—piles of emails, meeting records, and user-generated notes and tasks—there's not much you can do beyond counting it; extracting and summarizing remain difficult tasks. Our CRM copilot gets at this; so will any AI tool that lets you input files and then get responses.
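
To make that concrete, here is another hedged sketch: pointing a model at a pile of unstructured account notes and asking for a structured summary. The folder name, fields, and prompt are invented, and this assumes the same OpenAI SDK as the example above, not any particular CRM copilot's internals.

```python
# A hedged sketch of summarizing unstructured CRM notes into a few structured
# fields. "account_notes/*.txt" and the requested fields are illustrative only.
from pathlib import Path

from openai import OpenAI

client = OpenAI()

# Concatenate the raw notes, emails, and meeting records for one account.
notes = "\n\n---\n\n".join(
    p.read_text() for p in sorted(Path("account_notes").glob("*.txt"))
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": (
                "Summarize the CRM notes below for an account review. "
                "Return three bullet points: current status, open risks, "
                "and the next action owed to the customer."
            ),
        },
        {"role": "user", "content": notes},
    ],
    temperature=0,
)

print(response.choices[0].message.content)
```

As with the routing example, the output is a draft for a human to check, not a finished answer.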

"Review before you use" and "make the assignment clear" are kinds of editing. While some people think AI can become its own editor (I think this is what Microsoft means by its Magentic-One "Orchestrator"), I'm doubtful about this tool's ability to consistently make useful decisions. It's still a little too much like a cub reporter: over-eager, recognizing the wrong patterns, producing voluminous copy, etc. It needs a stern editor to use the red pen and to assign the story in the first place; it needs grizzled veteran to tell it what's what. For most of us knowledge workers, that remains the bulk of being 'good enough' at AI.


Reading

Getting started with AI: Good enough prompting

Don't make this hard.

oneusefulthing.org