Should LLMs Write Marketing Copy?
On The Marketing Mix podcast, Charley Karpiuk advises against using LLMs to write marketing copy, and he proposes an interesting rule: Machines should write for machines — e.g. by creating metadata — and people should write for people. That sounded like good advice, and I resolved to follow it, but in the thick of the website makeover discussed last time, I found myself bending the rule.
The makeover entailed merging sites for two independently-evolved products that needed to come together under a common umbrella. Each required a set of pages about key features and benefits. We weren’t starting from scratch; we had blog posts and other material, but the existing docs didn’t conform to the structure and style we wanted for the new site.
For each new post, I started by gathering links to existing docs and providing them to my LLM assistants. With ChatGPT (GPT-4) and the WebPilot plugin, you just say “Here are links for context” and then provide a list of links. To install the plugin, use the dropdown under the GPT-4 button, scroll down, visit the Plugin store link, then search for and install WebPilot. Then make sure you’re actually using the Plugins model; it’s easy to miss that step and wonder why WebPilot isn’t springing into action to read your links!
With Sourcegraph’s Cody or GitHub Copilot, you can copy the text into an open tab and refer to it that way. But wait, don’t these LLMs specialize in writing and analyzing code? Yes and no. While their system prompts undoubtedly bias them toward code-related work, the underlying LLMs can serve many other purposes. Cody, for example, is built on Anthropic’s Claude which, like ChatGPT, is a general-purpose LLM that’s useful for text summarization, classification, and other tasks. Cody provided me with early access to Claude, and I used it for the exercise described here. Later I gained direct access to Claude and found that it natively works like ChatGPT + WebPilot: give it a list of URLs and it will use them as context. Although Claude can handle a lot more tokens than ChatGPT can, I don’t think that mattered for this exercise.
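For readers who'd rather script this than paste into a chat window, here's a minimal sketch of the links-as-context workflow. I did this interactively via Cody and ChatGPT, not through an API, so the page-fetching helper, the prompt wording, and the commented-out API call are all illustrative assumptions, not my actual setup.

```python
# Sketch: gather existing docs as context, then ask an LLM for a draft.
# Assumptions: URLs return readable text (real use would strip HTML),
# and you'd send the prompt via some LLM API of your choice.
import urllib.request


def fetch_text(url: str) -> str:
    """Fetch a page's raw text; a real version would convert HTML to plain text."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace")


def build_context_prompt(docs: list[str], title: str, subtitle: str) -> str:
    """Assemble existing docs plus the new article's title and subtitle
    into one prompt asking for a draft in the sample's style and tone."""
    context = "\n\n---\n\n".join(docs)
    return (
        "Here are existing docs for context:\n\n"
        f"{context}\n\n"
        f"Using them, draft an article titled '{title}' ({subtitle}) "
        "in the same style and tone as the sample document above."
    )


# The assembled prompt would then go to a general-purpose LLM, e.g.
# (hypothetical call, check your SDK's current docs):
#   client = anthropic.Anthropic()
#   client.messages.create(model="...", max_tokens=1024,
#                          messages=[{"role": "user", "content": prompt}])
```

The point isn't the plumbing; it's that any general-purpose LLM can take a pile of existing docs plus a title and produce a jump-start draft.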
The final pieces of context were a sample document that discussed a particular set of features and benefits — in the style and tone of the new documents we wanted to create — along with a title and subtitle for an article to be modeled on the sample. Given this setup, what were my expectations? I emphatically did not expect the LLMs to write anything close to final copy, and they most assuredly did not. I did expect them to jump-start the writing process by surfacing key points from the provided context, and they did that to varying degrees of usefulness.
None of the assistants was strikingly more helpful than any other but, as I mentioned last time, a chorus of stochastic parrots can outperform a lone voice. I wound up including a couple of topics that I’d have missed without their input.
Asking for Feedback
After writing a post, I asked the LLMs to review it. Some responses were nonsensical, others warranted thought. Although I wound up making few if any changes based on this feedback, I still think it was a useful exercise. Writers, as well as coders, can benefit from rubber ducks that talk back. Writing is a lonely task, and even low-quality feedback can spark ideas that yield worthwhile revisions.
I then asked the LLMs to compare the versions they wrote to my version, which I described as having been written by a human expert. The responses were intriguing.
Using the expert example for guidance, I can see several areas to enhance my marketing docs:
- Lead with the customer problem/pain points being addressed
- Use benefit-focused subheadings to draw in the reader
- Incorporate more vivid details through analogies and examples
- Conclude with a stronger call to action around the benefits
- The human expert’s document is more engaging and uses more persuasive language.
- The human expert’s document provides more detailed descriptions of the features and benefits.
But when asked to apply these principles, they didn’t improve on their original efforts. My judgment is, of course, informed by a long career in which I’ve often earned my living as a professional writer. There’s no way to say this without sounding self-serving, but I’ll say it anyway: Charley Karpiuk is correct; you should employ well-qualified people to write your marketing literature. When that’s not possible, machine-written copy may have to suffice, but it probably won’t be great.
Channeling Orwell, Strunk, and White
That said, I find that LLMs can be very useful tools for improving routine marketing docs. Here’s a wickedly effective prompt:
“Please evaluate this marketing document according to George Orwell’s writing guidelines as laid out in his essay “Politics and the English Language”. Which parts would make him cringe?”
Claude’s response in one case was priceless:
“This document would likely make Orwell cringe due to issues like passive voice, meaningless phrases, long or pretentious words where short and simple words would do, lack of concrete details, slipshod extension or long and winding sentences, and lack of rhythm or flow.”
Here’s another effective prompt:
“I’m going to show you a marketing document. Please revise it according to Strunk and White: Omit needless words, prefer short Anglo-Saxon words to long latinate words, use active voice.”
Responses may or may not preserve all the detail you need, but they can lay a solid foundation for a rewrite. I once taught expository writing, and have long thought about ways software could assist such a teacher. When you recruit LLMs to channel Orwell, or Strunk and White, you gain the sort of guidance that I would offer. These two prompts, widely applied, could degunk routine business communication and help its writers express themselves better.
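If you find yourself running these reviews often, the prompts are easy to can. The prompt texts below come straight from this post; the helper function and its names are a hypothetical sketch, not tooling I actually built.

```python
# Sketch: reusable review prompts in the spirit of Orwell and of
# Strunk and White. Prompt texts are from the post; everything else
# is illustrative.

ORWELL_PROMPT = (
    "Please evaluate this marketing document according to George Orwell's "
    "writing guidelines as laid out in his essay \"Politics and the English "
    "Language\". Which parts would make him cringe?"
)

STRUNK_WHITE_PROMPT = (
    "I'm going to show you a marketing document. Please revise it according "
    "to Strunk and White: Omit needless words, prefer short Anglo-Saxon "
    "words to long latinate words, use active voice."
)


def review_request(document: str, style: str = "orwell") -> str:
    """Combine one of the canned review prompts with the document under
    review, ready to send to whatever LLM you have on hand."""
    prompt = ORWELL_PROMPT if style == "orwell" else STRUNK_WHITE_PROMPT
    return f"{prompt}\n\n---\n\n{document}"
```

Paste the result into any chat assistant, or send it through an API, and you have a rubber duck that talks back.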
Guided Experiences for Apprentice Writers
In Rewriting a press release: A step-by-step guide, I showed how to apply principles from Strunk and White’s The Elements of Style to a poorly-written technical press release. In the best possible world, the author of that press release would benefit from a review by an expert human writer. In the real world that will rarely happen, but having LLM reviewers can be much better than having no reviewers.
I wrote that guide in the pre-LLM era, and wondered if LLMs could now match the final rewrite. Although none made improvements worth keeping, the feedback was (as usual) worth considering. Still, I can envision LLM-backed tooling that would turn this kind of interaction into a more structured educational experience. In GitHub for English teachers, I reprised the press-release example in a crude effort to make the step-by-step guide more interactive. The result was a ton of work, beyond the capabilities of all but the most dedicated and technically-minded teacher, and still not very effective. I’ve long imagined a way for teachers to deliver guided experiences, tailored to specific texts and rubrics, that walk students through detailed line-by-line editing. I would love to see LLM-backed apps empower teachers in that way.