I’ve written sixty-something blog posts since ChatGPT burst onto the scene. Until now, I’ve avoided writing one about using the tool itself.
Sure, generative AI has made a cameo appearance in some earlier posts; it just hasn’t been the lead actor.
Today, that time has come.
I’ve been asked one too many times how I use ChatGPT in my content creation process, how I recommend my clients take advantage of it, and whether I think the days of human content creators are numbered.
My answers—including the post I’m about to write—are based on several months of personal experimentation, reading other people’s articles, and observing how people use ChatGPT and react to articles that it has produced.
I don’t pretend to have found a perfect solution; that’s the BS of clickbait ads.
Nor do I pretend to be an expert at generative AI prompting; I’m an inexperienced amateur, at best.
However, I have formed some clear opinions about certain aspects of ChatGPT usage and the issues that arise, which I’m ready to share.
Let’s start by dismissing the headline I chose for this post.
Large language models (LLMs), such as ChatGPT, are large computer models that are somewhat competent at understanding human language.
This means they can parse—in other words, translate—the prompt that they are given into a set of instructions for finding relevant information within the (enormous) data set on which they’ve been trained.
It also means they can combine that information into a response that fits any stylistic instructions included in the prompt.
Make no mistake, LLMs are getting very good at these tasks—and they can perform them incredibly quickly, thanks to gobs of cloud computing resources.
A quick aside: Quoting from a LinkedIn article by Paul Walsh, an AI and analytics expert at Accenture, “[It] is estimated that the carbon footprint of training a large language model is equal to around 300,000 kg of carbon dioxide emissions, which is roughly equivalent to 125 round-trip flights between New York and Beijing.” Those gobs of computing resources come with strings attached.
Anyhow, back to the question at hand.
Despite their ability to hoover up relevant information and turn it into extremely credible content, LLMs have no understanding of the subject in question.
Other than the context in which the information is found—for example, the domain authority of the source—LLMs have no idea whether information is real or even realistic.
This leads to the widely publicized problem of LLM hallucinations, where the code spits out a New York Times-quality write-up on something that’s pure fiction.
Secondly, LLMs do not experience emotions.
While interacting extensively with their conversational interface might lead someone to believe they have struck up a “relationship” with their instance of ChatGPT, the code on the other side assuredly doesn’t have “mutual feelings”.
It can be difficult to separate writing style, mood, and tone from actual emotion.
LLMs can certainly produce evocative prose that sounds emotionally charged. But it’s just that—evocative prose. Words laced with meaning, smartly written to evoke a desired emotion in the reader.
What separates authentic content written by a human being from an equivalent piece written by a large language model is the human’s experience and genuine emotion associated with the topic.
A tool like ChatGPT can produce authentic-sounding content on a very dry topic; when it’s all facts and no emotion, a computer can write much the same as a human (and, arguably, even better).
But effective content marketing depends on publishing relevant, helpful information that engages the reader, evokes a desired emotional response, and helps to build a trusted relationship.
With that objective in mind, it’s impossible to produce authentic content using ChatGPT alone.
And there’s the clue to what comes next.
The approach I have evolved for using ChatGPT (and other LLMs) in content creation starts with breaking the process down into steps.
In big handfuls, this means:
> Ideation: Coming up with topics to write about.
> Outlining: Deciding what subtopics should be included and how they might flow.
> Research: Finding out what has already been written on the subject and gathering relevant facts and opinions to reference.
> Drafting: Producing a working draft.
> Review and Editing: Fact-checking and optimizing the content, flow, voice, tone, and style.
> Production and Publication: Adding images, improving the layout and readability, adding metadata and other on-page SEO, and pushing out on appropriate channels.
Several of these steps lend themselves well to tools like ChatGPT.
Specifically: ideation, outlining, research, and editing.
Ideation, a.k.a. brainstorming, works by asking multiple people to come up with ideas related to a particular topic or question.
They don’t critique any of them; they just riff off each other’s ideas until the flow dries up.
LLMs are great at this game!
The same goes for outlining once a topic has been chosen, and for coming up with catchy headlines (although ChatGPT is too prone to replicating headline tropes for my liking).
Where ChatGPT really shines is at performing research. More on that in a moment.
Finally, don’t overlook uploading your draft content into ChatGPT and asking it to improve your writing.
Whether you want to sound more sophisticated or just make the piece more readable, ChatGPT can take all sorts of instructions and return an improved draft for your consideration.
Just for kicks, try asking it to rewrite something in iambic pentameter (the rhythmic poetic style favored by one W. Shakespeare).
Here’s what the first section of this post becomes:
In vast expanse, the language models lie,
With vast compute, they grasp our speech and sigh.
They take the prompts we give, with ease they find,
Within their trained vast stores, the thought designed.
To craft a phrase with style, they know the art,
Their speed, unmatched; in cloud, they play their part.
Yet, as Walsh said, there's cost that's not so small:
For one trained model, flights round-trip they'd equal all.
But let's return to models and their might,
Though swift and smart, they lack true human sight.
Context known, yet truth from lies they can't discern,
Hallucinations, fictions, from them we learn.
They lack the heart, the feelings that we hold,
With them, no mutual bond or tales retold.
Though prose they pen might stir a soul or two,
It’s just a craft, no heartfelt tears ensue.
Humans write with memories, joys, and pain,
While models craft with algorithms' reign.
In dry discourse, both human and machine,
Might sound the same, their styles both sharp and keen.
Yet content true needs trust, and heart, and soul,
That's what makes human writing truly whole.
Now, let’s focus on research…
For generations, writers seeking to find and reference information from “the literature” had to visit a library.
Until the early 20th century, “the literature” was entirely paper-based: books, journals, newspapers, pamphlets, papyrus scrolls, and so on.
The use of film for information storage became common in the 1920s, followed by microfiche in the 1930s—replacing reels of microfilm with flat sheets, more suitable for archiving newspapers and other large documents.
Although mainframes and minicomputers appeared in the 1960s and ’70s, it wasn’t until the introduction of the personal computer in the 1980s, and digital storage media in the 1990s, that true digitization began.
Foundations for the ubiquitous access to digital information that we enjoy today—and which makes LLMs possible—weren’t laid until digital archiving projects, such as Google Books, kicked off in the late 1990s.
How on earth did anyone find anything before Google?
Those seeking the breadth and depth of information that ChatGPT can access had to hire the services of a researcher—frequently, a team of researchers.
For those wanting something a bit more affordable, this translated to sending a junior employee (even better, an intern) to the archive in search of information.
That’s my mental model of what happens when I type a prompt into ChatGPT.
I’m sending a digital intern to the (now digital) library to dig up relevant information.
Fortunately, my ChatGPT intern can work thousands of times faster than its human predecessors—and writes much better prose.
So far, so good.
Next question: What would you have done with the information that an intern brought back, written up as a nice article for you to publish?
Would you publish it blindly? Or would you ask a few questions and make some edits?
Given that the intern usually knows precious little about the topic they are being asked to research, they can’t possibly gauge which facts are most important—or even whether some of those facts are really facts, conjecture, speculation, or flat-out lies.
Is this sounding eerily familiar? Context known, yet truth from lies they can't discern…
Only a fool would simply insert their name in the byline and publish an intern’s work without giving it some serious fact-checking, editing, and revision.
A subject matter expert must inject personal knowledge and experience to distinguish the piece from something any old intern could produce.
They would use the intern as a research assistant, not a primary author.
And that’s the perfect way to think about using ChatGPT.
The way I use ChatGPT when producing B2B content—and the way I counsel my clients to use it—is as follows:
If I’m stuck on topics to write about, I ask ChatGPT to ideate a list, then see whether anything catches my eye or triggers me to come up with something even better.
Once I’ve picked the topic, I ask ChatGPT to come up with a list of subtopics that will be of interest to my target audience. There are several ways of asking this question, which can yield different results, including “most popular topics”, “most relevant topics”, “most important issues”, etc. So, I run this step a few times before choosing the subtopics and organizing them into a sensible order.
As I’m drafting the piece, I use ChatGPT to research facts and figures (being careful to factor in the LLM’s blind spot for events and data from after its training cutoff) that I can then sift through and incorporate. I ask the LLM to cite its sources and then check them for credibility and relevance.
After I’ve drafted the piece, I sometimes use ChatGPT to help refine or rewrite sections that aren’t flowing as nicely as I’d like. I also ask it to check for stylistic errors and suggest ways to improve readability. Some of them I incorporate; some I don’t.
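For readers who like to tinker, the prompts behind these steps can even be assembled programmatically. Here’s a minimal sketch in Python of how I think about the step-wise prompting; the prompt wording and function names are my own illustrative assumptions, and you’d still paste each prompt into ChatGPT (or send it through an API client) yourself:

```python
# A sketch of the step-wise prompting workflow described above.
# The exact prompt wording is an illustrative assumption, not a recipe;
# any chat-capable LLM could stand in for ChatGPT here.

def ideation_prompt(theme: str, count: int = 10) -> str:
    """Step 1: ask for a list of topic ideas to react to."""
    return (
        f"Brainstorm {count} blog post topics about {theme} "
        "for a B2B content marketing audience. List them briefly."
    )

def subtopic_prompt(topic: str, angle: str) -> str:
    """Step 2: run this several times with different angles
    ('most popular', 'most relevant', 'most important') and
    compare the results before choosing and ordering subtopics."""
    return (
        f"List the {angle} subtopics a B2B audience would expect "
        f"in an article about: {topic}"
    )

def research_prompt(question: str) -> str:
    """Step 3: always ask for sources so they can be checked
    for credibility and relevance before anything is used."""
    return (
        f"{question} Cite your sources for every fact or figure, "
        "so I can verify their credibility and relevance."
    )

if __name__ == "__main__":
    print(ideation_prompt("authentic content creation", 5))
```

Note that nothing here drafts the article; the functions only generate prompts for ideation, outlining, and research—the steps where the tool assists rather than authors.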
The most important thing to understand is that I don’t let ChatGPT write the draft.
That prerogative belongs to me, so that my expertise, insights, beliefs, emotions, opinions, and personality can pervade the finished product.
That’s the measure of authenticity.
Ah yes, the third question that I mentioned in my introduction: Are human content writers going the way of the dodo?
No. They are not.
Are human content writers who spurn ChatGPT at risk of becoming obsolete?
Yes. They certainly are.
Content producers who find themselves out of a job because of the continued rise of LLMs will be those who fail to incorporate them into their work.
They will be displaced by producers who learn to embrace and capitalize on the best ideation and research assistant humanity has ever produced.
I recommend investing the time and effort to learn how and when best to use these tools, while always remembering that it’s you—the human—that brings the authenticity.
As with all things prognosticative, individual results may vary.
ChatGPT was not used in the production of this blog post except for transforming the first section into iambic pentameter, which was totally worth it.
Sign up today for our free Substack newsletter, in which we share our latest blog content, a selection of B2B content marketing insights gathered from across the web, and quick, actionable tips for taking your content marketing to the next level.
Sign up here!
Image credits: Adobe Stock