Last November, after years of pent-up hype, OpenAI unleashed a tidal wave of interest in generative AI with the release of ChatGPT. Since then, the media cycle has been in overdrive with speculation about the technology’s consequences for the future of work, business and culture. And the pitch is particularly potent when it comes to content creation in the communications industry.
Of course, the launch of ChatGPT has itself been nothing short of a PR and marketing coup. While its underlying technology is evolutionary rather than revolutionary, a well-timed leap from limited, invite-only access to free availability for all meant that social media platforms were soon swamped by its outputs.
In the aftermath, generative AI has been heralded as a new paradigm for society, an apocalyptic step towards ubiquitous low-grade information, and everything in between. Meanwhile, Google reportedly issued a “code red” as it saw the go-to-market momentum slipping away from its own costly in-house development initiatives.
The machines are improving quickly…
But beyond the hype, what does this mean for communications professionals? Could a generative AI strategize its own launch into the public consciousness as effectively as the humans working at OpenAI did? Will it, with a little more technological progress, be writing articles and developing brand imagery in ways that compete with human creatives? Are we, in short, facing the kind of technological disruption which regularly upends how industries operate?
I believe that the answer to all of these questions is, almost certainly, no – but not for the reasons which have so far dominated the skeptical end of the public conversation around generative AI.
The tendency has been for skeptics to focus on doubts about the technology that future developments might answer. For instance, it’s a fair guess that better training processes can solve the infamous habit image-generating AI tools like Midjourney and DALL-E have of depicting people with an inaccurate, and often disturbing, number of fingers. Likewise, improved guardrails will likely fix the issue of tools directly replicating their training material – a phenomenon at the heart of Getty Images’ lawsuits over copyright infringement.
More serious is the issue of untrue information being presented as factual. Following that “code red” emergency, Google’s launch of its ChatGPT competitor didn’t achieve quite the same PR success. In the tool’s first demo video, it answered a question incorrectly, and investors promptly shaved $100 billion off the company’s market value. The risks of similar mistakes being made around sensitive topics like health or personal finances hardly need spelling out.
The indifference to truth, accuracy and legitimacy that the current wave of generative AI tools display is a real cause for concern. It’s reasonable to expect social harms as these platforms are integrated into search results, and it’s prudent to be wary of the consequences such errors could have if these tools are integrated into communications workflows. However, it would be bold to claim that the problem is insoluble, and that R&D teams will not be able to guide future generative AI towards reliability, or at least much closer to it.
… but the machines are still machines
So, acknowledging the power of progress to refine and perfect, why wouldn’t an ideal generative AI tool under ideal conditions be able to serve up your next content marketing platform?
The answer lies in thinking about both how generative AI works and what it means for something to be ‘new’. Newness is, on the surface, a very simple concept: if a thing exists which didn’t exist before, it feels fair to call it new. This is also one of the bare-minimum qualities we expect of communications activities. No brand would dream of running a competitor’s creative collateral as its own: the legal implications of doing so would pale in comparison to the reputational damage.
The idea of newness breaks down at its extremes, though. At one edge, changing a single word in a text would make it, by a very literal definition, ‘new’ – but no reasonable person would agree with that description. At the other edge, every text must involve borrowed ideas, turns of phrase, and so on – but no reasonable person would say that such borrowings stop it from being ‘new’.
Between these extremes are many diverse types of newness. Cover songs, love letters, opinion pieces, billboard adverts and the buildings they are attached to are all acts of creation with unique ways of potentially being new. The question, then, is what exact type of new generative AI can achieve, and how well it matches the type of new that communications programs require.
While tools like ChatGPT are often described as mimicking the human brain, it’s more accurate to say that they are merely inspired by it. On a scientific level, their functioning is entirely different, and they cannot actively seek originality in the way a person can.
That’s because, rather than interpreting information as people do, through an understanding of how the world works, generative AI operates through a process much closer to taking a mathematical average of various numbers. Any generative AI output emerges from the spaces between the texts or images it was trained on – and while such combinations might surprise us, they cannot step beyond the boundaries of that dataset.
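To make that ‘averaging’ analogy concrete, here’s a toy sketch in Python – purely illustrative, and nothing like a real model’s implementation. If we picture each training example as a point in space, then any weighted average of those points can land anywhere between them, but never outside the region they enclose:

```python
import numpy as np

# Toy illustration of the "averaging" analogy; not a real generative model.
# Picture each training example as a point in a simple 2D feature space.
training_points = np.array([
    [0.0, 0.0],
    [1.0, 0.0],
    [0.0, 1.0],
])

rng = np.random.default_rng(seed=42)

for _ in range(5):
    # Draw non-negative weights that sum to 1 (a convex combination).
    weights = rng.dirichlet(np.ones(len(training_points)))
    generated = weights @ training_points
    # Each "generated" point falls inside the triangle spanned by the
    # training data: surprising blends are possible, but nothing beyond
    # the boundaries of the dataset.
    print(generated)
```

Real models interpolate in vastly higher-dimensional learned spaces, which is why their outputs can feel novel while still being bounded, in this sense, by their training data.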
On the other hand, brand communications work best when they articulate difference by stepping beyond the spaces already defined in the market. In advertising campaigns, new ideas create the strongest emotional responses. In crisis response, specific replies are needed for fresh events as they emerge. In thought leadership, original arguments and advice are the ones most worth sharing.
Or, more briefly: if your brand has a real story to tell, you’ve already passed beyond the competency of generative AI to tell it.
An unwritten future
Handing off the full end-to-end creative workflow to the machines might not be the only option, though. Opinion pieces offer endless speculation about how these tools may come to play a collaborative role in creative endeavors, just as GitHub is positioning its own AI platform as a ‘co-pilot’ for programmers, not a replacement for them.
That’s an open question for the future, one we’ll settle as persuasive evidence emerges for and against it. What is clear is that part of these tools’ measure of success will be how little they constrain the output: their position in the workflow should only be seen as valuable to the extent that it does not limit businesses’ ability to tell persuasive new stories about their brands, products and services.
Over a long enough timescale, those stories might be told without human intervention. Indeed, the founding goal of OpenAI is not to build generative AI, but to deliver artificial general intelligence which outperforms humans in the broadest possible sense. Until we get there, though, human creativity will remain the vital ingredient for effective, original, worthwhile communications.