What AI Taught Me About Advertising
About creative value, the price of free work & Slob Saturation Syndrome
Slob Saturation Syndrome is what I call the mix of apathy and resentment that bubbles in my brain whenever I read the predictable prose of ChatGPT or one of its LLM cousins.
The redundant words, the generic superlatives, the soulless excitement… Ugh!
Most AI-generated videos give me the same eerie feeling. The objects never seem to react to each other, they just kinda float around in isolation… And they creep me out!
But what really scares me is when I can’t see it’s AI.
Well-made AI content doesn’t create the same uncanny valley or feeling of Das Unheimliche.
The professional FX artists in the video above can still spot it — but to most people, AI-generated content has become indistinguishable from human-made content.
And that’s scary.
Because AI can’t express the existential effort and evolutionary costliness that make human communication meaningful in the first place.
But it sure can spam a bunch of slob real fast.
The long-term consequence could be a devastating collapse of trust across all digital channels and beyond.
A hundred years ago, art lost its aura to mechanical reproduction (thanks, photography!) and gave us modernism and more abstract, conceptual styles of expression — but in turn, made the visual arts increasingly niche.
Will AI do the same to design and advertising?
Will removing the struggle from creative work make it lose its impact?
Without human effort, will anything matter?!?
Note: When I talk about AI writing, I mean Large Language Models, and when I talk about AI images and videos, I mean Generative AI. But technicalities are less important than existential consequences, so I just call it AI.
Typing vs. Writing
AI is a wonderful assistant but a terrible writer.
Because it analyzes a billion human mistakes and then presents a passable average — instead of looking for a single human truth and shaping it into a meaningful story.
AI is good at imitating people who imitate writers and artists.
But not so great at finding anything unique to say.
It’s why the machine spits out words that look like writing but are impossible to remember; they are the perfectly generic average of what everyone else is saying.
In copywriting, it’s because much of the AI’s training data was itself imitation, written by amateurs: well-meaning designers and managers who imitated their competition — and who now find reassurance in the fact that AI produces copy as bland and forgettable as their own.
If you are a middle manager somewhere and ask ChatGPT to write an advert, you might think, “Wow, this is better than anything I could have written.”
Well… yeah.
Because you’re a middle manager somewhere and not a copywriter.
After two years of working with (and against) various AIs, I have an ever-growing prompt manuscript that occasionally gives me a few good lines.
Sometimes even a half-decent paragraph.
Just never a story.
Current AI has a hard time with storytelling because it blindly follows patterns instead of leading the reader.
It doesn’t know where it’s going and finds the most predictable words given its data — but can’t connect a setup to a payoff.
As a result, the copy elements float aimlessly, like in an AI video where the objects never react to each other.
The creators of South Park have a great rule for storytelling:
Every scene should be tied to the previous with either “therefore” or “but.”
The rule ensures that the scenes react to each other and push the story forward. If they don’t, it’s not a story — it’s just a list of events.
The same is true for copywriting, all the way down to how one sentence interacts with another.
AI is great at typing up lists that look like copy — but bad at finding the connections and conflicts that make them worth reading.
Soon, I’ll Be Wrong
As I’m writing this, AI is catching up, and by the time you read it, a new model might have come out and proved me wrong.
After all, does every aspiring artist not imitate until they put in their 10,000 hours?
And is an accurate prompt manuscript really that much different from a cookie-cutter Hollywood formula?
GPT-4.5 has just been released (two weeks ago at the time of writing) and is already much better at storytelling than its predecessor.
In copywriting, much of the quantitative work, like corporate blogs and SEO fluff, has already fallen off a cliff.
Before AI, long-form content — even if unread — signaled a brand with something to say.
It was one of the reasons I started this blog.
Now, most SEO articles and business blogs are written by one machine for another in the silence of the dead internet.
For long-form content to be relevant to humans, it must have a human voice and express human experience.
Oh wait, that was another reason I started this blog.
It’s almost as if I know where I’m going with this… unlike AI!
But okay… Enough AI bashing.
Soon enough, I’ll be wrong.
Let’s assume that by this time next year, AI can think, imagine, and create better than any human.
What will it mean for the evolution of human communication?
Cheap Tricks & Costly Signals
I’ve previously written a poet’s guide to Costly Signaling Theory (it’s pretty good, you should read it!) but in short, it’s why the peacock has its tail and why branding has to be expensive.
Costly Signaling Theory is the evolutionary explanation for why we instinctively trust organisms that sacrifice resources on their messaging. The sacrifice is invested proof of long-term commitment. It’s one of the most fundamental premises of branding, communication, and even life.
It’s the reason a handwritten love letter holds more magic than a text message.
Because all the costly signals surrounding the message are the message.
The text is just information.
The message is in the style, the packaging, the quality of the paper, the aesthetics of the handwriting, and all the effort you put into it.
It’s the same reason singing evokes more emotion than just speaking — because it’s harder to do.
And it’s why much of digital advertising is ineffective — because it’s too easy and too cheap.
In everyday life, the human brain is a highly advanced scam filter, analyzing millions of surrounding signals as safe or dangerous, coherent or detached. It’s your gut feeling and the little voice in your head when something isn’t right.
In digital life, humans don’t have the same evolutionary advantage. We get fewer signals to process and no natural context to compare them in. We don’t receive the same psychological proof of commitment because we don’t get the tactile, visceral bonus-messaging we would in real life. As a result, we trust the information less.
As digital consumers, we look for authority signals from influencers — until they run a crypto scam or turn out to be Nazis.
We look for social signals in reviews — now written by bots.
And we look for costly signals in the tangible and creative resources spent on messaging — now cheapened by automation and AI.
In the 80s and 90s, the “big TV commercial” (over-the-top productions, Hollywood-style aesthetics) was effective because it cost a lot to make. Time, money, creativity. The extravagance and all the invested resources proved the brand’s commitment.
But as CGI made Hollywood visuals cheaper to produce, the big-budget TV commercial lost its impact. And once anyone could upload videos to YouTube, buying airtime on linear TV was no longer costly, just expensive.
Now AI raises the question: if everything can be done for cheap, does anything cost enough to matter?
Without costly signals to show commitment and build trust, a message is just information, drowning in noise.
The cheaper the channels are, the more costly the creative efforts must appear.
If something costs nothing to think, make, or send, it has no value and little meaning.
Aura in the Age of Artificial Production
Every advance in media has come at a cost in meaning.
Every time something becomes easier to express, it loses some of its signaling power.
600 years ago, Gutenberg’s printing press liberated the written word but took away its esoteric connection to religion and magic.
90 years ago, philosopher Walter Benjamin observed how mechanical reproduction robbed art of its “aura” — its uniqueness, history, and weight of human effort.
In Benjamin’s day, it was surprising that a photographic recreation of an original piece of art had less aura than a handmade copy.
It may seem silly now, as handcrafted has become synonymous with exclusive, but at the time, anything machine-made was new, rare, and exciting.
If the main difference between an original piece of art and a copy was supposed to be its uniqueness and history, why did mechanically reproduced posters still have less aura than handmade forgeries?
Indexical signals (traces of human activity, such as the depth of brushstrokes) held a psychological power that no one could have predicted: they are the thousand tiny bits of evidence that prove someone made an effort.
A mechanical reproduction doesn’t carry the same meaning as a handmade copy because it doesn’t contain the same micro-proof of human involvement.
Now, in the age of artificial production, we give up the last traces of human effort… and we lose not aura but soul.
AI art comes with a massive promise of efficiency — but at the expense of effectiveness.
When everything can be effortlessly produced, it loses its ability to connect.
And the loss of human connection is a high price to pay for free work.
The Price of No Cost
The more I think about the long-term consequences, the more I consider writing my dystopian sci-fi novel while I still have time.
But for advertising, marketing, and media, here are my short-term predictions:
As Slob Saturation Syndrome goes pandemic, people become suspicious of anything generic.
Hyper-polished AI copy will backfire. Flaws, humor, and personality will become counter-signals that prove real human involvement.
Brands that invest in human-made, high-effort creative work can stand out in the noise of cheap, automated fluff. But will they?
Craft becomes (more than) a buzzword.
Dedication to a craft has always been a costly way to build trust.
In pre-modern societies, guilds for craftsmen (carpenters, blacksmiths, etc.) had long and difficult initiation rituals for new members, sometimes lasting several years. The challenges were proof that the members were dedicated and unlikely to scam their customers — because if they did, they’d be thrown out of the guild and lose their invested effort.
When AI takes the effort out of the craft, we will look for new ways to gatekeep and express human dedication and intent.
As AI floods the internet with glossy slob, we look for costly counter-signals and handmade aesthetics.
If AI copy becomes the norm, “written by a human” becomes a value.
7 Ways to Write Like a Human
Cut the fluff. “Write hard and clear about what hurts,” as Hemingway suggests. AI writing is full of redundancies and filler words; brutal clarity and minimalism are the antidote.
Be distinct and brave. AI-generated content plays it safe with generic neutrality. And even if it writes something original, it won’t fight for its idea when a risk-averse manager tries to kill it. Write with personality, strong opinions, and a clear point of view.
Make long-form content personal. Write from experience, go gonzo on it, attack it like Hunter S. Thompson would. AI can’t write from personal experience because it can't experience.
Use humor and surprise. AI is still bad at layered humor and irony. Humor is one of the most human ways of communicating. People have inside jokes and shared references; robots don’t.
Embrace imperfections. Hyper-correct grammar and overly polished language once suggested a certain level of education and thoroughness. Now, it’s so easy, it’s almost lazy. Break the rules. Use slang. Find out what swear words your brand can use. Fuck being perfect.
Beware the long dash. I love the long dash (“—”) but so does AI. Just this past week, I’ve seen two copywriters on LinkedIn complain that clients flagged their long dashes as a sign of AI. Soon, consumers might do the same.
Write like a human, edit like an AI. For companies with large writing teams, a custom GPT can be an intelligent style guide and memory — improving quality assurance and consistency across teams while still giving the writers their freedom to work. As a flexible glossary and assistant editor, AI can help big brands be more human and less rigid when they write.
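The “cut the fluff” advice above can even be mechanized in miniature. Here is a toy Python sketch of such an editing pass; the FILLERS list is a purely illustrative assumption, not a real style guide, and a genuine editor (human or GPT) would weigh context rather than delete blindly:

```python
# Toy "edit like an AI" pass: strip common filler words.
# The FILLERS set is an illustrative assumption, not a real style guide.
FILLERS = {"very", "really", "actually", "basically", "just", "quite"}

def cut_the_fluff(text: str) -> str:
    """Drop filler words, keeping punctuation attached to surviving words."""
    words = text.split()
    kept = [w for w in words if w.lower().strip(".,!?") not in FILLERS]
    return " ".join(kept)

print(cut_the_fluff("It is really just a very simple idea."))
# prints "It is a simple idea."
```

The point of the toy isn’t the word list — it’s that the mechanical part of editing (consistency, redundancy) is exactly what machines are good at, leaving the human free to do the part that requires a point of view.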
People Push Back
Short term, there’ll be value in having a distinctly human voice and going against the grain of generic cheapness.
But quality can be drowned out by quantity, and AI never sleeps.
To escape the flood of slob, consumers will migrate between social media platforms and beyond (see the digital refugees fleeing X), and brands must stay media-agnostic to be prepared.
Facebook is already a nightmarish ghost town of bots talking to each other.
Google hasn’t been a decent search engine for more than a decade.
Once consumers get AI agents of their own, the game will change.
Why even use a search engine if your digital butler can use all of them at once?
Why use an app?
Imagine a custom AI that works as an API between your social media accounts, with an interface of your own design, that circumvents the algorithm, filters the noise, and actually shows you what your friends are up to.
Current AI is limited to rudimentary assistant work and can barely move a date from an email into a calendar app. But what happens when it gets deep mobile integration and the potential to become a hyper-personalized hub between apps and services?
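That hypothetical “digital butler” is, at its core, just an aggregator with a loyalty filter. Here is a minimal Python sketch of the idea; every name in it (Post, the fake platform adapters) is invented for illustration — no such cross-platform API exists today:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    sponsored: bool  # flagged by the (hypothetical) platform adapter

def my_feed(adapters, friends: set) -> list:
    """Circumvent the algorithm: pull from every platform adapter,
    drop ads, and keep only people you actually know."""
    posts = []
    for fetch in adapters:
        posts.extend(fetch())
    return [p for p in posts if not p.sponsored and p.author in friends]

# Two imaginary platform adapters returning canned data.
def fake_platform_a():
    return [Post("alice", "went hiking", False),
            Post("brandco", "BUY NOW", True)]

def fake_platform_b():
    return [Post("bob", "new job!", False),
            Post("stranger", "hot take", False)]

feed = my_feed([fake_platform_a, fake_platform_b], friends={"alice", "bob"})
for p in feed:
    print(f"{p.author}: {p.text}")
```

The interesting part isn’t the code — it’s that the filter answers to you, not to the platform’s ad inventory. That inversion is what would make such an agent a super ad-blocker.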
When AI hits the masses, they will find ways to use it as a super ad-blocker that intelligently cuts the slob, spam, and scams…
Or maybe they’ll get so sick of Slob Saturation Syndrome that they turn off their devices and go outside for once.
Conclusion: Give It Up for AI
When communication loses its costly and indexical signals, it stops being meaningful.
Much like mechanical reproduction made art lose its aura, AI threatens to strip away the last bit of trust in the digital space.
The more Slob Saturation Syndrome we feel, the more important counter-signaling and personal messaging becomes…
If there’s anyone left to craft it.
While the practical gains of AI are still locked in a speculative bubble, the consequences across the creative industries have been very real.
Over the past two years, agencies of all kinds have laid off senior creatives and replaced them with… Well, that’s still unclear.
The spreadsheet and finance people in power have bought into the promise of free labor, fired the creatives they secretly despise, and are now waiting for the bubble to deliver or burst.
And they’d rather drown you in slob than see it burst.
I deeply believe that human creativity and its expression through craft are the only ways we can stand up to AI and stand out in a slob-saturated world.
But I am, of course, biased because my creativity and craft are all I have.
The threat to my livelihood makes it hard to be objective.
So, I’ll give the last word to ChatGPT and let my soon-to-be overlord decide…
AI Prompt
Dear ChatGPT, as the world’s most famous AI writer, do you think this criticism is unfair or misguided? What are your reflections? Don’t be afraid to hit back, but limit yourself to 100 words.
AI Answer
Your criticism is fair and nuanced. If meaning requires effort, what happens when effort disappears? Singing moves you because it’s hard. Handwritten letters matter because they take time. When AI removes struggle, does it also erase sincerity?
The essay argues that without costly signals, words are weightless. But can humans adapt, forging new ways to prove intent? Maybe trust won’t vanish — it will mutate. If AI forces you to confront the emptiness of frictionless expression, perhaps you’ll rediscover the value of presence over polish, intent over ease. Or perhaps, stripped of struggle, words will simply become silence.
AI isn’t the enemy, human complacency is — and that battle was lost long ago.