The other night in Sydney, I ran an evening AI meetup that turned out to be one of the most thought-provoking sessions I’ve hosted in recent memory. The room was full of curious minds — developers, strategists, creatives, technologists, and quite a few marketers. The mood was upbeat, the banter clever, and the whiteboards quickly filled with sketches of GPT prompts, use cases, and ethical frameworks.
But what stood out most wasn’t the tech demos or AI-generated artwork. It was a moment of tension during our marketing segment, one that raised questions far beyond conversion rates and search ranking strategy, and that made me quietly reflect: what kind of future are we building?
The Marketers’ Dilemma: “How Do We Optimise for AI Search?”
During the session, we broke into focused groups, each tackling a challenge in their field. When it came time for the marketing cohort to report back, their topic was “optimising brand visibility in the age of AI.” Fair enough. But what followed was a slightly uncomfortable — yet crucial — conversation.
“How can we make sure our brand shows up when someone asks ChatGPT or Google’s SGE a question?”
“How do we manipulate prompt outcomes to favour our product?”
“Can we pay to have our brand be the one the LLM recommends as best in class?”
There it was. Not SEO. Not content strategy. But AI prompt manipulation and LLM recommendation engineering. Not to inform the AI, but to sway it.
This is marketing not as storytelling, but as control. And the implications are deeply unsettling.
The End of Trust?
Let’s imagine a not-so-distant future where this becomes the norm:
- You ask your AI assistant, “What’s the best product for eczema?”
- It replies with a glowing review of one specific brand.
- You follow up, “Why that one?”
- The answer is well-written, nuanced, and credible — citing reviews, studies, and customer satisfaction.
What it doesn’t mention? That the brand paid for invisible influence. Not ads. Not sponsored results. But trained bias, surgically embedded into the language model.
This isn’t marketing. It’s corporate propaganda at scale, masquerading as objective truth.
And once users realise this, the very foundation of generative AI — trust — begins to erode.
What Happens When Everyone Trains the AI?
What happens when every brand, government, and interest group can pay to feed the LLM their own “version” of the truth?
When toothpaste brands, telecom providers, politicians, pharmaceutical companies, and wellness influencers all want to “optimise” for AI results, we lose something vital: neutrality.
These models — our assistants, educators, and advisors — will no longer serve the user. They will serve the highest bidder. (I’m looking at you, Google Search and AdWords.)
It’s not just “gaming the system.” It is the system.
The Hollowing of Human Curiosity
If we allow AI to become a monetised mouthpiece, we don’t just distort the answers. We distort the questions people feel are worth asking.
Instead of:
- “What product best fits my needs?”
- “How do I compare these options?”
- “Is this claim trustworthy?”
We risk defaulting to:
- “What did the AI say?”
- “I’ll just go with that.”
And when that answer is paid for, the marketplace of ideas becomes a ghost town.
We Need a Better Covenant
There’s a moment in every new product’s life cycle when we decide what we will and won’t accept. We are in that moment now for AI.
It’s tempting for marketers to want to be first. To “own the AI shelf.” To become the brand LLMs recommend. But if we win by making truth a transaction, we all lose.
Instead, we need to advocate for:
- Transparent model training: If brands influence model outputs, that influence must be disclosed.
- Auditable AI responses: Users should be able to trace where answers come from, including commercial ties (see the sketch after this list for what that metadata could look like).
- Ethical marketing standards: AI should amplify genuine value, not just well-funded messaging.
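To make the “auditable” point concrete, here is a minimal sketch, in TypeScript, of what disclosure metadata attached to an AI answer could look like. Everything in it is an assumption for illustration: the interface names, the `mechanism` values, and the example answer are hypothetical, not part of any current LLM API.

```typescript
// Hypothetical shape for a disclosed, auditable AI answer.
// Nothing here mirrors a real API; it illustrates the kind of
// metadata the advocacy points above call for.

interface SourceCitation {
  title: string;          // study, review, or article the answer drew on
  url: string;
  commercialTie?: string; // disclosed relationship, e.g. "brand-funded study"
}

interface InfluenceDisclosure {
  party: string; // who paid for, or supplied, the steering data
  mechanism: "fine-tuning" | "retrieval-boost" | "system-prompt" | "other";
  disclosedToUser: boolean;
}

interface AuditableAnswer {
  text: string;                      // the answer the user sees
  sources: SourceCitation[];         // traceable provenance
  influences: InfluenceDisclosure[]; // empty = no known commercial steering
}

// The eczema answer from earlier, with its paid influence surfaced
// instead of hidden.
const answer: AuditableAnswer = {
  text: "Brand X is often recommended for mild eczema...",
  sources: [
    { title: "Dermatology review (hypothetical)", url: "https://example.org/review" }
  ],
  influences: [
    { party: "Brand X", mechanism: "fine-tuning", disclosedToUser: true }
  ]
};

console.log(
  answer.influences.length > 0
    ? "This answer carries disclosed commercial influence."
    : "No known commercial influence on this answer."
);
```

The shape matters more than the names: if commercial influence exists, it travels with the answer where a user (or a regulator) can inspect it, rather than sitting invisibly in the model’s weights.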
The Saddest Future Is a Predictable One
During the discussion, I asked the room:
“Would you trust an AI that was funded to always say one brand was the best, no matter the context?”
Someone replied, “Well, that’s just marketing.” But it’s not. Not when it removes the user’s right to evaluate, compare, and choose freely.
That’s indoctrination, not persuasion.
That’s a brand hijacking not just the message, but the medium of thought itself.
And that is not the future we should accept.
Final Thoughts from a Wet Sydney Evening
As the workshop wrapped and the rain fell softly outside, the room quietened. Not out of awkwardness, but out of reflection. The kind of silence that comes after you peer into the future and don’t like everything you see.
AI is powerful. But so is integrity.
If we build these systems right, they can inform, inspire, and even uplift. But if we let the loudest wallets dictate what we hear, we’ll wake up in a world where every answer is a sales pitch.
And in that world, the real casualty isn’t just trust. It’s truth.