PRACTICAL AI Marketing Solutions
The most important shift in marketing this year isn't that you can use AI; it's that your platforms already are, whether you've decided to or not.
Andy Mills
27/03/2026
There's a post I keep seeing on LinkedIn. Different people, same message. "We're exploring AI. We've tested a few tools. We're working on a strategy."
Good. Fine. Reasonable.
But while they're exploring, the platforms they already depend on have moved. Not incrementally. Structurally.
I've been in B2B marketing for close to twenty years. I was there when print budgets were sacred and digital was the side project nobody took seriously. I watched that shift play out over a decade. This one is faster, and it's hitting every channel at once.
OpenAI is testing an Ads Manager inside ChatGPT. Meta has embedded an autonomous AI agent directly into its Ads Manager. Google's AI Overviews now appear on 58% more queries year-over-year. New York just made it illegal to run AI-generated human likenesses in ads without disclosure. And a peer-reviewed study confirmed that flooding platforms with low-quality AI content actively degrades discovery for everyone.
That's not five separate stories. That's one story told from five angles.
And the usual reading of it, "marketers need to adopt AI faster," misses the point entirely.

Most AI conversations in marketing still centre on the practitioner. Should I use AI to write my emails? Generate my creative? Build my reports?
That framing made sense in 2023. It's outdated now.
When Meta embeds Manus AI directly into the Ads Manager navigation, that's not a feature update. It's a structural change to what "managing a campaign" means. Market research, report building, campaign analysis. The things a junior account manager does on a Tuesday morning are now part of the platform's interface.
When OpenAI starts testing a dedicated Ads Manager with select partners, that's not an experiment. It's a new advertising channel being born, with early partners shaping its pricing and norms before anyone else gets access.
The question isn't "are you using AI?"
It's "do you understand that the platforms you're already using have changed what they are?"
You can choose not to use ChatGPT to write your ad copy. You cannot choose whether Meta's AI agent is optimising your campaign behind the panel you click every day.
That distinction matters.

Google's AI Overviews appearing on 58% more queries is a statistic. The detail underneath it is where the strategy breaks.
AI Overviews frequently cite sources that differ from the top organic rankings. A page-one ranking no longer guarantees you'll appear in the answer most users actually see. Traditional results still appear for roughly 52% of queries, so organic SEO isn't dead. But it's no longer the full picture.
The industries seeing the sharpest growth in AI Overview coverage (education, B2B technology, restaurants, finance, insurance) are exactly the categories where teams have spent years and significant budgets building traditional SEO programmes. Those programmes are now structurally incomplete.
I know what that investment looks like up close. I once took a company from a single-page website to a full content and lead generation function, built the site, launched the blog, started the newsletter, created the inbound engine from nothing. That took months of focused work. The idea that a new discovery layer can quietly bypass all of that effort isn't theoretical to me. It's personal.
Here's the bit most teams miss. Most SEO teams haven't been asked to audit for AI citation patterns. Their KPIs are still built around rankings, traffic, and click-through rates from traditional results. The gap between what teams are measuring and what's actually driving visibility is widening every quarter.
SEO hasn't died. It's silently split into two disciplines: traditional ranking and AI citation. Most teams are only doing one of them.

New York's law requiring disclosure of AI-generated human likenesses in ads takes effect mid-2026. First violation: $1,000 civil penalty. Repeat violations: $5,000 each.
New York isn't acting alone. Indiana, Kentucky, Rhode Island, Oregon, and California all have related AI or privacy laws effective from early 2026.
The practical problem is straightforward. Most creative teams using AI to generate human-like figures in ads are not running those assets through legal review. The workflow typically goes: brief, generate, review for brand fit, publish. Compliance review for synthetic media disclosure isn't a step in that process.
It needs to be.
The brands most exposed are the ones that moved fastest. If you've been enthusiastically using AI-generated human likenesses in paid creative across multiple states, you now have a retroactive compliance question, not just a forward-looking one.
Speed without a legal framework creates liability. And the penalties are designed to escalate.

The intuition that AI content floods are bad for everyone now has peer-reviewed backing.
A study published in the Journal of Marketing Research examined AI content across YouTube, Reddit, and TikTok. The finding: when novice creators use AI to produce content at scale, it congests recommendation algorithms. High-quality professional content becomes harder to surface. Consumers disengage. The platforms themselves become less useful.
But the most strategically interesting finding is this: if AI content quality reaches expert level, both consumers and professionals benefit. Professionals who integrate AI into already high-quality workflows come out ahead.
This is the distinction I keep coming back to. I use AI to produce this newsletter. The research pipeline, the drafting, the editorial illustrations, they're all AI-assisted. But there's a quality gate at every stage. The framework, the editorial judgment, the voice, that's mine. AI powers the process. It doesn't replace the thinking.
The risk isn't that AI replaces human creators. It's that low-quality AI content drowns them out algorithmically. A volume-first AI strategy doesn't just risk your brand reputation. It actively degrades the platforms you depend on for distribution.

The most reasonable pushback here is that platforms have always changed underneath marketers. Facebook's algorithm updates, Google's core updates, TikTok's rise. Marketers survived all of them by moving fast and testing aggressively. Why should this be different?
Fair point. But the difference is speed and simultaneity.
AI is being embedded into paid media, organic discovery, content distribution, and legal compliance all at the same time. Previous platform shifts happened in sequence, giving marketers time to respond channel by channel. This one is hitting every channel at once, and the rules are being written while the tools are being deployed.
Adaptation is still the right instinct. But adaptation without understanding what's actually changed inside each platform is just reactive guessing.
The marketers who come out ahead won't be the fastest movers. They'll be the ones who understand the mechanics well enough to move in the right direction.
If this is landing and you're wondering what to do with it, here's where I'd start.
Map your platform exposure. Every platform where you spend money or publish content. For each one, identify what AI capabilities have been embedded in the last six months. If you don't know, that's the first problem.
Split your SEO tracking. Start measuring AI Overview citations separately from traditional organic rankings. If you're in education, B2B tech, finance, or insurance, this is already urgent.
Build synthetic media compliance into your creative workflow. Not as a one-off patch for New York. As a step in the production process that accounts for a growing patchwork of requirements.
Kill volume-first AI content. If your content plan relies on AI to produce more rather than better, the evidence now says you're actively degrading your own distribution. Redirect the effort toward quality-augmented production, where human expertise directs AI output.
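To make the "split your SEO tracking" step concrete, here's a minimal Python sketch of the kind of report it implies: the same query list scored twice, once for traditional rank and once for AI Overview citation, so the gap between the two becomes a number you can put in front of a team. Every query, rank, and citation flag below is invented for illustration; in practice the data would come from your rank tracker and whichever AI-visibility tool you use.

```python
# Hypothetical query-level data: (query, organic_rank, cited_in_ai_overview).
# In a real workflow this would be exported from your SEO tooling, not typed in.
rows = [
    ("b2b crm pricing",          3, False),
    ("best b2b crm",             1, True),
    ("crm implementation guide", 7, False),
    ("crm data migration",       2, False),
]

# Queries where you rank on page one (top 10).
page_one = [r for r in rows if r[1] <= 10]

# Of those, how many also surface in the AI answer layer...
cited = [r for r in page_one if r[2]]

# ...and how many rank well but are invisible to AI Overviews.
gap = [r for r in page_one if not r[2]]

print(f"Page-one queries: {len(page_one)}")
print(f"Also cited in AI Overviews: {len(cited)}")
print(f"Visibility gap (ranking but not cited): {len(gap)}")
for query, rank, _ in gap:
    print(f"  - '{query}' (rank {rank})")
```

The point of the exercise isn't the script; it's that "gap" becomes a tracked metric alongside rankings, rather than something nobody is asked to measure.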
The platforms are not waiting for you to decide how you feel about AI.
They've already decided for themselves.