
Omar Oakes: What is your content actually worth when the AI crawlers come for it?

There is one question every media executive is avoiding right now. Even as the AI companies are already answering it on their behalf.


The best questions are simple to ask but difficult to answer.

Here’s the one that all publishers (including micro-publishers such as yours truly…) should be asking right now:

What is your content actually worth to an AI company — and what do you do about it?

Not “should we use AI in our newsroom?” Not “how do we optimise for AI search?” Those are operational questions, and they have answers. The harder question is the one underneath: if the AI companies hoovering up the internet’s output are building products worth hundreds of billions of dollars, and your journalism is part of what makes those products work — what share of that value is yours? And what does claiming it cost you?

This is the decision that is being made right now, deal by deal, policy by policy, prize announcement by prize announcement.

When you get wildly different conclusions from people with broadly similar problems, it usually means nobody’s done the hard thinking yet.

Three rights make a wrong

So the story that matters most in media and advertising this week features three publishers and three irreconcilable bets on how they will deal with ‘the AI question’.

First, Le Monde has struck licensing deals with OpenAI, Perplexity and Meta. CEO Louis Dreyfus is now reporting that stories surfaced on ChatGPT convert to paid subscriptions twenty times more often than the same stories on Facebook, and fifty times more than on Google Discover. Speaking to Press Gazette, he’s urging other publishers to follow. Larger operations like News Corp are well ahead of them with a “woo and sue” strategy: having done deals with OpenAI and Meta, they will aggressively come after other AI scrapers who don’t pony up.

Then, Wikipedia’s editors voted to ban large language models from generating or revising entries entirely. The reasoning: AI-generated text routinely violates Wikipedia’s core editorial standards — accuracy, verifiability, neutrality. Two narrow exceptions were retained: basic copyediting of your own writing, and machine translation. Everything else, out.

Business Insider, meanwhile, reportedly announced a $400 quarterly prize for the best staff use of AI — something which I had assumed, given the 1 April timing, was an April Fool’s joke by Semafor’s media editor Max Tani. Remember that Insider’s publisher Axel Springer is the same organisation whose CEO declared the company “going all-in on AI” while cutting 21% of its workforce.

In isolation, all three moves are understandable. But when you put them together, a big problem emerges.

Comparing apples and oranges? First decide: what is fruit?

If you’re advising a media business right now (or working inside one) the temptation is to look at these three and ask: which model do we follow? That’s the wrong question.

Instead, consider what each of them had to think about before they could make any decision at all: what kind of asset do we actually have?

Le Monde’s answer is that it has premium journalism with a known brand, and that AI companies need that brand to make their products credible. The licensing deal is a big bet: we have something you need, and here’s the price.

But this licensing bet only works if the brand remains credible enough that ChatGPT users want to click through. Which means the deal is only as good as the journalism underneath it. Are they investing in that, or assuming it?

Wikipedia’s answer is that it has something rarer than journalism: a collaboratively maintained, openly licensed record of human knowledge, built on editorial integrity as a founding principle. The moment you let a language model — which confabulates, bullshits, and optimises for plausibility over accuracy — anywhere near that record, you’ve contaminated the asset. Wiki’s bet is defensive: asset protection.

But doesn’t the ban only hold as long as the volunteer editor community does? It seems that the real threat to Wikipedia’s asset isn’t AI writing — it’s AI reading, which will eventually make Wikipedia’s role as the internet’s baseline reference layer redundant if the knowledge just gets absorbed. The ban might keep the database clean, but it doesn’t keep it relevant.

Insider’s answer, while a bit strange, also tells you something valuable. A prize for AI use inside an organisation that’s already hollowed out its editorial workforce isn’t a strategy. It’s the performance of a strategy, which is a different thing entirely, but clearly the thing that someone, somewhere inside publisher Axel Springer has decided works for them.

To be fair, that pressure to look like you have an AI strategy — before you’ve done the harder work of defining what you actually have — is everywhere right now.

Even Meta, which is supposed to be a high priest in the blessed order of AI overlords, is prone to this performative nonsense. Witness the launch, and swift demise, of “Claudeonomics”, an internal leaderboard designed to rank employees by sheer volume of AI token consumption: a case study in “tokenmaxxing”, the idea that volume of AI use is itself a proxy for productivity.

It’s also, obviously, completely insane. Volume is not insight and activity is not strategy. A leaderboard that ranks your employees by how much they’ve typed into a chatbox tells you nothing about whether the work is any good. It’s a Silicon Valley problem, but these days, when Big Tech sneezes, the whole media and advertising industry gets cholera.

Media companies can’t fall into the trap of doing the same thing with a different set of numbers: tracking AI adoption rates; counting tools deployed; or (bleurgh) running AI prize schemes. None of it answers the underlying question.

You can’t ignore this for much longer

Here’s the uncomfortable part, and the reason most publishers are still stalling.

The honest answer to “what is our content worth to an AI company?” is either embarrassingly low or uncomfortably high — and both answers demand decisions that nobody wants to make.

If your content is worth a lot, you’re probably already being undercut. The deals being struck quietly around you are setting a price floor that you’ll spend years trying to argue your way above. Saying nothing while you “watch the space” isn’t neutrality. It’s ceding ground.

If your content isn’t worth much — if it’s the kind of high-volume, commodity output that AI can replicate cheaply and accurately — then the licensing conversation is a distraction from a structural problem that AI didn’t create, but is making impossible to ignore.

Either way, you’re being asked to say something out loud that you’d rather keep vague. Which is why so many media strategies right now are elaborate exercises in not answering the question.

The three publishers in this story aren’t interesting because of what they decided. They’re interesting because each of them, at some point, had to stop hedging and say what they actually were.

If you’re working with a media client — or inside a media business — that is still in “watching brief” mode on AI, ask them this: can you say, in a single sentence, what your content is worth and to whom?

If the answer is a hedge, you have your answer. Not about AI. About whether the strategy you’re being sold is actually a strategy at all.


This article first appeared in Ad-verse Reactions, a newsletter written by independent journalist and consultant Omar Oakes, covering the economics, power structures and unintended consequences shaping advertising and media. You can subscribe to Ad-verse Reactions for regular analysis at omaroakes.substack.com.
