
Amazon Prime Video’s Yash Chaturvedi: how AI is revolutionising advertising now

Scene-level contextual tokens, paired with autonomous 'agentic models', are replacing traditional categorisation

The future of contextual advertising isn’t arriving on a slow drip; it is barreling toward us like a freight train. Picture yourself watching a sports drama. One minute the coach is firing up the team with an uplifting speech; the next, fists fly in a bench-clearing brawl. Old ad systems treat both scenes as identical because they share a single label: “Sports”. With new contextual systems, we see them as totally different emotional moments and can choose ads to match. This shift, driven by “contextual tokens” and smart GenAI agents, will redefine how CTV inventory is bought and sold.

Today’s Reality

Most streaming ads still rely on broad categories created years ago: Sports, Comedy, Kids, News. Those buckets are fine for basic brand-safety rules but terrible at nuance. Research by MAGNA/IPG (2023) estimates that roughly one-third of CTV impressions land in scenes that don’t fit a brand’s tone, wasting well over a billion dollars in U.S. media each year. Meanwhile, culture moves at meme speed, and by the time a committee invents a new sub-label, the trend is already old.

What “Tokenizing” a Video Really Means

Think of a film broken into bite-size sticky notes, one for every second.
*Where are we? Locker room.
*How does it feel? Hopeful.
*What’s on screen? Pink sneakers.
*Music vibe? Upbeat rap.
Each sticky note is a token: a tiny description of what’s happening right now. As the story plays, the notes refresh automatically. They don’t describe who’s watching, only what’s on screen, so privacy stays intact.
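To make the idea concrete, here is a minimal sketch of what one such per-second token could look like as structured data; the field names and values are illustrative, not an actual schema.

```python
from dataclasses import dataclass

@dataclass
class SceneToken:
    """Illustrative per-second description of what is on screen (never the viewer)."""
    timestamp_s: int      # position in the stream, in seconds
    setting: str          # "Where are we?"      e.g. "locker room"
    mood: str             # "How does it feel?"  e.g. "hopeful"
    objects: list[str]    # "What's on screen?"  e.g. ["pink sneakers"]
    music: str            # "Music vibe?"        e.g. "upbeat rap"

# One "sticky note" for the locker-room scene described above
token = SceneToken(
    timestamp_s=2710,
    setting="locker room",
    mood="hopeful",
    objects=["pink sneakers"],
    music="upbeat rap",
)
print(token)
```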

Enter the “Agentic Model”

Now picture a lightning-fast personal shopper reading those sticky notes in real time. It knows your brand’s style guide and budget, predicts whether an ad will land and then:
*Bids higher if the scene is perfect,
*Swaps creative assets to match the moment, or
*Sits out if the tone clashes.
That shopper is an agentic model: software that makes its own media decisions because it sees the story unfold second by second.
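As a rough sketch of that behaviour, the per-scene decision might look like the toy function below. The scoring rule, thresholds and bid multiplier are stand-ins for a real suitability model and bidding system, not how any production agent works.

```python
def score_suitability(token: dict, brand: dict) -> float:
    """Stand-in for a real suitability model: crude overlap with the brand brief."""
    score = 0.0
    if token["mood"] in brand["moods"]:
        score += 0.7                                   # emotional tone fits
    if any(obj in brand["products"] for obj in token["objects"]):
        score += 0.3                                   # a relevant product is on screen
    return score

def decide(token: dict, brand: dict, base_bid: float) -> dict:
    """Toy agent: bid up on a perfect scene, bid normally on a decent one, else sit out."""
    fit = score_suitability(token, brand)
    if fit >= 0.8:
        return {"action": "bid", "price": round(base_bid * 1.5, 2)}   # perfect moment
    if fit >= 0.5:
        return {"action": "bid", "price": base_bid}                   # acceptable moment
    return {"action": "sit_out"}                                      # tone clashes

brand = {"moods": {"hopeful", "triumphant"}, "products": {"pink sneakers"}}
scene = {"mood": "hopeful", "objects": ["pink sneakers"]}
print(decide(scene, brand, base_bid=12.0))   # {'action': 'bid', 'price': 18.0}
```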

The Future: How Full-Video Tokens and Agentic Models Work Together
*Sense: Models detect objects, emotions, and sentiment.
*Reason: A lightweight language model scores brand suitability and predicts response.
*Act: The ad server picks the bid, creative and break timing, or rewrites the copy on the fly.
No personal data, no privacy headaches, total situational awareness. The loop repeats every few seconds, so each impression becomes a custom fit rather than a vague genre guess based on the title.
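Stitched together, the sense-reason-act loop could be shaped roughly like this; the sense and reason steps are stubbed out where a real system would call vision and language models, and the moods, scores and actions are invented for illustration.

```python
import random  # only used to fake the "sense" step in this sketch

def sense(second: int) -> dict:
    """Stub for the vision models that detect objects, emotion and sentiment."""
    return {"second": second, "mood": random.choice(["hopeful", "tense", "violent"])}

def reason(token: dict) -> float:
    """Stub for the lightweight language model that scores brand suitability."""
    return {"hopeful": 0.9, "tense": 0.6, "violent": 0.1}[token["mood"]]

def act(token: dict, suitability: float) -> str:
    """Decide bid, creative and break timing for this moment; no viewer data involved."""
    if suitability >= 0.8:
        return "bid high, serve upbeat creative"
    if suitability >= 0.5:
        return "bid at base rate"
    return "skip this break"

for second in range(2708, 2712):          # re-run every few seconds through the stream
    token = sense(second)
    print(second, token["mood"], "->", act(token, reason(token)))
```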

Creative Becomes Modular, Not Static

A Nielsen study attributes 49% of incremental lift to creative quality. Tokens let us treat creatives like Lego bricks, swapping pieces until the ad clicks with the scene. Shoot video in chapters, write copy in interchangeable lines, design artwork in layers. The engine assembles the best combination on demand. Then we watch the metrics, tweak the palette and run it again, a loop that moves far faster than a monthly targeting-performance report.
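As an illustration of the Lego-brick approach, here is a minimal sketch in which pre-tagged creative layers are assembled to fit a scene's mood; the asset names, tags and fallback rule are hypothetical.

```python
# Hypothetical modular creative library, with each "brick" tagged by the mood it fits
CREATIVE_BRICKS = {
    "video":   {"hopeful": "training_montage_15s.mp4", "tense": "quiet_focus_15s.mp4"},
    "copy":    {"hopeful": "Every comeback starts here.", "tense": "Stay in the moment."},
    "artwork": {"hopeful": "bright_palette.png", "tense": "muted_palette.png"},
}

def assemble_creative(scene_mood: str) -> dict:
    """Pick one brick per layer to match the scene, falling back to the 'hopeful' set."""
    return {layer: options.get(scene_mood, options["hopeful"])
            for layer, options in CREATIVE_BRICKS.items()}

print(assemble_creative("tense"))
# {'video': 'quiet_focus_15s.mp4', 'copy': 'Stay in the moment.', 'artwork': 'muted_palette.png'}
```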

Personalization without the Privacy Pitfalls

Tokens describe the content, not the viewer, so they sail through GDPR and CCPA. That unlocks personalization even as cookie consent declines and eventually vanishes. Shoppable moments pop up the instant a product shows on-screen, and a tap sends the shopper straight to checkout, no manual mapping required. Adaptive formats pick themselves: a quick gag triggers a six-second bumper; a slow-burn dialogue opens space for an interactive poll. Predictive sequencing lines up the next creative variant based on how the last one landed, all without stitching together user IDs from ten sites.
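A tiny sketch of how adaptive formats might pick themselves from scene signals; the pacing labels, thresholds and format names are assumptions, not a production ruleset.

```python
def pick_format(scene_pacing: str, break_length_s: int) -> str:
    """Toy rule: match the ad format to the rhythm of the surrounding scene."""
    if scene_pacing == "quick_gag" and break_length_s <= 10:
        return "six_second_bumper"
    if scene_pacing == "slow_burn_dialogue" and break_length_s >= 30:
        return "interactive_poll"
    return "standard_spot"

print(pick_format("quick_gag", 6))            # six_second_bumper
print(pick_format("slow_burn_dialogue", 45))  # interactive_poll
```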

What Tokens Deliver Right Now

*Performance targeting shifts from yes/no to a sliding scale.
*Dynamic ad-breaks stretch or shrink based on real-time engagement curves.
*Brand safety zooms to the second: a family brand can advertise in a PG-13 film but dodge the lone edgy scene (see the sketch below).
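A minimal sketch of the first and last points, assuming each second of a title carries a suitability score for the brand; the scores and threshold are invented for illustration.

```python
# Per-second suitability scores for a family brand across a PG-13 film;
# second 2710 is the lone edgy scene
scene_scores = {2708: 0.9, 2709: 0.85, 2710: 0.1, 2711: 0.8}

def eligible_seconds(scores: dict[int, float], threshold: float = 0.6) -> list[int]:
    """Sliding-scale targeting: stay in the film, skip only the seconds below threshold."""
    return [t for t, score in scores.items() if score >= threshold]

print(eligible_seconds(scene_scores))  # [2708, 2709, 2711]
```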

Five Moves to Make Before This Year Ends

1. Count your category traffic. If half your impressions hinge on a taxonomy tag, that half is vulnerable and you should pivot in 2026.
2. Pilot a token feed. Attach scene-level signals to a portion of your spend and measure the delta in click-through, completion and downstream performance.
3. Break your ad creative into parts. More building blocks equal more relevant combinations.
4. Get specific with your brand-safety guardrails. Translate gut instincts about brand safety into explicit, written prompts (see the sketch after this list).
5. Default to privacy-first. Start treating user data as a bonus, not a baseline.
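On move 4, one way to make guardrails explicit is to write them as a structured prompt for whatever model scores scene tokens. The brand, wording and thresholds below are purely an example of the exercise, not a recommended policy.

```python
# Illustrative guardrail prompt for a scene-suitability scorer; the brand, rules
# and cut-offs here are hypothetical examples, not production guidance.
SUITABILITY_PROMPT = """
You are scoring one second of video for brand suitability for a family breakfast brand.
Given the scene token (setting, mood, objects, music), return a score from 0 to 1.
- Score 0.8 or above for upbeat family, food, training or morning-routine scenes.
- Score 0.3 or below for fights, weapons, graphic injury or profanity-heavy dialogue.
- If unsure, return 0.5 and flag the second for human review.
"""
```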

The agentic models are ready. Tokenized video is here. Brands that start feeding those agents this year will set the clearing price in 2026.

Yash Chaturvedi is head of product for Live Ads at Amazon Prime Video.

Sources Used:
1. Kantar Context Lab CTV Benchmarks, Apr 2024.
2. MAGNA / IPG “Contextual Inefficiency in CTV,” Dec 2023.
3. Nielsen ROI Report, Edition 6, 2022.
