Artificial intelligence has already made a significant impact within the programmatic sphere through machine learning. AI-driven neural networks are becoming increasingly capable of mimicking human behaviour as the technology develops. Advertisers could soon be applying the technology to adapt campaign placement and copy in response to real-world outcomes.
Just recently, for example, AI software ‘painted’ the cover of Bloomberg Businessweek.
The heaviest investors in this technology are the usual suspects, such as Google and Facebook, who are already aggressively hoovering up revenue in the ad space. Artificial intelligence has already proved its capabilities in targeting and optimisation, and we are going to see more attempts to push AI into the creative space.
For example, with a feedback loop we could see proximity campaigns where digital OOH sites generate and update creative copy in real time based on real-world action. Perhaps a food outlet's ad generates better footfall when it shows coffee rather than sandwiches at 10am, then switches to meal deals at lunchtime, with AI-generated copy highlighting their value for money.
The system could evaluate the performance of local sites and even evaluate the effect of a simple change of colour. The key is finding a way to pass back real-world information from which it can learn.
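To make the idea concrete, a system like this could be sketched as a simple epsilon-greedy selector: show whichever creative has earned the best footfall-per-impression for that time slot, while occasionally exploring alternatives. This is a minimal illustration only; the variant names, time slots and footfall figures are hypothetical, and a production system would be far more sophisticated.

```python
import random
from collections import defaultdict

class CreativeSelector:
    """Epsilon-greedy sketch: per time slot, favour the creative variant
    with the best observed footfall per impression, but explore other
    variants a fraction of the time so the system keeps learning."""

    def __init__(self, variants, epsilon=0.1):
        self.variants = variants      # e.g. ["coffee", "sandwiches", "meal_deal"]
        self.epsilon = epsilon        # share of impressions spent exploring
        self.impressions = defaultdict(lambda: defaultdict(int))
        self.footfall = defaultdict(lambda: defaultdict(int))

    def choose(self, time_slot):
        if random.random() < self.epsilon:
            return random.choice(self.variants)  # explore
        # exploit: best footfall-per-impression so far (untried variants score 0)
        def rate(variant):
            shown = self.impressions[time_slot][variant]
            return self.footfall[time_slot][variant] / shown if shown else 0.0
        return max(self.variants, key=rate)

    def record(self, time_slot, variant, visits):
        """Pass back the real-world outcome: one impression, `visits` footfall."""
        self.impressions[time_slot][variant] += 1
        self.footfall[time_slot][variant] += visits
```

Used this way, the "real-world information" the article describes is simply the `visits` figure fed back through `record`; a colour change could be tested the same way, as just another variant.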
It’s a terrific opportunity, but it could also represent a brand safety nightmare. The AI might ‘learn’ that a particular image or approach generates higher sales, but that creative might not fit the brand’s core values. To use the example above, maybe the brand wants to focus its advertising on how good its food tastes, not on how cheap it is.
And that’s not the worst of it. We’ve already seen other dangers of these systems learning in an imperfect world, with Microsoft’s chatbot, Tay, turning into a racist bigot in a matter of hours and other technologies developing in a way that’s undesirable at best and appalling at worst. Even Amazon’s Alexa, a trailblazer of in-home AI, has already shown her own little quirks and oddities.
There is also the newer phenomenon of adversarial examples, where a neural network is fooled by an input, such as a subtly altered image, that a person would still perceive correctly.
So perhaps what we need is a middle ground. Just like parents teach human children the difference between right and wrong, AI needs human feedback and approval. Machine learning can help a neural network measure outcomes itself, but there needs to be a person feeding back on performance.
A hybrid human-AI model mitigates a lot of the brand safety risks, with a person acting as ‘compliance’ in the same way a legal department does for human creatives at present. The AI submits its ads for approval and uses the feedback to learn which ones work and which don’t.
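That approval workflow can be sketched in a few lines: nothing reaches a screen until a human reviewer has signed it off, and rejections are recorded so they can be fed back to the generator. The function names and the example reviewer rule below are hypothetical, purely to illustrate the gate.

```python
# Hypothetical compliance gate between an AI creative generator and the ad server.
APPROVED = set()
REJECTED = set()

def submit_for_approval(creative, reviewer):
    """Route AI-generated copy through a human reviewer.
    Approved copy becomes servable; rejections are logged as feedback."""
    if reviewer(creative):
        APPROVED.add(creative)
        return True
    REJECTED.add(creative)
    return False

def serve(creative):
    """Refuse to broadcast anything that has not passed compliance review."""
    if creative not in APPROVED:
        raise PermissionError(f"'{creative}' has not passed compliance review")
    return f"serving: {creative}"
```

In the brand-values example above, the reviewer might simply reject any copy leading on price, so the system learns that taste-led lines get approved and cheap-led lines do not.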
It also depends on the level of risk, of course. An AI deciding which background colour to use in a particular ad, based on which hues lead to the greatest footfall at the nearest store at the right time, doesn’t carry the same brand safety considerations as an AI deciding what copy or calls to action that ad should be broadcasting to the world at large.
And importantly, much of this is not idle speculation about what the world will look like in five decades' time. In the short term we are already seeing digital OOH campaigns react to stimuli in real time, with tech being used to optimise ads for location, time of day and a wide range of other external and environmental factors.
And then, in the not-so-long term, we’ll start to see larger advertisers look to use neural networks to increase the relevance of their ads and generally drive greater personalisation.
So with all of this on the horizon, agencies, advertisers and media owners will have to invest in the kind of technology that Google and Facebook are already pouring money into. With the right expertise, advertisers will be able to make their communications work harder in a cluttered environment.
Lawrence Dodds is business director at media agency UM.