
Giles Keeble: how advertising discovered ‘post-truth’ before the OED – and the trouble with Big Data

The Oxford English Dictionary has declared "post-truth" its word of the year. It is defined as "relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief."

This seems to me to be something advertising has accepted for some time. While it is always a bonus to be able to say something provable that the competition can't claim (or isn't claiming), many clients have found a rational claim, however spurious or irrelevant, an easier yardstick for judging the work than an emotional appeal.

If and when they looked for an ‘insight’, it was often irrelevant to the target audience, if not generic. I have always believed that a relevant observation leads to better, more human work than most so-called ‘insights’. Meaningful insights are rare. Of course, it requires an investment in R&D and understanding of consumers to produce brands that are distinctive in a demonstrably factual way that also taps into the needs, beliefs and feelings of the buyers and users.

Part of what brands do is distinguish similar products along a number of different criteria, including price and variety, but not necessarily quality. So advertising is, and has been, both pre- and post-truth, while not ignoring truth when it exists. Nevertheless, now that 'post-truth' is enshrined in the OED, along with Essex girl, what will the ASA proclaim? That advertising must be decent, post-truthful and honest? A bit of an oxymoron, like 'reliable Trump.'

—————————

A few months ago, I read a book called 'How Not to Be Wrong' by Jordan Ellenberg about how maths relates to real life and, though I have to admit I didn't understand all of it in detail, I did get the gist.

One 'fact' I found interesting is that Ronald Fisher, who came up with the notion of statistical significance levels and originally proposed 0.05 as a cut-off point, did not expect that threshold to be fixed. He argued that each test should be judged in light of its specific circumstances and the hypotheses involved.
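Fisher's point can be made concrete with a small sketch of my own (an illustration, not anything from Ellenberg's book): the very same result counts as "significant" or not purely depending on where the cut-off is drawn.

```python
# Illustration: the same test statistic passes one significance
# threshold and fails another, so the choice of cut-off matters.
import math

def two_sided_p(z):
    """Two-sided p-value for a z-score, using the standard normal CDF."""
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

p = two_sided_p(2.2)  # a z-score of 2.2 gives p of roughly 0.028
for alpha in (0.05, 0.01):
    verdict = "significant" if p < alpha else "not significant"
    print(f"cut-off {alpha}: p = {p:.3f} -> {verdict}")
```

At the conventional 0.05 level the result clears the bar; at a stricter 0.01 level it does not. Nothing about the data changed, only the convention.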

Ellenberg's book led me to look for my copy of David Boyle's The Tyranny of Numbers, which warns against trying to measure what can't be measured. Both should be read in these days of Big Data – as a caution against mistaken assumptions, not as an injunction to disregard all data.

In marketing and advertising, it is important to make sure that what is being tested is the right thing to test, and that it can in fact be tested. Sample recruitment and size are crucial to the usefulness of the results, as are the hypotheses and the questions posed.
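To illustrate why sample size matters so much (my own back-of-the-envelope sketch, not from the article): the margin of error on a survey proportion shrinks only with the square root of the sample size, so halving the error means quadrupling the sample.

```python
# Illustration: approximate 95% margin of error for an observed
# proportion p with sample size n, via the normal approximation.
import math

def margin_of_error(p, n, z=1.96):
    """Half-width of a 95% confidence interval for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 400, 1600):
    moe = margin_of_error(0.5, n)
    print(f"n = {n:5d}: margin of error = +/-{moe:.1%}")
```

A sample of 100 gives roughly a ±10-point margin; getting to ±2.5 points takes 1,600 respondents. That is the cost pressure that tempts clients towards cheaper, more general samples – which then answer a different question from the one the brief asked.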

I remember a discussion between the head of planning at a top agency and Millward Brown about their respective views of research done for a large household brand, where the client insisted on a general sample (on the grounds that lots of people would see the ads) rather than the specific audience the work was aimed at.

It is hard for an agency to feel sanguine about work based on a particular brief, aimed at a particular type of person, being tested against everyone. The main reason for using the general sample was that the specific audience would have cost more to recruit. But once the research was done that way, it was hard to determine what errors the choice may have led to.
