
The most common mistake made in digital measurement is simply to quote the number of fans, views, clicks or impressions.
'Digital media might be the most measurable media, but it is also the easiest to misinterpret,' warns Simeon Duckworth, global head of analytics at Mindshare. Writing in the latest edition of the IPA's Advertising Works 21 tome, launched this week, Duckworth explores the ways in which marketers should address the issue.
'While the interactivity of digital media - constantly asking people to make decisions and choices - creates many, very useful metrics, it is also, sadly, the reason that can make it difficult to understand what influences these decisions,' he says. 'Unpicking cause and effect in digital media is riddled with nuance.'
The importance of a considered approach to measurement is Duckworth's central theme - he cautions against being swept away by what the headline data might appear to be shouting, and explains how to spot inherent bias.
'Digital measurement is no longer about post-campaign justification. It is increasingly about understanding human behaviour in an adaptive environment,' he adds.
Here, we pull out Duckworth's key messages and advice to help inform marketers' digital planning strategy.
Understanding decision-making, reaction and engagement
Our actions, views and intentions are increasingly sedimented in digital data.
Search terms, Twitter, Facebook posts, blogs and product reviews all have the potential to uncover the ways we are reacting to communications.
So, for instance, share of search or natural search positions are indicators of salience; positive and negative sentiment can be tracked and overlaid with communication weight; likes and retweets can be forms of endorsement and recommendation.
But we must experiment to measure brand impact. The ability to conduct almost gold-standard control testing (such as Millward Brown's Dynamic Logic) is an important way to measure brand affinity or sales impact, and sets digital apart from other media.
The Metropolitan Police case study (see below), in which 'communication engagement' was six times the average and 60% of hosted videos were watched in full, shows how following up with people who liked the communication on Facebook allowed them to keep the debate going. With some careful design, this could have been extended to testing how youngsters exposed to the communication thought differently from a control group.
How to avoid the bias of fans and 'birds of a feather'
It is vital to set benchmarks and targets.
Size always needs a comparison, if not always a yardstick. Quoting headline numbers - from fans to clicks - is a common mistake; the expected number of fans will depend systematically on the category, strength and size of the brand, the country, the target audience and so on. Traditional research has known this for years and has developed sophisticated ways of creating norms that take factors such as brand size into account. Digital benchmarks need to do the same.
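To make the norming idea concrete, here is a minimal sketch (not the approach of any particular research firm) of how a fan-count benchmark that accounts for brand size might be built: fit a simple relationship between brand size and fan count across a peer set, then index a brand's actual figure against the predicted norm. All brands, revenues and counts below are invented for illustration.

```python
# Minimal sketch: judging a fan count against a size-adjusted norm rather than
# quoting the raw headline number. All figures are invented for illustration.
import numpy as np

# Hypothetical peer set: (brand revenue in GBP millions, fan count)
peers = np.array([
    [50, 12_000],
    [120, 30_000],
    [400, 95_000],
    [900, 210_000],
    [60, 18_000],
])

# Fit a simple log-log relationship: expected fans as a function of brand size.
log_size, log_fans = np.log(peers[:, 0]), np.log(peers[:, 1])
slope, intercept = np.polyfit(log_size, log_fans, 1)

def expected_fans(revenue_m):
    """Norm: the fan count predicted for a brand of this size."""
    return float(np.exp(intercept + slope * np.log(revenue_m)))

# A brand with GBP 200m revenue and 40,000 fans: above or below its norm?
actual, norm = 40_000, expected_fans(200)
print(f"norm ~{norm:,.0f} fans, index vs norm = {actual / norm:.2f}")
```

In practice a real norm would control for category, country and audience as well as size, but the principle is the same: the comparison, not the raw count, carries the information.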
The 'selection bias' must also be addressed.
We are quite used to the idea that the online or Twitter population is different from the general population (but that it can still be very informative). We also know that Facebook fans are likely to be drawn from the most loyal consumers; the value of a fan (three times the average consumer) tells you very little about the value of creating new fans.
However, the deeper problem is that behaviour online and exposure to marketing is commonly driven by intent - the more I want to buy a product the more likely I am to see related media about it. If I intend to buy a Canon camera from Amazon, I am more likely to use Canon-related search words, to see Canon display ads and even to be a fan on the Canon Facebook page.
But none of this has influenced my decision to buy.
Sadly, this 'selection bias' matters a great deal. A recent study outlined at the WWW2012 conference found that naive measurement can be biased to the tune of 1,000%.
Care needs to be taken when looking at simple correlations between offline and online metrics. Best practice should include test and control measures wherever possible, even with econometric models.
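The scale of the problem is easy to demonstrate. The simulation below is a deliberately simplistic sketch, with invented parameters: exposure is driven by purchase intent, the ad itself has no effect at all, yet a naive exposed-versus-unexposed comparison still reports a large 'uplift', while a randomised test-and-control split correctly reports roughly none.

```python
# Selection bias in miniature: exposure follows intent, so the exposed group buys
# more even though the ad does nothing. A random holdout shows the true (zero) effect.
import random

random.seed(1)
N = 200_000

naive_exposed, naive_unexposed = [], []
test, control = [], []

for _ in range(N):
    intent = random.random()                            # latent purchase intent, 0..1
    exposed = random.random() < 0.1 + 0.6 * intent      # high intent -> more ad contact
    bought = random.random() < 0.05 + 0.3 * intent      # buying depends on intent only

    # Observational view: compare whoever happened to be exposed vs not.
    (naive_exposed if exposed else naive_unexposed).append(bought)

    # Experimental view: a random split, ignoring actual exposure.
    (test if random.random() < 0.5 else control).append(bought)

rate = lambda xs: sum(xs) / len(xs)
print(f"naive 'uplift': {rate(naive_exposed) / rate(naive_unexposed) - 1:+.0%}")
print(f"test vs control: {rate(test) / rate(control) - 1:+.0%}")
```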
Recognise 'homophily': love of the same.
As social media has grown, we have become more interested in how ideas propagate, through peer-to-peer networks and mass media.
The temptation is to see viral communication everywhere. Clusters of similar people tend to congregate together and may look like they are influencing each other. In fact, they are responding to the same external stimulus - seeing a 'quit smoking' ad on TV, for example.
This is known as 'homophily' - or, more simply, the idea that 'birds of a feather flock together'. Often, what looks like a cascade is actually homophily.
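The same point can be shown with a toy simulation, again with invented numbers: friends are paired by a shared trait and each reacts independently to the same broadcast stimulus. Their behaviour ends up correlated, so it looks like one friend is influencing the other, even though no peer influence exists in the model.

```python
# Homophily masquerading as influence: similar friends respond to the same external
# stimulus, so their actions correlate without any peer-to-peer effect.
import random

random.seed(7)

pairs = []
for _ in range(50_000):
    trait = random.random()           # shared characteristic of a friendship pair
    a = trait + random.gauss(0, 0.1)  # each friend is similar, but not identical
    b = trait + random.gauss(0, 0.1)
    # Both react independently to the same broadcast stimulus; no contagion.
    acts_a = a + random.gauss(0, 0.2) > 0.7
    acts_b = b + random.gauss(0, 0.2) > 0.7
    pairs.append((acts_a, acts_b))

p_base = sum(b for _, b in pairs) / len(pairs)
p_given_friend_acts = (sum(b for a, b in pairs if a) /
                       max(1, sum(1 for a, _ in pairs if a)))
print(f"P(acts) = {p_base:.2f}, P(acts | friend acts) = {p_given_friend_acts:.2f}")
```

The conditional probability comes out well above the base rate, which is exactly the pattern that tempts analysts to declare a viral cascade.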
Tips to improve digital measurement
Start early, plan evaluation and implement test and control experiments.
Digital campaign measurement and evaluation is often geared to short-term optimisation rather than measuring the wider impact of a creative idea.
Not only should digital measures be included within the existing evaluation plan, but test and control experiments should also be considered. This requires more forethought and pre-planning than the evaluation of more traditional media and metrics.
It's about more than clicks.
Use online to evaluate brand behaviour and message take-out. Only one in six people has ever clicked on an ad, so it is, perhaps, no surprise that there is no correlation between clicks and brand impact. A good way of measuring brand impact is via 'recontact' studies, to establish differences in attitudes between engaged and control groups.
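The comparison behind a recontact study is straightforward: the share agreeing with a brand statement among people engaged with the campaign versus a matched control group, with a simple significance check on the gap. The sketch below uses invented sample sizes and agreement counts purely to show the shape of the calculation.

```python
# Sketch of the read-out from a 'recontact' study: agreement with a brand statement
# among the engaged group vs a control group, with a two-proportion z-test.
# All counts below are invented.
from math import sqrt
from statistics import NormalDist

def proportion_gap(agree_engaged, n_engaged, agree_control, n_control):
    p1, p2 = agree_engaged / n_engaged, agree_control / n_control
    pooled = (agree_engaged + agree_control) / (n_engaged + n_control)
    se = sqrt(pooled * (1 - pooled) * (1 / n_engaged + 1 / n_control))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p1 - p2, z, p_value

gap, z, p = proportion_gap(agree_engaged=312, n_engaged=1000,
                           agree_control=268, n_control=1000)
print(f"uplift = {gap:+.1%}, z = {z:.2f}, p = {p:.3f}")
```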
Think about novel ways to connect online data sources.
Increasingly, there is the option to use online panels to connect digital data and behavioural response in a single source. For example, Facebook and Google have connected loyalty-card data with cookies to isolate the sales impact of their ad formats through controlled tests.
The opportunity of using single-source data (such as Kantar's Compete panel) in combination with econometric modelling and controlled testing offers powerful potential for understanding the chain of influence, including the role of targeting.
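What 'connecting loyalty-card data with cookies' might look like in practice is sketched below. This is an illustration only, not the actual pipeline of any platform or panel: ad-exposure records are joined to purchase records on a shared household ID, and the sales impact is read off a randomised holdout rather than a raw exposed-versus-unexposed comparison. Every table and column name is invented.

```python
# Illustrative single-source join: exposure logs merged with loyalty-card purchases,
# with sales impact measured against a randomised holdout group.
import pandas as pd

exposures = pd.DataFrame({
    "household_id": [1, 2, 3, 4, 5, 6],
    "group":        ["test", "test", "holdout", "test", "holdout", "holdout"],
    "saw_ad":       [True, True, False, False, False, False],
})
purchases = pd.DataFrame({
    "household_id": [1, 2, 3, 4, 5, 6],
    "spend":        [14.0, 9.5, 8.0, 11.0, 7.5, 9.0],
})

single_source = exposures.merge(purchases, on="household_id", how="left")

# Sales impact = average spend in the test group vs the randomised holdout.
lift = single_source.groupby("group")["spend"].mean()
print(lift)
print(f"test vs holdout: {lift['test'] / lift['holdout'] - 1:+.1%}")
```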
METROPOLITAN POLICE SERVICE - IPA BEST DEMONSTRATION OF CONSUMER PARTICIPATION
The 'Who killed Deon?' campaign was designed to help tackle youth violence by educating disenfranchised teenagers about a legal principle known as Joint Enterprise.
The idea was an interactive murder-mystery hosted on Facebook, a 'whodunnit' about a boy called Deon.
It embraced a teenage audience by enlisting them as co-creators and collaborators.
The campaign was four-and-a-half times more efficient than forecast, achieving 135,371 unique visitors compared with the 30,000 estimated, and at a cost of £1.33 against the £6.04 budgeted. The prediction is that it will generate a payback of £8.50 for every £1 spent.
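A quick back-of-envelope check shows how the headline figures hang together, assuming the £1.33 and £6.04 are costs per unique visitor (an assumption, but one both ratios are consistent with).

```python
# Rough check of the case-study figures: both the visitor ratio and the cost ratio
# come out at roughly 4.5x, matching the quoted efficiency gain.
visitors_actual, visitors_forecast = 135_371, 30_000
cost_actual, cost_forecast = 1.33, 6.04

print(f"visitors vs forecast: {visitors_actual / visitors_forecast:.1f}x")  # ~4.5x
print(f"budgeted vs actual cost: {cost_forecast / cost_actual:.1f}x")       # ~4.5x
```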
MARKETER VIEW: ELSEWHERE IN ADVERTISING WORKS 21
BRANDS UNDER SCRUTINY
Jo Blundell, Marketing director UK, Burger King
At Burger King, it may take only four people to sign off an idea, but it takes 47 signatures at local, EMEA and global level to get that creative on to TV or into restaurants; and, most importantly, 25,000 internal stakeholders across the business - from crew to franchise operators - to believe in the brand, own the idea and make the message a reality that is grounded in truth.
In an age of heightened scrutiny and transparency, with even greater channels for our guests to give feedback and comment on their experience, the onus is on marketing to ensure that every stakeholder, every function, every point of contact with the guest lives up to the promise: the story needs to add up. Any chink in the chain of evidence will be found and commented on.
ECONOMETRICS: SERVANT, NOT MASTER
Richard Bateson, Commercial director, Camelot Global
A good econometric model can separate out the key influences on sales shifts and allow you to evaluate the return on marketing investment of factors as diverse as TV advertising, promotions, the weather, the economy and competitor communications.
Particularly powerful is the ability to look across a diverse brand portfolio and understand the inter-dependencies of each of your products, halo effects and even potential cannibalisation.
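For readers unfamiliar with the mechanics, the sketch below shows the kind of decomposition an econometric sales model performs. It is a minimal illustration on simulated data, not Camelot's model: weekly sales are regressed on a handful of drivers, and each driver's contribution is read off its coefficient.

```python
# Minimal econometric-style decomposition on simulated data: sales regressed on
# TV weight, a promotion flag and a weather proxy via ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)
weeks = 156
tv_grps = rng.uniform(0, 400, weeks)          # weekly TV weight
promo = rng.integers(0, 2, weeks)             # on-promotion flag
temperature = rng.normal(12, 6, weeks)        # weather proxy

base_sales = 1000
sales = (base_sales + 0.8 * tv_grps + 150 * promo
         - 5 * temperature + rng.normal(0, 40, weeks))

# sales = intercept + b1*TV + b2*promo + b3*temperature
X = np.column_stack([np.ones(weeks), tv_grps, promo, temperature])
coefs, *_ = np.linalg.lstsq(X, sales, rcond=None)
intercept, b_tv, b_promo, b_temp = coefs

print(f"base ~{intercept:.0f}, per TV GRP ~{b_tv:.2f}, "
      f"promo week ~{b_promo:.0f}, per degree ~{b_temp:.1f}")
```

A real model would of course handle adstock, seasonality, competitor activity and portfolio interactions, but the principle of separating out influences is as shown.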
Within the National Lottery, econometrics is important, as we need to be answerable to our players and all our stakeholders about where we invest our budgets and how much we return to good causes for every marketing pound spent. It cannot answer every question, but does provide one strong tool within our measurement suite.
My main concern is the danger of thinking 'if it says it in the model, it must be true'; models are only as good as the data put in, the skills of the analyst and the other information available to interpret the outputs holistically. Also, a dependency on models that tell us what worked in the past can make us risk-averse in the future.
Few clients would take a leap into social if all eyes were on the econometrics model. For example, the Orange association with cinema probably doesn't stack up in the world of econometrics.
Marketing needs bravery alongside the science bit.