When research goes wrong

As the election proved, research does not always reflect reality. Creatives are well aware of this, Simon James writes.

Conservative Party: its voters are believed to be systematically underrepresented in polls. Credit: PA Photos

Normally, it’s politicians who are hammered by the polls. Last week, it was the pollsters who were pilloried by the politicians. The Labour strategist David Axelrod tweeted: "In all my years as journalist & strategist, I’ve never seen as stark a failure of polling as in UK. Huge project ahead to unravel that." I think the ICM Unlimited director Martin Boon put it most succinctly on the release of the exit poll when he tweeted: "Oh shit."

In a Pyrrhic victory for creative directors everywhere, it was just as they have always told us: the research was wrong. All the final pre-election polls had Labour and the Conservatives neck and neck. The result was a 6.3 per cent margin of victory for the Tories.

So who is to blame? Well, first, UK pre-election polling has form when it comes to getting the result wrong. YouGov’s Anthony Wells wrote: "Every couple of decades, a time comes along when all the companies get something wrong." When general elections occur only every five years, a couple of decades amounts to just four or five contests.

In fact, we only have to go back to 1992, the last Tory majority, to witness the pollsters’ nadir. That year, the polls had Labour narrowly ahead on the eve of the election, only for the Tories to win the popular vote by 7.5 per cent with a record total number of votes. Yet, in the aftermath of the 1992 debacle, polling was – in theory – put right.

One of the problems polling companies face is human nature. When they quote margins of error of 1, 2 or 3 per cent on their forecasts, it is easy to confuse precision with accuracy. Those margins refer only to sampling error at a 95 per cent confidence level – that is, if the same question were put to 20 comparable samples, 19 would produce a result within the stated margin. The margin of error has nothing to say about the gap between what people say and what they do.
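
To make that concrete, here is a minimal sketch of where those figures come from – the standard formula for the margin of error on a sampled proportion, assuming (as quoted margins implicitly do) a simple random sample of around 1,000 people:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a sampled proportion, assuming a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# A party polling around 34% in a sample of 1,000 people:
print(round(100 * margin_of_error(0.34, 1000), 1))  # ~2.9 percentage points
```

Nothing in that calculation accounts for respondents who tell a researcher one thing and do another at the ballot box.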

We should not take for granted that stated voting intention is a perfect indicator of voting behaviour.

One of the reasons exit polls are far more accurate is that the question is asked after the fact. Sadly, in my career as a data analyst, I have always had far more success predicting the past than the future.

There are two known effects in play. The first is non-response bias: the samples polled appear to have systematically underrepresented the Conservative vote. The Sunday Times journalist Rod Liddle pointed out that Tories are mostly at work when the researchers come knocking. Polling companies seek to adjust for this bias – especially since 1992 – but evidently it is still there. Are Tories simply harder to reach by standard polling methods?
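
Those adjustments are, at heart, demographic weighting: respondents from under-sampled groups are weighted up until the sample matches known population targets. A much-simplified sketch, with invented group labels and shares purely for illustration:

```python
# Invented figures, for illustration only.
sample_share = {"18-34": 0.40, "35-54": 0.30, "55+": 0.30}      # who actually answered
population_share = {"18-34": 0.28, "35-54": 0.34, "55+": 0.38}  # e.g. census targets

weights = {group: population_share[group] / sample_share[group]
           for group in sample_share}
print(weights)  # {'18-34': 0.7, '35-54': 1.13..., '55+': 1.27...}
```

The catch is that weighting can only correct for groups you can define and measure. If the Tories a pollster does reach differ from the Tories it doesn’t, the bias survives the adjustment.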

A potential reason for this is another bias – that of social desirability. When asked in research, we overestimate our charitable giving, and we understate our pay if we earn a great deal or overstate it if we earn little. Labour talks about fairness and paints the Tories as evil. According to some, a Tory majority would mean the end of the NHS and/or the end of the world. Who wants to admit to being selfish or voting out of self-interest? Far fewer people, it seems, than actually do.

In defence of the forecasters, this election saw a radical shift in vote shares. The Liberal Democrats’ vote dropped from 6.8 million in 2010 to 2.4 million in 2015 – a fall of 65 per cent.

Meanwhile, the Greens’ vote increased by 336 per cent, Ukip’s by 322 per cent and the Scottish National Party’s by 196 per cent.
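
The Lib Dem figure is straightforward percentage change – here is the quick check, taking the two vote totals above as given (the other parties’ percentages are quoted from the source, not recomputed):

```python
def pct_change(before_m: float, after_m: float) -> float:
    """Percentage change between two vote totals, in millions."""
    return 100 * (after_m - before_m) / before_m

print(round(pct_change(6.8, 2.4)))  # -65, i.e. a 65 per cent fall
```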

Again, using historical data to predict the future only works well when the future bears a strong resemblance to the past. The surge in seats for the SNP and the surge in votes for Ukip were both accurately forecast – but the scale of the collapse in the Lib Dem vote was not.

Polling companies were not the only ones to call it wrong. You could have placed a bet on a Tory majority at 16-1 with Betfair (in between important meetings) as late as Thursday afternoon.

Professional gamblers are usually very good forecasters because they back their opinions with their own money. Not this time.
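
For context, fractional odds convert to an implied probability of den ÷ (num + den) – ignoring the exchange’s commission – so 16-1 priced a Tory majority at roughly a 6 per cent chance:

```python
def implied_probability(num: int, den: int = 1) -> float:
    """Implied win probability from fractional odds of num-den, ignoring commission."""
    return den / (num + den)

print(round(100 * implied_probability(16), 1))  # 16-1 -> 5.9%
```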

Anyone who has worked in this industry long enough will have been part of a campaign that flew in research and yet bombed with the general public. Just like the pollsters, we are all wise after the fact, assigning blame randomly among unexpected market events, pesky competitors or, worst of all, an apathetic public.

In an era when statisticians like Nate Silver feature in their own episode of Panorama, this is a victory for creative teams everywhere. Research is fallible. My first boss once said to me: "Simon, one should use market research as a drunk would use a lamp post – for support, not illumination."

I use that phrase far too often. But, right now, I think it fits.

Simon James is the global lead for marketing performance analytics at SapientNitro
