Feature

The hubris and the angst dogging the research industry

The uproar over the 'failure' of the pre-general election opinion polling offers marketers an apt reminder of the need for caution when considering research data.

The recent reaction to UK general election polls showed how we rely on stats too much at times

On 19 June, in the light-filled lecture theatre of the Royal Statistical Society, an independent inquiry got under way to explain the divergence between the pre-election polls and the eventual result of May’s general election.

The panel of nine statisticians, sociologists and political scientists, chaired by Professor Patrick Sturgis of the University of Southampton, is expected to take until March 2016 to report its findings on behalf of the inquiry’s sponsors – The British Polling Council and The Market Research Society.

To go to these lengths to locate "the causes of the discrepancy" is testament to the research industry’s hubris, as well as its angst. "How could we be so wrong?" might as well be the title of this inquiry, for all the academic sobriety of its proceedings. The expectation was to be right. Failure is treated like one of those one-in-a-million engineering or transportation catastrophes that are the proper subjects of inquiries.

Yet, far from letting us down on election night, the pollsters did us a favour. They reminded us of the limitations of human ability to read and understand complex systems. They reacquainted us with the virtues of humility and caution in the face of seemingly incontrovertible data.

Far from letting us down, the pollsters did us a favour

The great thing about an election is that it is fast and unequivocal in its corroboration, or overturning, of assumptions. A single day and it’s done; we discover what it was we didn’t know, no matter how strongly we felt that we did.

In our humbler sphere of marketing, we are less fortunate. Research findings are ossified into ‘learnings’, which sit in PowerPoint charts, unchanged and unchallenged, right through to board decisions that could be business-critical.

Refutation, if it comes, is diffuse, and can take years to play out. By then, if things aren’t going too well, no one sits there asking: "Hey, do you think the research that got us here in the first place was actually flawed?"

The time to ask that question, then, is at the outset, when research is being judged, or, better still, commissioned. Conduct the inquiry before you inquire, focusing on the ways in which your chosen methodology may be studded with the asterisks of doubt.

In quant, online questionnaires are now the predominant commercial route to illumination. Yet digital deceit is such a cultural norm that we take it for granted: idealised Instagramming, avatar identities, fake Twitter accounts, the artifice of the "presentation of self in everyday life", to borrow Erving Goffman’s pre-digital, but extraordinarily prescient, book title.

Your research partner will seek to reassure you that these biases are accounted for – and perhaps the subject of your probing doesn’t warrant too much ‘idealising’ on the part of your consumers. Even so, as with any question-based methodology, you could do worse than recall Freud’s dictum that "we are largely invisible to ourselves".

In qual, despite the wealth of methodologies to hand, focus groups still account for the lion’s share of the marketing research budget. Their known limitations read like a side-effects list on a pharmaceutical leaflet: artificiality of surroundings, anchoring, moderator bias, order effects and the constraints of context and time.

The research industry likes to think it suffuses our decision-making with the light of understanding

By far the most serious is the one described by the legal scholar Cass Sunstein: group polarisation. He showed that a group will tend to exaggerate any slight bias that was present at the outset. So pernicious is this effect that, by the end of the session, the overall group bias will be more extreme than that of the single most-biased member beforehand.

What is the answer? It is not to abandon research altogether, but to do it less often, better. At the very least, that means challenging the pronouncements of research specialists more than marketers typically do now, and demanding to know precisely how ‘interpretations’ have been arrived at.

At best, it means embracing the academic ideal of ‘triangulation’, where different methodologies are interleaved to help identify underlying themes. If, say, co-operative enquiry, ethnography and conjoint analysis all seem to point to a common motif, you might be on to something.

The research industry likes to think it suffuses our decision-making with the light of understanding, but the reality is more like pinpricks of light emerging into our cave of ignorance. If there are enough of them, and if they come from different directions, then we can pick a few features out in the gloom – all the while reckoning that it could look very different tomorrow.

A little knowledge is all that’s possible. No one needs an inquiry to point that out. Thinking we understand when we don’t, acting as though tiny clues were clinching evidence – well, that is the dangerous thing.

