How to stop the noise drowning out fair judgments in your team
A view from Sue Unerman

Judgments can be swayed by many factors seemingly unrelated to the matter in hand. But there are ways to ensure you minimise bias when making decisions.

The Media Week awards, billed as “the most highly prized awards in UK commercial media”, are back and now open for entries, with deadlines looming in June and July.

I’m honoured and delighted to have been asked to judge again. I have seen the growth of professionalism and rigour in the judging over the years. However, a new book, Noise, by Nobel prize winner Daniel Kahneman (co-authored by Olivier Sibony and Cass Sunstein), makes for grim reading on how what the writers categorise as “noise” distorts judgment.

The book is packed with evidence casting significant doubt on nearly every kind of judgment that underpins business and society. For example, a 1981 study of 208 federal judges, all given the same 16 hypothetical cases, found agreement on the verdict in only three of them. There was also huge variation in sentencing: in one case where the average sentence was a year, one judge recommended 15 years in prison.

In real-life judgments (as opposed to hypothetical cases), judges have been found to be more likely to grant parole at the beginning of the day or after a food break. Hungry judges are tougher. One study that examined 1.5 million judgments over three decades showed that, when the local football team loses a game at the weekend, judges make harsher decisions on Monday.

A study of six million decisions made by French judges found that defendants are given more leniency on their birthdays. And when it is hot outside, people are less likely to be granted asylum, according to evidence on the effect of temperature on 207,000 immigration court decisions.

This is shocking and, as you read through the book, the evidence for the unreliability of human judges and juries keeps piling up.

More evidence, then, of why evidence-based decisions, built on rigorous modelling, are so important in media and advertising thinking, and of why the IPA Databank is so useful.

Are robotic judgments better? Not by much, according to this book. Partly, of course, because algorithms are trained on history (past judgments delivered by humans and, therefore, subject to bias) or on a set of rules (created by humans and equally subject to bias). Machine learning is not as "unnoisy" as it seems.

Winning an award is important and can help your career path, but your career also depends on the judgments of others in other ways. Studies of 360-degree performance reviews find that actual performance accounts for no more than 20-30% of the variance in ratings. The rest is system noise. And the noise may have absolutely nothing to do with you – it could be down to a row the rater had at home, bad weather spoiling their plans for the evening or, conversely, the fact that they have just had a generous review from someone else.

We can’t delegate career decisions to machines, anyway, as the authors write: “Creative people need space. People aren’t robots… people need face-to-face interactions and values are constantly evolving. If we lock everything down, we won’t make space for this.” 

What should we do to account for noise in decision-making (aside from hoping for good weather and a winning football team)?

Kahneman, Sibony and Sunstein advocate appointing a “decision observer”: someone to identify and point out bias as decisions are being made. The role is common on major boards, where it falls to non-executive directors and chairs, but non-existent in many reviews and on awards judging panels, and it should be welcomed (at least as a trial).

In addition, high-performing teams need, as a matter of course, to understand how to reach agreement when they disagree, in a way that does not simply defer to whoever is most forceful or charming. We all need to develop a transparent way of working through disagreements.

In our book Belonging: the Key to Transforming and Maintaining Diversity, Inclusion and Equality at Work, we say this: “Understand that there are three kinds of disagreement: a) we are using different facts and evidence to reach our conclusions; b) we are interpreting the facts and evidence differently; c) we actually fundamentally disagree.”  We detail how to do this in chapter six. 

Start with this, and at least some of the noise in collective decision-making will quieten, leading to better outcomes for everyone.

Sue Unerman is chief transformation officer at MediaCom

Picture: Education Images/Getty Images