A view from Felicity Long

It's time to build an ethical framework for AI

Brands need emotional intelligence to navigate the ethical challenges that technology and data bring.

As we move into the next decade, consumers are demanding more of businesses in terms of their environmental or social impact. They want businesses to build scale in a way that’s human rather than corporate. We saw this sentiment reflected at CES from the marketers on the "putting the C in CMO" panel at the start of the month.

It would be easy to dismiss this as "woke" but Kantar’s latest research on what drives real growth shows that over-performing businesses focus significantly more on people growth – that means delivering tangible improvements for the people they work with, the people they serve, and the communities they sit within.

Nowhere is this approach more important than in the way businesses use technology, which is changing the very way we work and the types of services that can be developed. 

Tools such as machine learning are creating huge benefits – improving disease diagnosis and predictions of natural disasters, for example – but as people become over-reliant on tech, these tools are also creating unexpected issues, ones that challenge our industry in particular.

The AI and machine learning revolution is underpinned by data, often powered by consumer behaviour. We send countless emails, and we search and share personal information online – information that brands then use to tell us what to buy, what to read and what to view.

How brands use such data matters, because flawed data (or even its absence) can result in serious harm. As Caroline Criado Perez reveals in Invisible Women, for example, feeding gender-skewed data into an AI tool will only cause it to duplicate errors, not fix them.

Ask the right questions

As marketers, we need to ask some serious ethical questions. The simplest of these is "what data are the algorithms actually using?", but others are much more complex, such as "what is the foundational thinking upon which this algorithm is built?"

One of the most recent examples of where this has gone wrong within marketing is the Facebook algorithm, which was found to be discriminatory in the way it serves ads. The challenge wasn't intent; it was that the problem Facebook was trying to solve was advertiser response and increased revenue – not how to ensure that this was done in a non-discriminatory way. To be fair to Facebook, it is now doing more to reduce bias in both the learning data set and the approach to building the algorithm.

The smart way to deal with this challenge is not, of course, simply to ensure that we comply with the latest rules on data privacy, but to underpin our approach with an ethical framework that future-proofs the way we work.

It’s important to follow some key principles:

  1. Humans and society need to be at the core, not an afterthought.

  2. When building with AI or machine learning, think about how to ensure you are building and using data sets that are as diverse as possible.

  3. When selecting suppliers to work with, challenge them on how diverse their workforces are (a study by the AI Now Institute shows that just 15% of AI researchers at Facebook are women – at Google this drops to 10%).

  4. Go slow, get it right – the possible impact on your brand and business could be enormous. 

A good example of a brand that is thinking in a diverse way is L'Oréal. Its new AI-powered spot diagnosis tool has been co-developed with dermatologists, and the algorithm has been modelled on more than 6,000 patient photos provided by dermatologists, covering all ethnicities, genders and a range of skin conditions, each graded by acne experts.

In addition to having some core principles, we will be better placed to deliver the value that data and AI bring if we underpin our approach with strong emotional intelligence. This will ensure we allow our clients to enter consumers' lives in increasingly powerful ways, ethically.

Put humans at the centre

When our industry was first established, it was built on negotiation – a skill that required strong emotional awareness in addition to business insight. As our remit has expanded and become more aligned with technology, the value of pure intellect has risen.

More than ever, we now need emotionally intelligent leaders, leaders who don’t always follow the logical path but question whether it’s the right thing to do – blending art and science.  

Those of you who attended the recent breakfast event "The Year Ahead: Change and Grow" would have heard Kate Rowlinson, MediaCom's UK chief executive, talk about the importance of "whole brain" thinking.

We won’t be the first industry to take this approach. Similar questions are asked every time a new piece of technology impacts society and changes the way people live.

Take the invention of the respirator in the 1950s, for instance; it made it possible to keep patients alive who were unable to breathe unaided. However, it also raised ethical questions, especially when successful heart transplant operations began in the 1960s. 

It forced society to ask what should be done with patients on respirators who show no brain response and will never regain consciousness. The response of society and the medical profession was to change how we define death. 

Our challenge is less fundamental but no less critical to the way consumers view advertising and the communications they receive from our clients, the brands we choose to let into our lives.

The first wave of AI has been very tech-led; the second needs to have humans at the centre. Brands and marketers can really add value to this new age of information because, at its core, marketing is about connecting with people.

Felicity Long is managing director, connected execution, MediaCom