Most Brits think AI should be regulated by the 'Blade Runner rule'

As Blade Runner 2049 hits cinemas, a survey has found that 85% of Brits believe that AI in marketing should be governed by a key principle from the movie franchise.

The "Blade Runner rule" says it should be illegal for AI applications such as social media bots, chatbots and virtual assistants to conceal their identity and pose as humans. 

This finding comes from a study by WPP digital group Syzygy.

The survey also found that just over nine in ten (92%) of Brits believe the use of AI in marketing should be regulated by a legally binding code of conduct, while three quarters (75%) think brands should need their explicit consent before using AI to market to them.

AI in advertising

There is a distinction between how consumers feel about AI creating advertising versus handling customer interactions. Only 17% of Brits said they would feel negatively if they discovered the latest ad for their favourite brand had been created by AI rather than humans.

By comparison, 28% said they would feel more negatively towards their favourite brand if they discovered it was using AI instead of humans to provide customer service and support. This rises to a third (33%) among women.

However, if brands are transparent about the use of AI, the majority of consumers are not averse to brands using the technology for marketing. Nearly eight in ten (79%) would not object to AI being used to profile and target them in marketing.

"It seems clear from this research that people are open to AI playing an increasing role in their lives," Mark Ellis, managing director at Syzygy said. "With the rapid advance of AI in marketing, it seems also clear that as an industry we need to establish a code of conduct for the safe, transparent and responsible use of AI in marketing. We’re calling on brands and marketing agencies across networks to join us in a new initiative to develop a set of voluntary AI Marketing Ethics guidelines that puts the wellbeing of marketing audiences at their heart."

Syzygy has proposed the following guidelines for the Marketing AI Rule Book: 

  • Do no harm - AI technology should not be used to deceive, manipulate or in any other way harm the wellbeing of marketing audiences
  • Build trust - AI should be used to build rather than erode trust in marketing. This means using AI to improve marketing transparency, honesty and fairness, and to eliminate false, manipulative or deceptive content
  • Do not conceal - AI systems should not conceal their identity or pose as humans in interactions with marketing audiences
  • Be helpful - AI in marketing should be put to the service of marketing audiences by helping people make better purchase decisions based on their genuine needs through the provision of clear, truthful and unbiased information

"This research reveals how consumers are conflicted when it comes to AI - many see advantages, but there are underlying fears based on whether this technology, or the organisations behind it, has their best interests at heart," Dr Paul Marsden, Syzygy's consumer psychologist who managed the study added. "Brands need to be sensitive to how people feel about this new technology. What we need is a human-first, not technology-first approach to the deployment of AI."

The report was based on a survey of 2,000 UK adults from the WPP Lightspeed Consumer Panel.
