It is the year 2029. I no longer have to battle through my love/hate relationship with screens. My technology is invisible, calming even, constantly finding ways of making life better and more fulfilling.
My iPhone turns my music down to nudge me to make eye contact with my boss. Alexa treats me to a discount on my next book for taking a 30-minute break from a four-hour binge session of Transparent. And my bank politely reminds me to stay within six Net-a-Porter items a month, based on my dwindling credit score.
Does this sound liberating or controlling? Perhaps a bit of both, but as the chief innovation officer of a customer experience agency, it is my job to fantasise about these kinds of scenarios.
It is very possible that when we look back on 2017, it will seem like the Wild West: a time when companies could use technology however they wanted, as long as it was legal, with no regard for ethical considerations or for whether it was in the consumer's interest.
As a marketing industry, we talk incessantly about "customer first". It has become the staple mantra of a new brand utopia. Yet I can't help feeling that our obsession with artificial intelligence and predictive analytics models is eroding this good intention. There is a risk that in trying to mathematise every human desire, we are neglecting what people really need, compromising not only their well-being but any trust they should have in us.
In 1950, no one imagined that tobacco companies would be barred from marketing to teens, that babies could no longer be used in advertising to sell sugary drinks, or even that seat belts would become compulsory. Few insiders manage to see regulation coming. We have to be an industry that can talk about ethics. And if we can't talk about ethics, someone else will do the talking for us.
The new "Algorithm of Value"
Scott Galloway has an interesting hypothesis for why Silicon Valley giants such as Amazon, Uber, Airbnb, and Spotify are dominating the market. The key to exponential growth is what he terms the "Algorithm of Value".
It is a simple equation. Success is predicated on the ability to extract huge quantities of quality data from people, and on the ability to analyse this data on the fly to better the service and upsell. It is fundamentally a virtuous circle, constantly learning, iterating and improving, powered by what AI specialists call "machine learning". It is such a powerful and seductive technology because, when done well, it creates a behavioural feedback loop that keeps customers happy captives in a brand universe, with little or no reason to leave.
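To make the mechanics concrete, here is a minimal, purely illustrative sketch in Python of the kind of feedback loop described above: a crude epsilon-greedy recommender that turns every customer interaction into training data for the next recommendation. All names and numbers here are invented for illustration; no production system is remotely this simple.

```python
import random
from collections import defaultdict

class Recommender:
    """A toy engagement-optimising recommender (all names hypothetical)."""

    def __init__(self, items):
        self.items = items
        self.shows = defaultdict(int)   # how often each item was recommended
        self.clicks = defaultdict(int)  # how often it led to engagement

    def engagement_rate(self, item):
        # Observed engagement rate; zero until we have data.
        return self.clicks[item] / self.shows[item] if self.shows[item] else 0.0

    def recommend(self, explore=0.1):
        # Epsilon-greedy: mostly exploit what the data says works,
        # occasionally explore so the loop keeps learning.
        if random.random() < explore:
            return random.choice(self.items)
        return max(self.items, key=self.engagement_rate)

    def record(self, item, engaged):
        # Every interaction becomes training data for the next cycle.
        self.shows[item] += 1
        self.clicks[item] += int(engaged)

# Simulate the loop: a hidden "true" appeal per item stands in for real
# customer behaviour. Over many cycles the system converges on whatever
# best holds attention.
true_appeal = {"book": 0.2, "playlist": 0.3, "series": 0.5}
rec = Recommender(list(true_appeal))
for _ in range(1000):
    item = rec.recommend()
    rec.record(item, engaged=random.random() < true_appeal[item])

print({i: round(rec.engagement_rate(i), 2) for i in rec.items})
```

Left to run, the loop learns to favour whatever maximises engagement. That is the happy-captive dynamic in miniature, and nothing in the objective asks whether the captivity is good for the customer.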
What is exciting for marketers and brands alike is that the machine-learning techniques sustaining this behavioural loop are only getting more sophisticated. Companies like MIT-born Affectiva are adding new layers to this human data stream, offering machine-learning technologies that can interpret nuanced emotional signals from images, video and speech data. New Zealand-based Soul Machines is developing hyper-realistic virtual humans programmed to respond intuitively to individual needs and emotional states.
From the art of persuasion to the science of behavioural conditioning
As someone passionate about innovation and human behaviour, I find it easy to be swept away by the technological optimism and creativity that machine learning brings. The potential to analyse vast clusters of data at extraordinary speed, to see people in ways they have never seen themselves, is mesmerising. But this ability to decode patterns in the rich fabric of people's lives, and to identify new levels of psychological triggers, is also influencing how marketers see themselves. Advertising used to be about the art of persuasion; now there seems to be a gravitational pull towards a fully fledged behavioural conditioning business.
Silicon Valley idealists would argue that this power to change individual and societal behaviour has been a force for good. There are myriad examples to reinforce this. Skype allows us to communicate with anyone in the world, using machine learning to translate between languages. Spotify can open up our ears to music we would never have explored. And as QQ Alert has shown, deep learning can successfully track down missing people lost for decades.
But equally, there is the same potential to abuse machine learning: to hack vulnerabilities in the human brain's source code, keeping people unhappy captives of a system they can neither perceive nor control. We witness this mostly at the darker end of the online gambling and gaming world, but increasingly the same techniques can be seen creeping into online retail loyalty schemes and entertainment apps.
AI and "acceptance creep"
As brands start to experiment, the General Data Protection Regulation (GDPR) is a necessary step in the battle to protect consumers from fraud and from the privacy intrusions of nefarious AI systems. But Dr Stefan Larsson suggests that expecting individuals to manage their own privacy is futile, and that privacy self-management does not work as a regulatory model on its own.
Individuals may have all the information about what companies do with their data, why they need it, and what a specific algorithm does. They may even assert their rights when their data is misused. But fundamentally, human cognitive limitations, lack of time, information overload and our primal instincts keep us forever handing over the terabytes of our lives.
In digital anthropology circles, this is known as "Acceptance Creep". We collectively normalise this invasiveness as the price to pay for socialising, ego flattery, finding a mate, and procuring free stuff. This is only going to intensify as the dopamine-fuelled reward loops of persuasive technology become standardised tactics across digital communications.
Designing AI for good
The mythology of AI easily leads our imaginations into the world of Blade Runner and Humans. But I believe this is the wrong focus. Self-replicating androids are less worrying than the negative impact, right now, of machine learning on our moral compass and self-protection mechanisms. People should fear themselves: their own vulnerability in the face of the masters of machine learning, and the mental trade-offs they make when they sacrifice their privacy. The warning signs of this vulnerability are already surfacing in our media streams. Every day we are informed that fake news is distorting our perception of reality, that social media dependence is increasing rates of mental health problems in young people, and that personal debt has reached its highest level in decades.
As marketers, we should be setting the agenda for a higher level of customer centricity before the tide turns and we find ourselves in a storm of consumer backlash. We are at a critical moment, just as interest in AI starts to peak, to create an ethical framework that sets this technology on the right path: one that works for customers, not against them.
By working together, we can help brands understand the fine line between enabling and undermining customer interests.
But, fundamentally, the opportunity is there to creatively rethink how AI can open people’s worlds to a better, richer life, rather than nudging them relentlessly towards a lonely life in front of a screen.
At Proximity we want to work with brands that have the ambition to use AI as a force for good.
This is an exciting time – let us evolve in sync towards the best-not-worst of all possible worlds.
Sarah Jane Blackman is chief innovation officer at Proximity London