What business can learn from big tech's AI ethical failures
A view from Paul Vallois

Here are five key areas to consider in order to use AI ethically.

It has long been the case that technological development outpaces government, policy and regulation.

As a result, it’s hard to scan the news without seeing a headline about big tech’s failure to self-govern, the need for greater regulation or even the desire, in some quarters, to break up these behemoths.

Whether it is GDPR breaches, political manipulation, content promoting self-harm and extremism going unmoderated or the ever-listening ears of voice assistants, it’s fair to say that ethics have taken a back seat to customer acquisition and active users.

As artificial intelligence adoption continues to move from pilots on the periphery of businesses to a mainstay of growth strategies, it’s critical that companies take the time to explore not just what AI can do but what it should do.

Some of the key implementers and solution providers in this space have taken a proactive approach, with the likes of Microsoft, IBM, Adobe and Salesforce creating frameworks and policies for the responsible development of AI, both for themselves and for their clients.

While there is a direct correlation in some businesses between the maturity of their adoption of AI and their approach to ethics, others are implementing AI at a rate of knots but are not yet giving due consideration to the ethics framework and cultural considerations around it.

No single approach to ethics

There’s no single definitive answer as to what should be included in a business’s ethical AI policy, since every business will have nuanced differences. For example, in the extreme case of autonomous vehicle makers, consideration needs to be given to the long-debated "trolley problem": an accident involving the vehicle is imminent and it must opt for one of two potentially fatal outcomes, say an accident that involves several bystanders versus one that involves only a couple.

A recent global study on this subject by the Massachusetts Institute of Technology found broad global alignment on this conundrum: the option that involves the lowest possible loss of human life should prevail. However, cultural outlooks between East and West start to affect decision-making when the conundrum pits an elderly bystander against a younger one.

That said, there are some hard-and-fast areas that every business exploring the further use of AI should be considering.

Five things to consider in using AI ethically 

1 A policy of transparency

This applies to both internal and external audiences.

Recent Forbes research showed the disparity between business leaders, who saw AI not as a threat to jobs but as a way of elevating existing jobs and creating new ones, and the workforce as a whole, who believed the threat to be far higher.

The same research found that nearly 20% of respondents identified "resistance from employees due to concerns about job security" as a challenge to their AI efforts, which accentuates the need for a transparent policy that is communicated throughout the business. This policy should set out the benefits that AI can bring, the proposed use cases and clear red lines as to where it won’t be utilised.

Just as important is the need for transparency with customers. Customers should never be unsure whether they are conversing with a human or an AI; this is especially important when dealing with sensitive subjects such as health and finance. Customers should also be able to find out more about the businesses they are interacting with and how their personal data will be used in the context of machine learning.

2 Perpetuation of inequality

There has been a huge push for diversity over the past five years across all walks of business and while some progress has been made, it has been slow.

And without careful internal governance of the diversity of those creating the AI logic and programmes, current biases could be perpetuated or, worse still, amplified.

The World Economic Forum has already warned of "racist robots", referencing the case of software developed to predict future criminals that was found to be biased against black people. While not responsible for this failure, IBM recognised the potential for bias in AI and last year launched the open-source AI Fairness 360 toolkit, which analyses how machine-learning algorithms make decisions in real time and detects whether they are producing biased outcomes.
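To make this concrete, below is a minimal sketch of the kind of pre-deployment check the toolkit supports, using its Python package (aif360). The dataframe, column names, group definitions and threshold interpretation are hypothetical placeholders, not a prescription for any particular business.

```python
# A minimal sketch of a bias check with IBM's open-source AI Fairness 360
# toolkit (pip install aif360). All data below is hypothetical.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Hypothetical scored decisions: 1 = favourable outcome (e.g. application approved)
df = pd.DataFrame({
    "decision": [1, 0, 1, 1, 0, 0, 1, 0],
    "group":    [1, 1, 1, 0, 0, 0, 1, 0],  # protected attribute (1 = privileged group)
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["decision"],
    protected_attribute_names=["group"],
    favorable_label=1,
    unfavorable_label=0,
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"group": 1}],
    unprivileged_groups=[{"group": 0}],
)

# Disparate impact near 1.0 and statistical parity difference near 0
# suggest similar favourable-outcome rates across the two groups.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```

Running checks like this on both training data and live decisions gives governance teams a measurable signal, rather than relying on anecdote, before and after a model goes into production.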

3 Artificial stupidity

Machine learning is a process in which errors will inevitably be made as algorithms are optimised and models are trained and get smarter.

This means a robust approach to training, piloting and releasing is crucial. But even then, there will be cases that are untested and unexpected. As such, consideration needs to be given to AI management, ongoing analysis of outcomes and development of learning patterns as the machine gets smarter. A clear protocol for reporting and escalating unexpected results should be defined.
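What such a protocol looks like in practice will vary by business, but the sketch below illustrates the basic idea under assumed details: a hypothetical confidence threshold, tuned during piloting, below which a model's output is logged and routed to a human reviewer instead of being acted on automatically.

```python
# A hypothetical escalation protocol: low-confidence or unexpected model
# outputs are recorded and handed to a human rather than automated.
import logging

logger = logging.getLogger("ai_monitoring")
CONFIDENCE_THRESHOLD = 0.80  # assumed value, set during the pilot phase

def handle_prediction(case_id: str, label: str, confidence: float) -> str:
    """Return 'automated' or 'escalated' for a single model decision."""
    if confidence < CONFIDENCE_THRESHOLD:
        # Unexpected or uncertain result: log it for review and escalate.
        logger.warning("Escalating case %s: label=%s confidence=%.2f",
                       case_id, label, confidence)
        return "escalated"
    return "automated"
```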

4 The customer is always right

The protection of customer data, privacy and safety is paramount. In addition to these obvious requirements, it’s important that the customer maintains control and a business’ products and services remain accessible.

In the case of automation, it’s important that, if AI-enabled services are not able to fully support a customer’s needs, alternative human support can be easily sought. Similarly, customers who are not comfortable using conversational AI should not be discriminated against or marginalised.
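As a simple illustration, a conversational service might apply a handoff rule like the hypothetical one below: if the assistant cannot confidently resolve the customer's intent, or the customer asks for a person, the conversation is routed to a human agent. The phrases and threshold are illustrative assumptions only.

```python
# A hypothetical human-fallback rule for a conversational AI service.
HANDOFF_PHRASES = {"speak to a human", "talk to an agent", "real person"}

def route_message(message: str, intent_confidence: float) -> str:
    """Return 'bot' or 'human' for the next turn in the conversation."""
    wants_human = any(p in message.lower() for p in HANDOFF_PHRASES)
    if wants_human or intent_confidence < 0.5:  # assumed threshold
        return "human"
    return "bot"
```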

5 Ownership and security

Trust is critical for the future of AI. Dystopian headlines about a robot future and current concerns with big tech are already making some people feel cynical about engaging with AI-powered services and solutions.

As such, breaches in security, exposure of personal data and potential hacks to "re-educate" machine-learning patterns have to be treated with the utmost seriousness, and robust security measures and contingency plans must be put in place.

This is to protect both customers and the business. The latter needs back-up plans, and potentially short-term human alternatives to take over, while breaches are fixed and until normal automated services can be resumed.

Encouragingly, a recent piece of global research conducted by Forbes found that 63% of respondents stated they "have an ethics committee that reviews the use of AI" and 70% indicated that they "conduct ethics training for their technologists".

That said, only time will tell if the late adopters of AI will be as enlightened in their approach to ethics as the earlier ones appear to have been.

Paul Vallois is managing director at Nimbletank, an AI service and design consultancy. He was previously managing partner at Partners Andrews Aldridge