The Impact of AI on Ethics

Over three stimulating days in June, CognitionX masterminded a Summit in London on artificial intelligence (AI) like no other. Academics, authors, thought leaders, entrepreneurs, programmers and practitioners came together to explore, challenge and debate a range of topics, with AI at the heart of our conversations. It was truly fascinating and I learnt a huge amount. In particular, the panel I appeared on, focusing on “The Impact of AI on Ethics”, drew a large, diverse and highly engaged crowd. Together with Professor Alan Winfield (Bristol Robotics Laboratory, University of the West of England), Hetan Shah (Royal Statistical Society), Dong Nguyen (Turing Institute) and Sam Muirhead (school pupil), and with the audience, we addressed the key question: how can we create AI that takes ethics into account in its decisions?

In preparing for the Summit, I identified three key areas of both opportunity and risk when thinking about the impact of AI on ethics, which I've outlined below. They're not exhaustive by any means, but I hope they provoke some further thinking:

1) The shifting moral frontier of defining the “right thing to do” means that somehow we need to impart a set of ethical values into AI systems, along with a context within which they can make decisions. What is right today may not be right tomorrow. Machines don’t know our values unless we tell them – and herein lies the dilemma. As humans we appear to have great difficulty in agreeing a set of ethical values and then sticking to them. My experience as an ethics practitioner working with corporates is that there is an apparent disconnect between the values espoused by organisations and how employees sometimes see business being carried out. So the question is twofold: how can we mitigate the risk that this inherent tension between the values organisations espouse and how business is actually conducted will be mirrored in AI (bearing in mind that humans don’t always get it right)? And, perhaps even more critically, how can we create boundaries, or a framework, within which AI can deploy these values to deliver the right decision in an ever-changing world?

2) Secondly, there is a connection between gender, power and ethics that needs to be explored in more depth in order to optimise diversity, maximise business performance and mitigate the risk of bias in AI. A recent PwC Strategy& report provides evidence that women reported more moral outrage, and greater reservations about sacrificing ethical values, than men did. If you tell people that a job requires them to compromise ethical values, men’s interest in the position doesn’t change, whereas women’s declines significantly. So we can see how, over the years, even a few women opting out of the workplace because of these ethical conflicts means fewer women making it into leadership roles and acting as mentors to junior women. What I see in many businesses is that ethical standards are often perceived as obstacles to goals, rather than as critical drivers of, and integral to, excellent organisational performance. We need to position ethical behaviours and standards as imperatives for enhanced business outcomes and competitive advantage. Research also tells us that power and influence increase individuals’ focus on goal-directed behaviour. So why not make ethical behaviour, based on a set of common values, one of the business goals for leaders? It would help to create a more open and inclusive environment, one where being different, whether because of age, sexual orientation, gender, religion, ethnicity or any other diversity lens, is seen as a force for good, and it would in turn help to reduce the risk of bias in AI.

3) And finally, there is a huge opportunity for business to rebuild trust in society through its role in the deployment of AI. If businesses don’t demonstrate an ability to address the ethical concerns around AI effectively, there will be growing social resistance to their application of it. We have to recognise that AI, once the preserve of data scientists and machine learning practitioners alone, is now attracting more and more attention from the media and the wider public. With trust between business and society at a crisis point, as evidenced by the 2017 Edelman Trust Barometer results, organisations cannot afford to be complacent about the role they need to play in calibrating the ethical risks associated with AI. To earn trust and restore faith, businesses must step outside their traditional roles and work towards a new, more integrated operating system that puts people – and the addressing of their fears and concerns – at the heart of everything they do. To do this, institutions need to demonstrate trustworthy behaviours such as openness, inclusivity, fairness and honesty on a consistent basis.

Perhaps somewhat ironically, these are exactly the kinds of values that AI systems will need to be taught by their programmers in order to “do the right thing”, not only today but also tomorrow and the day after. So it all begins and ends with us humans after all.

At PwC we are tackling the challenges highlighted above in a number of ways, including:

  • Publishing a Responsible Technology Approach, which seeks to maximise the positive impacts offered by technology whilst minimising any negative ones;
  • Launching a Responsible AI Framework which enables our clients to build trust and confidence in their AI deployments; and
  • Introducing Technology Degree Apprenticeships in partnership with the University of Birmingham and the University of Leeds which will focus on encouraging more women to consider careers within AI.

Find Tracey Groves on LinkedIn and Twitter