Which way now? Keeping ethical approaches at the heart of AI



In their recent blog, KPMG looks at how we can get the future development and regulation of AI right. 2023 has been a year of booming interest and excitement around generative AI and large language models. The potential of the technology is huge. It’s not an overstatement to talk in terms of a Fifth Industrial Revolution – with implications for industry, governments and humanity at large.

While genAI could unlock massive productivity enhancements and drive powerful new capabilities, it also holds risks that need to be very carefully managed, including deepfakes, misinformation and bias, hallucinations, IP theft and plagiarism, to name but a few. In KPMG’s 2023 CEO Outlook Survey, global CEOs cited ethical challenges as their number one concern in relation to the implementation of generative AI. Regulatory efforts have been stepping up, including the publication of the UK’s AI White Paper, which espouses a ‘pro-innovation’ approach. So, how do we get the future development and regulation of AI right – for the economy, people, society and the planet?

With genAI now around a year old (in the public consciousness at least), and with a new year fast approaching, it was more timely and relevant than ever to come together at the Digital Ethics Summit 2023 to discuss where AI may head next – and how we can ensure that it works in the best interests of all. In a thought-provoking and wide-ranging day of discussion, a number of key points stood out, including:

Trust: How do we get public engagement right? Education is key, together with promoting awareness of the privacy protections provided by the ICO and a clear focus on safety, including against cyber threats.
Regulation: There is a real need for coordinated international standards to provide a horizontal layer that plugs known gaps in existing regulation and legislation. This may involve some consolidation and simplification in what is already shaping up to be a crowded space.
Humanisation: On the employee front, we need to make sure that AI isn’t seen as a remote piece of black-box technology competing with or replacing humans – but is re-framed as ‘your new AI colleague’ that can support people and help them achieve more of what they need to get done. Accountability is also key: the AI strategy needs to be owned and driven from the very top, up to and including the CEO.

Read more on their website.