Artificial Intelligence

News and resources on artificial intelligence systems, innovations and initiatives worldwide.

NextGen:AI: Legislation should facilitate UK AI development - Lord Holmes

Kicking off Finextra’s inaugural NextGen:AI conference at Convene Sancroft in London was Lord Chris Holmes of the UK House of Lords, delivering the headline keynote ‘AI in 2024: At the Edge’.

Editorial

This content has been selected, created and edited by the Finextra editorial team based upon its relevance and interest to our community.

Lord Holmes outlined the need to govern AI innovation, the importance of legislation that facilitates AI development in the UK, and the ways AI is affecting financial services governance.

“I think, be it in financial services, be it in any sector of our economy or society, we will have our best opportunity of achieving optimal outcomes if we always look to thread that gold of inclusion and innovation,” he began.

Holmes highlighted that the social, democratic, and economic issues with AI need to be addressed. AI is transformational across all fields – Holmes cited cases of AI innovation in medical science and personalised learning, but also pointed to its harms: misinformation spread during election cycles, deepfake scams, and bias in AI models.

Holmes then outlined his AI Bill proposed in parliament, which he designed to further trust, transparency, accountability, accessibility, international collaboration, innovation, and inclusion when legislating AI. “We needed to legislate in this country for the opportunities and for the challenges of artificial intelligence.”

Holmes outlined the three main clauses of the Bill:

  1. The creation of a small, agile, horizontally-focused AI authority to support larger regulators in identifying AI bias;
  2. The appointment of an AI officer in every organisation, responsible for the ethical deployment of AI; and
  3. All-round public engagement – effective public engagement and trust are essential if the technology’s benefits are to reach their full potential.

Holmes continued that human-led AI can drive the democratisation of financial services and strike a better balance between the interests of consumers and financial providers – a human-led AI future.

He concluded: "The businesses that will succeed are those who understand their business well, and then think, how can AI help in what we're trying to achieve? As opposed to, how do we fit AI into our business?"

In the second session of the morning, James Wong, lawyer at Clifford Chance, presented the keynote ‘Where are we now with AI?’. Wong started by chronicling a short history of AI technology, beginning with the birth of automation during the First Industrial Revolution and moving through to what is now largely considered the Fourth Industrial Revolution. He recounted how AI began as a footnote to the digital revolution, initiated by the proposal of the Turing Test and the Dartmouth Conference of 1956, and suffered through multiple winters before the development of machine learning in the 1990s paved the way for the AI boom of the modern era.

“With the development of deep learning, neural networks and generative AI, we find ourselves now in a situation where AI is integrated into so many aspects of our everyday lives. AI is not a novel technology. This is something that has been iteratively built on for decades; it's a maturing technology. It survived two winters, and now it's settling into its groove, having advanced to a level that allows it to be useful not only to researchers, not only to enterprise users, but to the world at large, and that is a very exciting juncture.”

Wong emphasised that with new technologies there is always both opportunity and opposition. With AI, financial institutions are well placed to tap into these opportunities, but "new opportunities quickly surface new harms". New technology always brings disruption, and voiced public opposition to AI must also be considered and acknowledged, said Wong, citing concerns about AI taking jobs and AI usage in creative industries as examples.

He detailed: “If we are to learn from the past, we need to recognise that change is disruptive, and long gone are the days when you can adopt a revolutionary new technology and leave a trail of destruction and anarchy in your way. We need to be conscious about the effects that AI that we build and deploy has on individuals, communities and societies.”

However, Wong highlighted that AI differs from other disruptive technologies throughout history, citing four points:

  1. Creative capability – AI can make previously unimaginable leaps in medicine, scientific research, and more;
  2. Versatility across domains – AI's general-purpose application is much broader than previous technologies and even previous generations of AI;
  3. Speed and scale – generative AI is unique in its ability to both analyse and synthesise vast amounts of data quickly and efficiently; and
  4. Adaptability – AI gets better over time and with use.

Wong stated: “These are the special factors that make AI the cornerstone technology of the Fourth Industrial Revolution.”

Wong explained that there are two key schools of thought on addressing AI risk: "AI safety" and "AI ethics". Safety focuses on the more remote but more systemic dangers of AI technology, while ethics addresses more fundamental and immediate issues such as fairness, privacy, explainability, and accountability. He pointed out that in any given context there are often more people in one camp than the other, but that it is important to consider both, and to address the more immediate issues, when looking at AI governance.

Wong then touched on the ‘Code is Law’ principle put forward by legal scholar Lawrence Lessig in the 1990s, which holds that software code is a form of regulation that dictates people’s behaviour. While Lessig was referencing the building blocks of the internet, Wong highlighted how his observations remain valuable today:

“The ‘Code is Law’ concept is a useful lens through which to consider the closer and closer integration of digital systems into our normal lives, shaping the way the physical world works and how humans, as biological beings, behave.”

Comments: (1)

A Finextra member 

AI can obviously be utilized in the financial industry by providers, middlemen/distributors and end-users of services. More important, however, is how AI can be used in political governance and by news media. In Sweden, an opposition MP has swamped a cabinet minister with hundreds of AI-generated queries on ministry issues, and according to the constitution the minister must reply to these MP questions. The ministry in question will need to spend many working hours formulating replies to the queries, blocking them from working on ministry tasks. This is but one example of what can be done to disrupt society without breaking the law. Do we, for instance, know to what extent the content in news media is created by AI rather than by analytical and skilled reporters seeking to produce unbiased news? The use of AI in society needs to be carefully considered.