Kicking off Finextra’s inaugural NextGen AI conference at Convene Sancroft in London was Lord Chris Holmes of the UK House of Lords, with his headline keynote titled ‘AI in 2024: At the Edge’.
Lord Holmes outlined the need to steer AI innovation, to ensure that legislation facilitates AI development in the UK, and to explain how AI is shaping financial services governance in Parliament.
“I think, be it in financial services, be it in any sector of our economy or society, we will have our best opportunity of achieving optimal outcomes if we always look to thread that gold of inclusion and innovation,” he began.
Holmes highlighted that the social, democratic, and economic issues raised by AI need to be addressed before turning to financial services. AI is transformational across all fields – Holmes cited cases of AI innovation in medical science and personalised learning, but also pointed to its damaging role in spreading misinformation during election cycles, enabling deepfake scams, and embedding bias in AI models.
Holmes then outlined his AI Bill proposed in parliament, which he designed to further trust, transparency, accountability, accessibility, international collaboration, innovation, and inclusion when legislating AI. “We needed to legislate in this country for the opportunities and for the challenges of artificial intelligence.”
Holmes outlined the three main clauses of the Bill:
- A small, agile, horizontally-focused AI authority to support larger regulators in identifying AI bias;
- An AI-responsible officer in every organisation, accountable for the ethical deployment of AI; and
- All-around public engagement – effective public engagement and trust are essential for the technology’s benefits to reach their full potential.
Holmes continued that human-led AI could bring about the democratisation of financial services and a better balance between consumers and financial providers – a human-led AI future.
He concluded: "The businesses that will succeed are those who understand their business well, and then think, how can AI help in what we're trying to achieve? As opposed to, how do we fit AI into our business?"
In the second session of the morning, James Wong, lawyer at Clifford Chance, presented the keynote ‘Where are we now with AI?’. Wong started off by chronicling a short history of AI technology, starting with the birth of automation during the First Industrial Revolution and moving through history to what is now largely considered the Fourth Industrial Revolution. He recounted how AI began as a footnote to the digital revolution, initiated with the Turing Test and the Dartmouth Conference of 1956, and suffered through multiple winters before the development of machine learning in the 1990s led to the AI boom of the modern era.
“With the development of deep learning, neural networks and generative AI, we find ourselves now in a situation where AI is integrated into so many aspects of our everyday lives. AI is not a novel technology. This is something that has been iteratively built on for decades; it's a maturing technology. It survived two winters, and now it's settling into its groove, having advanced to a level that allows it to be useful not only to researchers, not only to enterprise users, but to the world at large, and that is a very exciting juncture.”
Wong emphasised that with new technologies, there is always opportunity and opposition. With AI, financial institutions are best placed to tap into these opportunities, but bad actors will also inevitably arise. New technology always brings disruption, and the public opposition that has been voiced against AI must be considered and acknowledged, said Wong, pointing to concerns about AI taking jobs and AI usage in creative industries.
He detailed: “If we are to learn from the past, we need to recognise that change is disruptive, and long gone are the days when you can adopt revolutionary new technology and leave a trail of destruction and anarchy in your wake. We need to be conscious about the effects that the AI we build and deploy has on individuals, communities and societies.”
However, Wong highlighted that AI is unlike other disruptive technologies throughout history, citing four distinguishing features:
- Creative capability – AI is able to make leaps unthought of before in medicine, scientific research, and more;
- Versatility across domains – AI can be used across all industries;
- Speed and scale – the rapidity at which AI technology has developed has been transformational; and
- Adaptability – AI continues to learn the more it operates.
Wong stated: “These are the special factors that make AI the cornerstone technology of the Fourth Industrial Revolution.”
Wong explained that there are two key camps in addressing AI risk: safety and ethics. Safety looks at the dangers of AI technology, such as misinformation, deepfakes, and more; ethics looks at fairness, privacy, explainability, and accountability. He pointed out that in every organisation there are always more people in one camp than the other, but it is important to consider both, and to address the more immediate issues, when looking at AI governance.
Wong then touched on the ‘Code is Law’ principle posed by legal scholar Lawrence Lessig in the 1990s, which hypothesised that software code is a form of regulation that dictates people’s behaviour. While Lessig was referencing the early impact of the internet, Wong highlighted how his observations remain valuable today, when everyone is an avid internet user:
“The ‘Code is Law’ is a useful lens through which to consider the closer integration of digital systems into our normal lives, shaping the way the physical world works and how humans, as biological beings, behave.”