
Balancing innovation and responsibility for the future: Lord Chris Holmes on AI regulation

In an era dominated by artificial intelligence (AI), society stands at a crossroads. Do we take advantage of the unprecedented opportunities AI presents, or do we hesitate, waiting for the consequences of under-regulation to manifest?

Lord Chris Holmes, MBE, in his recent speech at the #RISK conference in London, addressed this very dilemma, calling for a structured and balanced regulatory framework to ensure that AI serves the public good.

He began by setting out four core “I”s:

  • inclusion
  • innovation
  • investment
  • international perspective

These elements, he said, should be central to any regulatory framework designed to manage the rapid development and deployment of AI technologies.

The importance of risk ran through the entirety of Lord Holmes' message. He cautioned against a "wait-and-see" approach to AI regulation, arguing that indecision is itself a choice with its own risks. The world has already seen the dangers of unregulated AI, from biased algorithms determining whether someone qualifies for a loan to AI-powered election interference threatening democratic processes. Inaction, according to Lord Holmes, could have dire consequences not just for individuals but for the very fabric of society.

At the heart of his argument is the notion that regulation and innovation are not mutually exclusive. Lord Holmes pointed to the UK’s history of regulatory success, citing the Competition and Markets Authority (CMA) as an example of how deliberate regulatory intervention has been replicated globally, proving that regulation can enhance rather than stifle innovation. His call is for "right-sized regulation", a middle ground between over-prescriptive rules that limit creativity and a "laissez-faire" approach that invites chaos.

The speech also tied inclusion to the societal potential AI holds. AI’s transformative abilities could revolutionise healthcare, education, and mobility, providing personalised and efficient services to millions of people. One compelling example he cited is the use of AI in breast cancer screening, which has significantly improved diagnosis and treatment outcomes. Yet these benefits will remain out of reach if a proper regulatory framework is not implemented; without one, AI could deepen the divide between those who have access to cutting-edge technologies and those who do not.

The stakes are just as high for democracy. In his speech, Lord Holmes pointed out that in 2024, "40% of the world’s democracies are due to hold elections, and AI could influence these outcomes, potentially swinging election results without voters even realising how." The potential for AI to be used as a tool for political manipulation or voter suppression is immense.

At the same time, AI presents a huge economic opportunity. By 2030, AI is predicted to contribute $15.7 trillion to global GDP. Lord Holmes commented that the exact number is less important than ensuring the UK is positioned to secure a significant share of that growth. Without a coherent regulatory framework, however, businesses could shy away from investing in AI due to uncertainty, stifling that economic potential.

As part of his vision, Lord Holmes introduced his AI Regulation Bill, which aims to tackle the challenges posed by AI in a way that supports innovation while protecting both individuals and society as a whole.

The AI Regulation Bill proposes several practical steps to address these challenges. The first is the creation of an AI Authority, a central regulatory body that would ensure all existing regulators are competent to handle the complexities of AI. This authority would also fill gaps in the current regulatory landscape and ensure consistency across sectors. Without such an authority, he says, we risk a fractured and inconsistent approach to AI regulation, detrimental to both individuals and businesses.

Another key component of the bill is the introduction of AI sandboxes: controlled environments in which businesses could develop and test AI technologies under real market conditions without fear of regulatory penalties. Holmes highlighted the success of the UK's similar sandbox in the FinTech sector, running since 2016, which has been replicated in over 50 jurisdictions worldwide. The goal is to foster innovation while maintaining safety and accountability.

The bill also introduces the role of a Responsible AI Officer, a new position within organisations tasked with ensuring that AI is deployed ethically and without bias. By assigning responsibility for the ethical use of AI, businesses can create a culture of accountability and transparency, building public trust in their technologies.

However, one of the most critical aspects of the bill is its emphasis on public engagement: if the public is not involved in the conversation about AI, they may resist the technology out of fear, missing out on the benefits while shouldering the majority of the risks. Public engagement, therefore, is essential to building trust and ensuring that AI is developed and deployed in a way that serves the public good.

His speech at the #RISK conference concluded that AI holds enormous potential, but that without "right-sized regulation" we risk undermining society, the economy, and democracy. "It is up to us to shape the future of AI responsibly, ensuring that it serves the public good and enhances, rather than threatens, our way of life."

External

This content is provided by an external author without editing by Finextra. It expresses the views and opinions of the author.
