What is the global AI legislative outlook?



The EU AI Act made history as the world's first comprehensive AI legislation when it entered into force in August 2024. We have now entered a 24-month transitional period during which delegated legislation, guidelines, and standards are being drafted and published across all EU member states. The regulatory framework is designed to classify AI systems by level of risk and to define obligations for the deployment of high-risk AI systems.

This is an excerpt from The Future of AI in Financial Services 2025 report, a special edition published for the inaugural Finextra event, NextGen AI.

“The EU AI Act represents a genuinely pioneering effort to regulate AI across all industries globally, establishing a risk-based classification, where high-risk systems like insurance and credit scoring face stringent regulations on data quality, documentation, human oversight, and transparency,” commented Isa Goksu, CTO of Globant UKI and DE. “It’s one to watch for any company operating within the EU.”

Lord Christopher Holmes, Baron Holmes of Richmond, added: “The EU AI Act is certainly the chunkiest piece of legislation currently. It is highly prescriptive and will need to be understood by all UK firms with an interest in or connection to the EU. It is also very much worth looking at the pieces of AI legislation passed in China and the work of the HKMA in this respect.”

While the EU may be the first to officially put comprehensive AI regulation into force, the drive towards legislation is not unique. Other legislative and regulatory bodies, such as those in the US and the UK, are similarly drafting their own AI regulation. Pavel Goldman-Kalaydin, head of AI & ML at Sumsub, emphasised: “The US already faces something similar, with state-level AI regulatory initiatives being much more active than anything seen on the federal level. The degree to which all these state legislations will be cohesive with themselves, and with anything that will come from the federal government, is yet to be seen.”

Shaun Hurst, principal regulatory adviser at Smarsh, elaborated: “In the UK, the government is working on an approach to balancing innovation with proper oversight. Additionally, the US issues strong guidelines through its Executive Order on the Safe, Secure & Trustworthy Development and Use of AI, while also launching a dedicated institute for studying safety issues.

“The most significant aspect of these varying regulatory developments is how each of them is attempting to tackle different priorities, from protecting everyday users to making sure advanced systems stay reliable and fair,” Hurst continued. “Ultimately, the real challenge for banks and financial institutions will be getting everyone at an international level on board and on the same page, since technology doesn’t care about borders.”

UK AI legislation

Looking towards the UK, White & Case’s AI Watch wrote that the “government's AI Regulation White Paper of August 3, 2023 and its written response of February 6, 2024 to the feedback it received as part of its consultation on the White Paper both indicate that the UK does not intend to enact horizontal AI regulation in the near future. Instead, the White Paper and the Response support a ‘principles-based framework’ for existing sector-specific regulators to interpret and apply to the development and use of AI within their domains.”

In light of this approach, the FCA has outlined a 12-month plan that includes collaboration with other members of the Digital Regulation Cooperation Forum (DRCF) to deliver a pilot AI and Digital Hub, as well as running its own Digital Sandbox and Regulatory Sandbox.

In an update to the regulator’s AI approach, Jessica Rusu, chief data, information and intelligence officer at the FCA, wrote: “The Government’s principles-based, sector-led approach to AI is welcome; the FCA is a technology-agnostic, principles-based and outcomes-focused regulator. We are focused on how firms can safely and responsibly adopt the technology as well as understanding what impact AI innovations are having on consumers and markets. This includes close scrutiny of the systems and processes firms have in place to ensure our regulatory expectations are met.”

Lord Holmes has also submitted an AI bill of his own. He told Finextra: “My hope is that I will make some more progress with my AI Regulation Bill. In opposition, Labour were supportive and positive about the Bill, its clauses, and its principles. Currently, they are not looking to legislate, save for a specific narrow AI safety Bill. I believe it is crucial that we take the opportunity of our common law legal system, our tech and FS ecosystem and pass economy-wide, society-wide AI legislation: for the benefit of citizen, consumer, innovator and for inward investment.”

US AI legislation

The Biden-Harris administration released an executive order in October 2023 on the safe, secure, and trustworthy development and use of AI. Marking the country’s most comprehensive effort on AI regulation to date, the executive order aimed to establish the United States as a leader in safe, ethical, and responsible AI use.

In October 2024, one year after the executive order was issued, the White House released an update on landmark achievements over the previous 12 months. The update announced that federal agencies had completed all actions on schedule, including:

  • The launch of a new Task Force on AI Datacenter Infrastructure.
  • The establishment of the AI Safety and Security Board (AISSB) to advise the Secretary of Homeland Security on the safe and secure use of AI in critical infrastructure.
  • The release of a Department of the Treasury report on managing security risks of AI use in the financial sector.

Additionally, the Blueprint for an AI Bill of Rights outlines guidance around equitable access and use of AI systems and “provides five principles and associated practices to help guide the design, use and deployment of ‘automated systems’: safe and effective systems; algorithmic discrimination protections; data privacy; notice and explanation; and human alternatives, consideration and fallbacks,” writes White & Case.

In light of the November 2024 election results and the transfer of power in January 2025, it remains to be seen how these efforts will be continued and how AI will be tackled by the Trump administration.

Striking a balance between innovation and security

Regulating the AI space is a challenge emblematic of the current digital age. Hurst explained the predicament well: “Regulators face a daunting task; AI is developing faster than rules and policies can keep up with, and they need to focus on protecting customers while allowing companies to innovate. In response, financial institutions should set up testing grounds, often referred to as a sandbox, where they can safely experiment with new features and ideas under proper supervision. One good example of this is the FCA’s AI Lab, part of its Innovation Services, which supports companies in developing new AI models and solutions.”

Yet while regulation can be perceived as a major challenge for companies, it is necessary for safe development. A shift in mindset is needed for organisations to embrace the potential that lies not just in AI, but also in its regulation.

“Companies have a role to play in this dynamic too,” Goldman-Kalaydin emphasised. “Even though implementing the EU AI Act, or any other AI-related standard, requires extensive compliance resources, companies should shift their mindset from perceiving these compliance efforts as purely ‘costs’ to seeing them as something that can work in their favour. In the future, the successful company is the one that does not antagonise AI best practices and understands that having safe, trustworthy AI is in reality a competitive advantage.”

For Lord Holmes, striking the balance between innovation and security is a “mission for all legislators and regulators; it is essential and it is entirely achievable.” He continued: “We all know bad regulation; that doesn’t for a moment mean that regulation is bad, that’s just bad regulation. It is crucial to hold the needs and aspirations of citizens and consumers, innovators and investors simultaneously, so that there is a chance to enable optimum outcomes. This requires taking a principles-based approach. To those principles: trust and transparency, inclusion and innovation, assurance, accountability, and accessibility.”

In Europe, we have seen that the EU AI Act is designed to ensure accountability and auditability of AI systems for fairness, accuracy, and compliance with privacy regulations, but it is simultaneously driving interest in AI systems now that frameworks are in place. An SAP Concur survey found that 51% of CFOs were investing in AI in 2024, compared with only 15% in August 2023.

The reality is that a legal framework gives companies that might be hesitant to embrace AI the guidance they need to deploy the technology confidently and securely. Looking towards the global regulatory landscape, it seems that a risk-based approach is the best way forward.

Imposing strict compliance requirements on high-risk applications, with assessments required before they are put on the market as well as throughout their lifecycle, while giving less risky technologies the flexibility to develop freely, will encourage healthy – and, more importantly, safe – AI innovation.
