The EU AI Act officially became law yesterday, the world’s first AI legislation, marking the end of a process that began in April 2021.
We have already produced a guide on how to prepare for the AI regulation, as well as background on what the Act was proposing. Now that the AI Act is law, here’s a look at how some fintechs are responding.
EU AI Act already becoming outdated
Steve Bates, chief information security officer at Aurum Solutions, states that the act is a “positive step towards improving safety around use of AI, but legislation isn’t a standalone solution. Many of the act’s provisions don’t come into effect until 2026, and with this technology evolving so rapidly, legislation risks becoming outdated by the time it actually applies to AI developers.”
For Bates, this is most notable in the area of data attribution: “The act does not require AI model developers to provide attribution to the data sources used to build models, leaving many authors of original material unable to assert and monetise their rights on copyrighted material.”
Increasing demands on companies
The legislation will likely place increased demands on companies’ risk management. The act classifies AI systems into four tiers: minimal risk, specific transparency risk, high risk, and unacceptable risk.
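To make those tiers concrete, here is a minimal Python sketch of how a firm might tag an internal inventory of AI systems against them. The tier names come from the act itself; the example systems and their assignments are hypothetical and no substitute for a proper legal classification.

```python
# Illustrative only: the four risk tiers named in the EU AI Act, modelled as
# a simple enum. The example systems and their assignments are hypothetical.
from enum import Enum


class AIActRiskTier(Enum):
    MINIMAL = "minimal risk"
    SPECIFIC_TRANSPARENCY = "specific transparency risk"
    HIGH = "high risk"
    UNACCEPTABLE = "unacceptable risk"


# Hypothetical internal inventory; real classification needs legal analysis.
system_inventory = {
    "spam_filter": AIActRiskTier.MINIMAL,
    "customer_chatbot": AIActRiskTier.SPECIFIC_TRANSPARENCY,
    "credit_scoring_model": AIActRiskTier.HIGH,
}

for name, tier in system_inventory.items():
    print(f"{name}: {tier.value}")
```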
Elena Mora, head of privacy and data protection at MAPFRE, expanded on what this will mean for risk management: “The European Union's new AI Regulation will likely have a considerable influence on companies by demanding a rigorous classification of AI systems
based on their risk level, as well as the mandatory application of risk management systems in the case of high-risk AI systems.
“Its application will be a significant challenge, especially for startups and SMEs, due to the costs derived from the adaptation.”
Mora explained that under the new AI law, companies which develop or use AI systems in the EU will need to:
- Evaluate the risk level of their AI systems in accordance with the criteria established in the regulation.
- Implement appropriate mitigation measures for systems identified as high risk, such as techniques to ensure data quality, mechanisms to correct biases and ways to guarantee transparency and justification of decisions made by AI systems (see the bias-check sketch after this list).
- Ensure ongoing compliance with the GDPR, especially with regard to processing personal data using AI-based systems.
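As an illustration of the bias-correction point above, here is a minimal sketch of one check a firm might run: measuring the demographic parity gap in model outcomes. The record fields, the metric choice and the 0.1 tolerance are all assumptions for illustration; real bias audits are considerably broader.

```python
# Minimal bias check: demographic parity difference across groups.
# Field names and the 0.1 tolerance are hypothetical.
from collections import defaultdict


def demographic_parity_difference(records, group_key, outcome_key):
    """Max gap in positive-outcome rate between any two groups."""
    totals, positives = defaultdict(int), defaultdict(int)
    for row in records:
        group = row[group_key]
        totals[group] += 1
        positives[group] += 1 if row[outcome_key] else 0
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)


decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
]

gap = demographic_parity_difference(decisions, "group", "approved")
if gap > 0.1:  # hypothetical internal tolerance
    print(f"Bias check flagged: parity gap {gap:.2f} exceeds tolerance")
```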
Shaun Hurst, principal regulatory advisor at Smarsh, added: “Banks utilising AI technologies categorised as high-risk must now adhere to stringent regulations focusing on system accuracy, robustness and cybersecurity, including registering in an EU database and maintaining comprehensive documentation to demonstrate adherence to the AI Act.
“For AI applications like facial recognition or summarising internal communications, banks will need to maintain detailed logs of the decision-making process. This includes data inputs, the AI model's decision-making criteria and the rationale behind specific
outcomes.”
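To illustrate the kind of decision log Hurst describes, here is a minimal sketch that appends each AI-assisted outcome, together with its inputs, criteria and rationale, to a JSON-lines file. The field names, the format and the example model are assumptions, not requirements taken from the act.

```python
# Minimal decision-log sketch: capture inputs, criteria and rationale for
# each AI-assisted outcome. Field names and format are assumptions.
import json
from datetime import datetime, timezone


def log_ai_decision(path, model_id, inputs, criteria, outcome, rationale):
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "inputs": inputs,        # data the model received
        "criteria": criteria,    # decision criteria applied
        "outcome": outcome,      # what the system decided
        "rationale": rationale,  # why, for later audit
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")


log_ai_decision(
    "ai_decisions.jsonl",
    model_id="comms-summariser-v2",  # hypothetical model
    inputs={"message_count": 42},
    criteria="flag messages matching internal conduct policy",
    outcome="escalated",
    rationale="3 messages matched restricted-terms list",
)
```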
Hurst stated that while the act aims to ensure accountability and auditability of AI systems for fairness, accuracy, and compliance with privacy regulations, it will mean “financial institutions must assess their capabilities in keeping abreast with these
changes in legislation and whether existing compliance technologies are up to scratch.”
What this means for UK AI
Now that the act has come into force, many UK financial institutions will need to comply with it because of their EU-facing operations.
Jason Raeburn, intellectual property and technology litigation partner at Paul Hastings, commented: “Now that the EU AI Act has come into force, UK tech firms need to be geared up for change. The Act will require businesses to have an in-depth understanding
of its regulatory requirements and be primed and ready for implementation – especially for those aiming to scale up to a global market.”
Raeburn noted that the act has extraterritorial implications, meaning it applies to providers who place AI systems on the market or put them into service within the EU. He said: “Given its ‘extraterritorial effect’, compliance will be mandatory for many UK AI systems, including those with outputs utilised by users in the EU, which means firms will be required to make significant investment in compliance measures (and engineering changes) to avoid hefty penalties.
“Due to the broad scope of the EU’s regulation, UK tech businesses will inevitably face friction as the Act comes into force, especially for those involved in high-risk AI applications.
“The Act’s risk-based approach means that higher-risk AI systems will encounter more rigorous compliance demands, with severe penalties for non-compliance. For UK tech firms, this will likely have a huge impact on operations and strategic planning.”
AI interest growing with the Act
Bob Stark, global head of market strategy at Kyriba, commented that now the act is in place, the question for most CFOs is what it will mean for AI use in their finance teams.
Stark cited a SAP Concur survey, which found 51% of CFOs investing in AI this year, compared with just 15% in August 2023. Despite this, 58% of respondents said they had little understanding of how to use AI in finance.
Stark said: “It’s clear that more CFOs recognise the need to adopt AI to increase financial process automation and improve data-driven decision making. The EU AI Act provides assurance to finance teams that their use of AI is not considered high risk, largely
because in corporate finance, payments, risk management and treasury operations, AI is used to either improve upon human decision making or the results of AI models are reviewed by humans.
“This is the case for popular AI use cases such as payments and fraud detection where AI assists with identifying anomalous payments that fall outside of a company’s payment policy – and for cash forecasting where AI is leveraged to improve predictability
of cash flows.”
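As a rough illustration of that payment-screening pattern, here is a minimal sketch in which a trivial rule-based stand-in for the AI flags payments that fall outside policy and routes them to a human review queue, keeping a person in the loop as Stark describes. The policy limit, country allow-list and payment fields are all hypothetical.

```python
# Minimal payment-screening sketch: flag out-of-policy payments for human
# review rather than blocking them outright. All thresholds are hypothetical.
POLICY_LIMIT = 50_000                    # hypothetical per-payment ceiling
APPROVED_COUNTRIES = {"DE", "FR", "NL"}  # hypothetical allow-list


def review_queue(payments):
    """Return payments a human should review, with the triggering reasons."""
    flagged = []
    for p in payments:
        reasons = []
        if p["amount"] > POLICY_LIMIT:
            reasons.append("amount exceeds policy limit")
        if p["country"] not in APPROVED_COUNTRIES:
            reasons.append("destination outside approved countries")
        if reasons:
            flagged.append((p["id"], reasons))
    return flagged


payments = [
    {"id": "PAY-001", "amount": 12_000, "country": "DE"},
    {"id": "PAY-002", "amount": 80_000, "country": "BR"},
]

for payment_id, reasons in review_queue(payments):
    print(payment_id, "->", "; ".join(reasons))
```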
What is next for EU AI
For Stark, the act has provided a direction for general purpose AI providers to improve the transparency of their models. He said it has also provided encouragement for continued innovation in this area: “The act also offers further validation for CFOs that they should feel comfortable in continuing to adopt AI to improve operational efficiency and improve the impact of data insights into corporate finance decisioning.”
Bates is also positive about areas where AI can enhance efficiency: “Alongside legislative reform, businesses need to focus on educating staff on how to safely use AI, where it should and shouldn’t be deployed, and identifying targeted use cases where it can boost productivity.”
However, Bates emphasises that AI is not always the answer: “AI isn’t a silver bullet for everything. Not every process needs to be overhauled by AI and in some cases, a simple automation process is the better option. All too often, firms are implementing AI solutions just because they want to jump on the bandwagon. Instead, they should think about what problems need to be solved, and how to do that in the most efficient way.”