How to prepare for the EU AI Act


Contributed

This content is contributed or sourced from third parties but has been subject to Finextra editorial review.

The European Union’s AI Act has formally been signed by the presidents of its Parliament and Council. Imminently, the regulation will be published in the EU’s Official Journal – 20 days after which, the law will enter into force. For financial institutions, there will be an implementation period of just two years. So, how should banks prepare for the EU’s seminal AI Act?

Artificial intelligence (AI), and the risks it poses to organisations, is more prominent in the headlines than ever. In May, days after the launch of OpenAI’s latest model, GPT-4o, Jan Leike – a key safety researcher at the firm – quit over development concerns, claiming OpenAI was prioritising the release of “shiny products” over safety: “Building smarter-than-human machines is an inherently dangerous endeavour. OpenAI is shouldering an enormous responsibility on behalf of all of humanity.”

The latest Act from the EU is one part of the international response to such risks, seeking to enforce a balanced developmental environment for AI and facilitate benefits like improved healthcare, safer transport, more efficient production, and sustainable energy.

Artificial intelligence poses risks and benefits to the financial sector too, making it crunch time for banks. 

For those institutions yet to start their compliance journey, here is Finextra’s two-year plan to help navigate the implementation period and reap the rewards of this burgeoning technology:

2024

H2:

In early H2, members of the organisation’s C-suite should have familiarised themselves with all the stipulations in the AI Act and identified key areas within the business that need to be updated to ensure compliance. From here, a plan of action should be drawn up, ensuring the relevant stakeholders within the business – such as IT, operations, managers, and staff – are aligned.

Perhaps kick off proceedings by filling out the EU’s AI Act Compliance Checker to gauge your organisation’s readiness. Appointing a third-party consultant or advisory team may also be worth considering.

By the final quarter of 2024, things will be heating up and the action plan should be fully underway. The first milestone to plan for is the ban on all AI systems that pose an unacceptable risk, such as social scoring or manipulative systems. Note that these are distinct from high-risk systems – those applied in areas such as biometrics, critical infrastructure, education, employment, and select essential private services – which are not banned, but face strict requirements. See the available literature for more information.

Either way, ensure your organisation is on top of the AI products it offers and how they operate. This prohibition comes into play six months after the Act enters into force – most likely around the end of 2024.

As 2024 comes to a close, it is time to consider the areas that still need addressing. How are updates progressing? Are there any issues? How will they be overcome? Do third-party providers need to be pulled in to support on deadlines?

2025

H1:

As is the case with any major project, progress can lose pace here – but, as an organisation, you may well be over the hump. Some items will have been struck off the checklist; others now need laser focus. Keep checking whether any delegated legislation, standards, or guidance has been published by the Union – or its associated bodies – to assist with the process. A good place to start is by subscribing to the EU AI Act newsletter.

By the first quarter of 2025 (or nine months after the Act enters into force), the Commission and the Member States will encourage the drawing up of codes of conduct – by individual providers of AI systems or by organisations representing them – intended to foster the voluntary application of some of the Act’s requirements. Accordingly, a bank should ensure it has a thorough understanding of all its providers, and that its use of their products meets protocol. See the Act’s codes-of-conduct provisions (Article 95 in the final text) for more information.

H2:

Welcome to the marathon of the middle. It is only a matter of months until the bulk of the Act becomes applicable. If the looming deadline weren’t enough to expedite progress toward the finish line, it is worth pointing out that the AI Act provides for administrative fines of up to “6% of the annual worldwide turnover or €30 million, whichever is higher,” for non-compliance.
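The penalty cap quoted above is a “whichever is higher” rule, so the exposure scales with the institution’s size. A minimal sketch of that arithmetic, using only the figures quoted in the draft text (the function name and sample turnover are illustrative):

```python
def max_fine(annual_worldwide_turnover_eur: float) -> float:
    """Upper bound on an administrative fine under the quoted draft text:
    the higher of 6% of annual worldwide turnover or EUR 30 million."""
    return max(0.06 * annual_worldwide_turnover_eur, 30_000_000)

# For a bank with EUR 1bn worldwide turnover, the 6% prong dominates.
print(max_fine(1_000_000_000))  # 60000000.0
```

In other words, the flat €30 million floor only binds for institutions with annual worldwide turnover below €500 million; above that, the percentage prong governs.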

By this point, the checklist put together at the start by your organisation’s C-suite should be nigh-on complete. This is the time for rounding off the remaining technicalities and ensuring staff at large understand the regulation and how it may impact their daily roles in a practical sense. Compliance is, as ever, a team effort.

One big milestone to watch out for at this juncture – 12 months after entry into force – is that governance rules and obligations for general-purpose AI (GPAI) become applicable. Unlike a single-task model such as Dall-E or Sora, a GPAI system is task-agnostic: highly complex, and capable of executing all kinds of functions. The stipulations that control the risk of such systems include meeting specific transparency requirements, complying with EU copyright law, and publishing detailed summaries of the content used for training. See the Act’s GPAI provisions (Chapter V) for more detail.

There may also be an annual Commission review at this stage, with possible amendments to the list of prohibitions. Stay in the loop here.

2026

H1:

The start of this half of 2026 is your last chance before the deadline to sense-check all the recent IT and operational updates. Has the plan been completed in full? How effectively was it rolled out, and have all the stipulations been met? Perhaps there are lessons from this process that could streamline future compliance efforts.

At 18 months after entry into force, the Commission will issue a template for high-risk AI providers’ post-market monitoring plans (Article 72). How high-risk systems should be classified is laid out separately, in Article 6.

As we reach June 2026 – 24 months after entry into force – the EU AI Act will apply in law, including Annex III (the list of high-risk AI system categories), with only specific exceptions. At the 36-month mark, the Act will apply in full across all risk categories, including high-risk systems embedded in products covered by existing EU harmonisation legislation.
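All of the deadlines in this plan are offsets from a single date: entry into force, which itself falls 20 days after publication in the Official Journal. The milestone dates can be sketched as below – a minimal example using only the Python standard library, with a hypothetical publication date (the actual date depends on when the Act appears in the Official Journal):

```python
from datetime import date, timedelta
import calendar

def add_months(d: date, months: int) -> date:
    """Return the date `months` calendar months after `d`, clamping the
    day to the end of the month where necessary (e.g. 31 Jan + 1 month)."""
    month_index = d.month - 1 + months
    year = d.year + month_index // 12
    month = month_index % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

# Hypothetical Official Journal publication date for illustration only;
# the Act enters into force 20 days after publication.
publication = date(2024, 6, 12)
entry_into_force = publication + timedelta(days=20)

# Milestone offsets described in this two-year plan.
milestones = {
    "Prohibitions on unacceptable-risk systems (6 months)": 6,
    "Codes of conduct encouraged (9 months)": 9,
    "GPAI governance obligations (12 months)": 12,
    "Post-market monitoring template (18 months)": 18,
    "General application, incl. Annex III (24 months)": 24,
    "Full application, all risk categories (36 months)": 36,
}

for label, offset in milestones.items():
    print(f"{add_months(entry_into_force, offset).isoformat()}  {label}")
```

Swapping in the real publication date, once known, yields each compliance deadline directly.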

The deadline

Provided your plan has been adhered to over the last two years, June 2026 (the finish line) will not only mean avoiding fines from the EU; it will represent security and opportunity – two gold standards in an uncertain age of innovation.

By preparing for AI so thoroughly, your institution stands to improve its fraud detection measures, strip away cumbersome operational costs, closely manage risk, automate investment, streamline credit decisions, tighten cybersecurity, build realistic chatbots, boost data insights with diagnostic and predictive modelling, and offer hyper-personalisation and super-apps.

