The EU AI Act and Financial Services: How are Financial Institutions impacted?

On August 1, 2024, the European Union’s AI Act came into force as the world’s first comprehensive legal framework regulating artificial intelligence. Designed to balance innovation and risk, the regulation introduces a series of staggered compliance deadlines, with most provisions becoming fully applicable by August 2026. Prohibitions on certain AI practices, however, such as real-time biometric surveillance in public spaces, apply only six months after entry into force, from February 2025.

Fines for non-compliance will also be severe, reaching up to 7% of global annual turnover for violations involving banned AI applications.

The AI Act’s compliance obligations are still evolving, but organizations must start preparing now. The act introduces risk-based classification for AI systems and imposes specific requirements on developers and providers, particularly in regulated industries like financial services.

Meanwhile, much has been written about this regulation. Supporters praise the EU’s proactive stance, emphasizing the need for transparency, security, and ethical standards in one of the most disruptive technologies to date. Critics argue that the regulation comes too soon, as AI is still evolving, and its full potential remains uncertain. Some believe it could place the EU at a competitive disadvantage, creating barriers for startups and smaller firms that lack the resources to comply.

Yet, well-designed regulation can drive long-term competitiveness. Companies that integrate security, privacy, and ethics early may avoid costly retrofitting later as global standards evolve. Additionally, trust-driven industries such as financial services may benefit from greater consumer confidence in AI systems that meet stringent EU standards.

The AI Act categorizes AI systems into five risk levels, each with specific obligations:

  • Unacceptable Risk: AI systems that pose an unacceptable risk are outright banned under the AI Act. This includes AI applications that manipulate human behavior, use real-time remote biometric identification (e.g. facial recognition in public spaces) or implement social scoring (ranking individuals based on personal characteristics, socio-economic status or behavior). AI systems used purely for military, national security or scientific research and development purposes fall outside the scope of the Act altogether.
  • High Risk: AI systems used for biometric and facial recognition, medical applications, education, employment, and certain public sector functions are classified as high-risk. While these applications remain permitted, they must adhere to strict requirements governing both the AI system itself and its provider, including thorough documentation, pre-market conformity assessments and potential regulatory audits.
  • General-Purpose AI (GPAI): This category primarily covers foundation models such as those underlying ChatGPT. GPAI systems must comply with transparency requirements; where the model weights and architecture are released under a free and open-source license, the obligations are reduced to providing a training data summary and a copyright compliance policy. Additionally, high-impact general-purpose AI models must undergo a rigorous evaluation process to assess potential systemic risks.
  • Limited Risk: AI systems with limited risks, such as chatbots and generative AI for images and video, must meet transparency requirements to ensure users are aware they are interacting with AI. The EU’s primary focus for this category is on reducing manipulation risks, for example by requiring that AI-generated or manipulated content such as deepfakes is clearly labeled.
  • Minimal Risk: Most AI systems fall into this category and are encouraged to self-regulate. Examples include AI used in video games and spam filters. Since these applications are considered low risk, they face minimal regulatory requirements under the AI Act.

This tiered approach ensures that regulatory efforts focus on high-impact AI, while allowing low-risk innovation to continue with minimal constraints.
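
To make the tiered approach concrete, here is a minimal Python sketch that models the five tiers described above as an enum and maps a few example use cases onto them. It is only an illustration of the classification logic summarized in this article, not a tool derived from the legal text: the tier names, the example use cases and the one-line obligation summaries are all assumptions based on the list above.

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative risk tiers mirroring the summary above (not the legal text)."""
    UNACCEPTABLE = "unacceptable"    # banned outright
    HIGH = "high"                    # permitted but heavily regulated
    GPAI = "general_purpose"         # foundation models, transparency duties
    LIMITED = "limited"              # transparency obligations only
    MINIMAL = "minimal"              # encouraged to self-regulate


# Hypothetical mapping of example use cases to tiers, based on the list above.
EXAMPLE_CLASSIFICATION = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "creditworthiness_assessment": RiskTier.HIGH,
    "life_insurance_risk_pricing": RiskTier.HIGH,
    "general_purpose_chat_assistant": RiskTier.GPAI,
    "customer_service_chatbot": RiskTier.LIMITED,
    "email_spam_filter": RiskTier.MINIMAL,
}


def obligations(tier: RiskTier) -> str:
    """Rough one-line summary of the obligations attached to each tier."""
    return {
        RiskTier.UNACCEPTABLE: "Prohibited - may not be placed on the EU market.",
        RiskTier.HIGH: "Documentation, conformity assessment, ongoing oversight.",
        RiskTier.GPAI: "Transparency duties; extra evaluation if high-impact.",
        RiskTier.LIMITED: "Disclose to users that they are interacting with AI.",
        RiskTier.MINIMAL: "No specific obligations; voluntary codes of conduct.",
    }[tier]


if __name__ == "__main__":
    for use_case, tier in EXAMPLE_CLASSIFICATION.items():
        print(f"{use_case:32s} {tier.value:16s} {obligations(tier)}")
```

Even a toy mapping like this helps frame the compliance steps discussed later: once an application is tagged with a tier, the obligations attached to it follow largely mechanically.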

Financial services have been identified as one of the sectors where AI could have the most significant impact. The regulation defines two high-risk AI use cases in finance:

  • Creditworthiness assessments (AI models used to approve/reject loans)
  • AI-driven risk assessment & pricing for life and health insurance

AI-powered fraud detection, money laundering prevention, customer due diligence, credit scoring, algorithmic trading, investment optimization, insurance underwriting, and robo-advisors all fall under the AI Act. These applications will now require greater transparency, bias mitigation and regulatory oversight.

The AI Act also references existing EU financial services laws, particularly those covering internal governance and risk management. These laws will continue to apply to financial institutions using AI. EU financial regulators will oversee AI Act implementation within financial services, including market surveillance and enforcement.

One of the biggest challenges for financial institutions is proving compliance with the AI Act’s rigorous standards, particularly in areas like transparency, fairness, accountability, and oversight. For large financial institutions, the number of AI systems could be in the hundreds. Those that developed or deployed AI before the AI Act came into effect will need to reassess whether these systems comply with the new regulations. Consumer protection provisions in the Act may also require modifications to existing systems.

Additionally, financial firms must determine their role under the AI Act:

  • Providers: Those developing AI systems must ensure compliance from the outset.
  • Deployers: Those using third-party AI solutions must verify compliance and manage risks related to misuse or malfunction.

To navigate this, financial institutions should:

  • Identify AI Systems – Catalog all AI applications and classify them under the AI Act’s risk framework (see the inventory sketch after this list).
  • Assess Impact – Map AI Act requirements to existing policies (e.g. Model Risk Management, Data Protection, Third-Party Risk Management).
  • Strengthen Documentation – Ensure AI models have transparent training data records, bias assessments, and regulatory compliance reports.
  • Review Third-Party AI Systems – Both providers and deployers share responsibility for compliance. Institutions customizing third-party AI must evidence additional controls.
  • Enhance Governance – Align AI oversight with EU financial regulations, ensuring risk management, transparency, and fairness requirements are met.
  • Communicate with Customers – Inform clients when high-risk AI systems affect their data or financial decisions.
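
As a rough illustration of the first two steps, the sketch below shows what a minimal AI system inventory could look like in Python. The record structure, the field names (risk_tier, role, open_gaps and so on) and the example credit-scoring entry are hypothetical and are not prescribed by the AI Act; they simply show one way to catalog AI applications and surface the high-risk systems that still have open compliance gaps.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class AISystemRecord:
    """One entry in a hypothetical AI inventory; all field names are illustrative."""
    name: str
    business_use: str
    risk_tier: str                          # e.g. "high", "limited", "minimal"
    role: str                               # "provider" or "deployer" under the Act
    third_party_vendor: Optional[str] = None
    existing_controls: List[str] = field(default_factory=list)
    open_gaps: List[str] = field(default_factory=list)


def high_risk_gaps(inventory: List[AISystemRecord]) -> List[AISystemRecord]:
    """Return high-risk systems that still have outstanding compliance gaps."""
    return [s for s in inventory if s.risk_tier == "high" and s.open_gaps]


# Illustrative usage: a third-party credit-scoring model classified as high-risk.
inventory = [
    AISystemRecord(
        name="retail-credit-scoring-v3",
        business_use="Creditworthiness assessment for consumer loans",
        risk_tier="high",
        role="deployer",
        third_party_vendor="ExampleVendor Ltd",
        existing_controls=["model risk management review", "annual bias testing"],
        open_gaps=["training data documentation", "customer-facing AI disclosure"],
    ),
]

for system in high_risk_gaps(inventory):
    print(f"{system.name}: outstanding items -> {', '.join(system.open_gaps)}")
```

In practice such an inventory would more likely live in an existing model risk management or GRC platform than in standalone code, but even a lightweight register along these lines makes the mapping of AI Act requirements to existing policies much easier to evidence.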

While the AI Act introduces new compliance burdens, it also presents opportunities. Financial institutions that embrace transparency and risk management early may gain a competitive edge as global AI regulations evolve. By taking proactive steps now, financial services firms can turn compliance into an advantage—ensuring trust, efficiency, and resilience in an AI-driven future.

For more insights, visit my blog at https://bankloch.blogspot.com
