Artificial intelligence has evolved rapidly over the last couple of decades. Yet while AI innovation and adoption offer numerous benefits, the technology has so far remained largely unregulated. This is about to change as the European Commission presses on with the EU AI Act. As the first major piece of AI regulation worldwide, the proposed framework aims to ensure that AI systems in the EU are safe and transparent and respect fundamental rights.
When was the EU AI Act proposed?
The EU AI Act was first proposed by the European Commission in April 2021. Born out of a growing understanding of the risks that AI can pose, the EU AI Act is designed to address problems such as the black box problem: the situation in which it is unclear how an AI system processes data and arrives at its predictions or decisions. This can lead to unwanted outcomes, biases and discrimination that are hard to detect and correct when the underlying process cannot be understood or traced. The EU AI Act will be the first-ever comprehensive legal framework on AI and will position Europe as a leading player in the regulation of artificial intelligence.
EU AI Act summary
The EU AI Act takes a risk-based approach and aims to create the conditions for the development and use of trustworthy AI systems. The regulatory framework distinguishes between different levels of risk, identifies high-risk areas for the use of AI and outlines obligations for the deployment of high-risk AI systems.
The EU AI Act identifies four different levels of risk:
[Figure: the four risk levels of the EU AI Act. Source: ISACA]
- Prohibited: AI systems that qualify as an unacceptable risk will be banned under the EU AI Act. This includes cognitive behavioural manipulation, real-time and remote biometric identification systems, and social scoring systems that pose a threat to people.
- Regulated high risk: AI systems affecting the safety or fundamental rights of people fall under the high-risk category and require assessment before being put on the market as well as throughout their lifecycle. This includes AI systems
in areas such as biometric identification, law enforcement or migration.
- Transparent limited risk: This refers to AI systems with specific transparency obligations, such as AI systems that generate or manipulate image, audio or video content. ChatGPT would fall under this category and would need to comply with
transparency requirements.
- Minimal risk: AI systems that pose minimal or no risk can be used and placed on the market freely, without additional obligations. Spam filters and AI-enabled video games fall under this category. (A short code sketch after this list illustrates the four tiers.)
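To make the tiering concrete, here is a minimal Python sketch that models the four tiers and the example use cases named above. It is purely illustrative: the enum names, the mapping and the obligation summaries are our shorthand for the article's descriptions, not an official classification from the Act itself, which defines these categories in its annexes rather than in code.

```python
from enum import Enum


class RiskLevel(Enum):
    """The four risk tiers described in the EU AI Act proposal."""
    UNACCEPTABLE = "prohibited"           # banned outright
    HIGH = "regulated high risk"          # assessment before and after market entry
    LIMITED = "transparent limited risk"  # transparency obligations only
    MINIMAL = "minimal risk"              # no additional obligations


# Hypothetical mapping of the example use cases named in this article
# to their tiers; illustrative only, not an official classification.
EXAMPLE_USE_CASES = {
    "social scoring": RiskLevel.UNACCEPTABLE,
    "real-time remote biometric identification": RiskLevel.UNACCEPTABLE,
    "credit scoring": RiskLevel.HIGH,
    "law enforcement": RiskLevel.HIGH,
    "generative chatbot": RiskLevel.LIMITED,
    "spam filter": RiskLevel.MINIMAL,
}


def obligations(level: RiskLevel) -> str:
    """Summarise, very roughly, what each tier implies for a provider."""
    return {
        RiskLevel.UNACCEPTABLE: "prohibited from the EU market",
        RiskLevel.HIGH: "assessment before market entry and throughout the lifecycle",
        RiskLevel.LIMITED: "transparency requirements (e.g. disclosing AI-generated content)",
        RiskLevel.MINIMAL: "free to use and place on the market",
    }[level]


if __name__ == "__main__":
    for use_case, level in EXAMPLE_USE_CASES.items():
        print(f"{use_case}: {level.value} -> {obligations(level)}")
```

The point of the sketch is simply that obligations attach to the tier, not to the individual system: once a use case is classified, its compliance burden follows from the category it lands in.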
Who does the EU AI Act apply to?
The EU AI Act will apply to any company that uses AI systems, brings them to market or puts them into service in the EU. The regulatory framework covers providers, deployers, importers, distributors and product manufacturers alike. This means that it can also apply to manufacturers or providers outside the EU (such as in the UK) if the output produced by the AI system is intended to be used in the EU.
What does the EU AI Act mean for banks and financial institutions?
At the time of writing, finance as a sector is not listed among the high-risk areas in the current EU AI Act proposal. The financial use cases that are mentioned are limited to credit scoring models and risk assessment tools, which would fall under the high-risk category because they determine a person's access to financial resources. The direct impact of the EU AI Act on banks and financial institutions will become clearer once the final proposal has been passed.
When will the EU AI Act come into force?
In June 2023 the European Parliament adopted its negotiating position on the EU AI Act, which is now being negotiated with EU member states in the Council. After the expected amendments, the final proposal will likely be published in late 2023 or early 2024. Once the EU AI Act is finalised and comes into force, it will enter a transitional period in which standards will be mandated and developed and the governance structures will be set up to become operational. The duration of the transitional period will be announced once the proposal has been finalised.