Generative AI: How to manage banking risks



Contributed

This content is contributed or sourced from third parties but has been subject to Finextra editorial review.

Like all businesses, banks are exploring how to use GenAI (generative artificial intelligence) effectively and safely. However, banks are different from most businesses. Even where AI-specific rules have yet to enter the scene, banks must make sure that their use of the technology complies with existing rules for financial services. Taking a risk-centric approach today will give banks a head-start when new AI standards arrive.

The rise of AI in banking

AI continues to challenge the way that banks think about their business. When it comes to GenAI, banks can build on their strong track record of adopting earlier forms of machine learning. To date, these applications have tended to support back-office functions, such as fraud detection, market abuse surveillance, and regulatory reporting, rather than customer-facing roles.

GenAI has accelerated both adoption and excitement. The ability to engage with the AI using natural language opens up new opportunities across more areas of banks’ businesses. For example, firms could use it:

  • As an internal efficiency tool, such as allowing front-office staff to source information from the bank’s various compliance policies (a simple retrieval sketch follows this list);
  • For customer engagement, for example to provide more responsive (albeit less predictable) customer support or, with appropriate safeguards, advice; and
  • To interrogate large volumes of data to inform front-line decision-making, such as creditworthiness assessments.
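
To make the first of these concrete, the sketch below shows one way a retrieval step over policy documents might work before any language model is involved. The policy passages, the TF-IDF approach, and the scikit-learn usage are illustrative assumptions, not a description of any particular bank’s tooling.

  # Illustrative sketch only: surface the most relevant compliance-policy
  # passage for a staff question. Policy text and the TF-IDF retrieval
  # approach are assumptions made for this example.
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.metrics.pairwise import cosine_similarity

  policy_passages = [
      "Gifts and hospitality above GBP 100 must be pre-approved by Compliance.",
      "Personal account dealing requires pre-clearance for front-office staff.",
      "Suspicious transactions must be escalated to the MLRO within 24 hours.",
  ]

  vectoriser = TfidfVectorizer()
  policy_matrix = vectoriser.fit_transform(policy_passages)

  def top_passage(question: str) -> str:
      """Return the policy passage most similar to the staff question."""
      scores = cosine_similarity(vectoriser.transform([question]), policy_matrix)[0]
      return policy_passages[scores.argmax()]

  print(top_passage("Do I need approval before accepting client hospitality?"))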

In some cases, AI can help reduce a bank’s exposure to legal risks. Suppose an AI system is set up to monitor customers’ transactions and identify potential payment fraud. This could help prevent the customer from becoming a fraud victim. It could also help the bank by reducing its potential liability to reimburse victims of fraud.
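
As a rough illustration of the monitoring idea, the sketch below trains a simple anomaly detector on a customer’s past payments and flags unusual ones for human review. The features, training data, threshold, and use of scikit-learn’s IsolationForest are assumptions made for the example rather than how any real fraud engine works.

  # Illustrative sketch only: an anomaly detector flagging unusual payments
  # for human review. Features, training data and thresholds are assumptions.
  import numpy as np
  from sklearn.ensemble import IsolationForest

  # Hypothetical features per payment: amount, hour of day, payee seen before (1/0).
  past_payments = np.array([
      [25.0, 9, 1], [40.0, 13, 1], [12.5, 18, 1], [60.0, 11, 1],
      [33.0, 10, 1], [18.0, 20, 1], [55.0, 15, 1], [22.0, 12, 1],
  ])

  # Learn the customer's normal behaviour; contamination is an assumed tuning value.
  model = IsolationForest(contamination=0.1, random_state=0).fit(past_payments)

  def hold_for_review(amount: float, hour: int, known_payee: int) -> bool:
      """Return True if a payment looks anomalous and should be checked with the customer."""
      return model.predict([[amount, hour, known_payee]])[0] == -1

  # A large, late-night payment to a new payee is likely to be held.
  print(hold_for_review(950.0, 3, 0))
  print(hold_for_review(30.0, 12, 1))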

These opportunities are not, however, risk-free. GenAI may introduce new risks and amplify existing ones. For example, the susceptibility of large language models to ‘hallucinate’ calls into question their reliability as a source of information. Inappropriate processing of customer information could breach data protection or secrecy rules. Intellectual property may be misused.

Applying bank regulation to AI

Given the risks, banks must carefully consider the regulation that applies to them when they put AI into their business. Financial regulators take a ‘technology-neutral’ approach to their rulebooks. This means that they do not change how regulations apply based solely on underlying forms of technology. For example, banks should make sure that financial advice is suitable for the customer whether that advice is informed by a human or computer, or a combination of the two.

High-level rules accommodate AI most easily. For example, UK banks must treat customers fairly and communicate with them in a way which is clear, fair and not misleading. These rules are therefore relevant to how transparent firms are about the way they apply AI in their business. Firms should tread particularly carefully where AI outputs could negatively affect customers, such as when assessing creditworthiness.

More targeted regulations are also relevant. For example, banks need to manage operational risks. As more rules emerge which aim to build firms’ resilience to technology-related incidents, banks must prepare for disruption to their AI systems. More rules will attach to arrangements where banks rely on third parties to support important parts of their business.

Individuals should also consider their regulatory responsibilities. The outputs of AI systems may not be explainable as a function of their inputs. Regulators will expect senior managers to understand the associated risks and to have put in place controls to manage them.

AI regulation is on the march

Policymakers are responding to AI-specific risks by introducing rules and guidance on its development and use. For example:

  • EU AI Act - The EU is introducing an extensive regulatory and liability regime for AI. The requirements of the AI Act, which will be phased in over the next two years, focus on transparency, accountability, and human oversight. The most onerous obligations apply to high-risk use cases, such as creditworthiness assessment and credit scoring. These rules also apply to employment-related use cases, such as the monitoring and evaluation of employees. A separate set of rules will apply to banks’ use of GenAI.
  • US sector-led approach - President Biden’s 2023 Executive Order on AI directs a wide range of sectoral regulators to develop tools on AI safety. Various individual states have taken steps to regulate some forms of AI use through AI-specific laws within their jurisdictions, such as restrictions on AI-driven hiring tools in New York. Pressure to regulate AI at a federal level is increasing, but these efforts remain at an early stage.
  • UK principles-led approach - The UK government intends to follow a lighter-touch, industry-led approach. The government has tasked regulators, including the Bank of England and the Financial Conduct Authority, with developing tailored approaches that suit the way AI is being used in their sectors. These will be informed by five overarching principles, such as appropriate transparency, explainability, and fairness.

Focus on changes to risk management

The advent of AI regulation should not deter banks. Many themes in the emerging AI rules and guidance echo standards under financial regulation. These include making sure that there are robust governance arrangements and consistent lines of responsibility around AI systems, that third-party risks are monitored and managed, and that customers are protected from harm.

Some of the detail of AI-specific rules will also be familiar to banks. Incoming AI standards will require businesses to assess risks, maintain more policies, keep more records, run more audits, and report to regulators. Banks’ extensive risk and compliance processes mean that they are well-placed to absorb this additional layer of regulation.

The challenge for banks is to identify the gap between today’s ‘good enough’ and tomorrow’s ‘best practice’. The range of potential use cases means that AI engages a wide spectrum of compliance areas cutting across various functions in the bank. An integrated compliance programme would help draw these strands together, preferably in a way which is specific enough to ensure a consistent approach to AI adoption but also flexible enough to deal with the quirks of different business lines or use cases.

Banks may find it helpful to establish an AI steering committee to centralise this process. The responsibilities of this AI SteerCo could include integrating AI into existing compliance arrangements, for example by reviewing the bank’s business-line policy documents, governance and oversight structures, and third-party risk management framework. It could develop protocols for staff interacting with, or developing, AI tools. It could also look ahead to changes in technology, risk, and regulation.

Banks have already started on their AI compliance journey. Ensuring they comply with existing financial regulation is the first step towards meeting the additional challenge of AI regulation. A holistic approach which identifies and manages risks to the bank, its customers, and the wider financial system is one that is likely to stand the test of time.

