AI validation and governance: How banks are securing their AI models

Contributed

This content is contributed or sourced from third parties but has been subject to Finextra editorial review.

Artificial intelligence (AI) and generative AI have become some of the biggest topics in banking over the last two years. From the launch of OpenAI’s ChatGPT to the EU AI Act coming into effect, things are heating up.

Banks are becoming more comfortable with the use cases AI offers them, which makes two things core to AI’s place in financial infrastructure: ensuring models are properly validated (doing what they are supposed to do) and properly governed (with safeguards in place to keep the AI ethical and safe).

Data protection and bias remain major worries as banks start to use more AI. We spoke with NatWest and ING about what they are doing to protect themselves, and with NayaOne about the concerns its customers are raising.

How are banks using or planning to use AI?

Karan Jain, CEO, NayaOne, told me: “Across our client base of banks and insurers we are seeing every client experimenting with GenAI and AI. The Evident AI Index report demonstrates how much focus and resource firms are putting into this area.”

The Evident AI Index shows the differing maturity of AI within banking, ranking JP Morgan at the top as of writing. The majority of financial institutions have already begun some sort of investigation into how they can use AI; fewer have begun actually using it.

NatWest is one bank that has publicly started using AI, announcing last November that its chatbot Cora had been enhanced with generative AI to give conversations a more natural feel.

Graham Smith, head of data science and innovation, NatWest Group commented: “Like many institutions, we’ve been on a journey of increasing our use of AI for many years, but the more recent emergence of Generative AI (LLMs) has opened a range of new opportunities.

“Supporting our customers is our number one priority, so we’re making use of AI where we think it can help us to significantly improve our products and service offering to customers.”

Smith reported that last year Cora handled 10.8 million retail banking conversations, up by around 400,000 on 2022, and that almost half of these conversations required no human intervention.

Bahadir Yilmaz, chief analytics officer, ING, said the bank is also using generative AI in its communications with customers through chatbots, and is “experimenting” with its use in marketing. He said: “We are sending emails that are going to clients that are personalised with AI.”

Smith described some other uses they are looking at: “AI can be used in a whole spectrum of different places within a bank. At one end, AI is becoming available at everyone’s fingertips to make everyday tasks more efficient, whether that’s through summarising emails, or allowing developers to code more efficiently.

“At the other end of the spectrum, specific processes have been made more efficient through incremental and transformational uses of AI, including Generative AI, such as summarising calls with customers or better detecting fraud earlier in the fraud lifecycle.”

Internal efficiency is thus a popular use of AI within banking processes. Yilmaz stated that ING is also using AI across functions such as KYC, AML, and software engineering.

Smith commented: “We are actively experimenting in a number of areas using emerging AI technologies, including giving our staff the ability to have conversational interactions to access our internal corporate knowledge assets, and by working on the next generation of our Cora chatbot to personalise interactions even further.”
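
Smith did not describe how the conversational knowledge tool is built, but a common way to implement this kind of assistant is retrieval-augmented generation (RAG), where the model answers from approved internal documents rather than from memory. The sketch below is purely illustrative and assumes hypothetical helpers (embed, vector_search, call_llm) standing in for whatever embedding model, vector store, and LLM a bank actually deploys:

```python
# Illustrative RAG sketch only: NatWest has not disclosed its architecture.
# embed(), vector_search(), and call_llm() are hypothetical stand-ins.

def embed(text: str) -> list[float]:
    """Turn text into a vector; in practice an embedding model API call."""
    raise NotImplementedError

def vector_search(query_vector: list[float], top_k: int = 3) -> list[str]:
    """Return the top_k most similar internal documents from a vector store."""
    raise NotImplementedError

def call_llm(prompt: str) -> str:
    """Send a prompt to a large language model and return its reply."""
    raise NotImplementedError

def answer_from_knowledge_base(question: str) -> str:
    # 1. Retrieve the internal documents most relevant to the question.
    docs = vector_search(embed(question))
    context = "\n\n".join(docs)
    # 2. Ask the LLM to answer using only the retrieved context, which
    #    grounds the reply in approved internal sources.
    prompt = (
        "Answer the question using only the context below. "
        "If the context is insufficient, say so.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)
```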

Smith also told me that around 10% of NatWest’s analytical models are already using AI, increasing efficiency while reducing costs.

What are banks worried about for AI?

As banks increasingly adopt AI, new and greater risks will surface. Smith highlighted that the increased use of AI can impact customer perceptions: “We’re very much alive to the challenges AI presents to our customers and wider society, whether that be through potential bias, gaining customer trust through transparency and data integrity, or the possibility of increasingly sophisticated fraud or scams.”

Yilmaz emphasised that the biggest risks start with the “foundation model risks, so bias or hallucination. AI systems are really high recall models, so they answer everything, and that means some of the answers they give are not necessarily correct.”

Elaborating on this, Yilmaz gave the example of a model giving wrong or slightly wrong answers to clients, but also of it wrongly assessing a file for money laundering risk.

Yilmaz also stated that data processing, data compliance, and data protection risks are major concerns. Jain noted their clients expressing similar worries over “how to access tools in a way that meets compliance and risk processes, to test and compare vendors (or how to develop compliance and risk processes).”

How are banks protecting data and tackling bias?

Bias and data integrity within AI are important issues which banks should be concerned about as they implement different AI and generative AI models into their infrastructure.

For both banks we spoke to, there was an emphasis on guardrails within AI models to protect against some of the risks. Yilmaz said: “You have to have the organisational resilience and also guardrails around the models to make sure that they are generating results that are useful for you.”

Smith added to this sentiment: “As the pace of AI advancement evolves alongside the development of formal regulations, having our own guardrails is crucial when it comes to managing the risks of AI.”

He further emphasised: “We’re working to embed data ethics and responsible AI principles across the bank and in our employee training programmes, ensuring we have enhanced controls in place to manage any risks.”

Banks are using different processes to ensure this. Smith stated that for many of the emerging generative AI use cases they are looking at, they have chosen to keep a human in the loop.
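
In practice, keeping a human in the loop often means gating the AI’s output on a confidence score or a review queue. The sketch below illustrates the general pattern only; generate_with_confidence, queue_for_human_review, and the 0.9 threshold are hypothetical examples, not NatWest’s implementation:

```python
# Illustrative human-in-the-loop gate: AI drafts a response, but a person
# approves anything the model is not confident about. All names and the
# threshold are hypothetical, not figures from NatWest.

CONFIDENCE_THRESHOLD = 0.9

def generate_with_confidence(request: str) -> tuple[str, float]:
    """Return a drafted reply and a model confidence score in [0, 1]."""
    raise NotImplementedError

def queue_for_human_review(request: str, draft: str) -> str:
    """Hand the draft to a human agent, who edits or approves it."""
    raise NotImplementedError

def handle_customer_request(request: str) -> str:
    draft, confidence = generate_with_confidence(request)
    if confidence >= CONFIDENCE_THRESHOLD:
        return draft  # confident enough to send automatically
    return queue_for_human_review(request, draft)  # a human decides
```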

Yilmaz described ING’s AI risk management approach. The bank has identified 138 risks in generative AI, spanning areas such as data privacy, data leakage, information security, decision risks, model risks, and ethics, which it puts through a 20-step process before any AI goes into production.
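
The exact 138 risks and 20 steps are internal to ING, but the shape of such a gate can be expressed simply: every identified risk must be assessed, mitigated, and signed off before a model ships. The sketch below illustrates that idea only; the RiskAssessment fields are assumptions, not ING’s tooling:

```python
# Illustrative pre-production risk gate in the spirit of the process
# Yilmaz describes. The categories are those he names; everything else
# is an assumption for the sake of the example.

from dataclasses import dataclass

RISK_CATEGORIES = [
    "data privacy", "data leakage", "information security",
    "decision risk", "model risk", "ethics",
]

@dataclass
class RiskAssessment:
    risk_id: str
    category: str          # one of RISK_CATEGORIES
    mitigated: bool        # has a mitigation been put in place?
    sign_off: str | None   # reviewer who approved, if any

def ready_for_production(assessments: list[RiskAssessment]) -> bool:
    """A model may ship only when every risk is mitigated and signed off."""
    return bool(assessments) and all(
        a.mitigated and a.sign_off is not None for a in assessments
    )
```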

He stated this process is just to identify the risks, but mitigating them can be difficult. In some cases two-factor authentication can solve the problem, but issues like bias are more complex. He commented: “It is an effort right now, a research effort, to identify our exposure, but also identifying the mitigating factors. So you have to start getting really creative. The number one tool that we have right now is the guardrails and additional LLMs verifying the results of the first LLM.”
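
The verifier pattern Yilmaz describes, one LLM checking another’s output before it reaches a customer, can be sketched as follows. This is a minimal illustration, not ING’s system; call_llm is a hypothetical stand-in for a real model API, and the prompts and fallback message are invented examples:

```python
# Illustrative "second LLM verifies the first" guardrail. call_llm() is
# a hypothetical stand-in; prompts and the fallback are examples only.

def call_llm(prompt: str) -> str:
    """Send a prompt to a large language model and return its reply."""
    raise NotImplementedError

def guarded_answer(question: str) -> str:
    # First model drafts an answer.
    draft = call_llm(f"Answer the customer question: {question}")
    # Second model acts as a verifier, checking the draft for factual
    # errors, policy violations, and unsupported claims.
    verdict = call_llm(
        "You are a reviewer. Reply PASS if the answer below is accurate, "
        "safe, and appropriate for a bank customer; otherwise reply FAIL.\n"
        f"Question: {question}\nAnswer: {draft}"
    )
    if verdict.strip().upper().startswith("PASS"):
        return draft
    # Fail closed: route to a human rather than send a suspect answer.
    return "I'd like to connect you with a colleague who can help."
```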

Jain commented that in their experience, “firms who are doing well think about it holistically, and have a risk and controls process that scales, and is right-sized for experimentation vs roll-out. What sets the leaders apart is the ability to experiment quickly to find the right tool and education. Firms aren’t always public about it, but many are implementing external sandbox platforms so they can cut their proof of concept times down from over 12 months to weeks, and once they’ve identified the right partner they can proceed with confidence.”

How should banks move forward with AI?

As with any new technology in banking, the best way to start embedding AI in financial infrastructure seems to be with caution and testing, and banks appear to understand this.

Smith commented: “Underpinning any use of AI are the fundamentals of keeping our customers’ data safe and using it appropriately for the intended purpose.”

Yilmaz’s account of ING’s rigorous risk-identification process shows how seriously the bank is taking the issue too.

Yet, as more regulations relating to AI and its use start to emerge, financial services firms will need to commit even more resources to ensuring that the AI they use is compliant and ethical; they will need to throw the whole toolbox at this issue.

