
Transparent and controlled AI-powered client lifecycle management & KYC


Artificial intelligence (AI) is being used throughout digital transformation initiatives and is changing how banks perform client onboarding, anti-money laundering (AML) and know-your-customer (KYC) compliance. For AML in particular, AI systems can mine large volumes of transaction data and identify risk-relevant facts with greater accuracy, reducing manual work and bringing down costs.

What are opaque AI and transparent (or explainable) AI?

Opaque AI uses complex techniques such as neural networks, deep learning, genetic algorithms, and ensemble models. What these methods have in common is that the “logic” behind their predictions and decisions cannot easily be expressed and, in some cases, cannot be explained at all.
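
To make the opacity concrete, here is a toy sketch in Python (using scikit-learn as an assumed library; the data, feature count, and layer sizes are invented for illustration). A neural network’s learned “logic” is spread across many interacting weights, none of which reads as a human-interpretable rule:

```python
# Toy illustration of opacity: a small neural network's parameters
# do not map to a human-readable decision rule. All data here is
# synthetic and the layer sizes are arbitrary.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] * X[:, 1] > 0).astype(int)   # a deliberately nonlinear target

net = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000,
                    random_state=0).fit(X, y)

# The "logic" lives in hundreds of interacting weights; no single
# weight answers "why was this client flagged?"
print([w.shape for w in net.coefs_])       # [(3, 16), (16, 16), (16, 1)]
```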

Transparent AI, on the other hand, uses algorithms such as linear regression and Bayesian learning. These more traditional methods have logic that humans can readily understand. By exposing the key underlying data elements that drive the model, transparent AI enables people to quickly grasp why decisions were made. For example, if age is the top predictor of the next best product to add to an onboarding bundle, that information is easily surfaced.
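
As a rough sketch of how that surfacing might work (again Python with scikit-learn assumed; the feature names and data are hypothetical, constructed so that “age” dominates, and are not drawn from any real onboarding model), a linear model’s coefficients can be ranked directly:

```python
# Minimal sketch of "transparent AI": a logistic regression whose
# coefficients directly expose which inputs drive its predictions.
# Feature names and data are hypothetical, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["age", "account_tenure_years", "num_products"]
X = rng.normal(size=(500, 3))
# Synthetic target built so that "age" dominates on purpose.
y = (2.0 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Rank features by coefficient magnitude to surface the top driver.
for name, coef in sorted(zip(features, model.coef_[0]),
                         key=lambda p: abs(p[1]), reverse=True):
    print(f"{name:>22}: {coef:+.3f}")
```

Because the synthetic features here share the same scale, coefficient magnitude is a fair proxy for influence; with real data the inputs would need standardizing first.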

Why transparent AI in client lifecycle management & KYC?

AI transparency isn’t always required, but in highly regulated areas such as compliance, AML, and KYC, or in circumstances where data privacy and protection are mandated, it can be critical.

For instance, consider the EU General Data Protection Regulation (GDPR), which took effect in May 2018. This regulation gives European consumers specific protections, including the right to ask companies to explain exactly how they reached certain algorithm-based decisions. In such cases, transparent AI enables companies to comply with the regulation more easily.

But the question of when to use opaque AI and when to use transparent AI isn’t only about satisfying regulators. It also raises ethical and moral considerations.

To illustrate, consider a situation where a “black-box” opaque AI engine makes a prediction or decision without being able to explain how it arrived at it. That decision might become the basis of a lawsuit, casting an ethical shadow on the firm. Suppose, for example, that the suit alleges age discrimination: the authorities will want proof that age was not used to drive the decision. The risk, of course, is that opaque AI models cannot explain what drove their decisions, so there may be no solid defense. Another downside of opaque systems arises when they predict something incorrectly, such as an AML false positive. With no ability to explain themselves, it is impossible to show key stakeholders why the mistake was made.

However, transparent AI can provide an explanation if a false positive occurs. Perhaps the explanation is that a key model driver was the individual’s date of birth (DOB), which fell outside a +/- 5-year tolerance range. If, on inspection, stakeholders determine that this is not an acceptable model driver, it can be modified as required, depending on the firm’s risk appetite.
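
Continuing the sketch (Python with scikit-learn assumed; the feature names, including the DOB-tolerance feature, are invented for illustration and do not describe a real KYC model), a transparent linear model can explain a single flagged case by showing each feature’s contribution to the score:

```python
# Per-decision explanation with a transparent linear model: each
# feature's contribution to one alert is coefficient * value, so a
# reviewer can see what drove a specific false positive.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["dob_tolerance_gap_years", "txn_velocity", "country_risk"]
rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))
y = (1.5 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(size=300) > 0).astype(int)
model = LogisticRegression().fit(X, y)

flagged_case = X[0]                              # one alert under review
contributions = model.coef_[0] * flagged_case    # per-feature contribution
for name, c in sorted(zip(features, contributions),
                      key=lambda p: abs(p[1]), reverse=True):
    print(f"{name:>24}: {c:+.3f}")
# If the DOB feature dominates and stakeholders judge it unacceptable,
# it can be capped, re-weighted, or removed and the model retrained.
```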

In summary, to use AI successfully in the digital transformation of client lifecycle management, AML, and KYC, you need an AI-powered system with transparency and the proper controls to provide full auditability and to reduce risks to acceptable levels.

 

