Responsible Artificial Intelligence for Anti-Money Laundering: How to Address Bias

Introduction

The eager and rapid adoption of artificial intelligence (A.I.) by financial institutions (F.I.s) may surprise those outside this otherwise cautious industry. However, the industry consensus is clear: intelligent technologies such as A.I. are major factors in the race to differentiate and establish market share. For example, a survey conducted last year by the World Economic Forum found that 85% of F.I.s had implemented A.I. in some form, and 77% of all respondents anticipated that A.I. would be of high or very high overall importance to their businesses within two years.

Compliance departments at F.I.s are poised to benefit from integrating A.I. into their anti-money laundering (AML) programs. Legacy rules-based AML systems, once adequate, have become antiquated: they lack the sophistication needed to recognize the nuances of rapidly evolving criminal patterns and to keep up with new products and consumer behavior. The result is high false positive and low detection rates that sap an F.I.’s resources by forcing it to hire more costly, experienced compliance staff. These shortcomings of rules-based monitoring are why chief compliance officers (CCOs) at F.I.s are turning to intelligent technologies such as A.I. to use data more effectively across their AML programs. But how can they do so responsibly?

Defining Responsible A.I. and its importance

In recent years, the A.I. community has encountered multiple instances of machine learning (ML) models making biased predictions. The ML research community responded with several studies, tools, and metrics to analyze the issue. This led to a growing body of research on fairness, privacy, interpretability, and trustworthiness of AI/ML models under the umbrella term “Responsible A.I.” Responsible A.I. is now broadly discussed – it entered the Gartner Hype Cycle for Artificial Intelligence in 2020.

While exact definitions of Responsible A.I. vary across thought leaders, the common themes are fairness, interpretability, privacy, transparency, inclusiveness, accountability, and security. This blog delves into the element of fairness and what CCOs can do to reduce biased A.I. in their anti-money laundering programs. Ensuring that ML models in AML programs don’t produce biased results is not only ethical; it also helps prevent customer mistrust, lost business opportunities, and reputational damage.

How bias can be introduced into A.I.

A key first step in exploring Responsible A.I. is to understand how bias can creep into the model workflow and at which stages. Let’s start with the data sourcing and preparation stage. A machine learning model relies on accurate, complete training data. However, most F.I.s' business operations were set up before extensive digitalization occurred, so the information needed to train machine learning models is sometimes recorded incorrectly, incompletely, or not at all. This can happen because typically only a small stream of application data, about 5% to 10% of the total, makes it through the pipeline and lands in a data lake for analysis.
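For illustration, here is a minimal sketch of the kind of completeness check a data science team might run before training, assuming the candidate training data has landed in a pandas DataFrame; the file and column names are hypothetical.

```python
# Minimal sketch: profiling candidate training data for missing values before
# model training. The file name and columns are hypothetical.
import pandas as pd

applications = pd.read_csv("applications.csv")  # hypothetical extract from the data lake

# Share of missing values per column; high rates flag fields that were
# recorded incompletely (or not at all) upstream.
missing_rates = applications.isna().mean().sort_values(ascending=False)
print(missing_rates[missing_rates > 0.10])  # columns missing in more than 10% of rows
```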

A machine learning model will also produce biased results when the training data is not representative. The world of computer vision offers an example of a model trained on biased data: Duke University researchers created a model that could generate realistic, high-resolution images of people from a pixelated photo. However, because white people were overrepresented in the data used to train the model, the model did not work well for people of other races and ethnicities.

As has been widely discussed by the A.I. community, however, bias is not just about data. Bias can also creep in during the feature selection stage. Know Your Customer (KYC) is the part of an AML program most susceptible to biased model features, as KYC models attempt to assess individual people. While using attributes such as gender or number of children is plainly unethical, the data science team also needs to be vigilant that seemingly benign attributes like employment status or net worth do not encode systematic bias into the models. The transaction monitoring area of an AML program is less susceptible to biased model features, as it deals mainly with transactional rather than personal data, but bias may still creep in. For example, seemingly innocuous location data (postal code, country, etc.) may serve as a proxy for data that is impermissible to consider, such as race or ethnicity.
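As a sketch of what a proxy check could look like, the snippet below tests whether a candidate feature is strongly associated with a protected attribute. The column names, the file, and the choice of a chi-square test are illustrative assumptions, not a prescribed method; protected attributes would typically sit in a restricted dataset used only for testing.

```python
# Minimal sketch: testing whether a candidate model feature acts as a proxy
# for a protected attribute. Column names are hypothetical; protected
# attributes would typically live in a restricted dataset used only for testing.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.read_csv("kyc_features_with_protected_attributes.csv")  # hypothetical

# Cross-tabulate the candidate feature against the protected attribute and run
# a chi-square test of independence; a strong association suggests the feature
# may encode the protected attribute and deserves a closer look.
table = pd.crosstab(df["postal_code_region"], df["ethnicity"])
chi2, p_value, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p_value:.4f}")
```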

Finally, human biases can influence what action AML professionals take with A.I. model outputs. In an AML program, an analyst or investigator must act on the information provided by the A.I. model: they must decide which alerts to investigate, which alerts to combine into cases, and which to report to authorities. Humans are susceptible to many cognitive biases stemming from wishful thinking, mental shortcuts, societal influence, and even hunger or fatigue, and these biases can unconsciously influence how model predictions and outputs are acted upon.

How can CCOs reduce bias in A.I.?

Ensuring Responsible A.I. for AML is a joint effort across the compliance and data science teams at F.I.s. Here are some things CCOs can do to support Responsible A.I.

Communicate thoroughly with the data science team: As F.I.s delve deeper into Responsible A.I., direct and clear communication between CCOs and the data science team is needed. For example, compliance teams should provide the data science team with guidance on the company values, principles, and regulatory guidelines that ML models should align with. In addition, CCOs should emphasize that evaluation of bias should be included in model success criteria, on par with performance metrics such as false positive/negative and detection rates.

Request auditability: The A.I. development and deployment process should be fully transparent and auditable, tracking precisely who made what modification to what model so that there is always an accurate, complete log of model creation.
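In its simplest form, such an audit trail could be an append-only log of model changes; the sketch below is illustrative (a model registry tool could serve the same purpose), and the field names are assumptions.

```python
# Minimal sketch: an append-only audit record of who changed which model, when,
# and why. Field names and the log format are illustrative.
import json
from datetime import datetime, timezone

def log_model_change(log_path, model_id, version, author, change_description):
    """Append a single, timestamped record of a model modification."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "version": version,
        "author": author,
        "change": change_description,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_model_change("model_audit.log", "aml-risk-score", "1.3.0",
                 "j.doe", "Removed postal_code feature after proxy-bias review")
```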

Prioritize interpretable models: Another path to full transparency in A.I. development and management is to build interpretable models rather than black-box models. Like others, we believe that interpretable models are preferable to explainable black-box models for several reasons. First, black-box model explanations can be inconsistent across vendors, which in turn creates confusion among analysts. Furthermore, the explanations themselves can be difficult to decipher given the background and knowledge level of the analyst. 

In cases where a black-box model will perform better than an interpretable one, however, the black-box model should be used, and the team should focus on explanations that provide relevant context, such as the program’s strengths and weaknesses, the data used to arrive at a specific decision, and why an alternative decision was not chosen. To make explanations easy to understand and use, they should take graphical form or the form of a pre-built natural language narrative that can be incorporated into regulatory reports – whatever works best for the team.
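As a sketch of what the interpretable route can look like in practice, the snippet below trains a logistic regression whose weights can be read directly by analysts and auditors. The files, columns, and label name are hypothetical stand-ins; a production AML model would involve far more careful feature engineering and validation.

```python
# Minimal sketch: an interpretable model whose weights can be inspected
# directly, rather than explained after the fact. Files and columns are
# hypothetical stand-ins for an engineered AML feature set.
import pandas as pd
from sklearn.linear_model import LogisticRegression

X = pd.read_csv("training_features.csv")
y = pd.read_csv("training_labels.csv")["is_suspicious"]

model = LogisticRegression(max_iter=1000)
model.fit(X, y)

# Each coefficient states how a feature moves the log-odds of an alert,
# giving a direct, model-wide explanation for review.
weights = pd.Series(model.coef_[0], index=X.columns).sort_values(key=abs, ascending=False)
print(weights.head(10))
```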

Evaluate model performance and monitor for drift: Ongoing evaluation and re-training are paramount to ensuring bias-free model performance. Once a model is trained and deployed, it needs to be monitored consistently. The model can “drift” as relationships among data change over time due to customer behavior changes, new product releases, or other systemic changes. These factors can cause model performance to degrade over time and, if not corrected by periodically re-training the models, result in incorrect or biased decisions. Automating these checks is the key to continual monitoring.
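One common way to automate such a check is a population stability index (PSI) on each model input, comparing recent data against the training baseline. The sketch below uses synthetic stand-in data, and the 0.2 threshold is a common rule of thumb rather than a regulatory standard.

```python
# Minimal sketch: detecting drift in one model input with the population
# stability index (PSI). The data is synthetic and the 0.2 threshold is a
# common rule of thumb, not a regulatory standard.
import numpy as np

def population_stability_index(baseline, current, bins=10):
    """PSI between a baseline (training-time) sample and a current sample."""
    edges = np.quantile(baseline, np.linspace(0, 1, bins + 1))
    base_counts, _ = np.histogram(baseline, bins=edges)
    curr_counts, _ = np.histogram(np.clip(current, edges[0], edges[-1]), bins=edges)
    base_pct = np.clip(base_counts / len(baseline), 1e-6, None)  # avoid log(0)
    curr_pct = np.clip(curr_counts / len(current), 1e-6, None)
    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

rng = np.random.default_rng(0)
train_amounts = rng.lognormal(3.0, 1.0, 10_000)   # stand-in for training-time values
recent_amounts = rng.lognormal(3.3, 1.1, 10_000)  # stand-in for recent production values

psi = population_stability_index(train_amounts, recent_amounts)
if psi > 0.2:
    print(f"PSI={psi:.2f}: significant drift - schedule re-training and bias review")
```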

Assess predictive outcome fairness: Model performance on various population segments should be evaluated to ensure there isn’t a disparate impact on specific segments. For example, suppose an F.I. uses a risk scoring model to classify customers as high or medium risk. The F.I. can cross-check the risk scores against sensitive attributes such as race, religion, zip code, or income to investigate the model for bias. If risk scores for lower-income individuals are consistently higher than those for higher-income individuals, the F.I. should identify which features are driving the risk scores and whether those features truly represent risk. If a feature turns out to reflect a characteristic or behavior driven by different financial circumstances rather than inherent risk, the feature should be modified or removed to reduce model bias. Perhaps a model includes the rapid movement of funds as a feature: the F.I. may find that the model produces higher risk scores for low-income people, but upon investigation determine that the difference is driven by the fact that low-income people are more likely to spend their entire paycheck quickly, not by truly risky money movement patterns.
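A sketch of what that segment-level check might look like, assuming scored customers and an income-band attribute are available in one dataset; the file, columns, and score threshold are illustrative assumptions.

```python
# Minimal sketch: comparing the rate of "high risk" classifications across
# income bands to look for disparate impact. The file, columns, and the
# score threshold are hypothetical.
import pandas as pd

scored = pd.read_csv("scored_customers_with_segments.csv")

high_risk_rate = (
    scored.assign(high_risk=scored["risk_score"] >= 80)
          .groupby("income_band")["high_risk"]
          .mean()
          .sort_values(ascending=False)
)
print(high_risk_rate)

# If one segment is flagged far more often than another, drill into which
# features drive its scores and whether they truly represent risk.
ratio = high_risk_rate.max() / high_risk_rate.min()
print(f"max/min high-risk rate across segments: {ratio:.2f}")
```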

Looking ahead

Having a strategy now for how they will ensure Responsible A.I. is thus not only the right thing for CCOs to do, but it can also help them get ahead of future A.I. regulations. U.S. regulators have sent strong signals that they are encouraging innovation and growth in A.I., and agencies have been instructed to avoid regulatory or non-regulatory actions that act as barriers to the development and deployment of A.I. However, as A.I. begins to permeate the financial industry, questions of ethical and responsible behavior and governance arise. For example, the principal regulatory guidance on quantitative models for U.S. banks is currently S.R. 11-7, Supervisory Guidance on Model Risk Management, from the Board of Governors of the U.S. Federal Reserve System. While this has become a model for regulators globally, it was published in 2011 and does not encompass the full scope of A.I. Thus, more A.I.-specific regulation is expected to be developed, though it may well come from broader regulatory bodies (e.g. the European Commission) before it comes from financial regulators.

Regardless of whether and when A.I. regulations come, by working closely with data science teams now, CCOs can do their part to ensure that A.I. usage within their AML programs is responsible, effective, and free of bias.

