Manually evaluating transaction monitoring models is slow and error-prone, with mistakes resulting in potentially large fines. To avoid this, banks are increasingly turning to automated machine learning.
Regulators increasingly expect banks and financial institutions to be able to demonstrate the effectiveness of their transaction monitoring systems.
As part of this process, banks need to evaluate the models they use and verify (and document) that they’re up to the task. Institutions that fail to maintain a sufficiently effective anti-money laundering (AML) program are frequently hit with huge fines, several of which have exceeded USD 1 billion.
Announcing a recent fine against Danske Bank, Lisa Monaco, deputy attorney general at the US Department of Justice (DoJ), said companies should expect to invest in robust compliance programs, and that failure to do so may well be a one-way ticket to a multi-billion-dollar guilty plea.
Such threats are putting added pressure on smaller banks and financial institutions (FIs). Larger institutions, with their armies of data scientists, tend to struggle less; for players with more limited resources, model validation and evaluation can be a real burden.
What is a model?
In the US, banks commonly monitor transactions using a rule-based system of parameters and thresholds. Typical rules flag customers whose aggregate transaction value over a period exceeds a threshold, or whose transaction volume or value increases sharply. If enough conditions are met, an alert is triggered.
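To make this concrete, here is a minimal sketch of what such a rule might look like. The thresholds, window length, and field names are purely illustrative assumptions, not values drawn from any particular monitoring system.

```python
from datetime import datetime, timedelta

# Hypothetical thresholds -- real systems tune these per customer segment and risk profile.
VALUE_THRESHOLD = 10_000      # aggregate value over the window that triggers an alert
VOLUME_SPIKE_FACTOR = 3       # multiple of the customer's usual weekly transaction count
WINDOW = timedelta(days=7)    # look-back period


def evaluate_rules(transactions, baseline_weekly_count, now=None):
    """Return alert reasons for one customer's recent transactions.

    `transactions` is a list of dicts with 'timestamp' (datetime) and 'amount' (float).
    """
    now = now or datetime.now()
    recent = [t for t in transactions if now - t["timestamp"] <= WINDOW]

    alerts = []
    total_value = sum(t["amount"] for t in recent)
    if total_value > VALUE_THRESHOLD:
        alerts.append(f"aggregate value {total_value:,.2f} over {WINDOW.days} days exceeds threshold")

    if baseline_weekly_count and len(recent) > VOLUME_SPIKE_FACTOR * baseline_weekly_count:
        alerts.append(f"volume of {len(recent)} transactions is more than "
                      f"{VOLUME_SPIKE_FACTOR}x the usual weekly count")

    return alerts
```

In practice each rule would be one of many layered scenarios, each with its own parameters per customer segment.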
Even in their simplest incarnation, regulators consider such systems to be models. Supervisory guidance OCC 2011-12 broadly defines a model as a quantitative method, system, or approach that processes input data into estimates or reports. In practice, a typical rule-based transaction monitoring system involves multiple layers of rules.
Regardless of complexity, banks must manage model risk appropriately. Broadly, that means asking three questions: is the model soundly designed, is it implemented correctly with complete and accurate data, and is it being used as intended?
These are easy questions to ask, but answering them can be extremely challenging. The OCC supervisory guidance stipulates that banks should manage model risk just like any other type of risk, which includes “critical analysis by objective, informed parties who can identify model limitations and assumptions and produce appropriate change”.
This guidance states that banks should ensure their models are performing as expected, in line with their design objectives and business uses. It defines the key elements of an effective validation framework as evaluation of conceptual soundness, ongoing monitoring (including process verification and benchmarking), and outcomes analysis (including back-testing).
Regulatory compliance
Regulators have continued to raise the bar as the US seeks to restrict the access of sanctioned countries and individuals to the financial system, and to crack down on financial crime in general.
Since 2018, the New York State Department of Financial Services has required boards or senior officers to submit an annual “compliance finding” that certifies the effectiveness of an institution’s transaction monitoring and sanctions filtering programs.
Taking this a step further, the DoJ announced in 2022 that it was considering a requirement for chief executives and chief compliance officers to certify the design and implementation of their compliance program. With continued geopolitical tensions as the war in Ukraine drags on, the potential cost of a compliance failure is only going to increase.
The regulation of models falls under these broad requirements for effective risk controls. While the approach banks take to evaluating models will vary case by case, the same general principles apply across the board.
Similarly, the frequency of model evaluation should be determined using a risk-based approach, with re-evaluation typically prompted by significant changes to the institution’s risk profile, such as a merger or acquisition, or expansion into new products, services, customer types or geographic areas. Increasingly, however, regulators expect models to be evaluated as often as every 12 to 18 months.
Model evaluation challenges
Rule-based models are being asked to do much more as the nature and volume of financial transactions have evolved. As new threats have emerged, models have become more and more complex, though not necessarily more effective. Unfortunately, many are not up to the task.
In many cases, the model has become a confusing black box that few people within the institution understand. Over the years, changes to data feeds, scenario logic, system functions, and staffing can mean that documentation explaining how the model works is incomplete or inaccurate. All of this can make evaluation very difficult for smaller banks. A first-time assessment will almost certainly be time-consuming and costly, and possibly flawed.
However, the challenges are not going away. Changes in consumer behavior, which accelerated during the pandemic, are here to stay. Banks and FIs have digitized their operations, vastly increasing their range of online services and payment methods. Consumers are also showing greater willingness to switch to challenger banks with digital-first business models.
These changes have created more vulnerabilities. Competitive pressures are squeezing compliance budgets, while the expansion of online services has created more opportunities for AML failures. To keep up, FIs need to respond quickly and flexibly to new threats.
Better model evaluation with automated machine learning
This process of model evaluation can be streamlined using automated machine learning (AutoML), which allows models to be evaluated continuously (or on short cycles) through a standardized process, leading to higher-quality evaluations. By contrast, the manual approach is slow and error-prone.
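As a rough sketch of what one standardized, repeatable evaluation step could look like, the snippet below scores a model's historical alerts against investigator outcomes. It assumes labelled outcomes are available and uses generic metrics; it is illustrative only, not a description of any particular AutoML product.

```python
from sklearn.metrics import precision_score, recall_score


def evaluate_alerts(outcomes, alerts):
    """Compare a model's alerts against investigator outcomes for the same period.

    outcomes -- 1 if the case was ultimately judged suspicious, else 0
    alerts   -- 1 if the model raised an alert on that case, else 0
    """
    return {
        "precision": precision_score(outcomes, alerts, zero_division=0),  # share of alerts that were justified
        "recall": recall_score(outcomes, alerts, zero_division=0),        # share of true cases the model caught
        "false_positives": sum(1 for y, a in zip(outcomes, alerts) if a and not y),
    }


# Ten historical cases reviewed by investigators (illustrative data only)
outcomes     = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
model_alerts = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
print(evaluate_alerts(outcomes, model_alerts))
```

Running the same harness on every model, on every cycle, is what makes the results comparable over time and easier to document for regulators.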
AutoML models take huge sets of data, learn the behaviors encoded in that data, and reveal patterns that indicate evidence of money laundering. The rapidly changing landscape of AML regulation, combined with the growing number of transactions and customers, leaves almost no room for a traditional, manual, project-by-project approach. That is why the industry is increasingly looking at a more disruptive approach: models trained on customers’ good behavior. The results of this non-traditional method, combined with AutoML, let banks adapt to the new reality and stay ahead of almost any new criminal pattern.
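One way to read “trained on customers’ good behavior” is as an anomaly-detection setup: fit a model only on activity believed to be legitimate, then flag anything that deviates from that learned baseline. The sketch below uses scikit-learn’s IsolationForest purely to illustrate the idea; the features, synthetic data, and contamination rate are assumptions, not a description of any specific vendor’s method.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative features per transaction: amount, hour of day, days since last transaction.
rng = np.random.default_rng(42)
normal_history = np.column_stack([
    rng.normal(120, 40, 5000),    # typical amounts
    rng.integers(8, 20, 5000),    # daytime activity
    rng.exponential(2, 5000),     # regular cadence
])

# Fit only on behavior believed to be legitimate ("good behavior").
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_history)

# Score new activity; -1 marks transactions that deviate from the learned normal pattern.
new_activity = np.array([
    [150, 14, 1.5],     # looks like normal behavior
    [9500, 3, 0.01],    # large amount, odd hour, rapid-fire -- likely flagged
])
print(detector.predict(new_activity))
```

Because such a model learns what “normal” looks like rather than enumerating known typologies, it can surface unfamiliar patterns that a fixed rule set would miss, at the cost of requiring careful tuning to keep false positives manageable.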