
Seven challenges financial institutions must address to harness machine learning’s potential

Machine learning (ML), the most prominent arm of artificial intelligence (AI), cuts both ways for the financial services industry, where its applications are getting wider by the day.

The benefits are obvious. ML models are trained to learn from data and outcomes, much as the human brain does, and can execute complex tasks at a scale and speed humans simply cannot.

But perils abound. The complexity of the models is itself a risk: many are opaque, notorious for being black boxes, and when non-transparent models malfunction, problems can escalate quickly.

In extreme cases, a model failure could even bring down a financial institution, with systemic consequences for the entire economy.

For financial institutions, a number of challenges arise in making ML models adhere to the existing principles and best practices of model risk management. In our experience working with financial institutions, the following are seven of the most common challenges, along with the steps institutions are taking to address them.

1) Operationalizing an ML model validation framework that covers algorithms, validation techniques, controls, and documentation

Financial institutions need to put in place an end-to-end validation framework specifically for ML models.

Selecting algorithms that suit the business requirements and the available data is crucial. This requires expertise in ML modelling, business understanding, and programming.

The validation techniques for ML models differ from those generally used by financial institutions for other models. They could also differ according to the ML algorithm used and the availability and structure of the data.

Additionally, re-validations and targeted validations (prompted by significant changes to existing models) should be covered by the second line of defense, to confirm the model remains fit for purpose. In ML models, minor changes to parameters or to the tuning setup can significantly affect the algorithm's behavior and the model's results.
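
To illustrate, here is a minimal sketch of such a sensitivity check in Python, assuming a scikit-learn gradient boosting classifier on synthetic data; the model, data, and tolerance are illustrative assumptions rather than a prescribed standard. The model is refitted under different random seeds with the same hyperparameters, and the spread of the validation metric is compared against an agreed tolerance.

    # Minimal sketch (illustrative): refit the same model under different random
    # seeds and check that the headline metric is stable before sign-off.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

    aucs = []
    for seed in range(5):  # same hyperparameters, different seeds
        model = GradientBoostingClassifier(random_state=seed).fit(X_train, y_train)
        aucs.append(roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]))

    spread = max(aucs) - min(aucs)
    print(f"validation AUC spread across seeds: {spread:.4f}")
    if spread > 0.02:  # illustrative tolerance agreed with the second line of defense
        print("WARNING: unstable results; escalate for targeted validation")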

A control framework also needs to be in place, with an emphasis on the design and effectiveness of the controls. Complete documentation is a must to ensure an independent party understands the modelling objective, the algorithms and validation techniques used, control ownership, and coverage.

It is also important that model validation functions are staffed with people who possess the right knowledge and skills. Hence, model validation teams must hire people with a data science background and a solid grounding in different AI and ML modelling techniques.

2) Setting up policies covering regulatory requirements, governance and controls, and monitoring

There is still considerable uncertainty around regulatory requirements for ML model validation.

Regulatory bodies have set out general expectations, but there is as yet no formal regulatory framework for ML models. Financial institutions should develop a policy stating the applicable regulatory requirements, which could include model risk management guidelines as well as guidelines specific to ML models.

The model risk management guidelines should cover conceptual soundness, data quality checks, governance and controls, model monitoring, and model validation. The Board and senior management should be aware of use cases and understand the effectiveness of the controls used in the ML model lifecycle. Roles and responsibilities need to be clearly defined to achieve ownership and accountability.

3) Implementation of ML models within a robust and controlled environment

The implementation of ML models is prone to risk. Compared with statistical or traditional models, the complex specifications of ML algorithms place heavy demands on computational and memory efficiency, which heightens concerns about implementation risk.

Implementing ML models across different platforms requires both expertise and infrastructure. The emphasis should be on creating a robust IT infrastructure, developing the necessary tooling, and improving model monitoring and validation setups within these tools. This complexity makes it harder for the validation function to verify that models have been implemented correctly within the IT system.

Documentation of the implementation process enables an independent party to understand the process flow of the system used. The model validation function needs to assess the appropriateness of the model implementation, and evaluate the testing performed and overall control framework underpinning the model.
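
One practical control is a reconciliation test: score a frozen "golden" dataset with both the development model and the production implementation and compare the outputs. The sketch below assumes hypothetical helpers load_dev_model() and call_production_scoring(), which stand in for whatever reference model and deployed scoring service an institution actually uses.

    # Minimal sketch (illustrative): reconcile development and production scores
    # on a version-controlled "golden" dataset. load_dev_model() and
    # call_production_scoring() are hypothetical placeholders.
    import numpy as np
    import pandas as pd

    golden = pd.read_csv("golden_sample.csv")                  # frozen test cases
    dev_scores = load_dev_model().predict_proba(golden)[:, 1]  # reference implementation
    prod_scores = call_production_scoring(golden)              # deployed implementation

    diff = np.abs(dev_scores - prod_scores)
    print(pd.DataFrame({"dev": dev_scores, "prod": prod_scores, "abs_diff": diff}).describe())
    assert diff.max() < 1e-6, "Production scores deviate from the reference model"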

4) Designing effective data governance processes

Since data is an important aspect of ML models, adequate governance processes around it are critical. The data governance process should cover data sources, input data quality checks, data analysis (including univariate and outlier analysis), controls on manual inputs, and other aspects.

From a model validation perspective, data testing requires an effective data management framework that establishes rules on data quality, completeness, and timeliness for models. Monitoring deviations from these standards is challenging, as the data used in ML methods is vast compared with that used in traditional models. ML models also rely on large volumes of heterogeneous, high-dimensional data, so it is important to document every stage, from sourcing, processing, and transformation through to full deployment, to ensure the data is appropriate.

Therefore, the model validation team must confirm that input data is available and has undergone appropriate quality checks before being used in production. It is also necessary to test how different ML techniques handle missing data, normalization, and anomalous data. In addition, firms should ensure data can be traced back to source systems so that data issues can be fixed at the source.
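
As an illustration, here is a minimal sketch of automated input checks using pandas; the file name, column names, and thresholds are assumptions for illustration rather than a prescribed standard.

    # Minimal sketch (illustrative): basic input data quality checks before scoring.
    # File name, key columns, and thresholds are assumed for illustration.
    import pandas as pd

    df = pd.read_parquet("model_input.parquet")

    completeness = df.notna().mean()                                   # non-missing share per column
    duplicate_keys = df.duplicated(subset=["customer_id", "as_of_date"]).sum()
    days_stale = (pd.Timestamp.today() - pd.to_datetime(df["as_of_date"]).max()).days

    numeric = df.select_dtypes("number")
    outlier_share = ((numeric - numeric.mean()).abs() > 3 * numeric.std()).mean()  # crude univariate screen

    print(completeness.sort_values().head())
    print("duplicate keys:", duplicate_keys, "| days since latest snapshot:", days_stale)
    print(outlier_share.sort_values(ascending=False).head())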

5) Controlling for lack of explainability of ML models

The lack of explainability of ML models is a major challenge for the more complex techniques, such as artificial neural networks (ANNs), where the input-output relationships are unclear and lack transparency. The complexity of some ML models can make it difficult to provide a clear outline of the theory, assumptions, and mathematical basis of the final estimates. As a result, such models are hard to validate efficiently.

This black-box characteristic makes it difficult to assess a model's conceptual soundness, reducing its reliability. For instance, validating the hyperparameters may require additional statistical knowledge, and institutions should therefore ensure that the staff overseeing validation are appropriately trained.

Model validators can look to mitigating controls to address the lack of transparency. Such controls can take the form of more rigorous ongoing monitoring. It is also recommended to use benchmark models to compare outputs and variances against predefined rules; breaches of those rules could lead to further investigation or to discontinuing the model's use in production.
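
Here is a minimal sketch of such a benchmark control, assuming a random forest as the opaque model, a logistic regression as the transparent challenger, and an illustrative divergence rule; none of these choices are prescribed, they simply show the mechanics.

    # Minimal sketch (illustrative): compare an opaque model's scores against a
    # transparent benchmark and flag breaches of a predefined divergence rule.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=5000, n_features=15, random_state=1)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=1)

    black_box = RandomForestClassifier(random_state=1).fit(X_train, y_train)
    benchmark = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    divergence = np.abs(black_box.predict_proba(X_test)[:, 1]
                        - benchmark.predict_proba(X_test)[:, 1])
    breach_rate = (divergence > 0.30).mean()  # predefined rule, illustrative
    print(f"share of cases breaching the divergence rule: {breach_rate:.2%}")

A persistent breach rate above the agreed tolerance would trigger the further investigation, or discontinuation, described above.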

6) Hyperparameter calibration of ML models

The key assumptions in ML models are usually the hyperparameters that are developed and tuned for the model. If these assumptions are opaque, so are the business intuition behind the model and its soundness. Moreover, in ML models the values of the hyperparameters can severely affect the model's results.

Changes to the hyperparameter settings need to be evaluated to assess the appropriateness of the modeler's choices. If further changes to the hyperparameters are made, the validation team must confirm that the model's results remain consistent.
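
A minimal sketch of such a consistency check follows, assuming a gradient boosting model and a small grid around the developer's chosen learning rate; the grid, model, and data are illustrative assumptions.

    # Minimal sketch (illustrative): vary one hyperparameter around the chosen value
    # and confirm that cross-validated performance remains consistent.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=3000, n_features=20, random_state=2)

    for lr in (0.05, 0.1, 0.2):  # developer's choice assumed to be 0.1
        model = GradientBoostingClassifier(learning_rate=lr, random_state=2)
        scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
        print(f"learning_rate={lr}: AUC {scores.mean():.3f} +/- {scores.std():.3f}")

Large swings in performance across neighbouring values would prompt the validation team to challenge the calibration and its documentation.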

7) Outcome analysis

Outcome analysis, in our experience, is crucial to compensate for the lack of explainability in some ML techniques, and it plays an important role in assessing model performance. The analysis focuses on cross-validation and its variants; back-testing procedures do not have the same relevance as they do for traditional models.

The bias-variance trade-off in ML models can be challenging and a cause for concern. While the trade-off is not unique to ML, having long been present in statistical and regression models, it is amplified in ML models.

Many metrics can be used for this purpose, depending on the model's methodology. For instance, mean squared error (MSE) can be decomposed into bias and variance components. The explicit evaluation of this trade-off should be reviewed and documented.
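
As an illustration, bias and variance can be estimated empirically by refitting the model on bootstrap resamples. The sketch below assumes a decision tree regressor on synthetic data; note that the bias term estimated against noisy targets also absorbs the irreducible noise.

    # Minimal sketch (illustrative): estimate the bias/variance split of test MSE
    # by refitting the model on bootstrap resamples of the training data.
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeRegressor

    X, y = make_regression(n_samples=2000, n_features=10, noise=10.0, random_state=3)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=3)

    rng = np.random.default_rng(3)
    preds = []
    for _ in range(50):  # bootstrap refits
        idx = rng.integers(0, len(X_train), len(X_train))
        preds.append(DecisionTreeRegressor().fit(X_train[idx], y_train[idx]).predict(X_test))
    preds = np.array(preds)

    bias_sq = ((preds.mean(axis=0) - y_test) ** 2).mean()  # includes irreducible noise
    variance = preds.var(axis=0).mean()
    print(f"estimated bias^2 (plus noise): {bias_sq:.1f}, variance: {variance:.1f}")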

Out-of-sample testing is also an important component of outcome analysis for AI/ML models. Validators must review and assess whether appropriate procedures, including cross-validation and hold-out test sets, were followed during model development to ensure outcome analysis is properly conducted.
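
A minimal sketch of one such procedure, an in-sample versus out-of-sample comparison, is shown below; the model, data, and tolerance for performance decay are illustrative assumptions.

    # Minimal sketch (illustrative): compare in-sample and held-out performance
    # and flag a large gap as a potential sign of overfitting.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=4000, n_features=25, random_state=4)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=4)

    model = RandomForestClassifier(random_state=4).fit(X_train, y_train)
    auc_in = roc_auc_score(y_train, model.predict_proba(X_train)[:, 1])
    auc_out = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

    print(f"in-sample AUC: {auc_in:.3f}, out-of-sample AUC: {auc_out:.3f}")
    if auc_in - auc_out > 0.05:  # illustrative tolerance agreed with validation
        print("WARNING: material performance decay out of sample; review for overfitting")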

