
Balancing AI innovation and model risk: Insights from SS1/23

Recently, the UK Government called upon key regulators, including the ICO, FCA, and Bank of England (which incorporates the PRA), to outline their strategic approach to Artificial Intelligence (AI). In response, the Bank of England highlighted various existing regulations that demonstrate its capacity to supervise AI. Notably, SS1/23 featured prominently in addressing at least two of the government's five highlighted areas.

This focus is understandable, given that virtually all AI-driven applications are based on mathematical models. As a result, AI-related risks and model-related risks are inherently interconnected.

In this blog, we will address critical questions surrounding the PRA's SS1/23 principles. 

SS1/23 came into effect on May 17th this year. It covers all models used by firms, and by "all", we mean all.

The PRA has explicitly stated that SS1/23's scope extends across a firm's entire operations, not just credit and market risk, which have traditionally been the primary focus of model risk management for many institutions. This comprehensive approach means that if HR departments employ models for candidate screening, if customer interactions are managed by AI-driven chatbots, or if personalised advertising content is generated by AI, all of these fall within the purview of SS1/23.

Is SS1/23 up to the task of supporting such a broad range of models?

The core strength of SS1/23 is its principles-based, technology-agnostic approach. This makes it as applicable to the most complex large language models imaginable as it is to the simplest rule-based models. SS1/23 adopts a top-down perspective, concentrating on potential risks and outcomes rather than on specific application areas, technological underpinnings, or methods of operation.

It might be tempting, then, to assume that adherence to SS1/23 supports a comprehensive risk management framework that mitigates the dangers AI-based tools present. However, there is one obvious gap, which receives only a passing mention in the Bank's response to the government.

At present, SS1/23 applies exclusively to firms holding IRB permissions for capital requirement calculations. Of the over 1,300 firms under PRA regulation, a mere 23 possess these permissions. Admittedly, those 23 include the UK's major banks and building societies, representing the bulk of UK banking activity. But as we've seen with Google, Amazon and Tesla, new entrants and market disruptors can start small and grow rapidly. There is no regulatory requirement for a newly established FinTech to apply for IRB permissions, and there are now fewer incentives to do so given the forthcoming implementation of the Basel 3.1 standard. Consequently, the pool of IRB-approved firms to which SS1/23 applies is unlikely to materially increase any time soon.

How is the Bank going to ensure that AI-appropriate model risk management principles are applied to the rest of its flock?

Based on the PRA's previous comments, a pretty sound bet is for the risk management principles described in SS1/23 to be rolled out across the board in the near future. We expect this will be in a simplified form, applied proportionally based on a firm's size and complexity, but we envisage two key impacts:

  1. All organisations can expect to devote increased resources to model risk management in future.

  2. To successfully embed model risk management principles, firms will need to consider their cultural approach to model risk. Model risk should be front and centre, clearly visible to the board and given a level of scrutiny and oversight similar to other areas of risk.

The breakdown below sets out, for each SS1/23 principle, the actions we see firms needing to undertake to manage AI-based risks (and model risk in general).


Principle 1 – Model identification and model risk classification

  • Factoring the complexity and transparency of AI models into model tiering, so that the materiality assessment of a firm's model landscape captures these risks (see the sketch after this list).

  • Identifying how models interact with other business systems and processes as well as other models, to be able to accurately assess the holistic risk that the model presents.
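
To make Principle 1 concrete, here is a minimal Python sketch of how model tiering might be encoded. The criteria, weights, and thresholds are hypothetical assumptions for illustration only; SS1/23 does not prescribe a scoring method, and each firm would calibrate tiering to its own risk appetite.

```python
# Hypothetical model-tiering sketch: scores a model's materiality,
# complexity, and interconnection to assign an MRM tier. All criteria
# and weights below are illustrative assumptions, not SS1/23 rules.
from dataclasses import dataclass

@dataclass
class ModelProfile:
    name: str
    materiality_gbp: float      # financial exposure influenced by the model
    is_ai_based: bool           # e.g. ML/LLM rather than rule-based
    explainability_score: int   # 1 (opaque) to 5 (fully transparent)
    upstream_models: int        # models feeding this one (interconnection)

def assign_tier(model: ModelProfile) -> str:
    """Map a model's risk profile to a tier; higher tiers get more scrutiny."""
    score = 0
    if model.materiality_gbp > 100_000_000:
        score += 3
    elif model.materiality_gbp > 10_000_000:
        score += 2
    if model.is_ai_based:
        score += 2                              # complexity uplift for AI
    score += 5 - model.explainability_score     # opacity adds risk
    score += min(model.upstream_models, 3)      # cap interconnection uplift
    if score >= 7:
        return "Tier 1"  # full independent validation, board visibility
    if score >= 4:
        return "Tier 2"
    return "Tier 3"

print(assign_tier(ModelProfile("screening-chatbot", 5_000_000, True, 2, 1)))
```

The point of a scheme like this is that AI-specific factors, such as opacity and interconnection with other systems, feed directly into the tier, and therefore into the intensity of validation and governance the model receives.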

Principle 2 – Governance

  • Ensuring senior management understands the additional risks associated with complex AI-based models.

  • Treating model risk as a risk category in its own right, while recognising that it can and does interact with other risk types.

  • Reviewing risk appetite statements, ensuring model risk is given equal prominence to other risks and is aligned with the firm's wider risk appetite.

  • Revising governance structures to ensure that model risk management (MRM) receives sufficient attention from senior executives.

  • Identifying suitably qualified individual(s) responsible for MRM with a direct reporting line to the CRO.

  • Ensuring there is business-wide understanding of model risk including the inherent uncertainty in model outcomes. This uncertainty should be a standard discussion item whenever model outputs are reviewed.

  • Ensuring executives can articulate the level of uncertainty and how they have factored it into decisions made or influenced by the model.

Principle 3 – Model development, implementation, and use

  • Providing guidelines for developers when building AI models, specifying how and where it is appropriate to use complex techniques.

  • Establishing robust methods to verify the validity and provenance of data used to build and implement models, particularly unstructured data and/or data supplied by a third party (a simple provenance check is sketched after this list).

  • Reviewing and updating policies to support appropriate business use of models that complies with legal and regulatory requirements.
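
To illustrate the data provenance point, below is a hypothetical Python check that refuses a third-party data delivery unless it matches a hash recorded at approval time and a minimal expected schema. The file layout, column names, and hash are assumptions for illustration only.

```python
# Hypothetical provenance and schema gate for third-party data, run
# before the file is allowed into model development. The expected hash
# and required columns are illustrative placeholders.
import csv
import hashlib

EXPECTED_SHA256 = "replace-with-hash-recorded-at-delivery-approval"
REQUIRED_COLUMNS = {"customer_id", "income", "region"}  # assumed schema

def verify_delivery(path: str) -> None:
    """Raise if the file's provenance or schema cannot be confirmed."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    if digest != EXPECTED_SHA256:
        raise ValueError("Provenance check failed: hash does not match approved delivery")
    with open(path, newline="") as f:
        header = set(next(csv.reader(f)))
    missing = REQUIRED_COLUMNS - header
    if missing:
        raise ValueError(f"Schema check failed: missing columns {sorted(missing)}")
```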

Principle 4 – Independent model validation

  • Ensuring guidelines for second-line MRM functions are appropriate for all types of models across all areas of the business, so that models can be fully validated.

  • Defining specific requirements for how and when AI models should be monitored, and how the results should be reported through governance (one common monitoring metric is sketched after this list).

  • Developing suitable approaches for the validation of unstructured data sources and third-party data.
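
As one example of ongoing monitoring, the sketch below computes the Population Stability Index (PSI), a widely used measure of whether a model's input or score distribution has drifted since validation. The thresholds shown are commonly cited rules of thumb, not SS1/23 requirements.

```python
# Illustrative drift check using the Population Stability Index (PSI).
# Bin counts and alert thresholds are conventional assumptions.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a baseline distribution and current production data."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) in sparse bins
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

baseline = np.random.normal(0.0, 1.0, 10_000)  # scores at validation time
live = np.random.normal(0.3, 1.2, 10_000)      # scores in production

value = psi(baseline, live)
if value > 0.25:    # commonly cited "significant shift" threshold
    print(f"PSI={value:.3f}: escalate through model risk governance")
elif value > 0.10:
    print(f"PSI={value:.3f}: investigate potential drift")
else:
    print(f"PSI={value:.3f}: distribution stable")
```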

Principle 5 – Model risk mitigants

  • Developing policies for risk mitigation action plans, so that agreed actions are ready to execute when model performance becomes sub-optimal or model behaviour deviates from expectations (a minimal trigger is sketched below).
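
A minimal sketch of such a trigger, assuming a hypothetical fallback arrangement: when a monitored performance metric breaches an agreed floor, decisions route to a simpler pre-approved model and the breach is logged for escalation. The metric, threshold, and fallback logic are illustrative assumptions.

```python
# Hypothetical Principle 5 mitigation trigger: degrade gracefully to a
# pre-approved rule-based fallback when monitored performance breaches
# an agreed floor, and log the event for governance escalation.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("mrm")

GINI_FLOOR = 0.40  # assumed minimum acceptable discriminatory power

def rule_based_decision(application: dict) -> str:
    # Deliberately conservative placeholder fallback logic
    return "refer" if application.get("income", 0) < 20_000 else "approve"

def ai_model_decision(application: dict) -> str:
    return "approve"  # stand-in for the AI model's output

def decide(application: dict, current_gini: float) -> str:
    """Route to the AI model unless its monitored Gini has degraded."""
    if current_gini < GINI_FLOOR:
        log.warning("Gini %.2f below floor %.2f: fallback engaged, escalating to MRM",
                    current_gini, GINI_FLOOR)
        return rule_based_decision(application)
    return ai_model_decision(application)

print(decide({"income": 15_000}, current_gini=0.35))
```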


So, while SS1/23's model risk management principles currently have limited application among PRA-regulated firms, their broader adoption appears inevitable. Forward-thinking firms should begin planning now for the changes that are coming in the MRM regulatory landscape.

