Artificial intelligence (AI) presents significant opportunities for central banks. At the same time, its adoption entails complex risk management challenges.
The use cases for AI span a broad range of critical central bank functions, including data analysis, research, economic forecasting, payments, supervision and banknote production. The adoption of AI presents new risks and can amplify existing ones. The potential risks are wide-ranging and include risks to data security and confidentiality, risks inherent to AI models (eg "hallucinations") and, importantly, reputational risks. The potential risk exposure for central banks can be significant, owing to the criticality and sensitivity of the data they handle as well as their central role in financial markets.
This report on the governance of AI adoption provides guidance on the implementation of AI at central banks and proposes a governance and risk management framework. A comprehensive risk management strategy can leverage existing risk management models and processes, in particular the well-established three lines of defence model. When incorporating the specific issues around AI and its use cases, risk managers at central banks can draw on the frameworks proposed by a number of international bodies. A good governance framework is key to adopting AI. The report proposes an adaptive governance framework and recommends ten practical actions that central banks may want to undertake as part of their AI adoption journey.
The report is the outcome of work conducted by Bank for International Settlements (BIS) member central banks in the Americas within the Consultative Group on Risk Management (CGRM), which brings together representatives of the central banks of Brazil, Canada, Chile, Colombia, Mexico, Peru and the United States. The Artificial Intelligence Task Force that prepared this report was co-led by Alejandro de los Santos from the Bank of Mexico and Angela O'Connor from the BIS. The BIS Americas Office acted as the secretariat.