Regulators around the world are taking various approaches to address the challenges and opportunities presented by AI in financial services. It is still very much an evolving landscape, with specific requirements already differing across regions. It is certainly a space to watch, and I believe that regulation is needed as much for AI as it is for other areas of banking, such as credit risk. If that proves to be the case, then expect the same level of scrutiny over how AI is working: is it having the expected impact, and can it be easily explained?
In Europe, we are already seeing the European Union Artificial Intelligence Act (EU AI Act) begin to shape practice within the region, as well as inform how regulators outside the EU will mould their own rules for managing AI adoption.
From my own experience of having led lending operations in banking, it is interesting to see how the Australian financial regulator, the Australian Securities and Investments Commission (ASIC), is also starting to address how banks should be using AI. ASIC recently published a report titled “Beware the gap: Governance arrangements in the face of AI innovation”, which outlined a potential governance gap in how Australian financial services and credit licensees are implementing AI in ways that affect customers.
The banking sector in Australia already has quarterly review points on key areas such as credit, collections, risk-weighted asset coverage, and the performance of the underlying models. As AI is likely to touch those areas, similar governance and oversight should be expected.
What regulators like ASIC could be concerned about is some financial service providers adopting AI at a pace that their testing approaches and policy structures cannot keep up with. If testing is not sufficient, then no AI-linked product, policy or process change should be viewed as robust enough to launch to the public and customers. The risk of the AI governance gap widening is not just a governance worry in and of itself; it raises the potential for real consumer harm.
Whilst there may be some anxiety about a gap, my own discussions with technology teams at banks suggest that they have been moving cautiously on their generative AI (GenAI) adoption under the direction of their compliance teams. Those early GenAI projects have largely been in the back office and have not run customer-facing processes unsupervised (i.e. either humans are in the loop, or the process is not customer facing).
This will change in 2025 as teams grow more confident that GenAI can be overseen without serious mistakes. I expect that AI- and GenAI-driven models will become more deeply embedded into financial service offerings. From creating more personalised customer interactions to streamlining back-end workflows, intelligence from these models will become actionable (even autonomous rather than human managed, though still monitored) and drive impact at scale for the financial services industry. However, the models need controls.
My view is that banks should not resist AI regulation, but rather welcome any robust framework that minimises the potential for customer (and bank) harm. A strong regulatory push for AI, with some, albeit likely never full, global consistency, will drive safe and ethical innovation. Importantly, it will help build trust, as customers will have the assurance that they are being protected as AI becomes more widely used in banking.
Looking more globally, when countries consider how to balance regulation against innovation, it should come down to testing and transparency around how outcomes are reached: transparency and the minimisation of bias are essential parts of ensuring safe and effective AI use. The same applies to the autonomous enterprise and the use of AI agents within banking that can operate independently on a specific task; proper governance structures are needed to enable the safe operation and control of these agents. Humans should be able to override the automation, remain in full control, and be capable of effective monitoring. Take the example of fully automated credit decisioning or asset valuations: these equally require monitoring for effectiveness and fairness, to ensure the right thing is being done by the customer.
How AI is used in the financial services industry will continue to grow and create transformational change, aligned with driving efficiency, improving customer service, and modernising legacy systems, but it does need to be properly regulated. Even though jurisdictions around the world will create their own AI regulations, the ideal would be some level of standardisation in order to create the most comprehensive AI policies possible. If we do not get there, then at the very least countries and banks should look to apply the same level of governance as surrounds other automated functions, such as credit decisioning.
AI regulation is a competitive space globally, and Europe and Australia are showing themselves as the front runners, with the rest of the world at least watching, if not yet following in their footsteps. Certainly, some level of controls and assurance focus is required. What financial services institutions should remember is that AI should not shape us; we should be shaping it.