While artificial intelligence (AI) is not a new concept, having been first explored by a research group in the 1950s, its widespread industrial application has only just begun.
This is an excerpt from The Future of AI in Financial Services 2025 report, a special edition for the inaugural Finextra event, NextGen AI.
Financial services is just one of the many sectors grappling today with this burgeoning technology. How can it be seamlessly embedded into incumbent systems? What are the customer service applications? Which risks and regulations must be heeded?
These are just some of the questions financial institutions are being forced to ask, as they look to the future of AI in finance. As is often the case in highly competitive markets, the imperative is in many ways straightforward: advance and innovate or get
left behind.
How should financial institutions mitigate risk around AI use?
In an exclusive interview with Finextra, Shaun Hurst, principal regulatory adviser, Smarsh, underlined that “the banking and financial services sector was quick to take up GenAI, helping to improve customer services and create efficiencies. However, emerging
technologies like AI are now causing several unique challenges for banks and financial institutions, and some of the most significant relate to compliance, privacy and security.”
Financial institutions that take on AI face several risks including “model risk, bias, regulatory compliance, potential reputational damage, and cybersecurity” as highlighted by Bahadir Yilmaz, chief analytics officer at ING.
Awareness of these challenges was something Graham Smith, head of data science and innovation, NatWest Group also emphasised:
“Given the role that banks play in society, it’s incumbent on the industry to make sure any AI usage is managed carefully to ensure the best outcomes for customers. The first step is understanding and communicating the risks and opportunities of a particular
use of AI, be it with stakeholders or customers. For us, it’s important that everyone is comfortable operating within those parameters, prioritising transparency, data privacy and, ultimately, trust.”
“The key to overcoming these challenges,” Hurst continued, “is to have people at every level, from the C-suite to graduate new joiners, involved in its rollout and use, ensuring everyone works together to keep things secure. It’s also important that tech
teams have regular check-ins on how systems are performing and run timely risk assessments, paying careful attention to data quality, which means leaders need to understand and manage a vast amount of data and ensure the data is accurate for the best possible
output.”
Evidently, the future of AI in financial services will be accompanied by a new cohort of structures and staffers mandated to monitor the entire production line: from the information AI is being fed, to the developers feeding it, to the quality of the output generated. Only then can institutions maximise return on investment and security for customers.
Taking this one step further, Isa Goksu, CTO, Globant UKI and DE, said that “financial institutions must implement comprehensive strategies that include robust governance frameworks and explicit internal policies to ensure transparency and accountability,
particularly among senior managers.”
“We’ve heard a lot about AI bias and its risks this year,” he continued. “The best way to tackle AI bias is to train models on high-quality, unbiased data and run regular audits. In high-stakes areas like credit scoring, transparency is essential to build
trust. A ‘human-in-the-loop’ approach is key to securing that trust. It allows for human oversight over AI-driven decisions, while regulatory sandboxes let institutions test AI safely, ensuring compliance and reducing risks before full deployment.”
Smith highlighted his institution’s own structures, which also include human supervision: “At the heart of our Code of Conduct, we ensure that AI systems are subject to human oversight, and that they respect and promote human agency; they are technically robust, resilient and
safe; that the decisions or predictions produced can be explained; and that they are free from unfair bias or discrimination. We work with multi-stakeholder teams across the bank to ensure we have a robust AI risk management governance process to further embed
our Code of Conduct.”
Yilmaz stated that financial institutions can mitigate risks “by implementing a combination of governance, transparency and risk management practices tailored to the unique risks AI poses. By integrating a combined approach, financial institutions can foster
trust in their AI systems, ensure compliance, and better manage the risks of AI-driven innovations.”
Pavel Goldman-Kalaydin, head of AI & ML, Sumsub, spoke to the issues around compliance and stewardship too: “Financial institutions face increasingly stringent regulatory requirements, adding pressure to replace home-built systems with specialised, standardised
platforms. Finance firms need to onboard new users swiftly and securely, perform anti-money laundering screening on customers, verify business clients, and monitor for fraud and suspicious transactions. We look to support finance firms with these, providing
the option to adopt and manage all features through a single AI-powered platform.”
Goldman-Kalaydin added that in one
recent case, a UK businessman and Revolut user lost £165,000 to fraud, when scammers bypassed security measures and gained access to his business account. Hundreds of transactions were authorised in just an hour.
“Avoiding common fraud schemes requires vigilance and awareness from individuals too,” he said. “They must be cautious with unsolicited payment requests, verify the legitimacy of invoices or purchase requests, and remain mindful of sharing personal or financial
information so easily.”
Clearly, the future of holistic fraud prevention means co-operation between every stakeholder in the value chain.
Turning to the tools that are available to financial players, Goldman-Kalaydin underlined that AI works both ways when it comes to financial crime: “While fraudsters use it to create deepfakes and manipulate unsuspecting victims, financial institutions can
also harness AI to combat these threats. Regarding businesses, recent data revealed a fourfold increase in deepfake fraud cases globally, highlighting the need for more robust fraud prevention measures. The key is to stay one step ahead of fraudsters by
adopting AI-driven solutions that can detect anomalies in user behaviour and identify fraud patterns as they emerge. As the financial industry becomes increasingly digital, the threat landscape will continue to evolve. A reactive approach to fraud is no longer
sufficient; financial institutions must proactively monitor and defend against emerging threats.”
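The anomaly detection Goldman-Kalaydin describes can be illustrated with a minimal sketch. The example below uses scikit-learn's IsolationForest on two invented behavioural features (payment amount and transactions per hour); the feature names, numbers, and thresholds are illustrative assumptions, not drawn from any institution's real system.

```python
# Minimal sketch of anomaly detection on user behaviour with an
# isolation forest. All features and values here are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" account behaviour: payment amount (GBP) and
# transactions per hour for 500 observations.
normal = np.column_stack([
    rng.normal(50, 15, 500),   # typical payment amounts
    rng.normal(2, 1, 500),     # typical transaction rate
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# A burst of large payments in a short window, like the account-takeover
# case above where hundreds of transactions were authorised in an hour.
suspicious = np.array([[4000.0, 120.0]])
print(model.predict(suspicious))  # -1 flags an anomaly, 1 is normal
```

In production such a model would score transactions in real time and feed flagged activity to a human reviewer, in line with the human-in-the-loop approach discussed earlier.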
Can financial institutions learn from other industries that have embraced AI?
Some sectors have embraced these AI-powered tools more than others. Along with financial services, the automotive and healthcare industries have displayed considerable appetite. Hurst argued that these use cases “have largely focused on improving customer
service.” However, he stressed that there are more “lessons to be learned across industries.”
“One important example of this is in the healthcare sector,” he said. “Some healthcare providers have leveraged AI for predictive analytics, allowing them to deliver personalised medicine that improves and speeds up patient outcomes. The financial sector has largely focused its AI efforts on improving customer services, but this predictive capability can deliver more tailored and personalised customer recommendations, at a more rapid pace, to elevate offerings.”
Goldman-Kalaydin agreed, noting that financial services should leverage insights from sectors where AI is integral to innovation and risk management. “In fintech, AI-driven personalisation has set new standards for customer support and experience, by tailoring
financial products to individual needs,” he said. “The crypto industry uses AI for real-time transaction analysis, helping to monitor for suspicious payments and prevent fraud on decentralised platforms. Similarly, iGaming uses AI to detect risky behaviours
and ensure responsible gaming, an approach that could benefit financial services in identifying suspicious patterns. Adapting these industry practices can help financial institutions enhance security, efficiency, and customer engagement.”
Lord Christopher Holmes, Baron Holmes of Richmond, advised “looking across other industries, other markets, other economies and societies to gain the greatest real-time insights and assess their applicability for their own customers, colleagues and organisations.”
As the world of finance looks to the horizon for the next iteration of AI, or a new application to drive security, it may also benefit from a brief glance to the side – to see how other industries are putting this fledgling technology to use.