The agentic bank: Where innovation meets risk

Tomorrow’s banks are transforming the way they operate. Griffin, a UK-based banking-as-a-service fintech, recently announced plans to integrate autonomous artificial intelligence (AI) agents into its core banking infrastructure, while traditional bank ING has already confirmed the incorporation of agentic AI into its operations.

The prospect of machines running banks may sound nightmarish, but these aren’t the chatbots that litter every customer service page. Agentic AI describes autonomous systems that can perform complex tasks without a human in the loop, and ING COO Marnix van Stiphout has highlighted its value across marketing, compliance, and monitoring.

True agentic AI offers banks a bounty of never-before-possible opportunities, though they must also beware its risks.

What is agentic AI?

Agentic AI refers to AI that operates autonomously, enabling it to make decisions and perform tasks without human involvement. For banks, this means AI agents will be able to handle complex financial crime tasks, detect and act on fraud risks, manage credit risks, and resolve customer service enquiries.

Infrastructure can further enhance the value of agentic AI. Griffin’s Model Context Protocol (MCP) server will mediate interactions between its core banking systems and customers’ AI models, supplying agents with relevant data, rules, and business context. This enables agents to act consistently and intelligently over time, coordinating multi-step actions, and supports reliable end-to-end decision-making based on responses from multiple systems.
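To make the pattern concrete, here is a minimal, library-free Python sketch of the idea. It is not Griffin’s actual API: the tool names, fields, and policy values are invented for illustration. A registry of “tools” stands in for the context an MCP server exposes, and an agent function coordinates a multi-step decision from their responses.

```python
# Hypothetical sketch of the MCP pattern: a "server" exposes banking
# context as named tools, and an agent composes a multi-step decision
# from their responses. All names and values are illustrative.

TOOLS = {}

def tool(fn):
    """Register a function as a callable tool, keyed by its name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_customer(customer_id):
    # In a real deployment this would query the core banking system.
    return {"id": customer_id, "kyc_status": "verified", "risk_score": 12}

@tool
def get_policy(policy_name):
    # Business rules the agent must act within.
    return {"max_risk_score": 50, "kyc_required": True}

def agent_open_account(customer_id):
    """Multi-step decision: gather context from tools, then act within policy."""
    customer = TOOLS["get_customer"](customer_id)
    policy = TOOLS["get_policy"]("account_opening")
    if policy["kyc_required"] and customer["kyc_status"] != "verified":
        return "reject: KYC incomplete"
    if customer["risk_score"] > policy["max_risk_score"]:
        return "reject: risk too high"
    return "approve"
```

Because every decision routes through the same tools and policy data, the agent’s behaviour stays consistent across interactions and remains auditable.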

Within a fraud context, this means the nuance of past outcomes, such as transactional patterns, influences the decisions AI agents make in the present, from opening an account to performing know-your-customer (KYC) checks. This ensures agentic AI can act within policy and maintain transparency.

The value of agentic AI

Agentic AI has the potential to hyper-personalise banking services for customers, including anticipating their needs to suggest products that fit with their circumstances, over chat, voice, app, and more.

For example, identifying rent transactions could prompt agentic AI to suggest opening an individual savings account and provide the customer with a mortgage offer once they meet the deposit requirements. If the customer sets saving objectives, the agent could also provide reminders to curtail spending, such as highlighting particularly expensive regular brunches that may set back homeownership.
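As an illustration only (the transaction categories, thresholds, and product names here are hypothetical, not any bank’s real logic), the rent-to-mortgage example above might reduce to rules like:

```python
# Illustrative sketch: spot a recurring rent payment pattern and
# suggest next steps toward a deposit goal. Categories, thresholds,
# and product names are invented for illustration.

def monthly_rent(transactions):
    """Return the recurring rent amount, if a pattern (3+ payments) exists."""
    rents = [t["amount"] for t in transactions if t["category"] == "rent"]
    return rents[0] if len(rents) >= 3 else None  # a pattern, not a one-off

def suggest(transactions, savings_balance, deposit_target):
    """Suggest products for a renter working toward a mortgage deposit."""
    if monthly_rent(transactions) is None:
        return []
    suggestions = ["open_isa"]  # regular renter: suggest a savings vehicle
    if savings_balance >= deposit_target:
        suggestions.append("mortgage_offer")  # deposit requirement met
    return suggestions

txns = [{"category": "rent", "amount": 950}] * 3
```

The same structure extends naturally to spending reminders: a rule comparing recurring discretionary spend against the savings objective would trigger the brunch-curtailing nudge.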

For banks, agentic AI offers the opportunity to achieve significant efficiencies by automating ID verification, KYC checks, and risk management. AI agents also excel at tasks that often involve humans papering over the cracks in fragmented systems.

For fraud, this includes detecting subtle, evolving fraud patterns using both historical and real-time context, and reconciling fraud alerts in a manner that is fully compliant with the bank’s policy framework. For example, Nasdaq’s anti-financial crime unit, Verafin, has launched agentic AI to help banks with anti-money laundering compliance, enhanced screening, and due diligence processes.

Weighing up the risks

However, much like humans, AI agents are not infallible. When misconfigured or poorly monitored, they can pose cybersecurity threats, a risk exacerbated by their broad system access and autonomous decision-making. Fraudsters, who may themselves be harnessing agentic AI for malicious purposes, can take advantage of these digital bank employees, which are stocked with data and able to act of their own accord.

Criminals can trick agents into approving fraudulent transactions or unwittingly exposing sensitive data, or hijack them outright through adversarial inputs. This means that banks must consider the operational and reputational risks of using agentic AI, and ensure safeguards are in place to protect their business and customers.
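One common safeguard is a hard policy gate between what an agent proposes and what actually executes, so that even a manipulated agent cannot exceed its mandate. A hedged sketch, with invented action names and limits:

```python
# Sketch of a policy gate: agent-proposed actions are checked against
# an explicit allowlist before execution, denying by default anything
# outside policy. Action names and limits are illustrative.

POLICY = {
    "transfer": {"allowed": True, "max_amount": 1000},
    "export_customer_data": {"allowed": False},  # never autonomous
}

def guard(action, params):
    """Return (approved, reason); deny anything not explicitly permitted."""
    rule = POLICY.get(action)
    if rule is None or not rule.get("allowed", False):
        return (False, f"action '{action}' not permitted autonomously")
    limit = rule.get("max_amount")
    if limit is not None and params.get("amount", 0) > limit:
        return (False, "amount exceeds autonomous limit; escalate to a human")
    return (True, "within policy")
```

The deny-by-default design matters: an adversarial prompt may change what the agent asks for, but it cannot change what the gate permits.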

Pioneering banks are right to highlight the opportunities offered by an agentic bank. The benefits for banks and their customers are substantial, but the legal, regulatory, and financial crime risks will require new tech-centric approaches to governance and compliance.

The proliferation of fraud, with criminals stealing more than £1.17 billion in the UK alone in 2024, means it’s inevitable that a bank using agentic AI will find itself going toe-to-toe with an agentic fraudster, money launderer, terrorist, or human trafficker. Striking a balance between risk and opportunity is essential, and collaboration across the entire ecosystem, from fintechs to regulators, will be vital in proactively keeping customers safe.

Governing autonomy

The shift toward agentic AI by banks reflects a broader interest from Big Tech, and adoption will only continue to grow. Banks can benefit from improved personalisation, automation, and fraud prevention.

However, governance is necessary to mitigate the risks associated with agentic AI: not just oversight from banks, but the implementation of proactive regulations to allow financial institutions to reap the rewards.

Innovation in banking is accelerating, but so are the risks. In this competitive, high-stakes environment, speed matters, but vigilance matters more.

External

This content is provided by an external author without editing by Finextra. It expresses the views and opinions of the author.
