Staying ahead of cybercrime: The importance of AI-based fraud prevention


Contributed

This content is contributed or sourced from third parties but has been subject to Finextra editorial review.

Since the emergence of ChatGPT in November 2022, we have seen the increased integration of artificial intelligence (AI) across our personal and professional lives. From deepfakes to biometric bypasses, the ways fraudsters are leveraging AI-powered technology are evolving rapidly. This article explores how cybercrime is increasingly leaning on cutting-edge innovations, and how, in turn, financial institutions can fight back by bolstering their long-term security and fraud detection measures.

AI has taken the criminal underbelly of financial services by storm – as much as it has its legitimate counterpart. There has even emerged an underground network of marketplaces and AI-enabled dark web search engines, providing bad actors access to polymorphic malware creation engines, biometric bypasses, AI chatbots configured for fraud, and more.

The democratisation of generative AI (GenAI) adds yet another layer of complexity to the picture, thanks to the technology’s ability to dodge detection during phishing attempts, by facilitating – among other things – the cloning of voice and video images via deepfakes.

Analysis from Visa has found that these kinds of social engineering attacks are increasingly targeted at alternative and fast-growing payment rails – such as account-to-account payments or crypto – since they can be subject to less mature safeguards. In Visa’s June 2023 survey, ‘The Impact of APP Scams’, we found that one in three UK consumers has fallen victim to authorised push payment (APP) scams. According to UK Finance, total APP fraud losses were almost £460 million in the same year. These figures reveal the sheer scale of the challenge.

To respond to this growing threat, financial institutions must stay ahead by adopting advanced AI-enabled counter-measures – encompassing onboarding, authentication, authorisation and post-transaction payments, across both card and account-to-account/real-time payments – ultimately ensuring the flow of business remains uninterrupted.

So, what should institutions do in the next 12 months to scale this mountain? How are consumer expectations and new payment rails impacting the landscape? This article seeks to answer these questions and more, while considering how the financial services industry should respond to, and harness, AI in 2025 and beyond.

A tectonic fraud landscape: Cybercrime and AI

Despite what some sensational headlines may suggest regarding the rise of fraud, Visa is seeing combined fraud rates on its network fall in Europe, year on year. This context is critical, because the payments ecosystem remains strong, and card payments continue to be one of the safest and most secure ways for customers and merchants to transact.

From a fraud and threat perspective, however, risks are evolving. The most significant of these is the increasing ability of cybercriminals to share information over the internet – be it to spotlight money-making opportunities, successful fraud practices, or key vulnerabilities in value chains. While information sharing has taken place in the financial crime underworld for some decades, the openness of communication is a very contemporary issue – facilitated by a rising number of peer-to-peer (P2P) and social media channels, coding forums, and other dark web arenas, which in some cases actively enable fraudsters.

But the bad actors are now benefitting from better tools and technologies, as well as better levels of communication and organisation. From automated attacks, to bots, to machine learning (ML) models that hone tactics over time, cybercriminals’ stock of strategies is increasingly rich and varied. In 2025 and beyond, the fraud landscape is likely to be defined by the global rollout of GenAI, which can be used to help bad actors counter many of financial institutions’ traditional defences, and target consumers directly.

Perhaps the most challenging issue here is, again, access. While most open-source AI models are bound up in safety frameworks that ensure privacy and the rule of law are observed, cybercriminals have worked out how to tweak them for nefarious ends. These new, modified models – with all safety guardrails stripped away – are arguably the most challenging development for both the private and public sectors to counter.

Notwithstanding its benefits to operations, customer service, and product development, the impact of GenAI on the ground is that it has, in some cases, enabled the democratisation of fraud capabilities. In other words, bad actors no longer need to be specialists. They can now rent services via forums on the dark web (at negligible monthly rates) and use them to, for instance, produce effective email phishing content, distribute short-message-service (SMS) scam campaigns, or automate processes for creating the command-and-control infrastructures that generate malware-dropper PDFs.

Historically, fraudsters had no choice but to approach three or four specialists to access such services – covering everything from social engineering to coding and intrusion. Cybercriminals therefore once banded together into organised groups, comprising departments that each specialised in one area, be it malware development, social engineering, or money mule management. These departments would then coalesce around a range of targets, based on identified vulnerabilities. The future of cybercrime, by contrast, will be characterised by centralised access to a gamut of cutting-edge offensive tools and the automation of activity, enabling more disaggregated attacks.

Financial institutions vs fraudsters: The game of cat and mouse

The good news is that the financial sector’s payments ecosystem is conducting robust work, and wields equally sophisticated tools, to combat cybercrime. For its part, Visa has invested over $11 billion in technology worldwide, serving to reduce fraud rates and enhance network security. Such hyper-vigilance forces bad actors to constantly work around institutions’ defences – obliging them to focus on the very end, and most vulnerable part, of the value chain: merchants, their employees, merchant terminals, and consumers themselves.

The preferred tactic, on the part of fraudsters, involves monitoring the engagement of their campaigns, or click-through rates – be it for target populations, geographies, types of networks, or specific languages used in the phishing outreach – and doubling down on the most successful iterations. When a tactic becomes tired or no longer yields adequate returns, the criminal outfit then pivots to the next-best strategy. This is a tried-and-tested mechanism that has been practised for decades; only the technologies used to action it have advanced.

Our recommended response is three-pronged. First, it must be acknowledged that there are more ways to pay, and be paid, globally, than ever before. While this multi-rail system has boosted competition and service quality, it has also broadened the attack surface. As such, multi-rail security strategies are now imperative – ensuring an organisation can monitor all payment forms in real time. This may entail risk-based authentication and real-time scoring for card, account-to-account, and other payments.
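The risk-based authentication and real-time scoring described above can be sketched in simplified form. The rail names, feature weights, and thresholds below are illustrative assumptions for this article, not any institution’s actual model:

```python
from dataclasses import dataclass

# Hypothetical per-rail baseline risk: less mature rails start from a
# higher baseline, reflecting the article's point about newer rails
# having less mature safeguards. Values are illustrative only.
RAIL_BASE_RISK = {"card": 0.10, "account_to_account": 0.25, "crypto": 0.35}

@dataclass
class Payment:
    rail: str
    amount: float
    new_payee: bool
    device_trusted: bool

def risk_score(p: Payment) -> float:
    """Combine the rail baseline with simple behavioural signals."""
    score = RAIL_BASE_RISK.get(p.rail, 0.50)  # unknown rails treated cautiously
    if p.amount > 1000:
        score += 0.20
    if p.new_payee:
        score += 0.20
    if not p.device_trusted:
        score += 0.15
    return min(score, 1.0)

def authentication_step(p: Payment) -> str:
    """Map the real-time score to a risk-based authentication outcome."""
    s = risk_score(p)
    if s < 0.30:
        return "frictionless"    # approve without added friction
    if s < 0.60:
        return "step_up"         # e.g. biometric or one-time-passcode challenge
    return "hold_for_review"     # route to a fraud-operations queue
```

In practice, such scoring runs inside the authorisation flow across every rail the organisation supports, so that a low-risk card payment passes frictionlessly while an unusual account-to-account transfer triggers step-up authentication.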

Second, it is vital that – just as every cybercrime is tailored to the victim – every solution is tailored to the client. Indeed, banks, merchants and payment service providers must have both global capabilities that run across rails and bespoke measures to respond to and manage their own environments, business lines, customer bases, and vulnerabilities. Mechanisms should be leveraged that test an organisation’s unique payments security setup, in order to mitigate the specific threats it faces.

Finally, and most importantly, these tailored solutions must not stand alone – they should be combined to cumulative effect. It is useful here to think of security as multi-layered – from network segmentation measures, to varying trust controls across the employee base, which exist within the perimeter; to all the cyber security, anti-money laundering (AML), Know-Your-Customer (KYC), and anti-fraud strategies and technologies that extend beyond it. The payment lifecycle should be protected in a similar way, layering security solutions across account-to-account transactions, business-to-business (B2B), business-to-consumer (B2C), card, crypto, and so on. If a successful fraud attempt does take place, compensation measures for merchants and end-users should be efficient, simple and automated. 
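The layering principle above can be illustrated with a minimal sketch, in which independent checks – a device-trust signal inside the perimeter, an AML/KYC screen, and an anti-fraud velocity check beyond it – combine to cumulative effect. The layer names, weights, and thresholds are hypothetical assumptions:

```python
# Illustrative layered ("defence in depth") transaction screening.
# All signals, weights, and thresholds are assumptions for this sketch.

def device_check(txn: dict) -> float:
    # Inside-the-perimeter trust signal: is this a known device?
    return 0.0 if txn.get("device_known") else 0.3

def aml_screening(txn: dict) -> float:
    # AML/KYC layer: is the payee on a watchlist?
    return 0.4 if txn.get("payee_watchlisted") else 0.0

def velocity_check(txn: dict) -> float:
    # Anti-fraud layer: bursts of payments are a common mule pattern.
    return 0.3 if txn.get("txns_last_hour", 0) > 5 else 0.0

LAYERS = [device_check, aml_screening, velocity_check]

def decide(txn: dict) -> str:
    # Layers combine to cumulative effect; no single check decides alone.
    total = sum(layer(txn) for layer in LAYERS)
    if total >= 0.4:
        return "block"
    if total > 0.0:
        return "review"
    return "approve"
```

The design point is that each layer is weak in isolation – a single flag only sends the transaction to review – but two or more flags together block it, which is what combining tailored solutions “to cumulative effect” means in practice.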

As we look beyond 2025, GenAI must become a key part of institutions’ arsenals, to inform and deepen transaction risk scoring – for both the payee and payer – as well as facilitate real-time authentication models. It may also be deployed for internal operations, enhancing the speed at which development, coding, and the training of account-attack-intelligence models can take place. Collectively, these strategies will ensure that payment providers can stay ahead of the ever-evolving threat landscape.

Securing the long-term health of payments

Despite the challenges discussed in this article, card payments remain one of the most secure ways to pay, send, or receive money, globally. Given today’s rapidly shifting and highly sophisticated fraud landscape, a multi-rail, multi-layered and – most importantly – tailored approach to payments security is imperative for long-term resilience.


Sponsored

This content has been created by the Finextra editorial team with inputs from subject matter experts at the funding sponsor.