The FinCrime outlook 2024: Money mules, AI, and new PSR rules

Níamh Curran

Senior Reporter, Finextra

Financial crime rises every year as criminals hone their tools with new technologies and find new ways to circumvent controls. As cybercriminals grow more sophisticated, users should be aware of the threats they pose.

So what kind of developments might we expect from the coming year, and what have we learned from the mistakes made in 2023?

To find out I spoke with Patrick Craig, UK head of financial crime, EY; Ivan Heard, global head of fraud, Quantexa; Gabriella Bussien, CEO, Trapets; Henry Balani, global head of industry and regulatory affairs, Encompass Corporation; and Adam Davies, VP, FICO Fraud and Identity Solutions.

How did the 2023 predictions fare?

Last year I was told that the types of financial crime would not change much, but that new technologies would offer criminals a different angle of attack. The particular areas of concern were the use of OpenAI’s ChatGPT for nefarious purposes and the increased use of deepfakes.

For deepfakes, those fears appear to have been well founded. I wrote about this towards the end of last year, and the issue will continue to garner interest throughout 2024, given deepfakes’ potential use in elections and as the deepfakes bill makes its way through Congress.

Research from Sumsub found a tenfold increase in the number of deepfakes detected globally between 2022 and 2023. However, some experts I have spoken to tell me these fears are slightly overhyped, and that using the technology for crime remains too costly and too technical for most criminals. Admittedly, Sumsub sells deepfake detection technology, but that also puts it in a good position to have statistics on how many it is seeing.

For generative AI, the predictions also appear to have proved true. Heard comments: “In 2023 we saw criminal enterprises jailbreaking publicly available generative AI tools to augment fraudulent activity. Researchers at Carnegie Mellon University found a formula to circumvent almost every class of large language models and cause chatbots to generate objectionable behaviours. Which means that, despite the safeguards in place, generative AI has eliminated many of the traditional barriers to implement a scam.”

He adds: “The ‘lesson’ in all of this is that access to the toolkit to commit fraud is available to anyone with a computer.”

Bussien agrees on this point: “Generative AI has evolved at a speed no one could have anticipated, and financial criminals have quickly integrated it into their MO. Staying one step ahead of this development is critical for any financial institution.”

Romance scams were another area where a rise was expected. My colleague Sehrish wrote a detailed account of the rise in romance scams and the role AI and deepfakes have played in the area.

What changed in financial crime in 2023?

Many people became more financially vulnerable last year as the cost of living crisis came to a head. This shifted the demographics of who was targeted by certain scams.

This is something Craig observed: “We also saw the demographics targeted by financial fraudsters change in 2023. Individuals experiencing financial difficulty are increasingly being recruited as money mules, and lower-skilled workers looking for ‘get rich quick’ opportunities are growing targets.”

Yet the 2023 development most likely to shape 2024 is the change in reimbursement liability originating in the UK with the Payment Systems Regulator’s (PSR) new rules. These rules split liability for APP scams equally between sending and receiving banks, creating a stronger incentive to prevent fraud and scams at the root. While this applies only to the UK at the moment, it will be interesting to see how it is viewed globally and the impact it may have.

Davies adds an observation on the significance of this new legislation for other sectors: “There is also the wider ecosystem to consider; most scams originate on tech platforms such as marketplaces and social media, as well as via telcos. In Australia, these learnings have been considered as they look to spread their liability model to encompass these organisations.”

A number of those I spoke to thought that defences against financial crime lagged in 2023.

Craig comments that “in 2023 many financial services firms continued to under-invest in fraud prevention, meaning some have struggled to keep pace with evolving threats.”

Bussien adds to this sentiment: “We saw that financial institutions of all pedigrees are still lagging in the fight against financial crime, and even making basic mistakes. In the UK, for example, only one in five financial services companies always check whether new customers are under sanction or are Politically Exposed Persons.”

What scams and fraud are expected in 2024?

From speaking to experts, my takeaway has been that the threat posed by generative AI will likely continue to develop throughout this year as the technology becomes even more widely available. At the same time, increasing regulatory pressure will add to the need for financial institutions to be proactive about fraud.

Craig combines the concerns around generative AI and deepfakes: “Generative AI is fuelling growth in deepfake technology, which is expected to have a big impact on financial fraud in 2024. Fraudsters have quickly adopted deepfake technology to replace more traditional forms of identity fraud with sophisticated impersonation and social engineering scams.”

He points in particular to this technology’s use in social engineering: “LoveGPT is a romance scam tool that creates fake dating profiles and interacts with victims, bypassing CAPTCHA controls whilst anonymising access using proxies and device spoofing. Tools such as this enable fraudsters to create quick, emotional bonds with victims which they then exploit to try and elicit money in the guise of needing financial help for something, such as a medical emergency. Once money is sent, the romance scammer disappears.”

Bussien describes how these technologies might be used in 2024: “AI-powered techniques will help criminals create fake content to deceive security systems and carry out identity theft or account takeovers. We expect those strategies to include impersonating people by generating deep-faked facial images and videos, and voice cloning to bypass biometric authentication systems or to trick FI employees.”

She adds that generative AI may also be used to create synthetic identities: “For example, we will likely see more criminals attempt to use such accounts as synthetic money mules, essentially a series of shell accounts through which money is transferred for laundering purposes.”

In response to the regulatory landscape, Heard comments that “the challenge for banks is transitioning from reactive approaches to actively searching for suspect accounts and preventing their use”.

Actions to take in 2024

Davies comments that, looking forward, financial institutions should “take a holistic approach to technology when tackling scams, that doesn’t focus on just one part of the scam’s lifecycle.”

Generative AI may be a scammer’s tool, but it can also be a tool of defence. Craig states: “Firms can integrate deepfake detection into prevention controls, and incorporate advanced machine learning and pattern recognition to detect – and in some cases predict – fraud. While machine learning and AI have key roles to play in detecting financial fraud, firms must also ensure their people are trained to understand how to work alongside these new technology solutions.”
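
As a toy illustration of the kind of machine-learning pattern recognition Craig refers to, the sketch below scores transactions for anomalies with scikit-learn’s IsolationForest. The features, data, and thresholds are invented for the example; real deployments use far richer signals and calibration.

```python
# Toy sketch: unsupervised anomaly scoring of transactions with an
# isolation forest. Features and data are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

# Columns: amount (GBP), hour of day, payee seen before (1/0).
normal = np.column_stack([
    np.random.default_rng(0).normal(60, 20, 200),   # typical amounts
    np.random.default_rng(1).integers(8, 22, 200),  # daytime activity
    np.ones(200),                                   # known payees
])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# A large night-time payment to a never-seen payee should stand out.
suspect = np.array([[2500, 3, 0]])
print(model.predict(suspect))            # -1 flags an anomaly
print(model.decision_function(suspect))  # more negative = more anomalous
```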

Improving KYC was another theme among respondents. Bussien argues that while this is already a legal requirement, financial institutions should also consider multi-factor authentication, multi-level approval flows, geolocation data, and real-time risk assessment, as in the sketch below.
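
As a rough illustration of how the signals Bussien mentions might feed a real-time risk assessment, here is a minimal sketch; the signals, weights, and escalation threshold are assumptions for illustration, not a recommended calibration.

```python
# Minimal sketch of a real-time session risk score combining the kinds
# of signals Bussien mentions. Weights/threshold are illustrative only.
from dataclasses import dataclass

@dataclass
class SessionSignals:
    mfa_passed: bool           # multi-factor check succeeded
    geo_matches_profile: bool  # geolocation consistent with history
    sanctions_hit: bool        # hit on sanctions/PEP screening
    device_known: bool         # device seen on this account before

def risk_score(s: SessionSignals) -> float:
    """Return a 0..1 risk score; higher means riskier."""
    score = 0.0
    if not s.mfa_passed:
        score += 0.35
    if not s.geo_matches_profile:
        score += 0.20
    if s.sanctions_hit:
        score += 0.40   # screening hits should dominate the score
    if not s.device_known:
        score += 0.15
    return min(score, 1.0)

session = SessionSignals(mfa_passed=True, geo_matches_profile=False,
                         sanctions_hit=False, device_known=False)
if risk_score(session) >= 0.3:   # illustrative escalation threshold
    print("escalate for multi-level approval / manual review")
```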

Balani adds to this sentiment: “Fintech and regtech will be critical, presenting technology-driven opportunities that will create options for dealing with complex data sets and recommending decisions in response to compliance obligations and as part of the KYC process across financial services.”

He continues: “The time is now for financial institutions to utilise Corporate Digital Identity (CDI)-driven processes. CDI incorporates real-time data and documents from authoritative global public data sources and private customer information to create and maintain digital risk profiles.”

Heard offers network analytics and composite AI as solutions: “This approach allows analysts to look at the payment flows across the network to identify known mule fraud network activity. Any of these indicators when viewed in isolation can generate large numbers of misleading signals and create false alerts. But by viewing these accounts as a network and calibrating the analysis, it is possible to build task-specific, real-time views of the entities and their connections in your data.”
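
To make the network view concrete, here is a minimal sketch in Python using the networkx library. The account names, toy payments, and flagging rule are illustrative assumptions, not Quantexa’s actual method; it shows only why fan-in followed by rapid pass-through is easier to spot on a graph than transaction by transaction.

```python
# Minimal sketch: flag possible mule accounts by viewing payments as a
# directed graph rather than as isolated transactions. Toy data and an
# illustrative rule; not a production model.
import networkx as nx

# Each tuple is (sender, receiver, amount) from a payments feed.
payments = [
    ("acct_A", "mule_1", 900), ("acct_B", "mule_1", 950),
    ("acct_C", "mule_2", 880), ("mule_1", "hub", 1800),
    ("mule_2", "hub", 860), ("acct_D", "acct_E", 40),
]

G = nx.DiGraph()
for sender, receiver, amount in payments:
    if G.has_edge(sender, receiver):
        G[sender][receiver]["amount"] += amount
    else:
        G.add_edge(sender, receiver, amount=amount)

# A single account's in-flows may look normal in isolation; the network
# view exposes fan-in (many senders) plus near-total pass-through.
for node in G.nodes:
    fan_in = G.in_degree(node)
    inflow = sum(d["amount"] for _, _, d in G.in_edges(node, data=True))
    outflow = sum(d["amount"] for _, _, d in G.out_edges(node, data=True))
    # Illustrative rule: several distinct senders, most funds passed on.
    if fan_in >= 2 and inflow > 0 and outflow / inflow > 0.9:
        print(f"possible mule account: {node} (fan-in={fan_in})")
```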

Craig concludes with a warning about the regulatory changes: “To respond to the regulatory changes this year, firms will need to develop holistic responses across their organisations, ensuring fraud, anti-money laundering and cyber teams work closely together. At a minimum, firms should develop plans for readiness assessments, ensuring a thorough understanding of fraud documentation and governance structures that may be used to demonstrate compliance. Firms should also engage with industry bodies, regulators, and cross-industry initiatives to help educate and share learning on evolving threats with the wider financial ecosystem.”
