
MPE 2024: AI-driven misinformation and consumer education in fraud

In an exclusive interview at Merchant Payments Ecosystem 2024, Brittany Allen, trust and safety architect at Sift, spoke with Finextra about new challenges in the fraud space, and how the spread of misinformation on social media is a major concern with AI-driven technology.


With experience working in fraud departments at Etsy, Airbnb, and 1stDibs, Allen brings a wealth of knowledge on different aspects of fraud into the financial space. From e-commerce to luxury goods, she notes that there are more ways to commit fraud in the digital marketplace than one might imagine. Allen even recalls that while working in luxury fraud, her team managed to avoid being duped by Anna Delvey, the notorious con artist of Netflix fame.

Allen points out that a common trend in fraud at the moment is ‘refund’ or ‘return fraud’. She also says that consumer education is critical for e-commerce businesses and financial institutions, especially with scammers as active as they are now. Consumers are increasingly being drawn to fraud influencers, who participate in a fraud-as-a-service economy in which scammers share their skills for money.

“In the old model, they may go onto a website on the dark web, get some stolen credit cards and then be able to use those to buy something, and that's how they commit fraud. Whereas now, if they go to the deep web, on secure messaging apps like Telegram, they can connect to a wider audience. Maybe they will augment the sale of those credit cards with mentorships on how to do refund fraud, or by selling guides or ‘bibles’ on how to take advantage of a website and commit fraud there.

“We are also seeing more fraud chatter on the surface web, which is how they recruit people onto Telegram. For example, if you're a normal person who doesn't know much about fraud, you might be looking for a knockoff reproduction of an item on social media like TikTok, and fall into a channel where people are advertising refund fraud. Then that brings up more recommendations on your For You page about actual fraud with stolen payment methods and stolen accounts, and usually the recruiter has a Telegram link which takes people to those channels.”

Allen states that detecting this sort of fraud is where it becomes challenging for businesses. On TikTok, for example, influencers may not be talking about illicit content openly but include it in captions, banners within the video, text boxes on the screen, or any manner of other ways, which makes it more difficult to detect.

Moving on to the topic of AI and deepfake technology being used by scammers to dupe consumers, Allen says that attacks imitating a close friend or family member are less likely, considering the amount of effort needed to scam a small number of people. However, the imitation of celebrities or well-known individuals to gain funds, such as by asking for donations to charity, is a big threat.

She further details how fast misinformation can spread: “We are all going to have to be extra vigilant this election cycle. This is the first major US federal election with easy access to AI tools. There absolutely will be deepfakes of party candidates saying wild things, doing wild things, and it may or may not be generated by the opposing party, it may be generated by another government. There may be election interference that way from an external party, but it's going to be something that happens. Hopefully, the ability to detect it will outpace its spread, because I know there'll be some people who see it and they will not recognise that it's generative AI.”

Even the viral buzz around the doctored image of Princess Kate and her family released this week shows how closely people need to scrutinise which sources they treat as trustworthy on social media. Allen emphasised the importance of doing your own research, noting that most people initially scrolled past the photo, and it was only further investigation by news outlets that revealed it had been manipulated.

Speaking to what financial institutions and merchants should be doing to prevent payments fraud, Allen details: “Merchants and financial institutions see robust sets of data, but very different data from each other. A merchant will be able to see everything that particular customer has done on their site: they know browsing activity, items that have been added to favourites, items put in and taken out of a cart, browsing patterns, and more, so they are able to detect fraud. Financial institutions don't get visibility into any of that, but what they see is the rest of your spending patterns. They know where you have been, when you usually shop, how you interact with their app, and what kind of deals you might click through.

“The one thing they need to do is establish collaboration where possible, to be able to bridge that gap and strengthen their ability to assess risk, and the consumer's ability to assess risk. Sift is partnering with a particular issuing bank to do just that and share data, and that's extremely important.”

She adds that there is still a lot to be ironed out around the responsibility of financial institutions for preventing payments to scammers, and around when banks have the right to block payments, issues that regulators are still working through.

On the use of AI and machine learning for fraud detection and prevention, Allen says her biggest concern is ensuring that datasets are unbiased and that models are guarded against making discriminatory decisions.

“There is talk of a fraud-free future; I don't think that's possible. You would end up turning away good customers. You need that balance of risk tolerance, deciding how much fraud you are willing to allow in so as not to insult too many customers, so it will still be a discussion of prevention. I would really like the industry to move towards something that is more easily accessible and digestible for the merchant, and transparent, so they understand why a decision was made by a machine learning model or an AI-driven model, for example. Merchants will then be able to understand why that decision was made and what they can do to potentially change that decision for future transactions.”
