Deepfake technology is becoming increasingly common and widespread, particularly on social media. After all, it takes only a few dollars (or the equivalent in bitcoin) and a few minutes to create a convincing deepfake with the technology that’s available.
Just recently, the deepfake video President Macron used to promote the AI Summit in Paris divided opinion. While many thought the humour-led video was fun, others highlighted that even seemingly harmless deepfake videos come with risks. Paul McKay, an analyst at Forrester, said: “Normalising deepfakes in this way should not be encouraged as it continues the difficulty with telling what is real and what isn’t.”
Deepfake technology: cheap, easy and dangerous
The bottom line is that deepfakes blur the line between reality and fiction, which in the hands of fraudsters can be very dangerous. The surge of deepfake technology, fuelled by advances in artificial intelligence (AI) and specifically generative AI (GenAI), has introduced a new breed of threats that demand immediate attention.
The UK’s Advertising Standards Authority, for example, recently reported that fake advertisements featuring celebrities and public figures are the most common scam adverts appearing online – many encourage viewers to invest in illegitimate ‘get rich quick’ schemes. With the rise of AI-powered deepfakes, it has issued a warning that these adverts are more convincing, and more dangerous to consumers, than ever before.
With the proliferation of GenAI tools, fraudsters have the means to create hyper-realistic images, videos, and audio, making it increasingly difficult to discern between genuine and fake digital interactions. Popular face-swapping apps have only heightened these risks, challenging the foundations of trust online. There are no signs of this slowing down either – our research reveals many professionals in fraud prevention believe GenAI, deepfake biometrics, and generated documents will be the biggest trends in identity verification and fraud over the next three to five years.
As what’s real and what’s fake becomes harder to detect, businesses must remain vigilant and proactive in combating the sophisticated tactics employed by modern-day fraudsters.
The impact on identity verification
In the 2010s, mobile biometrics took off, quickly becoming a standard feature in smartphones. Fingerprint recognition was first, followed by facial recognition, making secure transactions and logins commonplace. With biometric security now mainstream, fraudsters have been forced to evolve, devising new methods to bypass these advanced systems. Deepfake technology is just the latest weapon they have in their arsenal.
The threat to businesses, particularly in the finance industry, has never been more pronounced. Deloitte reports over half of senior executives expect deepfake financial fraud to increase in frequency and scale within the next year.
As deepfake technology has advanced, it has exposed critical vulnerabilities in remote identity proofing systems, which are designed to verify the authenticity of IDs and the presence of their rightful owners. In this landscape, businesses need to be able to answer two critical security questions: is the identity document genuine, and is the person presenting it really its rightful owner, present at that moment?
How to protect your business against deepfakes
The good news is that while fraudsters are using AI to create convincing fake media, the same technology can be used to detect and prevent fraud. Just like deepfakes, these defensive tools are advancing and becoming more sophisticated at speed.
To protect your business in the era of deepfakes, you need to leverage the latest innovations, including liveness detection, deepfake media analysis, injection attack detection and document fraud checks.
Fraudsters are getting more sophisticated with presentation attacks that target biometric systems by using fake biometric data to mimic their victims. In facial recognition, this threat often comes in the form of deepfake videos or images, or even masks presented to the camera. These digital deceptions are displayed on screens, making them appear 'live' to the mobile or laptop camera.
To guard against this, liveness detection technology is essential. It ensures the biometric data being presented is from a real person and not a deepfake. Passive liveness detection, which checks for subtle signs like skin texture and blood flow, offers a quicker and more seamless solution compared to active detection methods.
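As a rough illustration of the passive idea, the sketch below scores a single frame for two classic screen-replay artefacts – periodic moiré energy and unnatural smoothness – using OpenCV. The scalings, thresholds and file name are illustrative only; production liveness systems rely on trained models over far richer cues such as skin texture and blood flow.

```python
# Toy passive-liveness heuristic. Real systems use trained models over skin
# texture, micro-motion and colour cues; this sketch only illustrates the
# idea of scoring one frame without asking the user to do anything.
import cv2
import numpy as np

def replay_suspicion_score(frame_bgr: np.ndarray) -> float:
    """Return a crude 0..1 score; higher = more likely a screen replay."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)

    # Re-photographed screens often show strong periodic (moiré) energy:
    # look for sharp peaks in the frequency spectrum away from DC.
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray.astype(np.float32))))
    h, w = spectrum.shape
    spectrum[h // 2 - 5:h // 2 + 5, w // 2 - 5:w // 2 + 5] = 0  # drop DC hump
    peak_ratio = spectrum.max() / (spectrum.mean() + 1e-9)

    # Natural skin keeps fine high-frequency texture; blurry re-captures
    # tend to have low Laplacian variance.
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()

    moire_term = min(peak_ratio / 5000.0, 1.0)    # arbitrary toy scaling
    blur_term = 1.0 - min(sharpness / 300.0, 1.0)  # arbitrary toy scaling
    return 0.5 * moire_term + 0.5 * blur_term

frame = cv2.imread("selfie.jpg")  # hypothetical input frame
if frame is not None:
    print(f"replay suspicion: {replay_suspicion_score(frame):.2f}")
```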
As deepfake technology advances, so must the defences against it. Deepfake media analysis can detect signs of manipulation in images and videos, such as pixel inconsistencies and synchronisation failures. This technology, accelerated by machine learning, provides a powerful defence against deepfake attacks.
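One long-standing, easy-to-demonstrate signal in this family is error level analysis (ELA): re-compress an image and highlight regions that respond differently, which is often a sign of post-capture editing. The Pillow-based sketch below is a simplified illustration rather than a deepfake detector on its own; commercial analysis combines many such signals with machine-learned classifiers.

```python
# Error level analysis (ELA): re-save a JPEG and diff it against the
# original. Regions edited after the last save often recompress
# differently and light up in the difference image.
from PIL import Image, ImageChops
import io

def error_level_image(path: str, quality: int = 90) -> Image.Image:
    original = Image.open(path).convert("RGB")
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    resaved = Image.open(buf)
    diff = ImageChops.difference(original, resaved)
    # Stretch the (usually faint) differences so they become visible.
    extrema = diff.getextrema()
    max_diff = max(channel_max for _, channel_max in extrema) or 1
    return diff.point(lambda px: min(255, px * 255 // max_diff))

# Hypothetical file names for illustration.
error_level_image("suspect_frame.jpg").save("suspect_frame_ela.png")
```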
Unlike presentation attacks, which show manipulated video on a second screen, injection attacks bypass the camera entirely: fraudsters hack the device's camera hardware or software and replace its signal with deepfake footage from an external or virtual camera, feeding it directly into the authentication process.
You can protect against this type of attack in two ways. First, deepfake media analysis can examine image or video frames for signs of impersonation, such as pixel structure inconsistencies, lighting issues, or lip movement mismatches.
The second method involves injection attack detection. This sophisticated security technique analyses camera hardware and software for evidence of non-standard cameras or system code alterations that might suggest a 'man-in-the-middle' attack, thereby ensuring the integrity of the identity proofing process.
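As a simplified taste of one such check, the Linux-only sketch below inspects video4linux device metadata for names associated with common virtual-camera software. The blocklist is illustrative and trivially evadable; real injection attack detection layers in driver attestation, signed capture paths and frame-level forensics.

```python
# One simplistic layer of injection-attack detection: flag capture devices
# whose advertised names match known virtual-camera software. Linux-only
# sketch reading video4linux metadata from sysfs; the blocklist is
# illustrative, not exhaustive.
from pathlib import Path

KNOWN_VIRTUAL_CAMERAS = (
    "obs virtual camera",
    "v4l2loopback",
    "manycam",
    "droidcam",
)

def suspicious_cameras() -> list[tuple[str, str]]:
    hits = []
    for name_file in Path("/sys/class/video4linux").glob("*/name"):
        device = name_file.parent.name  # e.g. "video0"
        label = name_file.read_text().strip().lower()
        if any(v in label for v in KNOWN_VIRTUAL_CAMERAS):
            hits.append((device, label))
    return hits

for device, label in suspicious_cameras():
    print(f"/dev/{device}: '{label}' matches a known virtual camera")
```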
Several international standards, such as ISO/IEC 30107 for biometric presentation attack detection, now address how these defences should be tested and evaluated, and newer specifications are extending that work to injection attacks.
The dark web has become a marketplace for counterfeit identity documents, and GenAI has made it easier than ever for criminals to create hyper-realistic fake IDs. Secure identity proofing systems can apply detection techniques that pick up suspicious anomalies, inconsistencies or absent security features – flagging face swapping, text tampering and questions of document presence, whether a physical ID is shown to the camera or a digital fake is inserted into an image or video.
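Some document checks are fully specified and easy to demonstrate. Passport machine-readable zones (MRZs), for instance, carry ICAO 9303 check digits over fields such as the document number, and crude tampering often breaks them. The sketch below verifies one such digit; it is one small layer, not a substitute for the visual forensics described above.

```python
# ICAO 9303 check-digit verification for passport MRZ fields. Tampering
# with a document number or date of birth in the machine-readable zone
# usually breaks these digits, so this cheap check complements forensics.
def mrz_check_digit(field: str) -> int:
    weights = (7, 3, 1)
    total = 0
    for i, ch in enumerate(field):
        if ch.isdigit():
            value = int(ch)
        elif ch.isalpha():
            value = ord(ch.upper()) - ord("A") + 10  # A=10 ... Z=35
        else:
            value = 0  # '<' filler counts as zero
        total += value * weights[i % 3]
    return total % 10

def mrz_field_valid(field: str, check_digit: str) -> bool:
    return check_digit.isdigit() and mrz_check_digit(field) == int(check_digit)

# Worked example from the ICAO 9303 specification: document number
# "L898902C3" carries check digit 6.
print(mrz_field_valid("L898902C3", "6"))  # True
```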
Financial institutions should adopt a multi-layered approach to combat deepfake fraud. Combining biometric authentication, identity proofing and data verification strengthens each layer of defence. While fake documents may look genuine, their data can be cross-checked: using a global data intelligence network ensures consistency across document numbers, dates, names and addresses without compromising privacy. This comprehensive strategy prevents criminals from exploiting gaps in identity security.
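A minimal sketch of that cross-checking idea, assuming a hypothetical reference record returned by such a network (the field names and normalisation rules here are illustrative):

```python
# Minimal sketch of cross-source data verification: compare fields
# extracted from the submitted document against an independent reference
# record. All names and normalisation rules are illustrative.
from dataclasses import dataclass

@dataclass
class IdentityRecord:
    document_number: str
    date_of_birth: str  # ISO 8601, e.g. "1990-04-01"
    full_name: str

def normalise(value: str) -> str:
    return " ".join(value.split()).casefold()

def fields_consistent(submitted: IdentityRecord,
                      reference: IdentityRecord) -> list[str]:
    """Return the names of fields that disagree between the two sources."""
    mismatches = []
    for field in ("document_number", "date_of_birth", "full_name"):
        if normalise(getattr(submitted, field)) != normalise(getattr(reference, field)):
            mismatches.append(field)
    return mismatches

submitted = IdentityRecord("L898902C3", "1990-04-01", "Anna Maria Eriksson")
reference = IdentityRecord("L898902C3", "1990-04-02", "Anna Maria Eriksson")
print(fields_consistent(submitted, reference))  # ['date_of_birth']
```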
The era of deepfakes is here to stay. From now on, we’ll only see them get more advanced and harder to spot, especially with the naked eye. Businesses need to stay alert and proactive in tackling these threats and ensure they leverage the right technology to fight back against the sophisticated nature of deepfakes.