Why the Zoom cat filter raises banking security questions

Since the onset of the pandemic, digital tools have been a means of survival across many industries. Remote working, distance learning, online shopping, virtual GP appointments, and even digital court hearings have all enabled us to continue functioning as a society – albeit with a few changes here and there. 

However, the shift has been far from seamless. One example is the recent viral video of a ‘cat’ passing ‘judgement’ during a virtual court hearing, after a lawyer joined the Zoom call from his secretary’s computer with a filter still switched on. While the instant meme brought joy to many, it highlighted that our digital interactions may not be as secure as they seem. 

Could video calls, a digital tool widely accepted as secure, raise questions about banking security? How would this threat, if never unmasked, impact our ability to work, borrow and buy securely? Let’s find out.

The fraudster behind the filter

This incident – ‘catcalling’, if you will – should not be dismissed as a one-off. In fact, during the first nine months of the pandemic, a quarter of Brits and 23% of Americans compromised their security at home by sharing their work passwords with a flatmate, partner, or family member amid increased home-schooling, remote working, and socialising. 

According to research by SailPoint, our lockdown cyber hygiene has slipped – which isn’t making the already high risk of fraud any easier to manage. We trust our eyes the most, which means videos in particular can create a false sense of security. 

With our social interactions mostly reduced to messages and video calls, what does this mean for retail banks, which are approving more mortgages, loans and new customers online than ever before? Or for corporate banks, where video calls are now the mainstay of relationship managers and of corporations large and small? With billions in hard-earned cash on the table, could video calls be the biggest fraud risk yet? 

They just might be. Banks and fintechs have already started forming partnerships to squash the use of spoofed videos – found to be a favourite new trick of fraudsters within months of the pandemic’s start – as ‘deepfake’ crimes remain a top consumer worry. It’s not without reason: deepfakes and synthetic identities are likely to open the door to the next wave of identity fraud. 

A portal for deepfakes?

While fraud rises, businesses can’t stand still on security. With digital channels, including video calls, we often rely on the safety of the channel itself – end-to-end encryption, for example – rather than on how our identity is verified within it. 

‘Frankenstein fraud’, or synthetic identity fraud, is changing that. We are seeing fraudsters gain access to ever more sophisticated technologies that create not just false ID images and video feeds, but fake data records to back up the false identity. 

Deepfake videos of famous people, including Elon Musk and Tom Cruise, have already proven hard to tell from the real thing. What, then, are the chances of spotting a spoof when a brand-new customer is signing up for a banking service? It is a serious threat for fintech companies, banks and e-commerce giants alike. 

This means our identity verification technologies must take a risk-based, zero-trust approach. The reality is that a person’s identity risk profile can – and probably will – change over their lifetime, for example if they become a victim of identity fraud. Our technology must stay flexible enough to adjust its parameters as the situation develops, protecting consumers from identity fraud and stopping it in its tracks. 
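As a concrete illustration, here is a minimal sketch of what a risk-based, parameter-adjustable verification decision could look like. Every name, signal and threshold below is a hypothetical assumption for illustration, not any vendor’s actual system.

```python
# Hypothetical sketch of risk-based, zero-trust identity verification.
# All names, signals and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class IdentitySignals:
    document_score: float     # 0-1 confidence the ID document is genuine
    liveness_score: float     # 0-1 confidence the face is a live person
    prior_fraud_victim: bool  # risk profiles can change over a lifetime

@dataclass
class RiskPolicy:
    # Parameters are deliberately adjustable: zero trust assumes no
    # identity is trusted by default and thresholds can move over time.
    min_document_score: float = 0.85
    min_liveness_score: float = 0.90

    def tighten(self) -> None:
        """Raise the bar, e.g. after a reported identity-theft incident."""
        self.min_document_score = 0.95
        self.min_liveness_score = 0.97

def decide(signals: IdentitySignals, policy: RiskPolicy) -> str:
    if signals.prior_fraud_victim:
        policy.tighten()  # re-evaluate this person under stricter parameters
    if (signals.document_score >= policy.min_document_score
            and signals.liveness_score >= policy.min_liveness_score):
        return "approve"
    # Step up to extra checks rather than blocking outright, so genuine
    # customers are not locked out of vital financial services.
    return "step-up"
```

The design point is that the thresholds live in a policy object that can be tightened at any moment, rather than being hard-coded into the verification flow.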

To keep real people protected, we have to perfect our “computer vision” and train digital identity verification algorithms on a diverse range of faces, lighting conditions, and camera distances. The technology of today and tomorrow must be able to tell a mask, a deepfake photo or a spoofed video from a real person – while avoiding cutting off genuine people’s access to vital financial products and services in the process. 
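In practice, much of that diversity can be engineered into the training data itself. The sketch below – an assumed approach, not a description of any specific product – uses standard torchvision augmentations to simulate varied lighting and camera distances:

```python
# A hedged sketch: diversifying face-verification training data with
# standard torchvision augmentations. The exact values are illustrative.
from torchvision import transforms

train_transforms = transforms.Compose([
    # Simulate different camera distances and framing
    transforms.RandomResizedCrop(224, scale=(0.6, 1.0)),
    # Simulate varied lighting conditions
    transforms.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.3),
    # Vary pose so the model does not overfit to one orientation
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
```

Augmentation alone will not catch deepfakes – that also takes dedicated liveness and presentation-attack detection – but it helps keep error rates even across different users.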

Convenience versus security

Naturally, we like to get what we want or need in the moment and to avoid drawn-out processes at all costs. Experience is king, which makes onboarding customers a constant game of push and pull: convenience versus security, speed versus catching more fraud. 

While it can be inconvenient when an app – whether from a bank, payment provider, or retailer – asks us to move closer to the camera or step back, or to change the framing or lighting, it is not doing so to make the process more difficult. The technology is there to protect us, and these small hurdles keep imposters at bay. 
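Those prompts typically come from a simple capture-quality gate that runs before any verification is attempted. A toy sketch, with hypothetical inputs and thresholds:

```python
# Toy capture-quality gate of the kind behind "move closer" prompts.
# Inputs and thresholds are hypothetical; a real app would take these
# from an upstream face detector and a tuned configuration.

def capture_feedback(face_width_frac: float, mean_brightness: float) -> str:
    """face_width_frac: face width as a fraction of the frame width (0-1).
    mean_brightness: average pixel brightness on a 0-255 scale."""
    if face_width_frac < 0.3:
        return "Please move closer to the camera."
    if face_width_frac > 0.8:
        return "Please step back a little."
    if mean_brightness < 60:
        return "Please find better lighting."
    return "ok"  # the frame is good enough to run verification checks
```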

There are positives and negatives to the digital tools around us. What we see may not be what it seems: not every Frankenstein face will look strange or unrealistic, and not every human face will pass the test first time round. The occasional cat filter, therefore, may be one of the most innocent digital disguises yet. Or it could be a sinister warning sign of fraud to come. 

 

