SoSafe’s report explores the global tension between AI adoption and its associated security risks, drawing on the views of 500 security professionals and 100 SoSafe customers across 10 countries.
A staggering 87% of respondents experienced AI-driven cyberattacks in the last year, and 91% anticipate a significant surge in such threats over the next three years. Yet only 26% express high confidence in their ability to detect these attacks, showing how dangerously exposed organisations are today.
Advancements in AI are also enabling multichannel cyberattacks, allowing attackers to infiltrate organisations through email, text, social media and other platforms; 95% of respondents agree this style of attack has noticeably increased over the past two years.
The attack on WPP’s CEO illustrates the trend: attackers used WhatsApp to build trust, Microsoft Teams for further interaction, and an AI-generated deepfake voice call to try to extract sensitive information and money. At the same time, organisations’ own adoption of AI is inadvertently expanding their attack surfaces, leaving them exposed to risks such as data poisoning and AI hallucinations.
SoSafe’s survey also found that 55% of organisations have not fully implemented controls to manage the risks associated with their internal AI solutions, and concerns continue to rise. Obfuscation techniques, such as AI-generated methods to mask the origins and intent of attacks, are the top concern, cited by 51% of respondents.
A further 45% cited the creation of entirely new attack methods as their biggest worry, while 38% pointed to the scale and speed of automated attacks. Andrew Rose, CSO at SoSafe, says: “AI is dramatically scaling the sophistication and personalisation of cyberattacks. While organisations seem to be aware of the threat, our data shows businesses are not confident in their ability to detect and react to these attacks.”
Rose continues: “Targeting victims across a combination of communication platforms allows attackers to mimic normal communication patterns, appearing more legitimate. Simplistic email attacks are evolving into 3D phishing, seamlessly integrating voice, video or text-based elements to create AI-powered, advanced scams.
“Even the benevolent AI that organisations adopt for their own benefit can be abused by attackers to locate valuable information and key assets, or to bypass other controls. Many firms create AI chatbots to assist their staff, but few have thought through the scenario of that chatbot becoming an accomplice in an attack, helping the attacker collect sensitive data, identify key individuals and surface useful corporate insight. It is imperative that businesses couple their own AI adoption with a rigorous approach to security that protects against both technological and human vulnerabilities,” Rose explains.
Niklas Hellemann, CEO, SoSafe, adds: “While AI undoubtedly presents new challenges, it also remains one of our greatest allies in protecting organisations against ever-evolving threats. However, AI-driven security is only as strong as the people who use it. Cybersecurity awareness is critical. Without informed employees who can recognise and respond to AI-driven threats, even the best technology falls short. By combining human expertise, security awareness and the careful application of AI, we can stay ahead of the curve and build stronger, more resilient organisations.”