Chasing Shadows: How Shadow AI complicates the compliance outlook

GenAI tools are becoming a staple for office workers, simplifying and streamlining tasks through easy-to-use chat interfaces like ChatGPT and Gemini. According to recent research, 75% of knowledge workers already use AI, a figure set to rise to 90% in the near future. Furthermore, almost half of the employees using these tools would refuse to give them up, even if their company banned them tomorrow. When AI use is unofficial (i.e. ‘Shadow AI’), it becomes increasingly difficult for businesses to get a clear picture of what is being used within their infrastructure and processes, leading to security risks, compliance headaches, and potential inaccuracies.

The issue lies not in staff using AI models, but in the business not knowing what is being used and how the outcomes feed into business practices. This lack of oversight makes it nearly impossible to produce an effective strategy for managing it all.

The biggest problem caused by unauthorised applications in regulated industries is compliance. The Digital Operational Resilience Act (DORA) will take full effect this month, adding to PS21/3 in the UK, and high levels of Shadow AI will raise serious questions about the compliance gap.

At their heart, operational resilience regulations centre on building a unified framework for reporting operational resilience. But Shadow AI, by definition, involves processes that sit outside this framework. It is a phenomenon with wider-reaching impact than much Shadow IT, which is typically confined to the boundaries of the corporate IT environment (i.e. company devices). Generative AI is far more accessible on personal devices and in non-corporate channels (e.g. WhatsApp), so protecting against data loss or non-compliance goes beyond the usual corporate IT controls of IP whitelists and DLP systems. Far from helping operational resilience, Shadow AI is causing operational chaos.

One option could be to ban AI tools, as some banks have done. But our research suggests that 46% of those using Shadow AI would continue to do so in the face of such a ban. A more practical option is to try to bring Shadow AI into the light by listening to what employees need and rolling out truly relevant tools.

Ultimately, Shadow AI is here to stay, so what can financial institutions do about it?

Be part of the solution

One reason for Shadow AI’s prevalence is the lack of well-established AI policies, authorised AI solutions, or both. In fact, 33% of workers use non-approved AI precisely because they don’t have the tools they need.

Staff use AI where it has a specific and helpful role to play in their day-to-day work. If more employees used ‘official’ AI tools, businesses could capitalise on the opportunity to train AI to truly understand and adapt to their unique needs and processes. AI’s greatest value could lie in bridging the gaps in an organisation’s workflows… gaps it might not even know about yet.

For example, employees might turn to Shadow AI tools to speed up tasks like drafting client communications or analysing financial data. This can be because they are time-restricted, or it could be because the ‘official’ tools are outdated, cumbersome or simply not the right fit for what they need.

It’s important to understand what employees actually need, what cracks they’re trying to paper over with AI, and how the right tools can be brought into the authorised corporate toolset.

Be strategic

Before implementing AI at scale, it is essential to consider how it aligns with your organisation’s overarching goals as well as the diverse needs of your people. Auditing a department’s processes to determine where AI and automation can help makes these overlaps clearer.

And remember, AI isn’t a one-size-fits-all solution. Effectively embracing AI starts with defining the change, understanding the impact, and being flexible enough to incorporate new ways of working. This approach enables leaders to make the right investments to create the most value.

C-level understanding and involvement are essential if AI is to be deployed in a way that helps overall competitiveness. While leaders may often look to hire AI expertise, it is increasingly important to also cultivate this knowledge themselves, taking on both operational and technical perspectives to bridge strategy and execution effectively.

Train your staff

Our research found that more than half of knowledge workers either have no access to AI training, or receive only very basic training that doesn’t cover the biggest risks.

There are two types of training when it comes to AI: learning how to use a tool, and learning the pros and cons of AI at a broader level. When employees work around limitations in authorised tools by using their phones or personal devices, it’s imperative that they understand how to use AI responsibly.

Better understanding helps with risk management and security, and when employees optimise the use of any tools at their disposal, the business benefits too. More rigorous training can also encourage many who don’t currently use AI to start. The challenge is creating a training framework that conveys the most crucial lessons, so that anyone using unapproved AI can do so responsibly.

This includes creating an understanding of when it’s acceptable to use external GenAI tools (for example, when responding to some emails or creating images to support a presentation), and when an in-house tool is needed (for example, for any tasks that involve private, sensitive data).

The road ahead

GenAI changed the world overnight and is now an inextricable part of our lives. It’s a go-to for office workers, who are now able to analyse large amounts of data and turn it into digestible insights, saving time and effort in their work lives.

With Shadow AI, there’s a risk to the security and/or compliance of key processes that are invisible to IT and compliance teams. Stringent operational resilience and data security regulations only heighten the importance of understanding the risks of Shadow AI.

So if you can’t beat them, join them. Just make sure that you have the right policies and structures in place to support staff and ensure they’re doing the right thing. The benefits of doing so are vast and only begin with compliance.

External

This content is provided by an external author without editing by Finextra. It expresses the views and opinions of the author.
