AI’s role in FS businesses’ cyber defence and risk assessment

It’s no surprise to see cybersecurity holding its position as one of the Financial Services (FS) sector’s most discussed topics right now.

While some may assume that major FS businesses have cyber defences robust enough to make them all but impervious to hackers, the number of cybersecurity breaches at UK FS firms has in fact tripled over the last three years, according to a report from international law firm RPC. Additionally, the number of ransomware incidents reported to the UK's Financial Conduct Authority (FCA) doubled in 2023.

Unfortunately, digital threats are on the rise across nearly all sectors and industries - according to ISACA's State of Cyber Security report for 2023, 48% of organisations experienced a rise in cyberattacks in Q4 2023 compared to the previous year.

But for FS businesses operating in a space where data privacy, security and customer experience are top of the agenda, the rise in cyberattacks is a particularly challenging trend, and one deemed the number one systemic risk to the financial system in a recent Bank of England survey.

AI’s challenges vs opportunities

Of course, generative AI's development and its incorporation into the digital ecosystem are behind a good proportion of these statistics, not only increasing the sophistication of cyber threats but also making it far easier for cybercriminals to mount attacks.

On the flip side, however, AI has simultaneously opened the door to more positive opportunities, particularly for the cyber insurance sector and those involved in cyber defence roles.

Even though threats, risks and potential entry points are constantly evolving, AI-driven defence tools can do everything from helping to prevent cyber threats to limiting an FS company's losses and accelerating recovery in the event of a cyber incident.

Looking ahead, machine learning and AI algorithms will be instrumental in real-time threat detection and response, enabling faster and more effective security measures.
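
By way of illustration only, the sketch below shows how an anomaly-detection model (here an isolation forest from scikit-learn) might score incoming security telemetry so unusual events are surfaced quickly; the features, thresholds and data are hypothetical assumptions, not a description of any particular firm's tooling.

```python
# Illustrative sketch: anomaly scoring of security telemetry with an isolation forest.
# Feature names, thresholds and data are hypothetical; real deployments would use far
# richer signals (network flows, auth logs, endpoint events) and a streaming pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Stand-in for historical "normal" telemetry: login attempts per hour,
# bytes transferred (MB) and distinct destination IPs per session.
baseline = rng.normal(loc=[5, 50, 3], scale=[2, 15, 1], size=(5_000, 3))

# Fit on baseline behaviour so that deviations score as anomalies.
detector = IsolationForest(n_estimators=200, contamination=0.01, random_state=42)
detector.fit(baseline)

# New events arriving "live": the last one is deliberately extreme.
new_events = np.array([
    [6, 55, 3],      # looks like normal behaviour
    [4, 40, 2],      # also normal
    [120, 900, 45],  # burst of logins, large transfer, many destinations
])

scores = detector.decision_function(new_events)  # lower = more anomalous
flags = detector.predict(new_events)             # -1 = anomaly, 1 = normal

for event, score, flag in zip(new_events, scores, flags):
    label = "ALERT" if flag == -1 else "ok"
    print(f"{label:5s} score={score:+.3f} features={event}")
```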

AI also holds great potential to help detect and prevent threats at a speed comparable to AI-enhanced cyberattacks, for example by enabling continuous underwriting of cybersecurity risk. AI-powered tools can layer multiple data streams from various sources, including historical incidents, threat intelligence feeds and external data sources, to pinpoint risk factors effectively and provide a more comprehensive understanding of potential threats. But the real power of AI lies in its ability to source and analyse a far greater volume of data than was previously possible, further improving the understanding of the likelihood and potential severity of a risk.
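
As a rough sketch of what layering these data streams might look like in practice, the snippet below joins hypothetical incident-history, threat-intelligence and external-scan feeds on a shared company identifier and blends them into a single risk score; every column name, weight and value is invented for illustration.

```python
# Illustrative sketch: combining several (hypothetical) data feeds into one risk score.
# Column names, weights and values are assumptions made for this example only.
import pandas as pd

# Feed 1: historical incidents per insured company.
incidents = pd.DataFrame({
    "company_id": ["A", "B", "C"],
    "incidents_3y": [0, 2, 5],
})

# Feed 2: threat-intelligence signal (e.g. credentials seen in leak data).
threat_intel = pd.DataFrame({
    "company_id": ["A", "B", "C"],
    "leaked_credentials": [1, 15, 40],
})

# Feed 3: external attack-surface scan (e.g. count of exposed, unpatched services).
external_scan = pd.DataFrame({
    "company_id": ["A", "B", "C"],
    "exposed_services": [2, 6, 18],
})

# Layer the feeds together on the shared identifier.
risk = incidents.merge(threat_intel, on="company_id").merge(external_scan, on="company_id")

# Normalise each signal to 0-1 and blend with illustrative weights.
for col in ["incidents_3y", "leaked_credentials", "exposed_services"]:
    risk[col + "_norm"] = risk[col] / risk[col].max()

weights = {"incidents_3y_norm": 0.5, "leaked_credentials_norm": 0.3, "exposed_services_norm": 0.2}
risk["risk_score"] = sum(risk[col] * w for col, w in weights.items())

print(risk[["company_id", "risk_score"]].sort_values("risk_score", ascending=False))
```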

With this in mind, AI is reshaping insurance underwriting. An augmented underwriting approach and tech-driven platform, fed by vast quantities of risk-pool data, allows insurers to use AI to assess risk more accurately, ensuring fair premiums and broader coverage options. Machine learning techniques also make it possible to analyse many sources of data, identify which ones matter and understand the signal each provides about the risk, and these analyses can be rerun frequently in case changes in the risk environment alter the significance of a particular data source. This matters all the more for cyber insurance, given the dynamic nature of the risk environment and our still-evolving understanding of cyber risk.

AI algorithms can identify correlations and patterns across these diverse datasets, providing insights into emerging threats and enabling proactive risk management strategies. When insurers work hand-in-hand with tech teams to analyse risk through this lens and track trends daily, the result is better decision-making, an enhanced customer experience and more accurate pricing.
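
To make the idea of learning which data sources carry signal, and rerunning that analysis as the environment shifts, a little more concrete, here is a minimal sketch assuming a hypothetical labelled table of past policies with a claim indicator; the features, data and retraining cadence are all assumptions for illustration, not a description of any insurer's platform.

```python
# Illustrative sketch: fitting a model on (hypothetical) underwriting data, reading off
# which features carry signal, and rerunning the fit as the risk environment changes.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)
n = 2_000

# Hypothetical underwriting snapshot: all names and relationships are invented.
data = pd.DataFrame({
    "employees": rng.integers(5, 5_000, n),
    "patch_lag_days": rng.integers(0, 120, n),
    "mfa_coverage": rng.uniform(0, 1, n),
    "prior_incidents": rng.poisson(0.3, n),
})
# Synthetic claim label: slow patching and prior incidents raise the probability.
p = 1 / (1 + np.exp(-(0.02 * data["patch_lag_days"] + 1.5 * data["prior_incidents"]
                      - 2.0 * data["mfa_coverage"] - 1.0)))
data["claim"] = rng.uniform(size=n) < p

features = ["employees", "patch_lag_days", "mfa_coverage", "prior_incidents"]

def refit(snapshot: pd.DataFrame) -> GradientBoostingClassifier:
    """Refit the model on the latest snapshot; rerun whenever new data lands."""
    model = GradientBoostingClassifier(random_state=0)
    model.fit(snapshot[features], snapshot["claim"])
    return model

model = refit(data)

# Which data sources carry signal right now? This ranking can shift between refits.
for name, importance in sorted(zip(features, model.feature_importances_),
                               key=lambda x: -x[1]):
    print(f"{name:16s} {importance:.2f}")
```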

With AI able to identify high-risk areas and vulnerabilities, FS businesses can also prioritise cybersecurity investments and allocate resources effectively to mitigate potential losses. And, should a cyber incident occur, AI-powered incident response tools can facilitate rapid detection, containment and remediation of threats, minimising downtime and financial losses. AI-driven analytics can also support post-incident analysis and forensics, helping FS businesses learn from security breaches and strengthen their defences against future attacks.

Balancing innovation and human expertise

Despite this clear potential and recent progress, however, it's important to remember that AI techniques are still in their infancy and need careful handling to avoid errors.

One potential issue is that AI models are constructed from a finite dataset, which can lead to biases developing over time if not carefully monitored. When pushed beyond the bounds of that data, AI also tends to hallucinate. This can be particularly problematic when the outputs of AI models are used to automate various insurance workflows, potentially leading to significant issues and complications. In the cyber insurance industry, for example, a model built to define cyber risk for businesses may develop a bias towards underestimating small business risk if it is primarily trained on historical data from large corporations without considering data from small and medium-sized enterprises (SMEs).

AI models can also incorrectly flag a low-risk business as high risk due to a bias in the dataset or a flaw in the model's logic, which could lead to complications such as loss ratio drift over time.
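
One simple way to keep watch for the segment bias and loss ratio drift described above is to track actual versus expected outcomes per segment and per period; the sketch below illustrates the idea with invented column names and figures.

```python
# Illustrative sketch: monitoring predictions against outcomes by segment and over time.
# All column names and figures are invented for illustration.
import pandas as pd

results = pd.DataFrame({
    "quarter":        ["Q1", "Q1", "Q2", "Q2", "Q3", "Q3"],
    "segment":        ["SME", "Large", "SME", "Large", "SME", "Large"],
    "predicted_loss": [100, 400, 110, 420, 115, 430],   # model's expected losses
    "actual_loss":    [160, 390, 170, 410, 190, 400],   # realised losses
})

# Actual-to-expected ratio: ~1.0 is healthy; persistently above 1 for one segment
# (here the hypothetical SME book) suggests the model underestimates that risk.
results["actual_to_expected"] = results["actual_loss"] / results["predicted_loss"]

report = results.pivot(index="quarter", columns="segment", values="actual_to_expected")
print(report.round(2))

# A gap that widens quarter over quarter is a drift signal worth investigating
# before it feeds into pricing and the book's loss ratio.
```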

While AI can streamline processes and provide valuable insights, human judgement and creativity are still needed to review and guide AI insights and execution, keeping a keen eye out for any bias.

Going forward, it will be crucial to find the right balance between innovation and human expertise, but those in the FS sector open to embracing AI effectively certainly stand to benefit from enhanced risk assessment.

