
Why we need a new approach to cyber-security and risk assessment.

In finance, market risk is modelled and forecast using a combination of statistical modelling and “stress testing”. Why don't we take a similar approach to cyber-security? The traditional response is that cyber-threats are different: they don't behave in a predictable statistical fashion; the events are targeted and malicious, not random.

The problem of information security, however, seems more fundamental: unlike market and operational risk, we lack a common understanding and language in which to discuss “information risk” in any quantitative fashion. As a result, the emphasis is always on regulatory compliance rather than on the risk itself, and this makes it hard to quantify the appropriate level of investment in information security or to measure the potential benefits. On the other hand, questions such as “what is the appropriate cost of life assurance?” and “what proportion of loans will not be repaid?” have been faced repeatedly in the financial services industry, and adequate models and tools have been developed to answer them.

In finance, a portfolio of investments is subject to quantifiable “market risk” and/or “credit risk” based on expected fluctuations in market prices. Organisations trading in such markets have traditionally estimated these risks by building parametric (mathematical) models and using them in conjunction with either historical data or simulated data with equivalent statistics (mean, variance, correlation etc.). Until 2008 these “Value at Risk” methods were considered sufficient and became institutionalised as part of the regulatory process. Even after 2008 they have not been abandoned, but have instead been expanded and incorporated into the more robust process of “stress testing” as the accepted method of setting capital requirements and determining stability.
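
To make the idea concrete, the sketch below shows a minimal parametric (“variance-covariance”) Value at Risk calculation of the kind described above. The portfolio value, return statistics, confidence level and horizon are invented purely for illustration, not taken from any real book of business.

```python
import math
from statistics import NormalDist

# Illustrative parametric ("variance-covariance") VaR sketch.
# All figures below are hypothetical placeholders.
portfolio_value = 10_000_000   # portfolio size, GBP (hypothetical)
mu_daily = 0.0002              # mean daily return (hypothetical)
sigma_daily = 0.012            # daily return volatility (hypothetical)
confidence = 0.99              # 99% confidence level
horizon_days = 10              # scale the one-day figure to a 10-day horizon

# Tail quantile of the standard normal distribution for the chosen confidence
z = NormalDist().inv_cdf(1 - confidence)   # negative number

# One-day VaR: the loss not expected to be exceeded on 99% of days
var_1d = -(mu_daily + z * sigma_daily) * portfolio_value

# Square-root-of-time scaling to the longer horizon
var_10d = var_1d * math.sqrt(horizon_days)

print(f"1-day 99% VaR:  £{var_1d:,.0f}")
print(f"10-day 99% VaR: £{var_10d:,.0f}")
```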

Similarly, in the physical sciences, engineering and the health sectors, risk is analysed through a combination of mathematical and statistical modelling coupled to accurate experimental data. An overall system is broken down into its key parts and risk analysis is performed for each component in the system and for each step in the operational chain. “Fault trees” are studied and “critical paths” are traced. In some cases, e.g. the spread of an epidemic, the time scale is measured in days or weeks; in others, e.g. reactor safety, it is much shorter. In each case, however, the emphasis is on building a model that spans the appropriate parametric space and running it fast enough to produce a forecast in time for mitigating action to avert or limit the consequences of a disaster. A simple fault-tree calculation of this kind is sketched below.
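
As an illustration of the fault-tree decomposition mentioned above, the following sketch combines independent component failure probabilities through AND/OR gates. The components and their probabilities are hypothetical and exist only to show the structure of the calculation.

```python
# Minimal fault-tree sketch: combine independent component failure
# probabilities through AND/OR gates. All components and numbers are
# hypothetical, purely to illustrate the decomposition idea.

def p_and(*probs):
    """AND gate: the event occurs only if every input fails (independence assumed)."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def p_or(*probs):
    """OR gate: the event occurs if any input fails (independence assumed)."""
    result = 1.0
    for p in probs:
        result *= (1.0 - p)
    return 1.0 - result

# Hypothetical annual failure probabilities for individual components
p_pump_a = 0.02
p_pump_b = 0.03
p_power  = 0.01
p_sensor = 0.05

# "Loss of cooling" requires both redundant pumps to fail, OR a power failure
p_cooling_loss = p_or(p_and(p_pump_a, p_pump_b), p_power)

# "Undetected incident" requires loss of cooling AND a failed sensor
p_undetected = p_and(p_cooling_loss, p_sensor)

print(f"P(loss of cooling)     = {p_cooling_loss:.4%}")
print(f"P(undetected incident) = {p_undetected:.4%}")
```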

Can a similar approach be taken to model cyber-risk? As in the engineering and medical fields, cyber-security is a combination of threats (deliberate attacks) and vulnerabilities (built-in weaknesses); consider, for example, a new biological virus and a population that has been only partially vaccinated. In cyber-systems, the analogous threat might be a “zero day exploit” and the corresponding vulnerability a partially patched server with an inadequate intrusion prevention device. Superficially the physical-sciences approach looks promising, but more research based on real security incidents is needed before we can arrive at a reliable model and extract the key parameters. In the physical world, threats and vulnerabilities can often be considered independent of one another. In the cyber-world there is much greater “feedback” and “non-linearity”: the discovery of a vulnerability in an IT system often triggers an avalanche of attacks that target exactly that component. A system that was once “mostly secure” rapidly changes to one that is “mostly at risk”.

In the engineering sphere this phenomenon has been addressed by including so-called “coupled mode” failures: the failure of one component places additional stress on the next in line, and that fails as well. In the finance sector the analogous problem had perhaps not been taken seriously until the crash of 2008. The “Value at Risk” models made insufficient allowance for trading behaviour that was highly correlated across an apparently diverse range of financial instruments and that was also very rapid (because of automated trading systems). As a result, it was the extreme “tails” of the probability distribution functions that were encountered. Nevertheless, it is the existence of such models, and their relative successes and failures, that has led to a greater understanding of the risk. In short, the physical and financial communities have an agreed “language of risk”.

Engineers and economists have been collecting and processing risk data for many years. Equally, there is no lack of data in the cyber-world on which to base risk statistics: major Security Operations Centres (SOCs) and CERT groups regularly observe and categorise billions of security events every day. Similarly, there is extensive information on the likelihood of finding IT systems with vulnerabilities (although most organisations are usually somewhat shy of publicly admitting that their servers are unpatched and their anti-virus signatures out of date).

The piece of the puzzle that appears to be missing in the cyber-world is an accepted mathematical risk model. In the physical sciences, weather forecasting for example, simulations are based on solving a detailed set of equations linking temperature, pressure and flow. In sociology and economics the equations may be less well founded, but that has not prevented them from being adopted as a common language for modelling and predicting events. In financial markets, arguably an area of extreme stochastic behaviour, the power of mathematics to understand risk is reflected in the huge rise in demand for “quant” analysts.

In areas where accepted or fundamental equations are absent, many sophisticated tools (neural networks, Markov models, Monte-Carlo techniques etc.) have been developed that can be used instead. Similar tools can be used to extract information-assurance statistics from threat and vulnerability events and to build risk analysis models, as sketched below. Such models may be limited in their ability to make long-term forecasts because of the unpredictability of newly discovered software exploits and attacks. Nevertheless, they will establish a common language and methodology for describing information risk, and describing a problem is usually a key step in managing it.
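
As a minimal sketch of what such a model might look like, the Monte-Carlo simulation below combines an assumed rate of attack attempts, an assumed probability that an attack finds an exploitable weakness, and an assumed loss distribution into an estimate of annual loss. Every parameter here is an invented placeholder; in practice they would be estimated from the SOC/CERT event data and vulnerability statistics described above.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical placeholder parameters, not real statistics
ATTACK_RATE = 40        # expected serious attack attempts per year
P_SUCCESS = 0.15        # probability an attack finds an exploitable weakness
LOSS_MEDIAN = 60_000    # median loss per successful attack, GBP
LOSS_SIGMA = 1.2        # log-normal spread of losses
N_YEARS = 100_000       # number of simulated years

# Number of attack attempts in each simulated year (Poisson arrivals)
attacks = rng.poisson(ATTACK_RATE, N_YEARS)

# Number of successful compromises in each year
successes = rng.binomial(attacks, P_SUCCESS)

# Total annual loss: sum of log-normally distributed losses per compromise
annual_loss = np.array([
    rng.lognormal(np.log(LOSS_MEDIAN), LOSS_SIGMA, n).sum() for n in successes
])

print(f"Expected annual loss:     £{annual_loss.mean():,.0f}")
print(f"95th percentile loss:     £{np.percentile(annual_loss, 95):,.0f}")
print(f"99th percentile ('tail'): £{np.percentile(annual_loss, 99):,.0f}")
```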

In the credit, market and operational areas the aim is to translate the whole risk into an adequate amount of capital deemed appropriate to run a particular business safely. The models depend on a complex set of correlations in which risks in various positions offset each other; essentially, the more risk, the more capital is set aside. Until we have a similar semi-quantitative approach to modelling cyber-risk, we believe the industry will continue to misunderstand the problem and make inadequate investment in counter-measures and risk management.
