A recent Accenture/Chartis paper featured on the RiskTech site (link) suggests that part of improving overall risk management is to acknowledge that Operational Risk and Cyber-Risk have similar consequences and can be treated with a similar mixture of tools and procedures. Indeed, we would argue that cyber-risk is a class of operational risk for all companies, so there is probably efficiency to be gained and savings to be made by exploiting this “convergence”. A similar point is made in the ISO, NIST and ISF information assurance frameworks.
The problem with information security, however, seems more fundamental: unlike operational risk, we lack a common understanding and language in which to discuss “information risk” in any quantitative fashion. As a result, the emphasis is always on compliance with standards rather than on the risk itself, which makes it hard to quantify the appropriate level of investment in information security relative to the risk, or to measure the potential benefits. On the other hand, risk-related questions such as “what is the appropriate cost of life assurance?” and “what proportion of loans will not be repaid?” have been faced repeatedly in the financial services industry. It is perhaps by adopting some of the methods developed there that we can derive a better understanding of the problem and make progress towards a solution. In summary, in this note we advocate an even greater convergence between the methodologies used to manage information, operational and financial risk as a means of understanding risk.
The question to be answered is: what parameters should be incorporated in a new approach, and for what reasons?
In the finance sector, a portfolio of investments is subject to quantifiable “market risk” and/or “credit risk” based on expected fluctuations in market prices. Organisations trading in such markets have traditionally estimated these risks by creating parametric (mathematical) models and using them in conjunction with either historical data or simulated data with equivalent statistics (mean, variance, correlation etc.). Until 2008 these “Value at Risk” methods (link) were considered sufficient and became institutionalised as part of the regulatory process. Even after 2008 they have not been abandoned, but rather expanded and incorporated into the more robust process of “stress testing” as the accepted method of setting capital requirements and determining stability.
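To make this concrete, the sketch below shows the kind of parametric (“variance-covariance”) Value at Risk calculation described above, written in Python. The portfolio size, weights, volatilities and correlations are all invented for illustration; a real trading desk would feed in measured or simulated market data.

```python
# Minimal sketch of a parametric (variance-covariance) Value at Risk calculation.
# All figures are illustrative assumptions, not real market data.
import numpy as np
from scipy.stats import norm

portfolio_value = 10_000_000          # total position, in GBP (assumed)
weights = np.array([0.5, 0.3, 0.2])   # allocation across three instruments (assumed)

# Assumed daily volatilities and correlations for the three instruments
vols = np.array([0.010, 0.015, 0.020])
corr = np.array([[1.0, 0.3, 0.2],
                 [0.3, 1.0, 0.4],
                 [0.2, 0.4, 1.0]])
cov = np.outer(vols, vols) * corr     # covariance matrix of daily returns

# Portfolio standard deviation of daily returns
port_sigma = np.sqrt(weights @ cov @ weights)

# One-day VaR at 99% confidence: the loss exceeded on only 1% of days
confidence = 0.99
var_1d = portfolio_value * norm.ppf(confidence) * port_sigma
print(f"1-day 99% VaR: £{var_1d:,.0f}")
```

The point is not the particular numbers but that every input is an agreed, measurable statistic, which is exactly what information risk currently lacks.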
Similarly, in the physical sciences, engineering and the health sectors, risk is analysed through a combination of mathematical and statistical modelling coupled with accurate experimental data (both measured and simulated). An overall system is broken down into its key parts and risk analysis is performed for each component in the system and for each step in the operational chain. “Fault trees” are studied and “critical paths” are traced. In some cases, e.g. the spread of an epidemic, the time scale is measured in days or weeks. In others, e.g. reactor safety, it is much shorter. In each case, however, the emphasis is on building a model that spans the appropriate parametric space and running it fast enough to produce a forecast in time for mitigating action to avert or limit the consequences of a disaster.
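As a toy illustration of the fault-tree idea, a few lines of Python can combine assumed component failure probabilities through AND/OR gates to give the probability of a “top event”. The component names and numbers below are purely illustrative.

```python
# Minimal sketch of a fault-tree calculation: the probability of a "top event"
# from independent component failure probabilities combined with AND/OR gates.
# Component probabilities are illustrative assumptions.

def p_and(*probs: float) -> float:
    """AND gate: the event occurs only if all inputs fail (independence assumed)."""
    result = 1.0
    for p in probs:
        result *= p
    return result

def p_or(*probs: float) -> float:
    """OR gate: the event occurs if any input fails (independence assumed)."""
    result = 1.0
    for p in probs:
        result *= (1.0 - p)
    return 1.0 - result

# Example: the system fails if the pump fails OR both the sensor AND its backup fail
p_pump, p_sensor, p_backup = 0.01, 0.05, 0.05
p_top = p_or(p_pump, p_and(p_sensor, p_backup))
print(f"Probability of top event: {p_top:.4%}")
```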
Can a similar approach be taken for information assurance? As in the engineering and medical areas, cyber-security is a combination of threats (deliberate attacks) and vulnerabilities (built-in weaknesses); consider, for example, a new biological virus and a population that has been only partially vaccinated. In cyber-systems, the analogous threat might be a “zero day exploit” and the corresponding vulnerability a partially patched server with an inadequate intrusion prevention device. Superficially the physical sciences approach looks promising, but more research is needed before we can get to a reliable model. In the physical world, threats and vulnerabilities can often be considered independent of one another. In the cyber-world there is much greater “feedback” and “non-linearity”; the discovery of a vulnerability in an IT system often triggers an avalanche of attacks that specifically target that component. A system that was once “mostly secure” rapidly changes to one that is “mostly at risk”.
In the engineering sphere this phenomenon has been addressed by including so-called “coupled mode” failures: the failure of one component places additional stress on the next in line, and that fails as well. In the finance sector the analogous problem had perhaps not been taken seriously until the crash of 2008. The “Value at Risk” models made insufficient allowance for trading behaviour that was highly correlated across an apparently diverse range of financial instruments and executed very rapidly (because of automated trading systems). As a result, it was the extreme “tails” of the probability distribution functions that were encountered. Nevertheless, it is the existence of such models, and their relative successes and failures, that has led to a greater understanding of the risk. In short, the physical and financial communities have an agreed “language of risk”.
Engineering and economic scientists have been collecting and processing risk data for many years. Equally, there is no lack of data in the cyber-world on which to base risk statistics: major Security Operations Centres (SOCs) and CERT groups regularly observe and categorise over a billion events per day. Similarly, there is extensive information on the status of IT vulnerabilities (although most organisations are somewhat shy of publicly admitting that their servers are unpatched and their anti-virus signatures out of date).
The piece of the puzzle that appears to be missing in the cyber-world is an accepted mathematical risk model. In the physical sciences, weather forecasting for example, simulations are based on solving a detailed set of equations linking temperature, pressure and flow. In sociology and economics the equations may be less well founded, but that has not prevented them from being adopted as a common language for modelling and predicting events. In financial markets, arguably an area of extreme stochastic behaviour, the power of mathematics to understand risk is reflected in the huge rise in demand for “quant” analysts.
In areas where there is an absence of accepted or fundamental equations, many sophisticated tools (neural networks, Markov models, Monte Carlo techniques etc.) have been developed that can be used instead. Similar tools can be used in information assurance to extract statistics from threat and vulnerability events and to build risk analysis models. Such models may be limited in their ability to make long-term forecasts because of the unpredictability of the discovery of new software exploits and attacks. Nevertheless, they will establish a common language and methodology for describing information risk, and describing a problem is usually a key step in managing it.
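For example, a frequency-severity Monte Carlo simulation of the kind routinely used in operational risk could be applied to breach data. The sketch below is a minimal illustration, assuming a Poisson frequency of successful breaches and a lognormal loss per breach; the parameters are invented, not derived from real SOC or CERT statistics.

```python
# Minimal Monte Carlo sketch of a frequency-severity model for annual cyber losses,
# borrowing the standard operational-risk approach. All parameters are illustrative
# assumptions, not derived from real SOC or CERT data.
import numpy as np

rng = np.random.default_rng(42)
n_years = 100_000                         # simulated years of operation

# Assumed annual frequency of successful breaches (Poisson) and
# assumed loss severity per breach (lognormal, heavy-tailed)
mean_breaches_per_year = 3.0
severity_mu, severity_sigma = 11.0, 1.5   # lognormal parameters (log-GBP)

annual_losses = np.zeros(n_years)
counts = rng.poisson(mean_breaches_per_year, size=n_years)
for i, k in enumerate(counts):
    if k:
        annual_losses[i] = rng.lognormal(severity_mu, severity_sigma, size=k).sum()

# Expected annual loss and a 99.5% "tail" figure, analogous to a capital number
print(f"Expected annual loss: £{annual_losses.mean():,.0f}")
print(f"99.5th percentile annual loss: £{np.percentile(annual_losses, 99.5):,.0f}")
```

The output is deliberately expressed as an expected loss and a tail percentile, the same vocabulary used to set capital in credit and market risk.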
In credit, market and operational risk the aim is to translate the whole into an amount of capital deemed appropriate to run a particular business safely. Models depend on a complex set of correlations in which risks in various positions offset each other; essentially, the more risk, the more capital.
For cyber risk to be measured in a compatible way, one hypothesis would be to translate three main factors into numbers: the profile of the target (e.g. a large retail bank with millions of customers vs an investment bank with only corporate clients), the level of investment in countermeasures, and the likely effect of security breaches of different magnitudes on the capital base, all offset by any insured risk.
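Purely as a thought experiment, such a translation might look something like the sketch below. The scoring scale, the mitigation curve and all of the figures are our own illustrative assumptions, not a proposed standard.

```python
# Purely hypothetical sketch of translating the three factors above into a single
# capital-style figure. The scoring functions and weights are invented for
# illustration only.

def cyber_capital_estimate(target_profile_score: float,
                           countermeasure_investment: float,
                           expected_breach_impact: float,
                           insured_cover: float) -> float:
    """Return an indicative capital figure for cyber risk.

    target_profile_score      -- 0..1, how attractive/exposed the target is (assumed scale)
    countermeasure_investment -- annual spend on controls, in GBP
    expected_breach_impact    -- estimated loss to the capital base from breaches, in GBP
    insured_cover             -- portion of that loss transferred to insurers, in GBP
    """
    # Assume each pound of control spend reduces exposure, with diminishing returns
    mitigation = 1.0 / (1.0 + countermeasure_investment / 1_000_000)
    gross = target_profile_score * expected_breach_impact * mitigation
    return max(gross - insured_cover, 0.0)

# Example: a large retail bank vs an investment bank with only corporate clients,
# with the same control spend, breach impact and insurance cover
print(cyber_capital_estimate(0.9, 1_000_000, 50_000_000, 5_000_000))
print(cyber_capital_estimate(0.4, 1_000_000, 50_000_000, 5_000_000))
```

Even a crude formula like this forces the inputs to be stated explicitly, which is the first step towards a number that a regulator or board could compare across institutions.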
Further, the equivalent of a stress test on a bank could perhaps be to measure the ability of its systems to withstand a planned ethical hacking exercise mandated by the regulator.
In summary, we agree that it makes sense to acknowledge the convergence of IT/Cyber and Operational Risk and to explore using a common set of compliance techniques for their management. We also believe there remain deeper issues in information assurance that need to be addressed to establish a common and quantitative understanding of the problem of Information Risk itself. We suggest that the way forward is to draw on the risk modelling and forecasting algorithms that are already standard tools of the engineering, social and economic sciences, with the potential aim of arriving at a single number or capital equivalent. In this way we can create and justify the arguments that will ultimately drive the investment needed to ensure that the trend in information security breaches is finally brought under control.