Addressing Bottlenecks around Real-Time Risk Reporting

The demand for real-time information in today’s banking and finance world is constantly growing. Trading desks have always needed real-time data, of course, but that requirement is now expanding into areas such as enterprise risk reporting, which has traditionally operated on a batch-driven, T+1 basis.

Although much of this demand is being driven by regulations such as BCBS239 and MiFID II, there are also some very valid business reasons why banks are exploring how to achieve greater real-time visibility into their exposures. After all, the sooner you are aware of a risk, the more efficiently you can manage it. And being able to manage risk effectively is one of the cornerstones of successful banking practice.

Multi-dimensional reporting

There are, however, some significant challenges around reporting risk in real time, particularly for global investment banks that run multiple business units across multiple geographic locations and multiple asset classes. Enterprise risk reporting is multi-dimensional, highly complex and generally requires data from a wide array of sources, which is why it has typically been run on an end-of-day basis.

In more recent years, firms have taken advantage of distributed processing, grid computing and OLAP cubes to improve the performance of their risk reporting. Although these approaches have worked up to a point, they have still not enabled banks to run complex, enterprise-wide reports in anything approaching real time.

Data bottlenecks

One of the main reasons for this is that most risk processes are heavily database-bound, and the database is where significant bottlenecks occur, particularly when reading data from disk to perform what-if analyses and to examine scenarios across different dimensions. The problem is compounded by the fact that data loading is often duplicated (banks frequently load the same yield curve in several different risk calculation processes, for example) and that separate databases exist across different business units, different asset classes and even different regulations. More databases mean more bottlenecks, which add up to significant delays in processing.
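
To make the duplication problem concrete, here is a minimal Python sketch. The market data and function names are invented for illustration, and in practice the cache would be an in-memory data store shared across processes rather than a single-process decorator, but the principle is the same: the curve is read from disk once, not once per calculation.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def load_yield_curve(currency: str, as_of: str):
    """Simulates an expensive, disk-bound database read of a yield curve."""
    print(f"Loading {currency} curve for {as_of} from the database...")
    tenors = (0.25, 0.5, 1.0, 2.0, 5.0, 10.0, 30.0)
    rates = (0.021, 0.022, 0.024, 0.027, 0.031, 0.035, 0.038)
    return tuple(zip(tenors, rates))

def var_calculation(currency: str, as_of: str):
    curve = load_yield_curve(currency, as_of)   # first call: hits the database
    # ... Value-at-Risk calculation would go here ...

def sensitivity_calculation(currency: str, as_of: str):
    curve = load_yield_curve(currency, as_of)   # served from memory, no re-read
    # ... sensitivity calculation would go here ...

var_calculation("EUR", "2015-01-15")
sensitivity_calculation("EUR", "2015-01-15")    # no duplicate load occurs
```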

For these reasons, banks are now increasingly adopting in-memory database technology, which can not only remove the aforementioned bottlenecks by making terabytes of data available in real time to applications such as risk reporting, but, more importantly, can help banks leverage their existing legacy applications without heavy investment in new infrastructure. With state-of-the-art in-memory technology, there is no need to install additional proprietary hardware or software.
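
As a rough, vendor-neutral illustration of the idea, the sketch below uses SQLite’s in-memory mode as a stand-in for an enterprise in-memory database. The exposures table and its figures are invented for the example; the point is that multi-dimensional aggregation is answered entirely from memory, with no disk reads in the query path.

```python
import sqlite3

# ":memory:" keeps the entire database in RAM; nothing touches disk.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE exposures ("
    "  business_unit TEXT, asset_class TEXT, region TEXT, exposure REAL)"
)
conn.executemany(
    "INSERT INTO exposures VALUES (?, ?, ?, ?)",
    [
        ("Rates",  "Swaps", "EMEA", 125.0),
        ("Rates",  "Bonds", "APAC",  42.3),
        ("Credit", "CDS",   "AMER",  80.5),
    ],
)

# Ad-hoc, multi-dimensional aggregation served straight from memory.
for row in conn.execute(
    "SELECT asset_class, region, SUM(exposure)"
    " FROM exposures GROUP BY asset_class, region"
):
    print(row)
```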

Stress and crisis situations

In-memory databases are particularly well suited to real-time risk reporting because of their ability to cache large amounts of data, facilitating fast ad-hoc and what-if analysis. And in fast-moving or highly volatile markets, where data is changing all the time, the ability to perform accurate risk calculations is not only critical to a bank’s business, but also one of the key principles of BCBS239, which states: “A bank should be able to generate aggregate risk data to meet a broad range of on-demand, ad hoc risk management reporting requests, including requests during stress/crisis situations, requests due to changing internal needs and requests to meet supervisory queries.”
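
In this setting, a what-if analysis reduces to re-pricing against a shocked copy of in-memory data. The curve, cashflows and 100bp parallel shift below are invented for illustration (a real system would interpolate the curve and price far larger portfolios), but they show why holding the data in memory makes a stress scenario nothing more than a recomputation.

```python
def nearest_rate(curve, t):
    """Nearest-tenor lookup; a real system would interpolate."""
    return min(curve, key=lambda point: abs(point[0] - t))[1]

def present_value(cashflows, curve):
    """Discount (time, amount) cashflows off the given curve."""
    return sum(a / (1 + nearest_rate(curve, t)) ** t for t, a in cashflows)

def parallel_shift(curve, bp):
    """Shift every point on the curve by bp basis points."""
    return [(t, r + bp / 10_000) for t, r in curve]

# Illustrative data, already held in memory: no database round-trips needed.
curve = [(1.0, 0.024), (5.0, 0.031), (10.0, 0.035)]
cashflows = [(1.0, 300_000), (5.0, 300_000), (10.0, 10_300_000)]

base = present_value(cashflows, curve)
stressed = present_value(cashflows, parallel_shift(curve, 100))  # +100bp shock
print(f"Base PV: {base:,.0f}  Stressed PV: {stressed:,.0f}  "
      f"Delta: {stressed - base:,.0f}")
```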

Although BCBS239 was introduced in response to the financial crisis of 2008, when banks had little visibility of their exposure to risks such as sub-prime debt and were therefore unable to react to those risks in a timely fashion, deficiencies in banks’ risk infrastructures have come to light on several occasions since. In January 2015, for example, the Swiss National Bank unpegged the Swiss franc from the euro, and many banks suffered severe losses, some of which did not become apparent until days later.

Conclusion

In-memory databases can play a vital role in the next generation of risk architectures. By making large risk data sets and aggregation results available in real time throughout the trading day, banks can not only satisfy regulators by moving from T+1 to intraday and real-time reporting, but, more importantly, gain visibility into potentially catastrophic areas of risk during periods of market stress, crisis and uncertainty.
