
Engagement stops outages getting out of hand

The UK’s Financial Conduct Authority (FCA) saw a 187% increase in technology outages reported to it in the 12 months to October 2018. According to Megan Butler, executive director of supervision for Investment, Wholesale and Specialists at the FCA, 20% of those outages were “explicitly linked to weaknesses in change management.”

Sell-side firms need constant, measured engagement with technology providers. A dialogue with solution and service delivery firms helps them gauge what is and is not possible to deliver and keeps them on top of the latest trends. It also improves the likelihood of achieving real impact through change management.

Post-crisis regulatory reform implicitly requires banks to migrate from legacy technology to a more functional technology stack. The success of these new rules hinges on financial firms’ ability to monitor and control data. Firms – and their management – have been asked to take what was once exclusively internal information and report it to authorities and counterparties.

Yet complying with these new frameworks is challenging. Many, such as MiFID II and Dodd-Frank, are already in place; some are changing; others are still being rolled out. Depending on geography and business line, banks and brokers will find themselves somewhere on a broad spectrum of maturity in their data management systems.

The outages observed by the FCA are often the result of firms making big changes within small windows of time. Banks may have made incremental changes historically, but if they have remained tethered to legacy systems, they will not have been able to make significant advances. At some point, the gap between current operational capability and the demand for better performance becomes too great, and change is no longer made at a pace of the firm’s choosing.

The weaknesses of legacy technologies are often inherent in their design. They were built with a specific purpose hardwired into them, so any tailoring will have been done through new – and non-native – code that creates tools, data flows and middleware as a supporting structure.

Engineering a system to be fit for one purpose inevitably creates a challenge when new purposes emerge. Reporting is a case in point.

The current scope of regulatory reporting can capture the output of a range of risk calculations, trading input from the front office, external market information and proxy reports made on behalf of clients. Historically, none of these systems has had a reason to talk to the others.

If structured products are in scope for a bank, the process is further complicated by the number of moving parts needed to support trading in the first place, which makes the inputs to calculations even more complex and interdependent. To support derivatives trading for clients, and the hedging of those positions, the rules of the last ten years have shifted towards intraday collateral management and margining, making the traditional overnight batch processing of risk positions look antiquated.

US trade reporting rules around swap execution facilities (SEFs), launched as a response to the 2009 G20 Pittsburgh Summit decision to trade derivatives on platforms where possible, are currently under review. Public comments are due in part at the end of January and in part in mid-February. Banks need to continually adapt as rulebooks change.

However, data is hard to move and manipulate between monolithic, task-specific systems. Complexity also arises where a business works across jurisdictions, even between the US and Europe, where national and state rules are typically harmonised. Moving data across borders within an organisation may create compliance challenges, in addition to any operational difficulties.

To paraphrase Otto von Bismarck, change management is the art of the possible. Taking an existing IT infrastructure built on silos and enabling it to support the kind of functionality mandated by regulatory reporting and risk management requires a path of transformation that avoids the sort of overreach that creates risk. A real challenge for any bank is assessing capability beyond the hype, so that it neither mismatches a technology to its purpose nor becomes a test case for a vendor.

A componentised system with the capacity to deliver a full transformation is a great starting point. In areas that need immediate work – for example, aggregating and normalising data for a particular set of reports – function-specific modules can be used. Modular design means their use does not create new silos that will require further work to overcome in the future. Assuming that rules will change again is a safe bet.
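As a purely illustrative sketch – the source systems, field names and schema below are hypothetical, not drawn from any particular vendor or reporting regime – a function-specific normalisation module might look like this: each source keeps its native format, and a thin adapter maps its records into one common, report-ready schema, so adding a new source means adding an adapter rather than changing the systems already in place.

```python
# Minimal, hypothetical sketch of a function-specific normalisation module.
# Field names and source systems are illustrative only.
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class NormalisedTrade:
    """Common, report-ready representation shared by downstream reports."""
    trade_id: str
    instrument: str
    notional: float
    currency: str
    source_system: str


def from_front_office(record: dict) -> NormalisedTrade:
    # This (hypothetical) front-office feed quotes notional in thousands.
    return NormalisedTrade(
        trade_id=record["deal_ref"],
        instrument=record["product"],
        notional=record["size_k"] * 1_000,
        currency=record["ccy"],
        source_system="front_office",
    )


def from_risk_engine(record: dict) -> NormalisedTrade:
    # The (hypothetical) risk engine reports full notional but nests identifiers.
    return NormalisedTrade(
        trade_id=record["ids"]["external"],
        instrument=record["instrument_code"],
        notional=record["notional"],
        currency=record["currency"],
        source_system="risk_engine",
    )


# Registry of adapters: a new source system is one new entry here,
# not a change to the systems already feeding the report.
ADAPTERS: Dict[str, Callable[[dict], NormalisedTrade]] = {
    "front_office": from_front_office,
    "risk_engine": from_risk_engine,
}


def aggregate(batches: Dict[str, List[dict]]) -> List[NormalisedTrade]:
    """Aggregate and normalise records from every registered source."""
    normalised: List[NormalisedTrade] = []
    for source, records in batches.items():
        adapter = ADAPTERS[source]
        normalised.extend(adapter(r) for r in records)
    return normalised


if __name__ == "__main__":
    sample = {
        "front_office": [
            {"deal_ref": "FO-1", "product": "IRS", "size_k": 250, "ccy": "GBP"},
        ],
        "risk_engine": [
            {"ids": {"external": "RK-9"}, "instrument_code": "FXF",
             "notional": 1_000_000, "currency": "USD"},
        ],
    }
    for trade in aggregate(sample):
        print(trade)
```

The point of the sketch is the shape, not the detail: the report only ever sees the common schema, so the legacy systems behind each adapter can be replaced or retired incrementally without the reporting layer noticing.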

Cloud technology also offers great advantages in supporting a bank’s modern data architecture. By virtualising data, the cloud allows an enterprise to process it more easily within different applications, and crucially does not require legacy data storage to be disturbed. New services can be delivered faster, the cost of a cloud-based architecture is lower, and operational risk is limited because there is no need to install systems or hardware.

Change is more possible and practical as a result of these technologies. What makes them necessary is not the concern of regulators or the evolution of technology, but the impact that failure can have on customers. Banks should take the first step – communicating with vendors – and prepare a strategy to support incremental change, today.
