Making sense of the Data Lake

Today, receiving and providing accurate data is king. But rather than searching for more data, firms are now looking at how best to store, access and manage the data they already hold. How can you tap into your data lake not only to meet regulatory requirements, but also to start innovating by applying that data in new and interesting ways? And how do you manage your data protection responsibilities in the process?

Wherever you sit - buy-side, sell-side or vendor - there are challenges to overcome in meeting existing and forthcoming reporting requirements. Firms must grapple with intraday and intra-minute data feeds arriving from numerous external sources in differing formats, which must then be streamlined into a single format that flows easily into existing systems.
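
To make that normalisation step concrete, here is a minimal sketch in Python. It assumes two hypothetical incoming layouts - a CSV-style feed and a FIX-style tag/value message - and maps both onto a single internal record; the field names and record shape are illustrative, not any particular vendor's specification.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Illustrative internal record; a real system would carry many more fields.
@dataclass
class TradeRecord:
    trade_id: str
    isin: str
    quantity: float
    price: float
    executed_at: datetime

def from_vendor_csv(row: dict) -> TradeRecord:
    """Map a hypothetical CSV-style feed (string values, ISO timestamps) onto the internal record."""
    return TradeRecord(
        trade_id=row["TradeRef"],
        isin=row["ISIN"],
        quantity=float(row["Qty"]),
        price=float(row["Px"]),
        executed_at=datetime.fromisoformat(row["ExecTime"]),
    )

def from_fix_tags(msg: dict) -> TradeRecord:
    """Map a FIX-style tag/value dict onto the same record (standard tags 17, 48, 32, 31, 60)."""
    return TradeRecord(
        trade_id=msg[17],              # ExecID
        isin=msg[48],                  # SecurityID (assumed here to be an ISIN)
        quantity=float(msg[32]),       # LastQty
        price=float(msg[31]),          # LastPx
        executed_at=datetime.strptime(msg[60], "%Y%m%d-%H:%M:%S")  # TransactTime
                            .replace(tzinfo=timezone.utc),
    )
```

Whatever the source, downstream systems then only ever see one record shape, which is the essence of the streamlining described above.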

Once the data is captured, there is a further challenge in reporting it back to regulators in a timely and accurate fashion, with different regulations requiring different formats and levels of granularity. Under MiFID II, for example, a slew of data must be funnelled into a single transaction hub within the bank - a huge challenge for firms running legacy systems, or still confirming trades manually for illiquid securities. Failure to comply is not an option: regulators have shown little tolerance, imposing punitive public sanctions on firms that fail to get their data houses in order.

In the MiFID II era, firms must provide 65-field transaction reports covering details ranging from asset type, issuer and maturity to seller, transaction creator and end recipient. The advent of SFTR is expected to drive additional demand for transforming data from existing internal messaging formats, such as FpML or FIX, into ISO 20022 to meet the new reporting requirements.
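
As a rough illustration of that kind of transformation, the sketch below renders a normalised trade record as a simplified ISO 20022-style XML fragment. The element names are abbreviated placeholders rather than the actual MiFID II transaction-reporting schema, which mandates the full set of 65 fields; the values are illustrative only.

```python
import xml.etree.ElementTree as ET

def to_iso20022_stub(rec: dict) -> str:
    """Render a normalised trade record as a simplified ISO 20022-style XML fragment.

    Element names are placeholders; a real transaction report follows the full
    ISO 20022 reporting schema and carries every mandated field.
    """
    tx = ET.Element("Tx")
    ET.SubElement(tx, "TxId").text = rec["trade_id"]
    instrm = ET.SubElement(tx, "FinInstrm")
    ET.SubElement(instrm, "ISIN").text = rec["isin"]
    ET.SubElement(tx, "Qty").text = str(rec["quantity"])
    ET.SubElement(tx, "Pric").text = str(rec["price"])
    ET.SubElement(tx, "TradDt").text = rec["executed_at"]
    return ET.tostring(tx, encoding="unicode")

# Illustrative values only.
print(to_iso20022_stub({
    "trade_id": "EX-1001",
    "isin": "XS0000000001",
    "quantity": 500000,
    "price": 99.875,
    "executed_at": "2024-04-17T14:02:11Z",
}))
```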

Where does one even begin to unravel these data layers? One option, of course, is for firms to build large in-house teams to constantly monitor, manage and correct incoming and outgoing data - in effect, throwing bodies at the problem. But as the problem grows in complexity and the number of moving parts increases, so does the risk of human error. The alternative is to work with specialist fintech vendors that have a detailed understanding of the regulatory landscape, can integrate with existing systems, and can streamline the entire data capture and reporting process by automating previously complex and inefficient workflows.

If firms can ensure data is streamlined, verified, audit-ready and conformant with regulatory reporting standards, the pain associated with this process is virtually eliminated. Only once data processes have been improved can firms start to innovate and offer genuinely new products and services by applying technologies such as AI, robotics and machine learning. With all the data available today this is a real possibility, but first you must unravel and streamline the data that already exists. Putting meaningless data in still means getting meaningless data out. Find the solution that fits your business, and the conundrum is solved.
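
As a final, hedged sketch of what "verified and audit-ready" might mean in practice, the snippet below applies a small set of illustrative completeness and format checks to a record before submission; a production rule set would cover every mandated field with far richer logic.

```python
import re

# Illustrative subset of checks only; not a full regulatory validation rule set.
REQUIRED_FIELDS = ("trade_id", "isin", "quantity", "price", "executed_at")
ISIN_PATTERN = re.compile(r"^[A-Z]{2}[A-Z0-9]{9}[0-9]$")  # 12-character ISIN structure

def validate_record(rec: dict) -> list[str]:
    """Return a list of human-readable issues; an empty list means the record passes."""
    issues = [f"missing field: {f}" for f in REQUIRED_FIELDS if not rec.get(f)]
    if rec.get("isin") and not ISIN_PATTERN.match(rec["isin"]):
        issues.append(f"malformed ISIN: {rec['isin']}")
    if rec.get("quantity") is not None and float(rec["quantity"]) <= 0:
        issues.append("quantity must be positive")
    return issues

record = {"trade_id": "EX-1001", "isin": "XS000000001", "quantity": 500000,
          "price": 99.875, "executed_at": "2024-04-17T14:02:11Z"}
print(validate_record(record))   # flags the 11-character ISIN as malformed
```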

 

External

This content is provided by an external author without editing by Finextra. It expresses the views and opinions of the author.
