Solving for Post-Trade Data: Enabling, not Disabling, Innovation
Held on 7th July, 2020
Following on from our recent publication, “Solving for Post-Trade Data: Enabling, not Disabling, Innovation in Capital Markets”, Ascendant Strategy hosted a panel discussion on the challenges of data management and innovation in the post-trade space. In our original report, we highlighted the significant opportunities for innovation associated with data remediation in the trading and execution space, and posed the following questions: “Given the value of solving the data conundrum – as seen within trading – and the cost opportunity – within post trade – why has progress been so slow? And more importantly, what should organisations be doing differently to try and solve the problem?” The panel addressed these questions over the course of an in-depth, open and frank discussion moderated by Clive Posselt, Principal Consultant at The Realization Group. Panellists included Alastair Rutherford, Managing Director of Ascendant Strategy; Lucy Watson, co-founder at Cyoda; and Chris Wells, Managing Director at Nomura. The panel spanned both the Fintech and traditional banking sectors, giving us a unique opportunity to explore the challenges of post-trade data from different perspectives.
The barriers to post-trade data efficiency
The panel began by examining the nature of post-trade data challenges. It was noted that these do not arise from externally sourced data, such as market data; rather, they stem from the inconsistency of data held internally within organisations. The ideal data source would be a single repository containing all trade, position, market and other data, available in the formats and timescales required by users. In reality, this is rarely possible: data is fragmented across multiple systems, and many firms run multiple trading systems, each with its own data model. This inconsistency manifests itself as inefficiencies and breakages in front-to-back flows, as well as across horizontal aggregations of data for risk, finance and analytics. There is also significant variance in how mature organisations are in their data ownership models. When issues arise, it may not always be clear who owns the underlying data, and therefore who owns the problem.
Lucy Watson summarised this as the challenge of accessing required data when it is disparate and fragmented in silos across the organisation, and then bringing those different pieces of data together to create a coherent view. She identified the underlying challenges as fundamentally technological: systems are opaque, and it is difficult to know what processing and transformation is occurring behind the scenes. Under these circumstances, it is not easy to look under the hood to understand data flows and how data is transformed across the architecture. Alastair Rutherford went on to describe the obstacles posed by differing data structures and multiple representations of trade data. As trades flow down into the post-trade world, these representations need to be translated into a common format before they can be used in a meaningful and consistent manner, a task that is often simply not possible.
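The translation step described here can be sketched in a few lines of code. Everything below is illustrative: the two source formats, field names and unit conventions are hypothetical stand-ins for the kind of divergence found between real trading systems.

```python
# Hypothetical: two trading systems represent the same trade differently.
# Normalising both into one canonical record is the translation step that
# must happen before post-trade data can be used consistently.

from dataclasses import dataclass

@dataclass(frozen=True)
class CanonicalTrade:
    trade_id: str
    instrument: str
    quantity: float
    price: float
    currency: str

def from_system_a(raw: dict) -> CanonicalTrade:
    # "System A" (assumed): flat keys, price quoted in minor units (pence)
    return CanonicalTrade(
        trade_id=raw["id"],
        instrument=raw["isin"],
        quantity=float(raw["qty"]),
        price=raw["px_minor"] / 100.0,
        currency=raw["ccy"],
    )

def from_system_b(raw: dict) -> CanonicalTrade:
    # "System B" (assumed): nested structure, price already in major units
    econ = raw["economics"]
    return CanonicalTrade(
        trade_id=raw["tradeRef"],
        instrument=raw["instrument"]["isin"],
        quantity=float(econ["quantity"]),
        price=float(econ["price"]),
        currency=econ["currency"],
    )

trades = [
    from_system_a({"id": "A-1", "isin": "GB00B03MLX29", "qty": 100,
                   "px_minor": 52310, "ccy": "GBP"}),
    from_system_b({"tradeRef": "B-7",
                   "instrument": {"isin": "GB00B03MLX29"},
                   "economics": {"quantity": 50, "price": 523.10,
                                 "currency": "GBP"}}),
]

# Once normalised, aggregation across systems becomes trivial:
total_qty = sum(t.quantity for t in trades)
print(total_qty)  # 150.0
```

The difficulty in practice is not writing such mappings but discovering, maintaining and governing hundreds of them across opaque systems, which is precisely the panel's point.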
Identifying workable solutions to unworkable obstacles
Chris Wells echoed the other panellists' view that standalone data remediation projects justified purely on ROI are simply not feasible, given the scale and cost involved. Remediation should instead be incorporated into regulatory deliveries and tackled with a more incremental approach, driven by regulatory requirements and timelines. With many organisations focussing their resources on regulatory compliance, data quality forms an important and integral part of these deliveries (particularly when it comes to regulatory reporting), and regulatory projects therefore provide an ideal environment in which to implement improved data management.
The panellists broadly agreed that a big-bang approach to data remediation is rarely either affordable or successful. Alastair Rutherford introduced the concept of “data jails”, in which organisations find data trapped within legacy applications with no effective way to access it. In these situations, great value can be gained by tackling the problem incrementally, building interfaces for the bidirectional transfer of data that open that data up to the wider organisation and data architecture.
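One incremental pattern consistent with this idea is a thin adapter around the legacy store: rather than replacing the application, wrap it with a small bidirectional interface. The sketch below is purely illustrative; the class names are hypothetical and the legacy side is stubbed with an in-memory dictionary.

```python
# Sketch of a "data jail" adapter: wrap a closed legacy application's
# store with a small bidirectional interface, so other systems can read
# from it and write back to it without a big-bang replacement.

class LegacyStore:
    # Stand-in for the legacy application's internal data store.
    def __init__(self):
        self._rows = {"T1": {"status": "SETTLED"},
                      "T2": {"status": "PENDING"}}

    def fetch(self, key):
        return dict(self._rows[key])

    def update(self, key, row):
        self._rows[key].update(row)

class JailbreakAdapter:
    """Exposes legacy data in an organisation-wide format, both ways."""
    def __init__(self, store: LegacyStore):
        self._store = store

    def read(self, trade_id: str) -> dict:
        row = self._store.fetch(trade_id)
        # Translate the legacy vocabulary into the shared one.
        return {"trade_id": trade_id, "state": row["status"].lower()}

    def write(self, trade_id: str, state: str) -> None:
        # Translate back before pushing into the legacy system.
        self._store.update(trade_id, {"status": state.upper()})

adapter = JailbreakAdapter(LegacyStore())
print(adapter.read("T2"))   # {'trade_id': 'T2', 'state': 'pending'}
adapter.write("T2", "settled")
print(adapter.read("T2"))   # {'trade_id': 'T2', 'state': 'settled'}
```

Each adapter delivers value on its own, which is what makes the incremental approach fundable where a wholesale migration is not.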
Lucy Watson suggested that firms start small and keep an open mind towards innovative Fintech solutions, investing relatively small amounts of funding and resource in proofs of concept or pilots to test new approaches, with rigorous controls and measures around them. Fintechs, especially those established by industry veterans who understand both the technological and cultural constraints around data remediation, can play an important role in developing novel solutions to long-standing problems. Rutherford added that unstructured data is a prime example of where technology innovation can make a huge impact, for example through solutions that use AI/ML to process unstructured data from spreadsheets, chat, email, voice and other sources.
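As a toy illustration of the goal, turning unstructured text into structured data, the sketch below pulls trade economics out of a free-text chat message using a simple pattern match. The message format is invented, and production AI/ML solutions of the kind the panel refers to go far beyond regular expressions; the point is only what the output of such extraction looks like.

```python
import re

# Toy example: extract structured trade details from a free-text chat
# message. The message format is assumed for illustration only.
MESSAGE = "pls book BUY 250 VOD.L @ 72.35 GBP for fund ALPHA"

PATTERN = re.compile(
    r"(?P<side>BUY|SELL)\s+(?P<qty>\d+)\s+(?P<ticker>\S+)\s+@\s+"
    r"(?P<price>\d+(?:\.\d+)?)\s+(?P<ccy>[A-Z]{3})"
)

match = PATTERN.search(MESSAGE)
if match:
    trade = {
        "side": match["side"],
        "quantity": int(match["qty"]),
        "ticker": match["ticker"],
        "price": float(match["price"]),
        "currency": match["ccy"],
    }
    print(trade)
    # {'side': 'BUY', 'quantity': 250, 'ticker': 'VOD.L',
    #  'price': 72.35, 'currency': 'GBP'}
```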
For Chris Wells, technology is one part of the puzzle, but data ownership and governance within the organisation are fundamental to getting it right and to moving along the data maturity curve. A shift in mindset also needs to take place within organisations, away from process ownership and towards data ownership. When making the benefits case for data remediation, it is vital to go beyond ROI and bring in the regulatory requirement and the cost of not getting it right. Alastair Rutherford concluded by reiterating one of the key points of the original paper: it is necessary to take a business-led approach to finding a solution, especially given that data remediation can improve overall business outcomes. This can also strengthen the case for shifting some of the innovation budget typically applied to the front office into the post-trade environment.