How much more change can banks’ creaking infrastructures endure?

Facing a volatile economic and geopolitical outlook, competition from new entrants and a never-ending slew of new regulations, it’s hardly news that many organisations’ infrastructures are being pushed to the brink. Regulatory deadlines are aggressive and non-compliance is not an option, so inevitably vast human and financial resources are thrown – by hook or by crook – at getting the job done. Thus endless ‘temporary’ fixes and workarounds are layered over antiquated architectures – with the very best of intentions to sort out a cleaner, more efficient solution after the fact. Sadly, these strategic solutions are typically de-prioritised, as hot on the heels of the initial requirement comes the next major regulation, rapidly consuming all available resources.

The inevitable consequence is not pretty: ever more system proliferation, spiralling complexity, mushrooming maintenance – as well as significant operational risks of keeping the ugly beast alive and running.

Beyond the specific reporting requirements, there is also an increasingly assertive and probing regulatory presence, with supervisors not just checking compliance with the requirements but making judgements about the spirit as well as the letter of the law. We now expect ‘mystery shopping’ phone calls: “Tell me your global exposure to Greece…” or, more subjectively, “How are your exposures to the London property market impacted by problems in Greece?” Navigating the anatomy of a bank to answer these granular enquiries requires PhD-level dissection skills. Data has to be aggregated from black-box legacy systems, spreadsheets and other disparate sources – each using different data definitions, formats and structures – in other words: slow, laborious and expensive.

Against this somewhat grim picture, there are glimmers of change on the horizon. A recent Morgan Stanley/Oliver Wyman blue paper, “The World Turned Upside Down”, predicts that in the coming years wholesale banks are likely to enjoy a ‘tempering of regulation and rising rates’, leading to ‘increasing capacity and revenue’. This will enable a new focus on improving operating models and allow banks to unlock major savings through the adoption of new technology. Indeed, the paper predicts that banks could release 12–15% of their cost base within 5 years simply by harnessing currently available technology. If we extrapolate and add in rapidly emerging technologies – which within those same 5 years will be capable of dramatically more than they are today (I’m thinking scalable ‘big data’, AI and the like) – it’s not a big stretch to predict that the scale of potential savings will be transformational.

Meanwhile, the talk at every regulation-related industry event is of standardisation and simplification… If only all these global regulators could talk to each other and agree on one standard data model we could use for everything – wouldn’t that make our lives dramatically simpler?

Well, yes… to a point.

I’m a huge fan of standardisation. There are vast numbers of counterparties and securities out there for which it should be pretty straightforward. Currently, unbelievable amounts of non-value-adding effort are expended in banks across the globe in pursuit of a single, common view of a client. There are some great initiatives out there striving to tackle this problem. Hats off to Francis Gross of the European Central Bank for pushing through the significant bureaucratic and political hurdles to get the global LEI framework off the ground and transform the world of reference data. And there are many other notable initiatives, such as FIBO, ISDA, FpML, FIX…

But just how far can standardisation solve the complexity conundrum?

Let’s consider the major data vendors. For much of the more straightforward data, you would imagine they would all agree. Sadly, this is not always the case. Dig beneath the surface and you will discover subtle differences in definitions, semantics, update policies and more. Indeed, the challenge reaches beyond the actual data into language and meaning. I’m sure I can’t be the only one who has tripped up during the first few weeks at a new institution because of slight (but critical) differences in the interpretation of common terms.
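
To make this concrete, here is a minimal sketch – with entirely hypothetical vendor records, identifiers and field names – of how two providers can describe the same instrument using the same labels but different semantics, so that a naive comparison looks like a data-quality break when it is really a definitional one:

```python
# Hypothetical feeds: the field names and values below are invented for illustration.

# Vendor A reports a bond's 'yield' as yield-to-maturity, in percent,
# and 'country' as the country of incorporation.
vendor_a = {"isin": "XS0000000000", "yield": 4.15, "country": "GB"}

# Vendor B reports 'yield' as current yield, as a decimal fraction,
# and 'country' as the country of risk.
vendor_b = {"isin": "XS0000000000", "yield": 0.0397, "country": "IE"}

def naive_compare(a: dict, b: dict, fields=("yield", "country")) -> dict:
    """Flag fields whose raw values differ -- without semantic context,
    every difference looks like a data-quality break."""
    return {f: (a[f], b[f]) for f in fields if a[f] != b[f]}

print(naive_compare(vendor_a, vendor_b))
# {'yield': (4.15, 0.0397), 'country': ('GB', 'IE')}
# Neither vendor is 'wrong' -- the definitions simply differ, and only a
# documented semantic mapping can reconcile them.
```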

As we take a step towards the more esoteric, the differences become even more apparent. Each part of a bank will prefer different sources, depending on what it perceives as advantageous – which will be determined by the line of business, history and convention, the data model within its existing IT systems, and so on. For complex structured products, getting to a common view may be trickier still: should a composite transaction be split into its component parts, or represented as a single transaction?
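
As a purely illustrative sketch (the product, identifiers and notionals are invented), here are two equally valid representations of the same structured trade – one as a single composite transaction, one decomposed into legs – which will produce different aggregations depending on which convention a given silo has adopted:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Trade:
    trade_id: str
    product: str
    notional: float

# View 1: a single composite transaction.
composite = Trade("T-001", "convertible_bond", 1_000_000)

# View 2: the same economics split into component legs.
legs: List[Trade] = [
    Trade("T-001-A", "corporate_bond", 1_000_000),
    Trade("T-001-B", "equity_call_option", 1_000_000),
]

# Both views are 'correct'; aggregations (e.g. notional by product type)
# will differ depending on which convention a system has adopted.
```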

Finally, there’s the human side of the equation. Banks are large, complex beasts which naturally divide and conquer. Silos evolve their own ways of working, vocabulary and data models, adapted to their specific needs and objectives. It may appear a no-brainer that to maximise speed and agility we need to align data across operations, internal reporting and compliance – but the reality may be trickier. As anyone working on FRTB will be painfully aware, Financial Control and Risk have divergent views on the same data because their objectives are fundamentally different. There is undoubtedly work that can be done to bring these views closer – but is it really feasible to reach a single, bank-wide view of the world? Moreover, organisational silos face the classic prisoner’s dilemma: compromise and collaboration may promise the best outcome for all, but everyone must be convinced that everyone else is up for the ride.

Thus the journey toward standardisation is a worthy one – but the path is unlikely to be smooth. And the further we progress, the harder the remaining fruit will be to reach.

At some point, progress is likely to stall. Most banks are still siloed behemoths struggling with disparate IT systems. Internal standardisation remains a herculean task – let alone reaching agreement in the global arena.

And is it realistic to expect global regulators to converge toward greater standardisation in a world of ever-increasing geopolitical divergence?

So beyond data standardisation, what else can be done?

Thankfully standardisation is not the only weapon in our arsenal in the war on complexity.

There is plenty of scope to get a lot smarter about how we manage the vast volumes of disparate, complex data that are endemic to capital markets.

Indeed, even if we were to converge on a single data model for all external reporting, this would not change the fact that banks are reliant on thousands of legacy systems – all supplying data in different formats. Each time there is a material change to the way data is stored at source, the interfaces of all downstream systems must be updated. Likewise, whenever a new reporting requirement arises, the interfaces from all source systems may need to be updated to supply additional data. Given the pace of regulatory and business change – and the fact that a medium-sized investment bank may have tens of thousands of interfaces – this creates an ongoing industry of systems integration.
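
Some rough, purely illustrative arithmetic shows why the interface count balloons: if every source system is wired point-to-point into every downstream consumer, the number of interfaces is roughly the product of the two, whereas landing data once into a shared layer (essentially where the ‘schema agnostic’ idea in point 2 below leads) keeps it closer to the sum:

```python
# Back-of-the-envelope only: the figures are invented to illustrate the scaling.
source_systems = 400        # hypothetical medium-sized investment bank
downstream_consumers = 60   # risk, finance, regulatory reports, etc.

point_to_point = source_systems * downstream_consumers
shared_layer = source_systems + downstream_consumers  # each system lands its data once

print(point_to_point)  # 24000 bespoke interfaces to build -- and to revisit on every change
print(shared_layer)    # 460 feeds if data is landed once and reused
```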

Thus to reduce the complexity of banking IT, it’s not sufficient to lessen the burden of external requirements; it’s also critical to look within.

Four smart ways to tackle complexity:

1. Go Scalable

The proliferation of systems across banks has largely been driven by the fact that existing systems are generally operating at the limit of their capabilities. They simply can’t handle any more volume or functionality. Thus if you want to do more business or have a new requirement, there’s no choice but to put in a new system.

True scalability removes those limits: processing capacity grows linearly with the computing resources you add – opening the door to extending and consolidating systems, rather than adding yet more interfaces.
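
As a minimal sketch of what scaling linearly means in practice – the per-trade function below is a trivial stand-in for something genuinely compute-heavy, such as a revaluation or risk measure – work that can be partitioned cleanly can simply be spread over more workers or nodes as volumes grow, rather than spilling into yet another system:

```python
# Illustrative only: partition an independent per-trade calculation across workers.
from concurrent.futures import ProcessPoolExecutor

def revalue(notional: float) -> float:
    # Stand-in for an independent, compute-heavy per-trade calculation.
    return notional * 1.01

if __name__ == "__main__":
    trades = [float(i) for i in range(100_000)]
    # Because each trade is independent, the workload can be spread across
    # however many workers (or nodes) the volumes require -- adding capacity
    # rather than adding another system.
    with ProcessPoolExecutor(max_workers=8) as pool:
        revalued = list(pool.map(revalue, trades, chunksize=5_000))
    print(len(revalued))  # 100000 trades processed in parallel
```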

Of course, it will take years to undo the complexity created by decades of non-scalable systems, but if we could reach a point (sadly we’re still some way off) where all new implementations were truly scalable, then we could be confident we were beyond ‘peak complexity’.

2. Think Schema Agnostic

‘Schema agnostic’ basically means a system that takes in data in whatever format it is readily available in, rather than building a bespoke interface which maps everything to a custom data model.

Sadly, the concept often gets a bad rap. It’s associated with huge ‘data lakes’ which suck anything and everything into a black hole – never to be seen again.

This is a shame, as the schema-agnostic concept is fundamentally a very good one. It brings the time to ingest data from new systems down from weeks or months to just hours. It means that business-critical logic is defined on an ‘as needed’ basis – rather than wasting expensive resources mapping everything under the sun that might potentially come in handy at a later date. And it avoids major interfacing re-work each time a new, unforeseen requirement arises.

It’s kind of analogous to the car industry moving from vast stores of expensive inventory to Just-in-Time manufacturing…

So let’s hold judgement on the data lakes (hell – why not even go for a ‘data ocean’?) – what really matters is that the data is subsequently both accessible and usable.
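
To illustrate the principle (the system names, field names and figures are all hypothetical), here is a minimal sketch of schema-agnostic ingestion: records are landed exactly as the source systems supply them, and the mapping needed to answer a question like the ‘exposure to Greece’ one above is defined only when that question is actually asked:

```python
import json

raw_store = []  # in practice a queryable document store / 'data lake'

def ingest(source: str, record: dict) -> None:
    """Land the record as-is -- no upfront mapping to a canonical model."""
    raw_store.append({"source": source, "payload": record})

# Two hypothetical source systems describing similar economics in different shapes.
ingest("legacy_loans", {"cpty": "ACME LTD", "ctry": "GR", "bal": 25_000_000})
ingest("deriv_system", json.loads('{"counterpartyName": "ACME LTD", "riskCountry": "GR", "mtm": 1_200_000}'))

# Field mappings defined on an as-needed basis, when a report asks for them.
EXPOSURE_FIELD = {"legacy_loans": "bal", "deriv_system": "mtm"}
COUNTRY_FIELD = {"legacy_loans": "ctry", "deriv_system": "riskCountry"}

def exposure_to(country: str) -> float:
    return sum(
        r["payload"][EXPOSURE_FIELD[r["source"]]]
        for r in raw_store
        if r["payload"][COUNTRY_FIELD[r["source"]]] == country
    )

print(exposure_to("GR"))  # 26200000 -- answered without re-plumbing either source system
```

The point is not the specific code, but that adding a new source or a new question requires only a new lightweight mapping – not a rebuild of every upstream interface.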

3. Ruthlessly pursue simplicity

This one’s kind of obvious – but it’s remarkable how often we see banks going to great lengths to replace a sprawl of legacy systems with a new architecture that eventually proves almost as complex – leading to reliability issues and high running costs.

I believe the key here is a relentless commitment to de-layering. Once scalability is in place, there is really no reason why each new business requirement cannot be used as an opportunity to de-layer and simplify. It will require strong leadership of the IT architecture to resist the inevitable internal pressure to cobble together ‘quick & dirty’ fixes to meet urgent requirements – but just a little more effort invested up-front will unlock substantial long-term gains in cost and agility.

4. Don’t forget the people

Technologists are apt to assume that great tech can solve all ills. Working at a tech-heavy start-up, we’ve been guilty of this on multiple occasions, and we’re learning that commercial success is all about winning hearts and minds. The reality is that the best technology in the world is useless if the people it impacts don’t see the value or aren’t committed to it – and in the case of anything designed to reduce complexity, we also need to be sensitive to the reality that some may feel threatened by the new approach.

To Summarise:

IT within investment banks is reaching complexity breaking point – making regulatory compliance extremely challenging and causing a major drain on profitability.

The war on complexity has 3 major fronts:

Firstly, and most commonly debated, is the drive to converge and standardise regulatory reporting data – simplifying outputs and easing the external reporting challenge. This is a vital piece of the puzzle and there are some great initiatives ongoing. There is also plenty of scope to align data, language and meaning across internal silos – but given the inherently different needs and objectives of the key functional areas, is it really realistic to reach a single view of the world?

In parallel, banks need to look internally to embark on a path toward the gradual consolidation of their vastly complex architectures. Simpler, more transparent IT is just as critical to the complexity challenge as data standardisation. Indeed, institutions that succeed in modernising their ageing, complex and inefficient IT infrastructures will unlock a huge cost and agility advantage.

Finally we must not ignore the human element of the picture. We may like to think of ourselves as rational bankers, but the neuroscience suggests otherwise. The limbic (or emotional) part of our brain is central to our decision-making. Thus even the most brilliant technology or data standardisation model is doomed to fail unless the people who will use it are convinced and committed.

——————————————————————————————————–

Thank you for taking the time to read my musings; I’d love to hear what you think: How far will we get down the road to data standardisation by 2022, or even 2027? What will investment banking IT look like on those time horizons? Will we have embedded simple, scalable technologies into the heart of core banking – or will the complexity beast be bigger and uglier than ever?

Please share your thoughts below!