Post-trade finds its feet with AI
Large Language Models and Generative AI are poised to transform the world of post-trade, enabling firms to enhance client interactions and improve productivity. While AI will bring benefits for the industry, issues around data quality, regulatory compliance, operational resilience, and the risk that vital workplace skills could be lost must be addressed as the technology’s adoption gathers pace. Experts discussed these opportunities and challenges at TNF in Warsaw.
AI – A Driver of Change
Artificial intelligence (AI) – having previously occupied the orbit of science fiction writers and obscure academics – is now firmly embedded in the world of financial services, including post-trade.
While technologies such as machine learning and predictive analytics are already being used widely in post-trade (albeit with varying degrees of success), it is the latest wave of AI tools – popularised by Large Language Models (LLMs) and Generative AI – which is likely to have the biggest impact.
Yvan Mirochnikoff, Head of Digital Solutions at Societe Generale Securities Services (SGSS), participated on a panel at The Network Forum’s (TNF) Annual Meeting in Warsaw, where he – together with other industry leaders quoted below – discussed how AI could potentially revolutionise post-trade.
Enhancing the service proposition
Most experts agree that, by integrating LLMs and Generative AI into chatbots, next-generation AI will help transform client service and communications.
Our industry is no longer in the proof of concept (POC) stages for AI, but rather we are scaling up the number of applications for AI. One of the main areas where we believe AI will have the most impact is in customer support. We are constantly working on ways to improve our customer support. While AI can answer simple client queries through Chatbots, the priority has to be on developing tools that can answer complex or very market specific questions
A number of firms are also leveraging next generation AI to drive up productivity - and post-trade is no exception here.
A report by GitHub, for instance, found that developers completed coding tasks 55% faster, with higher success rates, when using GitHub Copilot, while MIT analysis showed ChatGPT users completed knowledge work 37% faster with comparable results1.
One of AI’s biggest benefits is its ability to succinctly summarise large data sets or detailed documents.
“Imagine if I received a 60-page document containing information about the latest anti-money laundering (AML) rules. By using ChatGPT, I can obtain a structured executive summary of that report, which is a huge efficiency benefit. A few years ago, we tried to test optical character recognition to read documents for tax reclaim purposes, but the results were disappointing. In contrast, the latest versions of AI can read manual documents and generate incredible results off the back of it,” commented Alexis Thompson, Head of Global Securities Services at BBVA.
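The summarisation pattern described above can be sketched in miniature. A production tool would send each chunk of the document to an LLM; the word-frequency scorer below is only a self-contained stand-in for that call, and the sample text, function name, and scoring rule are invented for illustration.

```python
from collections import Counter
import re

def extractive_summary(text: str, max_sentences: int = 2) -> str:
    """Crude stand-in for an LLM summariser: keep the sentences
    whose words occur most often across the whole document."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(words)

    def score(sentence: str) -> float:
        # Average frequency of the sentence's words across the document.
        tokens = re.findall(r"[a-z']+", sentence.lower())
        return sum(freq[t] for t in tokens) / max(len(tokens), 1)

    top = sorted(sentences, key=score, reverse=True)[:max_sentences]
    # Preserve the original order of the selected sentences.
    return " ".join(s for s in sentences if s in top)

doc = (
    "The AML directive introduces new reporting duties. "
    "Reporting duties apply to custodians and brokers. "
    "A glossary of terms appears in the annex."
)
print(extractive_summary(doc, max_sentences=2))
```

The structure – split, score each piece, keep the most informative parts – mirrors how long documents are condensed for an executive summary, whatever engine does the scoring.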
In the case of SGSS, Mirochnikoff said AI is being used to expedite document management processes during Know-Your-Customer (KYC) checks, an initiative which will not only deliver internal efficiencies but also improve the customer onboarding experience.
AI will ultimately help free up internal resources, allowing people to spend more time on revenue-generating or client-facing activities. Meanwhile, the productivity gains will enable providers to net meaningful savings – at a time when their margins are facing significant downward pressure.
Overcoming the data barriers
Although AI could unlock all sorts of strategic benefits, the technology is not without challenges.
According to Mirochnikoff, AI will only work well if it is fed with good quality and accurate data.
If AI is programmed using poor or error-strewn data, then the results and analytics it produces will be equally subpar, possibly leading to mistakes – or worse – financial losses at providers and their clients.
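The point can be seen with a deliberately tiny, invented example: a single mis-keyed record is enough to distort a summary statistic that any downstream model or analytic would consume.

```python
# Settlement times (in days) for five trades; all values are invented.
clean = [1.0, 1.1, 0.9, 1.2, 1.0]
corrupted = clean + [110.0]   # a fat-finger entry: 1.10 keyed as 110

mean_clean = sum(clean) / len(clean)
mean_corrupted = sum(corrupted) / len(corrupted)

print(round(mean_clean, 2))      # 1.04
print(round(mean_corrupted, 2))  # 19.2 -- one bad record, wildly skewed
```

Any AI trained on the corrupted series inherits the error, which is why lineage and validation checks belong upstream of the model, not after it.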
Ensuring that data is accurate, well structured, and has proper lineage must be a priority for the industry. This is easier said than done, however, and the problem is likely to get worse before it gets better.
Anders Hvid, Co-Founder of Dare Disrupt, said that the growing shortage of data available to train AI, together with mounting concerns about privacy, has led some technologists to train AI on so-called synthetic, or computer-generated, data.
Just as defective real-world data elicits bias and inaccuracies, so too can badly generated synthetic data, and this is a risk providers need to be aware of2.
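A minimal sketch of that risk, using an invented market mix: a naive synthetic generator that simply resamples its source data reproduces whatever skew the source already had.

```python
import random

random.seed(0)

# Invented example: 90% of the "real" training records come from one
# market, so the sample is biased before any generation happens.
real = ["EU"] * 90 + ["APAC"] * 10

# A naive generator that resamples the empirical distribution bakes
# the same skew into every synthetic record it emits.
synthetic = [random.choice(real) for _ in range(1000)]

share_eu = synthetic.count("EU") / len(synthetic)
print(share_eu)  # close to 0.9 -- the bias survives generation
```

Real synthetic-data tools are far more sophisticated than this resampler, but the lesson holds: generation does not launder bias out of the source data.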
Another issue facing custodians is that data is evolving fast and we are operating in a real-time environment, but the data we use to train AI needs to have a long-term focus. Building predictive analytics solutions requires firms to carefully incorporate both instant information and long-term data
Limitations around technology also need to be addressed if AI is to flourish.
“In our industry, we are always talking about cutting edge technology, but a lot of our systems infrastructure is based on old technology stacks. While AI is an inspiring technology, it is being constrained by legacy systems,” said Dr. Christian Geberth, Division Head Global Investor Services, at Raiffeisen Bank International.
Operational resilience and concentration risk considerations must also be factored in when using AI, with Virginie O’Shea, Founder of Firebrand Research, warning that banks need to avoid becoming excessively reliant on just a handful of AI providers.
Longer-term, the technology could be self-defeating if people come to depend on it too much, and start forgetting vital skills, a problem the industry cannot afford to ignore, added Mirochnikoff.
Navigating regulatory and compliance risk in a new AI world
Advancements in AI cannot come at the expense of regulatory compliance.
Firstly, the data underpinning the AI’s training has to be stored safely and used in a way that does not contravene data protection legislation, such as the EU’s General Data Protection Regulation.
Custodians will also need to be cognisant of the incoming EU AI Act, which takes a risk-based approach to regulating AI, categorising systems into four distinct risk buckets: unacceptable risk (e.g. AI systems used for social scoring, which are banned outright); high risk (e.g. AI used as a safety component in a product); limited risk (e.g. chatbots); and minimal risk3.
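The four-tier structure lends itself to a simple lookup during an internal AI inventory. The sketch below is purely illustrative: the tier names follow the Act's framework as described here, but which tier a given banking tool actually falls into is a compliance judgement, and the example use cases are invented.

```python
# Illustrative mapping of AI use cases to the AI Act's four risk tiers.
# Which tier a real tool falls into is a legal/compliance assessment,
# not something this table can decide.
RISK_TIERS = {
    "social scoring": "unacceptable",     # banned outright
    "product safety component": "high",
    "chatbot": "limited",                 # transparency duties apply
    "document summarisation": "minimal",  # the lowest tier
}

def classify(use_case: str) -> str:
    """Return the illustrative risk tier, defaulting to 'unknown'
    so unmapped use cases get escalated rather than assumed safe."""
    return RISK_TIERS.get(use_case, "unknown")

print(classify("chatbot"))         # limited
print(classify("credit scoring"))  # unknown -> needs assessment
```

Defaulting unmapped use cases to "unknown" rather than the lowest tier reflects the assessment duty Mirochnikoff describes below: tools must be classified before they are assumed low-risk.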
“We have two years to understand and implement the AI Act. We will need to assess the level of risks which our AI tools potentially pose, in addition to the AI risks that we are exposed to. I anticipate the vast majority of the AI banking tools will be categorised as low-risk applications,” said Mirochnikoff.
Even so, AI is notoriously opaque, and banks do need to be transparent about how they use the technology.
A lot of AI technology is a black box. Firstly, when using an external AI partner, you need to be very clear contractually about what you do and how you do it. For example, if you work with an external AI engine, you need to understand how it is designed and the sort of data it is using. When using certain AI models with statistical engines, answers are not always black and white. You must be clear with clients that these statistical engines do not always produce binary ‘yes’ or ‘no’ answers
Where do we go next?
Despite the various obstacles facing the technology, AI has the potential to transform post-trade and the way providers support their clients. In this increasingly competitive landscape, those custodians offering clients best-in-class services, enabled by AI, will be among the winners moving forward.
1 Boston Consulting Group – AI in Financial Services – Making it happen at Scale
2 Syntheticus – How to evaluate synthetic data quality
3 White & Case – May 21, 2024 – AI Watch: Global Regulatory Tracker – European Union