
NextGen: AI: Finding the right strategies to overcome AI limitations

The first afternoon panel discussed three of the major discussion points when it comes to AI: data, culture, and skills.


Editorial

This content has been selected, created and edited by the Finextra editorial team based upon its relevance and interest to our community.

The panel, titled ‘What are the solutions to our limitations’, was moderated by Finextra’s Gary Wright. The speakers were James Benford, executive director and chief data officer, Bank of England; Kshitija Joshi, PhD, vice president (data & AI solutions), chief data office, Nomura International; Kerstin Mathias, policy and innovation director, City of London; and Ed Towers, head of advanced analytics & data science units, Financial Conduct Authority.

The discussion started with what each panellist is currently working on in their organisation when it comes to AI. Towers spoke about the survey recently released by the FCA, which found that 75% of financial organisations are already using some form of AI, and 17% are already using some form of generative AI. The majority of use cases identified were in lower-materiality areas, with significant adoption observed in financial crime prevention and the back office.

Benford explained that the Bank of England has had an advanced analytics division for around 10 years, which has worked on about 100 projects involving what would be considered traditional AI. Specific examples included machine learning for policy setting and investigating the impact of unemployment on inflation. He continued that an AI task force was set up last year and has started rolling out generative AI more broadly, particularly to transform legacy code.

Joshi explained that she was the first person hired when Nomura International set up its centralised data science team three years ago. The idea was to ensure a centralised team existed that could oversee data governance and management principles. “All AI rests on the assumption that the underlying data is of good quality. In reality, that’s not really true, and we all understand that in financial services.” So the question was: how do you test for toxicity, bias, and hallucinations, and do so at scale?

From Joshi’s point of view, there are two distinct periods when it comes to AI adoption: before the rollout of ChatGPT and after it. “Before, if you went to stakeholders - not even about AI, but about deploying data analytics - it was a big no-no. After ChatGPT, everyone wanted to use it, and do something - anything - with AI.”

The conversation then turned towards education, as Joshi further explained how their team ensured the appropriate training and education was in place at Nomura International. Mathias emphasised that training was also a crucial priority at the City of London. “Our main objective is that London remains a leading financial centre,” she explained. “And AI helps with that.

“We look at it in three buckets. One is internal policies, two is investment, and three is skills. There has been a 150-fold increase in job ads that look for generative AI and conversational AI skills in the past 24 months. There is no way that those skills gaps can be plugged just by waiting for people to come through the pipeline. So upskilling and re-skilling the existing workforce is crucial - and your data systems, and your legacy systems, are a part of that.”

When it comes to addressing risk in AI models, Benford emphasised the need to build solid model and risk frameworks. “We’ve gone down the path of looking at all internal policies, and how to allocate resources and focus on the low hanging fruit,” he explained.

“Traceability to the source document is an important guardrail. It’s not just the model, it’s the context. It’s all the data you’re using to build your model. The context is changing as the organisation’s knowledge base evolves. You cannot predict how a model will respond in six months’ time. Stress testing is crucial here.”

Lastly, Towers stressed the importance of collaboration with regulators, and how the FCA helps address this. “We published an AI update last year in response to a request from the government, which lays out how our current policies, like the Consumer Duty, apply to AI. But it’s really the time for engagement and collaboration between industry and regulators.”
