
NextGen: AI: How the intelligence revolution is driving what AI can do for banking

At NextGen: AI 2024, taking place for the first time in London, Dr Joe Lyske, co-founder and managing partner of Time Machine Capital², presented the keynote, ‘What can we do with AI?’



Dr Lyske highlighted the transformative potential of AI in banking, pointing to how his company uses AI technology to stress test various scenarios and come up with detailed solutions. There are many positives to the influx of AI integration across every field, but there are also ethical problems to be considered, he noted, such as the use of AI to replicate human behaviour, potentially bypassing human input.

“Our next revolution is here. I know it's been spoken about today already, but it's the intelligence revolution, data converted to information, converted to knowledge and processed with wisdom to predict an outcome, to help us reduce risk and increase certainty. After all, prediction of the future is what intelligence is all about. Now, through this lens, how does it change the perception of what AI can do for us?”

Turning to risk management, Dr Lyske stated that AI has the ability to price and manage risk, which could shape compliance and product creation; for example, by tokenising routine behaviours and monitoring crypto wallets and social media for more accurate KYC and AML practices. Dr Lyske then discussed the possibilities of Web3.0, which redefines how people interact with each other, their data, and institutions.

He stated that with the rise of social media, data is owned by the platform and permissions are extended to the individual. Banking is built on trust, but “individuals are tired of this notion of trust and permission, and have consequently embraced technologies that allow them to do what they want to do in a trust-less and permission-less way. This is the cornerstone of Web3.0 as a social, political and economic endeavour.”

He continued that cryptocurrencies have satisfied this need.

Dr Lyske stated that through the development of AI, there is a future where AI can ratify global identities and facilitate seamless transactions across borders. He explained how banks can act as ‘blockchain oracles’, connecting blockchains to external systems based on inputs from the real world. Banks can grant blockchain permissions, ratify customer identities and legitimise transactions; they hold the key to verification and monitoring through the use of AI.
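Dr Lyske did not go into implementation detail, but the oracle role he describes can be illustrated with a minimal, hypothetical sketch in Python: the bank verifies a customer off-chain and issues a signed attestation that an on-chain verifier could later check. All of the names, the HMAC-based signing and the placeholder KYC check below are simplifying assumptions for illustration, not anything presented at the event.

# Illustrative sketch only: a bank-run "oracle" service that verifies a customer
# off-chain and returns a signed attestation that a smart contract could check.
# Function names, fields and the HMAC scheme are hypothetical simplifications.
import hashlib
import hmac
import json
import time

BANK_SIGNING_KEY = b"demo-key-not-for-production"  # hypothetical signing secret

def verify_identity(customer_id: str) -> bool:
    """Stand-in for the bank's internal KYC/AML checks."""
    return customer_id.startswith("KYC-")  # placeholder rule for the sketch

def attest(customer_id: str, wallet_address: str) -> dict:
    """Issue a signed attestation that the wallet belongs to a verified customer."""
    if not verify_identity(customer_id):
        raise ValueError("customer failed verification")
    payload = {
        "wallet": wallet_address,
        "verified_at": int(time.time()),
        "issuer": "example-bank",
    }
    message = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(BANK_SIGNING_KEY, message, hashlib.sha256).hexdigest()
    return payload

def check_attestation(attestation: dict) -> bool:
    """What an on-chain verifier would do, reduced here to an off-chain HMAC check."""
    claimed = attestation.get("signature", "")
    body = {k: v for k, v in attestation.items() if k != "signature"}
    message = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(BANK_SIGNING_KEY, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)

if __name__ == "__main__":
    attestation = attest("KYC-12345", "0xABCDEF")
    print("attestation valid:", check_attestation(attestation))

In a real deployment the attestation would be signed with a key the chain can verify and checked by a smart contract, with AI sitting behind the verification step to flag anomalous wallets or transactions, in line with the monitoring role Dr Lyske described.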

The first panel session of the event, ‘Where next with AI?’ was moderated by Daniel Szmukler, director of the Euro Banking Association (EBA), with panellists Dr Jochen Papenbrock, head of financial technology EMEA at NVIDIA, and Jeff Tijssen, global head of fintech at Bain & Company.

Szmukler stated that the EBA believes AI is going to be a transformational technology in the coming years because of its “human touch”. He continued: “AI immediately touches you, because it has very real applications that everyone, intergenerationally speaking, can make use of. That is something that is quite unique about this technology; it's very tangible in the right use cases and applications, which are manifold.”

When asked about AI unpredictability and challenges with AI adoption, Tijssen emphatically stated: “The challenge with financial services, from a regulatory perspective particularly, is that 97% or even 99% accuracy isn't enough, and therefore you need to work towards 100% accuracy.” He outlined how a big challenge lies in explainability, which leads to complex discussions about how to ensure it is offered to customers, users, and stakeholders consistently.

Dr Papenbrock agreed with Tijssen’s points, and stated that another sizable obstacle to adoption is the lack of focus on technology:

“AI needs a lot of technology to be able to be successful. At Nvidia, we think about this in terms of the AI factory. So a bank or financial organisation is an institution that produces financial intelligence, made by humans, served by humans, and AI needs to be in the loop wherever it can be. This is the ‘AI Centre of Excellence’. It's based on an AI factory that serves all these needs, because running huge models, LLMs and so forth, is very expensive if you don't do it in a proper way.”

He detailed that the ‘AI factory’ is where AI processes can be customised, activated, and orchestrated.

Tijssen then noted that the lack of a regulatory framework is also a major concern, and stated that banks need to gain new skill sets to effectively realise the potential of AI. He highlighted that a mindset shift is essential to drive adoption, and that organisations need to be committed to driving growth in every aspect of their business and to looking for new opportunities, products, and initiatives.

Szmukler said that banks are focusing on technology elements over cultural needs, which can have a negative impact on their reputation and customer relationships.

He explained: “I think it's fair to say that banks by culture are quite conservative, because the single biggest asset you put forward as a bank is the trust that your customers have in you. If you cannot really audit how AI has come to certain conclusions, if there's this ‘black box’ phenomenon, if it's potentially unethical or not fair in its outcomes, this would cause huge, tremendous reputational damage to a bank, and it would deprive the bank of the ability to build on its biggest single asset, which is trust. It's important that we look at all the vectors at play; technology, data, and they all come back to culture.”

When discussing solutions for AI adoption, Dr Papenbrock said that there needs to be an interoperable platform to process data at scale, and that accelerated computing programs can improve data curation and processing. He added that it is important to have a space to test and fine-tune AI models to ensure they are safe and accessible.

Concluding the panel, the speakers emphasised the need for explainable AI to prevent bias and the significance of education and training for AI adoption. Tijssen noted that while education and an understanding of the theory behind AI are essential, there need to be people at the executive level pushing for AI adoption and improvement for it to be implemented.

Dr Papenbrock stated that Nvidia's Deep Learning Institute offers free courses, and that the company is looking to democratise AI access through educational programmes.
