"AI is the new electricity" - TSB's Janet Adams frames the debate at Finextra conference

"AI is the new electricity" - TSB's Janet Adams frames the debate at Finextra conference

Finextra welcomed over 200 attendees to NextGen Banking London, the one-day event that focused on how developments in AI have become a business-critical consideration for traditional banks.

Janet Adams, head of AI at TSB Bank, set the tone with her statement that “AI is the new electricity”, which resonated throughout the day and gave the other keynote speakers and panellists a central point to riff off, or indeed disagree with.

Adams argued that artificial intelligence has the potential to power everything we do in the future and to help banking customers through the wealth creation stage of their lives. She explained that “only when you get into deep neural networks, AI beats humans. Deep learning has got everyone excited and has put AI on the map. No one has found deep learning’s limit to learn.”

Models for AI success

On the B2B side, Marc Corbalan, product management at Vocalink, highlighted that AI “is already moving the needle today by solving business challenges that previously could not be solved”. With a focus on fraud, Corbalan stressed the importance of automation in determining whether fraudsters are human or are themselves automated processes (robots), and noted how significant this could be for Open Banking, particularly where multi-layered scams occur.

“AI is a tool for now and we expect much greater reliance on this technology in the very near future. The AI world has benefited from the collaborative framework,” Corbalan said, going on to highlight the benefit of the sheer computing power available today.

Despite the hype around demystifying the technology, Adams pointed out that business models cannot succeed without proper education of staff in financial services; only then can strategic advantage be gained. “Data equals training equals insight.”

Roshan Rohatgi, AI lead at RBS, agreed and added that “everyone is keen to use this stuff, but the system, the fabric, is not mature yet. It’s all well and good to go from POC to pilot, but it never really reaches the real world.”

The discussion moved on to the danger of AI widening the gender pay gap. Adams suggested that in countries such as China, India and Romania, where digital skills are taught in schools, this would be less of a problem for girls.

“There are so many industries that will be impacted by AI, but we always overestimate the technology change in the first year and underestimate it for the decade,” Adams said, referencing the Apple iPhone. She continued: “10% of the AI project is AI - the easy bit. 40% is the data, where you’re getting it from and cleaning it and 50% is the regulatory risk.”

AI FOMO

The hype discussion continued in the keynote from Karan Jain, Westpac’s head of technology for Europe and the Americas, who explained that a lot of the discussion about AI is driven by FOMO - fear of missing out. This “FOMO generation” has different expectations and wants “their banking services to be available in a couple of clicks.”

A later panel discussion argued that this FOMO also exists within corporate banking, where a board - having heard about the technology in the news - may ask executives whether they are working with artificial intelligence, only for the executives to reveal that their bank has already been using machine learning for several years.

The point on moving from POC to pilot to production was also discussed here. Abhijit Akerkar, head of applied sciences, business integration at Lloyds Banking Group, highlighted that before a new technology is introduced into the infrastructure, it must be understood and the business use case established. However, Dan Reid, CTO and founder of Xceptor, took the view that if the technology is poorly understood, it might be a good idea to “try it out” and then see what it can do practically.

Akerkar went on to say that the data science lifecycle is different to the traditional IT product lifecycle. With new technology, it is a “discovery process, but you are still trying to figure out where you will end up at the end. You will learn something after the POC or pilot, rather than just ticking a box.”

Reid added: “The hype around AI has created the impression that there is such a thing as an AI solution, but AI plays a part in the products and services that we offer, and in the future, will play a part in everything we do.”

Governing AI

Maciej Janusz, head of cash management, Nordic region at Citibank, brought up the subject of regulation, which continued to be discussed in later panels. Janusz said that regulation “comes when something crashes. Banks will be reluctant to implement AI without human oversight.”

Comments on regulatory frameworks were also made by Monica Monaco, founder of TrustEU Affairs, who pointed out that governance currently exists only in the form of data protection, specifically Article 22 of GDPR, which could become a source of future principles to govern AI and the use of algorithms in financial services.

Monaco also referred to the European Commission’s ‘AI for Europe’ report, published on 25 April, which she recommended everyone read. On GDPR, Monaco said that the right to be forgotten could become problematic, as it would also apply to institutions, not just individuals.

As the discussion moved on to standardising technology, a question was raised as to whether AI could be a leveller, since the technology is shining a light on the industry’s problems, especially its lack of diversity. Michael Conway, associate partner, global business services at IBM, agreed and gave a low-level example of what happened when testing “I’ve lost my wallet” against “I’ve lost my purse” - the latter was not recognised by the system.
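As a purely illustrative sketch of the kind of paired-phrasing test Conway described - the classify_intent function and its keyword rules below are hypothetical, not IBM’s system - such a check might look like this in Python:

# Hypothetical keyword-based intent classifier, only to illustrate the
# "lost my wallet" vs "lost my purse" gap Conway described.
def classify_intent(utterance: str) -> str:
    text = utterance.lower()
    if "lost" in text and "wallet" in text:   # "purse" is missing from the rule,
        return "report_lost_item"             # so the paired phrasing falls through
    return "unknown"

# Paired phrasings that should map to the same intent.
PAIRS = [("I've lost my wallet", "I've lost my purse")]

for a, b in PAIRS:
    intent_a, intent_b = classify_intent(a), classify_intent(b)
    if intent_a != intent_b:
        print(f"Mismatch: {a!r} -> {intent_a}, {b!r} -> {intent_b}")

Running the check surfaces the mismatch immediately, which is exactly the sort of low-level test that exposes bias baked into hand-written rules or training data.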

Ekene Uzoma, VP of digital product development at State Street, argued that the issue with data abuses is that they keep taking on different forms, so predicting them can be difficult. He returned to the point on education made in earlier sessions, adding that there needs to be a recognition that we cannot look to the “altar of technology” to solve problems.

According to Terry Cordeiro, head of product management - applied science and intelligent products at Lloyds Bank, “AI will automate repeatable work, but where does that leave us [humans]? We could say that the workforce of the future will be more relationship-based. Banks need to look at how to foster new talent and how to develop existing teams.”

Cordeiro continued: “Even algorithms need parents. And the parents have the responsibility to train them, but where are these people? They don’t exist.”

Explainable AI

While concerns around the ethics of AI exist, Rajiv Desai, SVP - US operations at Pelican, encouraged the audience to “always keep in mind that explainability is important” because “AI understands context”. “Explainable and ethical AI are paramount. In banking it is essential that AI technology is compliant,” Desai said.

Jason Maude, head of technology advocacy at Starling Bank, agreed that explainable AI is necessary, for the reasons outlined by Desai: “We cannot just say that it is because the computer has said so. People are not going to trust that answer when they are declined for a loan application or another product. The trust that people need to have for banks to function will not be there if we leave it up to the computer.”

Dr Michael Dewar, head of AI/analytics at Vocalink, added that while “machine learning might not be at the heart of our processes, that does not mean we shouldn’t interrogate them. We should take a look at the APIs. It’s not expensive and it’s not difficult; we are always able to interrogate the current status of a system and then properly regulate.”

Maude agreed, but said that software engineering techniques such as version control should also be introduced: “Do testing where data sets are randomised before being put into the system to see how the output has changed. Provide an audit trail that regulators could start demanding.”
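As a rough sketch of the randomised-input testing and audit trail Maude describes - the loan-approval model, feature layout and log format here are assumptions for illustration, not anything Starling Bank disclosed - the pattern might look like this:

import json
import time
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical loan-approval model trained on made-up features
# (e.g. income, debt, age, tenure), purely for illustration.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

def audit_randomised_inputs(model, X, n_trials=10, log_path="audit_log.jsonl"):
    """Shuffle each feature column in turn and log how often decisions flip."""
    baseline = model.predict(X)
    with open(log_path, "a") as log:
        for feature in range(X.shape[1]):
            flip_rates = []
            for _ in range(n_trials):
                X_perm = X.copy()
                rng.shuffle(X_perm[:, feature])   # randomise a single feature
                flip_rates.append(float(np.mean(model.predict(X_perm) != baseline)))
            # Each line is a timestamped record a regulator could later inspect.
            log.write(json.dumps({
                "timestamp": time.time(),
                "feature_index": feature,
                "mean_decision_flip_rate": float(np.mean(flip_rates)),
            }) + "\n")

audit_randomised_inputs(model, X)

The specifics matter less than the pattern: every perturbation and its effect on the model’s decisions is written to an append-only log which, combined with version control over the model code and training data, gives regulators something concrete to demand and inspect.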

However, Jonathan Williams, principal consultant at Mk2 Consulting, pointed out that this is difficult because regulators do not come from a technological background. “The other challenge is looking at the outcomes and checking they are in line with what we would expect. Humans bring their own biases, but we cannot automatically test those. Regulators have a steep learning curve to ascend.”

Maude then made a poignant point that stayed with much of the audience for the rest of the conference: “I don’t think we will reach a point where humans will not be able to explain what is going on. We may get to a point where the cost of explainability outweighs the benefit.”

Concluding comments highlighted that those who had been averse to the technology have come to understand AI better over the past 12 months as a result of recent developments, while others are now more interested in its practical applications.
