
NextGen: AI: Busting five AI myths

The next afternoon panel was moderated by Debi Bell-Hosking, with speakers Dr Janet Bastiman, chief data scientist at Napier AI; Stuart McDowell, UK CIO at Societe Generale; and David Tracy, head of data products at Smart Data Foundry.

Editorial

This content has been selected, created and edited by the Finextra editorial team based upon its relevance and interest to our community.

The goal of the session was to bust five AI myths, with the help of audience participation. The panel facilitated this with interactive polls and commentary.

Myth 1: Human-in-the-loop will not be a vital role in financial services

  • Disagree: 84%
  • Agree: 16%

The first myth that was addressed was that the human-in-the-loop (HITL) will not be a vital role in the future. 84% of the audience disagreed, and so did the panellists. “We're going to need human-in-the-loop going forward," Bastiman commented. "We're seeing regions worldwide starting to introduce regulations in financial services. Human-in-the-loop is part of that, and having human decision making is huge. I really liked what Jochen said in his session earlier: rather than being ‘human-in-the-loop’, it should be ‘AI in the human process’”.

McDowell added an example from his time as a trader: “When algorithm trading and machine learning came in, people thought it would be the end of the human trader - and it wasn’t. You still need humans accountable to the regulator and to the clients. It’s about augmenting the job, not losing the job.”

Tracy added that low risk and no-impact automation will take humans out of the loop to some extent, but understanding context and nuance is crucial. An audience member agreed. When Bell-Hosking asked whether one of the audience members who voted ‘agree’ would like to comment, a delegate from Swift commented that we will need the human less and less as we start trusting AI more, and become more aware of how its processes work.

Myth 2: Soft skills will become unimportant in financial services

  • Disagree: 94%
  • Agree: 6%

The soft skills in question include critical thinking, attention to detail, interpersonal skills, negotiation, empathy, and collaboration.

Bastiman disagreed with the statement and started with an example: “Particularly in the financial crime space, the transaction patterns of someone working multiple zero-hour contracts - where money-in is quickly followed by money-out, or who literally has to put cash in a jar to put money on their gas card because they're not even able to get direct debits - that profile overlaps with an awful lot of the rules that we see for money laundering activity. In some respects, critical thinking is more important than mathematics. Being able to look at information that you're presented with and make critical assessments of it is one of the most important skills of this century.”

McDowell expanded on the importance of soft skills inside a company: “Everyone wants to work for a good manager. No one wants to get their end-of-year review from a chatbot that’s read your emails all year. The more senior you get, the more EQ you want rather than IQ, because that’s what brings people together.”

An audience member from Santander agreed, stating: “AI decisions need to go through a human lens. Those skills will become more important as we train AI. So it can lead us to decisions that are more human-led.”

Another audience member from S&P Global added: “And on top of that, what’s important is to develop inquisitiveness, asking critical, good questions. AI are getting better and better, so it's very important to know exactly what to ask and how to ask it, in order to be able to get the answers that are going to be valid for us in the future. That is part of the analytical mindset that I think is going to be important.”

Tracy saw both sides, stating there was a kernel of truth in the statement: “Soft skills will remain important, but what those skills are might change.”

Myth 3: AI isn’t human and therefore doesn’t have bias

  • Disagree: 83%
  • Agree: 17%

The third myth to be busted was that AI does not have bias, which was unanimously disagreed with by the panel. McDowell stated that, depending on the training data, AI can have a significant bias. Bastiman explained there are mathematical proofs that show that, if AI models are trained with unbalanced data sets, the model itself will be biased one way or another.

Tracy commented: “Again, I think there is a kernel of truth in it. The bias I worry about is the human bias. Maybe I'm too pessimistic about human nature, but how diligent are people going to be when performing those checks? How gullible are we? I think the answer is, quite gullible, and lazy, right? Sometimes AI holds up quite a depressing mirror to us as society, and the biases it tends to inherit from us.”

Myth 4: Sensitive data is safer with advancing technology than with humans

  • Disagree: 43%
  • Agree: 57%

The next myth on the menu split both the audience and the panellists. The panellists argued that, while technology in theory offers better safekeeping, people tend to find ways to subvert it.

“Think strong passwords,” Bastiman stated. “We all know people tend to write down their complex passwords and leave them on a post-it, or use 123456 as a phone PIN. At some point, everything we try to do to protect data, people end up trying to subvert. We already know that, with the advent of quantum computing, encrypted data is being harvested off the web so it can be decrypted once quantum computers can crack it. We’ll come up against that challenge soon.”

Myth 5: Regulation can’t keep up with AI and enforce the role of the human

  • Disagree: 13%
  • Agree: 87%

The last myth to be busted was that regulation cannot keep up with AI and enforce the role of the human. 87% of the audience agreed with the statement, to which Tracy simply commented: “That’s quite a depressing take, and I hope it’s not true.”

McDowell commented: “Regulation changes in financial services. Lots of regulators are looking at it and the complexity it comes with; and different regions are going to be doing different things. The UK is principle-based - so whether it’s Mrs Jones answering the phone and giving you a mortgage quote, or whether it’s a generative AI producing it, you’ll have to abide by the same principles.”

Bastiman concluded: “There are times when it isn’t keeping up, but it can keep up. They just need to make sure that regulation avoids being too technically specific, because we’re seeing that the sizes of models are changing. So if you put down regulation talking about parameters or the size of the machine, it will get out of date very quickly.”


Comments: (1)

Ketharaman Swaminathan, Founder and CEO at GTM360 Marketing Solutions

I find it hard to accept the speaker's claim “When algorithm trading ... came in, people thought it would be the end of the human trader - and it wasn’t.” Data says it was. While AlgoTrading did not literally end the human trader, it did replace 50% of them (Source: ChatGPT).
