Is 'ethical AI' deeply complex or just a contradiction in terms?


Contributed

This content is contributed or sourced from third parties but has been subject to Finextra editorial review.

What if someone could argue one point of view, switch to an opposing take on the same question, explain both, and conclude by sharing their own personal opinion on the topic?

And do it all in 45 minutes? Almost sounds like a job for a super-human, right? Or for artificial intelligence perhaps?

Several dozen of the Sibos 2023 conference’s estimated 8,000-plus attendees witnessed three experts do just that. Scott David, a lawyer and technology/ethics expert from the University of Washington; Dr Cindy Gordon, technologist and CEO and founder of SalesChoice; and Jared Bielby, theologian, digital ethicist, and consultant on AI issues for the City of Edmonton, took the stage to tackle the challenge.

Amid all the conversations on key issues and the business deals being done, the three-person panel took up a special Sibos forum titled “Contrarian views: AI is the future...or not?” Artificial intelligence, and how it is and might be used across financial services operations and support functions, is a very hot topic in the industry and at this year’s SWIFT gathering.

Moderator Ghela Boskovich, Head of Europe for the Financial Data and Technology Association and a prominent advocate for improving inclusiveness and diversity in financial services, said at the start that she’d be leading her three expert speakers on an interesting journey from one side of the AI issue to the other. “My poor panel has no idea what's coming, in part because I have asked them to argue both sides of everything under the sun around AI.”

Boskovich explained the ‘rules’ up front for the audience: several AI topics would be explored, and halfway through the session, the roles would be flipped, with each panellist taking an opposing view to their former positions.

Starting with the ‘big picture’, Boskovich asked: “What does it mean to be human in the age of AI? What does it mean to be human?”

What does it mean to be human in the age of AI?

David, the lawyer and ethicist from the Seattle-based university’s Applied Physics Laboratory, offered his initial views on the issue. One highlight: “Being human in the age of AI is yet another opportunity for us to synthesise our intelligences together, as we're doing here at the trade show comparing notes. Understanding what we do in the banking sector, this and other use cases, I think this is another instance for humans to step up and reaffirm their humanity together, in ways to deal with their collective anxieties and challenges.”

Next, the moderator asked Bielby, the Edmonton ethicist: “Would you argue that what we're building now still has humanity - or is that something that we need to be a bit more conscious about, more careful about?”

Bielby shared the position, in part, that “the argument that there's a difference between anything that AI could become consciously and that which makes us human is a very important argument to have, and one to encourage, for many reasons. The differentiation that I would make is that AI is the sum of its parts. Humanity is greater than the sum of its parts, and humans have a certain drive to do things that we might put in the categories of creativity and meaning, which AI can only simulate [...] even if (AI) gets to the point that it can do art better than humans. The difference lies in our experience of it, which AI will not experience in its own existential phase. That which it is programmed with comes from humanity itself; that experience is human alone.”

Do we need to be concerned about the gap between AI and humanity narrowing?

So, Boskovich asked, don’t we need to be concerned about the gap between AI and humanity narrowing? Gordon, the AI company CEO, answered: “Well, I do think you have to worry. We have to look at what it is that we can capture (among) all the knowledge worldwide. All of the different dialects. All the different sounds, all the emotions, and with the affective computing side [...] actually categorise all the range of emotions, whether they're sad or angry - the dimensionality. I think we really need to recognise that we are in a period of rapid fusion and rapid experimentation. But the sense-making within the sentient is evolving. I think it's probably 20 years away.”

The moderator ventured the possibility that humans are simply the sum of what can be mapped, i.e., a person’s particular emotional makeup reduced to a series of numbers. Bielby’s response: “AI - at the end of the day - is a computer program. It's a super-intelligent computer program, just as a human is, at the end of the day, an animal - a kind of super-intelligent animal. Both AI and humans will always only evolve based on their own origins: for us, our DNA, our experience, and so forth. I think it's important […] to reflect on the language we use to describe it. If it is artificial intelligence, it will never truly be first-case, but always second-case. It will always be built on something we've created.”

How should we manage the issues of inherent bias in data used by AI models?

What about inherent bias in the data used to power AI models? Dr Gordon offered her take on the topic: “In my mind, we can solve the problem of data bias, but it starts with the insightful thought process of defining the problem clearly, and, for the data set that you're using, ensuring that you've recently completed a deep fairness and data bias assessment. There are over 360 known forms of data bias within the AI literature at this point, and it's growing. So, one of the challenges we have is sufficient expertise to understand what the meaning of data bias is... I think what will happen is we'll start to move to more synthetic datasets that will be creating the world and the context that we want to move humanity ahead.”

The SalesChoice CEO continued that this might affect a very common financial transaction scenario: consumers applying for loans.

For instance, it would be difficult for an AI model to effectively remedy the reality that minority groups have faced decades of financial exclusion without some kind of synthetic intelligence applied to the datasets being used.

“This is where synthetic intelligence has to come into balance. It takes some really wise leaders that are now in the organisations building out the whole machine learning infrastructure, but I think it can be solved.”
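Gordon’s point about auditing datasets can be made concrete. The sketch below is a minimal, hypothetical illustration of just one of the many bias measures she alludes to - a demographic parity check on loan approvals. The group labels, records, and the 0.1 threshold are invented for the example; the panel did not reference any specific code or tooling, and a real fairness assessment would span far more metrics.

```python
# Illustrative sketch only: checking demographic parity, one simple
# form of data bias, in a made-up loan-approval dataset. All group
# names, records, and the 0.1 threshold are hypothetical.

from collections import defaultdict

# Each record: (applicant_group, loan_approved)
records = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

approved = defaultdict(int)
total = defaultdict(int)
for group, was_approved in records:
    total[group] += 1
    approved[group] += was_approved  # bool adds as 0 or 1

# Approval rate per group, and the largest gap between any two groups
rates = {g: approved[g] / total[g] for g in total}
gap = max(rates.values()) - min(rates.values())

print(f"Approval rates by group: {rates}")
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.1:  # threshold chosen arbitrarily for the example
    print("Gap exceeds threshold; dataset may warrant review or rebalancing.")
```

A check like this only surfaces one symptom; as Gordon notes, there are hundreds of documented forms of data bias, which is why she argues the harder problem is defining the question clearly before any measurement begins.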

Boskovich offered a challenge and began shifting to another question aimed at David, the former practicing lawyer, when a buzzer sounded, signifying the halfway point of the session - and he immediately began to assert the opposite point of view. “That’s ridiculous!” he replied, albeit with tongue firmly in cheek and eliciting laughs from the audience, continuing with a supporting example: “Kandinsky, the painter, said that violent societies yield to abstract art. And so one question is: is abstraction the reverse of that? Is abstraction itself a form of violence? And maybe the abstractions that we undertake when we implement these systems will actually do violence to us, and we will lose sense of our humanity.”

From there, each panellist began to advocate views opposite those of their initial responses.

What are the unintended outcomes of AI over-engineering?

The moderator steered the questions toward concerns about the unintended outcomes of AI engineering, or over-engineering. “We've got the corporate level and commercialisation, the application layer that sits on top of that protocol, where you're going - how do we reconcile all of that? Should I be scared? Because I am,” offered Boskovich.

Bielby replied, “Yes, we should be concerned.”

Gordon, in her ‘new view’, countered: “I don't think we're quite at the point where we need to worry too much about AI being an existential threat, because AI is now able to solve the most complex problems that we humans have not been able to solve. I think that's a really key distinction. We will conquer cancer, I think, still in my lifetime. But it's going to be because of these incredible capabilities. We as a human species have not been able to curtail climate change; we are putting our efforts into AI to engineer new ways. The only thing that's going to save humanity, quite frankly, is AI, and it's going to go beyond what we're talking about.”

Boskovich: “In the next 50 years, we also have different and unknown outcomes that could potentially be incredibly detrimental. So, despite the big social challenges and the big human challenges, on a grand scale can you benefit from AI? We're also talking about the speed of that solution, and the amount of damage you can do in the interim. How do we address that?”

David's reply: “We don't have time […] The amount of energy required to maintain living forms is exceeding the capacity of the Earth to sustain it. With all the other apparatus that this system needs for it to be alive and maintain (itself), along with the human systems which are already eating the planet [...] we don't have time, and we will destroy the thing upon which we depend.”

As the session concluded, Boskovich asked each panellist to inform the audience of their ACTUAL views on the topic of AI and its use.

Bielby: “The true existential threat is not technical. It's not even ethical. It's the loss of our humanity [...] autonomy is the greatest existential threat. If we lose our autonomy, our ability to be ourselves without a crash, then we’ve lost humanity. To put a positive spin on that: if we realise this, we can use AI to assist our own autonomy, to assess our own ability to be human, and to continue to be human.”

David: “One of the things that I'm hoping that you take from the conversation is to embrace the paradox and understand the different positions, because I think that's going to help lend clarity to your operations. The second thing is, in terms of your operations as banks, you are a self-regulatory set of organisational structures. In fact, banks and financial organisations are THE global risk-sharing infrastructure right now, period. And it's because of the way that money operates as a risk consolidation tool that you will have an opportunity for leadership in terms of where AI goes.”

Dr Gordon: “I think the most important thing all of us can do is to make sure that our little ones are being introduced to concepts earlier in terms of the ethical constructs (of artificial intelligence), because of the world that they're going into... if we go back to the invention of the cell phone, most people don't realise that (thereafter) human cognition has dropped by about 30%. Our attention spans are now around eight seconds - less than a goldfish's, by the way, at around 10 seconds. We are into an evolution of cognitive decline. I honestly think it's happening so fast that the best thing we can do is to really drive the awareness in our little children, (and) also learn how to collaborate, appreciate ethnic diversity, and inclusiveness.”
