Filtering the ethics of AI

As AI-driven decision-making becomes more widely accepted, pure economics might not align with the softer strategies of a bank. Many financial institutions are asking how artificial intelligence should be governed within an organisation and how it can be taught to align with a bank’s brand and ethos without relying on human judgement.

While AI has dominated news headlines over the past year or so, the majority of announcements and research have centred on the ethics of the technology and how to manage or avoid bias in data. In April 2019, a fortnight after it was launched, Google scrapped the independent group it had set up to oversee the corporation’s efforts in AI tools such as machine learning and facial recognition.

The Advanced Technology External Advisory Council (ATEAC) was shut down after one member resigned and there were calls for Kay Coles James, president of the conservative thinktank The Heritage Foundation, to be removed over “anti-trans, anti-LGBTQ and anti-immigrant” comments, as reported by the BBC. Google told the publication that it had “become clear that in the current environment, ATEAC can’t function as we wanted.

“So we’re ending the council and going back to the drawing board. We’ll continue to be responsible in our work on the important issues that AI raises, and will find different ways of getting outside opinions on these topics.”

The big tech example

Many industry experts expressed confusion at the decision and called Google naïve. However, it was only the external board that was shut down; as Bloomberg reported, the “Google AI ethics board with actual power is still around.”

The Advanced Technology Review Council was assembled last year as an attempt to “represent diverse, international, cross-functional points of view that can look beyond immediate commercial concerns.” Many technology giants have laid out ethical principles to guide their work on AI, so why haven’t financial services institutions?

Bloomberg referenced the AI Now Institute, which wrote in a report last year that “Ethical codes may deflect criticism by acknowledging that problems exist, without ceding any power to regulate or transform the way technology is developed and applied. We have not seen strong oversight and accountability to backstop these ethical commitments.”

Days after Google scrapped its external ethics board, the European Union published new guidelines on developing ethical AI and how companies should use the technology, following the release of draft ethics guidelines at the end of last year.

After the EU convened a group of 52 experts, seven requirements were established that future AI systems should meet:

1. Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.
2. Robustness and safety: Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.
3. Privacy and data governance: Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.
4. Transparency: The traceability of AI systems should be ensured.
5. Diversity, non-discrimination and fairness: AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.
6. Societal and environmental well-being: AI systems should be used to drive positive social change and enhance sustainability and ecological responsibility.
7. Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.

The EU also explained that in the summer of this year, the Commission will launch a pilot phase involving a wide range of stakeholders; companies, public administrations and organisations are welcome to sign up to the European AI Alliance today.

Potential regulation

At NextGen Banking London, Maciej Janusz, head of cash management, Nordic Region at Citibank, brought up the subject of regulation, saying that regulation “comes when something crashes. Banks will be reluctant to implement AI without human oversight.”

Comments on regulatory frameworks were also made by Monica Monaco, founder of TrustEU Affairs, who pointed out that governance currently exists only in the form of data protection, specifically Article 22 of the GDPR, which could become a source of future principles to govern AI and the use of algorithms in financial services.

Monaco also made reference to the European Commission’s ‘AI for Europe’ report, published on 25 April, which she recommended everyone read. On GDPR, Monaco said that the right to be forgotten could become problematic, as it would also apply to institutions, not just individuals.

A question was raised as to whether AI could be a leveller, as the technology is shining a light on existing issues, especially the industry’s lack of diversity.

Ekene Uzoma, VP digital product development at State Street, argued that the problem with data abuses is that they keep taking on new forms, which makes them difficult to predict. He also spoke about education and the need to recognise that we cannot look to the “altar of technology” to solve every problem.

According to Terry Cordeiro, head of product management - applied science and intelligent products at Lloyds Bank, “AI will automate repeatable work, but where does that leave us [humans]? We could say that the workforce of the future will be more relationship-based. Banks need to look at how to foster new talent and how to develop existing teams.”

Cordeiro continued: “Even algorithms need parents. And the parents have the responsibility to train them, but where are these people? They don’t exist.”

In conversation with Finextra, Monzo’s machine learning lead Neal Lathia highlights that “there is bias everywhere, and a lot of active research on measuring, detecting, and trying to remedy it. I don’t think it’s too late - it’s a problem that will have to be constantly revisited.”
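To make the ‘measuring and detecting’ point concrete, here is a minimal sketch of one common fairness check, the demographic parity difference, applied to hypothetical credit decisions. The model outputs, group labels and figures below are invented for illustration and do not describe Monzo’s systems or any bank’s data.

```python
# Hypothetical sketch: measuring demographic parity in loan approvals.
# All data is illustrative; a real check would run on production decision logs.

def approval_rate(decisions, groups, group):
    """Share of applicants in `group` whose application was approved (1)."""
    in_group = [d for d, g in zip(decisions, groups) if g == group]
    return sum(in_group) / len(in_group)

def demographic_parity_difference(decisions, groups):
    """Largest gap in approval rates between any two groups.
    0.0 means every group is approved at the same rate."""
    rates = {g: approval_rate(decisions, groups, g) for g in set(groups)}
    return max(rates.values()) - min(rates.values())

# Toy example: model decisions (1 = approve) and a protected attribute.
decisions = [1, 1, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

gap = demographic_parity_difference(decisions, groups)
print(f"Demographic parity difference: {gap:.2f}")  # 0.80 vs 0.40 -> 0.40
```

Which metric is appropriate depends on the product and the applicable rules; a single number like this is a prompt for the constant revisiting Lathia describes, not a verdict.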

Nooralia also has a view on this and says: “The challenge lies in AI’s ‘black box’ problem and our inability to see the inside of an algorithm and therefore understand how it arrives at a decision. Unfortunately, as we’ve seen in several circumstances, AI programmes will replicate the biases which are fed into them and these biases originate from humans. So, the first step in eliminating these biases is to open the ‘black box’, establish regulations and policies to ensure transparency, and then have a human examine what’s inside to evaluate if the data is fair and unbiased.”
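One widely used technique for prising open such a ‘black box’ is permutation importance: shuffle one input feature at a time and measure how far the model’s accuracy falls. The toy scorer and feature names below are assumptions made purely for illustration; in practice the same idea would be applied to a real model and real decision logs.

```python
import random

random.seed(1)

# Hypothetical black-box scorer: we can call it, but not read its internals.
def black_box_predict(row):
    income, age, postcode_risk = row
    return 1 if (0.6 * income - 0.5 * postcode_risk) > 0.1 else 0

def accuracy(rows, labels):
    return sum(black_box_predict(r) == y for r, y in zip(rows, labels)) / len(rows)

def permutation_importance(rows, labels, feature_idx, n_repeats=20):
    """Average drop in accuracy when one feature's column is shuffled.
    A large drop means the model leans heavily on that feature."""
    baseline = accuracy(rows, labels)
    drops = []
    for _ in range(n_repeats):
        column = [r[feature_idx] for r in rows]
        random.shuffle(column)
        shuffled = [r[:feature_idx] + (v,) + r[feature_idx + 1:]
                    for r, v in zip(rows, column)]
        drops.append(baseline - accuracy(shuffled, labels))
    return sum(drops) / n_repeats

# Toy data: (income, age, postcode_risk); labels come from the model itself,
# so baseline accuracy is 1.0 and any drop is caused purely by the shuffle.
rows = [(random.random(), random.random(), random.random()) for _ in range(200)]
labels = [black_box_predict(r) for r in rows]

for i, name in enumerate(["income", "age", "postcode_risk"]):
    print(f"{name}: {permutation_importance(rows, labels, i):.3f}")
# Expect income and postcode_risk to matter and age to score near zero,
# a red flag if, say, postcode turns out to be a proxy for a protected trait.
```

This only reveals which inputs drive decisions, not whether those decisions are fair; that judgement still requires the human examination Nooralia describes.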

Sara El-Hanfy, innovation technologist, machine learning & data, at Innovate UK, explains that “an AI system is not in itself biased. These systems are being designed by humans and will therefore reflect the biases of the developers or the data that is selected to train the system. While there is absolutely a risk that AI systems could amplify bias at scale, there is also an opportunity for AI to improve transparency and tackle existing biases.” She gives recruitment as an example and says that it is “good that we are becoming more aware of the possible unintentional harms of using AI technologies, and by having these conversations, we can advance understanding of AI and establish best practices.”

Innovate UK’s Stephen Browning adds that there is a “need for humans to work in a way that doesn’t perpetuate bias into the data and on to the system. We are very conscious of that as something that would hold back the use of this type of technology or damage the benefits you could potentially obtain from AI,” he says, somewhat paraphrasing the concerns of the AI Now Institute.

Browning continues: “What really holds AI back and undermines it is the human aspect, and not the technical aspects, and that is what we’re working on. There are also activities across the UK government that are trying to address this, such as the Centre for Data Ethics and Innovation.”

Prag Sharma, head of Emerging Technology, TTS Global Innovation Lab at Citibank, also believes that this is a real concern today, especially with the emergence of explainable AI and more financial institutions wanting to know how certain decisions are reached.

To address the issues around ethical AI, Sharma suggests introducing “rules and regulations around an audit trail of the data, so we are aware of what is produced, what is consumed and how the result will reflect that.” In reality, though, we are only just coming to terms with how this technology actually works, and it is not a case of financial services staying a step ahead of big technology corporations either.
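As a sketch of what such an audit trail could look like, the snippet below hash-chains one record per model decision, capturing what data was consumed, which model version ran, and what was produced. The schema and field names are hypothetical assumptions, not drawn from any regulatory standard or Citi system.

```python
import hashlib
import json
import time

def log_decision(log, model_version, inputs, output):
    """Append a tamper-evident record of one AI decision to the audit log.
    Each entry hashes the previous one, so rewriting history breaks the chain."""
    prev_hash = log[-1]["entry_hash"] if log else "genesis"
    entry = {
        "timestamp": time.time(),
        "model_version": model_version,  # what produced the result
        "inputs": inputs,                # what was consumed
        "output": output,                # what was produced
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

def verify(log):
    """Recompute the hash chain; returns False if any entry was altered."""
    prev = "genesis"
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if body["prev_hash"] != prev or recomputed != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True

audit_log = []
log_decision(audit_log, "credit-model-v1.3", {"income": 42000, "term": 36}, "approve")
log_decision(audit_log, "credit-model-v1.3", {"income": 18000, "term": 60}, "refer")
print(verify(audit_log))  # True; alter any field above and it prints False
```

Because each record hashes its predecessor, altering a historical entry breaks the chain, which is what makes “what is produced, what is consumed” checkable after the fact.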

Annerie Vreugdenhil, chief innovation officer, ING Wholesale Bank, says that it “is not about winning or losing, it is about making the most of partnerships and the technical expertise and capabilities from both sides. For example, we believe that collaborating with fintechs is key, because we can’t do it alone anymore. Partnerships can be beneficial for both parties: fintechs can bring agility, creativity and entrepreneurship, while financial institutions like ING bring a strong brand, a large client base with an international footprint, and breadth of industry expertise.”

Finextra's The Future of Artificial Intelligence 2019 report explores how the financial services industry can draw on tried and tested applications of AI in other industries to reshape transaction services.

