IT’S TIME FOR AI TO EXPLAIN ITSELF

A US financial regulator announced earlier this month that it had opened an investigation into claims of gender discrimination by Apple Card. Apple’s own co-founder, Steve Wozniak, tweeted that the algorithms used to set credit limits are inherently biased against women.

This comes at a time when banks and other lenders are increasingly using machine-learning technology to cut costs and handle more loan applications.

Such accusations are the tip of the iceberg of the major challenges facing AI today. While there is no denying the intelligence and potential business power of this technology, the truth remains that opaque ‘black box’ AI systems can exhibit serious bias.

As the AI revolution continues to sweep through the financial services industry, bias is an issue that needs to be resolved now, while AI is becoming increasingly mainstream.

The nightmare scenario that haunts AI experts is the ‘terminator scenario’, in which an opaque AI system teaches itself to become more biased by reinforcing its own decision making. The problem is exacerbated by continued investment in ‘black box’ AI systems, which cannot communicate to the operator, regulator or customer how the model operates or how decisions have been made. Because ‘black box’ systems rely on data and learn from every interaction, corrupt or biased data rapidly compounds poor decision making.
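To make that feedback loop concrete, the toy simulation below shows how a model that is retrained only on the customers it approves can widen an initial skew between two otherwise identical groups. All names, thresholds and distributions are invented for illustration; this is a sketch of the mechanism, not any vendor’s actual system.

```python
import random

random.seed(0)

def draw_applicant():
    """One synthetic applicant: a group label and a true repayment ability."""
    group = random.choice(["A", "B"])
    ability = random.gauss(0.6, 0.15)  # identical distribution for both groups
    return group, ability

# An initially higher bar for group B stands in for historical bias
# baked into the training data.
threshold = {"A": 0.50, "B": 0.55}

for generation in range(5):
    approved = {"A": 0, "B": 0}
    seen = {"A": 0, "B": 0}
    for _ in range(10_000):
        group, ability = draw_applicant()
        seen[group] += 1
        if ability >= threshold[group]:
            approved[group] += 1
    rate_a = approved["A"] / seen["A"]
    rate_b = approved["B"] / seen["B"]
    print(f"generation {generation}: approval rate A={rate_a:.1%}  B={rate_b:.1%}")
    # "Retraining" only ever sees the approved customers, so the group with
    # fewer approvals looks riskier to the next model and its bar rises again.
    if rate_b < rate_a:
        threshold["B"] += 0.02
```

Both groups have identical repayment ability, yet the gap in approval rates grows with each ‘retraining’ generation, because the model never observes the outcomes of the applicants it rejected.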

The FCA referred to the issue over the summer. Speaking at The Alan Turing Institute, its Executive Director of Strategy and Competition, Christopher Woolard, noted that there is a growing consensus among industry leaders that algorithmic decision-making needs to be ‘explainable’. We agree.

The only solution is ‘white box’ Explainable AI (XAI): systems that explain, in simple human language, how the AI model operates, in effect getting the data to speak the user’s language. This enables business stakeholders to easily add missing information not captured by the data, to modify the model to suit changing operating conditions, and to audit models for inconsistencies or bias and remove them. For end users, an XAI model can explain how decisions have been made and answer follow-up questions aimed at maximising the customer’s financial wellbeing.
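To illustrate what ‘white box’ might mean in practice, here is a minimal sketch of a scorecard-style model whose every decision decomposes into per-feature contributions that can be read back to a customer. The feature names, weights and threshold are invented for illustration and are not drawn from any particular product.

```python
# A minimal 'white box' scoring model: a linear scorecard whose every
# decision can be decomposed into per-feature contributions.
# Feature names, weights and the threshold are invented for illustration.

WEIGHTS = {
    "income_to_debt_ratio": 40.0,
    "years_of_credit_history": 3.0,
    "missed_payments_last_year": -25.0,
}
BASE_SCORE = 500
APPROVAL_THRESHOLD = 600

def score_with_explanation(applicant):
    """Return the score and a plain-language breakdown of how it was reached."""
    score = BASE_SCORE
    reasons = [f"Base score: {BASE_SCORE}"]
    for feature, weight in WEIGHTS.items():
        contribution = weight * applicant[feature]
        score += contribution
        reasons.append(
            f"{feature} = {applicant[feature]} contributed {contribution:+.1f} points"
        )
    decision = "approved" if score >= APPROVAL_THRESHOLD else "declined"
    reasons.append(f"Total {score:.1f} vs threshold {APPROVAL_THRESHOLD}: {decision}")
    return score, reasons

applicant = {
    "income_to_debt_ratio": 2.5,
    "years_of_credit_history": 7,
    "missed_payments_last_year": 1,
}
_, explanation = score_with_explanation(applicant)
print("\n".join(explanation))
```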

For example, if a mortgage or life insurance policy is denied, or a lower credit limit is offered, because of a machine’s decision, an XAI model will be able to explain how and why the decision was made, and consequently help consumers and companies understand what they would need to change to get a different outcome (e.g. turn a rejected mortgage application into an acceptance). The technology helps consumers take appropriate action on one end, while opening new business avenues for banks and other institutions, which can offer more suitable products.
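Continuing the toy scorecard sketch above, a simple counterfactual search can answer the “what would I need to change?” question by looking for the smallest single-feature change that flips a decline into an approval. The per-feature step sizes are again invented for illustration.

```python
# Reuses score_with_explanation, APPROVAL_THRESHOLD and applicant from the
# scorecard sketch above. Step sizes per feature are invented for illustration.
STEPS = {
    "income_to_debt_ratio": 0.1,
    "years_of_credit_history": 1,
    "missed_payments_last_year": -1,  # one fewer missed payment per step
}

def counterfactual(applicant):
    """Return a plain-language suggestion that would flip a decline, if any."""
    score, _ = score_with_explanation(applicant)
    if score >= APPROVAL_THRESHOLD:
        return "Already approved; no change needed."
    for feature, step in STEPS.items():
        for n in range(1, 11):  # try up to 10 increments of one feature
            changed = dict(applicant)
            changed[feature] = applicant[feature] + n * step
            if changed[feature] < 0:
                break  # e.g. missed payments cannot go below zero
            new_score, _ = score_with_explanation(changed)
            if new_score >= APPROVAL_THRESHOLD:
                return (f"Changing {feature} from {applicant[feature]} to "
                        f"{changed[feature]} would turn the decline into an approval.")
    return "No single-feature change within range flips the decision."

print(counterfactual(applicant))
```

For the example applicant, who scores just below the threshold, a small improvement in the income-to-debt ratio is enough to cross it, and that suggestion can be returned to the customer in plain language.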

Certainly, the scenario of bias reinforcing bias in opaque AI systems is a pressing problem. It has to be fixed if organisations using AI are to avoid governance issues and potential legal action, and to earn the trust and confidence of end users.

If we ignore this, instead of galloping towards a bright future, we may find ourselves sliding sadly towards a dystopian one.

By Hani Hagras, Chief Science Officer, Temenos 
