Why Showing Your Workings is Essential for Trusted AI and KYC

The concept of artificial intelligence (AI) has been in our collective consciousness for decades, mainly due to far-fetched Hollywood depictions of AI-powered killers: Terminators, the robots of I, Robot and Replicants. These fictional portrayals, coupled with outspoken warnings from technology titans like Elon Musk and Professor Stephen Hawking, have led to some pretty fearful views of this powerful technology.

But in a relatively short time, the use of AI has exploded, with the global market expected to generate $118.6 billion in revenue by 2025. So does this boom in uptake correlate with increased trust in the market? Or does the industry need to work harder to win customer confidence? In this blog I’ll look at what can happen when AI goes unchecked, and how know your customer (KYC) analysts could benefit from explainable AI (XAI).

Biased Black Boxes

As with any unfamiliar and revolutionary technology, there will always be a healthy dose of scepticism from the market in which it operates. When Jethro Tull introduced the horse-drawn seed drill around 1700, he was no doubt met with a fair amount of eye-rolling and the odd raised eyebrow. That uncertainty has followed us into the information age, and not without good reason. A cursory look at some of the more shocking cases of AI decision-making shows why:

  • The Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) tool is a case management and decision support system used by U.S. courts to assess the likelihood of a defendant reoffending. Although the model was optimised for overall accuracy, black defendants were more likely than white defendants to be incorrectly flagged as high risk of recidivism, while white defendants were more likely than black defendants to be incorrectly flagged as low risk.
  • Just five years ago, Amazon found that the algorithm it was using to screen job applicants was biased against women. Why? Because the algorithm learned from the past to inform the present: it based its decisions on the resumes submitted over the previous ten years – overwhelmingly from men – and so was trained to favour men over women.
  • Most recently, and indeed currently, in the UK, Ofqual’s (the Office of Qualifications and Examinations Regulation) use of an algorithm to grade A-level students, because exams could not be taken due to the COVID-19 pandemic, was met with controversy and complaints, and ultimately did not work, forcing the government to return to a far more ‘human’ approach. One wonders whether Ofqual will have to explain its algorithm to the Information Commissioner’s Office at some point.

The general explanation for this kind of discrimination is that AI models are built by humans and therefore reflect their biases; they also reflect the bias inherent in the data itself. But whether that bias is conscious or unconscious, human-led or data-led, is almost irrelevant. Once trust is gone, it is extremely hard to win back. If AI decisions are made within opaque black boxes, the technology will never shake off its shady image.

XAI: The Key to Unlocking Trust

To increase the trustworthiness and transparency of AI programmes, many firms have adopted explainable AI (XAI), which describes a system’s purpose, rationale and decision-making process in a way that can be understood by the average person.

Of course, the more information this kind of technology is given, the more data it has to work with to disambiguate and find information on the right client. For structured data sources, fixed parameters can be put in place, e.g. full name, year of birth, country of birth, country of residence and company affiliation. An AI model can then do much of the heavy lifting associated with searching, reading, disambiguating and filtering results when conducting background checks, all while providing the KYC analyst with a full audit trail at every stage, whether the decision is to onboard or reject.
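To make that concrete, here is a minimal, purely illustrative sketch in Python. The field names, matching rules and audit stages are my own assumptions rather than any vendor's API; the point is simply that fixed parameters drive the disambiguation and every stage writes to an audit trail the analyst can later show.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical fixed parameters used to disambiguate a client in structured sources.
@dataclass
class ScreeningParameters:
    full_name: str
    year_of_birth: int
    country_of_birth: str
    country_of_residence: str
    company_affiliation: str

@dataclass
class AuditEntry:
    timestamp: str
    stage: str    # e.g. "search", "disambiguation", "filtering", "decision"
    detail: str

@dataclass
class ScreeningResult:
    decision: str                                   # "onboard" or "reject"
    audit_trail: list[AuditEntry] = field(default_factory=list)

def log(trail: list[AuditEntry], stage: str, detail: str) -> None:
    trail.append(AuditEntry(datetime.now(timezone.utc).isoformat(), stage, detail))

def screen_client(params: ScreeningParameters, candidate_records: list[dict]) -> ScreeningResult:
    trail: list[AuditEntry] = []
    log(trail, "search", f"Queried structured sources for '{params.full_name}'")

    # Disambiguate: keep only records that match the fixed parameters.
    matches = [
        r for r in candidate_records
        if r.get("year_of_birth") == params.year_of_birth
        and r.get("country_of_birth") == params.country_of_birth
    ]
    log(trail, "disambiguation",
        f"{len(matches)} of {len(candidate_records)} records match year and country of birth")

    # Filter: flag any adverse findings among the matched records.
    adverse = [r for r in matches if r.get("adverse_media") or r.get("sanctions_hit")]
    log(trail, "filtering", f"{len(adverse)} matched records carry adverse findings")

    decision = "reject" if adverse else "onboard"
    log(trail, "decision", f"Decision: {decision}")
    return ScreeningResult(decision=decision, audit_trail=trail)
```

However a real platform implements the search and matching, the design choice this sketch illustrates is the important one: the decision and its justification are produced together, not reconstructed after the fact.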

For unstructured data, the process gets tougher, but not impossible, especially if a multilingual search and analysis platform is in place. Such a platform can automatically surface contextually relevant intelligence and provide the reasoning behind each categorisation it makes.
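Again purely as an illustration, and with the keyword lists below standing in for whatever multilingual analysis such a platform actually performs, the key idea is that each categorisation carries the evidence that produced it, so the analyst can see why a piece of intelligence was surfaced:

```python
# Hypothetical risk categories and the (deliberately simplified) evidence that triggers them.
RISK_KEYWORDS = {
    "financial_crime": ["fraud", "money laundering", "embezzlement"],
    "sanctions": ["sanctioned", "export ban"],
}

def categorise_with_reasoning(document_text: str) -> list[dict]:
    """Return each category alongside the exact phrases that justified it."""
    findings = []
    lowered = document_text.lower()
    for category, keywords in RISK_KEYWORDS.items():
        evidence = [kw for kw in keywords if kw in lowered]
        if evidence:
            findings.append({
                "category": category,
                "reasoning": f"Matched phrases: {', '.join(evidence)}",
            })
    return findings

# Example: the analyst sees not just the label but the phrases behind it.
print(categorise_with_reasoning("The subject was investigated for money laundering in 2019."))
```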

The Expectation of Explainability

The perception of AI has come a long way from the science fiction of the past, but it still has some work to do before it is widely accepted and trusted by firms and their customers. Explainability is key to changing perceptions and, while it isn’t crucial for every AI project, when a business relies on detailed knowledge of specific individuals, such as during client onboarding or KYC checks, it is essential that any decision can be justified quickly and easily to avoid accusations of bias. If customers receive anything less, not only will AI continue to suffer from a negative PR problem, but your business could suffer too.

