Some machine learning tools are incredibly complex, so complex that they are considered "black box" systems. On the one hand, they produce increasingly accurate outputs for increasingly difficult problems. On the other, the inner workings of these solutions can't easily be understood by a human, and the rationale for the decisions made or actions taken by the solution is often opaque.
In Financial Services, a lack of clarity around decisions made by a machine won't cut it for customers or regulators.
If AI is used to make a lending decision for a customer, the customer has a right to understand the rationale for that decision. If AI is used to manage a fund, the fund manager and investors will want to know why a particular asset mix was chosen, and the regulators will want to ensure that the risk to customers’ money is being carefully managed.
This is why the effort to create explainable artificial intelligence solutions is gathering so much momentum and is often coupled with the ethical AI agenda.
IBM, for instance, has created the AI Explainability 360 project. This open-source toolkit encourages developers and researchers to use and improve algorithms that seek to better interpret what machine learning solutions are actually doing.
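To make the idea concrete, here is a minimal sketch of the kind of post-hoc explanation such toolkits support. It uses the standalone shap library rather than AI Explainability 360 itself, and the model, data and feature names are synthetic placeholders for illustration, not anything taken from IBM's project.

```python
# A minimal sketch of post-hoc explanation for a lending model.
# Requires scikit-learn and shap (pip install scikit-learn shap).
# The applicant data and feature names below are synthetic placeholders.

import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "credit_history_years", "missed_payments"]

# Synthetic applicant data and a toy approve/decline label.
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] - X[:, 3] > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer attributes the model's output to each input feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # explain one applicant's decision

for name, contribution in zip(feature_names, shap_values[0]):
    print(f"{name}: {contribution:+.3f}")
```

An output like "missed_payments: -0.412" attributes part of the declined decision to that feature, which is the shape of answer a customer or regulator could actually be given.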
You can see how this combined, coordinated effort will eventually improve our ability to understand the outcomes of a machine learning solution.
But you also have to wonder, is it possible for a machine to fully, and accurately, explain the rationale for its decisions in the same way that humans do?
Daniel Kahneman, in his bestselling book "Thinking, Fast and Slow", describes human thinking as two systems. System 1 acts quickly, based on learned biases and cues, with little effortful thought. System 2 acts slowly, more purposefully and with effort. The common example is that the answer to 2 + 2 comes to mind instantly, based on your learned and now innate understanding of simple arithmetic, while the answer to 24 x 17 takes far longer and requires much more mental effort.
From research we know that the majority of human decisions are intuitive, heavily guided by System 1 thinking. Yet when we come to describe the rationale for these decisions, we think logically and purposefully to construct what we believe is an acceptable story.
This story must be rooted in some truth to be accepted, but in reality the human rationale for a decision rarely marries up fully with the true driver of that decision.
As far as the explainability of AI is concerned, this creates something of a paradox.
If the aim of the explanation is to be accurate, how can an AI system boil down the true rationale for a decision, drawn from the analysis of hundreds of variables, into something easily understood by humans?
If the aim of the explanation is to be understood and accepted, how does the system ensure that its explanation is accurate, true and complete?
Either we get a rationale that is accurate but incomprehensible, or one that is readily understood but only accurate to a point.
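One way to see this trade-off concretely is with a surrogate model: fit a deliberately simple, readable model to mimic a complex one, and measure how often the two agree. The sketch below is one common technique rather than a prescribed fix, and all data and names in it are illustrative. Fidelity typically rises as the surrogate gets deeper, and therefore harder for a human to read.

```python
# Sketch: measuring the fidelity of a simple surrogate explanation.
# A shallow decision tree is trained to mimic a complex model's outputs;
# its agreement rate ("fidelity") shows how much accuracy the readable
# explanation gives up. The data here is synthetic.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 10))
y = (np.sin(X[:, 0]) + X[:, 1] * X[:, 2] - X[:, 3] ** 2 > 0).astype(int)

complex_model = RandomForestClassifier(n_estimators=200).fit(X, y)
black_box_predictions = complex_model.predict(X)

for depth in (1, 3, 5):
    # The surrogate learns to imitate the black box, not the true labels.
    surrogate = DecisionTreeClassifier(max_depth=depth).fit(X, black_box_predictions)
    fidelity = (surrogate.predict(X) == black_box_predictions).mean()
    print(f"tree depth {depth}: fidelity {fidelity:.2%}")
```

A depth-1 tree is an explanation anyone can follow, but it agrees with the black box only part of the time; a deeper tree is more faithful but stops being an explanation a human can hold in their head.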
Regulators that continue to demand explainable, transparent solutions are aware of this challenge, but there is still little clarity on how it can be addressed.
As these tools grow in complexity and in their use across financial services, those creating them need to keep working to open up the "black box" without dulling the power of the tool. Those implementing them need to be aware of what is known, and what is not yet known, about the decisions made by the system.