From Black Box ML to Glass Box XAI

One of the difficulties of stepping into a red-hot technology space is that you are never sure what to expect. While you are grappling with unexpected technical curve balls, what if your own stakeholders beat you up over results that are wrong by their expectations or understanding? The current, nascent generation of Artificial Intelligence and Machine Learning models is not smooth sailing by any stretch of the imagination either.

As excited as I am to work with new technology and see futuristic visions reach the sunshine of reality, I also know that the labor of getting a model into a production system is no less than breaking through a mountain. And what if you realize the rock underneath is not conducive to building a smooth road? All that effort gets dumped at breakneck speed, and with it goes the dream of bringing something new into being and getting it implemented.

A typical conversation

Imagine the conversation below between Mark (IT project manager of a machine learning project) and Shawn (head of the sales team for whom the models were built).

Mark: Hey Shawn, good news: the models we built are now deployed to production.
Shawn: Great, let's get fresh feeds in and generate predictions of our risk exposures for next quarter.
Mark: Sure, the data feeds are already in, and the models will have the predictions ready by evening.
Shawn: That's very good, as we must get these included in our quarterly risk portfolio too.

Five hours later, the complexion of the conversation had changed to something like this…

Shawn: Mark, I saw the reports. How the hell does it show my risk portfolio bumped up by 3%, to a medium-risk 12% instead of a stable 9%? Can you give me the top five reasons for this risk assessment?
Mark: I am afraid I cannot give you the top five features, as there is no way to know the role each feature plays in the results that come out.
Shawn: No Mark, that will be difficult to digest, as I may be asked for the reasons behind a sudden bump in my risk exposure. What justification can I give?

This conversation ends in awkward silence and uncomfortable moments. What ends up happening is that Shawn loses trust in the results coming out of the models. All the hard work that went into producing results from the jazzy machine learning models now goes into justifying their legitimacy.

A lot of ML and AI projects go through this uneasy phase until people come to terms with the black-box nature of the underlying models. Justifying the results is the biggest challenge for the manager of any ML/AI model project. How do you build the trust?

It is precisely to avoid this scenario that people are exploring Explainable AI. Unfortunately, designing Explainable AI that is interpretable enough for humans to understand how it works, on a basic if not a specific level, has drawbacks.

The troubles and the way ahead

The black-box challenge surrounding machine learning has been debated and argued over for years for one main reason: the need for trust. What made the machine take this decision? It is very uncomfortable for project managers and stakeholders not to know the answer when the reports, risk assessments, or potential threats they provide are the basis on which management may be taking important decisions. Those decisions rest on a machine's decisions that we do not yet thoroughly understand. This is where the uncertainty and anxiety kick in, and it is the core reason for the rising demand for Explainable AI (XAI).

An Explainable AI (XAI) is an AI whose actions can be explained in terms of understandable parameters. It goes some way toward demystifying the "black box" of a standard ML model. Bringing transparency to an otherwise opaque system means giving away some of the complex and fuzzy-logic features in order to establish known and understandable empirical decision-making, which may simplify (or dumb down) the model's effectiveness a bit.
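To make "understandable parameters" concrete, here is a minimal sketch of how Mark might have answered Shawn's "top five reasons" question, using permutation feature importance from scikit-learn. The data set, model, and feature names below are hypothetical stand-ins, not the actual risk model from the story.

```python
# A minimal sketch: answering the "top five reasons" question with
# permutation feature importance. Data, model, and names are stand-ins.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Hypothetical stand-in for the production risk data set.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
feature_names = [f"risk_factor_{i}" for i in range(X.shape[1])]

# Hypothetical stand-in for the deployed "black box" risk model.
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much the score drops:
# the bigger the drop, the more the model relies on that feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# The "top five features" Mark could not produce.
top5 = np.argsort(result.importances_mean)[::-1][:5]
for i in top5:
    print(f"{feature_names[i]}: {result.importances_mean[i]:.3f}")
```

The technique is model-agnostic: shuffle one feature at a time and measure how much the model's score degrades; the features whose shuffling hurts most are the ones the model leans on hardest.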

AI and ML systems have a component of unsupervised learning that, on the one hand, allows the system to optimize at a certain job, but that may not be in tune with the non-functional requirements the modeler tried to meet. Two of the common applications being looked into for XAI are knowledge extraction from black-box models and model comparison. These are typically handy for risk management, surveillance, or audit departments, to establish which model is more apt for the company and which carries more exposure.
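As a rough illustration of knowledge extraction, the sketch below (an assumed setup, not a method prescribed in this article) trains a shallow decision tree as a glass-box surrogate of a black-box model, checks how faithfully it mimics the black box, and prints the human-readable rules it extracts.

```python
# A minimal sketch of knowledge extraction via a global surrogate:
# fit a shallow decision tree to the black box's own predictions,
# then read off its rules. Models and data here are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=500, n_features=6, random_state=0)

# Hypothetical stand-in for the opaque production model.
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Train the surrogate on the black box's outputs, not the true labels,
# so it learns to explain the model rather than the data.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"surrogate fidelity to the black box: {fidelity:.1%}")

# Human-readable decision rules extracted from the black box.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(6)]))
```

The same fidelity check also supports model comparison: a risk or audit team can fit one surrogate per candidate model and compare the extracted rules side by side.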

As per Wikipedia, recent advances in the field indicate: “As regulators, official bodies and general users come to depend on AI-based dynamic systems, clearer accountability will be required for decision-making processes to ensure trust and transparency. Evidence of this requirement gaining more momentum can be seen with the launch of the first global conference exclusively dedicated to this emerging discipline, the International Joint Conference on Artificial Intelligence: Workshop on Explainable Artificial Intelligence (XAI).”[1]

Sources:

  1. This article was first published on www.agilityexchange.com.
  2. Images courtesy of royalty-free images from Google.
  3. Wikipedia: https://en.wikipedia.org/wiki/Explainable_Artificial_Intelligence

 

