Defining fairness is a difficult task. The definition depends heavily on context and culture, and when it comes to algorithms, every problem is unique and is solved with its own dataset. Algorithmic fairness can stem from statistical and mathematical definitions, and even from legal definitions of the problem at hand. Furthermore, if we build models based on different definitions of fairness for the same purpose, they will produce entirely different outcomes.
The measure of fairness also changes with each use case. AI for credit scoring is entirely different from customer segmentation for marketing efforts, for example. In short, it’s tough to land on a catch-all definition, but for the purpose of this article, I thought I’d make the following attempt: an algorithm is fair if it does not produce unfair outcomes for, or representations of, individuals or groups.
Explainable Outcomes for All
Even with the above definition, it’s clear that creating a model that is 100% fair for every person or group, in every instance, is a highly challenging task. The best we can hope for is that we build with fairness in mind so that we can stand by and explain outcomes for individuals and groups.
There is, however, a further complication: individual fairness and group fairness are defined differently.
Individual fairness focuses on ensuring that statistical measures of outcomes are equal or similar for similar individuals. Put simply, if you and I are alike in many ways (we are the same age, earn roughly the same amount, and live in the same area) and we both apply for a loan, then we should receive a similar outcome.
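One common way to make this idea measurable is a consistency check: compare each applicant's prediction with the predictions given to their nearest neighbours in feature space. The sketch below is a minimal illustration of that idea, not the article's own code; the feature matrix, predictions array, and neighbour count are assumptions.

```python
# Minimal sketch of an individual-fairness "consistency" check: applicants who are
# close in feature space (age, income, location, ...) should get similar predictions.
# X and predictions are hypothetical inputs, not from the original article.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def consistency_score(X, predictions, n_neighbors=5):
    """Agreement between each applicant's prediction and the mean prediction of
    their nearest neighbours (1.0 = perfectly consistent)."""
    nn = NearestNeighbors(n_neighbors=n_neighbors + 1).fit(X)
    _, idx = nn.kneighbors(X)                     # first column is the point itself
    neighbour_preds = predictions[idx[:, 1:]]     # predictions of the true neighbours
    return 1.0 - np.mean(np.abs(predictions - neighbour_preds.mean(axis=1)))
```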
Group fairness partitions a population into pre-defined groups by sensitive or protected attributes, such as race, ethnicity, and gender, and seeks to ensure that statistical measures of outcomes are equal across groups. For example, if we look at groups divided by gender, then similar decisions should be made for the groups as a whole. One gender should not be favored over another.
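A simple group-fairness measure along these lines is the demographic parity difference: the gap in approval rates between groups. The sketch below assumes a hypothetical dataframe with `approved` and `gender` columns purely for illustration.

```python
# Minimal sketch of a group-fairness check: compare approval rates across groups
# defined by a protected attribute. Column names are illustrative assumptions.
import pandas as pd

def demographic_parity_difference(df, outcome_col="approved", group_col="gender"):
    """Largest gap in approval rate between any two groups; 0.0 means parity."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.max() - rates.min()

# A value of 0.15, for example, would mean one group's approval rate is
# 15 percentage points higher than another's.
```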
There are two world views on how to approach fairness when building decision-making models. The first is the view “We're all equal” (WAE), which states that groups have the same abilities, so differences in outcomes can be attributed to structural bias, rather than differences in ability. The second is the “what you see is what you get” (WYSIWYG) approach, which holds that observations reflect the abilities of groups.
An easy-to-understand example of this in finance is FICO scoring. The WAE worldview holds that differences in overall FICO scores across groups of people should not be mistaken for differences in the ability to pay off a mortgage. The WYSIWYG worldview holds that FICO scores correlate well enough with repayment to fairly compare applicants’ abilities to pay off mortgages.
Why It’s Important
Bias is prevalent all around us. We see it in almost every aspect of modern life. A quick Google search on algorithms and bias will pull up hundreds of examples where models were not tested for fairness before they were released into the wild.
We've seen insurance companies use machine learning to set premiums in ways that discriminated against the elderly, online pricing discrimination, and even product personalization that steered minorities into higher rates. The cost of such mistakes has been severe reputational damage, with customer trust irretrievably lost.
The point is, customers are aware when algorithmic unfairness occurs, and if we are to put out lending models that affect the lives of our clients and customers, we need to ensure we test them for fairness. The decisions made by unfair algorithms can also be fed back as training data, from which our models learn and evolve. This feedback loop can become a vicious cycle that propagates biased data.
The Performance Problem
To illustrate how bias can creep into algorithms, I took some publicly available Home Mortgage Disclosure Act (HMDA) data (the Act requires lenders to report the ethnicity, race, gender, and gross income of mortgage applicants), along with a popular adult income dataset, and built some preliminary machine learning models. I left the sensitive attributes out, made sure I had balanced data, created some strong features, and got really good performance. If performance were the only measure of success, I would say I did a pretty good job. However, I then inspected the models for fairness.
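The article does not include its code, but the workflow described above might look roughly like the sketch below. The file name, column names, and model choice are assumptions for illustration only.

```python
# Rough sketch of the workflow described above, assuming a tabular HMDA-style
# dataset with hypothetical column names ("hmda_sample.csv" is not a real file
# from the article).
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("hmda_sample.csv")
sensitive = ["race", "ethnicity", "gender"]               # excluded from training
X = df.drop(columns=sensitive + ["loan_approved"])
y = df["loan_approved"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Strong headline performance says nothing about fairness on its own.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```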
Using the HMDA data, I examined how race was affecting outcomes, and the results were unsettling. The diagram below shows the predicted probability of a loan being accepted, given race. Looking at the difference between the orange and blue areas on the right, we can clearly see that the predicted acceptance probabilities for white customers sit much higher than those for non-white customers. If the model were fair, the two curves would be close together or, ideally, overlapping.
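The check behind that diagram can be reproduced roughly as follows, assuming the `model`, `X_test`, and `df` objects from the previous sketch and an illustrative `race` column kept aside from training.

```python
# Sketch of the fairness inspection: compare the distribution of predicted
# approval probabilities for white vs. non-white applicants.
import matplotlib.pyplot as plt

probs = model.predict_proba(X_test)[:, 1]
race = df.loc[X_test.index, "race"]          # sensitive attribute, held out of training
is_white = (race == "White").to_numpy()      # category label is an assumption

plt.hist(probs[is_white], bins=30, density=True, histtype="step", label="White")
plt.hist(probs[~is_white], bins=30, density=True, histtype="step", label="Non-white")
plt.xlabel("Predicted probability of approval")
plt.ylabel("Density")
plt.legend()
plt.show()

# If the model were fair by this measure, the two curves would largely overlap.
```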
Clearly, if this algorithm were productized and used in the real world, there could be serious consequences for non-white customers. Eliminating the possibility of such models reaching the market must be a key driver for organizations across industries. Whether it's loans, insurance, or healthcare, AI can be a powerful ally for improving business and operational outcomes, but this must not come at the expense of the groups or individuals against whom bias is propagated.
Customers will remember instances where they felt unfairly treated, particularly when the effect of a decision has significant consequences for their wellbeing. As data scientists, we must begin to implement fairness from the inception of any work we undertake.