International Women's Day: ChatGPT and how gender bias in AI can impact innovation




The theme of the 2023 International Women’s Day campaign is Embrace Equity, and it asks the world to discuss why equal opportunities aren’t enough. “Equity isn’t just a nice-to-have, it’s a must-have. A focus on gender equity needs to be part of every society’s DNA. And it’s critical to understand the difference between equity and equality,” according to the official International Women’s Day website.

An equitable world is one free of bias, stereotypes and discrimination. As human beings, however, we perceive patterns and make decisions based on them. When we use technologies such as artificial intelligence (AI), we expect them to work in a similar way, and in some cases better.

According to Fawn Fitter and Steven T. Hunt at SAP, machine learning – a subset of AI – “does, in fact, have the potential to be a tremendous force for good. Humans are hindered by both their unconscious assumptions and their simple inability to process huge amounts of information.”

However, AI, “on the other hand, can be taught to filter irrelevancies out of the decision-making process, pluck the most suitable candidates from a haystack of résumés, and guide us based on what it calculates is objectively best rather than simply what we’ve done in the past.”

In theory, AI could relieve our society of problems around bias, stereotypes and discrimination because it is based on logic and is not influenced by human fallibility. However, anyone with an interest in AI will know by now that any system built with AI technology contains whatever biases are present in the data used to create it, or the biases of the programmers who developed it.

Despite it being 2023, we have not implemented a solution to this equity problem. There are famous examples of AI bias affecting women, such as Amazon’s recruitment AI which favoured men, and it is now common knowledge that credit scoring AI services are likely to discriminate against women, as documented by Women’s World Banking. AI acts more as a reflection of our current world than a window onto our future one.

In the New York Times, Kate Crawford argued that AI has a “white guy problem.” Seven years have passed since that article was published, and AI has only become more a part of our lives, and increasingly a part of our financial lives. With projects like ChatGPT garnering so much interest, the question arises whether tools like this could become central to the functioning of the financial services industry. How might the bias in the data and engineering of current AI affect the future of the financial world?

Bias in AI: how does it get there?

There are a number of ways in which bias can enter AI systems, primarily through the data the machines are trained on. Women’s World Banking identified four main ways in which bias enters an AI system (a simple illustration of the first two follows the list below):

  • Sampling bias: particular groups are over- or underrepresented in the training data set. This can also be seen as incomplete data.
  • Labelling bias: data scientists classify data and assign it particular properties and characteristics to make it searchable by the machine. This is where human bias can enter the way in which the data is perceived.
  • Outcome proxy bias: this can occur when a machine learning task is not well defined. The machine may use an incorrect piece of information to complete a calculation, producing an inaccurate result.
  • Algorithmic construction: the majority of AI systems are built on machine learning algorithms, and the algorithms themselves can be biased because of how they are constructed.
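As a simple illustration of the first two categories, the sketch below shows the kind of quick check a data team might run before training a model: how well each gender is represented in the training set, and whether historical outcomes already differ sharply by group. The file name and the "gender"/"approved" columns are hypothetical, not from any institution mentioned in this article.

```python
# Minimal sketch of screening training data for sampling and labelling bias.
# The file name and the "gender"/"approved" columns are hypothetical.
import pandas as pd

def representation_report(df: pd.DataFrame, group_col: str = "gender") -> pd.Series:
    """Share of each group in the training data; a heavy skew suggests sampling bias."""
    return df[group_col].value_counts(normalize=True)

def outcome_rate_by_group(df: pd.DataFrame,
                          group_col: str = "gender",
                          label_col: str = "approved") -> pd.Series:
    """Positive-label rate per group; large gaps can reflect labelling or historical bias."""
    return df.groupby(group_col)[label_col].mean()

if __name__ == "__main__":
    training = pd.read_csv("loan_applications.csv")  # hypothetical training set
    print(representation_report(training))
    print(outcome_rate_by_group(training))
```

Neither check fixes anything by itself, but a skewed representation report or a large gap in historical outcome rates is a warning sign that a model trained on this data will inherit the pattern.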

This was also reflected by Darlene Newman, executive director, head of internal IT innovation & research at DTCC, who told Finextra: “Several factors can be at play when it comes to AI bias. Gender bias can be introduced into machine learning models through limited or incomplete training data, or directly reflected in the data collected and used to train the model. Bias can also exist in the construction of the model.

“For example, human beings choose the data that algorithms use and also decide how the results of those algorithms will be applied, which can have downstream impacts. If the development team is not diverse, they may not be aware of certain biases that exist in their data or algorithms because they have not experienced them personally.”

Once these biases are introduced into AI, the systems will produce biased results at scale. Globant’s global head of artificial intelligence and data science, Juan José López Murphy, argued: “Gender bias in AI continues to be a serious concern – the results of that bias are seen on a daily basis, whether that is giving women lower purchase limits on credit cards or suggesting family and household products predominantly to women.”

López Murphy further stated that “algorithms can be built on historical data reflecting past bias against women, and those biases are then perpetuated in the present day. We see this in the workplace too – hiring is just one example. AI-run recruitment systems are built on data of successful applicants – in the past, particularly in technology roles, those applicants have been male-dominated. So you’re looking at a system that is biased towards male over female candidates.”

The result of these biases is a continued hindrance for women. For financial institutions, the impact is felt both inside the organisation and among its customers. López Murphy describes how this can affect financial institutions and the wider world: “It can mean we disadvantage women against access to roles, perpetuating the gender pay gap. These effects are not only unfair, but also counterintuitive to the DE&I goals for the companies implementing such systems.”
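To make the recruitment example concrete, here is a toy sketch (synthetic data only, not any real recruiter’s system) of how a model trained on historically male-dominated “successful applicant” records learns to reward gender itself, even when the skill distribution is identical for men and women.

```python
# Toy illustration of a recruitment model inheriting historical hiring bias.
# All data is synthetic; nothing here reflects a real employer's records.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
gender = rng.integers(0, 2, n)        # 0 = female, 1 = male (synthetic encoding)
skill = rng.normal(0, 1, n)           # identical skill distribution for both groups
# Historical hiring decisions favoured men regardless of skill:
hired = (skill + 1.5 * gender + rng.normal(0, 1, n)) > 1.0

features = np.column_stack([skill, gender])
model = LogisticRegression().fit(features, hired)
print("learned weight on gender:", model.coef_[0][1])  # strongly positive

# Two candidates with identical skill but different gender:
same_skill = np.array([[0.5, 0.0], [0.5, 1.0]])
print("hire probability (female, male):", model.predict_proba(same_skill)[:, 1])
```

Dropping the gender column is not enough in practice, because other features (career gaps, previous job titles, even word choices in a CV) can act as proxies for it; the point of the sketch is simply that a model optimised against biased history reproduces that history.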

ChatGPT: how bias influences new technology

ChatGPT and other AI chatbots have the potential to become integrated into the banking customer journey. This could start with handling straightforward customer questions, but down the line, chatbots could provide financial advice or guidance around financial products – a topic currently being discussed in the wealth management space. It is important to note that, at this point, ChatGPT is neither permitted nor able to give financial advice when asked directly.

I asked ChatGPT about its potential gender bias and this was the response:

“As an AI language model, I was trained on a large corpus of text data that was scraped from the internet. This data reflects the biases and patterns present in the text of the internet at the time it was collected. Therefore, it is possible that my database contains gender biases that were present in the text data used to train me.

“However, efforts have been made to mitigate these biases as much as possible during the training process. Additionally, ongoing efforts are being made to identify and address any biases that may exist in my responses or outputs. It is important to note that as an AI language model, I do not hold any beliefs or biases of my own and simply reflect the patterns present in the text data I was trained on.”

OpenAI has acknowledged the platform’s limitations and inherent biases. However, the general public is increasingly using the tool, which has already reached 100 million users. More importantly, institutions such as JPMorgan and the Central Bank of Ireland have already begun to restrict its use by staff, which suggests how widely it was being used.

On this, López Murphy commented: “The current gold rush use of ChatGPT has democratised access to cutting-edge language technology like never before. But public understanding of the limitations and biases of these tools is limited – we are using these tools at a speed and scale that risks accepting and normalising biases before we’ve even identified they’re there.”

He also outlined some of ChatGPT’s potential financial uses: “There’s a possibility the public will begin to use tools like ChatGPT to inform their financial decisions. Right now, we can’t guarantee the inputs informing language technologies’ outputs are factually true – making those outputs inherently biased.”

However, this is a hypothetical. López Murphy also offered an example of this technology being used to influence survey results. “There's an interesting study by Cornell University showing how you can use AI models (like GPT-3) to simulate and profile segments of the population and test how they would respond to surveys.

“It does not replace field work, but it shows that ‘algorithmic bias [is] both fine-grained and demographically correlated, meaning that proper conditioning will cause it to accurately emulate response distributions from a wide variety of human subgroups’.”

He continued: “Simply put, this means you could stress-test responses to surveys, allowing you to tweak a survey to influence the outcomes you want before actually surveying anyone. Obviously, such ‘influenced’ survey results are considerably less reliable and accurate, limiting their value to financial institutions in decision-making processes.”
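A rough sketch of the “silicon sampling” idea López Murphy describes is below: condition a language model on a demographic profile, ask it the same survey question repeatedly, and tally the answers. The model name, prompt wording and OpenAI client usage here are illustrative assumptions, not the study’s actual method or code.

```python
# Illustrative sketch of simulating survey responses with a language model.
# Assumes the openai Python package (>=1.0) and an OPENAI_API_KEY in the environment.
from collections import Counter
from openai import OpenAI

client = OpenAI()

PROFILES = [
    "a 34-year-old woman working in retail",
    "a 58-year-old man who owns a small business",
]
QUESTION = "Would you trust an AI chatbot to recommend a savings product? Answer yes or no."

def simulate(profile: str, n: int = 20) -> Counter:
    """Ask the model the same question n times while it role-plays one profile."""
    answers = Counter()
    for _ in range(n):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # hypothetical choice; the study discussed used GPT-3
            messages=[
                {"role": "system", "content": f"Answer as {profile} would."},
                {"role": "user", "content": QUESTION},
            ],
            temperature=1.0,
        )
        answers[resp.choices[0].message.content.strip().lower()[:3]] += 1
    return answers

for profile in PROFILES:
    print(profile, simulate(profile))
```

Because the model’s answer distributions are demographically correlated, a survey designer could keep rewording the question until the simulated results lean the desired way – which is exactly the reliability problem López Murphy raises.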

It is clear that we cannot divorce tools like ChatGPT from the algorithmic biases we already know exist in AI, especially as these tools become more widely used and more central to financial services.

How to prevent AI bias

Bias will have an impact on AI and on the future technologies that use it. These biases are a reflection of our current society, yet the vision for AI is to improve that society. If we want to use AI effectively, there needs to be a better understanding of the biases in the data and improvements in how they are dealt with.

When it comes to tackling these biases, Finextra highlighted some approaches in our article for International Women’s Day 2022. These include making sure that women contribute to the data and ensuring that the entire gender spectrum is represented in these technologies. It is important for developers to be aware of these biases at every stage of AI development and to actively test and check for them; a simple example of such a check is sketched below.
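As a concrete example of “actively testing and checking”, the sketch below computes two common fairness gaps on a model’s predictions: the difference in approval rates between groups (demographic parity) and the difference in true-positive rates (equal opportunity). The metric choices and variable names are illustrative; teams would pick the definitions appropriate to their own products.

```python
# Minimal sketch of a bias check that could run as part of model testing.
import numpy as np

def demographic_parity_gap(pred: np.ndarray, group: np.ndarray) -> float:
    """Largest difference in positive-prediction rate between groups (0 = parity)."""
    rates = [pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_gap(pred: np.ndarray, label: np.ndarray, group: np.ndarray) -> float:
    """Largest difference in true-positive rate between groups."""
    tprs = [pred[(group == g) & (label == 1)].mean() for g in np.unique(group)]
    return max(tprs) - min(tprs)

# Tiny synthetic example: predictions, true outcomes and a gender label.
pred = np.array([1, 0, 1, 1, 0, 1])
label = np.array([1, 0, 1, 0, 1, 1])
group = np.array(["f", "f", "f", "m", "m", "m"])
print("demographic parity gap:", demographic_parity_gap(pred, group))
print("equal opportunity gap:", equal_opportunity_gap(pred, label, group))
```

Thresholds for what counts as an acceptable gap are a policy decision; the value of the check is that it makes the gap visible at every stage of development rather than after deployment.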

However, as we increasingly use AI, it has never been more important for companies to be aware of the biases present in it, gender and otherwise. These are not perfect tools removed from human bias; they are reflections of us and our biases.

 
