Machine Learning - Ethics, Rights and Conduct


The rise of the algorithm | governed by the algorithm

Last week, the UK regulators (the FCA and PRA) jointly issued a significant individual penalty and a sanction imposing special organisational requirements on a UK universal bank for conduct violations*.

Having occupied senior leadership roles on a number of UK conduct risk/past business review programmes (e.g., payment protection insurance and interest rate derivative hedging) that resulted in banking back-book/balance-sheet exposures measured in the hundreds of billions (GBP), I reflected on this recent sanction and on the discussions I have chaired regarding the strategic opportunities, design, risks, ethics, rights and conduct implications of digital, particularly with respect to Machine Learning (ML).

Before we begin, let's define ML. Apologies to the purists, but here is my simple definition:

“ML is part of the Artificial Intelligence (AI) family and refers to computer systems that have the ability to evolve/self-learn independently of humans.”

There is nothing new about automated, rule-based decision-making systems; in fact, these systems exist across the private sector (e.g., financial services and matchmaking) and the public sector (e.g., healthcare, education and criminal justice). They govern, influence and impact our daily lives. However, significant advances in data volumes, predictive analytics and technology options have elevated the prominence of ML.
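To make that distinction concrete, here is a minimal, purely illustrative sketch (assuming Python and scikit-learn; the feature names, thresholds and training examples are invented): a traditional rule is written by a human and stays fixed, whereas an ML model derives its decision rule from historical data and changes whenever that data does.

```python
# Illustrative only: a hand-authored rule versus a rule learned from data.
# The feature names, thresholds and training examples below are invented.
from sklearn.tree import DecisionTreeClassifier

def rule_based_decision(income: float, existing_debt: float) -> bool:
    """Traditional automation: the decision logic is explicit and human-written."""
    return income > 30_000 and existing_debt < 10_000

# Hypothetical historical outcomes: [income, existing_debt] -> 1 = approved, 0 = declined.
X = [[25_000, 5_000], [60_000, 2_000], [40_000, 20_000], [80_000, 1_000]]
y = [0, 1, 0, 1]

model = DecisionTreeClassifier().fit(X, y)  # the decision rule is now inferred, not written

print(rule_based_decision(45_000, 3_000))    # True: the human-authored rule
print(model.predict([[45_000, 3_000]])[0])   # whatever the historical data implied
```

Retrain such a model on a different set of historical decisions and the outcome for the same applicant may change, which is precisely why the questions that follow matter.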

Pausing for a moment, let’s reflect on the implications of a system that evolves/self-learns and has the capacity to surpass human intelligence, all while directly influencing our lives.

Is ML exciting, commercially advantageous, liberating, disquieting or emotionally thought-provoking? 

Recently, Facebook decided to shut down chatbots Alice and Bob after they developed their own secret language (transcript below):

Bob: “I can can I I everything else”.

Alice: “Balls have zero to me to me to me to me to me to me to me to me to”.

This exchange looks rather meaningless; however, it prompted an immediate shutdown on the basis that Facebook was concerned by a shorthand form of conversation it did not understand.

The Computer Says “No”

How many of us are prepared for an ML-driven system to:

  • Provide medical and/or legal advice and decide upon outcomes?
  • Decide whether we are entitled to a product/service?
  • Decide whether we are appropriate for a new job or promotion?
  • Decide where we live as a result of medical care or education provision?

And how prepared are you to be governed (e.g., politically, socially and economically) by an algorithm?

I suspect we would be comfortable if we lived in a world where ML-driven systems always returned a yes. However, this will not be the case. ML advancements present significant opportunities and risks. I recently discussed this subject with a number of academics and legal and financial services professionals, and the fundamental tenets of the Universal Declaration of Human Rights (1948) were cited within the discussion.

“All human beings are born free and equal in dignity and rights.”

There is a view that if ML is left unchecked and open to being gamed, without transparency and robust governance, ML decision-making systems may embed bias (intentional or not) and could lead to economic, financial and social exclusion.
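By way of illustration only, the sketch below (plain Python, with entirely invented decision records, group labels and an arbitrary threshold) shows one basic transparency check of the kind such governance might call for: comparing an automated system's approval rates across groups and escalating when they diverge.

```python
# Hedged illustration: comparing approval rates of an automated decision system across groups.
# The decision records, group labels and 20-point threshold below are invented.
from collections import defaultdict

decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

totals, approvals = defaultdict(int), defaultdict(int)
for record in decisions:
    totals[record["group"]] += 1
    approvals[record["group"]] += record["approved"]

rates = {group: approvals[group] / totals[group] for group in totals}
print(rates)  # e.g. {'A': 0.67, 'B': 0.33}

# A crude governance trigger: flag a wide divergence for human review and redress analysis.
if max(rates.values()) - min(rates.values()) > 0.20:
    print("Approval-rate disparity exceeds threshold: escalate for review.")
```

A check like this does not prove or disprove bias on its own, but it makes the behaviour of the system visible enough to be questioned.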

The Challenges and Realities

The use of ML offers significant competitive advantages. However, organisations that embark upon ML programmes would benefit from considering the following (not exhaustive):

  • ML goals/business value
  • Research and development design and set-up
  • ML/algorithm design
  • Data selection, quality and bias
  • Redress
  • Governance/control

I fully acknowledge that Google, Apple, Facebook and Amazon ('GAFA') are examining some – if not all – of these factors. However, there are natural competitive and intellectual property limitations to the level of transparency, and thus scrutiny, that is possible. 

Organisations, and governments for that matter, that embark upon ML in a meaningful manner would benefit from considering ethics, rights and conduct for the digital age, in addition to the obvious competitive and commercial benefits. Regulators and government policy should, in turn, provide the 'commercial and societal guard rails' within which an inclusive and enriching form of ML can flourish.

The long- and short-term economic, social, commercial, brand and balance-sheet implications of getting ML wrong are not trivial; ML requires careful consideration.

