Understanding ethical AI: Why financial services are leading the charge

Hamish Monk

Reporter, Finextra

Unlike the preceding technological revolutions of history (with the possible exception of the third, computation), artificial intelligence has the potential to move into almost any industry and, critically, to supplement or replace white-collar work. There is no precedent for a technological development of this scale, which is why the battle plans of the innumerable sectors currently staring down its barrel vary so widely.

The financial services sector, it transpires, is one of the few that seems to be managing the situation with sophistication and tact. In its recent Enterprise of the Future report, California-based software company Alteryx found that the industry is ahead of other sectors in both enthusiasm for, and maturity of, data governance and ethics policies governing the use of artificial intelligence (AI). Indeed, nine in ten of the 2,800 financial professionals surveyed call for more regulations and standards governing the use of AI and generative AI.

But these calls should not be interpreted as suspicion. The industry asks for such policies because it is now so comfortable with regulation, and because it acknowledges the immeasurable benefits of AI for institutions and their customers – provided the technology is handled properly, of course.

The risks of failing to do so are heavy, hence the prevailing drive to put appropriate safeguards in place. Legal and ethical costs, damage to workplace desirability, injury to brand reputation, and loss of intellectual property and data are just some of the risks outlined by financial professionals in the survey. And if those were not justification enough, implementing AI policies is seen as critical to avoid being negatively impacted by the EU’s upcoming AI Act.

The financial sector’s battle plan

So the industry is busy strategising, and the Enterprise of the Future report shows that firms are setting the pace across the board – with 86% having already implemented AI security, ethics and governance policies to secure the success of their businesses. That marks an 11% edge over the global average across the public sector, manufacturing, and technology verticals.

These policies typically include multi-factor authentication (MFA) for access to systems, applications and data; security integrated into the development process; and zero-trust security models or secure access service edge (SASE) frameworks. Regular security awareness training for employees, and robust governance frameworks that define roles, responsibilities and decision-making processes related to security, are also expected to roll out more widely.

Jason Janicke, SVP EMEA at Alteryx, highlights the importance of spearheading ethical AI in the financial sector: “The evolution of AI capabilities, specifically generative AI, has presented the FS&B sector with significant opportunities to unlock the power of data to automate tasks for productivity value creation while lowering costs. However, it has also prompted data privacy, ethics and cybersecurity concerns. Financial services and banking professionals are stewards of highly sensitive data, so any misuse of AI – even if unintentional – may leave organisations vulnerable… It’s right for the sector to take a proactive approach to the implementation of effective guardrails that include practical checks on data quality, privacy, and governance. Data literacy upskilling will help establish a robust data-centric approach to AI, which is critical to safeguarding businesses and maintaining security and trust amongst stakeholders and customers.”

All this focus, caution, and planning should ensure that the industry not only survives the impending wave of AI, and the EU’s AI Act that follows it, but surfs it.
