Despite the welcome publication of the Government’s AI Opportunities Action Plan last month, the latest rumours around Westminster are that Ministers have delayed plans for an AI Bill as the UK government seeks to align itself with the U.S. administration
on the technology.
There is, self-evidently, a more than pressing need for cross-sector, AI-specific legislation. I have recently published a
report setting out a series of serious issues and how such legislation could help. The report highlights eight archetypal examples of people living at the sharp end of unregulated AI in the UK: the voter, the scammed, the benefit claimant, the job seeker,
the teenager, the creative, the transplant patient, and the teacher. In other words, all of us.
When launching the AI Opportunities Action Plan, the PM highlighted economic and social benefits, expressing a desire to "harness the power of AI to fulfil our promise to the British people of better jobs, better public services and better lives".
During a debate about the Plan in the House of Lords, the Minister also stressed that the benefits "extend far beyond economic growth" describing it as "the catalyst that we need for a public service revolution, including, of course, in the NHS".
This focus on public service transformation and better lives for all is positive, as is the change of tone from regulation vs. innovation to "regulation assisting innovation". Although the Minister acknowledged that "the regulatory environment will be critical in driving trust and capitalising on the technology", there was no sign of an AI Bill. The Minister instead pointed to the AI Safety (now Security) Institute, the development of the AI assurance ecosystem and data protection legislation, rather than anything specific or cross-sector on AI.
We need AI to help solve the huge problems in finance, not least fraud, which is growing faster than the finance industry or the global economy. Simon Taylor writes, “The gap between what AI can do and what we're letting it do in finance is widening daily. Every day financial institutions hesitate, they're leaving economic value on the table, letting criminals find new attack vectors, and watching competitors race ahead”.
The scammed
One of the vignettes set out in my report is ‘the scammed’. In 2019, the CEO of a UK-based energy firm was tricked by an AI-generated voice deepfake into transferring thousands of pounds to a fraudster. Since then, LLMs and deepfakes have been driving a fraud tsunami.
In the last twelve months, it is estimated that $1.03 trillion has been lost to scams worldwide. AI is making scams cheaper, more efficient and more effective. In the UK, fraud now accounts for 40% of all reported crimes.
The days of badly spelled ‘I’m lost abroad, please send money’ emails are over; Fraud-as-a-Service LLMs such as WormGPT will generate scams on request. Scams targeting company executives and employees (where a caller impersonates the CEO or a senior executive) are rising sharply, as are other types of social engineering attack, including phishing, romance scams, and business email compromise - 82% of data breaches involve a human element.
Fraudsters are calling banks directly and impersonating customers (or vice versa), or convincing a network provider to swap a user’s phone number to a SIM card in their possession, allowing them to compromise bank accounts linked to that number.
In the UK, £459.7 million of the £1.17 billion stolen by criminals over the previous year was attributed to authorised push payment (APP) fraud. This type of fraud occurs in real time and, once the money is deposited in the fraudsters’ account, it is incredibly difficult to recover.
Despite new rules introduced by the Payment Systems Regulator making financial institutions liable for APP fraud losses and splitting reimbursement between sending and receiving institutions, there are ongoing questions and concerns about how to address
AI-driven fraud.
Charlotte Crosswell, chair of the Centre for Finance, Innovation and Technology, responded to my report by emphasising the scale of the problem, saying that “the UK is becoming a home for economic crime”. She is understandably concerned about the problem of fragmented data and hopeful that the Data (Use and Access) Bill’s provisions for smart data sharing and digital verification services will help.
The scale of the fraud pandemic is shocking: ranked by size, the three largest economies on the planet are the United States of America, China, and then cybercrime, currently worth in the region of $9.5 trillion.
Simon Taylor’s excellent piece on “the compliance and explainability paradox” argues that “AI doesn't need to be perfectly explainable - it needs to be explainable
enough to build justified trust. Enough to show it's not perpetuating harmful biases. Enough to demonstrate it delivers better outcomes than what came before.”
It’s a powerful and provocative argument and I agree with much of it. I certainly agree that a large part of the solution lies in deploying these very technologies against the problems they create. AI and other ‘new’ tech can be our sword as well as our shield if we ensure the right regulatory environment to build trust, protect consumers, and unleash creativity and innovation in financial services. This is precisely what my private member’s bill, the Artificial Intelligence (Regulation) Bill, aims to do.
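To make Taylor’s notion of “explainable enough” a little more concrete, here is a deliberately simplified, hypothetical sketch of a fraud-risk check that returns not just a decision but the factors behind it. Every feature name, weight and threshold below is illustrative only and not drawn from any real institution’s systems.

```python
# Illustrative sketch only: a toy fraud-risk score with per-feature
# contributions. All feature names, weights and the threshold are hypothetical.

WEIGHTS = {
    "new_payee": 2.0,          # first payment to this account
    "amount_vs_typical": 1.5,  # payment size relative to the customer's norm
    "recent_sim_swap": 3.0,    # SIM swap recently recorded on the linked number
    "out_of_hours": 0.5,       # payment made at an unusual time
}
THRESHOLD = 3.0  # above this total, hold the payment for review


def score_payment(features):
    """Return (flagged, reasons): a decision plus the factors that drove it."""
    contributions = {
        name: WEIGHTS[name] * features.get(name, 0.0) for name in WEIGHTS
    }
    total = sum(contributions.values())
    flagged = total > THRESHOLD
    # The "explanation": each factor and how much it moved the score.
    reasons = [
        f"{name}: +{value:.1f}"
        for name, value in sorted(contributions.items(), key=lambda kv: -kv[1])
        if value > 0
    ]
    return flagged, reasons


if __name__ == "__main__":
    flagged, reasons = score_payment(
        {"new_payee": 1, "amount_vs_typical": 1.8, "recent_sim_swap": 1}
    )
    print("Hold for review" if flagged else "Release payment")
    print("Because:", "; ".join(reasons))
```

Real-world models are far more sophisticated, but the principle is the same: if a system can say why a payment was held, it can demonstrate to customers, regulators and courts that it is not perpetuating harmful biases and that it delivers better outcomes than what came before.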