
UK publishes first AI white paper

Five principles, including safety, transparency and fairness, will guide the use of artificial intelligence in the UK, as part of a new national blueprint for the country's world-class regulators to drive responsible innovation and maintain public trust in this revolutionary technology.



The UK’s AI industry is thriving, employing over 50,000 people and contributing £3.7 billion to the economy last year. Britain is home to twice as many companies providing AI products and services as any other European country, and hundreds more are created each year.

AI is already delivering real social and economic benefits for people, from helping doctors identify diseases faster to helping British farmers use their land more efficiently and sustainably. Adopting artificial intelligence in more sectors could improve productivity and unlock growth, which is why the government is committed to unleashing AI’s potential across the economy.

As AI continues developing rapidly, questions have been raised about the future risks it could pose to people’s privacy, their human rights or their safety. There are concerns about the fairness of using AI tools to make decisions which impact people’s lives, such as assessing the worthiness of loan or mortgage applications.

Alongside hundreds of millions of pounds of government investment announced at the Budget, the proposals in the AI regulation white paper will help create the right environment for artificial intelligence to flourish safely in the UK.

Currently, organisations can be held back from using AI to its full potential because a patchwork of legal regimes causes confusion and financial and administrative burdens for businesses trying to comply with rules.

The government will avoid heavy-handed legislation which could stifle innovation and take an adaptable approach to regulating AI. Instead of giving responsibility for AI governance to a new single regulator, the government will empower existing regulators - such as the Health and Safety Executive, Equality and Human Rights Commission and Competition and Markets Authority - to come up with tailored, context-specific approaches that suit the way AI is actually being used in their sectors.

The white paper outlines five clear principles that these regulators should consider to best facilitate the safe and innovative use of AI in the industries they monitor. The principles are:

safety, security and robustness: applications of AI should function in a secure, safe and robust way where risks are carefully managed
transparency and explainability: organisations developing and deploying AI should be able to communicate when and how it is used, and explain a system’s decision-making process at a level of detail that matches the risks posed by the use of AI
fairness: AI should be used in a way which complies with the UK’s existing laws, for example the Equality Act 2010 or UK GDPR, and must not discriminate against individuals or create unfair commercial outcomes
accountability and governance: measures are needed to ensure there is appropriate oversight of the way AI is being used and clear accountability for the outcomes
contestability and redress: people need to have clear routes to dispute harmful outcomes or decisions generated by AI

This approach will mean the UK’s rules can adapt as this fast-moving technology develops, ensuring protections for the public without holding businesses back from using AI technology to deliver stronger economic growth, better jobs, and bold new discoveries that radically improve people’s lives.

Over the next 12 months, regulators will issue practical guidance to organisations, as well as other tools and resources like risk assessment templates, to set out how to implement these principles in their sectors. When parliamentary time allows, legislation could be introduced to ensure regulators consider the principles consistently.

Science, Innovation and Technology Secretary Michelle Donelan said:

AI has the potential to make Britain a smarter, healthier and happier place to live and work. Artificial intelligence is no longer the stuff of science fiction, and the pace of AI development is staggering, so we need to have rules to make sure it is developed safely.

Our new approach is based on strong principles so that people can trust businesses to unleash this technology of tomorrow.

Businesses warmly welcomed initial proposals for this proportionate approach during a consultation last year and highlighted the need for more coordination between regulators to ensure the new framework is implemented effectively across the economy. As part of the white paper published today, the government is consulting on new processes to improve coordination between regulators as well as monitor and evaluate the AI framework, making changes to improve the efficacy of the approach if needed.

£2 million will fund a new sandbox, a trial environment where businesses can test how regulation could be applied to AI products and services, to support innovators bringing new ideas to market without being blocked by rulebook barriers.

Organisations and individuals working with AI can share their views on the white paper as part of a new consultation launching today which will inform how the framework is developed in the months ahead.

Lila Ibrahim, Chief Operating Officer and UK AI Council Member, DeepMind, said:

AI has the potential to advance science and benefit humanity in numerous ways, from combating climate change to better understanding and treating diseases. This transformative technology can only reach its full potential if it is trusted, which requires public and private partnership in the spirit of pioneering responsibly. The UK’s proposed context-driven approach will help regulation keep pace with the development of AI, support innovation and mitigate future risks.

Grazia Vittadini, Chief Technology Officer, Rolls-Royce, said:

Both our business and our customers will benefit from agile, context-driven AI regulation. It will enable us to continue to lead the technical and quality assurance innovations for safety-critical industrial AI applications, while remaining compliant with the standards of integrity, responsibility and trust that society demands from AI developers.

Sue Daley, Director for Tech and Innovation at techUK, said:

techUK welcomes the much-anticipated publication of the UK’s AI white paper and supports its plans for a context-specific, principle-based approach to governing AI that promotes innovation. The government must now prioritise building the necessary regulatory capacity, expertise, and coordination. techUK stands ready to work alongside government and regulators to ensure that the benefits of this powerful technology are felt across both society and the economy.

Clare Barclay, CEO, Microsoft UK, said:

AI is the technology that will define the coming decades with the potential to supercharge economies, create new industries and amplify human ingenuity. If the UK is to succeed and lead in the age of intelligence, then it is critical to create an environment that fosters innovation, whilst ensuring an ethical and responsible approach. We welcome the UK’s commitment to being at the forefront of progress.

Rashik Parmar MBE, chief executive, BCS, The Chartered Institute for IT, said:

AI is transforming how we learn, work, manage our health, discover our next binge-watch and even find love. The government’s commitment to helping UK companies become global leaders in AI, while developing within responsible principles, strikes the right regulatory balance. As we watch AI growing up, we welcome the fact that our regulation will be cross-sectoral and more flexible than that proposed in the EU, while seeking to lead on aligning approaches between international partners. It is right that the risk of use is regulated, not the AI technology itself. It’s also positive that the paper aims to create a central function to help monitor developments and identify risks. Similarly, the proposed multi-regulator sandbox [a safe testing environment] will help break down barriers and remove obstacles. We need to remember this future will be delivered by AI professionals - people - who believe in shared ethical values. Managing the risk of AI and building public trust is most effective when the people creating it work in an accountable and professional culture, rooted in world-leading standards and qualifications.
