How upcoming legislation will harness algorithms, AI, and big tech for public good

Contributed

This content is contributed or sourced from third parties but has been subject to Finextra editorial review.

The fourth industrial revolution, like the three before it, has ushered in an age of exciting new opportunities. The digital tools and products now available to us are driving innovation and making more choice and better services possible for consumers and citizens.

These are benefits we should celebrate, but as we design the laws that govern our digital world, we must think hard about how to mitigate potential social and environmental harms. We have to design human-led, inclusive ways of working and being in the digital world. Three important new bills before the Lords this session are key: the Data Protection and Digital Information Bill, the Digital Markets, Competition, and Consumers Bill, and my own AI Regulation Bill.

The Data Protection and Digital Information Bill had its second reading in December, with debate focused on the need to get the framework right for the digital identity and smart data/data sharing aspects of the bill.

The Digital Markets, Competition, and Consumers Bill is currently moving through committee stage in the Lords. This is our opportunity to consider the bill line by line and propose any amendments we feel may improve it. I am putting forward several amendments dealing with proportionality, accountability, and competition reforms; all are important, but those of most obvious interest for a human-led digital world are the ones concerning inclusive design and unfair commercial practices.

My ‘inclusive by design’ proposal would add the following text to clause 20: “‘accessible’ means that the digital activity is— (a) compatible with assistive technology; and (b) compliant with the accessibility standards in the Public Sector Bodies (Websites and Mobile Applications) (No. 2) Accessibility Regulations 2018.”

Another proposal would clarify what falls within clause 227’s prohibition on ‘unfair commercial practices’ by adding: “Examples of where a firm is not acting in good faith would include, but are not limited to— (a) failing to take account of customers’ interests, for example in the way the trader designs a product or presents information; and (b) seeking inappropriately to manipulate or exploit customers, for example by manipulating or exploiting their emotions or behavioural biases to mislead or create a demand for a product.”

I have also drafted several amendments to focus the discussion on what rights and protections consumers should expect, and what responsibilities companies and the government have in establishing and communicating those rights and protections to consumers. The proposed amendments in full:

“Review of consumer protection legislation in relation to artificial intelligence. Within six months of the passing of the Act, the Government must undertake a review of all consumer protection legislation to assess its suitability to address the challenges and opportunities presented by artificial intelligence.”

“Consumer protection: artificial intelligence labelling. Any person supplying a product or service involving artificial intelligence must give consumers clear and unambiguous information regarding any relevant health warnings and opportunities to give or withhold their informed consent to use of artificial intelligence in advance of consuming any product or service.”

“Consumer protection: the right for the consumer to be informed about repairability of goods. In the Consumer Rights Act 2015, after section 10, insert: “10A Information to be provided on whether goods are repairable (1) Traders must provide information before a consumer enters into a contract or makes a purchase about the extent to which the trader’s goods are repairable and have been designed for repairability. (2) The information required under subsection (1) must include— (a) whether there are spare parts or repair services available for the goods in question, and if so, how to access them and how much they are likely to cost; and (b) whether the trader provides additional information on how to make repairs to the goods in question and, if so, how to access that information.””

Colleagues are operating in a similar space, with several amendments designed to help employers and engineers involve workers and their representatives in the design, development, and deployment of algorithmic systems, with a procedure for ongoing monitoring. Their amendments would include provision for ‘good work algorithmic impact assessments’, which would give the CMA an overarching duty to monitor and consider impacts on workers at the big tech companies (those likely to be given special ‘strategic market status’ (SMS) under the terms of the bill) as part of monitoring adverse effects on competition and/or a relevant public interest.

With the Algorithmic Accountability Act and the AI Act, the US and the EU are shifting responsibility onto companies, placing the burden of proof on them to show that they are meeting reasonable standards around worker rights and conditions, environmental protection, and so on. These amendments seek to do something similar: they would require impacts on work, and on workers in particular, to be considered in SMS designation, competition decisions, the imposition of conduct requirements, and compliance reports.

The Ministers’ response stressed that the Government would continue to look actively at whether new regulatory approaches are needed in response to developments in AI, and would provide an update on its approach through the forthcoming AI regulation white paper response.

I am not sure this response conveys sufficient pace. Again, developments in the US and EU suggest that corporate governance frameworks encompassing ethical considerations, transparency in AI decision-making, and stakeholder engagement are looking increasingly likely.

In January, the US Securities and Exchange Commission (SEC) ruled that Apple and Disney could not avoid shareholder votes on their use of artificial intelligence. Both companies had argued that they did not need to include the proposals in shareholder ballots because they related to "ordinary business operations" such as the company's choice of technologies.

The SEC did not agree: "In our view, the Proposal transcends ordinary business matters and does not seek to micromanage the Company," the agency wrote in separate letters.

While we wait for the Government’s AI regulation white paper response, I have introduced a Private Members’ Bill, the AI Regulation Bill, that looks at exactly these issues. I drafted the Bill with the essential principles of trust, transparency, inclusion, innovation, interoperability, public engagement, and accountability running through it.

Regulating AI was very much on the agenda at Davos this year, and I think 2024 will be the year when legislators and regulators undertake to incorporate all of the principles set out above, consider copyright and IP rights in depth, and ultimately achieve greater clarity around what is required to realise the economic, social, and psychological benefits AI can bring.

I expect my bill to be debated in the Lords in March and would love to hear from anyone with feedback on its aims, intentions, and drafting. We all have a part to play.
