The UK is publicly pursuing an adequacy decision from the EU for its current data protection and privacy regulations. However, the idea of eschewing outright alignment with the relevant EU laws is gaining traction, as it offers an opportunity to sculpt a more appealing privacy framework.
As it stands, the transition period which has ensured the stability of continuing UK/EU regulation to bridge the UK’s departure from the EU will cease on 31 December 2020.
In conversation with Finextra Research, Miriam Everett, partner and global head of data and privacy at Herbert Smith Freehills, refers to a
statement made by the UK Prime Minister Boris Johnson in February 2020 in which he alludes to the possibility of altering the application of current privacy laws under GDPR:
“The UK will in future develop separate and independent policies in areas such as […] data protection, maintaining high standards as we do so… the UK would see the EU’s assessment processes on financial services equivalence and data adequacy as technical and
confirmatory of the reality that the UK will be operating exactly the same regulatory frameworks as the EU at the point of exit.”
While the public narrative remains ‘business as usual’ during the Brexit transition period, this speculation about departing from certain areas of the GDPR illustrates that the motivation to capitalise on the economic and strategic potential of AI may demand
divergence from certain elements of EU regulation.
While GDPR has been a very welcome introduction in the push toward consumer protection and the security of personal data in an increasingly digital era, it arguably comes at a cost to innovation and to deeper use of AI, which naturally generates more sophisticated insights the more information it is fed.
Promoting innovation by loosening regulatory frameworks is something of a double-edged sword. Prioritising innovation over and above the right to privacy would be anathema to the UK’s established approach of protecting citizens through the protection of their digital identity.
Everett believes there is a desire to “relax” GDPR laws in the UK to make it more business-friendly and to foster business and technology. However, the more the UK favours this approach, the more difficult it will be to achieve an adequacy decision.
“The UK has always been very proud of its status as a European tech hub and it will want to maintain that in a post-Brexit world. But an issue I’ve certainly heard debated is whether privacy regulation hinders innovation - especially in the AI space,” Everett
argues.
That is to say, do UK privacy laws prevent us from keeping up with the likes of China and other nations which are perhaps less restrictive in this area? By extension, Everett questions, is Brexit an opportunity to relax our privacy laws in order to encourage
innovation?
Tightrope tension: Striking a balance between ethics and advancement
Everett navigates the tensions that are increasingly at play between the protection of the individual’s right to privacy and the development and use of AI technologies: “There is a tension around data collection under GDPR where firms aren’t supposed to
collect data for the sake of it, yet as we know, AI gets better and learns more the more information you give it.”
In February 2020, the European Commission (EC) released its European strategy for data, which included the white paper ‘Artificial Intelligence – A European approach to excellence and trust.’
The paper drills into the EU’s key objectives, emphasising its intention to become a leader in the AI space and, by 2030, to have the volume of data stored and processed within Europe reflect the bloc’s economic weight.
It expects that by 2025 a strong majority of data will be stored in smart connected devices and objects, allowing that data to be more readily utilised.
While the white paper leans heavily into becoming a competitive global player and capitalising on the potential of AI and data technology across the EU, it does not fully explain how it will promote this technology alongside the cumbersome regulatory demands
of GDPR.
Everett argues that “with respect to the sort of tensions between AI and privacy I can’t see the EU changing the GDPR. It took six years to get it through, and amending the GDPR could take another six years, and that is a lifetime in the tech world.”
Vikram Khurana, senior associate at Bristows LLP, says it’s no surprise that the EU wants to invest heavily in AI, given the potentially transformative effects it may have across industry, society and its citizens.
While it seems unlikely that the EU or indeed any European country or entity will match the US or China dollar-for-dollar in AI investment, “an area that Europe can really lead in is the development of a regulatory framework for AI – one that is based around
trust and ethics as highlighted in the EC’s recent white paper.
“Like in other heavily regulated fields that have extra-territorial impact, where Europe leads, the rest of the world often eventually follows.”
He notes that the paper flags a range of ethical issues that may apply to how AI is deployed in the financial services sector, such as the opacity of some AI decision-making processes (the so-called ‘black-box’ effect).
Some contextual nuts and bolts of the AI dilemma
Steve Elliot, managing director at LexisNexis Risk Solutions, flags automated decision making (ADM) and how its use is influencing the dialogue on AI.
ADM is currently used across financial services in a number of circumstances, Elliot explains: “ADM can be viewed as a very basic form of AI and decision making. It is often being done by humans across financial services, when it may not actually need to
be done by people.”
Its value, Elliot explains, lies in the ability for FIs to deploy machines to carry out these “logic decision tree tasks,” allowing human intelligence to be more efficiently positioned in the value chain.
“Currently, because people are applied early on in the process you can’t get the volume of high-quality outputs that you need to be getting to. ADM allows firms to move people to the right point in the supply chain instead of having them doing repetitive automated tasks and cookie stamping.”
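To make the idea concrete, a minimal sketch of such a “logic decision tree” task might look like the following, where only the borderline cases reach a person. The field names and thresholds are purely hypothetical.

```python
# A rule cascade of the kind Elliot calls a "logic decision tree" task:
# clear-cut cases are decided automatically, and only borderline ones are
# routed to a person. All field names and thresholds are hypothetical.

def pre_screen(application: dict) -> str:
    """Route an application to auto-approve, auto-decline or human review."""
    if not application["identity_verified"]:
        return "auto-decline"    # hard rule: identity could not be verified
    if application["credit_score"] >= 720 and application["debt_to_income"] < 0.30:
        return "auto-approve"    # clearly low risk: no human input needed
    if application["credit_score"] < 500:
        return "auto-decline"    # clearly high risk: no human input needed
    return "human-review"        # the genuinely judgement-worthy middle ground

print(pre_screen({"identity_verified": True,
                  "credit_score": 640,
                  "debt_to_income": 0.35}))  # -> human-review
```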
While the topic of automation has a tendency to spur nervousness around job losses in the industry, Elliot thinks “that’s a slight distortion of the market. The work is still there for people, it’s just further down the supply chain.” In fact, he asserts that the introduction of technology allows the industry to innovate and create products that weren’t previously available, thereby boosting employment.
The boundaries within which ADM is permitted to be utilised are
best explained by the Information Commissioner’s Office (ICO):
“Automated decision-making systems are a key part of our business operations – do the GDPR provisions mean we can’t use them? The GDPR recognises this and doesn’t prevent you from carrying out profiling or using automated systems to make decisions about
individuals unless the processing meets the definition in Article 22(1), in which case you’ll need to ensure it’s covered by one of the exceptions in Article 22(2).”
The ADM processing restricted under Article 22(1) of the GDPR covers circumstances where decisions are made solely by automated means (with no human involvement) and produce a legal or similarly significant effect on the individual. Many financial services decisions fall into the ‘significant effect’ category, so using solely automated profiling to approve or reject a loan application would be restricted unless one of the Article 22(2) exceptions applies.
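Read schematically (as an illustration only, not legal analysis), the test amounts to two cumulative conditions plus an exceptions check, along these lines:

```python
# An illustrative encoding of the Article 22 test described above (a sketch,
# not legal analysis): the restriction applies only when a decision is solely
# automated AND has a legal or similarly significant effect, and even then an
# Article 22(2) exception (e.g. explicit consent, contractual necessity or
# legal authorisation) can permit the processing.

def article_22_restricted(solely_automated: bool,
                          significant_effect: bool,
                          has_22_2_exception: bool) -> bool:
    """True if the processing would be restricted under Article 22."""
    in_scope = solely_automated and significant_effect  # Article 22(1)
    return in_scope and not has_22_2_exception          # Article 22(2)

# A fully automated loan decision with no applicable exception:
print(article_22_restricted(True, True, False))   # -> True (restricted)
# The same decision with meaningful human involvement is out of scope:
print(article_22_restricted(False, True, False))  # -> False
```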
However, FIs seeking to further improve their ADM systems can hit a roadblock because of the ‘profiling’ component stipulated under GDPR. As ADM often (but not always) involves profiling, which often (but not always) uses algorithms to predict behaviour or to control access to a service, the issue of explainability is called into question.
Elliot says this requirement is of significant concern to market players: “I think that right now firms are worried that regulators require them to be able to explain decisions. And if you’re starting to use machines to make decisions, at one level it can
be very easy if you’re only using logic decision trees, but if you’re starting to use machine learning and beyond that deep learning, it becomes more difficult to explain the decisions you arrive at. But this needn’t be the case.”
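The contrast Elliot draws is easy to see in miniature: with a simple logic decision tree, the explanation is just the path the decision took. The following self-contained sketch, built on an entirely hypothetical tree, reads that path back verbatim.

```python
# With a logic decision tree, the explanation of a decision is simply the
# path taken through the tree, which can be read back verbatim. The tree,
# features and thresholds below are entirely hypothetical.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    feature: Optional[str] = None   # None marks a leaf
    threshold: float = 0.0
    left: Optional["Node"] = None   # branch taken when value < threshold
    right: Optional["Node"] = None  # branch taken when value >= threshold
    outcome: Optional[str] = None   # set only on leaves

def decide_and_explain(node: Node, applicant: dict):
    """Return the outcome plus a human-readable trail of the tests applied."""
    trail = []
    while node.outcome is None:     # walk from the root down to a leaf
        value = applicant[node.feature]
        if value < node.threshold:
            trail.append(f"{node.feature} = {value} < {node.threshold}")
            node = node.left
        else:
            trail.append(f"{node.feature} = {value} >= {node.threshold}")
            node = node.right
    return node.outcome, trail

tree = Node("credit_score", 600,
            left=Node(outcome="decline"),
            right=Node("debt_to_income", 0.4,
                       left=Node(outcome="approve"),
                       right=Node(outcome="decline")))

outcome, reasons = decide_and_explain(tree, {"credit_score": 650, "debt_to_income": 0.35})
print(outcome, reasons)
# approve ['credit_score = 650 >= 600', 'debt_to_income = 0.35 < 0.4']
```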
If machines are used to risk-rate an individual, Elliot contends, and the decision reached has passed through a huge range of variables and analysis, firms could still offer a human-driven review of the outcome where the individual is not satisfied with the original decision. That is, there are other arrangements firms can implement in order to re-assess queried decisions.
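One such arrangement, sketched here with placeholder names and logic, could route any contested outcome to a human reviewer rather than back through the model.

```python
# One possible arrangement, sketched with placeholder logic: the machine
# makes the initial risk rating, and a contested outcome is routed to a
# human review queue (in the spirit of the Article 22(3) safeguard of
# obtaining human intervention) rather than simply re-running the model.

human_review_queue = []

def risk_rate(profile: dict) -> str:
    """Placeholder for a model-driven rating over many variables."""
    return "decline" if profile.get("risk_score", 0.0) > 0.7 else "approve"

def handle_decision(profile: dict, customer_contests: bool) -> str:
    decision = risk_rate(profile)
    if customer_contests:
        human_review_queue.append((profile, decision))  # a person re-assesses
        return "pending human review"
    return decision

print(handle_decision({"risk_score": 0.82}, customer_contests=True))
# -> pending human review
```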
Elliot draws attention to the concern of individuals wishing to query a machine-made decision, taking the example of mortgage approvals (or rejections), and counters: “I think that the opposite situation may apply, as right now people are making these decisions with fairly blunt decision-making tools. We see how quickly people’s credit card or mortgage applications are declined.
“However, if you have machines making decisions using many more variables and are able to be much more fluid, more people will be approved credit and these approvals will be granted with better alignment to the individual’s true risk profile.”
Everett echoes this line of argument and notes that the EU’s white paper acknowledges that human decision making is itself not immune to mistakes and biases:
“The legal and ethical issues tend to interface here also, as the specific rules in the GDPR about ADM often stem from the potential for bias in the technology, yet there is little acknowledgment of bias in humans. To my mind we seem to be holding technology to a higher standard than we hold humans to.”
Yet, though the innate fallibility of man may lead to a prejudiced or biased decision about one or two people, there is consensus that the risk of an AI tool making prejudiced decisions about entire databases of individuals is of the utmost concern.
Particularly as ADM ventures closer to AI and machine learning technologies, Elliot adds, regulators are rightly concerned about closely supervising this part of the market because “if the data going into the machine is not adequately managed then it could skew the outcomes one way or another.”
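A basic version of the check regulators might expect, sketched here with invented data, is to compare outcome rates across a sensitive attribute and flag large gaps for investigation.

```python
# A crude first check for skewed input data: compare outcome rates across a
# sensitive attribute (a demographic-parity check). The records below are
# invented purely to illustrate the idea.

from collections import defaultdict

decisions = [
    {"group": "A", "approved": True},  {"group": "A", "approved": True},
    {"group": "A", "approved": False}, {"group": "B", "approved": True},
    {"group": "B", "approved": False}, {"group": "B", "approved": False},
]

totals, approvals = defaultdict(int), defaultdict(int)
for d in decisions:
    totals[d["group"]] += 1
    approvals[d["group"]] += d["approved"]  # bool counts as 0 or 1

for group in sorted(totals):
    rate = approvals[group] / totals[group]
    print(f"group {group}: approval rate {rate:.0%}")
# group A: approval rate 67%
# group B: approval rate 33%
# A large gap is a signal that the input data needs investigating.
```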
Of pertinence in this discussion, Elliot highlights, is that while explainability remains the priority of the ICO, “we need to work out whether people want the right to explainability on the likes of risk assessments, or whether they want the best, most effective product that meets their risk profile.
“If it’s the latter then we must encourage greater use of machine learning because it can do much more risk assessment than a human can.”
Yet, unanimity is found on the topic of trust
Khurana observes: “The EC is saying that trust needs to be the bedrock of AI. If the industry can tackle this issue and build trustworthy AI systems, AI is more likely to be accepted and taken up by businesses and individuals. Better uptake means
the benefits promised by AI can be realised more fully, for business as well as society at large.”
Companies recognise the erosion of trust between firms and their customers following high-profile scandals and data breaches, Everett continues, and are flipping the narrative on its head. By focusing on rebuilding this trust, firms distinguish themselves through a unique selling point and drive loyalty among their existing and potential customers.
Will a divergence from the regulatory demands of GDPR lead to instability and manipulation, undermining this quest toward trust?
Everett thinks not.
In fact, she believes that companies can and are using this arguably unstable landscape to their advantage.
“I’m seeing a lot of organisations looking into data ethics. Rather than throwing money and technology at the ethical problems AI presents, firms have shifted their approach to think about what they
should be doing with this data.”
By taking this approach, firms address the seemingly endless wait for legislation to be drafted, passed and implemented: they pre-empt the regulatory frameworks and apply data strategies they expect to comply with future requirements.
This not only protects their reputation, by taking the initiative to lead with a consumer-first strategy, but also encourages regulators to be more accommodating when the frameworks are inevitably put in place.
Everett elaborates: “While it may not be an entirely altruistic approach, if firms are behaving well and paving a responsible path with their data strategy the regulators may be willing to implement less stringent requirements because they see the market
working toward an effective solution itself.”
Khurana points to AI tech powers such as the US and China to draw a comparison with the UK and Europe. Against unparalleled data and resources, combined with lighter privacy regulation in the US and minimal (read: nil) consumer rights in China, the UK and Europe simply can’t match up. However, where they can compete, Khurana argues, “is in skills, thought leadership, and – as shown by the white paper – a concerted effort in developing a regulatory and governance framework for AI.”
Should the UK seek to diverge from the burdens of the GDPR, Everett suggests: “For some that will be a popular choice, but for others an onerous burden. Either way, for someone, it will be a difficult decision to make.”
Artificial Intelligence is a key topic to be discussed at EBAday, the Euro Banking Association’s annual conference in partnership with Finextra. European banks, fintechs and payment providers will gather to explore changes in the industry and to develop an open dialogue across key industry players.
Register here for EBAday at The Hague, Netherlands on the 19th-20th May, 2020.