Will the Online Safety Bill protect people online?

The Online Safety Bill is a landmark piece of legislation designed to lay down in law a set of rules about how online platforms should behave to better protect their customers and users. 

 It aims to:

  • prevent the spread of illegal content and activity such as images of child abuse, terrorist material and hate crimes, including racist abuse 
  • protect children from harmful material 
  • protect adults from legal – but harmful – content 

The bill was introduced in the House of Commons on 17 March 2022, having been scrutinised by the joint parliamentary committee over several months and reviewed by the Department for Digital, Culture, Media and Sport (DCMS).

Even before its introduction, various parts of the bill were drip-fed via the media, such as measures to protect people from anonymous trolls, protect children from pornography and stamp out illegal content. Each development was met with intense scrutiny.

And since its introduction, this has continued, with many current and former politicians, tech execs and business leaders sharing their views on the bill, described by the UK government as ‘another important step towards ending the damaging era of tech self-regulation’.

But the big question is: will the bill protect people online and hold tech giants to account?

Important step towards a safer internet 

The bill has broadly been accepted as a good starting point for proposed updates to rules that have needed to be changed for a long time. These rules are now much clearer and, therefore, should be easier to police.

At last, big tech will be held accountable: the bill imposes a duty of care on social media platforms to protect users from harmful content, under threat of a substantial fine from Ofcom, the communications regulator that will enforce the act.

It’s a step towards making the internet a safer, collaborative place for all users, rather than leaving it in its current ‘Wild West’ state, where many people are vulnerable to abuse, fraud, violence and in some cases even loss of life.

Lacking clarity 

When you get into the nitty-gritty of it, there is some language that could be tightened and issues that need ironing out.

For example, the bill needs to be more specific about how it balances freedom of speech with protecting people from online abuse.

While fraud is mentioned, it is often lost amongst the headline-catching issues of underage access to pornography and online abuse. Fraud is an epidemic in the UK and needs to be a central part of the bill.

An initial issue I had with the earlier version of the bill is that it positioned algorithms which can spot and deal with abusive content as the main solution. This does not prevent the problem; it merely enables action to be taken after the event.

Arguably in recognition of this, the UK Government recently added the introduction of user verification on social media. It will enable people to choose to only see content from users who have verified they are who they say they are, which is a welcome addition.

But the Government isn’t clear on what those accounts look like, and its suggestions for how people can verify their identity are flawed. The likes of passports and sending a text to a smartphone simply aren’t fit for the digital age.

Account options 

In my view, there should be three account options for social media users.

  • Anonymous accounts: available for those who need it, e.g. whistleblowers, journalists or people under threat. There will still be a minority who use this for nefarious reasons, but this is a necessary price to pay to maintain anonymity for those who need it. These bad actors will be the focus of AI tools that identify and remove content, and of efforts to hold the platforms to account.

  • Verified accounts: Orthonymous (real name) - accounts that use a real name online (e.g., LinkedIn) and are linked to a verified person. 

  • Verified accounts: Pseudonymous - accounts that use an online name that does not necessarily identify the actual user to peers on the network (e.g., some Twitter accounts), but are linked to a verified identity through the services of an independent third-party provider. Leaving identification in the hands of the social media platforms would only enable them to further exploit personal information for their own gain, and would not engender the security and trust a person needs to use such a service. The beauty of this approach is that it remains entirely voluntary: each individual can choose whether to verify themselves or continue to engage in the anonymous world we currently live in.

We expect that most users would choose to interact only with verified accounts if such a service were available, so the abuse and bile from anonymous, unverified accounts could be turned off. After all, who doesn’t want a nicer internet where there are no trolls or scammers?
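To make the three options concrete, the sketch below imagines how a platform might model them and let a user switch off content from unverified accounts. This is a minimal illustration: the type names and the filterFeed helper are assumptions of mine, not anything defined in the bill or in any platform’s actual API.

```typescript
// Illustrative sketch only: these type names and the filterFeed helper are
// assumptions, not part of the bill or any platform's real API.

type AccountType =
  | "anonymous"               // no identity check, e.g. whistleblowers, journalists
  | "verified-orthonymous"    // real name shown, linked to a verified person
  | "verified-pseudonymous";  // online name shown, identity held by a third-party verifier

interface Account {
  handle: string;
  type: AccountType;
}

interface Post {
  author: Account;
  body: string;
}

// A user preference: hide content from accounts that have not verified who they are.
function filterFeed(posts: Post[], verifiedOnly: boolean): Post[] {
  if (!verifiedOnly) return posts;
  return posts.filter((post) => post.author.type !== "anonymous");
}

// Example: a user who opts in sees only posts from verified authors.
const feed: Post[] = [
  { author: { handle: "@anon_account", type: "anonymous" }, body: "..." },
  { author: { handle: "@jane_smith", type: "verified-orthonymous" }, body: "..." },
];
console.log(filterFeed(feed, true).map((post) => post.author.handle)); // ["@jane_smith"]
```

The key design point is that the choice sits with the individual user, not the platform: verification is voluntary, and the filter is simply a preference applied to their own feed.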

Verifying users 

In terms of verification, the solution is a simple one. Let’s look to digital identity systems which let people prove who they are without laborious and potentially unreliable manual identity checks.

Using data from the banks, which have already verified 98% of the UK adult population, social media firms can ensure their users are who they say they are, while users share only the data they want to, so protecting their privacy. This system can also protect underage people from age-restricted content.
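To illustrate the “share only the data they want to” principle, the sketch below imagines a bank acting as an identity provider that returns signed yes/no claims (for example, “over 18” and “linked to a verified person”) rather than raw documents. The interfaces and names here are illustrative assumptions of mine, not drawn from the bill or from any named national scheme.

```typescript
// Illustrative sketch only: the IdentityProvider interface and the claims shown
// are assumptions, not drawn from the bill or any named national scheme.

interface AttributeRequest {
  overAge?: number;         // e.g. 18, for age-restricted content
  verifiedPerson?: boolean; // "this account is linked to a real, verified person"
}

interface Assertion {
  subject: string;                  // a pseudonymous identifier, not the user's name
  claims: Record<string, boolean>;  // only the yes/no answers the platform asked for
  issuer: string;                   // e.g. the user's bank acting as identity provider
  signature: string;                // lets the platform check the claims are authentic
}

interface IdentityProvider {
  assert(userConsentToken: string, request: AttributeRequest): Promise<Assertion>;
}

// The platform stores only the minimal signed claims, never passport scans or bank data.
async function verifyForPlatform(
  idp: IdentityProvider,
  consentToken: string
): Promise<Assertion> {
  return idp.assert(consentToken, { overAge: 18, verifiedPerson: true });
}
```

The point of this pattern is data minimisation: the social media firm learns that a user is over 18 and verified, while the underlying identity data stays with the bank.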

Such digital identity systems already exist in countries such as Belgium, Norway and Sweden and have seen strong adoption and usage for a range of use cases. There is of course no suggestion that such a service will eradicate online abuse all on its own, but it would certainly be a big step in the right direction. 

The Online Safety Bill is certainly a progressive move. While this type of legislation is being discussed in different countries, the UK is now leading the charge and its approach is consistent with those being considered around the world.

However, the Government can’t win this fight on its own. It needs buy-in from social media firms, banks, businesses and consumers. Through collaboration and adopting the right tools, we can help make the internet and social media platforms a safer place for all.

 

