Julio Romo


Can Ofcom police the internet?

UK broadcast regulator Ofcom is to be given the power to police the internet and regulate content posted online. This enormous task follows the publication in April 2019 of the Government's 'Online Harms White Paper', which sets out its vision for regulating the internet in the United Kingdom. Its stated aim is for 'the UK to be the safest place in the world to go online, and the best place to start and grow a digital business.'

After years of letting technology companies regulate themselves, with very limited success given how easy it remains to access hateful and damaging content, governments around the world are slowly moving to a position where they feel they need to act.

Here in the UK, the 'Online Harms White Paper' proposes, amongst other things, a new statutory duty of care that companies must abide by and a series of principles of regulatory best practice, with Ofcom as the regulator.

But is this ambition possible and practical, and can the UK bring the online wild-west to heel?

Let’s start by reminding ourselves of some basics:

1. Digital and technology companies are in general US companies.

Yes, I know, this is very obvious, but it’s important that we remember this.

Facebook, Twitter, Google, YouTube, Reddit and many other channels that we use on a daily basis were founded in the US. They are companies set up in a jurisdiction in which freedom of speech is enshrined in the Constitution, under the First Amendment.

The premise of freedom of speech, or freedom of expression as the White Paper describes it, has taken hold around the world. Freedom of expression is also key to the Government's vision, which states that 'the UK is committed to a free, open and secure internet, and will continue to protect freedom of expression online.' Is this a nod to tech companies?

2. Online companies receive protection under US law

Ok, so now that we've reminded ourselves that some of the major online companies were founded in the US, let us look at some of the protections they are afforded under US law.

Have you heard of Section 230 of the US Communications Decency Act? If you haven't, let me explain it as simply as possible, because this act has shaped the internet that we all know and use today.

Passed in the US in 1996, the Act 'has been interpreted to say that operators of Internet services are not to be construed as publishers (and thus not legally liable for the words of third parties who use their services).' Simply put, it shields internet companies from liability for what users publish online because, unlike recognised media outlets, they are not publishers with editorial control.

Facebook, Twitter, Google and others (forums, ISPs, etc) benefit from this Act, which ‘immunizes both ISPs and Internet users from [civil] liability for torts committed by others using their website or online forum, even if the provider fails to take action after receiving actual notice of the harmful or offensive content.’

This makes challenging a US digital or technology company in its own jurisdiction very difficult. Not impossible, but a challenge nonetheless. So, to police these global digital and tech companies, what is the stick? Will the UK pivot to treating these companies as publishers, and therefore liable for the content that users post? Could it do this? And should it take this approach, especially given that trade talks between the US and the UK will be taking place?

3. Can the UK regulator police content on the internet?

The bigger question is whether there is any appetite within the Government to give Ofcom the necessary powers to regulate the providers of content. And if so, what carrot-and-stick approach would it establish?

We're at a stage where tech companies will want to establish their own narrative, one that confirms their view that they can police themselves and have been doing so successfully.

With regards to policing content, we should look at the legal tools that individuals already have at their disposal in relation to libel and defamation. Carter-Ruck states that, in relation to a defendant based outside England and Wales, an individual may still have a claim for defamation if there is publication here and serious harm to their reputation here. If the defendant is domiciled outside the EU or a contracting state to the Lugano Convention, the court will consider whether England and Wales is clearly the most appropriate place in which to bring the claim.

4. Does the UK public want regulation of online content?

Let’s look at this from the perspective of users and businesses.

The behaviour of people online has changed dramatically over the last few years. For social media firms, it has been about scaling and monetising since they were founded, and most have done this very well; Facebook and Google (YouTube) are prime examples.

Users have adopted these channels and consumed a lot of content and ads, and the more eyeballs social media platforms and online forums attract, the more appealing they become to advertisers.

Companies like Facebook are reaching their peak in terms of user numbers, so for them, it's about protecting their business.

People have started to move towards messaging apps like WhatsApp and Facebook Messenger, both of which are owned by Facebook. At the same time, we know that 1.4bn Facebook users use its groups feature and that there are around 10 million groups on Facebook, each either open, closed or private. People are moving towards private conversations and groups, which is where questionable content tends to surface. Businesses know this, and if they are concerned, then so should online firms be.

A lot of misinformation starts in private groups and closed forums. Will Ofcom be policing content there? If people's opinions are being shaped by misinformation in closed groups, then what?

Like me, a good number of commentators have been surprised by how long it has taken for a discussion about policing and regulating social media to be formalised. It's about time, but the discussion now being had is still some way from delivering outcomes that are good for people, businesses and governments. Not dealing with this issue makes those who fail to regulate hateful content online complicit.