
The Online Safety Bill: balancing content moderation and privacy

Online content, diverse and ever-growing, demands effective moderation. The tragic Molly Russell case illustrates the perils of the online world, especially for children. Some content is blatantly illegal - promoting drugs, weapons, or acts of terrorism. Other content, the “potentially harmful but legal” kind, treads a finer line: material that might promote eating disorders, self-harm, or bullying. This has prompted the UK government to pass the Online Safety Bill - an attempt to address the critical need for comprehensive content moderation.

But a dilemma persists: how does the Bill balance online safety against data privacy and individual freedoms? How do moderators distinguish between illegal and harmful content without compromising privacy?

Some key provisions of the Online Safety Bill include:

  • Duty of care: Platforms must proactively shield users, responding promptly to harmful content.

  • Service categories: The bill divides regulated services into categories based on their size and level of risk. Category 1 covers the largest, highest-risk user-to-user platforms, while Categories 2A and 2B cover search services and smaller user-to-user services respectively.

  • Targeted platforms: From tech giants like Meta to community-run platforms like Wikipedia, the bill's reach is extensive. It primarily targets online user-to-user services and search engines.

  • Ofcom’s role: With the power to levy heavy fines, Ofcom ensures companies toe the line. Fines can reach £18 million or 10% of global annual turnover, whichever is higher (see the illustrative sketch below).
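
To make the “whichever is higher” rule concrete, here is a minimal Python sketch of the penalty cap quoted above. It is purely illustrative: the function name and the example turnover figure are assumptions for this post, not anything defined in the Bill itself.

```python
def max_penalty(global_annual_turnover_gbp: float) -> float:
    """Return the upper bound of an Ofcom fine under the Bill:
    the greater of a fixed £18 million or 10% of global annual turnover."""
    FIXED_CAP_GBP = 18_000_000   # fixed £18 million ceiling
    TURNOVER_SHARE = 0.10        # 10% of global annual turnover
    return max(FIXED_CAP_GBP, TURNOVER_SHARE * global_annual_turnover_gbp)

# Example (hypothetical figures): a platform with £500 million global turnover
# could face up to £50 million, since 10% of turnover exceeds the £18 million floor.
print(f"£{max_penalty(500_000_000):,.0f}")  # £50,000,000
```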


How can the Online Safety Bill balance online safety and privacy?

For digital platforms, online safety and privacy each have several facets. Privacy spans both data privacy and individual privacy, yet keeping children safe online can mean crossing those boundaries so that harmful, sensitive or revealing content can be detected and hidden.

The Bill tries to bridge this gap. But what are the parameters for online monitoring and content moderation that don't compromise user privacy?

Defining what constitutes "harmful" content remains a challenge, with blatantly illegal content at one end and grey areas like misinformation or deepfakes at the other. Businesses are uncertain about what they are legally responsible for moderating or removing from their platforms, and how to do so securely. Platforms like WhatsApp and Signal argue that scanning encrypted messages undermines privacy: even with the best of intentions, there is no guarantee a scanning backdoor won't be misused - whether by malicious actors, hackers, or overreaching authorities.

The initial "legal but harmful" provisions of the Online Safety Bill tried to address these concerns. The now revised bill attempts to balance the scales and focuses on transparency and empowerment.

The bill aims to strike a balance between the two. With emphasis on the removal of clearly illegal content, it simultaneously tries to provide a framework for addressing contentious "legal but harmful" materials. This aims to offer businesses a more straightforward path without infringing on free expression and user privacy.


What is the global impact of the Online Safety Bill on content moderation worldwide?

The ripple effects of the UK's Online Safety Bill will be closely watched globally, as nations grapple with their own challenges of online content moderation.

The EU has already taken steps with its Digital Services Act, which aims for a transparent and safer online environment. Across the Atlantic, the USA's Section 230, though often debated, offers a distinct approach by largely shielding online platforms from liability for user-generated content. 

The UK's stance on online safety can set a precedent for other nations, pushing them to re-evaluate their own legislation and strategies for online content moderation. Additionally, global tech companies that often operate across multiple jurisdictions will need to adapt to these varying regulations, which could potentially lead to a more unified approach to content moderation worldwide. 


The Online Safety Bill embodies the UK's efforts to harmonise online safety with user privacy. As nations worldwide grapple with content moderation, the UK offers a blueprint that seeks a delicate equilibrium between safeguarding users and preserving privacy. 

