UK Online Safety Bill: Prepare for Compliance with Upcoming Regulations
Updated: Feb 1
With the potential to take effect as early as spring 2023, the UK Online Safety Bill will require online platforms that operate in the UK (regardless of where they are based) to take a more proactive approach against harmful content posted to their sites. In this article, you’ll learn everything you need to know to ensure your platform complies with the bill’s requirements.
Included in this article:
What is the Online Safety Bill and why is it in legislation?
The basics: Regulatory body, repercussions of non-compliance, and timeframe
Support your platform’s compliance with video identification tools
While platforms operating within the UK are certainly familiar with various legal duties directed at the removal of harmful content, the current processes put in place to catch and remove harmful content—often user reports and understaffed/overworked content moderation teams—have failed to serve as an effective solution.
In an effort to address the epidemic of harmful content across the web and protect users (especially children) from being presented with it, the UK’s aptly named Online Safety Bill (OSB) may soon require online platforms to take an unprecedented series of proactive steps toward ensuring the safety of users online.
In anticipation of this bill, and as thought leaders in the detection of harmful content, we’ve put together an overview of the key touch points your platform will need to be aware of as you prepare for compliance.
What is the Online Safety Bill and why is it in legislation?
Currently at the report stage in the House of Commons, the Online Safety Bill (OSB) is proposed UK legislation that aims to place firm regulations on platforms that host user-generated content (UGC).
With the intention of making the UK the safest place in the world to use the internet, the government has been refining the bill since it was first drafted in May 2021, placing firm emphasis on protecting freedom of expression while managing the circulation of harmful content.
If passed, the bill will require online platforms to tackle the issue of harmful content posted on their platforms, with the intended result specifically directed at reducing the amount of child sexual abuse material (CSAM) and terrorist and violent extremist content (TVEC) that gets circulated online.
UK Online Safety Bill basics: Regulatory body, repercussions of non-compliance, and timeframe
Who regulates the new law and who will the new law affect?
The UK government will enforce the OSB through the communications regulator Ofcom, which will oversee around 25,000 tech companies currently hosting UGC. These platforms include forums, messaging apps, cloud storage services, and the most popular pornography sites. Search engines will also fall under Ofcom’s regulation, since they give users access to such platforms.
Additionally, though the OSB is UK law, the internet is global; the regulator will take appropriate action against a company no matter where it is based, so long as the platform has users in the UK.
What are the repercussions of non-compliance?
Non-compliant platforms will risk fines of up to £18m or 10% of their global annual turnover, whichever is higher, and in extreme cases may have their services blocked in the UK.
When will the Online Safety Bill go into effect?
The bill is still pending but could take effect as early as spring 2023. With this in mind, tech companies should soon begin investigating and implementing efficient, effective means of self-regulation for their platforms. Platforms should prepare now to avoid reputational damage and revenue losses. Under the OSB, users will be able to report a company for non-compliance if harmful content appears on its platform, so with the risk of being fined or blocked entirely, companies must take down harmful content quickly.
Primary objectives and focuses
Given that easy access to the internet has made children more susceptible than ever to encountering harmful content, securing children’s safety is the primary focus of this bill. Platforms will be required to put protections in place that prevent children from accessing content that is harmful to them, such as pornography.
Platforms will also be required to report child sexual abuse material (CSAM) to the National Crime Agency (NCA), which will aid law enforcement in their initiatives to locate perpetrators and rescue victims.
Protecting users’ rights while avoiding violence and racism
The OSB intends to protect users’ freedom to debate and share opinions that may be offensive by requiring platforms to uphold this right. Regulator Ofcom will further ensure that platforms do not discriminate against a specific political viewpoint.
Protecting women from harassment
This bill plans to protect women and girls against content-related illegal interactions and abuses, such as non-consensual intimate imagery (NCII), harassment, and cyberstalking. In addition, cyberflashing, the act of sending explicit images or videos without the receiver’s consent, will be made illegal by the OSB.
What platforms will need to do to comply with the Online Safety Bill
1. Protect children from harmful content and abuse
Alongside illegal user-generated content, platforms will need to create restrictions that prevent children from accessing harmful content such as pornography and material promoting self-harm or eating disorders.
Platforms intended for children will also be responsible for creating protections from abuse that isn’t clearly defined as illegal but can still be extremely harmful, such as cyberbullying. In the event of this kind of abuse, platforms will be required to offer easy ways for children (and their parents) to report the problem as well as take action against it.
2. Provide clear terms and conditions and an easy content reporting system
The OSB will not allow arbitrary removal of harmful content, and platforms must clarify the boundaries of permissible UGC.
For children and adults alike, there must be an easy way to report harmful content, which may take the form of comments, photos, videos, or emojis. Clear terms and conditions and an accessible reporting system are critical to maintaining compliance as a platform.
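As a minimal illustration of what an easy reporting system might capture, the sketch below models a user report and an in-memory triage queue. All of the names, fields, and categories are hypothetical — the OSB does not prescribe a schema, and a real system would persist reports and route them to moderators:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ContentReport:
    """A single user report against a piece of UGC (illustrative fields only)."""
    reporter_id: str
    content_id: str
    content_type: str   # e.g. "comment", "photo", "video", "emoji"
    reason: str         # a category drawn from the platform's terms and conditions
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


class ReportQueue:
    """Minimal in-memory triage queue; purely a sketch of the workflow."""

    def __init__(self) -> None:
        self._reports: list[ContentReport] = []

    def submit(self, report: ContentReport) -> int:
        """Record a report and return a ticket number for the reporter."""
        self._reports.append(report)
        return len(self._reports)

    def pending(self) -> list[ContentReport]:
        """Reports awaiting moderator review."""
        return list(self._reports)
```

The point of the sketch is that each report carries enough context (what was reported, by whom, and under which terms-and-conditions category) for a moderator to act on it quickly, and that the reporter gets an acknowledgement.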
3. Moderate harmful misinformation and disinformation
The OSB will require companies to remove posts that could indirectly cause harm to children, such as misinformation and disinformation that incites violence or spreads vaccine falsehoods. Ofcom will attend to this by obtaining transparency reports from companies, ensuring not only that they comply with the OSB but also that they equip their users with ways to identify such falsehoods.
4. Address racist abuse and hate speech
The OSB emphasizes that platforms allowing racist UGC on their sites will face heavy scrutiny. Platforms must respond quickly whenever a user posts racist content to avoid being fined or blocked, and the OSB urges them to have processes in place specifically for identifying racist content, whether text, photos, videos, or emojis.
5. Report illegal content to the National Crime Agency
Each platform must report illegal and abusive content found in both public and private channels to the National Crime Agency (NCA), which will be responsible for taking further steps to resolve the crimes. Especially regarding CSAM, the regulator will enforce companies’ duty to report crimes as quickly as possible to stop any active abuse of the children depicted.
6. Adopt advanced technology to automate and optimize scanning methods
The OSB urges affected companies to adopt advanced technology to help manage and automate the harmful video/image removal process. Where necessary, Ofcom will be able to require platforms to implement technology that scans private channels, such as private messaging, for harmful content without compromising user privacy. To maintain that privacy, platforms must use a highly accurate tool to identify such material while leaving legal content untouched.
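One minimal form of automated scanning is matching uploads against a reference database of known material. The sketch below uses a cryptographic hash lookup for clarity; the hash set and file bytes are hypothetical, and real deployments typically use perceptual rather than cryptographic hashes so that re-encoded or slightly altered copies still match:

```python
import hashlib


def sha256_of(data: bytes) -> str:
    """Return the hex SHA-256 digest of a file's raw bytes."""
    return hashlib.sha256(data).hexdigest()


def matches_known_content(data: bytes, known_hashes: set[str]) -> bool:
    """True if the file's digest appears in the reference hash set.

    Exact-hash matching only catches byte-identical duplicates, which is
    why perceptual fingerprinting (illustrated later) matters in practice.
    """
    return sha256_of(data) in known_hashes


# Hypothetical reference set, e.g. as distributed by a hash-sharing programme.
known = {sha256_of(b"previously flagged file bytes")}

assert matches_known_content(b"previously flagged file bytes", known)
assert not matches_known_content(b"a brand-new upload", known)
```

Because the check compares digests rather than inspecting the content itself, it can run without a human ever viewing legal private material, which is the privacy property the bill demands.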
Video identification tools: Helping ensure compliance with the Online Safety Bill
Video and image-based identification tools made possible by visual fingerprinting technology can give online platforms the ability to automate the level of content moderation required by the OSB. Using an advanced API to match newly posted videos and images against a reference database, video identification tools can help platforms build visibility into the content that ends up on their site, as well as provide a thorough and effective method of scanning for illegal and harmful content.
While larger platforms most likely already have some form of AI detection in place to identify harmful content, these technologies tend to pick up only a small percentage of the slack. This leaves the brunt of the work to human moderators, which is both harmful to those tasked with catching the material and slower and less efficient than automated video and image identification. Automating this work with new approaches to content identification can fill these gaps and ensure platforms account for the UGC that appears on their site.
Using an advanced video identification system would assist platforms in:
Detecting harmful content
Finding harmful content repeatedly posted to online platforms
Reducing the need for an extensive content moderation department
Moderating and curtailing abuse
Assisting law enforcement in finding criminals and rescuing victims
Helping hotlines track distribution of CSAM
Contact us to discuss how we can help address your platform's particular video identification needs.