Sigurður Ragnarsson

Prepare to be Regulated: The UK Online Safety Bill Will Proactively Shield Users From Harm

Updated: Aug 29, 2023

The UK Online Safety Bill (OSB) is expected to receive Royal Assent in autumn 2023, changing the legal framework for every social media platform with users in the UK. Once the bill takes effect, online platforms will for the first time be required to take a proactive approach to preventing harm to users. In this article, you’ll learn about this shift, what it entails, which harms the OSB covers, and some of your platform’s new responsibilities - with a particular focus on user-generated content.


 


Included in this article:

  • Correcting the imbalance in user-to-platform responsibility

  • What is user-generated content?

  • What is this shift in the legal framework?

  • Why is this shift occurring?

  • Harms in focus: The scale of the problem

  • Providing protection for users

  • New responsibilities for platforms

  • Repercussions for non-compliant platforms

  • Ofcom and its role in enforcing the OSB

Correcting the imbalance in user-to-platform responsibility


Although social media platforms do have legal duties in the UK, until now they have been able to discharge those responsibilities in an entirely reactive way. Unfortunately, it is common for smaller platforms to do nothing about harmful content beyond cooperating with legal requests. Those that have chosen to do more have relied on an insufficient mix of user reports and under-equipped content moderation teams to flag and remove content they believe is harmful, offensive, or illegal. Although moderators and analysts sometimes manage to review and remove harmful content before it reaches users, the persistent avalanche of user-generated content (UGC) posted to the web each day overwhelms content moderation teams, which in turn leaves much of the responsibility on the shoulders of users.


Putting the bulk of the responsibility on users leads to many issues, such as harmful material being re-uploaded across the internet and exposing more individuals, particularly children, to it. Far too often, user reports go unattended under online platforms’ current methods of moderation, leaving large volumes of harmful content on the web.


Additionally, when platforms take no initiative of their own to keep their services clean, harmful content that slips past moderators and user reports can remain online for extended periods of time.


With this in mind, the UK OSB aims to correct the imbalance by requiring platforms to proactively assume responsibility for the potential harms related to content posted on their site. Platforms that host user-generated content will need to assess all forms of content posted by users and carry out a full evaluation of the risks their site(s) may pose.


What is user-generated content?


Put simply, user-generated content (UGC) is any post of any type made or shared by a user, rather than content created or shared by the platform itself. UGC makes up a significant share of the global digital space, and it carries powerful influence: over 50% of people say they create some kind of content at least once daily*, and 45% of people say they watch UGC-hosting sites like YouTube and Facebook for more than an hour a day**.


With this in mind, the potential for harmful UGC to appear on the web, and for people to be exposed to it, is significant.


The OSB regulates UGC where it could be encountered on the service by other users or surfaced by a search engine. Content originated by recognized news publishers and shared by a user is not covered.


The UGC that platforms will need to scrutinize for harmful material includes:

  • Videos

  • Images

  • Text-based comments and posts, including emojis and hyperlinks

  • User profiles

  • Private messages (under certain circumstances)


What is this shift in the legal framework?


Platforms hosting UGC have taken highly inconsistent approaches to harmful content. It is unfortunately common for smaller platforms to do little or nothing beyond responding to legal requests and court orders, and even the largest platforms have relied heavily on a clunky and ineffective combination of user reports and content moderation teams to alert them to harmful content. Without a clear legal framework to define the good practices platforms should adopt and to hold them accountable, it is likely that harmful content online will continue to damage individuals and society.


In the future, platforms with UK users will be under no illusions as to their responsibilities regarding harmful content, which will likely require them to dramatically improve the technology they use for content moderation - including detecting, reviewing, and removing harmful content.
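To make the proactive side of this concrete, below is a minimal, hypothetical sketch of one common building block: matching uploaded files against a database of hashes of previously identified harmful material, so known content can be blocked before it reaches other users. The function and variable names are illustrative assumptions, and real deployments typically rely on perceptual hashing and specialist tooling rather than exact SHA-256 matching.

```python
import hashlib

# Hypothetical store of hashes for previously identified harmful material
# (in practice supplied by organisations such as the IWF or NCMEC).
KNOWN_HARMFUL_HASHES = set()

# Hypothetical queue of newly seen items awaiting moderator or classifier review.
review_queue = []


def screen_upload(file_bytes, uploader_id):
    """Return 'blocked' for known harmful files, otherwise 'allowed' (and queue for review)."""
    digest = hashlib.sha256(file_bytes).hexdigest()

    if digest in KNOWN_HARMFUL_HASHES:
        # Known harmful content: stop it before it ever reaches other users.
        return "blocked"

    # Unknown content: allow it, but keep enough context for later review
    # and for acting quickly on user reports.
    review_queue.append({"hash": digest, "uploader": uploader_id})
    return "allowed"


if __name__ == "__main__":
    KNOWN_HARMFUL_HASHES.add(hashlib.sha256(b"previously flagged file").hexdigest())
    print(screen_upload(b"previously flagged file", "user-1"))  # blocked
    print(screen_upload(b"new holiday video", "user-2"))        # allowed
```

The key design point is that the check happens at upload time rather than after a user report, which is the shift from reactive to proactive moderation that the OSB demands.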


Ofcom, the UK communications regulator, is responsible for the new OSB regime. Its role will include providing guidance, receiving transparency reports from platforms, monitoring compliance, and enforcement - including powers to investigate, to require improvement measures, and to levy fines. Beyond the potential fines, non-compliance with the OSB carries many other negative implications for a platform. Having to engage with a regulatory case is expensive, time-consuming, and a significant distraction for senior executives who would otherwise be focusing on growing their business. It can also create very damaging publicity, putting the platform’s reputation and user growth at risk.


Why is this shift occurring?


There is a growing epidemic of harmful content online and, as described above, current reporting and content moderation methods have been ineffective and inconsistent. The OSB aims to address both the abuse behind harmful content and users’ exposure to it by giving explicit legal duties to those responsible for creating these online spaces in the first place. These duties focus on risk assessment, reactive and proactive safeguarding measures, transparency, and reporting.


Harms in focus: The scale of the problem


The regulatory changes and prioritization of content moderation efforts described above will be focused on specific types of harmful content, including Child Sexual Abuse Material (CSAM), Terrorist and Violent Extremist Content (TVEC), and Non-consensual Intimate Imagery (NCII). The harms and abuses that can result from these types of content take many forms.


In particular, the distribution of CSAM online is a prime example of how complex and varied these harms and abuses are. CSAM may depict acts of direct abuse perpetrated by an offender. It can also be generated when a child posts a self-generated sexualized image at the request of a remote perpetrator, via grooming or coercion, which can then be shared and distributed without the child’s knowledge or consent. Images created through sexting and sextortion between underage peers can also result in extended abuse as they are continually shared online. In the most horrific cases, abuse may even be live-streamed, a method of production and distribution chosen by perpetrators to quickly monetize victims’ abuse while leaving minimal digital evidence.


It is vital to stop the circulation of CSAM, not only because it is abhorrent, but also because continued circulation perpetuates the offending behaviors that drive abuse in the first place and encourages new users to join abuse circles. Detecting circulating CSAM also helps law enforcement, as it can provide clues that enable them to locate both victims and perpetrators.


Revictimization poses another threat, as victims’ abuse videos often continue to circulate on the internet even after they are taken down. The Canadian Centre for Child Protection reports that 67% of CSAM victims feel that the persistent reposting and distribution of their images online impacts them differently than their physical abuse; these images are permanent, and their distribution can in some cases be never-ending, subjecting the victim to renewed abuse every time the content is re-uploaded***.



Providing protection for users


In short, platforms will be responsible for preventing the spread of harmful content, since they have full control over what is allowed on their site(s). In order to thoroughly manage what users post online and to manage users’ rights more efficiently, platforms will be held accountable for creating a system that effectively meets the OSB’s requirements.


Additionally, it is hoped that this shift will provide users with stronger protection. This includes protection of users’ rights, but also protection from harmful content. The unfettered and unpredictable way in which UGC spreads leaves users vulnerable to being shown harmful material, and the harm flows in two directions: first, harmful UGC can damage a user psychologically simply through exposure to it; second, harmful UGC such as CSAM extends the abuse of the content’s victim every time it is re-uploaded to the internet.


With the impending shift, the technologies and protections that platforms put in place against harmful UGC will aim to stop these harms in their tracks.


New responsibilities for platforms


Risk assessment is at the heart of the new regime


Today, social media platforms are under no particular obligation to proactively manage potential harms relating to the use of their services: this is what the OSB aims to change. All platforms, regardless of size, will be required to perform their own risk assessments relating to illegal content and potential harm to children. Based on their findings, platforms can then decide for themselves what proportionate mitigation steps are appropriate. In addition, the regulator Ofcom will maintain a list of 30-40 special category platforms deemed to be the highest risk; these platforms will have additional duties, including a requirement to provide transparency information to Ofcom, and will be supervised more closely. Some smaller, non-categorized services will also be expected to engage proactively with Ofcom where they raise particular risks.


The Online Safety Bill and age verification: Protecting children


Protecting children is a primary objective behind the OSB. All platforms will have a duty to protect children from age-inappropriate content, such as pornography, as well as from illegal content. To do this, platforms will need to implement some form of age verification so that they can create different user journeys for children, including blocking access to content, features, or the entire service itself. There are different approaches to age verification, and the expectation is that higher-risk services will need to use more robust measures.
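As an illustration of what different user journeys might mean in practice, here is a minimal, hypothetical sketch that routes users to a journey based on a verified age. The age bands, the journey structure, and the use of None for a failed verification are assumptions made for the example, not requirements of the bill; real services would rely on an accredited age-assurance provider rather than a self-declared age.

```python
# Minimal sketch (hypothetical): choosing a user journey from a verified age,
# as the OSB's child-safety duties will require platforms to do in some form.
from dataclasses import dataclass
from typing import Optional


@dataclass
class UserJourney:
    can_view_adult_content: bool   # e.g. pornography behind an age gate
    can_use_direct_messages: bool  # example of a higher-risk feature
    has_service_access: bool       # whether the user may use the service at all


def journey_for_age(verified_age: Optional[int]) -> UserJourney:
    """Pick a journey; None means age verification failed or was refused."""
    if verified_age is None or verified_age < 13:
        # Unverified or under the service's minimum age: block the service itself.
        return UserJourney(False, False, False)
    if verified_age < 18:
        # Children: block age-inappropriate content and restrict risky features.
        return UserJourney(False, False, True)
    # Verified adults: full access.
    return UserJourney(True, True, True)


print(journey_for_age(15))    # child journey: adult content blocked
print(journey_for_age(None))  # verification failed: no access
```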


Empowering adult users to protect themselves


Prior iterations of the OSB were criticized for requiring platforms to remove ‘legal but harmful’ content, such as pornography, racism, eating disorder-related content, and misogyny. These provisions have now been removed, but platforms must offer adults tools to manage what content they see. Doing this will require platforms to categorize content so that users can choose whether or not to view it.
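The following is a minimal, hypothetical sketch of what such a user control could look like once content is categorized: posts carry category labels, and a post is hidden if any of its labels appear in the set the user has chosen to hide. The category names are illustrative; the OSB does not prescribe a specific taxonomy.

```python
# Minimal sketch (hypothetical): filtering categorized content against an adult
# user's preferences, as the OSB's user-empowerment duties will require in some form.

def visible_to_user(content_categories: set[str], hidden_categories: set[str]) -> bool:
    """Return True unless the post carries a category the user has chosen to hide."""
    return not (content_categories & hidden_categories)


# A user who opts out of eating-disorder-related content but nothing else.
prefs = {"eating_disorder"}
print(visible_to_user({"fitness", "eating_disorder"}, prefs))  # False: hidden
print(visible_to_user({"fitness"}, prefs))                     # True: shown
```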


Protect adults' online freedoms


Platforms will be prohibited from removing or restricting user-generated content, or from banning users, where no terms of service or laws have been violated. This part of the bill is intended to prevent the arbitrary banning of users who have not violated a platform’s guidelines. If platforms do remove content or ban a user, they must give the user the right to appeal.


In the event that a user believes they were illegitimately banned, they need an accessible appeal process through which the platform can review the situation that led to the user’s removal and choose to lift or uphold the ban.


Have clear terms of service and strictly enforce them


At the same time, platforms will be responsible for having clearly stated and enforced terms of service. The OSB will legally require platforms to take down illegal content and any content that violates their terms of service.


Platforms will need to clarify what legal content users are allowed to upload in order to prevent content that breaches their terms of service. This will include outlining specific categories of acceptable and unacceptable content within each site’s terms and conditions.
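One hypothetical way to picture those categories is a simple policy table that maps each content label defined in the terms of service to an enforcement action. The labels and actions below are illustrative assumptions, not anything the OSB or Ofcom prescribes.

```python
# Minimal sketch (hypothetical): a machine-readable view of the categories a
# platform's terms of service might define and the action applied to each.
CONTENT_POLICY = {
    "csam": "remove_and_report",         # illegal: take down and report
    "terrorist_content": "remove_and_report",
    "fraud": "remove",
    "harassment": "remove",              # legal but banned by the terms of service
    "adult_nudity": "age_gate",          # legal, restricted to verified adults
    "general": "allow",
}


def enforcement_action(category: str) -> str:
    """Look up the action for a labelled post, defaulting to human review."""
    return CONTENT_POLICY.get(category, "send_to_review")


print(enforcement_action("fraud"))        # remove
print(enforcement_action("unknown_tag"))  # send_to_review
```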


Repercussions for non-compliant platforms


So long as a platform has UK users, the regulator can take appropriate action against it no matter where it is based. Once this bill becomes UK law, non-compliant platforms could face fines of up to £18 million or 10% of the company’s global annual turnover, whichever amount is higher.


In severe cases, the regulator will have the authority to block non-compliant platforms from being accessed in the UK, going beyond a large fine to cut off their UK source of income altogether. Some UK parliamentarians have also indicated their desire to introduce a power to hold platform executives criminally liable.


However, platform executives should not assume that these ‘last resort’ penalties are the only powers the regulator has up its sleeve. In practice, regulatory concerns would trigger an early process of mandatory engagement where a platform may be investigated, given advice, and forced to provide regular progress reports. Having to engage with a regulator in this way for an extended period of time is extremely expensive and distracting for senior executives, who will often conclude that compliance is the wisest option if they want their business to thrive.


Ofcom and its role in enforcing the OSB


The UK government has already put a face to online safety regulation, appointing the communications regulator Ofcom to enforce the new regulatory regime. Ofcom is already preparing the ground with guidance, consultations, informal platform engagement, and research, which will enable it to move quickly once the OSB passes into law.


Ofcom’s regulatory role focuses on platforms, not users. It will include publishing guidance for platforms, such as detailed codes of practice which help services understand what is required of them in each area. Although Ofcom can bring enforcement powers to bear on any platform which is thought to be non-compliant, it will only proactively monitor and supervise a limited set of higher-risk services. These higher-risk platforms should expect to have an ongoing relationship with the regulator based on transparency and full disclosure - enabling Ofcom to step in and take early action on any concerns. Most other platforms will not need to provide transparency data or other reports.


The OSB also gives platform users an important role: they can help maintain the safety of platforms by alerting Ofcom to instances of non-compliance. Anyone will be able to report a platform to Ofcom if they find that it is violating the requirements of the OSB. Ofcom has also started to carry out regular research with users to understand their concerns and experiences, which will feed into the developing regulatory regime.


Summary of Ofcom’s expectations:


According to Ofcom’s Roadmap to Regulation document, which sets out its upcoming role under the OSB, the regulator expects platforms to:


  • Show how they respond to risks of harm

  • Consider how they prioritize user protection

  • Incorporate safety considerations into product and engineering design decisions

  • Continually consider the needs and rights of all users

  • Consider tradeoffs for the sake of platform safety

  • Regularly discuss safety issues among senior decision makers


The regulator’s initial goal is to fulfill the top priorities of the OSB, with heavy emphasis on reducing the spread of child sexual abuse material, terrorist content, and online fraud, and on preventing children’s access to inappropriate and harmful content.


 

Citations

*Crowdtap

**SEMRush

***The Canadian Centre for Child Protection
