Sigurður Ragnarsson

5 Ways Platforms Fail to Effectively Moderate Harmful Content

With upcoming legislation across the globe aimed at tackling harmful content online, the responsibility will ultimately fall on the platforms that host user-generated content to take the necessary action. Harmful content will only stay offline if platforms uphold their content restrictions and moderate to the best of their abilities.


In this article, we’ve outlined 5 possible oversights platforms make when regulating harmful content posted by their users.

 


Is your platform at risk of . . .





1. Insufficient moderation


Most platforms rely on human content moderators to manually review content that has been flagged automatically or reported by users. This approach can be harmful to the people tasked with catching such material, and it is also a slower, less efficient way to identify harmful videos and images. Manual review alone simply cannot keep up with the volume of harmful material moderators face every day. With such a high volume of content, especially on larger platforms, it's virtually impossible to catch every harmful post as it goes live on the site.
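As a rough illustration of the scale problem, consider a back-of-the-envelope staffing estimate. Every number below is a hypothetical assumption, not a figure from any real platform:

```python
# Hypothetical back-of-the-envelope estimate of manual review staffing.
# Every figure here is an illustrative assumption, not a real platform's data.

daily_uploads = 2_000_000              # user posts per day (assumed)
flag_rate = 0.02                       # share of uploads flagged or reported (assumed)
reviews_per_moderator_per_day = 300    # sustainable manual reviews per person (assumed)

flagged_items = daily_uploads * flag_rate
moderators_needed = flagged_items / reviews_per_moderator_per_day

print(f"Items needing review per day: {flagged_items:,.0f}")            # 40,000
print(f"Moderators needed just to keep pace: {moderators_needed:,.0f}")  # ~133
# And that is before accounting for appeals, shift coverage, languages,
# or the toll the work takes on reviewers.
```

Even under these modest assumptions, a purely manual pipeline needs a large, dedicated team simply to avoid falling behind.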


Smaller platforms may not have the resources to hire enough moderators or to implement technologies that flag content automatically, so a substantial amount of harmful content can go unnoticed.


2. Failure to act on reports


Even when harmful content is reported to the platform or automatically flagged, platforms may fail to take down the material or, if it is illegal, to report it to the correct authorities in a timely manner. When platforms can't get to a report, the potentially harmful or even illegal content stays live and can harm any number of users who view it. It can also encourage similar behavior on the platform if the users responsible for the content slip under the radar.


When illegal content goes live on a platform, is reported by multiple people, and isn't removed promptly, a horde of users is often left frustrated with the platform's inaction. From the point of view of C-level staff, it's easy for such a report to go unseen among the many others received each day. Yet users see harmful content seemingly allowed to remain on the platform, and their trust in it erodes. When reports cannot be acted on quickly, the platform's image suffers.


3. Inconsistent guideline enforcement


Platforms may apply their content rules inconsistently, resulting in confusion or mistrust among their users and creating more opportunities for harmful content to stay on the platform. This often comes down to the guidelines themselves: they can be written so vaguely that the line between what is permitted and what is prohibited becomes blurred.


That being said, not all platforms are built with the user's best interests in mind. Holding users' attention makes money through advertising and other digital products and services, while user safety takes a back seat. Truthfully, it's difficult for platforms to juggle the two, keeping harmful content off their site while maintaining user engagement, and each platform faces different challenges and needs.


4. Leaning too heavily on AI


While larger platforms will likely use AI, algorithms, and other technologies to identify harmful content, these technologies only pick up a small share of the workload. Much of what AI flags still requires human eyes to judge the reported content accurately, meaning that platforms that lean too heavily on AI for content moderation will ultimately see poor results.
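A common hybrid pattern is to let the model act only on its most confident predictions and route everything uncertain to human reviewers. The sketch below is a simplified illustration with made-up thresholds and a placeholder harm score, not a description of any particular platform's system:

```python
# Minimal sketch of routing AI-scored content between automation and human review.
# Thresholds and the harm score itself are hypothetical placeholders.

AUTO_REMOVE_THRESHOLD = 0.98   # assumed: act automatically only when very confident
HUMAN_REVIEW_THRESHOLD = 0.60  # assumed: anything uncertain goes to a person

def route(harm_score: float) -> str:
    """Decide what happens to an item given a model's harm score between 0 and 1."""
    if harm_score >= AUTO_REMOVE_THRESHOLD:
        return "auto_remove"    # high confidence: remove and log for audit
    if harm_score >= HUMAN_REVIEW_THRESHOLD:
        return "human_review"   # uncertain: a moderator makes the final call
    return "publish"            # low score: allow, but leave open to user reports

scores = [0.99, 0.85, 0.72, 0.40, 0.61]
print([route(s) for s in scores])
# ['auto_remove', 'human_review', 'human_review', 'publish', 'human_review']
```

Even in this toy version, most flagged items still land in front of a human, which is exactly where the staffing and speed problems from the earlier sections reappear.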


The plain truth is that AI is still too primitive to solve content moderation alone. Many users who upload harmful content expect moderators to look for it and deliberately modify the material to escape detection, which makes it harder for platforms to automatically flag content based on a known file. AI's major limitations include false positives, false negatives, human-programmed bias, and a lack of understanding of context. These can lead to the removal of acceptable content or the failure to remove unacceptable content, cause undue discrimination, or accidentally promote harmful or explicit content.
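To see why small modifications defeat naive file matching, compare the cryptographic hashes of a file and a trivially altered copy. The bytes below are placeholders standing in for real media, and the example assumes a simple blocklist of known file hashes rather than the more robust perceptual fingerprinting some platforms use:

```python
# Why exact file matching is easy to evade: changing a single byte of a file
# produces a completely different cryptographic hash.
import hashlib

original = b"...bytes of a known harmful video..."  # placeholder for real media bytes
modified = original + b"\x00"                       # trivially padded / re-encoded copy

print(hashlib.sha256(original).hexdigest())
print(hashlib.sha256(modified).hexdigest())
# The two digests share nothing, so a blocklist of exact file hashes misses
# the altered copy even though a person would recognize the same content.
```

Robust fingerprinting narrows this gap, but context, intent, and edge cases still call for human judgment.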


5. Following a one-size-fits-all approach


One thing platforms get wrong is adopting another platform's content guidelines wholesale. Not all platforms are created equal, nor do they all face the same risks from harmful content.


The digital market is highly fragmented, with thousands of platforms serving user bases of vastly different sizes. No single set of guidelines will suffice for all of them; each platform should assess its size and staffing alongside its user volume and understand the risks posed by its user-generated content. Guidelines tailored to the content commonly found on a given platform make it less likely that harmful content will spread there.

