Sigurður Ragnarsson

How Do We Stop the Spread of Deepfake Pornography?

Updated: Aug 31, 2023

A new form of non-consensual intimate imagery (NCII) has begun to spread online. Deepfakes, hyper-realistic images and videos that often splice a victim’s face into pornographic content, are subjecting celebrities, influencers, and average people to ongoing image-based harassment. As of today, there are no tools that can help victims effectively stamp out a deepfake and stop its spread. But hash-based image and video identification technology, the same technology used to help identify child sexual abuse material, may offer a solution.


In this article, learn what deepfakes are, the harms they cause, and the technological solutions that could help victims stop the spread of their deepfaked images and videos.

 

Included here:

NCII is mutating, and its new strain is spreading fast
Where do deepfakes come from?
The harm caused by deepfaking
Image identification technology to aid deepfake victims
The tech to curb the spread of deepfakes exists; it’s just a question of application


NCII is mutating, and its new strain is spreading fast


The broad definition of non-consensual intimate imagery (NCII) includes any type of intimate media, often sexual in nature, that is distributed online without consent from the depicted subject. A central issue of NCII is that, once these images have made it onto the web, there is not much a victim can do before they are duplicated and shared, leading to prolonged humiliation, harassment, and trauma. Predictably, NCII abuse tends to disproportionately target women, girls, people of color, LGBTQ+ people, and other marginalized communities.


Now, with the rise of extremely advanced AI and other image-rendering technologies, a new strain of NCII has entered the fold: deepfakes. The term “deepfake” started where all web vernacular is born these days, on Reddit. It soon became a household term, commonly defined as “a video of a person in which their face or body has been digitally altered so that they appear to be someone else,” and it is often used in the context of sexually explicit content for harassment (i.e., image-based sexual abuse) and other malicious purposes.


This technology has made it extremely easy for the faces of celebrities, influencers, and average people to be stolen without consent and all too realistically transplanted into pornographic content, leading to unimaginable humiliation, online harassment, and extended bouts of abuse as the images circulate across the web.


Where do deepfakes come from?


People use AI and other types of digital rendering software to copy and paste faces onto bodies with surgical accuracy. In fact, an entire deepfake industry has grown up around these technologies.


NBC News reported in early March 2023 that a 230-iteration ad campaign for a deepfake generator app ran across Facebook, Instagram, and Meta Messenger, effectively slipping through the cracks of platform content moderation. The ad featured “Harry Potter” star Emma Watson kneeling as if about to engage in sexual activity, with ad copy inviting viewers to “Swap any face into the video.”


In another instance, a separate NBC News report from late March 2023 describes how “A creator offered on Discord to make a 5-minute deepfake of a ‘personal girl,’ meaning anyone with fewer than 2 million Instagram followers, for $65,” demonstrating that deepfakes are actively being manufactured and that the market for them is expanding.


What’s so alarming here is the ease. By simply downloading an app or paying someone to fabricate one for you, anybody can create a deepfake of anybody else, so long as an image of them exists somewhere online.


The harm caused by deepfaking


As noted above, deepfakes are disproportionately aimed at women. Sensity AI, an organization dedicated to tracking deepfakes, found that 96% of deepfakes were non-consensually created sexual images, with 99% of those targeting women.


Read any recent article on deepfake pornography, and you’ll find that its victims don’t always belong to the social rung you might expect. Increasingly, average people with only a minimal online presence are finding themselves the subject of deepfakes, and the number of such cases is growing dramatically every day.


Deepfake pornography has certainly affected the lives of high-profile celebrities (Emma Watson, Scarlett Johansson, the list goes on), but the technology has become so readily available to the public that even the average person, who may have posted only a handful of images of themselves on the web, risks discovering their likeness hyper-realistically pasted onto bodies acting out behaviors they have never taken part in.


Image identification technology to aid deepfake victims


The current lack of effective takedown tools


With deepfakes’ ability to victimize virtually anyone, the central concern is the question of takedown. However, the general consensus from victims points to a disturbing truth: instance after instance of these deepfake attacks shows that there is no straightforward, effective way to remove this content once it makes it onto the web. Part of the problem is the speed with which deepfake images are reposted from site to site, but the larger issue is the lack of tools that would enable victims to take matters into their own hands.


In a November 2022 article from TechCrunch, Natasha Lomas explains that, “Victims of revenge porn and other intimate imagery abuse have complained for years over the difficulty and disproportionate effort required on their part to track down and report images that have been shared online without their consent.”


This sentiment echoes across almost every deepfake attack. The Huffington Post quotes one victim as saying, “You know how the internet is—once something is uploaded it can never really get deleted . . . It will just be reposted forever.” A second victim from the same report states, “As disappointing and sobering as it is, there aren’t a lot of options for victims,” adding that she feels deepfakes exist “to monetize people’s humiliation.”


A potential solution to the deepfake problem


The good news is that addressing the deepfake problem isn’t rocket science. Tech moguls tend to approach the problem with complex deep learning algorithms, when a much simpler, highly practical solution already exists.


Image and video identification based on hash technology, a method of visual fingerprinting that can accurately match an image against any file in a database even if that image has been altered, has proven powerfully effective at identifying other types of harmful content. Hash technology is often used to trace, take down, and stop child sexual abuse material (CSAM), as well as terrorist and violent extremist content (TVEC) and non-deepfake NCII.
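
To make this concrete, here is a minimal sketch of hash-based matching using perceptual hashing in Python. It assumes the Pillow and imagehash libraries; the file names and the distance threshold are illustrative assumptions, not part of any existing takedown tool.

```python
from PIL import Image
import imagehash

# Compute a perceptual hash (a "visual fingerprint") for a known abusive image
# and for a newly encountered image. Unlike a cryptographic hash, a perceptual
# hash changes only slightly when an image is resized, recompressed, or
# lightly edited, so near-duplicates can still be matched.
reference_hash = imagehash.phash(Image.open("known_deepfake.jpg"))   # hypothetical file
candidate_hash = imagehash.phash(Image.open("uploaded_image.jpg"))   # hypothetical file

# Subtracting two hashes gives the Hamming distance: the number of differing
# bits. A small distance indicates a likely visual match.
distance = reference_hash - candidate_hash

# The threshold below is an illustrative assumption; a real system would tune
# it against acceptable false-positive and false-negative rates.
MATCH_THRESHOLD = 8
if distance <= MATCH_THRESHOLD:
    print(f"Likely match (distance {distance}); flag for review and takedown")
else:
    print(f"No match (distance {distance})")
```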


Building an effective deepfake reference collection


Using hash technology to stop the spread of deepfakes would rely on developing a comprehensive deepfake database, much as the National Center for Missing & Exploited Children (NCMEC) has done with CSAM, and could dramatically speed up the process of removing these images once they start to spread.
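
As a rough illustration of how such a reference collection might be assembled, the sketch below hashes a folder of known deepfake videos frame by frame using OpenCV and imagehash. The folder name, sampling rate, and in-memory dictionary are assumptions made for the example; a real collection, like the hash lists NCMEC maintains for CSAM, would involve far more curation, governance, and scale.

```python
import os

import cv2           # OpenCV, used here only to decode video frames
import imagehash
from PIL import Image

def hash_video(path, frame_step=30):
    """Return perceptual hashes for a video, sampling one frame every `frame_step` frames."""
    hashes = []
    capture = cv2.VideoCapture(path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % frame_step == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # OpenCV decodes frames as BGR
            hashes.append(imagehash.phash(Image.fromarray(rgb)))
        index += 1
    capture.release()
    return hashes

# Build the reference collection from a folder of known or templated deepfake
# videos. The folder name is hypothetical; a production system would use a
# proper database rather than an in-memory dictionary.
reference_collection = {}
for filename in os.listdir("known_deepfake_videos"):
    if filename.lower().endswith(".mp4"):
        reference_collection[filename] = hash_video(
            os.path.join("known_deepfake_videos", filename)
        )

print(f"Indexed {len(reference_collection)} reference videos")
```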


By creating a reference collection consisting of templated deepfake porn videos as well as known deepfake porn videos, it would be possible to identify the deepfake variations in which victims’ faces have been placed into the videos. Once a deepfake video has been identified, a second pass could then perform face identification, effectively identifying the victim and alerting them.
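
The two-pass flow described above might look something like the following sketch. The first pass compares a candidate video’s frame hashes against the reference collection, and only matching videos proceed to the second, face-identification pass. `hash_video` and `reference_collection` come from the previous sketch, while `identify_and_notify_victim` is a hypothetical placeholder for whatever face-matching and notification process an organization would actually run.

```python
MATCH_THRESHOLD = 8   # illustrative bit-distance threshold, as in the earlier sketch

def matches_reference(candidate_hashes, reference_collection, threshold=MATCH_THRESHOLD):
    """First pass: does any frame of the candidate video sit close to a known deepfake frame?"""
    for reference_hashes in reference_collection.values():
        for ref in reference_hashes:
            for cand in candidate_hashes:
                if ref - cand <= threshold:   # Hamming distance between perceptual hashes
                    return True
    return False

def identify_and_notify_victim(video_path):
    """Second pass (hypothetical placeholder): face identification and victim
    notification would happen here; both are outside the scope of this sketch."""
    print(f"{video_path}: matches a known deepfake; run face identification and alert the victim")

# `hash_video` and `reference_collection` are reused from the previous sketch.
candidate_path = "suspect_upload.mp4"          # hypothetical newly found video
candidate_hashes = hash_video(candidate_path)
if matches_reference(candidate_hashes, reference_collection):
    identify_and_notify_victim(candidate_path)
```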


Of course, as new deepfakes are continually created, this reference collection would need to be updated. One possible method of maintaining a collection of such size and scope is to scrape the deepfake porn sites and services responsible for generating and hosting this content and continually update the reference collection based on what is found there. Another method would be to allow victims to upload deepfake porn videos to the collection themselves, which would afford victims some agency in putting a stop to their harassment.
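
A victim-submission workflow could extend the same collection along the lines sketched below; the function name and submission identifier are purely illustrative, and the example reuses the hypothetical `hash_video` helper and `reference_collection` from the earlier sketches.

```python
from datetime import datetime, timezone

def add_victim_submission(video_path, reference_collection):
    """Hash a victim-submitted deepfake video and add it to the reference collection,
    so future copies and re-uploads of the same video can be matched automatically."""
    submission_id = f"victim-submitted-{datetime.now(timezone.utc):%Y%m%dT%H%M%S}"
    reference_collection[submission_id] = hash_video(video_path)   # helper from the earlier sketch
    return submission_id

# A victim, or a hotline acting on their behalf, submits a video they have found.
new_id = add_victim_submission("reported_video.mp4", reference_collection)   # hypothetical file
print(f"Added {new_id}; the collection now covers {len(reference_collection)} videos")
```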


Who would create and maintain such a collection remains to be discussed. Several hotlines dedicated to aiding victims of revenge porn already exist, including the Revenge Porn Hotline, the Cyber Civil Rights Initiative (CCRI), and Safe Horizon, each of which is the type of organization well positioned to take on such an initiative.


The tech to curb the spread of deepfakes exists; it’s just a question of application


Applying hash technology to the deepfake problem would take some practical engineering to build tools with ease of use in mind, but the good news is that the technology exists. It’s simply a question of working out how victims and hotlines can come together to create an effective takedown process using a tool like the one described above.


In the meantime, victims can turn to resources such as the Revenge Porn Hotline, the Cyber Civil Rights Initiative (CCRI), and Safe Horizon.



