A New Kind of Tech-Enabled Abuse: Deepfakes

One of the newer, more alarming types of digital abuse is the ‘deepfake.’ A deepfake is a video in which a person’s face or body has been digitally altered, typically with machine-learning tools, to make them appear to be someone else. Deepfakes are particularly dangerous because they can make a person appear to be doing something they never did or would not consent to doing on film.

According to Deeptrace, an Amsterdam-based cybersecurity company, 96% of deepfake videos on the internet are pornographic, and most of the victims are women. Anyone can become a victim: celebrities Kristen Bell and Scarlett Johansson have both been targeted by deepfake videos.

Unfortunately, deepfakes are another way abusive partners exert power and control over their victims. As with revenge porn, in which real images or video of the victim are distributed by a vengeful former intimate partner, deepfakes can be used the same way, but without requiring any actual footage of the victim.

Abusive intimate partners may use deepfakes to humiliate the victim in front of friends or family. Widely shared explicit deepfakes have also been known to endanger a person’s livelihood.

Deepfakes and other forms of image-based abuse are not always about revenge, though. My Image My Choice sums up the issue like this:

“It’s not all ‘revenge’ – people share images because they want sexual gratification, control, money, or because of voyeurism, extortion, misogyny, obsession. Some want increased social status and feel entitled to share these images for a laugh. Research on unsolicited images shows that some people believe it’s flattering or flirtatious.”

This makes us wonder: is the only answer to hope we don’t become victims? One alternative is legislation.

In the last two years alone, California has passed two bills addressing deepfakes; one of them, AB 602, covers the creation and distribution of sexually explicit deepfakes. Existing laws already required consent to distribute sexually explicit material, but deepfakes fell into a loophole that needed to be closed. In 2022, the current administration launched a national task force focused on preventing online harassment and abuse, with the intent to prevent and address technology-facilitated gender-based violence.

Is there anything you can do to protect yourself from deepfakes? One recommendation from the cybersecurity field is to watermark the digital images you share online. A visible watermark can make it more difficult for someone to create a realistic deepfake of you.
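For readers comfortable with a little scripting, here is a minimal sketch of that idea using the Pillow imaging library. The tiled text, spacing, and opacity are illustrative choices, not a standard; the point is simply that a semi-transparent watermark repeated across the whole image (including the face area) is harder to crop or erase than a small logo in one corner.

```python
from PIL import Image, ImageDraw, ImageFont

def add_watermark(src_path: str, dst_path: str, text: str = "@myhandle") -> None:
    """Tile a semi-transparent text watermark across an image.

    `text`, the tile spacing, and the opacity below are example values;
    adjust them for your own images.
    """
    # Work in RGBA so we can blend a transparent overlay onto the photo.
    base = Image.open(src_path).convert("RGBA")
    overlay = Image.new("RGBA", base.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    font = ImageFont.load_default()

    # Repeat the watermark text in a grid covering the entire image.
    step_x, step_y = 120, 60
    for x in range(0, base.width, step_x):
        for y in range(0, base.height, step_y):
            # White text at roughly 38% opacity (alpha 96 of 255).
            draw.text((x, y), text, fill=(255, 255, 255, 96), font=font)

    # Blend the overlay onto the photo and save as a normal RGB image.
    watermarked = Image.alpha_composite(base, overlay).convert("RGB")
    watermarked.save(dst_path)
```

To use it, call `add_watermark("photo.png", "photo_marked.png", "@yourname")` before uploading. A watermark is not a guarantee, but it raises the effort needed to produce a clean source image for a fake.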

How do you protect yourself from digital abuse? Share with the community in the comments section below.