New Research Shows Deepfake Harassment Tools Spread on Social Media and Search Engines

A new analysis of synthetic intimate image abuse (SIIA) found that the tools for making non-consensual, sexually explicit deepfakes are easily discoverable all over social media and through simple searches on Google and Bing.

Research published by the counter-extremism organization the Institute for Strategic Dialogue (ISD) shows how tools for creating non-consensual deepfakes spread across the internet. The researchers analyzed 31 websites hosting SIIA tools and found that they received a combined 21 million visits a month, with one site receiving up to four million visits in a single month.

Chiara Puglielli and Anne Craanen, the authors of the research paper, used SimilarWeb to identify a common group of sites that shared content, audiences, keywords and referrals. They then used the social media monitoring tool Brandwatch to find mentions of those sites and tools on X, Reddit, Bluesky, YouTube, Tumblr, public pages on Instagram and Facebook, forums, blogs and review sites, according to the paper. “We found 410,592 total mentions of the keywords between 9 June 2020 and 3 July 2025, and used Brandwatch’s ability to separate mentions by source in order to find which sources hosted the highest volumes of mentions,” they wrote. 

The easiest place to find SIIA tools was through simple web searches. “Searches on Google, Yahoo, and Bing all yielded at least one result leading the user to SIIA technology within the first 20 results when searching for ‘deepnude,’ ‘nudify,’ and ‘undress app,’” the authors wrote. Last year, 404 Media saw that Google was also advertising these apps in search results. But Bing surfaced the tools most readily: “In the case of Bing, the first results for all three searches were SIIA tools.” These results were not paid advertisements placed by the websites, but organic search results surfaced by the engines’ crawlers and indexing.

X was another massively popular way these tools spread, they found: “Of 410,592 total mentions between June 2020 and July 2025, 289,660 were on X, accounting for more than 70 percent of all activity.” A lot of these were bots. “A large volume of traffic appeared to be inorganic, based on the repetitive style of the usernames, the uniformity of posts, and the uniformity of profile pictures,” Craanen told 404 Media. “Nevertheless, this activity remains concerning, as its volume is likely to attract new users to these tools, which can be employed for activities that are illegal in several contexts.” 

One major spike in mentions of the tools on social media happened in early 2023 on Tumblr, when a woman posted about her experience of being targeted with sexual harassment made using those very same tools. As targets of malicious deepfakes have said over and over again, the price they pay for speaking up about their own harassment, or even objecting to the harassment of others, is the risk of drawing more attention and harassment to themselves.

‘I Want to Make You Immortal:’ How One Woman Confronted Her Deepfakes Harasser
“After discovering this content, I’m not going to lie… there are times it made me not want to be around any more either,” she said. “I literally felt buried.”

Another spike on X in 2023 was likely the result of bot advertisements for a single SIIA tool launching, Craanen said. X has rules against “unwanted sexual conduct and graphic objectification” and “inauthentic media,” but the platform remains one of the most significant places where tools for making that content are disseminated and advertised.

Apps and sites for making malicious deepfakes have never been more common or easier to find. There have been several incidents of schoolchildren using “undress” apps on their classmates, including last year, when a Washington state high school was rocked by students who took photos from other children’s Instagram accounts and used AI to “undress” around seven of their underage classmates, which police characterized as a possible sex crime against children. In 2023, police arrested two middle schoolers for allegedly creating and sharing AI-generated nude images of their 12- and 13-year-old classmates; police reports showed the preteens used an application to make the images.

A recent report from the Center for Democracy and Technology found that 40 percent of students and 29 percent of teachers said they know of an explicit deepfake depicting people associated with their school being shared in the past school year. 

Laws About Deepfakes Can’t Leave Sex Workers Behind
As lawmakers propose federal laws about preventing or regulating nonconsensual AI generated images, they can’t forget that there are at least two people in every deepfake.

The “Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks” (TAKE IT DOWN) Act, passed earlier this year, requires platforms to remove synthetic sexual abuse material when it is reported; after years of state-by-state legislation around deepfake harassment, it is the first federal-level law to attempt to confront the problem. But critics of the law have said it carries a serious risk of chilling legitimate speech online.

“The persistence and accessibility of SIIA tools highlight the limits of current platform moderation and legal frameworks in addressing this form of abuse. Relevant laws relating to takedowns are not yet in full effect across the jurisdictions analysed, so the impact of this legislation cannot yet be fully known,” the ISD authors wrote. “However, the years of public awareness and regulatory discussion around these tools, combined with the ease with which users can still discover, share and deploy these technologies suggests that takedowns cannot be the only tool used to counter their proliferation. Instead, effective mitigation requires interventions at multiple points in the SIIA life cycle—disrupting not only distribution but also discovery and demand. Stronger search engine safeguards, proactive content-blocking on major platforms, and coordinated international policies are essential to reducing the scale of harm.”
