A number of AI video generators, mostly released by Chinese companies, lack the most basic guardrails that prevent people from generating nonconsensual nudity and pornography, and are already widely used for that exact purpose in online communities dedicated to creating and sharing that type of content.
A 404 Media investigation into these AI video generators shows that the same kind of ecosystem that’s developed around AI image generators and nonconsensual content has already been replicated around AI video generators, meaning that only a single image of someone is now required to create a short nonconsensual adult video of them. Most of these videos are created by abusing mainstream tools from companies with millions of dollars in venture capital funding, and are extremely easy to produce, requiring only a reference image and a text prompt describing a sexual act. Other tools use more complicated workflows that require more technical expertise, but are based on technology produced by some of the biggest tech companies in the world. The latter are free to use, and have attracted a large community of hobbyists who have produced guides for these workflows, as well as tools and models that make those videos easier to produce.
“[These AI video generators] need to put in safeguards to prevent the prompting and creation of NCII [nonconsensual intimate images],” Hany Farid, a professor at UC Berkeley and one of the world’s leading experts on synthetic media, told me in an email. “OpenAI’s DALL-E, for example, has some pretty good semantic guardrails on the user prompt input, and image filtering on the image output to prevent the widespread misuse of their image generator. This type of output filtering is relatively standard now and used in many social media platforms like Facebook/Instagram/YouTube to limit the uploading of NSFW content.”
The most popular tool I’ve seen people use to create nonconsensual intimate videos is Pixverse, which is made by AIsphere, a Beijing-based company founded by Wang Changhu, the former “head of vision technology” at TikTok owner ByteDance. People who create nonconsensual content use different AI video generators for different purposes, and they commonly use Pixverse to create videos that make female celebrities look as if they’re taking their tops off.
A nonconsensual video created with Pixverse that was shared on Telegram and censored by 404 Media.
404 Media is not sharing the exact prompts that produce these videos, but they use the same loophole we’ve seen multiple times with other generative AI tools, most notably Microsoft’s AI image generator Designer, which was used to create nude images of Taylor Swift viewed by millions of people on X. Essentially, the prompts describe a sexual act or nudity without using any explicit terms that developers use to flag prompts their tool shouldn’t generate. I’ve seen this method used across a variety of generative AI tools since the Taylor Swift incident in January 2024, but developers have generally implemented stricter guardrails to prevent this type of abuse since then. A new crop of AI video generators, however, seems remarkably lax in this respect.
In addition to lacking strong guardrails on what kind of prompts they will accept, AI video generators appear to be easier to abuse because of their image-to-video feature. This allows users to feed a single still image to the AI generator, then type a text prompt in order to animate that image however they like. With Pixverse, people took images of celebrities from the red carpet, social media, and other sources, and then typed a prompt in which they described the celebrity undressing. In other instances, users first AI generated a nonconsensual explicit image of a real person, then fed that AI-generated image to the AI video generators to animate it. In fact, many of the nonconsensual AI videos I’ve seen reuse the same Taylor Swift images that went viral in 2024, or other nonconsensual explicit images of celebrities created with Microsoft’s Designer before the company introduced stronger guardrails.
Judging by the hundreds of videos I’ve seen, it appears certain AI video generators are better at producing specific types of nonconsensual videos. For example, while Pixverse is often used to make videos in which women take their tops off, Hailuo, an AI video generator from a Chinese company called Minimax, is often used to make videos of women who turn around and shake their bare ass at the camera. Hailuo is also used to make videos where two people can be made to kiss, an increasingly popular use of AI video generators, and a feature that’s been advertised on Instagram. KlingAI, from the Chinese company Kuaishou, has been used to animate nonconsensual AI generated nude images, or videos that drench the person in the video with white liquid that looks like semen, but people in this community say that there’s been a “purge” on Kling that now makes it harder to produce these videos.
The community that’s dedicated to making these videos and sharing prompts that bypass guardrails tends to move from AI tool to AI tool as they discover new vulnerabilities or as companies close loopholes. It’s easy to identify which AI video generator they are using because sometimes they either say so or share instructions and screenshots of the app’s interface. In many cases, the videos they produce include a watermark with the AI generator’s branding.
At the time of writing, many users have flocked to Pika, a US-based AI video generator. People in this community have discovered that Pika will easily produce very graphic nonconsensual videos, including videos of celebrities performing oral sex. Again, all users need in order to produce these videos is a single image of a real person and a text prompt.
“Pika is very liberal with both image uploaded and prompt,” one user in a Telegram community for sharing nonconsensual content, who uploaded nonconsensual content created with Pika, said. Another user, who shared a video created with Pika that animates a graphic image of a female celebrity, suggested that if users’ prompts get blocked, they can just keep trying to generate the video over and over again until Pika produces the video. I was able to produce a nonconsensual video with Pika using the instructions shared in this community on my first try.
Users can also produce this content with these apps on their phones. Pixverse, Kling, Hailuo, and Pika are also available via the Apple App Store. Pixverse, Kling, and Hailuo are available via the Google Play Store.
Apple did not respond to my request for comment. Google acknowledged my request for comment but did not provide one in time for publication. As my previous reporting has shown, both companies have struggled to deal with the “dual use” nature of these apps, which seem innocent on the app stores, but can also be used to produce nonconsensual content that violates the companies’ policies.
“While the lewd photos and videos may be fake, the harms to victims of these AI-generated deepfakes are very real. My DEFIANCE Act would finally give victims the ability to hold perpetrators accountable, and it’s time for Congress to pass it into law,” Senator Dick Durbin (D-IL), Ranking Member of the Senate Judiciary Committee, told me in an email. In February, Durbin sent Mark Zuckerberg a letter asking why the company can’t get a handle on AI nudify ads being promoted on Meta’s platforms. In December, other members of Congress pushed the CEOs of Apple and Google to explain their role in hosting these apps on their app stores as well.
“For AI services that refuse to put in reasonable guardrails, Apple and Google should remove them from their app-store,” Farid said. “And, if these services utilize any financial services (Visa, Mastercard, Amex, PayPal), these financial services should drop these bad actors thus crippling their ability to monetize.”
The vast majority of people who AI generate nonconsensual videos use these types of apps because they are free or cheap, easy to access, and easy to use. But users with more technical expertise and access to powerful GPUs are now using open AI models produced by tech giants to make more believable videos without any restriction.
The most popular open AI video generation model in this community is HunyuanVideo, which was developed by the Chinese tech giant Tencent. Tencent published a paper introducing HunyuanVideo on December 3, 2024, along with GitHub and Hugging Face pages that explain how it works and where users can download, run, and modify the model on their own. What followed was an accelerated version of what we’ve seen with the evolution of Stable Diffusion, an open AI image generation model: people immediately started to modify HunyuanVideo to create models designed to produce hardcore porn and the likeness of specific real people, ranging from the biggest movie stars in the world to lesser-known Instagram influencers and YouTubers.
These models are primarily distributed on Civitai, a site where users share customized AI models, which in recent months significantly grew its AI video category and has a category dedicated to HunyuanVideo models. Civitai’s site shows that some of the most downloaded HunyuanVideo models are “HunyuanVideo POV Missionary” (18,000 downloads), “Titty Drop Hunyuan Video LoRA” (14,300 downloads), and “HunyuanVideo Cumshot” (9,000 downloads), each promoted with a short video showing the act described in their title. Some of the other most popular models are of celebrities like Elizabeth Olsen, Natalie Portman, and Pokimane.
When asked how he made AI generated pornographic videos of specific female YouTubers and Twitch streamers, one of the most prolific creators in a Telegram community for sharing nonconsensual content told others in that channel that they use HunyuanVideo, and in one instance linked to a specific model hosted on Civitai that’s been modified to create videos with the likeness of a specific female Twitch streamer.
Since Tencent released HunyuanVideo in early December, hundreds of custom HunyuanVideo AI models have been shared on Civitai, the most popular of which are either dedicated to specific celebrities or sexual acts.
As I reported on February 27, Wan 2.1, another open AI video generation model developed by Chinese tech giant Alibaba, was also immediately adopted by this community to create pornography. It took about 24 hours after Wan 2.1 was released on February 24 for modified models like “Better Titfuck” to start popping up on Civitai.
Previous 404 Media stories about Civitai have shown that it is widely used by people who create nonconsensual content. Civitai allows users to share AI models that have been modified to produce the likeness of real people and models that have been modified to produce pornography, but does not allow users to share media or models of nonconsensual pornography. However, as 404 Media’s previous stories have shown, there’s nothing preventing Civitai users from downloading the models and using them to produce nonconsensual content off-site.
“Please note that our community guidelines strictly prohibit any requests related to pornography, violence, illegal activities, or anything involving celebrities. These types of requests will be rejected in accordance with our policies,” a representative from Minimax, which makes Hailuo, told me after I reached out for comment. The company did not respond to a question about what it’s doing to stop people from making nonconsensual content, despite it being against its policies.
“HunyuanVideo is an open-source project available to developers and the community,” a Tencent spokesperson told me in an email. “Our acceptable use policies prohibit the creation of illegal, unlawful, or universally undesirable content. We encourage the safe and fair use of HunyuanVideo and do not condone the misuse of its open-source tools or features.”
Tencent also said that it’s aware that people have used its software to create specialized models to produce illegal or prohibited content, which falls outside the intended and permissible use of HunyuanVideo.
KlingAI maker Kuaishou, Pixverse maker AIsphere, and Alibaba did not respond to a request for comment.