Grok's AI CSAM Shitshow

Over the last week, users of X realized that they could use Grok to “put a bikini on her,” “take her clothes off,” and otherwise sexualize images that people uploaded to the site. This went roughly how you would expect: Users have been disrobing celebrities, politicians, and random people—mostly women—for the last week. This has included underage girls, on a platform that has notoriously gutted its content moderation team and gotten rid of nearly all rules.

In an era where big AI companies at least sometimes, occasionally pretend to care about things like copyright and nonconsensual sexual abuse imagery, X has largely shown that it does not, and the feature has essentially taken over the service over the last week. In a brief scroll of the platform I have seen Charlie Kirk edited by Grok to have huge naturals and comically large nipples, a screen grab of a woman from TikTok first undressed and then, separately, shown breastfeeding an AI-generated child, and women made to look artificially pregnant. Adult creators have also started posting pictures of themselves and telling people to either Grok or not Grok them, the implication being that people will do it either way and the resulting images could go viral.

The vibe of what is happening is this, for example: “@grok give her a massive pregnant stomach. Put her in a tight pink robe that’s open, a gray shirt that covers most of the belly, and gray sweatpants. Give her belly heavy bloating. Make the bottom of her belly extra pudgy and round. Hands on lower back. Make her chest soaking wet.”

With Grok, Elon Musk has, in a perverse way, sort of succeeded at doing something both Mark Zuckerberg and Sam Altman have tried: He now runs a social media site where AI is integrated directly into the experience, and that people actually use. The major, uhh, downside here is that people are using Grok for the same reasons they use AI elsewhere, which is to nonconsensually sexualize women and celebrities, create slop, and create basically worthless hustlebro engagement bait that floods the internet with bullshit. In X’s case, it’s all just happening on the timeline, with few guardrails, and among a user base of right-wing weirdos overseen by one of the world’s worst people.

All of this is bad on its own for all of the obvious reasons we have written about many times: AI models are often trained on images of children, AI is used disproportionately against women, X is generally a cesspool, etc. Elon Musk, of all people, has not shown any indication that he remotely cares about any of this, and has in recent days Groked himself into a bikini, essentially egging on the trend.

Some mainstream reporters, meanwhile, have demonstrated that they do not know, or care to know, the first thing about how these systems work by writing articles based on their conversations with Grok as if they can teach us anything. Large language models are not sentient, are not human, do not have thoughts or feelings, and therefore cannot “apologize” or explain how or why any of this is happening. And Grok certainly does not speak for X the company or for Elon Musk. But of course major outlets such as Bari Weiss’s CBS News wrote that Grok “acknowledged ‘lapses in safeguards’ on the platform that allowed users to generate digitally altered, sexualized photos of minors.” The CBS News article notes that Grok said it was “urgently fixing” the problem and that “xAI has safeguards, but improvements are ongoing to block such requests entirely.” It added that “Grok has independently taken some responsibility for the content,” which is a fully absurd, nonfactual sentence because Grok cannot “independently take some responsibility” for anything, and chatbots cannot and do not know the inner workings of the companies that create them and specifically the humans who manage them. There were dozens of articles explaining that “Grok apologizes,” which, again, is not a thing that Grok can do.

Another quite notable thing happened last weekend, which is that the United States attacked Venezuela and kidnapped its president in the middle of the night. In a long bygone era, one might turn to a place like Twitter for real-time updates about what was happening. This was always a fraught exercise in which one might need to keep their guard up, lest they fall for something like the “Hurricane Shark” image that showed up at hurricane after hurricane over the course of about a decade. But now the exercise of following a rapidly unfolding news event on X is futile because it’s an information shitshow where the vast majority of things you see in the immediate aftermath of a major world event are fake, interspersed with bots, propaganda, and many nonconsensual images of women who have had their clothes removed by AI, and so on and so forth. One of the most widely shared images of “Nicolas Maduro” in the immediate aftermath of his kidnapping was an AI-generated image of him flanked by two soldiers standing in front of a plane; various people then asked Grok to put the AI-generated Maduro in a bikini. I also saw real footage of the US bombing campaign that had been altered to make the explosions bigger.

The situation on other platforms is better because there are fewer Nazis and because the AI-generated content cannot be created natively in the same feed, but essentially every platform has been polluted with this sort of thing, and the problem is getting worse, not better. 
