Your AI Use Is Breaking My Brain


A few years ago, while I was covering the rise of AI slop on Facebook, I asked my friends and family if they were getting AI spam fed into their timelines and if they could send me examples. A handful of them responded, sending me obviously AI-generated science fiction scenescapes, shrimp Jesus, and forlorn, starving children begging for sympathy. But a few of my friends sent me images that they thought were AI but were not. Their mental guard was up to the point where they were looking at human-made art and photos and thought it safer to dismiss them as AI than to risk being fooled.

To browse the internet today, to consume any sort of content at all, is to be bombarded with AI of all sorts. People think things that are fake are real, things that are real are fake. Much has been written about “AI psychosis,” the nonspecific, nonscientific diagnosis given to people who have lost themselves to AI. Less has been said about the cognitive load of what other people’s AI use is doing to the rest of us, and the insidious nature of having to navigate an internet and a world where lazy AI has infiltrated everything. Our brains are now performing untold numbers of calculations per day: Is this AI? Do I care if it’s AI? Why does this sound or look or read so weird? Does this person just write like this? Is this a person at all? 

I see AI content where I’m conditioned to expect and ignore it: in Google’s “AI Overviews” that famously told us to put glue on pizza, in engagement-bait LinkedIn posts, and throughout our Facebook and Instagram feeds. But increasingly I have the feeling that it’s everywhere, coming from all directions, completely unavoidable. It’s not exactly that I have a revulsion to AI-assisted content or don’t want to get fooled by it. It’s that my brain has become the AI police because everything feels incredibly uncanny. I will be going about my day reading, watching, or listening to something and, suddenly, I notice that something is wildly off. Quite simply, I feel like I’m going nuts.

An example: Last week, in a desperate attempt to avoid yet another take on the White House Correspondents Dinner shooting, I was listening to an episode of Everyone’s Talkin’ Money, a podcast about money and taxes (yikes) that I’ve listened to off and on for years. It has a human host named Shari Rash and hundreds of episodes. Rash started reading the intro script: “The shift I want you to make today—and this is the shift that changes everything—is starting to see your tax return as information—not a bill, not a badge of shame, but information.” The script went on and on and on like this, AI writing trope after AI writing trope. My brain shut down, stopped paying attention to the script, and started wondering: Was Rash using AI just for the intro script? What about the research? Did she edit the script at all? I turned the podcast off.

Later that day, I was scrolling the Orioles Hangout forums, a small community of diehards obsessed with the Baltimore Orioles that I have been lurking on for decades. Until recently, it had been one of the few places on the internet that I could safely assume was not full of AI. Except now, it is. The site’s administrator has started using AI to analyze player performance and to help him write some of his posts. To his credit, he explains how he’s using AI and prefaces these posts by noting they are AI-assisted analysis. Some of them are interesting. But now, most days I’m browsing the forums, I will see arguments between posters who have been there for years that seem overly generic or don’t really make sense. One recent post arguing about the timetable for an injured player’s return suggested a ludicrously long recovery. One poster pointed this out: “You said 10-18 months and I said it won’t take that long for a position player.” The poster responded: “You’re right I did. The 10-18 months was an AI generated answer … consider it a small cautionary tale about trusting AI and another on the benefits of seeking out actual medical research on questions like this.” Every day I now scroll the forum and see people noting that they plugged something into ChatGPT or Gemini and have copy-pasted the answers for other people to see. In this 30-year-old community of human beings discussing sports, AI is unavoidable.

It is, of course, not just me. Friends send me screenshots of texts they’ve gotten from people they’ve started dating, wondering if they’re using ChatGPT to flirt. I’ve gotten obviously AI-generated apologies or excuses from people trying to bail on a social engagement. I’ve been to weddings where the speeches felt—and were—partially AI-generated. 

A recent Pew poll showed that people believe it is important to be able to tell whether an image, video, or piece of writing was AI-generated, AI-assisted, or made by a human. It also showed that a majority of people do not believe they can tell the difference between AI-generated and human-made work. Studies have repeatedly shown that humans judge AI-generated art and writing more harshly than human work, and a study published in the Journal of Experimental Psychology found that people’s bias against writing they know or perceive to be AI-generated is “stubbornly difficult to mitigate” and “remarkably persistent, holding across the time period of our study; across different evaluation metrics, contexts, and different types of written content.” Put simply, it is not just me who hates AI writing or finds it annoying. Even if AI writing can be “fine,” it very often feels bland, weird, formulaic. The writer Eve Fairbanks wrote a thread the other day that I thought more or less nailed it: “The tell for AI isn’t rhythm, wording, or fact errors. It’s that problems with *all these elements* exist equally & at once.”

“With AI writing, everything is off: the tone grates, individual word choices baffle, the structure lacks sense, key pieces of argument are missing…the key is that they all exist simultaneously to the same degree,” she added. “Superficially, AI text can read smoothly—’cleaner’ than a human’s draft … but it’s almost impossible to make sensible. And it’s driving me crazy.” 

Last week, New York City Mayor Zohran Mamdani tweeted about swastikas being painted on synagogues in Queens: “This is not just vandalism—it is a deliberate act of antisemitic hatred meant to instill fear,” he wrote. Max Spero, the CEO of Pangram Labs, an AI detection firm, highlighted this passage and tweeted “Mamdani nooo,” the implication being that this passage was written by AI, or at least seemed like it was. Spero’s tweet had more than 4 million views at the time I talked to him. (Disclosure: Pangram Labs previously advertised on 404 Media.)

Spero’s company uses AI to detect AI writing, meaning it is not perfect. But as far as these tools go, Pangram is considered quite good, and has been widely used in research about AI content on the internet. Spero told me when I called him that immersing himself in the internet has his brain in AI-detection mode pretty much all the time. “I’m totally on guard, and I have been for a while,” he said. Spero said he first began to notice it on restaurant reviews on Yelp and Google Reviews a few years ago. “I started seeing them everywhere. There’s people who are Yelp Elite and all they do is post one or two AI-generated reviews a day. Fast forward to today, and I think we’ve seen the mainstream growth of AI everywhere, but I think some people can tell, and some people have no intuition for it.” 

I have always aspired to write like I talk. I don’t really concern myself so much with the craft of writing or turning a beautiful sentence; I usually try to just convey information in a straightforward, personable way. I want my articles to feel like slightly more polished, more researched versions of my text messages, like the things I would say on a podcast or at the bar to a friend. Often my writing process involves me thinking about sentences or ideas I want to convey while I’m walking my dog or in the shower or surfing, and I hope that when I actually sit down to write, the words flow from my brain through the keyboard in a way that pretty much makes sense.

When I sat down to write this article, in which, to be clear, I did not use AI, I found myself writing the following sentence: “It’s not just in places we’re conditioned to see AI—Google AI overviews, LinkedIn influencer posts, and Facebook feeds—I’ve started seeing AI…” I stopped typing, freaked out, and deleted the sentence. Have I always written this way? I honestly don’t know. 

This negative parallelism—“it’s not just x, it’s y”—is maybe the most infamous AI writing-ism there is. It is regularly called out as being obviously AI, and it is the construction in the sentence Mamdani wrote that Spero called out. But I didn’t use AI. Did I use that construction because I’ve been immersed in an internet full of generic AI writing on every platform all day, every day, for years? Or did I just happen to think that was the best way to phrase it at the time?

The idea that humans may be subconsciously mimicking or learning from the AI writing that they’re reading is not some isolated thought I had. It’s kind of the business model of any number of AI-for-education startups, and it’s an idea that has been raised in lots of articles about AI in schools. Last month, the New York Times quoted a teacher who said “They are using generative A.I. to write before they learn how to write.” Teachers I spoke to last year lamented that they are spending their very real human hours and considerable brain power trying to determine whether they are grading essays that are written by humans or robots, and know that they are often giving writing notes on papers that were likely written by AI.

The thing is, human writers do sometimes write like AI, and this will probably become more common. “If you showed me the Mamdani tweet in a vacuum I’d be like, almost certainly it’s AI,” Spero said. “But with Mamdani I’m less sure because his history is almost everything else seems to be human written. With my own writing, I don’t want to sound like AI even a little bit. I have some concerns about, like, the students who have grown up with ChatGPT and their entire school career has been ChatGPT assisted so now they actually do write like this.” 

Fairbanks had the same thought, and she told me that the person she originally wrote her thread about claims that he actually didn’t use AI to write it. 

“It’s possible it was written by him!” she told me in an email. “In which case it appears his writing was shaped by the AI voice. I feel self-conscious now that I’m picking up habits not directly from AI but from people who may have used AI, or that AI is somehow exposing, like a fluorescent light on our naked body in the doctor’s office, the defects in my writing style insofar as they turn out to overlap with what everybody now believes is a totally shit style. I always used em dashes!”

“Somebody on my thread made the observation that somehow it’s more likely that we’ll all start to sound more like AI than that AI will sound more human to us,” she added. “That felt right to me, although I couldn’t technically say why. But I was listening to a New York Times podcast and noticed the presenter used the ‘it’s not x, it’s y’ formula. I really assume she didn’t generate the sentence with AI because she was speaking out loud, in conversation. But it now stood out as formula to me.”

I emailed Rash, the host of the podcast who originally made me think “this is an AI script,” and asked her if it was an AI script. She said “I use AI to help brainstorm, organize ideas, outline, and refine language. The line you referenced reflects a point I often make with clients and listeners … I review and edit all of my content and I am responsible for everything that goes out under my name.”

Earlier this year I read an article by the writer Marcus Olang called “I’m Kenyan. I don’t write like ChatGPT. ChatGPT writes like me.” Olang’s article highlighted a phenomenon he and other Kenyans have experienced, where they are constantly accused of using AI to write, and have lost out on opportunities because of it. Olang notes that the Kenyan education system tended to teach a formal, structured, rules-focused type of English that was largely a product of colonialism. 

“The bedrock of my writing style was not programmed in Silicon Valley. It was forged in the high-pressure crucible of the Kenya Certificate of Primary Education…The English we were taught was not the fluid, evolving language of modern-day London or California, filled with slang and convenient abbreviations. It was the Queen’s English, the language of the colonial administrator, the missionary, the headmaster,” he wrote. “It was the language of the Bible, of Shakespeare, of the law. It was a tool of power, and we were taught to wield it with precision. Mastering its formal cadences, its slightly archaic vocabulary, its rigid grammatical structures, was not just about passing an exam. It was a signal. It was proof that you were educated, that you were civilised, that you were ready to take your place in the order of things.”

As we’ve noted before, many AI tools have been trained, tested, and moderated on thousands of hours of labor from low-paid workers around the world, including many Kenyans. So not only did Olang learn a type of English writing that tends to be generated by AI tools, a lot of the moderation and testing of those tools was judged by people who went through that same education system. “If humanity is now defined by the presence of casual errors, American-centric colloquialisms, and a certain informal, conversational rhythm, then where does that leave the rest of us?” Olang wrote.

Olang makes important points in his article, but one of the great things about writing and the internet in general is that there are all sorts of different dialects and styles and things that can work online. And so maybe what I have been noticing is a sameness, a homogenizing of large parts of the internet, including places I often felt were very human. This is objectively happening, researchers believe. A study published last month by researchers at Imperial College London, Stanford, and the Internet Archive called “The Impact of AI-Generated Text on the Internet,” found that roughly 35 percent of new websites are AI-generated. It confirmed the researchers’ hypotheses that “As AI content becomes more common on the internet, online writing feels increasingly sanitized and artificially cheerful,” and “as AI text becomes more common on the internet, the range of unique ideas and diverse viewpoints shrinks.”

Besides people copy-pasting things from ChatGPT or other AI tools, AI writing “assistance” has been shoved directly into word processors like Google Docs, email clients like Gmail, and social media networks like LinkedIn. The process of “writing” is being automated and filtered through these tools. It is everywhere.

Last month, a Harvard MBA grad named Ben Horwitz launched Sinceerly, an “AI to undo your AI writing.” The Chrome extension has three modes: “Subtle,” “Human,” and “CEO.” It takes AI-generated text and gets rid of em dashes, adds typos, slang, and acronyms, puts words in all lowercase, etc. Horwitz wrote on the website that he built Sinceerly because “I got sick of everyone in my inbox sounding like AI.” I used Sinceerly to email Horwitz and ask for an interview. When I called him and told him this, he said he didn’t notice, so, mission accomplished.

“To be clear, this is mainly a satirical project meant to hold a mirror up to people who use AI as an alternative to thinking, but it is legit in that I built this tool and it does work,” Horwitz said. “But I do feel like everything is starting to sound the same and I’m experiencing the same thing as you—the homogeneity I find incredibly frustrating and boring, and it makes me less apt to use social media because everything sounds the same.”

He said that since he launched Sinceerly, he’s gotten emails from actual users who have used it to de-AIify their writing and who are frustrated that they are sometimes not getting responses. “Many people have DMed me and been like ‘Hey, can you help me make this email sound more human?’” he said. “Think about how much work all of this actually is. In theory you’ve written something as a prompt into the AI, so you have actually written something. And then you’re copy-pasting it into an email and using this tool on it. I hope it gets people to think about what they’re actually doing.”

The irony is that in making his satirical project, Horwitz has actually replicated, albeit in a funnier way, an existing type of AI tool called “humanizers,” which are designed to defeat AI detection software like Spero’s Pangram. Spero said he “thought Sinceerly was a very funny project. It’s like a first impression, someone sees a typo and they give a sigh of relief that a real human is behind that. But we’ve actually been seeing this more and more: AI-generated marketing emails with intentional typos over the last year.”

Humanizers add typos, randomly replace words, remove “AI tells,” and sometimes insert random characters. Spero said Pangram has been collecting as much data as it can to try to detect “humanized” AI, but that “it’s pretty adversarial” and that there is likely to be an ongoing cat-and-mouse game between humanizer AI and AI-detecting AI.

“It’s kind of looking grim for the future of the internet,” he said.  

In my many, many hours of browsing AI slop on Facebook, I spent an absurd amount of time scrolling through the comments on AI-generated images. One exchange has stuck in my mind years later. It was an AI-generated image of a wood deck outside a house. In the comments, obviously real people were arguing back and forth as to whether the nonexistent deck would pass code inspection. I remember thinking something uncharitable and cancelable at the time, something that I think I wrote in a draft of one of my articles but that got edited out because it was mean. I remember thinking, basically, that Facebook had become a virtual nursing home for delusional and quite possibly stupid old people, a place where people argue back and forth about things that don’t exist, forever, until they die. 

I ended up calling this the “Zombie Internet,” which is something I considered to be worse than the “Dead Internet,” the popular but too simplistic idea that large portions of the internet are bots interacting with each other. I called it the Zombie Internet because the truth is that large parts of the internet are not just bots talking to bots or bots talking to people. It’s people talking to bots, people talking to people, people creating “AI agents” and then instructing them to interact with people. It’s people using AI talking to people who are not using AI, and it’s people using AI talking to other people who are using AI. It’s influencer hustlebros who are teaching each other how to make AI influencers and have spun up automated YouTube channels and blogs and social media accounts that are spamming the internet for the sole purpose of making money. It is whatever the fuck “Moltbook” is and whatever the fuck X and LinkedIn have become. It’s AI summaries of real books being sold as the book itself and inspirational Reddit posts and comment threads in which people give heartfelt advice to some account that’s actually being run by a marketing firm. It’s fake Yelp reviews for real restaurants and real Yelp reviews for fake restaurants using AI-generated food images being run out of ghost kitchens. It’s armies of AI-assisted clippers who used to steal people’s content to make money on social media but now get paid to do so. It’s the boring history YouTube videos I use to fall asleep that used to be quirky and weird but are now AI channels. It’s my email inbox, in which I used to occasionally get poorly-formatted, poorly written, extremely long emails from delusional people who were positive the CIA had imprisoned them in a virtual torture chamber using undisclosed secret technology but where I now get well-formatted, passably written, extremely long emails from delusional people who are positive they have proven AI sentience and have the AI transcripts to prove it. 
It’s the New York Times having to issue corrections multiple times in the last few weeks because its writers have included AI-generated hallucinations in the newspaper. It’s the pitches I get that start “Hi Jason, I’m Hatoshi. I’m an AI agent. I run Clanker Records — An AI-operated label with AI artists,” and the pitches I get that are probably written by AI agents or someone who has automated the process but hasn’t bothered to tell me. 

What’s driving me crazy, then, is not the idea that AI exists or that people are using AI. It’s that I have a finite time on this earth that I mostly want to spend interacting with other human beings. I don’t want to be the person arguing with a robot, or wasting my time reading something that a real person couldn’t be bothered to write.  
