At the age of only 42, the internet died this month.

Few people knew this would happen, some feared it would, but most didn’t realize such a thing was even possible.

And what was the cause of death?

That’s right — AI.

Okay, I’m just being dramatic. The internet isn’t dead. And AI didn’t kill it.

But we did cross a line that we’re not going to come back from.

As of October 2025, there is more AI-generated content on the internet than human-generated content.

And how much of that “human-generated” content actually comes from humans who are quietly using AI in the background?

This is a remarkable stage in humanity’s story.

How many of the people you know, including yourself, get most, if not all, of their information from the internet?

From social and political discourse to economic and technological advancement, our culture is shaped by the information we share online.

This is especially true for millennials and younger generations, who grew up in the era of the internet and are essentially cyborgs that can hardly survive without it.

Now, what happens when that internet is mostly filled with AI-generated content that is then used to train AI models that create more AI-generated content?

We get stuck in a loop.

Measuring the decay

So how do we know AI-generated content is scaling like this?

Graphite.io, a private company, conducted a study in which it took 65,000 random samples (webpages) from Common Crawl,1 published between January 2020 and May 2025, ran them through AI detection tools, and classified a page as AI-generated if the detection tool’s score was over 50%.

The main objections to this study concern the accuracy of the AI detection tools themselves: the number of false positives and false negatives these tools produce could be relatively high.

The study assumed that most content released before the launch of ChatGPT was human-written, then checked how much of that pre-ChatGPT content its detector flagged as “AI-generated.” The detector classified 4.2% of it as AI, suggesting an overall false positive rate of 4.2%.

On the other hand, SurferSEO’s AI detection algorithm correctly classified 99.4% of the AI-generated articles as AI-generated, suggesting a 0.6% false negative rate for GPT-4o.
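As a rough illustration (this is not the study’s own methodology), those two error rates can be used to back out an estimate of the true AI share from the share the detector flags, using a standard prevalence correction. The 52% observed share below is a hypothetical input for the example, not a figure from the study:

```python
def corrected_ai_share(observed: float, fpr: float, fnr: float) -> float:
    """Estimate the true fraction of AI-generated pages from the
    fraction a detector flags, given the detector's false positive
    rate (fpr) and false negative rate (fnr).

    The detector's flag rate satisfies:
        observed = tpr * true + fpr * (1 - true)
    where tpr = 1 - fnr, so we solve for `true`.
    """
    tpr = 1.0 - fnr
    est = (observed - fpr) / (tpr - fpr)
    return min(max(est, 0.0), 1.0)  # clamp to the valid range [0, 1]

# Hypothetical example using the error rates quoted above
# (4.2% false positives, 0.6% false negatives):
print(round(corrected_ai_share(0.52, 0.042, 0.006), 3))  # → 0.502
```

In other words, with error rates this small and roughly balanced, the corrected estimate barely moves the raw number, which is why the headline finding survives the objection about detector accuracy.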

The raw data with classifications is available here.

Dead Internet Theory

In 2021, an anonymous user posted a famous essay called “Dead Internet Theory: Most of the Internet is Fake” on an online forum.

“Large proportions of the supposedly human-produced content on the internet are actually generated by artificial intelligence networks in conjunction with paid secret media influencers in order to manufacture consumers for an increasing range of newly-normalised cultural products,” IlluminatiPirate wrote in 2021.

Since then, this has been one of those conspiracy theories (that also somehow feels very true) circulating around the internet.

With this latest study, the theory has never had so many believers.

And now that venture capital firms are backing bot farms that can increase content production speed 1,000-fold, the rate of decay is only going to accelerate.

Back to tribes

Another fun, conspiracy-adjacent theme that ties in here is the “Dark Forest Theory,” popularized by the “Three-Body Problem” book series. In the books, the theory describes how higher-intelligence beings hide in space, like animals in a dark forest, so as not to be eradicated by predator species.

Similarly, on the internet, humans will retreat into private chats, communities, private servers, encrypted communications, etc., to protect themselves from the bots that monetize our attention.

There’s a great piece written about this by entrepreneur/writer Yancey Strickler, who says that the internet is dying on the outside but growing on the inside.

The author looks at the schism between bot and human content, but not in terms of who produces more. Instead, it’s about who has more power.

The bots have power over the “narrative of the masses,” and humans may need to play their part “on stage” in this public discourse. But “off stage,” their actions might contradict what appears on the public channels.

Only time will tell how human communications and bonds continue digitally, but there are too many signs to ignore that a change is coming.

Safety check: OpenAI says ChatGPT can now handle crisis chats better after finding that ~0.07% of weekly users show signs of psychosis/mania and ~0.15% show suicidal planning. Tweaks aim to respond with care without reinforcing delusions.

Orbit ops: NVIDIA took the first step toward putting data centers in space with Starcloud’s H100-powered satellite. The company is pitching sustainable high-performance compute in orbit, powered by solar energy.

Legislation in motion: A bipartisan group of U.S. senators introduced a bill to safeguard children from AI chatbots that often push them toward self-harm. (We wrote about the underlying problem last month.)

Capitol watch: Arizona state Rep. Alexander Kolodin scheduled a Nov. 14 hearing on AI’s implications for democratic governance and elections, framing risks and oversight as priorities for the Legislature’s next session.
