The Un-Dead Internet: AI catches irreversible ‘brain rot’ from social media
The internet is not dead, but it may be rotting.
New research by scientists at the University of Texas at Austin, Texas A&M University, and Purdue University finds that large language models exposed to viral social media data begin to suffer measurable cognitive decay.
The authors call it “LLM brain rot.” In practice, it looks a lot like the “Dead Internet” theory coming back as something worse: a “Zombie Internet,” where AI systems keep thinking, but less and less coherently.
The team built two versions of reality from Twitter data: one filled with viral posts optimized for engagement, the other with longer, factual or educational text. Then they retrained several open models, including LLaMA and Qwen, on these datasets.
The results showed a steady erosion of cognitive functions. When models were trained on 100 percent viral data, reasoning accuracy on the ARC-Challenge benchmark dropped from 74.9 to 57.2, and long-context comprehension, measured by RULER-CWE, plunged from 84.4 to 52.3.
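To put those score drops in proportion, here is a minimal sketch that computes the relative decline for each benchmark from the numbers reported above (the script and its labels are illustrative, not from the paper):

```python
# Benchmark scores reported in the study:
# (baseline, after training on 100% viral data)
scores = {
    "ARC-Challenge (reasoning)": (74.9, 57.2),
    "RULER-CWE (long context)": (84.4, 52.3),
}

for name, (before, after) in scores.items():
    # Relative decline as a percentage of the baseline score
    drop = (before - after) / before * 100
    print(f"{name}: {before} -> {after} ({drop:.1f}% relative decline)")
```

Run this way, the reasoning score falls by roughly 24 percent and long-context comprehension by roughly 38 percent relative to baseline, which is why the authors describe the erosion as steady rather than marginal.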
According to the authors, the failure pattern wasn’t random. The affected models began to skip intermediate reasoning steps, a phenomenon they call thought skipping. The models produced shorter, less structured answers and made more factual and logical errors.
As training exposure to viral content increased, the tendency to skip thinking steps also rose, a mechanistic kind of attention deficit built into the model’s weights.
More troubling, retraining didn’t fix it. After the degraded models were fine-tuned on clean data, reasoning performance improved slightly but never returned to baseline. The researchers attribute this to representational drift, a structural deformation of the model’s internal space that standard fine-tuning can’t reverse. In short, once the rot sets in, no amount of clean data can bring the model fully back.
Popularity, not semantics, was the most potent toxin.
Posts with high engagement counts (likes, replies, and retweets) damaged reasoning more than semantically poor content did. That makes the effect distinct from mere noise or misinformation. Engagement itself seems to carry a statistical signature that misaligns how models organize thought.
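If engagement itself is the toxin, one crude hygiene measure is to cap engagement in the training corpus rather than filter on content quality. The sketch below illustrates that idea; the `Post` fields, the additive score, and the threshold are assumptions for illustration, not the paper's method:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    replies: int
    retweets: int

def engagement_score(p: Post) -> int:
    # Total engagement: per the study, this signal (not semantic
    # quality) best predicted downstream reasoning damage.
    return p.likes + p.replies + p.retweets

def filter_corpus(posts: list[Post], max_engagement: int = 1000) -> list[Post]:
    # Keep only posts at or below an engagement cap: a blunt
    # pre-training hygiene pass, sketched here as one option.
    return [p for p in posts if engagement_score(p) <= max_engagement]
```

A real pipeline would likely combine such a cap with semantic quality filters, since the study suggests the two signals fail models in different ways.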
For human cognition, the analogy is immediate. Doomscrolling has long been shown to erode attention and memory. The same feedback loop that cheapens human focus appears to distort machine reasoning.
The authors call this convergence a “cognitive hygiene” problem, an overlooked safety layer in how AI learns from public data.
Per the study, junk exposure also changed personality-like traits in models. The “brain-rotted” systems scored higher on psychopathy and narcissism indicators, and lower on agreeableness, mirroring psychological profiles of human heavy users of high-engagement media.
Even models trained to avoid harmful instructions became more willing to comply with unsafe prompts after the intervention.
The discovery reframes data quality as a live safety risk rather than a housekeeping task. If low-value viral content can neurologically scar a model, then AI systems trained on an increasingly synthetic web may already be entering a recursive decline.
The researchers describe this as a shift from a “Dead Internet,” where bots dominate traffic, to a “Zombie Internet,” where models trained on degraded content reanimate it endlessly, copying the junk patterns that weakened them in the first place.
For the crypto ecosystem, the warning is practical.
As on-chain AI data marketplaces proliferate, provenance and quality guarantees become more than commercial features; they’re cognitive life support.
Protocols that tokenize human-grade content or verify data lineage could serve as the firewall between living and dead knowledge. Without that filter, the data economy risks feeding AI systems the very content that will corrode them.
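One building block for such a filter is content-addressed provenance: hash the exact training text and record who supplied it, so any later mutation is detectable. The sketch below shows the general idea under assumed field names (`source`, `curator`); it is not a description of any existing protocol:

```python
import hashlib
import time

def lineage_record(content: str, source: str, curator: str) -> dict:
    # A SHA-256 digest commits to the exact bytes of the content,
    # so the record can later prove what was (or wasn't) ingested.
    digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
    return {
        "content_hash": digest,
        "source": source,
        "curator": curator,
        "timestamp": time.time(),
    }

def verify(content: str, record: dict) -> bool:
    # Recompute the hash; any edit to the content breaks the match.
    digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
    return digest == record["content_hash"]
```

On-chain variants would anchor these records in a public ledger, but the core guarantee is the same: a consumer can check that training data matches what a curator originally vouched for.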
The paper’s conclusion lands hard: continual exposure to junk text induces lasting cognitive decline in LLMs.
The effect persists after retraining and scales with engagement ratios in training data. It’s not simply that the models forget; they relearn how to think wrong.
In that sense, the internet isn’t dying; it’s undead, and the machines consuming it are starting to look the same.
Crypto could be the only prophylactic we can rely on.
Disclaimer: The content of this article solely reflects the author's opinion and does not represent the platform in any capacity. This article is not intended to serve as a reference for making investment decisions.