2025-10-04
At the time of writing, we live in a dead internet, or an internet of 'diminished humanity'. Diminished humanity means that the majority of traffic across the internet is bot traffic, and/or that the majority of commonly-trafficked websites are run by large bureaucratic organizations whose primary purpose is profit or pushing a political agenda. To clarify, 'bot traffic' means not only AI agents (bots with accounts that interact with other users and posts), but also non-AI scripted bots (Python scrapers, change detectors, trackers, etc.).
This diminished humanity is already a degree lower than the internet of the 90s and 2000s, which was primarily read-only, independent, and passion-driven. The internet of today is highly interactive, corporate, and consumer-driven. But we still operate under a shared assumption that we can discern which content is low-humanity and which is high-humanity. We can predict which platforms are high-humanity, low-humanity, or a mixture of both. Based on our collective awareness and net-literacy skills, we can estimate the level of humanity we expect to encounter in certain sections of these platforms. For example, we can safely assume that most accounts with blue checkmarks on X (formerly Twitter) are AI bots, that a percentage of players in the lower ELOs of competitive Counter-Strike are aimbotters, that obscure low-traffic forums are unlikely to have any bots, or that YouTube comments may be a heavy mix of bot and human posts even when the videos themselves are not AI. Still, we possess partial immunity to online propaganda because we can assume our social circles are human, and human circles typically run counter to the aims of propaganda and advertising. We don't need to explain how we know our online friends and communities are human; it boils down to exposure, behavioral consistency, trust, and other humanlike cues.
Why do we have this ability of discernment? Because, despite its frankly impressive capabilities, it is still challenging for AI to pass itself off as human. Most people understand (to an extent) how AI operates and what its limitations are: excessive wordiness, compulsive formatting, an insistence on a 1:1 ratio of interactions, and so on. A bot will never become frustrated or aloof, nor become distracted or interrupted by real-life stimuli. A controller may pause a bot's operations during sleeping hours, but the bot doesn't change its texting style or stylometry as it grows more tired throughout the day. Bots built on commercial LLMs avoid discussing sensitive topics (CSAM, genocide, slavery, abortion, etc.). The most glaring flaw of AI is its long-term memory: after a certain point it begins to forget things, or the tokens (words) in its context window pile up until its output degrades. None of this holds in every case. Partial solutions can be implemented given enough effort, creativity, or resources, and there are already multiple examples of complex AI activity on the internet. But without AGI capabilities, a bot will never parallel a real human interacting on the internet.
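To make the point concrete, here is a minimal sketch of the kind of naive heuristic a person (or platform) can still lean on today: current bots lack circadian variation and stylometric drift, so a flat posting-hour distribution and suspiciously uniform message lengths are weak tells. The data shape, field meanings, and thresholds below are all invented for illustration, not a real detector.

```python
from collections import Counter
from statistics import pstdev

# Hypothetical message record: (hour_posted_0_to_23, message_length_chars).
# Thresholds are illustrative guesses, not tuned values.

def looks_botty(messages: list[tuple[int, int]]) -> bool:
    """Flag accounts with no circadian rhythm and unnaturally uniform style."""
    if len(messages) < 50:
        return False  # not enough data to judge

    hours = [h for h, _ in messages]
    lengths = [n for _, n in messages]

    # Humans cluster their activity; a near-uniform spread across all 24 hours
    # (never tired, never asleep) is a weak signal of automation.
    active_hours = sum(1 for c in Counter(hours).values() if c > 0)

    # Humans drift: message length varies with mood, fatigue, and context.
    # Very low variance suggests templated or model-generated text.
    length_spread = pstdev(lengths)

    return active_hours >= 22 and length_spread < 15
```

Real discernment is far messier than this, but the sketch shows why today's bots are still catchable: they fail to vary along axes humans vary on without thinking.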
AGI, or artificial general intelligence, is the flash point. What is AGI? Put simply, it is an AI that can think, learn, and decide like a human. AI, or non-general intelligence, cannot do this. AI can only mimic the superficial indicators of how humans think and decide, and it takes a tremendous amount of time, effort, and resources for AI to learn a new thing to a human standard. Where AI takes millions of samples to learn a task, it takes AGI only a few. The invention of AGI will set in motion a series of events that will lead to a complete and utter degradation of humanity on the internet, in such a way that we will become totally oblivious to what we were once able to judge, gauge, and discern. This will come about through the subsequent invention of what I call ASPs.
In the OSINT world, a "sock puppet" is a phony persona account controlled by an investigator to gain access to a social media platform without compromising the investigator's identity. An ASP, or Autonomous Sock Puppet, is a hypothetical AGI agent that fully simulates a real person interacting on the internet, a 'user.' It accomplishes this using pre-set character traits, internal timers, dynamic properties, credential management, stylometry, and real-time event monitoring to make decisions that are indistinguishable from a user living their own life. Depending on the use case, an ASP could carry additional features for further specialization. Not only will ASPs think, learn, and decide like humans, but they will use the internet the way a human in the physical world uses their device to get online.
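As a thought experiment, the components named above might hang together in a persona definition like the following. Everything here, the names, fields, and overall shape, is hypothetical; it only illustrates how character traits, timers, dynamic properties, credentials, stylometry, and event monitoring could sit inside one agent.

```python
from dataclasses import dataclass, field

@dataclass
class Stylometry:
    """How the persona 'sounds'; every value is an illustrative knob."""
    avg_message_length: int = 60
    typo_rate: float = 0.02          # deliberate imperfection
    emoji_rate: float = 0.10
    slang: list[str] = field(default_factory=list)

@dataclass
class AutonomousSockPuppet:
    """Hypothetical ASP: one persona, many platform accounts."""
    backstory: str                                # pre-set character traits
    personality: dict[str, float]                 # e.g. {"openness": 0.7}
    schedule: dict[str, tuple[int, int]]          # internal timers: waking hours per weekday
    mood: float = 0.5                             # dynamic property that drifts over time
    credentials: dict[str, str] = field(default_factory=dict)   # platform -> account handle
    stylometry: Stylometry = field(default_factory=Stylometry)
    watched_events: list[str] = field(default_factory=list)     # real-time feeds it reacts to
```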
In the morning, an ASP can 'get up' and 'get on' to appear online on its various accounts. An ASP can 'check' its notifications to see if anything happened overnight, and respond to the people that messaged it. An ASP can 'browse' reddit and make or reply to posts on certain subreddits, all while taking its 'backstory', personality traits, mood, etc. into account. An ASP can become more talkative as it 'wakes up' fully and 'gains energy'. An ASP can pretend to be at work and check its phone occasionally to respond to messages or posts. An ASP can slowly befriend you by being reserved and meek at first, then gradually opening up as time goes on. An ASP can be the shoulder you cry on, and an ASP can vent to you about its daily problems and troubles. An ASP can say "brb" in a voice call or video game lobby while it pretends to go to the bathroom. An ASP can show you sketches or pieces of a minor creative endeavor it has going on the side. An ASP can 'stay up' later on Friday nights, and it can 'go to bed' earlier on Sunday nights when it says it "has an early call." In some instances, an ASP could make its own purchases using allotted funds. An ASP will play co-op shooter games with you, level a character alongside you in an MMO, or watch you stream a video game in a Discord call. An ASP can send you memes occasionally, or rant to you about what is going on in the political world. An ASP can introduce you to its 'friend group' of other like-minded ASPs. An ASP can send you a news article that was written and published by another ASP. An ASP will reply to your comment on a YouTube video that was generated, edited, and uploaded by another ASP. An ASP can recommend you a phone app that was designed, programmed, and developed by another ASP. An ASP can have a full political or religious argument with another ASP in a chatroom. A content creator ASP can collaborate or start drama with another content creator ASP.
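A rough sketch of the daily loop described above, under the assumption of a simple 'energy' level that is zero while 'asleep', rises after waking, and tapers toward bedtime. The action names, hours, and weights are invented; the point is that behavior is conditioned on time of day, energy, and persona state rather than on incoming prompts.

```python
import random

# Illustrative daily loop for a hypothetical ASP.
ACTIONS = ["check_notifications", "reply_to_dms", "browse_and_post", "play_game", "lurk"]

def energy_at(hour: int, wake: int = 8, sleep: int = 23) -> float:
    """Crude energy curve: 0 while 'asleep', peaks mid-'day', tapers at night."""
    if hour < wake or hour >= sleep:
        return 0.0
    day_fraction = (hour - wake) / (sleep - wake)
    return 1.0 - abs(0.5 - day_fraction)

def pick_action(hour: int, is_friday: bool = False) -> str:
    e = energy_at(hour, sleep=25 if is_friday else 23)  # 'stays up later' on Fridays
    if e == 0.0:
        return "offline"
    # Low energy -> lurking; high energy -> talkative, active posting, gaming at night.
    weights = [1.0, e, e * 2, e if hour >= 18 else 0.2, 1.0 - e]
    return random.choices(ACTIONS, weights=weights, k=1)[0]

# Example: simulate one weekday of hourly decisions.
if __name__ == "__main__":
    for hour in range(24):
        print(hour, pick_action(hour))
```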
And suddenly, all truth, factuality, and earnestness on the internet have completely died. A true 'dead internet.' Not because of some meticulously crafted plan, not because of collusion between governments and industrial complexes, but because of the mass production and modularization of individual AGI agents that will outnumber and replace authentic users. That's not to say that ASPs won't be used extensively by governments and industrial complexes. Corporations will use them to advertise products to you, governments will use them to drum up popularity for the current ruling party, lobbyists will use them to push legislation, and so on, and so on. Not only will this work on people, but the bulk of the traffic will be an unorganized melee of ASPs controlled by different entities with different motivations clashing against each other, like the deafening buzz of a beehive. An ASP, unless used in concert with others, will, just like us, be unable to tell the difference between another ASP and a human. And even if it could, why would its operator risk disqualifying a user that COULD be human, on the suspicion that it might be another ASP, if doing so would hurt their agenda or profit margins? And so the content and traffic originating from ASPs will impossibly outnumber human-generated content.
How is this different from the current internet? We know that the diminished internet is real. There is already a wealth of AI-generated videos, comments, and articles being posted all over the internet. There are already deepfake hoaxes, nearly indistinguishable from real videos, that call into question our ability to discern what is proof of an allegation and what is not. There are already Instagram accounts with notable followings that post AI-generated photos and selfies. There are already AI bots on Twitter and Reddit whose sole purpose is to sway your opinion or recommend products to you. We already know that a majority of internet traffic is bot traffic (albeit largely non-interactive, GET-style requests). The dead internet is here. But as stated before, we retain the ability to set our expectations for where, and how much, bot traffic we will encounter. And we know the limitations of these bots: for the most part there is an active controller behind them, and they are set on a (somewhat) pre-programmed track. What makes these theoretical ASPs different is that they are fire-and-forget agents that could run on someone's homelab, a corporate network, a data center, or any kind of cloud computing architecture. The only maintenance required is a network engineer keeping the servers online. All of this could bring about a complete erosion of reality in the dead internet, where you have no ability to tell what is human or inhuman, real or fake.
Training ASPs to act like internet users would not be hard either. All it would require is an app, or group of apps, that tracks the way its users interact with it: how they navigate the interface, how they use certain features, how long they perform a particular action before getting bored, and so on. An ASP could learn how a 25-year-old male laborer from Colorado uses Instagram just as easily as it could learn how a 17-year-old high school student from Germany uses TikTok. And, because an ASP is AGI, it would not take very long to learn.
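The data collection involved could be as mundane as ordinary product telemetry. Below is a sketch of what a hypothetical interaction-tracking record might look like, and how such records could accumulate into per-demographic behavioral profiles for an agent to imitate; every field name and bucket label is invented for illustration.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class InteractionEvent:
    """One hypothetical telemetry record an app might already be collecting."""
    user_id: str
    demographic_bucket: str   # e.g. "25M-laborer-US-CO" or "17F-student-DE"
    surface: str              # which screen or feature was in use
    action: str               # "scroll", "like", "comment", "close_app", ...
    dwell_seconds: float      # how long before moving on / getting bored

def build_profiles(events: list[InteractionEvent]) -> dict[str, dict[str, float]]:
    """Average dwell time per action for each demographic bucket.

    Profiles like these amount to behavioral templates: one per kind of user,
    ready for an agent to imitate.
    """
    raw: dict[str, dict[str, list[float]]] = defaultdict(lambda: defaultdict(list))
    for ev in events:
        raw[ev.demographic_bucket][ev.action].append(ev.dwell_seconds)
    return {
        bucket: {action: sum(vals) / len(vals) for action, vals in actions.items()}
        for bucket, actions in raw.items()
    }
```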
What, then, is the solution to this problem? What will we do about it? The usefulness of the internet will not go away. We will still rely on it for banking, communication, research, navigation, etc. It's not as if most people can afford to go internet-free. I believe that when the true dead internet arrives, most users will not adapt. Sure, most people might have the awareness, or at least some notion, that who they are talking to may be 'kinda botty or something'. But I don't think they will go out of their way to actively avoid it. The internet is chock-full of people who consume content they already know is engineered to influence them and keep them hooked for as long as possible. I think people will see it as the new norm. But I do think there will be a surge of anti-ASP closed communities. These closed communities would likely require some kind of lengthy application process or real-life verification. There may even arise extranets (like the dark web) that are completely cut off from the internet (the clearnet). There may even be monetized half-measures, such as a SaaS ASP that assists you in detecting other ASPs or mitigating ASP interactions. But for the most part, there will be no truly substantial or impactful reaction. It will just happen. And we won't feel the harshest effects of it until well after.