You've probably seen that discussion of the so-called "Dead Internet Theory" has skyrocketed in popularity and can be found in threads all across the web. The theory, which imagines a world where the vast majority of supposedly human interaction online is in fact the result of bots, originated before services like ChatGPT were publicly available, so it's no surprise that it has more relevance today than when it was first proposed.

Illustration by Michael Zheludev

Not only has AI increased the amount of 'slop' content, but it has also influenced the social spaces that form naturally around topics on the internet. The concern is not merely that the video you're watching, or the article or post you're reading, is artificially generated or reposted by a bot, but that a substantial amount of the views, interactions, and, yes, comments that flood these posts are also artificially generated. Significant portions of Spotify listeners, YouTube commenters, or Reddit accounts are in reality just a way to inflate the numbers game that is modern social networking and self-promotion.

What I'd like to talk about, though, is yet another evolution of the influence that bots and automation are having on our social spaces: rather than a complete desolation of human interaction, users are becoming more and more comfortable passing off AI-generated content as their own, using AI as a middleman between their own thoughts and the conversation they're participating in. Not just for the purpose of marketing, to swindle and steal, or to inflate their apparent credibility for the sake of profit, but for the sake of self-indulgence. The simple pleasure of appearing intelligent seems reward enough for many to justify using AI not just to create, but to respond, to interpret, and to socialise.

At its extreme, this can result in online spaces where few, if any at all, are using their own human faculties to communicate with others in that space. Entire subreddits or online communities now consist of exchanges between humans, filtered through AI. They say it helps them 'construct their arguments' or 'format their ideas', yet if those words had come not from an AI but from a secondary source like an author or philosopher, someone to whom the idea or the thought could be traced back, they would never dare claim them as their own. Many of us would agree: this is plagiarism.

But the difference between those who use AI to amplify their supposed humanity and those of us who see it as wrong lies in our inability to agree on what plagiarism is really about. They think the inherent issue with plagiarism is the 'theft', so if they're stealing from an inanimate machine that itself plagiarised and reworded material from elsewhere, it's a victimless crime. Why, then, refrain from using it everywhere? With everyone, and everything?

But the issue with plagiarism is only partly an issue of theft. It's also an issue of sincerity, and an issue of transparency. Plagiarism is wrong, not just because someone might get hurt or be wronged, but because plagiarists demonstrate a disrespect for those around them by 'tricking' them into thinking they are more literate, more structured, or more nuanced than they actually are. They assume themselves smarter than those who think their words are their own, and assume themselves smarter than those who work hard to develop those skills, because they found a shortcut that, to them, makes their efforts meaningless. That's why plagiarism is wrong, even if you aren't trying to make a profit.

They also create a vicious cycle, where individuals browsing the internet feel inferior to those they interact with because they think those people are far more equipped for difficult conversations than they will ever be, pushing some of them towards AI to keep up themselves. Before long, we'll all just be talking to each other through an AI Babel fish: GPT masks that hide our humanity behind a dry, stale echo of everything humankind has ever achieved.

Perfect, tidy spaces without any of that troublesome human error that keeps holding us back. A sanitised, robotic environment for all of us to reminisce in. We could all finally speak without stuttering. We could all seem smart without even thinking. Then all the loud, selfish, arrogant thinkers such as myself would finally be drowned out by the status quo. A truly dead internet.

One response to “Truth Decay – Dead Internet Theory”

  1. The changes in the ways people communicate likely began with the creation of the cellphone. This was particularly true of our youth and their intriguing new text-messaging capability. Many children sitting around a table choose to text one another as opposed to communicating verbally. Perhaps it is because a text message gives them more time to express their thoughts, thereby preserving what others perceive as their intelligence level. OTOH, it may also be a means of communicating thoughts and ideas one wishes to keep secret.
    Face-to-face communications customarily require a fairly rapid exchange of thoughts and ideas. I see the potential for many – if not most – of today's children eventually having extreme difficulty engaging in FTF verbal exchanges.
    This thought aligns with your expressed thoughts about AI intervention. Both are seen as capable of enhancing one’s perceived image. AI should more accurately be identified as ‘Artificial Imposition’!
