Illustration of a person on a computer using an AI bot that tells her "I'm contacting the authorities."
Design by Anna DeYoung

As generative artificial intelligence grows increasingly popular and accessible, many user-facing companies in the digital world have been affected by AI in one way or another. Google, for example, has used AI to “assist” its search engine, and X has seen a surge of bot accounts that use AI models to generate interactions. Because of this increased use of artificial intelligence, user-facing digital companies are both declining and losing their appeal.

Google searches have recently become much less efficient. An obvious culprit is the inclusion of artificial intelligence in the search engine. Before the rise of generative AI, Google (although its exact algorithms were unknown) relied on keywords to bring up information relevant to your search. Now, Google uses artificial intelligence to guess additional information you might want based on the context of your search, and then populates results relevant to that context.

This process allows Google to show you results from advertisers that are related to, but don’t quite match, your search. For example, if you search for “fishing,” Google could show a result for a fishing rod from a company that pays Google to advertise its business.

Artificial intelligence is also exacerbating spam. Generative AI allows content to be produced faster than ever; entire articles can be generated with a single prompt. Large amounts of spam, often containing incorrect information, are then surfaced by Google because of its incorporation of faulty AI.

Nowhere is AI-generated spam more prevalent than on X. The number of bot accounts generating tweets with AI is higher than ever, and with Elon Musk’s new “X Premium” model, which pays subscribers based on post engagement (including engagement on AI-generated spam posts), the problem will only continue to grow.

X’s spam problem highlights a larger issue in the digital sphere involving generative AI: Instead of creators making their own original content, companies are turning to AI to produce content while cutting labor costs. This was never more evident than when Sports Illustrated published multiple articles that were completely AI-generated. This practice will have serious ramifications. Not only will jobs be lost; entire industries could be in jeopardy. Beyond the obvious difficulty of replacing tens of thousands of jobs, it is hardly justifiable, much less ethical, to remove people from their jobs in such a manner. Forcing these workers to find other work may push them into industries they are less passionate about and, as a result, less happy in.

Beyond digital companies’ use of AI, the technology itself has become less reliable than it was at its inception. Generative AI was once billed as the next big thing in the digital world. Despite its apparent faults, it seems like every company is competing to find ways to use AI effectively, which will most likely result in something similar to what happened at Sports Illustrated: cutting labor and time costs to produce artificial content. Since AI itself is becoming less reliable, this artificial content will at best become repetitive and formulaic, and at worst become borderline unreadable. Eventually, people will push back against this use of AI.

It seems that pushback may come sooner rather than later. AI companies have recently lost $190 billion in stock market value. Beyond the harm AI poses to entire industries and the people they employ, if AI cannot be profitable, it cannot endure as “the next big thing.”

Proponents of generative AI see it as a way to democratize various processes; those who previously lacked the capability to produce certain forms of content now have it with generative AI. Since many AI models are free, or at least have accessible free versions, the options for what one person can create are now relatively limitless. Those who share this view also tend to believe that the person who writes the prompt owns the resulting content, rather than the AI that generated it.

This view is ill-informed. Artificial intelligence has already gotten worse and is continuing to decline. As such, it doesn’t make sense to replace originally created content with artificially created content, given that it is both worse than original content and obviously AI-generated. People spend years, sometimes their entire lives, honing skills so they can produce art, movies and other forms of content and entertainment. While anyone is entitled to make their own content, they are not entitled to bypass the steps previous generations of creators have gone through to make transcendent content.

AI has a place in our future, but that role shouldn’t be replacing original content. AI should exist to enhance content in ways humans otherwise could not. For example, AI could be an extremely useful tool for enhancing an image that was originally captured at a lower quality or for quickly catching mistakes in an essay, not for generating entire images and essays itself.

The widespread use of generative artificial intelligence in the digital world is detrimentally reshaping user experiences. Google’s reliance on AI algorithms has led to less relevant search results and increased spam and advertisements, while platforms like X are becoming flooded with AI-generated content. AI’s potential democratization is tempered by its diminishing function and the degradation of original content creation. As we navigate the future of digital platforms, we must utilize AI as a tool for enhancement rather than replacement, preserving the creativity of human-driven content creation.

Gabe Efros is an Opinion Columnist who writes about American Culture and Politics. He can be reached at gefros@umich.edu.