Design by Tamara Turner.

Like any other student, part of my daily routine is checking my phone constantly, whether to text someone or see an update on one of my favorite sports teams. Of everything I use on my phone, though, social media takes up the majority of my time. From TikTok to Instagram to Twitter, social media dominates my screen time, and the same is true for most other college students. But while students spend hours a day on social media, it is through no fault of their own. The issue lies with the social media companies and the manipulative algorithms they create.

Social media companies make their money primarily by showing ads on their sites and selling user data to advertisers so those ads can be better personalized. Whether in the middle of a Twitter thread or before a video on TikTok, ads pervade social media, even if they aren't immediately obvious. The more ads users see, the more money social media companies make. While these ads are an inconvenience to most users, they are a necessary evil users tolerate to keep these platforms alive.

Users accept this arrangement for two main reasons. First, ads take up such a small portion of their time on a site that they are a minimal annoyance in exchange for something users find broadly beneficial. Second, aside from being subjected to the occasional ad, use of the site is free. Ads become problematic, however, when social media companies realize that maximizing ad viewership translates into more money and adopt manipulative tactics in pursuit of it.

The best way social media companies have found to maximize the number of users viewing their ads, and thus their profit, is to maximize time spent on their sites. If a company can keep a user online longer, it can show that user more ads. Longer sessions also make for more appealing metrics to prospective advertisers, with companies touting views on their sites as a selling point. But as platforms roll out algorithms that maximize user engagement, problems begin to arise. Chief among them is online extremism and fragmentation, especially on the political front.

For some, this may come as no surprise: there are articles, videos and even personal testimonies describing how social media leads people into the "alt-right pipeline." That pipeline is most often associated with young teenagers and the content they view online. While this is an issue in its own right, the larger problem is the methods social media companies use that allow these pipelines to form in the first place. This is where algorithms come into play.

Social media algorithms are designed to maximize time spent on a site so the companies that own them can, in turn, maximize their profits. The issue lies in how this is done. These companies have found that to keep users on their sites, they need to push them into environments where they interact only with like-minded users and, in some cases, engage with extreme political views. In short, social media algorithms create echo chambers.

The push for algorithms that feed users only content they agree with has fueled the rise of extremism online. Social media companies avoid most of the backlash they should receive for this, though they have faced some criticism. They are protected under Section 230 of the Communications Decency Act, which shields them from liability for user-generated content. Because they aren't technically responsible for what their users say, social media companies have largely escaped consequences.

Social media sites have taken some steps to address the issue, including implementing their own fact-checking systems and customizable timelines meant to blunt the negative effects of their algorithms. While these features ease the problem, they don't solve it. Online extremism hasn't stopped and has even grown in some cases. The core issue is that these companies still operate entirely on their own terms.

When the U.S. government tried to implement an oversight system earlier this year to address the spread of false information on social media, which in turn fuels echo chambers and extremism, it was met with immediate backlash over its perceived partisan nature. Despite this failure, there is still hope that a nonpartisan oversight committee can be formed to help curb the growing fragmentation online and end the algorithms that place social media companies' profits over their users' well-being. If we want to stop radicalization on social media, that is the goal we must achieve.

Tom Muha is an Opinion Columnist & can be reached at tmuha@umich.edu.

Have thoughts about our pieces? The Michigan Daily is committed to publishing a diversity of Op-Eds & Letters to the Editor. Submission instructions can be found here.