The United States has a long history of disinformation. One of the first major manipulations of the media took place in 1782, when Ben Franklin printed an entirely fake supplement to the Boston Independent Chronicle. Printed on its pages was an incendiary story about the scalping of 700 colonists by Native Americans, meant to generate sympathy among British citizens for the plight of Americans.

However, the contemporary perpetrators of “fake news” are capable of far more harm than anything the founding fathers could have imagined (or concocted themselves). With the immense volume of data and ease of access provided by the internet, bad actors can now exert an outsized influence on public opinion. Keeping the internet free for everyone while simultaneously controlling the effects of disinformation is a balancing act, but ultimately we all, as digital citizens, have a responsibility to be aware of the myriad online threats and their consequences.

We know that disinformation is nothing new, but how exactly did it come about on the internet? The pioneers of this new breed of digital content are known as “trolls,” evoking the mean-spirited mythological creature that taunts its victims from under a bridge. According to Data & Society, a research institute that focuses on the social and cultural impact of data in the modern world, a troll is someone who deliberately baits people to elicit an emotional response. Trolls began to appear in the early 2000s on internet message boards such as 4chan, where anonymous users post content consisting of simple words and pictures.

It was primarily on these anonymous platforms that the malicious side of trolling took shape. Though many trolls claim to be apolitical, simply trolling for the “lulz,” as they like to call it, the reality is that trolling skews heavily toward “alt-right” viewpoints. For example, the users of 4chan’s /b/ sub-board use deliberately offensive hate speech to create an emotional impact on their targets. As opposed to the moralistic, sometimes smug political correctness and affinity for fairness supported by the left, trolls bring out the worst in toxic white rage, meninism, nativism and similarly twisted views about vulnerable groups that slip through the cracks of “alt-right” ideology.

The effects of trolling are not limited to sub-boards, though, and in the past few years mainstream conservatives have adopted the highly sensational tactics of trolling to appeal to voters. From the first moments of Donald Trump’s 2016 campaign, when he rode down the escalator to announce his candidacy and said of Mexicans, “They’re bringing drugs. They’re bringing crime. They’re rapists. And some, I assume, are good people,” his speech had all the hallmark elements of trolling.

Back then, many people still thought Trump was running only to highlight the hypocrisy of the political elite. Yes, he was officially a candidate, but very few people thought he was serious. This ambiguity reflects a key trolling tactic captured by Poe’s Law, an internet adage asserting that sincere expressions of extremism are difficult to distinguish from satire of extremism. By preserving that ambiguity, trolls can always claim the moral authority of challenging the establishment, as Trump positioned himself so many times on the campaign trail, rather than admit they are simply engaging in hateful discourse.

Unfortunately, armed with hateful rhetoric and their own version of moral justification, a very visible sub-species of trolls (shall we call them ogres?) emboldened by online trolling behavior has come to dominate today’s political discourse. In a country that supposedly champions liberal values, the most valued content in our online atmosphere proves to be sensational and damaging. Websites such as Twitter and Facebook are struggling to curtail the effects of racist, misogynistic and xenophobic accounts, many of which are run by foreign actors or bots designed to post inflammatory statements algorithmically. In 2017, Facebook reported that up to 3 percent of its accounts were fake, totaling 60 million “users” not associated with a real person. The internet, and social media sites in particular, is predisposed to promote content that attracts attention, and a consequence of this is the promotion of sensational messages that sow discord and harm.

Because the natural tendency of the internet is to guide users toward attention-grabbing sensationalism, we need creative solutions to take back control of our online spaces. There have been attempts at pulling the policy lever on this issue, such as legislation passed by the French Parliament that allows courts to remove fake news during election periods. However, when governments or companies gain the authority to censor online content, it creates a slippery slope, one that may ultimately lead to the infringement of our First Amendment rights in the United States.

Instead, the way to combat disinformation is to make users more digitally literate. Students have it drilled into them from middle school on that they need credible sources for their essays, but this warning should go beyond the classroom. Let us make it a personal responsibility for everyone to be a conscious surfer of the web. This can be done without limiting the freedoms of any individual user, while still informing their choices of who and what to interact with online. For example, qualified professionals in schools could teach online safety, content evaluation and personal data protection to help students make better choices. Structuring these courses in the same way as drug and alcohol awareness or consent education seminars would be a good first step toward promoting more responsible internet usage.

The internet is the biggest playground in the history of human civilization, and every playground has its bullies. But through education about disinformation, fake news and trolling behavior, it is possible to give each internet user the resources they need to function in this complicated ecosystem. A more educated populace will lead to a safer internet for all and provide a chance to reverse some of the negative consequences of unlimited information.

Alex Satola can be reached at apsatola@umich.edu.
