This week, following outcry related to the Christchurch mass shooting in New Zealand, Facebook announced it would ban white nationalist and white separatist content from its platform. The gunman livestreamed part of the attack on Facebook, which resulted in copies of the video spreading across major social media platforms. The questions provoked by the attack — how did the gunman radicalize? Why has it proven so hard for companies to handle his video and manifesto? — have highlighted the content moderation problem social media platforms have struggled with for years. Namely: How do you protect the ideals of free speech while preventing the spread of vile and dangerous content?
Some believe the solution is a matter of incentives and advocate for holding social media companies accountable for the content that appears on their platforms. For example, lawmakers in Germany passed a law that mandates social media companies delete offending posts within 24 hours of being notified or risk heavy fines. Others advocate for keeping the internet a bastion of free speech, as when Twitter CEO Jack Dorsey referred to his platform as the “free speech wing of the free speech party.”
Neither of these approaches is without drawbacks. Holding companies accountable for their content may stifle competition and restrict free speech. The overhead of complying with regulations, as well as the risk of litigation from disgruntled users, could cripple startups that lack the resources of large social media platforms to deal with such threats. Furthermore, regulation may incentivize companies to “delete in doubt,” removing any borderline content, which could have a chilling effect on speech.
But unrestricted speech is not the answer either. A number of examples demonstrate the serious consequences of a laissez-faire approach to this issue. For instance, see the role of misinformation in the 2016 election, the spread of conspiracy theories leading to Pizzagate and the use of Facebook to incite violence against Rohingya Muslims.
A more active strategy has taken hold in recent years due to outcry over events such as those mentioned above. Facebook has 30,000 employees dedicated to safety and security, 15,000 of whom are content moderators, and a number of countries are considering online content regulations. But these efforts focus on a problem that may be intractable. Facebook moderates billions of posts per week in more than 100 different languages. YouTube sees the equivalent of 65 years of video uploaded each day. That scale would be a perfect application for Silicon Valley’s automation capabilities, if the task didn’t involve the nuances of human discourse. Though artificial intelligence has made significant strides in identifying specific content such as nudity and graphic violence, the subtleties and cultural contexts of human language are not easily automated, leaving social media platforms and their content moderation armies playing a global game of whack-a-mole.
This active approach focuses on the content moderation machine itself — the human moderators, AI systems and policies employed by social media companies to address content on their platforms. However, this misses the root cause of the issue: social media’s design. The desire to create a “global community,” as Facebook puts it, along with the emphasis on virality and relevancy as defined by the tech companies, is the real culprit. Users are inundated with content that is meant to grab their attention — and hold it — for as long as possible. This pursuit often leads users down a rabbit hole of recommendations for more and more extreme material. A Wall Street Journal investigation of YouTube found that users who watched relatively mainstream news sources were often fed extreme video recommendations on a wide variety of topics. For example, a search for information on the flu vaccine led to recommendations for anti-vaccination conspiracy videos.
Additionally, the emphasis on virality leads to features that quickly amplify and legitimize content. WhatsApp’s popular message forwarding feature allows users to forward messages without any indication of their origin, making it seem as if a message which may have been shared thousands of times is coming directly from a close friend or family member. This feature was recently limited after lynchings in India were fueled by rumors spread on the service.
WhatsApp isn’t the only platform that has disabled functionality in response to tragedy. YouTube disabled part of its search feature following the Christchurch shooting, and Facebook was ordered offline temporarily in Sri Lanka after false rumors led to riots against Muslims. This tendency to disable functionality during crises is telling. When push comes to shove, the platforms themselves acknowledge that only by addressing problematic features will the problem be solved.
This is not to say a redesign of these platforms would be simple. Putting aside companies’ incentives to maintain the status quo and the legislative hurdles that would accompany any sort of intense regulation, there are legitimate arguments to be made for preserving features currently under fire. The same features that cause outcry now inspired optimism following the Arab Spring. A successful solution could not merely limit functionality, as that would ignore the capacity for good of these technologies.
Fundamental questions about free speech and the role of technology in society are being left to conference rooms in Silicon Valley, creating a situation where human moderators and imperfect AI systems implement a global censorship regime dictated by a handful of corporations. The solution for this problem is unclear, but whatever it is will require reckoning with the massive influence of social media platforms. The model of these sites ensures they don’t simply mirror society — they change it.
Chand Rajendra-Nicolucci can be reached at chandrn@umich.edu.