Facebook has been through a tough year. The company is under scrutiny for launching, this past January, an app targeted at children, and for allowing Russian actors to meddle in the most recent U.S. presidential election. The new developments in the Cambridge Analytica scandal are only adding fuel to the fire.

On March 17, news reports on the Cambridge Analytica scandal revealed Facebook’s negligence when it comes to protecting user data. Cambridge Analytica, the London-based political consulting firm that worked with Donald Trump’s presidential campaign, inappropriately (and potentially illegally) obtained data on 50 million Facebook users. Though we don’t know exactly what the firm did with the data, it was likely used to influence the election, suggesting Trump may not have won fairly.

Cambridge Analytica obtained this data through a third-party personality quiz app developed by a Cambridge University researcher. When a user takes one of these third-party quizzes, the developer is able to access not only that user’s data but the data of all of their friends as well. Because 270,000 people took the researcher’s personality quiz, data was collected on more than 50 million people.

To remedy the scandal, Facebook could simply prohibit third-party apps from collecting user data. The company would never make such a decision, however, because its business model relies on these third parties. Facebook generates revenue because advertisers are willing to pay for space on its platform, and advertisers pay more when users are more engaged with the platform. Ideally, advertisers want users to be addicted.

To encourage addiction, Facebook promotes viral content created by third-party apps (like quizzes that ask, “Which ‘The Office’ character are you most like?”). These third-party apps also add to Facebook’s revenue stream, as they pay to be promoted. Facebook is thus incentivized to keep these third-party apps happy, as they are the company’s customers.

Related to this issue is Facebook’s tolerance, and even active promotion, of fake news, another form of viral content that sparks user interest. News with an eye-catching headline, even if it comes from an unreliable source, is exactly what gets people to click. The fake news epidemic that Facebook, along with other social media platforms, has perpetuated could have damaging effects on society. The editorial board of the Wall Street Journal wrote, “A few thousand Russian ads on Facebook didn’t turn the 2016 election, but the proliferation of fake news is tainting public discourse.”

Because more than 110 million Americans get their news from Facebook, the platform must take steps to ensure the accuracy of its content. The company also needs to ensure its algorithms are not actively promoting unreliable content in its users’ News Feeds.

Facebook needs to realize that its platform, which reaches over 2 billion users worldwide, can be dangerous. CEO Mark Zuckerberg recently responded to public scrutiny by asserting the platform will only promote publishers whose content is “trustworthy, informative and local.” I hope Zuckerberg keeps this promise, and that users will be more accurately informed in the coming months and years.

As for protecting users’ privacy, The Economist has called for Facebook to take its data protection protocols to a new level: “Facebook needs a full, independent examination of its approach to content, privacy and data … (which) should be made public.” The publication’s editors also called for the creation of a “Data Rights Board” that would enforce rules and regulations governing the use of user data.

Though the regulations The Economist is calling for are sweeping, I believe they are necessary. Facebook is simply too influential for the public to turn a blind eye to its negligent business practices.

Erik Nesler can be reached at egnesler@umich.edu.
