Though Facebook is the most commonly cited bad guy when it comes to misuse of its users’ personal information, especially in the wake of the Cambridge Analytica scandal or the recent breach affecting 50 million accounts, this week Google joined its ranks following a massive data exposure incident of its own.
The main issue with this recent data incident is not its size or the fact that it happened at all. The ethical problem with Google’s handling of it is that the company withheld information pertaining to the safety and well-being of consumers’ data long after the incident had occurred. Going forward, we should curtail the hegemony of these corporations and shift power toward regulators and individuals, putting users back in control of their own data.
In March of this year, internal investigators at Google found a software glitch in the Google+ social network platform that exposed the private information of hundreds of thousands of users. Google’s legal and policy staff at the time recommended against notifying users for fear of increased regulatory scrutiny and public backlash. Though the incident cannot be called a breach because there were no signs of abuse, it still shows negligence on Google’s part toward its users’ personal information. As part of its response to the incident, Google has announced the closure of Google+, finally putting an end to its failed attempt at challenging Facebook as the dominant social platform.
So what exactly is a data breach? In the strict sense, it is what companies are really trying to prevent: the infiltration of a computer network by either insiders or remote cybercriminals. Neither of the recent scandals fits that definition. In the case of Cambridge Analytica, Facebook, as a business practice, knowingly gave away its users’ data to a third party. In the Google+ incident, a privacy task force known as Project Strobe conducted a company-wide audit of Google’s software and found the bug in the Google+ code, which Google then decided to hide from the public.
The unfortunate truth is that data exposures are very difficult, if not impossible, to prevent. Every company that does business on the internet is vulnerable, and no matter how much companies spend, hackers always find a way. In a speech given last August at the University of Georgia, Rick Smith, former CEO of the credit agency Equifax, ominously said, “There’s those companies that have been breached and know it, and there are those companies that have been breached and don’t know it.” A few weeks later, Equifax reported to the Securities and Exchange Commission that the personal data of approximately 143 million U.S. consumers — including social security numbers, names, birth dates and more — had been stolen by hackers.
We cannot hope to eradicate cybercrime altogether, but it is possible for companies to be more transparent about how they process user data. Especially in the event of exposure, consumers should have the right to know if they are at risk. Furthermore, they should have the ability to manage what they share and how their information is transferred to third parties.
Thankfully, more oversight by consumers and governments seems to be the trend in the regulatory landscape. Under the European Union’s General Data Protection Regulation, data subjects must provide “unambiguous” consent to the collection and processing of their personal data, and that consent must be freely given, correctable and reversible, meaning consumers can modify or even stop the collection of certain types of data. The new regulation also puts in place strict penalties for companies that do not disclose a breach within 72 hours. These rules aim to give users more control over the treatment of their personal information and hold companies more accountable in cases of negligence.
Another common theme in recent incidents of data exposure is large processors of data such as Facebook or Google exporting information to third parties without the knowledge of data subjects. In the case of Facebook, the political consulting firm Cambridge Analytica exploited users’ profile information to influence the 2016 U.S. election without their consent. After the scandal, Facebook CEO Mark Zuckerberg commented on limiting access to its credential-based application programming interfaces, which make user data available to app developers with the proper permission. The danger of loosely restricted application programming interfaces, especially those related to personal information, is that bad actors posing as app developers can gain access for unauthorized purposes.
While companies should not have to notify users of every single move they make, any transfer of their users’ personal data is worthy of notice. Consumers had no idea that any of these data transfers were taking place before they were reported in the news. Google, for its part, had been sitting on this information for months, perhaps waiting out the Cambridge Analytica firestorm. Companies have little incentive on their own to release this kind of information, so the fight for more transparency about the use of personal data has to come from us, with the support of more stringent regulation such as the General Data Protection Regulation.
But relying on user consent poses practical problems. It would be extremely tedious to read over the terms and conditions for every new app, and that is exactly what companies want. We have been trained to accept everything and question nothing when it comes to the transfer of our sensitive personal information, but as a consequence we have given companies free rein to engage in whatever activities they want. In this unbalanced environment, governments must step in to rein in companies’ monopoly of control over personal information and defend users. On a business-wide level, a data breach represents a public relations headache and lost revenue, whereas for individuals it can have life-changing effects. It is time that we acknowledge the misuse of our personal information by tech giants and strive for greater control of our digital lives.
Alex Satola can be reached at apsatola@umich.edu.