Anamaria Cuza: Where are the ethics in tech?
Over 20,000 Google employees from all over the world walked out of their offices on Nov. 1 to protest the way the tech giant has handled claims of sexual harassment, gender inequality and systemic racism. The walkout came in response to a New York Times article revealing that Google had paid millions of dollars in exit packages to executives accused of sexual harassment. This is not the first time Google's employees have protested the tech giant's conduct, projects and policies. In the last few months, petitions have been signed, walkouts have been organized and employees have resigned in protest of Google's increasingly worrying projects, from working with the Pentagon to develop the next generation of military artificial intelligence to secretly developing a censored search engine for China.
With every scandal Google gets involved in, a worrisome pattern emerges: As Google becomes less transparent, more profit-focused and more forgetful of its founders' motto, "Don't Be Evil," employees learn about the company's projects and problems only through leaks published by journalists. You could argue that this pattern is to be expected of a large company whose only way of managing its employees is through increasingly bureaucratic measures. That argument, though, goes against what technology was originally supposed to be.
From its very inception, Google's main goal was "to organize the world's information and make it universally accessible and useful." This goal was very much aligned with the idealistic stances of the hackers and technologists of the '80s and '90s: They saw technology as a way of demolishing bureaucracy, of giving each person a voice, of reclaiming knowledge from those in positions of power and distributing it equally to every citizen of the world.
When someone says "Google," we most likely think of an abstract structure that generates profit and deploys projects. I think we have to start thinking of Google as a community of people: people who once held the idealistic view of technology of the '80s and '90s. These same people are now developing a censored search engine for China, despite warnings from 14 human rights groups about the human rights violations the project would inevitably enable. These same people are deploying artificial intelligence systems for the military as a first step toward using this technology in advanced weapons. Seen this way, instead of blaming an abstract entity called Google, we can start asking ourselves who these people are.
Usually, they are us. Or me. Or any skilled graduate with a science degree whose dream has been to work at Google and who, from early on, pursued internships at the company. They are people who spent their undergraduate years taking the most challenging classes, honing their interviewing skills and working through textbooks filled with coding challenges. These are the people who are told that they will do well.
We are taught how to solve increasingly hard problems. We are taught how to enhance our LinkedIn profiles by adding more numbers to our project descriptions. We are taught to regard the "cool" internships we land as badges of honor for all our hard work. But we are missing a huge chunk of what we are supposed to learn. What if the numbers we added to our internship descriptions were the numbers of humans our product negatively affected? What if one of the challenging problems we had to solve was how a government can track its citizens' every move?
We are supposed to continue solving interesting problems without placing them above people's needs. If we want our work to have a good impact, it has to abide by ethical guidelines, guidelines that are not clear even to philosophers of artificial intelligence ethics. We are supposed to balance hours and hours of coding with mindful consideration of the consequences our code might have on others. Why do we not get any guidance in doing all of this?
Getting a degree in computer science is already challenging. So challenging that our curriculum, peers and mentors sometimes forget about the negative, disruptive power of tech. They forget that what they call "developing leaders" could also mean developing leaders for an autocratic regime, leaders of companies that completely disregard human rights or leaders of start-ups that contribute to systemic inequality and discrimination. Read more about Uber, Facebook and the like and you will see what I mean. So what kind of leaders do we want to develop? What kind of leaders do we want to become?
Anamaria Cuza can be reached at firstname.lastname@example.org.