Harvard University’s Berkman Klein Center for Internet & Society hosted a webinar Tuesday afternoon to discuss fairness in algorithms and its intersections with the legal and technological fields.

Holli Sargeant, a doctoral exchange student at Harvard Law School and Ph.D. candidate studying law and algorithms at the University of Cambridge, moderated the event and opened the discussion with a brief primer on the current state of ethical artificial intelligence.

Deborah Hellman, a visiting professor of law at Harvard Law School, was one of the panelists. When asked how she entered the algorithmic fairness “ecosystem,” Hellman said her research into discrimination originally sparked her interest in working with algorithms and fairness.

“For a scholar of discrimination, it was super interesting … thinking about how to bring that conversation that we have about privilege, law and philosophy into (bettering a) nation and into the world of the technical types,” Hellman said.

Ben Green, a postdoctoral scholar at the Ford School of Public Policy, then discussed some of the primary concerns about algorithms that motivated his initial research.

“We define a metric and try to optimize an algorithm to satisfy that metric which means it’s certifiable,” Green said. “What I’m really interested in my work is how we move beyond that purely formal and formalization-based approach to thinking about the broader social and political context to ensure that we’re not just taking fairness as a convenient mathematical definition, but as sort of the bigger picture.” 

Sharad Goel, a computer science professor at Harvard University, discussed a discrimination and statistics study he worked on around 10 or 15 years ago and how the problems he faced in publishing his findings then differ from the problems he faces now.

“The first papers that we were writing … were getting rejected by statistics journals because they were too political,” Goel said. “Now, the same paper that I’ve been writing for like 10 years … (is) getting rejected because it’s ‘not political enough.’” 

Goel said he thinks scientists have decided the existing approaches — which entail political engagement — are unsustainable and are trying to reevaluate their preconceived notions.

“I think we’ve realized that that’s not a sustainable approach, but we’re still trying to figure out what the field needs, and it eludes the dominant way of thinking about it,” Goel said. “I don’t see the mathematization of fairness being a productive way forward. But the problem is that this is what computer scientists do.”

Daily News Contributor Alexis Spector can be reached at alexissp@umich.edu.