
Can AI Be Fair?

Experts from across the country in computer science, philosophy, law, and other fields gathered June 10-12 in Caltech's Baxter Hall to discuss a hot topic in some academic circles: Can artificial intelligences, or machine-learning algorithms, be fair?

The experts were part of the Decisions, Games, and Logic workshop, organized primarily by Boris Babic, the Weisman Postdoctoral Instructor in Philosophy of Science at Caltech. Babic says he got the idea for the workshop after teaching a seminar called Statistics, Ethics & Law in the Division of the Humanities and Social Sciences. One of the activities in the class involved looking through studies investigating the fairness of machine-learning programs, or algorithms, used for making predictions in college admissions, employment, bank lending, and criminal justice.

"The amount of literature out there about ethical constraints on machine learning has exploded," says Babic. "One key question is: Is there a way to provide guarantees or safeguards that these machine-learning algorithms are not going to produce what we deem unfair effects across different demographic subgroups, such as those based on race or gender?"

Babic says interest in AI and fairness was ignited by a 2016 ProPublica investigation that claimed to find racial bias in a classification algorithm used by judges in Florida courts. The algorithm was designed to assess the risk that a convicted person would commit another crime, or recidivate, and judges used the calculated risk scores to inform bail decisions. But, according to the ProPublica report, the algorithm falsely identified black people as high risk more often than it falsely identified white people.

"If you are training an algorithm on data that have preexisting biases—for example, from a police department that disproportionately targets minorities for petty crimes—then those biases will be reflected in the algorithm's results," says Babic. "A lot of researchers are looking into solutions to this problem."

At the workshop, various computer scientists discussed addressing these issues with specific machine-learning techniques. Machine-learning programs typically learn from so-called training data and, from these data, build a model that makes predictions about the future. The goal is to remove racial and other biases from the resulting models.

One speaker, Ilya Shpitser, a computer science professor at Johns Hopkins University, stressed the importance of "causal inference" techniques, which separate causal effects deemed unfair from those considered fair. This approach asks a "counterfactual" question—basically a "what if" question—about the decisions one would have made if the world had been fair. In other words, once unfair causal effects are identified—such as one's race leading to different bail decisions in a court—the goal is to selectively remove those effects from an algorithm that makes automated decisions.
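To make the counterfactual idea concrete, here is a deliberately simplified sketch. It is not Shpitser's actual method, and the data and column names are invented: train a toy risk model, then flip each person's recorded race and see whether the model's prediction changes. Decisions that change when nothing but race changes are the kind of unfair causal effect these techniques aim to remove.

```python
# Toy counterfactual check (illustrative only; data and feature names are made up).
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: prior arrests and race (coded 0/1) with a high-risk label.
df = pd.DataFrame({
    "prior_arrests": [0, 1, 3, 5, 2, 4, 0, 6],
    "race":          [0, 0, 0, 0, 1, 1, 1, 1],
    "high_risk":     [0, 0, 1, 1, 0, 1, 0, 1],
})

features = ["prior_arrests", "race"]
model = LogisticRegression().fit(df[features], df["high_risk"])

# The "what if" question: same people, same records, only race flipped.
actual = model.predict(df[features])
counterfactual_inputs = df[features].copy()
counterfactual_inputs["race"] = 1 - counterfactual_inputs["race"]
counterfactual = model.predict(counterfactual_inputs)

print("Predictions that change when only race changes:",
      (actual != counterfactual).sum())
```

A check this simple only catches a model that uses race directly.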

But Shpitser also explained that one has to be careful about "proxies" in data, where something like racial bias may have seemingly been removed but is, in fact, still influencing the outcome via a proxy. He gave the example of African Americans receiving literacy tests in the South in the 1960s to determine if they could vote. If they did not pass the test, they could not vote. In this case, the test—the proxy—was an attempt to mask the racial bias.

"We want to move to the fair world even though the real world has problems with it," says Shpitser.

Deborah Hellman, a professor of law at the University of Virginia, proposed what she thinks is the best way to measure fairness in machine-learning programs: compare the numbers of false positives and false negatives a program produces for different groups. In the ProPublica example, false positives occurred when people were wrongly predicted to commit future crimes, and false negatives occurred when people were wrongly predicted not to commit future crimes. By comparing the ratios of false positives to false negatives for different groups, such as black people and white people, Hellman says, one can determine whether an algorithm is fair. Misaligned ratios, she says, would indicate disparate treatment.
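As a rough sketch of what that comparison might look like in code (the numbers below are invented for illustration, not taken from the ProPublica data):

```python
# Compare false-positive / false-negative ratios across groups (made-up data).
import pandas as pd

df = pd.DataFrame({
    "group":     ["A", "A", "A", "A", "B", "B", "B", "B"],
    "predicted": [1, 1, 0, 0, 1, 0, 0, 0],   # 1 = predicted to reoffend
    "actual":    [0, 1, 0, 1, 0, 1, 0, 1],   # 1 = actually reoffended
})

for group, g in df.groupby("group"):
    false_pos = int(((g["predicted"] == 1) & (g["actual"] == 0)).sum())
    false_neg = int(((g["predicted"] == 0) & (g["actual"] == 1)).sum())
    ratio = false_pos / false_neg if false_neg else float("inf")
    print(f"Group {group}: FP={false_pos}, FN={false_neg}, FP/FN = {ratio:.2f}")

# Ratios that diverge sharply between groups would, on Hellman's view,
# signal disparate treatment.
```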

Another speaker, David Danks, a professor of philosophy and psychology at Carnegie Mellon University, said he thinks we can sometimes trust machine-learning programs, even knowing that they are often imperfect and can have bias. He said there are some circumstances where the bias may not be relevant and would not cause harm. And, he says, there may even be situations where the bias can be used to determine which groups need more social support.

Danks gave the example of a system that is designed to predict the best employees for a certain task but is biased against people who wear blue shirts. If nobody is wearing blue shirts, he explained, the system can be trusted. "It's carrying out my values even though it's biased. … The harm comes when the prediction is used."

In the end, participants said the workshop's cross-disciplinary nature was tremendously useful.

"Fairness in automated-decision procedures is a problem that cuts across disciplines," says Frederick Eberhardt, professor of philosophy at Caltech. "We need to understand what we actually want and mean by fairness, we have to figure out how to implement it in modern AI technologies, and we have to ensure that these technologies are subject to the demands of the law and the public in ensuring scrutiny, recourse, and transparency. At the workshop, it was fascinating to see how the various experts would disagree in ways that were not aligned with their disciplinary backgrounds and to see how much everything was still in flux. There is a lot of work to be done in this area."

Written by Whitney Clavin
