
Google's Brain Team: 'AIs can be racist and sexist but we can change that'

Google has devised a mathematical approach to tackling discrimination in machine learning.
Written by Liam Tung, Contributing Writer

Google's methodology could have applications in any scoring system, such as a bank's credit-scoring system. (Image: Getty Images/iStockphoto)

In an age where data is driving decisions about everything from creditworthiness to insurance and criminal justice, machines could well end up making bad predictions that simply reflect and reinforce past discrimination.

The Obama Administration outlined its concerns about this issue in its 2014 big-data report, warning that automated discrimination against certain groups could be the inadvertent outcome of the way big-data technologies are used.

Using social networks or location data to assess a person's creditworthiness could boost access to finance for people who don't have a credit history.

However, big-data techniques might also raise barriers to finance and, since a decision could be made by a proprietary algorithm, there'd be no way to challenge the decision.

To address this threat to minority groups, the White House has called for "equal opportunity by design". But as three Google researchers note in a new paper, there is currently no vetted methodology for avoiding discrimination based on sensitive attributes in machine learning.

"Consider the case of a group that we have relatively little data on and whose characteristics differ from those of the general population in ways that are relevant to the prediction task," writes Moritz Hardt, a research scientist on the Google Brain Team and one of the paper's authors.

"As prediction accuracy is generally correlated with the amount of data available for training, it is likely that incorrect predictions will be more common in this group. A predictor might, for example, end up flagging too many individuals in this group as 'high risk of default' even though they pay back their loan," Hardt notes.

"When group membership coincides with a sensitive attribute, such as race, gender, disability, or religion, this situation can lead to unjust or prejudicial outcomes."

One approach might be "fairness through unawareness", that is, removing the sensitive attribute from the equation altogether, but as Hardt acknowledges, an algorithm could still end up inferring that attribute from a combination of other data points.
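
To see why, consider a toy sketch in Python (the data is synthetic and the feature names are invented for illustration; it assumes NumPy and scikit-learn are available and has nothing to do with Google's own code): even with the sensitive column deliberately left out, a simple classifier can often reconstruct it from correlated "proxy" features.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Hypothetical sensitive attribute (0 or 1), deliberately excluded from the model's inputs.
    group = rng.integers(0, 2, size=n)

    # A "neutral" feature that happens to track group membership closely,
    # standing in for something like a postcode or social-network signal.
    postcode = 3 * group + rng.normal(0, 1, size=n)
    income = rng.normal(50, 10, size=n)        # a genuinely unrelated feature

    X = np.column_stack([postcode, income])    # note: the sensitive column is not included

    # A simple classifier recovers the "removed" attribute with high accuracy,
    # so any model trained on these features can still depend on it indirectly.
    clf = LogisticRegression().fit(X, group)
    print(f"group recoverable from proxies: {clf.score(X, group):.0%} accuracy")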

Another approach, called "demographic parity", would require a prediction to be uncorrelated with the sensitive attribute, but Hardt argues that in the case of predicting medical conditions such as heart failure, it's "neither realistic nor desirable to prevent all correlation between the predicted outcome and group membership".

Google's attempt to navigate these issues starts with the concept of "equality of opportunity" in machine learning, built on the idea that "individuals who qualify for a desirable outcome should have an equal chance of being correctly classified for this outcome".

In a credit context, this means that among applicants who would actually pay back their loans, the same proportion of each group should be approved, rather than letting race or gender tilt who gets a loan.
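
The contrast with demographic parity can be made concrete with a small sketch (the function names and data below are made up for illustration, not taken from the paper): demographic parity compares raw approval rates between groups, while equality of opportunity compares approval rates only among applicants who actually repaid.

    import numpy as np

    def approval_rate(approved, mask):
        """Fraction of the applicants selected by `mask` who were approved."""
        return approved[mask].mean()

    def fairness_gaps(approved, repaid, group):
        """Gap in approval rates between groups 0 and 1, under each criterion."""
        a, b = (group == 0), (group == 1)
        # Demographic parity: compare the two groups' overall approval rates.
        dp_gap = abs(approval_rate(approved, a) - approval_rate(approved, b))
        # Equality of opportunity: compare approval rates among true repayers only.
        eo_gap = abs(approval_rate(approved, a & (repaid == 1)) -
                     approval_rate(approved, b & (repaid == 1)))
        return dp_gap, eo_gap

    # Toy data standing in for real loan decisions.
    rng = np.random.default_rng(1)
    group = rng.integers(0, 2, size=1000)
    repaid = rng.integers(0, 2, size=1000)
    approved = rng.integers(0, 2, size=1000)
    dp, eo = fairness_gaps(approved, repaid, group)
    print(f"demographic-parity gap: {dp:.3f}, equal-opportunity gap: {eo:.3f}")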

The methodology could have applications in any scoring system, such as a bank's credit-scoring system that uses thresholds to determine whether a person should get a loan.
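
For a threshold-based scorer like that, the adjustment can be sketched roughly as follows (an illustrative approximation rather than the paper's exact procedure, which can also randomize between thresholds): instead of a single global cut-off, pick a cut-off per group so that each group's true positive rate, the share of genuine repayers who get approved, lands on the same target.

    import numpy as np

    def threshold_for_tpr(scores, repaid, target_tpr):
        """Pick the score cut-off whose true positive rate is closest to target_tpr."""
        best_cutoff, best_gap = None, float("inf")
        for t in np.unique(scores):
            tpr = (scores[repaid == 1] >= t).mean()  # approval rate among repayers
            if abs(tpr - target_tpr) < best_gap:
                best_cutoff, best_gap = t, abs(tpr - target_tpr)
        return best_cutoff

    # Toy scores: group 1's scores sit lower, e.g. because its training data is sparser.
    rng = np.random.default_rng(2)
    group = rng.integers(0, 2, size=5000)
    repaid = rng.integers(0, 2, size=5000)
    scores = rng.normal(600, 50, size=5000) + 40 * repaid - 30 * group

    # One cut-off per group, each tuned so that about 80% of genuine repayers are approved.
    cutoffs = {g: threshold_for_tpr(scores[group == g], repaid[group == g], 0.80)
               for g in (0, 1)}
    approved = scores >= np.where(group == 0, cutoffs[0], cutoffs[1])
    print(cutoffs)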

According to Hardt, the methodology can not only measure and prevent discrimination based on sensitive attributes but also help scrutinize predictors. It also allows predictors to be adjusted so that users can trade off classification accuracy against non-discrimination.

The methodology makes it possible to address discrimination in scoring systems, even for those who don't control the underlying system, and provides organizations with an incentive to invest in better scoring systems, according to Google Research.

"When implemented, our framework also improves incentives by shifting the cost of poor predictions from the individual to the decision maker, who can respond by investing in improved prediction accuracy," writes Hardt.

"Perfect predictors always satisfy our notion, showing that the central goal of building more accurate predictors is well aligned with the goal of avoiding discrimination."

But the researchers admit that mathematics alone won't be able to tackle discrimination in machine learning.
