Chapter 6: Learning to Discriminate

Authors: Benjamin Davies, Thomas Douglas
Publisher: Oxford University Press

ABSTRACT

It is often thought that traditional recidivism prediction tools used in criminal sentencing, though biased in many ways, can straightforwardly avoid one particularly pernicious type of bias: direct racial discrimination. They can avoid this by excluding race from the list of variables employed to predict recidivism. A similar approach could be taken to the design of newer, machine learning-based (ML) tools for predicting recidivism: information about race could be withheld from the ML tool during its training phase, ensuring that the resulting predictive model does not use race as an explicit predictor. However, if race is correlated with measured recidivism in the training data, the ML tool may ‘learn’ a perfect proxy for race. If such a proxy is found, the exclusion of race would do nothing to weaken the correlation between risk (mis)classifications and race. Is this a problem? We argue that, on some explanations of the wrongness of discrimination, it is. On these explanations, the use of an ML tool that perfectly proxies race would (likely) be more wrong than the use of a traditional tool that imperfectly proxies race. Indeed, on some views, use of a perfect proxy for race is plausibly as wrong as explicit racial profiling. We end by drawing out four implications of our arguments.
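The proxy-learning mechanism the abstract describes can be made concrete with a small sketch. The data below are entirely hypothetical: a withheld attribute (`race`) and a training feature (`zip_code`) that happens to duplicate it, so a "race-blind" learner that only thresholds on `zip_code` nevertheless reproduces the excluded attribute exactly. This is a toy illustration of the abstract's point, not the authors' method or any real tool.

```python
# Illustrative sketch with synthetic data: excluding a sensitive
# attribute does not help when another feature perfectly proxies it.
import random

random.seed(0)
n = 1000

# 'race' is withheld from training; 'zip_code' is a perfect proxy
# (identical to race in this toy data set).
race = [random.randint(0, 1) for _ in range(n)]
zip_code = race[:]

# Measured recidivism labels are correlated with race in the data.
label = [1 if random.random() < (0.7 if r == 1 else 0.3) else 0
         for r in race]

# A race-blind learner sees only zip_code. Its best threshold rule is
# to flag whichever zip_code group has the higher base rate as high risk.
rate_1 = sum(l for z, l in zip(zip_code, label) if z == 1) / zip_code.count(1)
rate_0 = sum(l for z, l in zip(zip_code, label) if z == 0) / zip_code.count(0)
high_risk_group = 1 if rate_1 > rate_0 else 0
predict = [1 if z == high_risk_group else 0 for z in zip_code]

# Despite never seeing race, the classifier's output tracks it exactly:
# the risk classification is itself a perfect proxy for race.
agreement = sum(p == r for p, r in zip(predict, race)) / n
print(agreement)
```

Because `zip_code` equals `race` for every individual, the learned rule's output coincides with race for all 1000 cases, so removing race from the feature list leaves the correlation between classifications and race fully intact.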
