On July 1, San Francisco District Attorney George Gascón will launch a new artificial intelligence tool meant to eradicate potential racial bias in prosecutors’ charging decisions via a “race-blind charging system.”
The first-of-its-kind algorithmic tool, created by the Stanford Computational Policy Lab, will also be offered free of charge to any other prosecutor's office that wishes to take part.
“Lady justice is depicted wearing a blindfold to signify impartiality of the law, but it is blindingly clear that the criminal justice system remains biased when it comes to race,” Gascón said. “This technology will reduce the threat that implicit bias poses to the purity of decisions which have serious ramifications for the accused, and that will help make our system of justice more fair and just.”
In recent years, implicit, or unconscious, bias has become more widely acknowledged as an issue that tangibly impacts policing and other critical stages of the U.S. criminal justice system.
Increasingly, police officials, prosecutors, and public defenders are implementing implicit bias training in their offices in the hope of reducing racial inequity in the justice system.
San Francisco’s plan goes a step further, bringing in technology that, with luck, will make it far more difficult for prosecutors to base charging decisions on those subconscious biases.
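The announcement does not spell out the tool's inner workings, but a race-blind charging system of this kind generally works by redacting race-indicative details, such as physical descriptors and neighborhood references, from police incident narratives before a prosecutor reviews them. The Python sketch below is a minimal illustration of that blinding idea only; the pattern list and the redact_report function are hypothetical stand-ins, not the Stanford lab's actual implementation.

```python
import re

# Hypothetical redaction patterns for race-indicative details. The Stanford
# lab's actual rules and models are not described in the announcement.
REDACTION_PATTERNS = [
    (re.compile(r"\b(Black|White|Hispanic|Latino|Asian)\b", re.IGNORECASE), "[RACE]"),
    (re.compile(r"\b(brown|blond|blonde|red)\s+hair\b", re.IGNORECASE), "[HAIR]"),
    (re.compile(r"\b(Bayview|Mission District|Tenderloin)\b"), "[NEIGHBORHOOD]"),
]

def redact_report(narrative: str) -> str:
    """Return the incident narrative with race-proxy terms replaced by placeholders."""
    for pattern, placeholder in REDACTION_PATTERNS:
        narrative = pattern.sub(placeholder, narrative)
    return narrative

print(redact_report(
    "Officers detained a Black male with brown hair near the Tenderloin."
))
# -> Officers detained a [RACE] male with [HAIR] near the [NEIGHBORHOOD].
```

A production system would presumably rely on natural-language processing rather than a fixed keyword list, since names, addresses, and neighborhood references can all act as proxies for race; the sketch conveys only the concept of blinding the reviewer.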