Jim Crow Never Went Away

If you ever need an illustration of just how stupid the average voter is, find a voter who is complaining about racist government policies and ask them how they plan to change them. 99 percent of the time (a conservative estimate; it's probably higher) the voter will tell you that they're planning to beg the government to change its policies. If you point out how stupid that idea is, they'll cite the elimination of slavery and the striking down of Jim Crow laws as proof that their strategy works, which should prove to you that the person you're conversing with is extremely gullible (on the upside, you probably just found a buyer for that bridge you're trying to offload).

While the government has said that it eliminated slavery and Jim Crow laws, it really just changed some legal definitions. If you're being held against your will and forced to provide labor, you're not legally considered a slave; you're legally considered a prison laborer. Likewise, there are no longer laws that overtly treat people differently based on the color of their skin; instead there are algorithms that do the same thing but provide plausible deniability:

But what’s taking the place of cash bail may prove even worse in the long run. In California, a presumption of detention will effectively replace eligibility for immediate release when the new law takes effect in October 2019. And increasingly, computer algorithms are helping to determine who should be caged and who should be set “free.” Freedom — even when it’s granted, it turns out — isn’t really free.

Under new policies in California, New Jersey, New York and beyond, “risk assessment” algorithms recommend to judges whether a person who’s been arrested should be released. These advanced mathematical models — or “weapons of math destruction” as data scientist Cathy O’Neil calls them — appear colorblind on the surface but they are based on factors that are not only highly correlated with race and class, but are also significantly influenced by pervasive bias in the criminal justice system.

As O’Neil explains, “It’s tempting to believe that computers will be neutral and objective, but algorithms are nothing more than opinions embedded in mathematics.”

For the record, when people were celebrating California's decision to eliminate cash bail, I predicted this outcome. I didn't predict the use of algorithms, but I did predict that since release would be left to the sole discretion of some bureaucrats, nothing would actually change.

Plausible deniability is a staple of modern politics. A politician who wants to pass a racist policy just needs to make sure that race is never mentioned in the law; when the policy produces the politician's desired outcome, they can claim that they had no way to predict such a result. Handing decisions over to algorithms adds another layer of plausible deniability. Most people think of algorithms as mysterious wizardry performed by the high priests of science and therefore assume they're impartial and infallible (because, you know, scientists are always impartial and never wrong).

However, algorithms do exactly what they're created to do. If you want a machine learning algorithm to behave in a certain way, you either write it to do exactly what you want or you feed it training data that skews it toward the results you want. When the masses wise up and realize that the algorithm is racially biased, you can just claim that the complexity of the algorithm prevented anybody from accurately predicting what it would do. Their ignorance will make your explanation believable, and you can claim that you've now made improvements that should (i.e. won't) lead to more impartial results.
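
To make the proxy problem concrete, here's a minimal sketch in Python. Everything in it is made up: the data is synthetic, the feature names (a zip code cluster, a "rearrest" label) are hypothetical, and it isn't the code of any real risk assessment tool. It just demonstrates the mechanism the quote above describes: a model that never sees race can still learn it through a feature correlated with race, trained against biased historical labels.

```python
# Minimal sketch (synthetic data, hypothetical features): how a
# "colorblind" risk model reproduces bias through proxy features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000

# Protected attribute: never shown to the model.
group = rng.integers(0, 2, size=n)

# Proxy feature: zip code cluster, 90% correlated with group
# (think residential segregation).
zip_cluster = np.where(rng.random(n) < 0.9, group, 1 - group)

# True underlying behavior is generated identically for both groups.
behavior = rng.normal(0.0, 1.0, size=n)

# Historical "rearrest" labels are biased: group 1 is policed more
# heavily, so the same behavior yields more recorded arrests.
enforcement_bias = 0.8 * (group == 1)
label = (behavior + enforcement_bias + rng.normal(0, 0.5, n) > 0.5).astype(int)

# Train a model that never sees `group` -- only the proxy and a noisy
# measurement of behavior.
X = np.column_stack([zip_cluster, behavior + rng.normal(0, 0.5, n)])
model = LogisticRegression().fit(X, label)

risk = model.predict_proba(X)[:, 1]
print("mean predicted risk, group 0:", risk[group == 0].mean())
print("mean predicted risk, group 1:", risk[group == 1].mean())
```

Run it and group 1 comes out "riskier" even though the underlying behavior was generated identically for both groups. The bias lives in the labels and the proxy, not in any line that mentions race, which is exactly what makes it deniable.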