Artificial intelligence (AI) is one of the hottest topics in technology at the moment. If you listen to the people developing AI, you will likely come to believe that it will solve all of the world’s problems. If you listen to AI’s critics, you will likely come to believe that it is the catalyst for a Terminator future.
AI probably won’t solve all of our problems, but it probably won’t wipe out our species either. However, it is undeniable that algorithms are shaping our lives more and more. That isn’t a problem when those algorithms suggest what to read based on what you’re currently reading, or what to buy based on what you’re currently buying. It is a problem when they decide whether or not you will be kept in a cage:
Police in Durham are preparing to go live with an artificial intelligence (AI) system designed to help officers decide whether or not a suspect should be kept in custody.
The system classifies suspects as low, medium or high risk of offending and has been tested by the force.
It has been trained on five years of offending-history data.
The story cites the AI’s claimed accuracy rate as if a high accuracy figure should be enough for everyone to trust the system implicitly. But the system is proprietary, so it’s impossible for outside parties to verify those claims or to know how the system decides who should be kept in a cage. It’s a black box. Can an officer override the system? If they can, does that override get fed back into the AI’s training data, where it will color future decisions? One can ask hundreds of questions about the system and answer none of them.
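To see why a quoted accuracy rate proves little on its own, consider a toy sketch. The numbers below are invented for illustration; the Durham system’s internals and base rates are not public. If only a small fraction of suspects go on to reoffend, a “model” that labels everyone low risk scores impressively on accuracy while being completely useless:

```python
import random

random.seed(0)

# Invented numbers, purely for illustration: assume 10% of suspects
# in a hypothetical population go on to reoffend.
suspects = [random.random() < 0.10 for _ in range(10_000)]  # True = reoffended

def always_low_risk(_suspect):
    """A useless 'model' that labels every suspect low risk."""
    return False  # predicts "will not reoffend" for everybody

predictions = [always_low_risk(s) for s in suspects]

correct = sum(p == actual for p, actual in zip(predictions, suspects))
print(f"Accuracy: {correct / len(suspects):.1%}")  # roughly 90%
```

Without knowing the base rates, the evaluation methodology and the training data, an accuracy figure tells you almost nothing, and a proprietary system gives you access to none of those.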
The problem with relying on AIs to make decisions about law and order is that the judicial system, at least in most so-called developed nations, is supposed to be transparent (even if it usually isn’t). Proprietary systems are, by definition, not transparent, which makes them easier for the State to abuse.