How Algorithms Can Make The Justice System Fairer

Reprogramming biased algorithms is much easier than reprogramming biased humans

Part 1 of a series about how emerging technologies can drive societal progress in the areas that matter. This piece is based on an academic article I wrote for the New Zealand Law Journal.

It is not particularly controversial to label the justice system as broken, whether in New Zealand or around the world. New Zealand's own Justice Minister has said as much, in a refreshingly despondent address to the United Nations. We lock far too many people up, fail to rehabilitate most offenders and the whole system is plagued by structural racism. On these points, most lawyers are likely to agree. However, zeroing in on a particular root cause of this mess might cause a few lawyers to twitch in their seats. A towering amount of evidence suggests that one of the key problems for the justice system might be the elephant in the (court)room - the human foibles of the judges themselves.

This is no particular attack on New Zealand's judiciary per se, but instead an acknowledgement of some of the inherent weaknesses of human analysis and decision-making. It is increasingly hard to ignore the evidence that structural inequities in the justice system are linked to the level of discretion judges have in dispensing justice, and hard to resist the conclusion that a little less reliance on human judgement might yield a systematically fairer system. While judges will always play a critical and irreplaceable role in our justice system, we should be open-minded about the possibility of deploying emerging technologies in courts to help correct for some of the areas where human analysis is weakest.

Some of these weaknesses are well-documented. Human decision-makers, including judges, tend to make decisions based on their instincts and informed by their experience: informed guesses and gut feelings. A broad literature in behavioural economics points out that this kind of decision-making is prone to systematic errors. While judges are often held up as the pinnacle of reasoned, deliberative decision-making ('judgement' is literally in their job title), they are as susceptible to biases and heuristics (cognitive shortcuts) as the rest of us. This susceptibility is not judges' fault, of course, but it does contribute to some of the systemic unfairness in the justice system.

As an example, take bail decisions, where judges decide whether defendants should be released on bail before their trial or kept in jail. Millions of these critical decisions are made every year around the world, by judges who sometimes have only a matter of minutes to come to a decision. Unsurprisingly, different judges take very different approaches to these subjective decisions - one study suggested that some judges in New York were half as likely to grant bail as other judges. This is a simple but clear example of the arbitrariness that pervades the justice system. To have legitimacy, any justice system must be perceived as fair, and it is hard to defend a system where a defendant's fate might be decided by the luck of the draw of which judge happens to be presiding that day. Even worse are the racial inequities driven by these instinctive decisions - black defendants are less likely to be released than white defendants.

Using algorithms to supplement human judges can help to ameliorate some of these dangerous byproducts of human decision-making. Algorithms are imperfect and can contain biases of their own, particularly when their training data reflects decades of human racism (we want algorithms to correct for structural human biases, rather than double down on them). Nevertheless, new algorithms show a lot of promise in making justice decisions that are systematically fairer. In one test, a machine-learning algorithm's decisions saw incarceration rates fall 42% (that is, more defendants were released) without crime rates increasing at all. In a real-life example, New Jersey's adoption of algorithms in the justice system contributed to a 16% drop in the pretrial jail population over just six months.

How can algorithms outperform experienced judges on these metrics? Algorithms process decisions in a way that is fundamentally different to how humans come to decisions, which offers certain advantages in the context of a judicial system. We have discussed how humans tend to make intuitive decisions informed by experience - informed guesses. By contrast, an algorithm does not guess. It applies a learned formula to a particular set of data, using statistical techniques such as regression to map the relationships between variables. This creates a model the algorithm can then use to come to a 'decision' when presented with new data in real life. When algorithms can be given feedback on whether their decisions were ultimately correct or not, they can recalibrate their model to improve their future decision-making. Needless to say, humans cannot recursively adjust their intuition in the same way - we are instinctive creatures at heart.
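To make this concrete, here is a minimal sketch of the regression-and-feedback loop described above. Everything in it is invented for illustration: the two features (prior offences and past failures to appear) and the synthetic data are assumptions, not the inputs of any real bail tool. The point is only to show how a model maps features to a risk score and nudges its weights each time it learns an actual outcome.

```python
import math
import random

random.seed(0)  # deterministic for illustration

# Hypothetical features per defendant: [prior_offences, failures_to_appear],
# plus an outcome label: 1 if they later failed to appear at trial, else 0.
# The data below is synthetic -- generated from a made-up "true" risk formula.
def synth_defendant():
    priors = random.randint(0, 5)
    failures = random.randint(0, 2)
    true_risk = 1 / (1 + math.exp(-(-2.0 + 0.6 * priors + 1.0 * failures)))
    return [priors, failures], 1 if random.random() < true_risk else 0

data = [synth_defendant() for _ in range(2000)]

# Logistic model: p(fail to appear) = sigmoid(w0 + w1*priors + w2*failures)
w = [0.0, 0.0, 0.0]

def predict(features, w):
    z = w[0] + w[1] * features[0] + w[2] * features[1]
    return 1 / (1 + math.exp(-z))

def update(features, outcome, w, lr=0.05):
    # "Recalibration": each observed outcome nudges the weights a little
    # (one step of stochastic gradient descent on the log-loss).
    err = predict(features, w) - outcome
    w[0] -= lr * err
    w[1] -= lr * err * features[0]
    w[2] -= lr * err * features[1]

for features, outcome in data:
    update(features, outcome, w)

# After training, the model separates low- and high-risk profiles.
low = predict([0, 0], w)   # no priors, no missed appearances
high = predict([5, 2], w)  # many priors, repeated missed appearances
print(f"low-risk score: {low:.2f}, high-risk score: {high:.2f}")
```

The feedback step is the key contrast with human intuition: every confirmed outcome adjusts the weights by a small, auditable amount, whereas a judge cannot quantify how a single case should shift their instincts.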

While it might seem strange that we could make the justice system fairer by introducing algorithmic assistants, the evidence to date is promising. However, it is important to note some of the existing limits on this progress, and the sizeable barriers to the integration of algorithms. For instance, one limit is that the kinds of algorithms we are talking about are quite function-specific. Bail decisions are well-suited to the kind of data-driven analysis that algorithms perform. Other decisions, particularly those requiring complicated or novel analysis of the law, will continue to be the exclusive domain of human judges. However, the relatively narrow application of algorithms should not detract from the very real benefits they can offer as a fairness-enhancing supplement for certain processes in the justice system.

One of the most exciting aspects of algorithms and artificial intelligence is the breakneck speed at which the technology advances year to year. This may well open the door to further ways to help make the justice system fairer. However, one of the biggest barriers in the way of algorithmic assistance may be the lack of cooperation from leaders in the justice system. In 2017, Chief Justice Roberts of the United States Supreme Court decried algorithms as "putting a significant strain on how the judiciary goes about doing things". When Kentucky passed a law to require judges to have regard to an algorithm's suggestions when deciding whether to grant bail, the judges overruled the algorithm's recommendations two-thirds of the time - and were particularly likely to decline bail when the defendant was black.

The Kentucky case study exemplifies the complexities of technology in the justice system. While there is good evidence that using algorithms in specific areas can help promote fairness and reduce inequities, this is far from a panacea for a broken justice system. Algorithms have their limitations, and the disposition of human decision-makers towards new high-tech tools is just as important as algorithms’ technological capabilities. Nevertheless, there are plenty of reasons to be hopeful about how artificial decision-making can, counter-intuitively, make justice systems more humane.