Artificial Interruption

by Alexander Urbelis (alex@urbel.is)

Algorithmic Bias and Due Process

In December 2021, New York City passed a first-of-its-kind law regulating the use of artificial intelligence (AI) in the employment context.  That law bans employers from using artificial intelligence or algorithm-based technologies for employment-related decisions - such as hiring, recruiting, retention, or promotion - unless an independent third party has audited those technologies for "bias."

The New York City Council - ordinarily useless, banal, behind the times, and out of touch with the people it serves - seemed for once to have passed an ordinance that was worthwhile, on the right track, and protective of New Yorkers' rights against discrimination and bias, even if a machine was responsible for the prejudice.  I applauded this legislation.  And because it comes into effect in January 2023, I have been helping numerous companies navigate this audit requirement and prepare their disclosures about what, if any, bias results from their AI or algorithm-based employment screening decisions.
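In practice, audits of this kind often reduce to simple selection-rate arithmetic.  As a rough sketch - not the statute's prescribed methodology, and with entirely hypothetical group names and counts - one common fairness check compares each group's selection rate to the most-favored group's rate; ratios below roughly 0.8 (the EEOC's "four-fifths" rule of thumb) are often treated as a red flag for adverse impact:

```python
# Illustrative bias-audit metric (a sketch, not the law's exact method):
# the "impact ratio" divides each group's selection rate by the rate of
# the most-selected group.  Ratios well below 1.0 suggest the screening
# tool may disadvantage that group.

def impact_ratios(selected, total):
    """selected/total: dicts mapping group name -> candidate counts."""
    rates = {g: selected[g] / total[g] for g in total}
    best = max(rates.values())
    return {g: rates[g] / best for g in rates}

# Hypothetical screening outcomes for two applicant groups:
ratios = impact_ratios(
    selected={"group_a": 50, "group_b": 20},
    total={"group_a": 100, "group_b": 100},
)
# group_a is selected at 0.50, group_b at 0.20, so group_b's
# impact ratio is 0.4 - far below the 0.8 rule-of-thumb threshold.
```

A real audit is more involved - intersectional categories, small-sample caveats, disclosure formats - but the core question is this simple: does the machine's output systematically favor one group over another?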

As a matter of fundamental fairness, it should be universally agreeable, and even self-evident, that we would want important decisions to be free from prejudice.  And if there exists an AI-based system that produced questionable or biased results because of an identified defect, then we would certainly want to revisit and remake those decisions on the basis of a revised process that was free from bias.

As human beings, our decisions are complex - the product of our experiences, emotions, education, and a range of other nuanced factors.  Unless someone is a bigot or racist, we all presumably strive to make our decisions in a manner that is free from bias or discrimination.  But the complexity of our emotions does not lend itself to being audited.  Bias may creep in despite our best efforts.  Someone may not think of themselves as racist, but because of their upbringing or experiences, certain prejudices, however subtle, may affect a decision.

This raises the question: could a well-regulated AI make better decisions than humans?  And, if the answer is anything resembling a "yes," then that raises a further question: could an AI that is free of bias or discrimination replace a jury in our legal system?  And, again, if the answer is anything resembling a "yes," then that raises a further question, a heartbreaking query this time: could an AI have prevented a tragic injustice that occurred on November 29, 2022, at 7:40 pm, in Bonne Terre prison, when the State of Missouri executed Kevin Johnson?

Kevin Johnson was 37.  He had been on death row since he was convicted of first-degree murder 17 years ago.  On July 5, 2005, Johnson killed a police officer, Sergeant William McEntee.  Johnson was 19 years old at the time.

The facts of what happened on July 5, 2005 are heartbreaking on many levels.  They do, however, explain in part why Johnson committed a murder for which he later unconditionally repented and for which he paid with his own life.

Police, suspecting Johnson of a probation violation, showed up at his home on July 5 with a warrant for his arrest.  According to multiple sources, Johnson saw the police arrive and woke up his 12-year-old little brother, "Bam Bam," who then ran next door to their grandmother's house.  Bam Bam had health issues: he was born addicted to crack-cocaine and had a congenital heart defect.  When he arrived at his grandmother's house, Bam Bam's health began to flag.  According to the Missouri Independent, police officers, including McEntee, entered the grandmother's residence, after which Bam Bam began to have a seizure.

The officers ignored Bam Bam's distress and seizure.  According to various reports, the officers stepped over Bam Bam's limp body and never helped him or called for help.  The police officers also prevented Bam Bam's mother from offering any comfort or aid, or from even entering the house, despite the ongoing seizure.  A short time later, in a hospital, Bam Bam died.

Johnson was distraught.  He kicked his bedroom door off its hinges and, according to the Associated Press, then wandered around his neighborhood, furious with McEntee for preventing his mother from helping Bam Bam even as his little brother convulsed.  Enraged at McEntee, Johnson screamed "He killed my brother!" as he roamed the streets.

A few short hours after Bam Bam's death, Sergeant McEntee was again in Johnson's neighborhood, called in this time to investigate someone setting off fireworks.  Johnson saw McEntee, pulled a gun on him, and shot him.  McEntee tried to flee, and Johnson shot him two more times, killing him.

The State tried Johnson for first-degree murder, a charge that requires premeditation and can carry the death penalty.  The jury, however, hung: jurors were not convinced that the murder was premeditated rather than the result of Johnson's emotional and impulsive state.

In 2007, the State tried Johnson again and convicted him of first-degree murder.  That conviction resulted in a sentence of death.

The Missouri judiciary appointed a special prosecutor to investigate claims of racial bias in that second trial.  The special prosecutor's report concluded that race played a "decisive factor" in the death sentence and that the process by which Johnson was convicted was "infected" with racial bias.  The government, for one thing, had tried to remove all Black persons from the jury.

The special prosecutor sought to vacate Johnson's death sentence.  This was the first time in Missouri that a prosecutor sought to overturn a conviction tainted by discrimination.

Despite the apparent racism and these extraordinary circumstances, after oral argument the Missouri Supreme Court refused to grant any relief.  The Governor of Missouri, Mike Parson, denied clemency and would not commute Johnson's sentence to life.  The last stop was the U.S. Supreme Court.  Though Justices Jackson and Sotomayor vigorously dissented, that institution failed to provide any relief as well, clearing the way for Johnson's lethal injection.

As technologists, we understand the axiom of "garbage in, garbage out" as applied to the input and output of a function or program.  As technologists, we know that a flawed system will produce flawed results that cannot be trusted.  As technologists, this is why we understand and applaud auditing of AI or algorithms for bias, discrimination, and any kind of flaw that could negatively impact our lives or the lives of others.  Why, then, do we neither expect nor demand the same kind of common sense and rigor be applied to our justice system?

How much injustice can this system tolerate in the name of justice before it fails to an insecure state?  When evidence that racial bias and discrimination permeated a capital case lies in front of you - and even a prosecutor is clamoring for the sentence to be vacated - how can we countenance those in power when they refuse to act?  Those persons in positions of political power assumed an awesome responsibility when they took office, and we cannot allow them to shirk their obligations or to act as if they are somehow allowed to abdicate adherence to basic moral principles.

The irony, of course, is that it was because of politics that Governor Parson refused to intervene and stop an execution that was a well-documented result of a racially biased and discriminatory legal proceeding.  It was self-interest.  It was born of his own political ambition.  When this is the norm, are not dispassionate, unambitious machines better decision-makers?

I have no idea whether a detached artificial intelligence would have prevented any of the horrible outcomes in the process that led to the State of Missouri taking the life of Kevin Johnson.  What I think is crystal clear, however, is that we need to apply the same low fault-tolerance thresholds we demand of the technological systems we design to the justice system of which we are all a part.  If we are right to be so cautious about bias seeping into the decision of whether a private company grants a job applicant an interview, then, a fortiori, shouldn't we be infinitely more concerned about bias seeping into the decision of whether the State should terminate the life of one of its own citizens?

Auditing AI systems for any hint of bias is not only about identifying and remediating prejudicial algorithms.  It is also about accountability for discrimination and prejudice.  We demand accountability from private parties, yet accountability is precisely what has been missing from our political processes and our justice system.  So many of the institutions that should have acted to prevent injustice - the prosecutor's ethical obligations, the jury system, the appointment of a special prosecutor, the trial courts, the appellate courts, and the U.S. Supreme Court - turned a blind eye.

No doubt there were many who tried to act, but it was aggressive apathy that prevailed in Johnson's case.  With such ample evidence around us that we have created and are perpetuating an imperfect justice system - one that produces prejudice with no accountability - we owe it to ourselves to prioritize change, especially if we are to continue being the ones auditing AI systems, and not the other way around.
