Injustice in the Justice System: AI — Justice Education Project and Encode Justice Event

Save the Date — July 30th 1–4PM EST

Image credit: Sarah Syeda

Humans are commonly believed to be swayed by emotion and bias when making decisions, while technology is thought to decide impartially. Unfortunately, this misconception has led to numerous wrongful arrests and unfair sentences.

How does Artificial Intelligence Work?

In the simplest terms, artificial intelligence (AI) involves training a machine to complete a task. The task can be as simple as playing chess against a human or as complex as predicting the likelihood of a defendant recommitting a crime. In a light-hearted game of chess, bias in AI does not matter much, but when AI helps determine someone’s future, questioning its accuracy is crucial to maintaining justice.
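To make "training a machine to complete a task" concrete, here is a minimal sketch of one of the oldest learning algorithms, a perceptron, taught to separate two classes of points from labeled examples. The data and task are entirely made up for illustration:

```python
# A minimal sketch of "training": a perceptron adjusts its weights each
# time it misclassifies a labeled example, until it can complete the task.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights, one per input feature
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred  # 0 when correct; +/-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, point):
    return 1 if w[0] * point[0] + w[1] * point[1] + b > 0 else 0

# Toy task: classify 2-D points as "above" (1) or "below" (0) the line y = x.
data = [((0, 1), 1), ((1, 2), 1), ((2, 3), 1),
        ((1, 0), 0), ((2, 1), 0), ((3, 2), 0)]
w, b = train_perceptron(data)
print(predict(w, b, (0, 5)))  # 1 (above the line)
print(predict(w, b, (5, 0)))  # 0 (below the line)
```

The key point is that the machine learns entirely from the examples it is given; as the next sections show, that dependence on data is exactly where bias enters.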

AI and the Criminal Justice System

From cameras on the streets and in shops to judicial risk assessments, AI is used by law enforcement every day, though it is in many cases far from accurate or fair. A recent federal study found that most commercial facial recognition technologies exhibited bias: African-American and Asian faces were falsely identified 10 to 100 times more often than Caucasian faces. In one instance, a Black man named Nijeer Parks was misidentified as a criminal suspect in New Jersey and spent ten days in jail.

Risk assessment algorithms present similar issues. These tools generally look at a defendant’s economic status, race, gender, and other factors to calculate a recidivism risk score, which judges use to decide whether a defendant should be incarcerated before trial, how their bail should be set, what their sentence should be, and more. Although the algorithms are meant to assess a defendant’s recidivism risk impartially, they become biased because the data used to build them is biased. As a result, the risk assessments mimic the very biases that would exist if a judge weighed those factors directly.
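The mechanism of "biased data in, biased scores out" can be sketched in a few lines. The following toy example uses entirely hypothetical records; the "model" simply learns average re-arrest rates per group, the way many statistical risk tools learn base rates from historical data. If one group is over-policed in the historical records, the learned scores inherit that bias:

```python
# Illustrative sketch with hypothetical data: a naive "risk model" that
# learns average re-arrest rates per group from historical records.
from collections import defaultdict

# (group, rearrested) pairs. Group B is over-policed in this toy history,
# so more of its members appear as re-arrested in the data, even if the
# two groups' true offending rates were identical.
history = ([("A", 0)] * 80 + [("A", 1)] * 20 +
           [("B", 0)] * 50 + [("B", 1)] * 50)

def train(records):
    counts, positives = defaultdict(int), defaultdict(int)
    for group, rearrested in records:
        counts[group] += 1
        positives[group] += rearrested
    # The learned "risk score" is just each group's observed re-arrest rate.
    return {g: positives[g] / counts[g] for g in counts}

risk_score = train(history)
print(risk_score["A"])  # 0.2
print(risk_score["B"])  # 0.5 -- the model faithfully reproduces the bias
```

Real risk assessment tools are far more complex, but the failure mode is the same: the algorithm has no way to distinguish genuine differences in behavior from differences in how the data was collected.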

Justice Education Project & Encode Justice

To combat and raise awareness about the biases in AI used in the criminal justice system, Justice Education Project (JEP) and Encode Justice (EJ) have partnered to organize a panel discussion with a workshop about algorithmic justice in law enforcement.

JEP is the first national youth-powered and youth-focused criminal justice reform organization in the U.S., with over 6,000 supporters and a published book. EJ is the largest network of youth in AI ethics, spanning over 400 high school and college students across 40+ U.S. states and 30+ countries, with coverage from CNN. Together, the organizations are hosting a hybrid event on July 30th, 2022 from 1–4PM EST at the Church of Ascension, 127 Kent St, Brooklyn, NY 11222. To increase accessibility, the event can also be joined via Zoom.

Sign up here to hear from speakers like Raesetje Sefala, an AI Research Fellow at the Distributed AI Research (DAIR) institute, Chhavi Chauhan, leader of the Women in AI Ethics Collective, Albert Fox Cahn, founder and executive director of the Surveillance Technology Oversight Project (S.T.O.P.), Logan Stapleton, a third-year Computer Science PhD candidate in GroupLens at the University of Minnesota, Aaron Sankin, a reporter at The Markup, and Neema Guilani, Head of National Security, Democracy and Civil Rights Public Policy, Americas at Twitter. Join us and participate in discussions to continue our fight for algorithmic justice in the criminal justice system.

Facial Recognition Linked to Third-Known Wrongful Arrest of Black Man

National Police Foundation

In early 2019, Nijeer Parks, a Black man from Paterson, New Jersey, was wrongly arrested in Woodbridge, New Jersey due to a misidentification by facial recognition software. The software matched Parks to footage of an individual shoplifting and then driving away, hitting a police car. At the time of the crime, Parks had neither a car nor a driver’s license, which he told the Municipal Court Clerk before his arrest. Additionally, Parks claims to have presented a solid alibi that would have cleared him of any reasonable suspicion of being the perpetrator. Despite the compelling evidence clearing Parks of any involvement, police refused to examine other forensic evidence at the scene of the crime, arresting and detaining him solely on the basis of the facial recognition mismatch. As a result, he spent ten days in jail and $5,000 defending himself.

Mr. Parks has filed a lawsuit in the Superior Court of New Jersey against several Woodbridge officials, alleging that his wrongful arrest was the product of racial profiling and that the force he encountered during his arrest was excessive. His lawsuit also asserts that facial recognition is faulty and untrustworthy, and that no reasonable police officer should issue an arrest warrant based solely on facial identification, especially when all other evidence indicated Parks was nowhere near the scene of the crime. The suit, filed in late 2020, has drawn backlash from those who believe law enforcement should use tools like facial recognition to operate more efficiently, a view that ignores the blatant bias and potential harm the technology poses.

This is not the first case demonstrating the danger of using facial recognition in making arrests. Mr. Parks is the third individual known to have been falsely arrested due to a facial identification error; all three were African-American men. A study by the National Institute of Standards and Technology (NIST) examined 189 algorithms from 99 developers and found empirical bias across the facial recognition industry: using millions of images, it showed that African Americans and Asian Americans were 10 to 100 times more likely to be falsely matched than Caucasian individuals. While different algorithms had different error rates, racial bias throughout facial recognition cannot be ignored.

Facial recognition has no place in law enforcement. Police agencies, which have a documented history of racial oppression, should not be given tools that allow them to continue to perpetuate racial biases and target communities of color without accountability or transparency. Aside from unfairly targeting racial minorities and LGBTQ+ people, facial recognition presents a very real threat to both free speech and the Fourth Amendment. The government has repeatedly shown that it cannot be trusted with the software: its secretive and unlawful use prompted congressional hearings in early 2019. Facial recognition made headlines again this summer when it was used to surveil and detain thousands of protesters marching for the Black Lives Matter movement. Its selective use against communities of color is yet another reason the technology is so terrifying.

The danger of facial recognition does not lie simply in its flaws: even accurate facial recognition pushes us closer and closer to a state of surveillance, a clear violation of our constitutional rights and civil liberties. Regulation simply does not go far enough. Detroit is just one example: even after the false arrest of Michael Oliver in July of 2019, the city still refused to make any concrete legislative change to prevent more injustice at the hands of facial recognition. In September 2019, the Board of Commissioners approved guidelines that required approval within the Detroit Police Department and prohibited its use at protests. However, the “reforms” were not enough: another Black man, Robert Julian-Borchak Williams, was arrested and detained for over 30 hours in January 2020 due to another facial recognition mismatch. Facial recognition is a dangerous tool, and injustices like the arrest of Nijeer Parks further show that it must be banned in law enforcement.