Save the Date — July 30th, 1–4PM ET

Image credit: Sarah Syeda

When making decisions, humans are believed to be swayed by emotions and biases, while technology is assumed to decide impartially. Unfortunately, this common misconception has led to numerous wrongful arrests and unfair sentences.

How Does Artificial Intelligence Work?

In the simplest terms, artificial intelligence (AI) involves training a machine to complete a task. The task can be as simple as playing chess against a human or as complex as predicting the likelihood that a defendant will reoffend. In a light-hearted game of chess, bias in AI does not matter, but when it comes to determining someone’s future, questioning the accuracy of AI is crucial to maintaining justice.
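To make the idea of “training” concrete, here is a minimal sketch in Python. It is an illustration only, assuming the scikit-learn library and entirely invented toy numbers; real systems use far more data and features, but the principle is the same: the machine is never handed an explicit rule, it infers one from labeled examples.

```python
# A minimal sketch of what "training" means, using scikit-learn.
# The data is invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Toy training examples: each row is [prior_offenses, age], and the
# label records whether the (fictional) person reoffended (1) or not (0).
X_train = [[0, 45], [1, 38], [5, 22], [7, 19], [2, 30], [6, 25]]
y_train = [0, 0, 1, 1, 0, 1]

# "Training": the model infers a decision rule from the examples.
model = DecisionTreeClassifier(random_state=0)
model.fit(X_train, y_train)

# The learned rule applied to a new, unseen case.
print(model.predict([[4, 27]]))
```

Whatever patterns exist in the examples, including unfair ones, end up baked into the rule the model learns.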

AI and the Criminal Justice System

From cameras on streets and in shops to judicial risk assessments, AI is used by law enforcement every day, and in many cases it is far from accurate or fair. A recent federal study found that most commercial facial recognition technologies exhibited bias, falsely identifying African-American and Asian faces 10 to 100 times more often than Caucasian faces. In one instance, Nijeer Parks, a Black man in New Jersey, was misidentified by facial recognition as a robbery suspect and spent 11 days in jail.

Risk assessment algorithms present similar issues. These assessments typically weigh a defendant’s economic status, race, gender, and other factors to produce a recidivism risk score, which judges use to decide whether a defendant should be incarcerated before trial, what their sentence should be, how high to set bail, and more. Although the algorithms are meant to evaluate recidivism risk impartially, they become biased because the data used to build them is biased. As a result, risk assessments reproduce the very biases a judge would exhibit when weighing those same factors, as the sketch below illustrates.
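The mechanics of that feedback loop are easy to demonstrate. The sketch below is a hypothetical illustration with synthetic data (assuming NumPy and scikit-learn): two groups behave identically, but one group’s behavior is recorded far more often, mimicking over-policing. A model trained on those records assigns the over-policed group roughly twice the “risk,” even though nothing about the underlying behavior differs.

```python
# Sketch: bias in how data is collected flows into a risk model.
# All data here is synthetic and for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)       # sensitive attribute: group 0 or 1
behavior = rng.random(n) < 0.20     # identical true rate in both groups

# Biased labels: the same behavior is recorded (arrested) 90% of the
# time for group 1 but only 45% of the time for group 0.
recorded = behavior & (rng.random(n) < np.where(group == 1, 0.90, 0.45))

# Train a simple risk model on the biased records.
model = LogisticRegression().fit(group.reshape(-1, 1), recorded)

for g in (0, 1):
    risk = model.predict_proba([[g]])[0, 1]
    print(f"group {g}: predicted risk = {risk:.2f}")
# Prints roughly 0.09 for group 0 and 0.18 for group 1: the model has
# learned the bias in the records, not a difference in behavior.
```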

Justice Education Project & Encode Justice

To combat the biases in AI used by the criminal justice system and raise awareness about them, the Justice Education Project (JEP) and Encode Justice (EJ) have partnered to organize a panel discussion and workshop on algorithmic justice in law enforcement.

JEP is the first national youth-powered and youth-focused criminal justice reform organization in the U.S., with over 6,000 supporters and a published book. EJ is the largest network of youth in AI ethics, spanning over 400 high school and college students across 40+ U.S. states and 30+ countries, with coverage from CNN. Together, the organizations are hosting a hybrid event on July 30th, 2022, from 1–4PM ET at the Church of Ascension, 127 Kent St, Brooklyn, NY 11222. To increase accessibility, the event can also be joined via Zoom.

Sign up here to hear from speakers including:

- Raesetje Sefala, AI Research Fellow at the Distributed AI Research (DAIR) Institute
- Chhavi Chauhan, leader of the Women in AI Ethics Collective
- Albert Fox Cahn, founder and executive director of the Surveillance Technology Oversight Project (S.T.O.P.)
- Logan Stapleton, third-year Computer Science PhD candidate in GroupLens at the University of Minnesota
- Aaron Sankin, reporter at The Markup
- Neema Guilani, Head of National Security, Democracy, and Civil Rights Public Policy, Americas at Twitter

Join us and participate in the discussions as we continue the fight for algorithmic justice in the criminal justice system.

Sources:

https://csuglobal.edu/blog/how-does-ai-actually-work

https://www.nytimes.com/2020/02/07/learning/should-facial-recognition-technology-be-used-in-schools.html

https://www.technologyreview.com/2019/01/21/137783/algorithms-criminal-justice-ai/

https://www.cnn.com/2021/04/29/tech/nijeer-parks-facial-recognition-police-arrest/index.html