Injustice in the Justice System: AI — Justice Education Project and Encode Justice Event

Save the Date — July 30th 1–4PM EST

Image credit: Sarah Syeda

Humans are believed to be swayed by emotions and biases when making decisions, while technology is assumed to make impartial ones. Unfortunately, this common misconception has led to numerous wrongful arrests and unfair sentences.

How Does Artificial Intelligence Work?

In the simplest terms, artificial intelligence (AI) involves training a machine to complete a task. The task can be as simple as playing chess against a human or as complex as predicting the likelihood of a defendant reoffending. In a light-hearted game of chess, bias in AI does not matter, but when it comes to determining someone's future, questioning AI's accuracy is crucial to maintaining justice.
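To make "training" concrete, here is a minimal, purely illustrative sketch (the data and the one-feature threshold rule are invented for this example): instead of following hand-written rules, the program derives its decision rule from labeled examples.

```python
# Minimal sketch of "training a machine": the model adjusts itself to
# labeled examples instead of following hand-written rules.
# (Illustrative only; real systems use far richer models and data.)

def train_threshold(examples):
    """Learn a cutoff separating label 0 from label 1 on one feature."""
    zeros = [x for x, y in examples if y == 0]
    ones = [x for x, y in examples if y == 1]
    return (max(zeros) + min(ones)) / 2  # midpoint between the classes

def predict(threshold, x):
    """Classify a new input using the learned cutoff."""
    return 1 if x > threshold else 0

# Hypothetical labeled data: (feature value, class)
data = [(1.0, 0), (2.0, 0), (3.0, 0), (7.0, 1), (8.0, 1), (9.0, 1)]
cutoff = train_threshold(data)  # midpoint of 3.0 and 7.0 -> 5.0
print(predict(cutoff, 2.5))  # 0
print(predict(cutoff, 7.5))  # 1
```

The key point: whatever patterns (or biases) live in the examples become the model's rule.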

AI and the Criminal Justice System

From cameras on the streets and in shops to judicial risk assessments, AI is used by law enforcement every day, though it is in many cases far from accurate or fair. A recent federal study found that most commercial facial recognition technologies exhibit bias, with African-American and Asian faces falsely identified 10 to 100 times more often than Caucasian faces. In one instance, a Black man named Nijeer Parks was misidentified as the suspect in a New Jersey robbery and spent 11 days in jail.

Risk assessment algorithms present similar issues. These tools generally weigh a defendant's economic status, race, gender, and other factors to produce a recidivism score, which a judge then uses to decide whether the defendant should be incarcerated before trial, what their sentence should be, how their bail is set, and more. Although the algorithms are meant to evaluate recidivism risk impartially, they become biased because the data used to build them is biased. As a result, risk assessments reproduce the very biases a judge would exhibit by weighing those same factors.
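The mechanism is easy to demonstrate. In this toy sketch (all group names and numbers are invented), a "risk score" is learned from historical arrest records; because over-policed groups generate more recorded arrests, the learned score assigns different risks to otherwise identical defendants.

```python
# Toy illustration with invented data: a "risk score" learned from
# historical arrest records reproduces whatever bias those records
# already contain.
from collections import defaultdict

# Hypothetical records: (neighborhood, was_rearrested). An over-policed
# neighborhood ("A") generates more recorded rearrests regardless of
# the underlying behavior of its residents.
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def train_base_rates(records):
    """'Train' by memorizing the recorded rearrest rate per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [rearrests, total]
    for group, rearrested in records:
        counts[group][0] += int(rearrested)
        counts[group][1] += 1
    return {group: hits / total for group, (hits, total) in counts.items()}

def risk_score(model, group):
    """Score a new defendant using only the learned group rate."""
    return model[group]

model = train_base_rates(history)
# Two otherwise identical defendants receive different scores purely
# because of where historical arrests were recorded.
print(risk_score(model, "A"))  # 0.75
print(risk_score(model, "B"))  # 0.25
```

Real risk assessment tools are far more complex, but the feedback loop is the same: biased inputs yield biased scores.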

Justice Education Project & Encode Justice

To combat the biases in AI used in the criminal justice system and raise awareness of them, the Justice Education Project (JEP) and Encode Justice (EJ) have partnered to organize a panel discussion and workshop about algorithmic justice in law enforcement.

JEP is the first national youth-powered and youth-focused criminal justice reform organization in the U.S., with over 6,000 supporters and a published book. EJ is the largest network of youth in AI ethics, spanning over 400 high school and college students across 40+ U.S. states and 30+ countries, with coverage from CNN. Together, the organizations are hosting a hybrid event on July 30th, 2022 at the Church of Ascension, 127 Kent St, Brooklyn, NY 11222, from 1–4PM EST. To increase accessibility, the event can also be joined via Zoom.

Sign up here to hear from speakers like Raesetje Sefala, an AI Research Fellow at the Distributed AI Research (DAIR) institute; Chhavi Chauhan, leader of the Women in AI Ethics Collective; Albert Fox Cahn, founder and executive director of the Surveillance Technology Oversight Project (S.T.O.P.); Logan Stapleton, a 3rd-year Computer Science PhD candidate in GroupLens at the University of Minnesota; Aaron Sankin, a reporter from The Markup; and Neema Guilani, Head of National Security, Democracy and Civil Rights Public Policy, Americas at Twitter. Join us and participate in discussions to continue our fight for algorithmic justice in the criminal justice system.

Deepfakes and the Spread of Misinformation

Deepfakes began to garner attention in 2017, when a video of comedian Jordan Peele using deepfake technology to pose as former President Barack Obama went viral. In the years since, the disturbing potential dangers of deepfakes have come to light, affecting people around the globe.

What is a Deepfake and Voice Cloning?

A deepfake is a form of artificial intelligence that replaces a person in an existing image or video with the likeness of someone else. Deepfakes can be as harmless as someone humorously impersonating a friend. However, recent trends have shown the use of deepfakes in political spheres, where they can spread misinformation.

Voice cloning is a subset of deepfake technology that focuses on replicating someone's voice. It has been used more and more often and has been introduced by various companies as a novelty tool. Adobe's VoCo is at the forefront: after hearing about twenty minutes of someone's voice, the technology can replicate it. Adobe is also researching a watermark detection system to prevent forgery.

What are the Potential Consequences?

Though deepfakes have not yet been used to spread misinformation on a global level, they are already being used against ordinary people, with voice cloning the most popular form. Large companies such as Adobe and Respeecher have been developing beta technology that can replicate voices. In practice, this technology could be used to impersonate public figures.

Recently, voice cloning was used in a memorial film about the late Anthony Bourdain, supplying lines of narration in his voice. His consent was never clearly given, and the project began after he had passed. Many people were quick to point out that he never said the lines and that "it was ghoulish." This raised a new question: is it ethical to clone the voice of someone who has died or cannot give consent? In Bourdain's case, many on social media decided it was unethical, while many close to him raised no complaints about the use of his voice.

Deepfakes have also been used in other unethical ways. One example is the case of Noelle Martin, an Australian activist whose face was deepfaked into an adult film when she was 17. Her social media accounts were used as reference material to digitally steal her face and insert it into adult photos and videos. She contacted various government agencies and the companies themselves, but nothing worked. The person behind it was anonymous, making them virtually impossible to track down, and so nothing happened.

What is being done?

Researchers at various institutions are using different methods to identify deepfakes. At the University at Albany, Professor Siwei Lyu has worked on detecting deepfakes using "resolution inconsistencies," which occur when a swapped face does not match its surroundings in a photo or video. These inconsistencies help researchers develop detection methods.

At Purdue University, Professor Edward Delp and former research assistant David Güera are using convolutional neural networks to detect deepfakes in videos. Their network relies on "frame-level inconsistencies": the artifacts created when deepfake technology places someone's face onto another, frame by frame. They train the neural network on sets of deepfaked videos, aiming to exploit these inconsistencies to identify deepfaked videos created with popular tools.
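A rough intuition for frame-level inconsistency (this sketch is illustrative only; the Purdue work uses a trained convolutional network, not a fixed threshold): frames of a genuine clip change smoothly over time, while crude face swaps introduce abnormal frame-to-frame jumps.

```python
# Toy frame-level inconsistency check (illustrative only; real
# detectors learn these patterns with neural networks).
import numpy as np

def frame_inconsistency(frames):
    """Mean absolute pixel change between consecutive frames."""
    diffs = [np.abs(b - a).mean() for a, b in zip(frames, frames[1:])]
    return float(np.mean(diffs))

def looks_fake(frames, threshold=10.0):
    """Flag clips whose temporal jitter exceeds the threshold."""
    return frame_inconsistency(frames) > threshold

rng = np.random.default_rng(0)
base = rng.uniform(0, 255, size=(8, 8))
smooth_clip = [base + i for i in range(5)]  # drifts gently, like real video
jumpy_clip = [rng.uniform(0, 255, size=(8, 8)) for _ in range(5)]  # incoherent

print(looks_fake(smooth_clip))  # False
print(looks_fake(jumpy_clip))   # True
```

A learned detector generalizes far better than a single statistic, but the signal it hunts for is of this temporal-consistency kind.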

Researchers at UC Riverside and UC Santa Barbara are comparing two methods, CNNs and LSTMs, to see how well each identifies deepfaked media. A convolutional neural network (CNN) is, at its most basic, a type of deep learning algorithm trained to distinguish photos by learning specific visual features; for deepfake identification, it can be used to find the inconsistencies mentioned above. LSTM-based networks are part of the process because, according to the researchers, they help classify and localize manipulations in the processed media, organizing the large body of media so that results can be found more easily.

The researchers are testing both methods on how well they identify the inconsistencies present in these videos, and they concluded that both CNN- and LSTM-based networks are effective at identifying deepfakes. Looking ahead, they would like to see the two methods combined.

Beyond research, public advocacy in the law is another way to help stop deepfakes. The Malicious Deep Fake Prohibition Act of 2018 would have established a new criminal offense for distributing fake online media that appears realistic. Though it was not passed, it would have been a stride in the right direction and could have helped many who have been wrongfully affected by these technologies.

Facial Recognition Technology at the Texas Border

Facial recognition technology is currently being used at the border in Texas, but concerns about its flaws are rising.

Image Credit: NPR

Facial Recognition and Biometric Technology at the Border

Facial recognition, a form of biometric technology, is being used by U.S. Customs and Border Protection at the Brownsville and Progreso Ports of Entry in Texas. Biometric software identifies individuals using their fingerprints, voices, eyes, and faces. At the border, it compares surveillance photos taken at the ports of entry to passport and ID photos in government records. While this may seem simple enough, concerns about the ethics and accuracy of the technology are rising.
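At its core, this kind of 1:1 verification reduces to comparing numerical "embeddings" of two photos against a similarity threshold. The sketch below is a generic illustration, not CBP's actual system; the embeddings are made-up vectors.

```python
# Generic sketch of 1:1 face verification (not any agency's real
# system): embed both photos as vectors, then accept the match only
# if their similarity clears a threshold.
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(live_embedding, passport_embedding, threshold=0.8):
    """Accept the identity claim if the embeddings are similar enough."""
    return cosine_similarity(live_embedding, passport_embedding) >= threshold

# Made-up embeddings for illustration.
passport = np.array([0.9, 0.1, 0.4])
same_person = np.array([0.88, 0.12, 0.38])  # near-identical embedding
different = np.array([0.1, 0.95, 0.2])

print(verify(same_person, passport))  # True
print(verify(different, passport))    # False
```

The threshold choice is where error rates live: set it loose and strangers match (false accepts); set it tight and legitimate travelers fail to match, which is exactly the failure mode the studies below describe.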


One of the most dangerous flaws of facial recognition technology is that it is disproportionately inaccurate when identifying people of color, transgender and nonbinary individuals, and women. A 2018 MIT study found that "the error rates for gender classification were consistently higher for females than they were for males, and for darker-skinned subjects than for lighter-skinned subjects," with error rates for identifying darker-skinned women reaching 46.5%–46.8% across numerous software systems. In other words, nearly half the time, facial recognition software will misidentify these women. Such high error rates show that the technology is unreliable and could subject people to unnecessary secondary inspections, unfounded suspicion, and even harassment at the ports of entry.

Compounding the problem, because facial recognition technology is still relatively new, the U.S. does not have comprehensive laws regulating its use, making the technology easier to abuse. Without regulation, the government is not required to be transparent about how it uses facial recognition, and it remains unclear how, and for how long, this information is stored. In addition, questions over the constitutionality of biometric technology have recently been raised, with some pointing out that its use could violate the Fourth Amendment.

While Customs and Border Protection claims that travelers have the option to opt out of these photographs, the ACLU reports that travelers who choose to opt out face harassment by agents, secondary inspections, and questioning, with some even having their requests denied because they did not inform the agents that they would be opting out before reaching the kiosks.

Because of inaccurate results and privacy concerns, it is understandable that travelers may choose not to participate in facial recognition, but doing so may lead to questioning and harassment. Facial recognition at the border is a lose-lose situation, no matter what travelers choose to do.

How Facial Recognition Affects People of Different Gender Identities

Image Credit: Getty Images

Artificial intelligence-powered surveillance technology has recently attracted controversy for its role in furthering discrimination against people of color and other marginalized groups.

This discrimination is seen in the many false arrests that have occurred due to misidentification by AI and software bias. Numerous examples have come to light of police using facial recognition software to incorrectly identify and arrest "suspects." Nijeer Parks, an African American man, was incorrectly matched to footage of a suspect and detained for several days, despite compelling evidence that he had no connection to the crime.

AI's flaws also affect the LGBTQ+ community. Many facial recognition systems are programmed to sort individuals into binary gender categories. Non-binary, agender, and gender non-conforming individuals are often sorted into these categories incorrectly, which entirely ignores their gender identities, and transgender individuals are often misgendered.

Studies at CU Boulder and MIT found that "facial analysis services performed consistently worse on transgender individuals, and were universally unable to classify non-binary genders." The Gender Shades Project found that software used by Microsoft, Face++, and IBM misidentified many demographics: according to its error analysis, 93.6% of the faces Microsoft misgendered were those of darker-skinned subjects, and 95.9% of the faces Face++ misgendered were female.

Being misidentified by the system can have terrible ramifications for many people of varying gender identities if this type of facial recognition software continues to be used by law enforcement.

It is important to note how the development of software systems influences their bias and inaccuracy. Congressional inquiries into the accuracy of facial recognition across demographics, as well as a study by the National Institute of Standards and Technology (NIST), have found that the technology is highly inaccurate when identifying non-white, female, or non-cisgender people; it is most effective on white, cisgender men. This reflects the fact that 80% of software engineers are men and 86% are Asian or white.

We’re seeing the growing problems that stem from the lack of diversity in careers that heavily impact our lives. Software bias is a reflection of the bias of those who write the code. By continuing the use of software with documented bias, we perpetuate an endless loop of marginalization and discrimination. The only solution is increasing diversity in fields that affect our everyday life, like software engineering.

Efforts to ban and regulate facial recognition usage, particularly by law enforcement, have increased in the recent past.

On the federal level, Senator Ed Markey (D-MA) has proposed the Facial Recognition and Biometric Technology Moratorium Act of 2020. The bill would impose restrictions on federal and state agencies that wish to use this technology and render information obtained by law enforcement through facial recognition inadmissible in court.

Many cities have restricted and regulated law enforcement’s use of facial recognition. These cities, including Minneapolis, Portland, San Francisco, New Orleans, Boston, and many more, have taken action against the software. It is imperative that more cities and the federal government follow in their path and prevent facial recognition technology from being used by law enforcement in the future.

Facial Recognition Can Now Identify People Wearing Masks

Image Credit: BBC

Japanese company NEC developed a facial recognition system that can identify people wearing face masks with near 100% accuracy.

The system focuses on parts of the face that are not covered, such as the eyes, to verify someone's identity. Verification takes less than one second and has an accuracy rate of over 99.9%, a stark improvement over earlier facial recognition algorithms, which could identify only 20–50% of images of people wearing face masks.
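One plausible way to verify a masked face (a generic sketch, not NEC's proprietary algorithm, with made-up embedding vectors) is to restrict the comparison to features from the uncovered eye region:

```python
# Illustrative sketch: when the lower face is covered, compare only
# the features from the uncovered eye region. Not NEC's actual method.
import numpy as np

def masked_similarity(a, b, visible):
    """Cosine similarity restricted to the visible feature indices."""
    a, b = a[visible], b[visible]
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 6-dim embeddings: first 3 dims = eye region,
# last 3 dims = mouth/chin region (zeroed out by a mask).
enrolled = np.array([0.9, 0.2, 0.5, 0.7, 0.1, 0.3])
masked_probe = np.array([0.88, 0.22, 0.48, 0.0, 0.0, 0.0])

eye_region = [0, 1, 2]
full_face = [0, 1, 2, 3, 4, 5]

# Restricting to the eye region rescues the match that the covered
# lower face would otherwise drag down.
print(masked_similarity(masked_probe, enrolled, eye_region) > 0.9)  # True
```

The design choice mirrors the article's description: discard the occluded features rather than let them corrupt the comparison.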

NEC worked to hone this technology for some time as wearing masks is common practice in Japan already, but it accelerated development to accommodate the COVID-19 pandemic. “Needs grew even more due to the coronavirus situation as the state of emergency [last year] was continuing for a long time, and so we’ve now introduced this technology to the market,” Shinya Takashima, Assistant Manager of NEC’s digital platform division, told Reuters.

The company is targeting 100 billion yen ($970 million) in sales in 2021 for its biometrics, facial recognition, and video analysis systems. Lufthansa and Swiss International Air Lines have employed the technology since it went on sale in October, and NEC is trialing facial recognition for automated payments at a convenience shop in its Tokyo headquarters.

London's Metropolitan Police Service, "the Met," used the NeoFace Live Facial Recognition system to compare faces in crowds against a government watchlist to "prevent crime, to protect vulnerable members of the community or to support the safety strategy." However, the force met intense backlash over the lack of records kept on face matches made at King's Cross and froze further use of the technology. England and Wales hope to increase transparency about the use of facial recognition systems and image sharing, and to revise prior guidelines on the use of surveillance cameras.

NEC aims to employ live facial recognition to promote social distancing, minimizing the need to touch surfaces or carry forms of identification like security cards, and thus to combat the spread of COVID-19. "Touchless verification has become extremely important due to the impact of the coronavirus," Shinya Takashima said. "Going forward we hope to contribute to safety and peace of mind by strengthening [efforts] in that area."

NEC’s NeoFace Live Facial Recognition system can identify people in real-time with close to 100% accuracy, and it effectively eliminates the obstacle of facial coverings like masks. Although this can help promote social distancing and mitigate the spread of disease through hands-free payments and identification, such a powerful tool, if left unregulated, could pose a great threat to human rights.

AI can discriminate based on gender, race and ethnicity, socioeconomic level, disability status, and more. Surveillance and facial recognition technologies disproportionately harm people of color, allow private information to be collected by third parties without a person's explicit consent, and can be weaponized to intentionally target certain groups, like Uyghur Muslims in China. With such a powerful facial recognition system, peaceful protesters can no longer hide their identities from police with face coverings, roadblocks to lethal autonomous weaponry are eliminated, and an individual's privacy is profoundly violated.

Image Credit: New York Post

Strict regulations, if not a complete ban on facial recognition systems, need to be implemented to ensure the ethical use of this technology, safeguard people’s rights to privacy, and begin to break the AI-driven cycle of systematic discrimination against minority groups.