Artificial intelligence-powered surveillance technology has recently attracted controversy for its role in furthering discrimination against people of color and other marginalized groups.
This discrimination is evident in the many false arrests caused by AI misidentification and software bias. Numerous cases have come to light of police using facial recognition software to incorrectly identify and arrest “suspects”. Nijeer Parks, an African American man, was incorrectly matched to footage of a suspect and detained for several days, despite compelling evidence that he had no connection to the crime.
AI’s flaws also affect the LGBTQ+ community. Many facial recognition systems are programmed to sort individuals into binary gender categories. Non-binary, agender, and gender non-conforming individuals are forced into these categories, which erases their gender identities entirely, and transgender individuals are frequently misgendered.
Studies at CU Boulder and MIT found that “facial analysis services performed consistently worse on transgender individuals, and were universally unable to classify non-binary genders.” The Gender Shades Project found that software from Microsoft, Face++, and IBM misclassified many demographics. Of the faces Microsoft’s software misgendered, 93.6% belonged to darker-skinned subjects; in Face++’s error analysis, 95.9% of misgendered faces were female.
If law enforcement continues to use this type of facial recognition software, misidentification could have terrible ramifications for people of all gender identities.
It’s important to note how the development of software systems influences their bias and inaccuracy. Congressional inquiries into the accuracy of facial recognition across a number of demographics, as well as a study by the National Institute of Standards and Technology (NIST), have found that the technology is highly inaccurate when identifying non-white, female, or non-cisgender people; it is most accurate on white, cisgender men. This reflects the demographics of the field: 80% of software engineers are men, and 86% are Asian or white.
We are seeing growing problems that stem from the lack of diversity in careers that heavily impact our lives. Software bias reflects the bias of those who write the code. By continuing to use software with documented bias, we perpetuate an endless loop of marginalization and discrimination. The only solution is to increase diversity in fields that shape our everyday lives, like software engineering.
Efforts to ban and regulate the use of facial recognition, particularly by law enforcement, have increased in recent years.
At the federal level, Senator Ed Markey (D-MA) has proposed a bill known as the Facial Recognition and Biometric Technology Moratorium Act of 2020. The bill would impose restrictions on federal and state agencies that wish to use this technology, and it would render information obtained by law enforcement through facial recognition inadmissible in court.
Many cities, including Minneapolis, Portland, San Francisco, New Orleans, and Boston, have restricted or regulated law enforcement’s use of facial recognition. It is imperative that more cities, and the federal government, follow their lead and prevent facial recognition technology from being used by law enforcement in the future.