TikTok’s ‘Beautiful’ Algorithm

Image Credit: Kristen Crawford

TikTok is an app that needs no introduction. Its name has been plastered across television screens thanks to the national security concerns raised by President Trump and its active use among teenagers in schools worldwide. The appeal of easy fame and virality keeps its audience hooked, with about 732 million monthly active users worldwide (DataReportal, 2021). Popular TikTok creators like Charli D'Amelio, Addison Rae, and Noah Beck have gone on to become high-fashion brand ambassadors, star in films, and appear on magazine covers. What connects most of these creators to their fame is unsettling: nearly all of them are Caucasian.

Facial Recognition Technology

Facial Recognition Technology (FRT) is a form of biometric technology that maps a person's facial landmarks, such as the placement of the nose, eyes, and mouth, and compares that map against a database of other landmark maps in order to identify someone. The problem is that the technology misidentifies people, particularly people of color, because its databases consist mostly of Caucasian facial landmark maps. A 2019 study by the National Institute of Standards and Technology (NIST) found that FRT is 10 to 100 times more likely to misidentify Asian and Black individuals than Caucasian individuals.

“Middle-aged white men generally benefited from the highest accuracy rates.” -Drew Harwell, The Washington Post.
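To make the mechanism concrete, here is a minimal, purely illustrative sketch of landmark-based identification: a probe face's landmark points are normalized and compared against a gallery of enrolled maps, and the closest match under a threshold is returned. The landmark count, threshold, and random data are assumptions for the example, not any real system's values.

```python
# Conceptual sketch of landmark-based identification. Illustrative only --
# real FRT systems use learned embeddings, and nothing here comes from an
# actual product.
import numpy as np

def normalize(landmarks: np.ndarray) -> np.ndarray:
    """Center a set of (x, y) landmark points and scale them to unit size,
    so faces photographed at different distances can be compared."""
    centered = landmarks - landmarks.mean(axis=0)
    return centered / np.linalg.norm(centered)

def identify(probe: np.ndarray, gallery: dict, threshold: float = 0.1):
    """Return the enrolled identity whose landmark map is closest to the
    probe, or None if nothing falls under the (arbitrary) threshold."""
    probe = normalize(probe)
    best_name, best_dist = None, float("inf")
    for name, enrolled in gallery.items():
        dist = np.linalg.norm(probe - normalize(enrolled))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None

# Hypothetical 68-point landmark maps (x, y pairs) for enrolled faces.
gallery = {"person_a": np.random.rand(68, 2), "person_b": np.random.rand(68, 2)}
print(identify(np.random.rand(68, 2), gallery))
```

Who is enrolled in the gallery, and whose faces the system was tuned on, determines who gets matched reliably, which is where the demographic skew described above enters.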

With this background knowledge in mind, I proposed a question that would later become an extensive research study with surprising results…

Does TikTok use facial recognition technology within its algorithm, and if so, how important is race when it comes to virality?

The Research

After a quick Google search, I came across an article by a group of Chinese scientists at the South China University of Technology. The article proposed a far more accurate technology that TikTok could implement into its system, pointing out the failures of TikTok's current software and the successes of the one the researchers developed. Not only was facial recognition at play, but it was being used to rank users on a scale of 1–5 based on their beauty.

Image Credit: South China University of Technology

The first page of the research proposal summarizes facial beauty prediction (FBP), which assesses a person's attractiveness the same way facial recognition technology assesses a person's landmark map for identification: FBP reuses the fundamental infrastructure of FRT to rank a person's beauty. I wanted to see what effect race had on virality, so for three months subjects of differing races (Asian, Caucasian, and Caucasian with Hispanic ethnicity) posted videos in the same style and format, with the same sound, at the same time, on the same day. The only difference between the videos was the subject in front of the camera.
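For a rough sense of what FBP amounts to computationally, the sketch below treats it as a regression problem: reuse the face representation an FRT pipeline already produces and fit a model that maps it to a 1–5 attractiveness rating. The embedding size, model choice, and data are all hypothetical; this is not the researchers' actual system or TikTok's.

```python
# Illustrative sketch of facial beauty prediction (FBP): a face representation
# produced for recognition is reused to regress a 1-5 "attractiveness" score.
# Features, labels, and model choice are all made up for the example.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Pretend each face has already been reduced to a 128-dimensional embedding
# by a recognition network, and human raters assigned 1-5 scores.
embeddings = rng.normal(size=(500, 128))
ratings = rng.uniform(1, 5, size=500)

model = Ridge(alpha=1.0).fit(embeddings, ratings)

new_face = rng.normal(size=(1, 128))
predicted_score = float(model.predict(new_face)[0])
print(f"predicted beauty score: {predicted_score:.2f} / 5")
```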

The question then shifted to how far race would carry each subject's virality. Who would be the most popular of the three?

The Data

After three full months, the subjects' analytics (their average number of comments, likes, and new followers) were recorded, and headshots were taken to mimic the way FRT creates facial landmark maps. Following a facial dot map (fig. 2), the basic structure of each face was depicted and the results displayed beside one another (fig. 3–4). The differing landmark maps were then compared alongside the subjects' profile data.

(fig. 2)
(fig. 3)
(fig. 4)
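The landmark-mapping step described above can be approximated with off-the-shelf tools. The sketch below uses the open-source face_recognition library as an assumed stand-in (the study does not specify its tooling) to extract a dot map of facial features from each subject's headshot; the file names are hypothetical.

```python
# Rough approximation of the headshot-to-dot-map step. The library choice and
# image paths are assumptions for illustration.
import face_recognition

for subject, path in {"A": "subject_a.jpg", "B": "subject_b.jpg", "C": "subject_c.jpg"}.items():
    image = face_recognition.load_image_file(path)
    landmarks = face_recognition.face_landmarks(image)  # list of dicts, one per detected face
    if not landmarks:
        print(f"Subject {subject}: no face detected")
        continue
    face = landmarks[0]
    # Each entry maps a feature name (e.g. 'nose_bridge', 'left_eye') to (x, y)
    # points -- essentially the dot map sketched in fig. 2.
    total_points = sum(len(points) for points in face.values())
    print(f"Subject {subject}: {total_points} landmark points across {len(face)} features")
```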

Subject B ranked highest in both likes and views. The startling discovery was not her newfound popularity, but its cause. Subject B was Hispanic, with fair skin and a small face. Subject C was Caucasian, with a larger face and brow. Subject A was Asian, with more surface area around her cheeks and brow. From this, I concluded that TikTok's use of FRT and FBP does not hinge simply on a person's genetic makeup, but on the composition and coloration of their face and features.

When Subject B's facial dot map was compared to Charli D'Amelio's, the similarities were striking (fig. 5).

(fig. 5)

This evidence anecdotally supports the limited diversity of popular creators, yet it does not explain why it is more common to find a "white passing" individual or a person with Caucasian features on your For You page.

If you need more evidence, don't worry. The subjects used new accounts and did not interact with any of the videos shown on their For You pages. When scrolling through their For You pages for five minutes, all three watched 48 videos, and an average of 41.66% of those videos (roughly five of every twelve) featured white or "white passing" individuals.

The Conclusion

This research began with the question of whether TikTok uses facial recognition technology and the extent to which race plays into virality. From the evidence gathered, I was able to answer it. TikTok, in its own way, uses facial recognition technology embedded in facial beauty prediction. Race appears to play an underlying role; however, it is not the ultimate deciding factor. Through my research I also found that TikTok's algorithm is sensitive and incredibly harsh toward new users. Many claim your first five videos are the foundation of your career on TikTok, allowing the app to categorize your account based on the content you produce. If your account is not managed carefully, you will be deemed an unreliable source of traction for the app and you will experience what many call a "flop" (TechCrunch). This goes to show that the app weighs various factors when determining a user's popularity, and race is definitely on the list.

This raises concerns for the future of AI and social media. Timelines, feeds, and pages already seem like algorithmic projections of unattainable beauty standards and ideals. The continued use and development of such technologies will continue to drive users down a path of insecurities and unconscious bias.

Facial Recognition Technology at the Texas Border

Facial recognition technology is currently being used at the border in Texas — but concerns about its flaws are rising.

Image Credit: NPR

Facial Recognition and Biometric Technology at the Border

Facial recognition, a form of biometric technology, is being used by U.S. Customs and Border Protection at the Brownsville and Progreso Ports of Entry in Texas. Biometric software identifies individuals using their fingerprints, voices, eyes, and faces; at the border, it compares surveillance photos taken at the ports of entry with passport and ID photos in government records. While this may seem simple enough, concerns about the ethics and accuracy of the technology are rising.
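In code, that comparison is a one-to-one verification: a photo captured at the kiosk is checked against the traveler's passport or ID photo on file. The sketch below is a minimal illustration using the open-source face_recognition library and made-up file names; CBP's actual system is not public code, and the match threshold shown is arbitrary.

```python
# Minimal 1:1 verification sketch: kiosk capture vs. passport photo on file.
# Library choice, file names, and threshold are assumptions for illustration.
import face_recognition

passport_img = face_recognition.load_image_file("passport_photo.jpg")  # hypothetical file
kiosk_img = face_recognition.load_image_file("kiosk_capture.jpg")      # hypothetical file

passport_enc = face_recognition.face_encodings(passport_img)[0]
kiosk_encs = face_recognition.face_encodings(kiosk_img)

if not kiosk_encs:
    print("No face found in kiosk capture -- manual inspection")
else:
    distance = face_recognition.face_distance([passport_enc], kiosk_encs[0])[0]
    # Where this threshold is set trades false matches against false non-matches,
    # and error rates differ by demographic group.
    print("match" if distance < 0.6 else "no match", f"(distance={distance:.3f})")
```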

Drawbacks

One of the most dangerous flaws of facial recognition technology is that it is disproportionately inaccurate when used to identify people of color, transgender and nonbinary individuals, and women. A 2018 study conducted at MIT found that "the error rates for gender classification were consistently higher for females than they were for males, and for darker-skinned subjects than for lighter-skinned subjects," with error rates for darker-skinned women reaching 46.5%–46.8% across several software systems. In other words, facial recognition software misidentifies these women nearly half the time. Such high error rates show that facial recognition technology is unreliable and could subject people to unnecessary secondary inspections, unfounded suspicion, and even harassment at the ports of entry.
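Numbers like these come from disaggregated error analysis: computing a system's error rate separately for each demographic group rather than reporting a single overall figure. The toy example below, built on fabricated data rather than the MIT study's dataset, shows the idea.

```python
# Hedged sketch of disaggregated error analysis with fabricated data.
import pandas as pd

results = pd.DataFrame({
    "group":   ["lighter_male"] * 4 + ["darker_female"] * 4,
    "correct": [1, 1, 1, 1,            1, 0, 0, 1],   # 1 = classified correctly
})

# Per-group error rate; a single overall figure would hide the gap this exposes.
error_rates = 1 - results.groupby("group")["correct"].mean()
print(error_rates)
```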

Accuracy is not the only problem. Because facial recognition technology is still relatively new, the U.S. has no comprehensive laws regulating its use, making the technology easier to abuse. Without regulation, the government is not required to be transparent about how it uses facial recognition, and that lack of information makes it unclear how, and for how long, this data is stored. In addition, questions about the constitutionality of biometric technology have recently been raised, with some arguing that its use could violate the Fourth Amendment.

While Customs and Border Protection claims that travelers have the option to opt out of these photographs, the ACLU reports that travelers who choose to opt out face harassment by agents, secondary inspections, and questioning, with some travelers even having their requests denied because they did not inform the agents that they would be opting out before reaching the kiosks.

Because of inaccurate results and concerns over privacy, it’s understandable that travelers may choose to not participate in facial recognition — but doing so may lead to questioning and harassment. Facial recognition at the border is a lose-lose situation, no matter what the travelers choose to do.

Spotify Is Listening to You in More Ways Than You Think

Image Credit: Evan Greer

Streaming platform Spotify has amassed 155 million premium subscribers and 345 million monthly active users in over 170 countries, and it is expanding its services to 85 more countries in 2021. However, it has received harsh criticism for underpaying musicians and artists, continuing the illegal practice of payola for big labels, and concealing its payment structures. The Union of Musicians and Allied Workers (UMAW) spearheaded a Justice at Spotify campaign last October with demands such as raising the average streaming royalty from $0.0038 USD to a penny per stream, and the group protested outside Spotify offices on March 15.

Most recently, the company filed an alarming patent to use artificial intelligence for emotional surveillance and manipulation, listening to users’ conversations, analyzing the sound of one’s voice, and curating targeted ads and music for one’s emotional state. The technology claims to identify “emotional state, gender, age, or accent” in forming its recommendations.
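As a rough illustration of the class of technology the patent describes (not Spotify's actual method), a system like this would extract acoustic features from recorded speech, classify the speaker's apparent mood, and feed that label into ad and music targeting. The features, labels, files, and model below are all assumptions made for the sketch.

```python
# Illustrative sketch of voice-based emotion inference. Not Spotify's patented
# method; training clips, labels, and model choice are hypothetical.
import librosa
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def acoustic_features(path: str) -> np.ndarray:
    """Summarize a speech clip as mean MFCCs, a common acoustic feature."""
    audio, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

# Hypothetical training clips labeled with self-reported mood.
X = np.stack([acoustic_features(p) for p in ["clip_happy.wav", "clip_sad.wav"]])
y = ["happy", "sad"]
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

mood = clf.predict([acoustic_features("new_conversation.wav")])[0]
print(f"inferred mood: {mood}  ->  used to steer ads and playlists")
```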

Fight for the Future describes what could soon be a terrifying reality: “Imagine telling a friend that you’re feeling depressed and having Spotify hear that and algorithmically recommend music that matches your mood to keep you depressed and listening to the music they want you to hear.”

Access Now, a nonprofit organization that defends and extends the digital rights of people around the world, sent a letter to Spotify dissecting how this new initiative would be extremely invasive and endanger users’ safety and privacy. It cites four major concerns: emotion manipulation, gender discrimination, privacy violations, and data security.

  • Emotion Manipulation: Monitoring emotional state and making decisions off of that creates a dangerous power dynamic between Spotify and its users and leaves the door open for emotion manipulation.
  • Gender Discrimination: Extrapolating gender through one’s voice and conversations will undoubtedly discriminate against non-binary and transgender individuals.
  • Privacy Violations: The artificial intelligence system would be “always on,” constantly monitoring what users are saying, analyzing their tone and language, and collecting sensitive information about users’ lives.
  • Data Security: Constantly collecting such personal and sensitive data about users’ personal lives would likely make Spotify a target of hackers, stalkers, and government authorities.

“There is absolutely no valid reason for Spotify to even attempt to discern how we’re feeling, how many people are in a room with us, our gender, age, or any other characteristic the patent claims to detect,” says Isedua Oribhabor, a U.S. Policy Analyst at Access Now. “The millions of people who use Spotify deserve respect and privacy, not covert manipulation and monitoring.”

Artificial intelligence is pervaded by racial and gender biases, and emotion recognition software like what Spotify is proposing is considered "a racist pseudoscience" at best. The technology also poses a grave threat to independent artists and creators: Fight for the Future asks what happens "when music is promoted based on surveillance rather than artistry."

Evan Greer, a transgender/genderqueer singer, songwriter, activist, and Deputy Director of Fight for the Future, released the song and music video "Surveillance Capitalism," part of the larger album Spotify is Surveillance. The song aims to raise awareness of Spotify's emotion surveillance and manipulation, to build support for Fight for the Future's petition demanding that Spotify abandon the patent and promise never to use this invasive technology on its users, and to support the Union of Musicians and Allied Workers, to which 100% of the artist proceeds from the song are donated.

Sign the petition.

Tell Spotify to drop its plan to spy on your conversations to target music and ads.

www.stopspotifysurveillance.org

AI in the Black Lives Matter Movement: The Good, The Bad & The Ugly

Image Credit: The Face

The Good

Social media continues to play a prominent role in amplifying Black voices, spreading awareness about police brutality and systemic injustices against the Black community, sharing resources and petitions, organizing communities and grassroots campaigns, and ultimately, catalyzing a global social justice movement.

Image Credit: Sacrée Frangine

AI is also being leveraged to protect the identities of Black Lives Matter protesters by keeping their photos out of police facial recognition databases. People have tried to thwart facial recognition by blurring or pixelating images of protesters, but complex machine learning systems can still recognize, and essentially unblur, these images. Researchers at Stanford University developed a new tool to anonymize protesters: the BLMPrivacyBot. Instead of blurring faces, it covers them with a Black Power fist emoji. The tool, trained on a dataset of close to 1.2 million people, uses facial detection rather than facial recognition, identifying where faces are but not whom they belong to. Although it is not foolproof, it is a vital step toward protecting Black Lives Matter organizers from police facial recognition systems.

Image Credit: AI News

Blocking out the face offers a great form of anonymization; nevertheless, this cannot be mistaken for complete foolproof anonymity, e.g. if someone is wearing a t-shirt with their SSN or if they are not anonymized in another image and identity could be triangulated through similar clothing and surroundings.

@BLMPrivacyBot on Twitter
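A hedged sketch of the idea behind BLMPrivacyBot (not its actual code) looks roughly like this: run face detection only, so faces are located but never identified, then paste an opaque fist-emoji image over each detected box. The library choice and file names are assumptions.

```python
# Detection-only anonymization sketch: locate faces, cover them, save the result.
# Input and overlay images are hypothetical; this is not the BLMPrivacyBot code.
import numpy as np
import face_recognition
from PIL import Image

photo = Image.open("protest_photo.jpg").convert("RGB")   # hypothetical input
fist = Image.open("fist_emoji.png").convert("RGBA")      # hypothetical overlay image

# Face *detection* only: boxes of (top, right, bottom, left), no identities involved.
boxes = face_recognition.face_locations(np.array(photo))

for top, right, bottom, left in boxes:
    overlay = fist.resize((right - left, bottom - top))
    photo.paste(overlay, (left, top), overlay)  # third argument keeps the emoji's transparency

photo.save("protest_photo_anonymized.jpg")
```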

The Bad

Facial recognition violates people’s privacy, obscurity, and lawful right to protest social injustices anonymously.

Aside from the ethics of diminishing people’s obscurity when they are in public and stripping away their right to do lawful things like protest anonymously, there is a real risk of misidentification through this technology. — Evan Selinger

In 2016, police allegedly used facial recognition to identify and arrest people protesting Freddie Gray’s death in police custody, who they believed had outstanding arrest warrants.

In New York City, the NYPD used facial recognition over 8,000 times in 2019, disproportionately targeting the city's people of color. Police can add protesters to their so-called gang database, which categorizes more than 42,000 New Yorkers as "gang members," monitor them, and retaliate by punishing them for minor offenses, such as running a red light.

We definitely know that facial recognition has been used to monitor political activity in the past and we think, even if people aren’t arrested in the short-term or tracked in the short-term, that this creates a repository of information that the NYPD can always revisit as facial recognition becomes more powerful and prolific. The NYPD ran approximately 8,000 facial recognition searches last year and that number will only grow. We’re quite concerned about how facial recognition will be used to monitor political protest and further expand the NYPD’s surveillance of communities of color. — Albert Fox Cahn, Executive Director of the Surveillance and Technology Oversight Project (STOP) at the Urban Justice Center

Today, at least one in four law enforcement agencies in the United States has access to dangerous facial recognition technology with little oversight or accountability.

The Department of Homeland Security deployed helicopters, airplanes, and drones to surveil protests against the unjust death of George Floyd in 15 cities, logging at least 270 hours of aerial surveillance.

In a letter to Acting Secretary of Homeland Security, Chad F. Wolf, Representatives Carolyn B. Maloney, Alexandria Ocasio-Cortez, Jamie Raskin, Stephen F. Lynch, and Ayanna Pressley write, “This administration has undermined the First Amendment freedoms of Americans of all races who are rightfully protesting George Floyd’s killing. The deployment of drones and officers to surveil protests is a gross abuse of authority and is particularly chilling when used against Americans who are protesting law enforcement brutality.”

Operators can now directly track and follow the movements of protesters, direct law enforcement on the ground, and identify and add faces of demonstrators to police facial recognition databases. We do not definitively know if police are using facial recognition to track and surveil Black Lives Matter protesters now because of a lack of transparency on law enforcement’s use of the technology. However, facial recognition poses a clear danger to protesters’ safety and lawful right to anonymity.

The Ugly

The NYPD used facial recognition technology to track down a prominent Black Lives Matter activist, 28-year-old Derrick Ingram. On August 7, police officers, some wearing tactical gear, stood outside his home with a document entitled "Facial Identification Section Informational Lead Report," allegedly containing a photo taken from his Instagram account. Ingram live-streamed the encounter, repeatedly asking law enforcement to produce a search warrant, which they refused to do. Police left only after protesters gathered outside his apartment in support of Ingram.

Derrick Ingram

A spokesperson confirmed, "The NYPD uses facial recognition as a limited investigative tool, comparing a still image from a surveillance video to a pool of lawfully possessed arrest photos." It remains unclear, however, whether the photo of Ingram captured from social media was used in the investigation; if it was, that would constitute a breach of the police department's own policies, which allow only the use of still images pulled from surveillance video or arrest photos.

Racism in AI

A landmark study in 2018 by researchers at Stanford University and MIT showed facial recognition misidentification rates of 0.8% for light-skinned men and as high as 34.7% for dark-skinned women.

The findings raise questions about how today’s neural networks, which learn to perform computational tasks by looking for patterns in huge data sets, are trained and evaluated. For instance, according to the paper, researchers at a major U.S. technology company claimed an accuracy rate of more than 97 percent for a face-recognition system they’d designed. But the data set used to assess its performance was more than 77 percent male and more than 83 percent white.

Larry Hardesty, MIT News Office

The ACLU conducted a similar study in which Amazon's facial recognition system, Rekognition, falsely matched 28 members of Congress with mugshots of people arrested for crimes. The false matches disproportionately involved people of color, including six members of the Congressional Black Caucus.

Image Credit: ACLU

In the United States alone, facial recognition has led to the wrongful arrest of three Black men for crimes they did not commit: Robert Julian-Borchak Williams, Nijeer Parks, and Michael Oliver.

The Bottom Line

Image Credit: TechCrunch

Technology perpetuates systemic racism. It is imperative that law enforcement be more transparent and accountable about its use of facial recognition technology moving forward, and the nation as a whole should move to ban federal use of these technologies through the Facial Recognition and Biometric Technology Moratorium Act.

How Facial Recognition Affects People of Different Gender Identities

Image Credit: Getty Images

Artificial intelligence-powered surveillance technology has recently attracted controversy for its role in furthering discrimination against people of color and other marginalized groups.

This discrimination is evident in the many false arrests that have occurred due to AI misidentification and software bias. Numerous examples have come to light of police using facial recognition software and incorrectly identifying and arresting "suspects." Nijeer Parks, an African American man, was incorrectly matched to footage of a suspect and detained for several days, despite compelling evidence that he had no connection to the crime.

AI's flaws also affect the LGBTQ+ community. Many facial recognition systems are programmed to sort individuals into binary gender categories. Non-binary, agender, and gender non-conforming individuals are forced into categories that erase their gender identities, and transgender individuals are frequently misgendered.

Studies at CU Boulder and MIT found that "facial analysis services performed consistently worse on transgender individuals, and were universally unable to classify non-binary genders." The Gender Shades project found that software from Microsoft, Face++, and IBM misclassified many demographics: in Microsoft's error analysis, 93.6% of misgendered faces belonged to darker-skinned subjects, and in Face++'s, 95.9% of misgendered faces were female.

Being misidentified by the system can have terrible ramifications for many people of varying gender identities if this type of facial recognition software continues to be used by law enforcement.

It's important to note how the development of these software systems influences their bias and inaccuracy. Congressional inquiries into the accuracy of facial recognition across demographics, as well as a study by the National Institute of Standards and Technology (NIST), have found that the technology is highly inaccurate when identifying non-white, female, or non-cisgender people; it is most accurate on white, cisgender men. This reflects the fact that 80% of software engineers are men, and 86% of software engineers are Asian or white.

We’re seeing the growing problems that stem from the lack of diversity in careers that heavily impact our lives. Software bias is a reflection of the bias of those who write the code. By continuing the use of software with documented bias, we perpetuate an endless loop of marginalization and discrimination. The only solution is increasing diversity in fields that affect our everyday life, like software engineering.

Efforts to ban and regulate facial recognition usage, particularly by law enforcement, have increased in the recent past.

On the federal level, Senator Ed Markey (D-MA) has proposed a bill known as the Facial Recognition and Biometric Technology Moratorium Act of 2020. The bill would impose restrictions on federal and state agencies that wish to use this technology and render information obtained by law enforcement through facial recognition inadmissible in court.

Many cities have restricted and regulated law enforcement’s use of facial recognition. These cities, including Minneapolis, Portland, San Francisco, New Orleans, Boston, and many more, have taken action against the software. It is imperative that more cities and the federal government follow in their path and prevent facial recognition technology from being used by law enforcement in the future.

Facial Recognition Can Now Identify People Wearing Masks

Image Credit: BBC

Japanese company NEC developed a facial recognition system that can identify people wearing face masks with near 100% accuracy.

It focuses on the parts of the face that are not covered, such as the eyes, to verify someone's identity. Verification takes less than one second and has an accuracy rate of over 99.9%, a stark improvement over earlier facial recognition algorithms, which correctly identified only 20–50% of images of people wearing face masks.
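Conceptually, matching a masked face means comparing only the features that remain visible, such as the eyes and brows. The toy sketch below illustrates that idea with made-up landmark data and an arbitrary threshold; it is not NEC's algorithm.

```python
# Toy sketch: compare only the uncovered (eye/brow) landmarks of a masked face.
# Indices, data, and threshold are assumptions for illustration.
import numpy as np

# Brow and eye point indices in the common 68-point landmark convention
# (assumed here purely for the example).
VISIBLE = np.r_[17:27, 36:48]

def _normalize(points: np.ndarray) -> np.ndarray:
    centered = points - points.mean(axis=0)
    return centered / np.linalg.norm(centered)

def masked_distance(probe: np.ndarray, enrolled: np.ndarray) -> float:
    """Distance computed over only the visible eye/brow landmarks."""
    return float(np.linalg.norm(_normalize(probe[VISIBLE]) - _normalize(enrolled[VISIBLE])))

probe = np.random.rand(68, 2)      # landmark map of a masked traveler (made up)
enrolled = np.random.rand(68, 2)   # enrolled reference photo (made up)
print("match" if masked_distance(probe, enrolled) < 0.1 else "no match")  # arbitrary threshold
```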

NEC had been honing this technology for some time, since wearing masks is already common practice in Japan, but it accelerated development in response to the COVID-19 pandemic. "Needs grew even more due to the coronavirus situation as the state of emergency [last year] was continuing for a long time, and so we've now introduced this technology to the market," Shinya Takashima, Assistant Manager of NEC's digital platform division, told Reuters.

The company is targeting 100 billion yen ($970 million) in 2021 sales of its biometrics, facial recognition, and video analysis systems. Lufthansa and Swiss International Air Lines have employed the technology since it went on sale in October, and NEC is trialing facial recognition for automated payments at a convenience shop in its Tokyo headquarters.

London's Metropolitan Police Service, 'the Met,' used the NeoFace Live Facial Recognition system to compare faces in crowds with those on a government watchlist to "prevent crime, to protect vulnerable members of the community or to support the safety strategy." However, it met intense backlash over the lack of records kept on face matches made at King's Cross and froze further use of the technology. England and Wales hope to increase transparency about the use of facial recognition systems and image sharing and to revise prior guidelines on the use of surveillance cameras.

NEC aims to employ live facial recognition technology to promote social distancing, minimizing the need to touch surfaces or carry forms of identification like security cards, and thus to combat the spread of COVID-19. "Touchless verification has become extremely important due to the impact of the coronavirus," Shinya Takashima said. "Going forward we hope to contribute to safety and peace of mind by strengthening [efforts] in that area."

NEC’s NeoFace Live Facial Recognition system can identify people in real-time with close to 100% accuracy, and it effectively eliminates the obstacle of facial coverings like masks. Although this can help promote social distancing and mitigate the spread of disease through hands-free payments and identification, such a powerful tool, if left unregulated, could pose a great threat to human rights.

AI can discriminate based on gender, race and ethnicity, socioeconomic status, disability status, and more. Surveillance and facial recognition technologies disproportionately harm people of color, allow private information to be collected by third parties without a person's explicit consent, and can be weaponized to intentionally target certain groups, like Uyghur Muslims in China. With such a powerful facial recognition system, peaceful protesters can no longer hide their identities from police with face coverings, roadblocks to lethal autonomous weaponry are eliminated, and an individual's privacy is profoundly violated.

Image Credit: New York Post

Strict regulations, if not a complete ban on facial recognition systems, need to be implemented to ensure the ethical use of this technology, safeguard people’s rights to privacy, and begin to break the AI-driven cycle of systematic discrimination against minority groups.

Examining the Costs and Benefits of Biometrics


What are biometrics?

Biometrics are a means of identifying people by their physiological or behavioral traits. These identifiers are unique to each individual and can be used to grant access to private networks. Commonly used biometric identifiers include fingerprints, voice, typing cadence, and facial features. Because they are difficult to guess or share, biometrics can provide higher levels of security for data, systems, and other private networks than standard measures such as passwords and codes.

How is it used?

This form of identification plays a significant role in daily life and is widely used in situations such as unlocking a phone, passing through airport security, and criminal investigations. The U.S. Department of Homeland Security uses biometric data on a large scale to filter out travelers entering illegally, enforce federal laws, and verify the credentials of visa applicants. To date, its system has documented 260 million unique identities and processes over 350,000 cases per day. Most devices today, including laptops, phones, and tablets, come equipped with high-tech identification tools such as facial recognition, iris scanning, and fingerprinting.

Ethical Concerns

While these features offer convenience and efficiency that entice customers, activists question the ethics of storing biometric data. According to Andy Adler, a professor specializing in biometrics at Canada's Carleton University, "[Biometric] systems are also vulnerable to all of the traditional security threats as well as all sorts of new ones and interactions between old and new ones." The use of biometric software introduces new risks of identity theft and financial crime, as hackers who steal biometric information can use it for fraudulent credit applications, tax filings, insurance scams, and more.

Discrimination

Biometrics like facial recognition disproportionately misidentify people of color, women, and children. According to a study conducted at MIT, facial recognition technologies misidentify darker-skinned women as often as one in three times. This high error rate can lead to false accusations in criminal investigations, putting communities of color at risk. For example, a Black man, Robert Julian-Borchak Williams, was wrongfully accused of shoplifting $3,800 worth of goods from a store in Detroit. When law enforcement officers identified a Black man in a red cap in surveillance video, facial recognition software falsely identified Williams as the suspect. He was arrested and jailed for over 24 hours. His case was brought to public attention as a result of a hearing in Detroit focused on facial recognition and the rising rate of false positives for darker-skinned people.

While these issues persist, biometric identifiers like facial recognition should not be widely used. It could take years to train artificial intelligence systems that serve the needs of modern-day society fairly.

Facial Recognition Linked to Third-Known Wrongful Arrest of Black Man

Image Credit: National Police Foundation

In early 2019, Nijeer Parks, a Black man from Paterson, New Jersey, was wrongly arrested in Woodbridge, New Jersey, due to a misidentification by facial recognition software. The software connected Parks to footage of an individual shoplifting and then driving away, hitting a police car. At the time of the crime, Parks had neither a car nor a driver's license, which he told the Municipal Court Clerk before his arrest. Parks also says he presented a solid alibi that should have cleared him of any reasonable suspicion of being the perpetrator. Despite the compelling evidence clearing Parks of any involvement, police refused to examine other forensic evidence at the scene of the crime, arresting and detaining him solely on the basis of the facial recognition mismatch. As a result, he spent ten days in jail and $5,000 defending himself.

Mr. Parks has filed a lawsuit in the Superior Court of New Jersey against several Woodbridge officials, alleging that his wrongful arrest was the product of racial profiling and that the force he encountered during his arrest was excessive. His lawsuit also asserts that facial recognition is faulty and untrustworthy and that no reasonable police officer should issue an arrest warrant based solely on facial identification, especially when all additional evidence indicated Parks was nowhere near the scene of the crime. The suit, filed in late 2020, has begun to draw backlash from those who believe law enforcement should use tools like facial recognition to operate more efficiently, ignoring the blatant bias and potential harm the technology poses.

This is not the first case that demonstrates the danger of facial recognition in making arrests. Mr. Parks is the third known individual to be falsely arrested due to a facial identification error; all three have been African American men. A study by the National Institute of Standards and Technology (NIST) examined 189 algorithms from 99 developers and documented empirical bias across the facial recognition industry. Using millions of images, the study found that African Americans and Asian Americans were 10 to 100 times more likely to be falsely matched than Caucasian individuals. While different algorithms had different rates of inaccuracy, racial bias throughout facial recognition cannot be ignored.

Facial recognition has no place in law enforcement. Law enforcement, which has a documented history of racial oppression, should not be given tools that allow it to perpetuate racial biases and target communities of color without accountability or transparency. Aside from unfairly targeting racial minorities and LGBTQ+ people, facial recognition presents a very real threat to both free speech and the Fourth Amendment. The government has repeatedly shown that it cannot be trusted with the software after its secretive and unlawful use prompted congressional hearings in early 2019. Facial recognition made headlines again this summer when it was used to surveil and detain thousands of protesters marching for the Black Lives Matter movement. This is yet another reason why facial recognition is so terrifying: its selective use against communities of color.

The danger of facial recognition does not lie simply in its flaws — even accurate facial recognition pushes us closer and closer to a state of surveillance, a clear violation of our constitutional rights and civil liberties. Regulation simply does not go far enough. Detroit is just one example: even after the false arrest of Michael Oliver in July of 2019, the city still refused to make any concrete legislative change to prevent more injustice at the hands of facial recognition. In September 2019, the Board of Commissioners approved guidelines that required approval within the Detroit Police Department and prohibited its use at protests. However, the “reforms” were not enough: another Black man, Robert Julian-Borchak Williams, was arrested and detained for over 30 hours in January 2020 due to another facial recognition mismatch. Facial recognition is a dangerous tool, and injustices like the arrest of Nijer Parks further show that it must be banned in law enforcement.