The Good
Social media continues to play a prominent role in amplifying Black voices, spreading awareness about police brutality and systemic injustices against the Black community, sharing resources and petitions, organizing communities and grassroots campaigns, and, ultimately, catalyzing a global social justice movement.
AI is being leveraged to protect the identities of Black Lives Matter protesters, keeping their photos out of police facial recognition databases. People have tried to thwart facial recognition by blurring or pixelating images of protesters, but complex machine learning systems can still recognize, and essentially unblur, these images. Researchers at Stanford University developed a new tool to anonymize protesters: the BLMPrivacyBot. Instead of blurring faces, it covers them with a Black Power fist emoji. The technology, trained on a dataset of close to 1.2 million people, employs facial detection rather than facial recognition: it identifies where faces are, but not whom they belong to. Although the tool is not foolproof, it is a vital step toward protecting Black Lives Matter organizers from police facial recognition systems.
Blocking out the face offers a great form of anonymization; nevertheless, this cannot be mistaken for complete foolproof anonymity, e.g. if someone is wearing a t-shirt with their SSN or if they are not anonymized in another image and identity could be triangulated through similar clothing and surroundings.
— @BLMPrivacyBot on Twitter
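As a rough illustration of this detect-and-cover approach, here is a minimal sketch in Python. It is not the BLMPrivacyBot's actual code: it assumes OpenCV with its bundled Haar-cascade detector, a far simpler face detector than the model the bot was trained on, and paints an opaque box rather than a fist emoji over each detected face. The key property is the same, though: the detector answers only where faces are, never whose they are, and an opaque overlay destroys the underlying pixels outright instead of blurring them in a way machine learning can reverse.

```python
# Minimal sketch of the detect-and-cover approach (not BLMPrivacyBot's
# actual code). Assumes OpenCV (pip install opencv-python) and its
# bundled Haar cascade, a much simpler detector than the bot's model.
import cv2

def cover_faces(input_path: str, output_path: str) -> int:
    """Detect faces in an image and cover each with an opaque rectangle.

    Returns the number of faces covered. Detection only answers
    "where is a face?", never "whose face is this?".
    """
    image = cv2.imread(input_path)
    if image is None:
        raise FileNotFoundError(input_path)

    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    for (x, y, w, h) in faces:
        # An opaque fill removes the pixels outright, unlike blurring,
        # which ML models have been shown to partially invert.
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 0), thickness=-1)

    cv2.imwrite(output_path, image)
    return len(faces)

if __name__ == "__main__":
    print(cover_faces("protest.jpg", "protest_anonymized.jpg"), "faces covered")
```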
The Bad
Facial recognition violates people’s privacy, diminishes their obscurity, and undermines their lawful right to protest social injustices anonymously.
Aside from the ethics of diminishing people’s obscurity when they are in public and stripping away their right to do lawful things like protest anonymously, there is a real risk of misidentification through this technology. — Evan Selinger
In 2016, police allegedly used facial recognition to identify people protesting Freddie Gray’s death in police custody and arrest those they believed had outstanding warrants.
In New York City, the NYPD employed facial recognition over 8,000 times in 2019, disproportionately targeting the city’s people of color. Police can add protesters to their so-called gang database, which already categorizes more than 42,000 New Yorkers as “gang members,” then monitor them and retaliate by punishing minor offenses, such as running a red light.
We definitely know that facial recognition has been used to monitor political activity in the past and we think, even if people aren’t arrested in the short-term or tracked in the short-term, that this creates a repository of information that the NYPD can always revisit as facial recognition becomes more powerful and prolific. The NYPD ran approximately 8,000 facial recognition searches last year and that number will only grow. We’re quite concerned about how facial recognition will be used to monitor political protest and further expand the NYPD’s surveillance of communities of color. — Albert Fox Cahn, Executive Director of the Surveillance and Technology Oversight Project (STOP) at the Urban Justice Center
Today, at least one in four law enforcement agencies in the United States has access to this dangerous facial recognition technology with little oversight or accountability.
The Department of Homeland Security deployed helicopters, airplanes, and drones to surveil protests in 15 cities over the unjust death of George Floyd, logging at least 270 hours of aerial surveillance.
In a letter to Acting Secretary of Homeland Security Chad F. Wolf, Representatives Carolyn B. Maloney, Alexandria Ocasio-Cortez, Jamie Raskin, Stephen F. Lynch, and Ayanna Pressley write, “This administration has undermined the First Amendment freedoms of Americans of all races who are rightfully protesting George Floyd’s killing. The deployment of drones and officers to surveil protests is a gross abuse of authority and is particularly chilling when used against Americans who are protesting law enforcement brutality.”
Operators can now track the movements of protesters in real time, direct law enforcement on the ground, and identify demonstrators and add their faces to police facial recognition databases. Because law enforcement’s use of the technology is so opaque, we do not definitively know whether police are using facial recognition to track and surveil Black Lives Matter protesters right now. Either way, facial recognition poses a clear danger to protesters’ safety and their lawful right to anonymity.
The Ugly
The NYPD used facial recognition technology to track down a prominent Black Lives Matter activist, 28-year-old Derrick Ingram. On August 7, police officers, some wearing tactical gear, stood outside his home with a document entitled “Facial Identification Section Informational Lead Report,” which allegedly contained a photo taken from his Instagram account. Ingram live-streamed the encounter, repeatedly asking law enforcement to produce a search warrant, which they refused to do. Police left only after protesters gathered outside his apartment in support of Ingram.
A spokesperson confirmed, “The NYPD uses facial recognition as a limited investigative tool, comparing a still image from a surveillance video to a pool of lawfully possessed arrest photos.” It remains unclear, however, whether the photo of Ingram captured from social media was used in the investigation; if it was, that would constitute a breach of the police department’s own policies, which allow only still images pulled from surveillance video or arrest photos.
Racism in AI
A landmark 2018 study by researchers at MIT and Stanford University found facial recognition misidentification rates as low as 0.8% for light-skinned men and as high as 34.7% for dark-skinned women.
The findings raise questions about how today’s neural networks, which learn to perform computational tasks by looking for patterns in huge data sets, are trained and evaluated. For instance, according to the paper, researchers at a major U.S. technology company claimed an accuracy rate of more than 97 percent for a face-recognition system they’d designed. But the data set used to assess its performance was more than 77 percent male and more than 83 percent white.
— Larry Hardesty, MIT News Office
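To make the arithmetic concrete, here is a small sketch with hypothetical numbers; the subgroup shares and accuracies below are illustrative, not the paper’s figures. It shows how a test set skewed toward light-skinned men can report a headline accuracy above 97 percent even while the system fails on dark-skinned women more than a third of the time.

```python
# Illustrative arithmetic only: the shares and per-group accuracies are
# hypothetical, chosen to echo the skew the paper describes (a test set
# more than 77% male and more than 83% white).
groups = {
    #                      (share of test set, per-group accuracy)
    "light-skinned men":   (0.80, 0.992),
    "light-skinned women": (0.12, 0.96),
    "dark-skinned men":    (0.05, 0.90),
    "dark-skinned women":  (0.03, 0.65),
}

# Overall accuracy is a weighted average, so the dominant group's
# performance drowns out failures on underrepresented groups.
overall = sum(share * acc for share, acc in groups.values())
print(f"headline accuracy: {overall:.1%}")  # ~97.3%
for name, (share, acc) in groups.items():
    print(f"{name}: error rate {1 - acc:.1%}")
```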
The ACLU conducted a similar study in which Amazon’s facial recognition system, Rekognition, falsely matched 28 members of Congress with mugshots of people arrested for crimes. The false matches disproportionately involved people of color, including six members of the Congressional Black Caucus.
In the United States alone, facial recognition has led to the wrongful arrests of three Black men: Robert Julian-Borchak Williams, Nijeer Parks, and Michael Oliver, all accused of crimes they did not commit.
The Bottom Line
Technology perpetuates systemic racism. It is imperative that law enforcement be more transparent and accountable about its use of facial recognition technology moving forward, and the nation as a whole should move to ban federal use of these technologies by passing the Facial Recognition and Biometric Technology Moratorium Act.