The future of AI: egalitarian or dystopian?

Once upon a time, artificial intelligence (AI) was viewed as distant and unachievable, nothing more than a fantasy to furnish the plots of science fiction stories. We have made numerous breakthroughs since, with AI software now powerful enough to understand natural language, navigate unfamiliar terrain, and augment scientific research. As COVID-19 reduced our ability to interact with each other, we've seen AI-powered machines step in to fill that void, and AI being used to advance medical research towards better treatments. This ubiquity of AI may only be the beginning, with experts projecting that AI could contribute a staggering $15.7 trillion to the global economy by the end of the decade. Unsurprisingly, many prosperous members of society view the future of AI optimistically, as one of ever-increasing efficiency and profit. Yet many on the other side of the spectrum look on far more apprehensively: AI may have inherited the best of human traits, our intelligence, but it has also inherited some of humanity's worst: our bias and prejudice. AI, fraught with discrimination, is already being used to perpetuate systemic inequalities. If we fail to overcome this, an AI-dominated future will be bleak and dystopian. We would be moving forward in time yet backwards in progress, accelerating mindlessly towards a less equitable society.

Towards dystopia is where we are headed if we do not reverse course. AI is increasingly being used to make influential decisions in people's lives, and those decisions are often biased. The root of the problem is that AI is trained on past data to make future decisions, and that data frequently carries bias which the AI then inherits. For instance, AI hiring tools are increasingly used to assess job applicants. Trained on records of past employees who were mostly men, the AI absorbs this bias and continues the cycle of disfavoring women, perpetuating the lack of diversity in key industries such as tech. This is absolutely unacceptable, and it is to say nothing of the many other ways AI can be used to reinforce inequality. In what has been called the 'tech to prison pipeline', AI trained on historical criminal data is being used in criminal justice to inform bail and sentencing decisions. Because African Americans are overrepresented in that training data, these systems have been shown to recommend harsher outcomes for African American defendants.
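To make that inheritance concrete, here is a minimal, hypothetical sketch (using scikit-learn, with invented columns such as gender, skill, and hired, and made-up hiring rates) of how a model trained on skewed historical hiring decisions reproduces the same skew when scoring a balanced pool of new applicants. It illustrates the feedback loop only; it is not the code of any real hiring tool.

```python
# Illustration only: a toy model trained on skewed historical hiring data
# inherits and reproduces that skew. All columns and numbers are invented.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Historical records: comparable skill distributions, but past decisions
# hired qualified men far more often than qualified women.
gender = rng.choice(["man", "woman"], size=n)
skill = rng.normal(size=n)
base_rate = np.where(gender == "man", 0.40, 0.15)   # biased past decisions
hired = (rng.random(n) < base_rate * (1 + skill).clip(0.1, None)).astype(int)
history = pd.DataFrame({"gender": gender, "skill": skill, "hired": hired})

# Train on the biased history (gender enters directly here; in practice it
# often enters through proxies such as hobbies, schools, or word choice).
X = pd.get_dummies(history[["gender"]]).assign(skill=history["skill"])
model = LogisticRegression(max_iter=1000).fit(X, history["hired"])

# Score a new, balanced applicant pool: the historical gap reappears.
applicants = pd.DataFrame({"gender": rng.choice(["man", "woman"], size=2000),
                           "skill": rng.normal(size=2000)})
X_new = pd.get_dummies(applicants[["gender"]]).assign(skill=applicants["skill"])
applicants["recommended"] = model.predict(X_new)
print(applicants.groupby("gender")["recommended"].mean())
```

Even if the gender column were dropped, correlated proxy features would typically let the model reconstruct much of the same pattern, which is why regulation and diverse development teams, not just feature deletion, are needed.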

To move towards a future with AI that is not only intelligent but fair, we must enact regulation that outlaws discriminatory uses, and ensure that the developers of AI software are diverse, so that their perspectives are reflected in the software they create.

Perhaps counterintuitively, a world with fair AI could see social justice advanced even further than a world without any AI. AI has become unfair only because the humans it learns from hold so much bias, which the AI has absorbed. But if genuinely fair AI replaced humans in decision making, those decisions would, by definition, be free of that bias, and equality would increase.

Achieving fair AI may be the key to a better future — one of increased economic prosperity, furthered scientific progress, and more equity. But in the meantime, we must be diligent in ensuring that the AI being used reflects the best of humanity, rather than our worst.

AI in the Black Lives Matter Movement: The Good, The Bad & The Ugly

Image Credit: The Face

The Good

Social media continues to play a prominent role in amplifying Black voices, spreading awareness about police brutality and systemic injustices against the Black community, sharing resources and petitions, organizing communities and grassroots campaigns, and ultimately, catalyzing a global social justice movement.

Image Credit: Sacrée Frangine

AI is being leveraged to protect the identities of Black Lives Matter protesters, keeping their photos out of police facial recognition databases. People have tried to prevent this facial recognition by blurring or pixelating images of protesters, but complex machine learning systems can still recognize and essentially unblur these images. Researchers at Stanford University developed a new tool to anonymize protesters: the BLMPrivacyBot. Instead of blurring faces, it covers them with a Black Power fist emoji. The technology, trained on a dataset of close to 1.2 million people, employs facial detection instead of facial recognition, identifying where faces are but not whom they belong to. Although the tool is not foolproof, it is a vital step towards protecting Black Lives Matter organizers from police facial recognition systems.
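As a rough illustration of the detection-only idea (not the BLMPrivacyBot's actual code), the sketch below uses OpenCV's bundled Haar-cascade face detector to find faces in a photo and paste an opaque overlay over each one; the file names protest.jpg and fist.png are placeholders.

```python
# Rough sketch of detection-based anonymization: find faces, cover them.
# Not the BLMPrivacyBot's code; 'protest.jpg' and 'fist.png' are placeholders.
import cv2

# OpenCV ships a pretrained Haar-cascade face *detector*: it finds faces,
# it does not identify whose they are.
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
detector = cv2.CascadeClassifier(cascade_path)

image = cv2.imread("protest.jpg")   # photo to anonymize
emoji = cv2.imread("fist.png")      # opaque emoji overlay
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect face bounding boxes as (x, y, width, height).
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    # Resize the emoji to the face box and paste it over the region,
    # fully replacing the pixels rather than blurring them.
    image[y:y + h, x:x + w] = cv2.resize(emoji, (w, h))

cv2.imwrite("protest_anonymized.jpg", image)
```

Because the face pixels are replaced outright rather than blurred, a de-blurring model has nothing left to reconstruct from the covered region.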

Image Credit: AI News

Blocking out the face offers a strong form of anonymization; nevertheless, it cannot be mistaken for complete, foolproof anonymity: someone might, for example, be wearing a t-shirt with their SSN, or appear unanonymized in another image from which their identity could be triangulated through similar clothing and surroundings.

@BLMPrivacyBot on Twitter

The Bad

Facial recognition violates people’s privacy, obscurity, and lawful right to protest social injustices anonymously.

Aside from the ethics of diminishing people’s obscurity when they are in public and stripping away their right to do lawful things like protest anonymously, there is a real risk of misidentification through this technology. — Evan Selinger

In 2016, police allegedly used facial recognition to identify and arrest people who were protesting Freddie Gray's death in police custody and who they believed had outstanding arrest warrants.

In New York City, the NYPD employed facial recognition over 8,000 times in 2019, disproportionately targeting the city's people of color. Police can add protesters to their so-called gang database, which already categorizes more than 42,000 New Yorkers as "gang members," then monitor them and retaliate by punishing them for minor offenses, such as running a red light.

We definitely know that facial recognition has been used to monitor political activity in the past and we think, even if people aren’t arrested in the short-term or tracked in the short-term, that this creates a repository of information that the NYPD can always revisit as facial recognition becomes more powerful and prolific. The NYPD ran approximately 8,000 facial recognition searches last year and that number will only grow. We’re quite concerned about how facial recognition will be used to monitor political protest and further expand the NYPD’s surveillance of communities of color. — Albert Fox Cahn, Executive Director of the Surveillance and Technology Oversight Project (STOP) at the Urban Justice Center

Today, at least one in four law enforcement agencies in the United States has access to dangerous facial recognition technology with little oversight or accountability.

The Department of Homeland Security deployed helicopters, airplanes, and drones to surveil protests over the unjust killing of George Floyd in 15 cities, logging at least 270 hours of aerial surveillance.

In a letter to Acting Secretary of Homeland Security, Chad F. Wolf, Representatives Carolyn B. Maloney, Alexandria Ocasio-Cortez, Jamie Raskin, Stephen F. Lynch, and Ayanna Pressley write, “This administration has undermined the First Amendment freedoms of Americans of all races who are rightfully protesting George Floyd’s killing. The deployment of drones and officers to surveil protests is a gross abuse of authority and is particularly chilling when used against Americans who are protesting law enforcement brutality.”

Operators can now directly track and follow the movements of protesters, direct law enforcement on the ground, and identify and add faces of demonstrators to police facial recognition databases. We do not know definitively whether police are using facial recognition to track and surveil Black Lives Matter protesters right now, because of a lack of transparency around law enforcement's use of the technology. However, facial recognition poses a clear danger to protesters' safety and lawful right to anonymity.

The Ugly

The NYPD used facial recognition technology to track down a prominent Black Lives Matter activist, 28-year-old Derrick Ingram. On August 7, police officers, some wearing tactical gear, stood outside his home with a document entitled “Facial Identification Section Informational Lead Report,” allegedly containing a photo taken from his Instagram account. Ingram live-streamed the encounter, repeatedly asking law enforcement to produce a search warrant, which they refused to do. Police left only after protesters gathered outside his apartment in support of Ingram.

Derrick Ingram

An NYPD spokesperson confirmed, “The NYPD uses facial recognition as a limited investigative tool, comparing a still image from a surveillance video to a pool of lawfully possessed arrest photos.” It remains unclear, however, whether the photo of Ingram captured from social media was used in the investigation; if it was, that would constitute a breach of the department's own policy, which only allows the use of still images pulled from surveillance video or arrest photos.

Racism in AI

A landmark 2018 study by researchers at Stanford University and MIT found facial recognition error rates of just 0.8% for light-skinned men but as high as 34.7% for dark-skinned women.

The findings raise questions about how today’s neural networks, which learn to perform computational tasks by looking for patterns in huge data sets, are trained and evaluated. For instance, according to the paper, researchers at a major U.S. technology company claimed an accuracy rate of more than 97 percent for a face-recognition system they’d designed. But the data set used to assess its performance was more than 77 percent male and more than 83 percent white.

Larry Hardesty, MIT News Office
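The methodological fix the study points to is disaggregated evaluation: reporting error rates per demographic subgroup rather than a single headline accuracy on a skewed benchmark. Below is a minimal sketch of that idea, using a tiny invented results table with skin_type, gender, and correct columns; the numbers are illustrative only and loosely mirror the pattern the study reported.

```python
# Minimal sketch of disaggregated evaluation: one overall accuracy number
# can hide large error-rate gaps between subgroups. Data is invented.
import pandas as pd

results = pd.DataFrame({
    "skin_type": ["light"] * 6 + ["dark"] * 4,
    "gender":    ["man", "man", "man", "woman", "woman", "woman",
                  "man", "woman", "woman", "woman"],
    "correct":   [1, 1, 1, 1, 1, 0,
                  1, 0, 0, 1],
})

# A single overall number blends the groups together...
print("overall accuracy:", results["correct"].mean())

# ...while breaking it down by subgroup exposes who bears the errors.
error_by_group = 1 - results.groupby(["skin_type", "gender"])["correct"].mean()
print("error rate by subgroup:\n", error_by_group)
```

The same disaggregation applied to a benchmark that is more than 77 percent male and 83 percent white would immediately reveal how little the headline figure says about performance on dark-skinned women.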

The ACLU conducted a similar study in which Amazon’s facial recognition system, Rekognition, falsely matched 28 members of Congress with mugshots of people arrested for crimes. The false matches disproportionately involved people of color, including six members of the Congressional Black Caucus.

Image Credit: ACLU

In the United States alone, facial recognition has led to the wrongful arrests and accusations of three Black men, Robert Julian-Borchak Williams, Nijer Parks, and Michael Oliver, for crimes they did not commit.

The Bottom Line

Image Credit: TechCrunch

Technology perpetuates systemic racism. It is imperative that law enforcement be more transparent and accountable about its use of facial recognition technology moving forward, and that the nation as a whole move to ban federal use of these technologies through the Facial Recognition and Biometric Technology Moratorium Act.