Facial Recognition Technology at the Texas Border

Facial recognition technology is currently being used at the border in Texas — but concerns about its flaws are rising.

Image Credit: NPR

Facial Recognition and Biometric Technology at the Border

Facial recognition, a form of biometric technology, is being used by the U.S. Customs and Border Protection at the Brownsville and Progreso Ports of Entry in Texas. Biometric technology software identifies individuals using their fingerprints, voices, eyes, and faces. This technology is being used at the border to compare surveillance photos taken at the ports of entry to passport and ID photos within government records. While this may seem simple enough, concerns about the ethics and accuracy of the technology are rising.

Drawbacks

One of the most dangerous flaws of facial recognition technology is that it is disproportionately inaccurate when used to identify people of color, transgender and nonbinary individuals, and women. A 2018 study conducted by MIT found that “the error rates for gender classification were consistently higher for females than they were for males, and for darker-skinned subjects than for lighter-skinned subjects,” with error rates for the darkest-skinned women reaching 46.5% to 46.8% in some of the systems tested. In other words, these systems misidentified such women nearly half the time. These extremely high error rates show that facial recognition technology is unreliable and could subject people to unnecessary secondary inspections, unfounded suspicion, and even harassment at the ports of entry.

The concerns do not stop there. Because facial recognition technology is still relatively new, the US does not have comprehensive laws regulating its use, making the technology easier to abuse. Without regulation, the government is not required to be transparent about how it uses facial recognition, and it remains unclear how, and for how long, the resulting data is stored. In addition, questions about the constitutionality of biometric technology have recently been brought to light, with some pointing out that its use could violate the Fourth Amendment.

While Customs and Border Protection claims that travelers have the option to opt out of these photographs, the ACLU reports that travelers who choose to opt out face harassment by agents, secondary inspections, and additional questioning; some travelers have even had their requests denied because they did not inform agents that they would be opting out before reaching the kiosks.

Because of inaccurate results and concerns over privacy, it is understandable that travelers may choose not to participate in facial recognition, yet doing so may lead to questioning and harassment. Facial recognition at the border is a lose-lose situation, no matter what travelers choose to do.

How Will AI Acknowledge Our Differences in Healthcare?

A few weeks ago, for a hackathon, my team was researching heart attacks for our heart attack detection wearable. Halfway through our research, we learned that women and men can experience different heart attack symptoms, a difference that often goes unacknowledged. This led us in a new direction: Heartware, a woman-centered heart attack wearable using AI trained on female-specific data.

It was this project that made me realize the need for awareness of bias in AI, especially in the context of healthcare.

Why is AI being used in the healthcare industry?

AI and machine learning today can be leveraged as tools to help medical care workers make more accurate, faster, and cheaper diagnoses, such as:

  • Diagnosing skin cancer like a dermatologist
  • Analyzing a CT scan for strokes like a radiologist
  • Detecting cancer on a colonoscopy like a gastroenterologist

Put simply, AI works by training on past data to make predictions about new data it hasn’t seen before. Just as an experienced doctor has studied and seen many examples of skin cancer versus not skin cancer, AI can help classify these cases with greater precision.
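To make that train-then-predict loop concrete, here is a minimal Python sketch using scikit-learn. The data is purely synthetic, and the benign-versus-malignant framing is an illustrative assumption rather than a real clinical dataset:

```python
# Minimal sketch: train on past (labeled) cases, then predict on unseen ones.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for past cases (e.g., lesion features -> benign/malignant).
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)            # "study" the past examples
predictions = model.predict(X_test)    # classify cases the model has never seen
print(f"Accuracy on unseen cases: {accuracy_score(y_test, predictions):.2f}")
```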

Why do biases matter in healthcare?

In certain cases, such as the impact of gender on heart attacks, factors relating to one’s identity may actually affect the way certain conditions need to be diagnosed or treated.

In healthcare, it is important to acknowledge these factors for the benefit of the patient when designing algorithms for these use cases. On the flip side, it is just as important not to exacerbate the systemic divides already deeply ingrained in our healthcare system.

Most often, the harmful bias in AI emerges from a training dataset that does not accurately represent the population it will be deployed on — such as a skin cancer detection algorithm that was not trained on enough data from darker-skinned individuals.
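One concrete way to catch this is to compare the training set’s demographic mix with the population the model will serve. Below is a rough Python sketch of such a check; the column name, group labels, and population shares are hypothetical placeholders:

```python
# Hedged sketch: does the training data's demographic mix match the
# population the model will be deployed on? (All values are made up.)
import pandas as pd

train = pd.DataFrame({"skin_tone": ["lighter"] * 900 + ["darker"] * 100})
population_share = {"lighter": 0.60, "darker": 0.40}  # assumed deployment population

train_share = train["skin_tone"].value_counts(normalize=True)
for group, expected in population_share.items():
    observed = train_share.get(group, 0.0)
    flag = "UNDER-REPRESENTED" if observed < expected else "ok"
    print(f"{group}: {observed:.0%} of training data vs {expected:.0%} of population -> {flag}")
```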

Unwanted AI bias can creep into healthcare data through:

  • Samples: We must ensure that there is adequate diversity in training datasets.
  • Design: What was the original intention behind creating the algorithm? Was it to maximize profit?
  • Usage: Physicians must be aware of the details of the AI models they’re using to ensure ethical deployment.

Case Study: Healthcare Risk-Management Algorithm

Consider an algorithm that detects which patients are at the highest risk and may benefit most from specialized care programs, and classifies which individuals should qualify for this extra care.

This type of algorithm is used on over 200 million individuals in the US. However, a 2019 study showed that significant racial bias existed in this algorithm: black patients assigned the same risk score as white patients had, on average, 26.3% more chronic illnesses than their white counterparts.

The ground truth this algorithm was built on was the assumption that those who spend more money on healthcare have higher needs. This was a convenient choice because spending data was easy to obtain and served as an efficient quantitative indicator that made data cleaning simple.

However, when black and white patients spent the same amount, this did not directly translate to the same level of need. This is because healthcare costs and race are highly correlated. Patients of color are more likely to have reduced access to medical care due to time, location, and cost constraints.
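The mechanics of that proxy problem can be sketched with a toy example. The numbers below are entirely synthetic and exist only to illustrate the pattern described above: two groups with identical need but different spending, ranked by a cost-based score:

```python
# Illustrative only: equal medical need, unequal spending, cost-based selection.
import pandas as pd

patients = pd.DataFrame({
    "group":              ["A"] * 5 + ["B"] * 5,
    "chronic_conditions": [4, 3, 3, 2, 1,  4, 3, 3, 2, 1],        # need is identical
    "annual_cost":        [9000, 7000, 6500, 4000, 2000,          # group B spends less
                           6000, 4500, 4000, 2500, 1200],         # for the same need
})

# Proxy label: enroll the top 30% of spenders in the extra-care program.
cutoff = patients["annual_cost"].quantile(0.7)
selected = patients[patients["annual_cost"] >= cutoff]

print(selected.groupby("group").size())                          # group A dominates
print(patients.groupby("group")["chronic_conditions"].mean())    # yet need is equal
```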

This bias, the result of a flawed ground truth, went unidentified until the algorithm had been deployed on millions of Americans. This exemplifies the importance of considering context when building technology meant to truly benefit all.

How can we improve our algorithms?

Because AI has so much potential for good in the healthcare industry, it is unreasonable to rule it out completely. So what can be done to improve?

Obtaining Better Data

At the moment, it is very difficult to compile a large, diverse, high-quality medical dataset.

The public is becoming more distrustful of sharing their data, even for a worthy cause, and hospitals have little incentive to share data for fear of losing patients to other hospitals. In addition, privacy laws protecting data, the sanctity of medical data, and the consequences of errors in sharing it all make good data more difficult to obtain.

On top of this, there is also a technical barrier due to the limited interoperability between medical record systems.

Many datasets will be inherently biased as a result of the sourcing method, much as military datasets under-represent women because most service members are male. Therefore, datasets used to train medical AI on a large scale will probably need to be curated in a very intentional manner.

Diversity Among Developers

A diverse training dataset alone will not guarantee the elimination of unwanted bias.

A lack of diversity among the developers and investors behind medical AI tools can implant bias through the framing of problems from the perspective of majority groups, as in the case study above. Implicit biases and assumptions about data can allow potentially major biases to go ignored.

What to look out for:

Steps to spot bias before an algorithm is deployed (a small audit sketch follows this list):

  1. Audit algorithms for potential pre-identified biases.
  2. Dig deeper into where and how the data was obtained and look for flaws in the data that could lead to bias.
  3. Consider how this will be deployed across a diverse array of patient populations.
  4. Follow this process into deployment by continually monitoring bias in real-time for unanticipated outcomes.
  5. At every step, ensure communication and transparency with providers and patients.
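As one hedged illustration of step 1 (and of the ongoing monitoring in step 4), the short Python sketch below compares error rates across patient groups; the group names, labels, and predictions are made-up placeholders:

```python
# Hedged audit sketch: overall accuracy can look fine while one group
# bears most of the errors. (All values below are invented.)
import pandas as pd

audit = pd.DataFrame({
    "group":      ["white", "white", "white", "black", "black", "black"],
    "true_label": [1, 0, 1, 1, 1, 0],
    "predicted":  [1, 0, 1, 0, 0, 0],
})

audit["error"] = audit["true_label"] != audit["predicted"]
print("Overall error rate:", audit["error"].mean())
print(audit.groupby("group")["error"].mean())   # per-group error rates
```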

How will we move forward?

There is still a lot we don’t know.

A potential solution to the data issue is a “more with less” approach to training models. This would hopefully allow us to create accurate models with more limited data, decreasing the need for huge datasets.
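One common reading of a “more with less” approach is transfer learning: start from a model pretrained on a large generic dataset and fine-tune only a small part of it on the limited medical data available. The sketch below assumes PyTorch and torchvision are installed, and the tiny random batch stands in for a small, curated medical image set:

```python
# Hedged transfer-learning sketch: reuse pretrained features, train a new head.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)  # pretrained backbone
for param in model.parameters():
    param.requires_grad = False                 # freeze general-purpose features
model.fc = nn.Linear(model.fc.in_features, 2)   # new head, e.g. benign vs. malignant

# Tiny synthetic batch standing in for a small, carefully curated dataset.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(3):                              # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
print(f"final training loss: {loss.item():.3f}")
```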

There are also many questions regarding how these algorithms will be used. Will health insurance companies use AI to rack up insurance costs for people of color due to their higher risk? Will different treatments be used depending on a patient’s insurance status or ability to pay? We must keep asking these questions as we phase more AI into our decision-making.

The 21st century physician will need to have at least a basic understanding of how the algorithms they use work and who they were built for.

Key Takeaways

  • It is especially critical in healthcare that we are aware both of the harmful biases that exist (such as racial bias in risk assessment) and of the necessary distinctions that are missing (such as gender-specific differences in heart attack detection).
  • Algorithms built on initial assumptions about data that miss important context (such as the correlation between race and healthcare costs) can be incredibly harmful in the long run.
  • Obtaining more diverse medical datasets, although necessary, is very challenging; a “more with less” approach to training models could help.

How Facial Recognition Affects People of Different Gender Identities

Image Credit: Getty Images

Artificial intelligence-powered surveillance technology has recently attracted controversy for its role in furthering discrimination against people of color and other marginalized groups.

This discrimination is seen in the many false arrests that have occurred due to AI misidentification and software bias. Numerous examples have come to light of police using facial recognition software to incorrectly identify and arrest “suspects.” Nijeer Parks, an African American man, was incorrectly matched to footage of a suspect and detained for several days, despite compelling evidence that he had no connection to the crime.

AI’s flaws also affect the LGBTQ+ community. Many facial recognition systems are programmed to sort individuals into binary gender categories. Non-binary, agender, and gender non-conforming individuals are forced into these categories incorrectly, which entirely ignores their gender identities, and transgender individuals are frequently misgendered.

Studies at CU Boulder and MIT found that “facial analysis services performed consistently worse on transgender individuals, and were universally unable to classify non-binary genders.” The Gender Shades project reported similar disparities in software from Microsoft, Face++, and IBM: according to its error analysis, 93.6% of the faces Microsoft’s software misgendered belonged to darker-skinned subjects, and 95.9% of the faces Face++ misgendered were female.
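To see how this kind of error analysis differs from a single accuracy number, the hedged sketch below separates each group’s own error rate from the share of all errors that fall on that group; the values are invented for illustration and do not reproduce the studies’ data:

```python
# Hedged sketch: error composition (who the errors fall on) vs. per-group error rate.
import pandas as pd

results = pd.DataFrame({
    "skin_tone":   ["darker"] * 4 + ["lighter"] * 6,
    "misgendered": [True, True, True, False,  False, True, False, False, False, False],
})

errors = results[results["misgendered"]]
print(errors["skin_tone"].value_counts(normalize=True))    # share of errors per group
print(results.groupby("skin_tone")["misgendered"].mean())  # each group's error rate
```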

Being misidentified by the system can have terrible ramifications for many people of varying gender identities if this type of facial recognition software continues to be used by law enforcement.

It’s important to note how the development of software systems influences their bias and inaccuracy. Congressional inquiries into the accuracy of facial recognition across a number of demographics, as well as a study by the National Institute of Standards and Technology (NIST), have found that the technology is highly inaccurate when identifying non-white, female, or non-cisgender people. It is most effective on white, cisgender men. This reflects the fact that 80% of software engineers are men, and 86% of software engineers are Asian or white.

We’re seeing the growing problems that stem from the lack of diversity in careers that heavily impact our lives. Software bias is a reflection of the bias of those who write the code. By continuing the use of software with documented bias, we perpetuate an endless loop of marginalization and discrimination. The only solution is increasing diversity in fields that affect our everyday life, like software engineering.

Efforts to ban and regulate facial recognition usage, particularly by law enforcement, have increased in the recent past.

On a federal level, Senator Ed Markey (D-MA) has proposed a bill known as the Facial Recognition and Biometric Technology Moratorium Act of 2020. This bill would impose restrictions on federal and state agencies who wish to use this technology, and render information obtained by law enforcement through facial recognition inadmissible in court.

Many cities have restricted and regulated law enforcement’s use of facial recognition. These cities, including Minneapolis, Portland, San Francisco, New Orleans, Boston, and many more, have taken action against the software. It is imperative that more cities and the federal government follow in their path and prevent facial recognition technology from being used by law enforcement in the future.

Examining the Role of the Indian American Tech Force in Algorithmic Injustice

Algorithmic injustice reflects the programmers behind 21st-century technology. This op-ed digs into the potential involvement of Indian American tech workers in producing discriminatory technology.

Image credit: The Economic Times.

Indian American men are foundational to the current global tech force. They hold thousands of software engineering roles at almost every major institution, yet their powerful influence on advancements in technology and potential complicity in creating injustice often go unnoticed and unchecked. Code is a reflection of the programmers who write it, and it is time we start taking a deeper look at the harms that high-caste, privileged Indian men perpetuate as the architects of our technology.

From the social media filters on our phones to the satellite systems that surround our planet, technology has become a ubiquitous part of human society. Built on some of the most complex algorithms and mathematical equations, technology is often perceived as a feat of innovation void of human error, when in reality it is a reflection of all the flaws that are ingrained in human society. Humans design A.I. systems and algorithms, and as a result those systems are saturated with their biases. The discriminatory impacts of A.I. and algorithms have been shown countless times, through harmful beauty artificial intelligence, criminal justice surveillance systems, discriminatory hiring and recruitment processes, and bias in healthcare tools. This technology adversely affects marginalized communities around the world in all facets of their lives.

While these examples are unsettling, they should not be shocking. If someone were to look into the demographics of the major tech companies and institutions, they would see that 92 percent of all software positions are held by whites and Asians, and 80 percent are held by men. Researchers and activists have shown that the disproportionate number of white men in the software industry is directly connected to bias in our algorithms (“Diversity in Tech by the Numbers”). Algorithmic bias most commonly arises through deep learning: deep learning algorithms make decisions based on trends found in large quantities of data, and that data reflects the racism, misogyny, and classism rampant in society.

Growing up as a woman interested in technology in a community of Indian American software engineers, I was exposed to deeply ingrained forces of misogyny, casteism, and classism from my male peers. Experiencing and seeing these behaviors in individuals from my community leads me to question the role that the 3+ million Indian software engineers might play in perpetuating algorithmic injustice. It is time we start questioning how their values and ways of thinking feed into the code and technology frameworks they are responsible for.

Harvard professor Ajantha Subramanian argues in her book “The Caste of Merit: Engineering Education in India” that current Indian American software engineers hold significant class and caste privileges in Indian society. She explains that those belonging to lower castes in India face structural barriers that keep them from pursuing higher education, in much the same way that America’s long history of racism has created institutionalized barriers in education for low-income, black and brown communities. Factors such as fewer academic resources, discrimination within classrooms, a lack of proper physical and emotional support, and a lack of meaningful familial engagement prevent students from lower-caste backgrounds from receiving the same quality of education as higher-caste students. The educational inequality in India’s schooling systems triggers a domino effect that leads to higher ratios of upper-caste individuals in elite engineering schools, allowing them to pursue well-paying careers in technology and to use the H-1B visa to immigrate to the United States.

Equality Labs, a South Asian American human rights startup, reported that “two-thirds of members of the lowest caste, called Dalits, said they have faced workplace discrimination due to their caste. Forty-one percent have experienced discrimination in education because of it. And a quarter of Dalits say they’ve faced physical assault — all in the United States” (NPR). Unfortunately, it does not stop there. Many Indian Americans are fervent Modi loyalists, and privileged Indian Americans applaud Trump’s and Modi’s actions in much the same way that privileged white Americans uplift Trump.

What this essentially means is that when it comes to algorithmic injustice, we must hold the Indian American IT sector accountable alongside other professionals. Just because Indian Americans are not white or because they are immigrants does not mean they have not benefited from oppressive institutions. We must understand their individual relationship to various social forces such as caste, race, gender, and ethnicity and how their personal role within these systems of oppression negatively influences their point of view, ultimately leading to biased and discriminatory technology.

Our next steps are twofold: increasing awareness and understanding of the problem while actively pushing for true diversity in STEM. As noted above, many researchers have examined the effects of a disproportionate number of white men in technology and have drawn links between that imbalance and algorithmic injustice. We need more researchers and digital rights activists assessing how Indian Americans specifically contribute to algorithmic injustice with their unique set of biases and prejudices, and how each of us might as well. Furthermore, we need to create educational and professional pathways for underrepresented minorities to secure jobs in technology, so that our technology serves all Americans and not just the privileged.