The future of AI: egalitarian or dystopian?

Once upon a time, artificial intelligence (AI) was viewed as distant and unachievable — nothing more than a fantasy to furnish the plots of science fiction stories. We have made numerous breakthroughs since, with AI software now powerful enough to understand natural language, navigate unfamiliar terrain, and augment scientific research. As COVID-19 reduced our ability to interact with each other, we’ve seen AI-powered machines step in to fill that void, and AI used to advance medical research toward better treatments. This ubiquity may only be the beginning, with experts projecting that AI could contribute a staggering $15.7 trillion to the global economy by the end of the decade. Unsurprisingly, many prosperous members of society view the future of AI optimistically, as one of ever-increasing efficiency and profit. Yet many on the other side of the spectrum look on far more apprehensively: AI may have inherited the best of human traits, our intelligence, but it has also inherited some of humanity’s worst: our bias and prejudice. AI — fraught with discrimination — is being used to perpetuate systemic inequalities. If we fail to overcome this, an AI-dominated future would be bleak and dystopian. We would be moving forward in time yet backwards in progress, accelerating mindlessly toward a less equitable society.

Towards dystopia is where we’re headed if we don’t reverse course. AI is increasingly being used to make influential decisions in people’s lives — decisions that are often biased. This is because AI is trained on past data to make future decisions, and that data often carries bias which the AI then inherits. For instance, AI hiring tools are increasingly used to assess job applicants. Trained on past employee data consisting mostly of men, the AI absorbs this bias and continues the cycle of disfavoring women, perpetuating the lack of diversity in key industries such as tech. This is absolutely unacceptable, and that’s to say nothing of the many other ways AI can be used to reinforce inequality. In what has been called the “tech to prison pipeline,” AI trained on historical criminal data is being used in criminal justice to inform sentencing decisions. Because African Americans are overrepresented in that training data, these systems have been shown to recommend harsher sentences for African Americans.
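To make that mechanism concrete, here is a minimal sketch in Python, using made-up hiring records and a hypothetical score() helper rather than any real vendor’s system: a model fit to historically skewed decisions ends up reproducing the skew, because keywords that appeared mostly on rejected resumes drag down every future applicant who uses them.

```python
# Minimal sketch (hypothetical data) of how a screening model inherits bias:
# if past hiring decisions favored men, a model fit to those decisions will too.
from collections import Counter

# Hypothetical historical hiring records: (resume_keyword, gender, hired)
history = [
    ("robotics club", "m", True), ("robotics club", "m", True),
    ("women's chess club", "f", False), ("debate team", "m", True),
    ("women's coding club", "f", False), ("debate team", "f", False),
]

# "Training": estimate the historical hire rate associated with each keyword.
hires, totals = Counter(), Counter()
for keyword, _, hired in history:
    totals[keyword] += 1
    hires[keyword] += hired

def score(keyword: str) -> float:
    """Score a new resume by the historical hire rate of its keyword."""
    return hires[keyword] / totals[keyword] if totals[keyword] else 0.5

# Keywords that appeared mostly on rejected (historically female) resumes
# now drag down every new applicant who uses them.
print(score("robotics club"))        # 1.0
print(score("women's chess club"))   # 0.0
```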

To move towards a future with AI that is not only intelligent but fair, we must enact regulation to outlaw discriminatory uses, and ensure that the developers of AI software are diverse, so that their perspectives are reflected in the software they create.

Perhaps counterintuitively, a world with fair AI will see social justice advanced even further than a world before any AI. The sole reason AI has become unfair is that humans themselves hold deep biases — which AI has absorbed. But with fair AI replacing humans in decision making, we would, by definition, be at a state of zero bias, and thus greater equality.

Achieving fair AI may be the key to a better future — one of increased economic prosperity, furthered scientific progress, and more equity. But in the meantime, we must be diligent in ensuring that the AI being used reflects the best of humanity, rather than our worst.

AI is being used to enhance the performance of your favorite athletes

When we think about the way artificial intelligence is used in sports, we have to look back to the past. Take Billy Beane, the general manager of the Oakland A’s, a professional Major League Baseball team, who used quantitative data to predict which undervalued players would succeed in the MLB at a low cost. The strategy worked well: the A’s reached the playoffs in consecutive seasons despite one of the smallest payrolls in baseball. Beane received many accolades and became the subject of the film Moneyball. Fast forward to today, and we see analytics and artificial intelligence being used across multiple sports industries. Names like Theo Epstein (Chicago Cubs), Sam Presti (Oklahoma City Thunder), and Zinedine Zidane (Real Madrid) are pioneers who have used AI analytics to help them make decisions on trades, player acquisitions, drafting, and contract negotiations throughout the sports world. Beyond the front office, artificial intelligence is employed to make more accurate decisions about sports rules and regulations, to protect player safety, and to improve athlete performance. Take a few examples: automated pitch-tracking systems that show the audience whether the umpire’s call was correct, computer vision algorithms used during NBA games to analyze players’ shot form, and, perhaps most importantly, AI that analyzes concussion impacts to predict whether a force to the head has actually caused an injury for NFL players. Examples like the last one show the impact artificial intelligence can have on sports, contributing to player safety and recovery and improving the experience of both players and fans.

So how exactly do all of these things work? In sports, data is king. Athletes execute hundreds of plays and actions in a single game or season, producing treasure troves of data that AI neural networks can analyze to make better predictions about players. Sports have a huge need for statisticians, and nearly every statistic related to a given sport is recorded. When thinking about the way AI is used in sports, then, the concepts of big data and distributed data are significantly important. Take Sportlogiq, a Canadian startup focused on providing NHL broadcasters with commentary generated by natural language processing neural networks, which compare live play to players’ historical statistics and analytics. If a player is performing better than they typically do, the neural network prompts the broadcaster to discuss it; to make such a prediction, the network must first analyze mountains of data about that specific player. Or take Nike’s smart basketball, analytics software used by NBA teams to improve player performance. It analyzes every bounce of the ball and can identify different segmentation points on a player’s fingers to determine exactly where they are dribbling the ball, how they grip the ball when they shoot, or even how they attempt a steal or palm the ball when taking it up the court. These small, specific data points are recorded thousands of times, and players receive constant feedback on how to improve specific parts of their game.
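As an illustration of the kind of comparison described above, here is a minimal sketch with made-up numbers and a hypothetical flag_highlight() helper (not Sportlogiq’s actual method): flag a player whose output tonight is well above their historical norm, the sort of signal that could prompt a broadcaster to comment.

```python
# Minimal sketch (hypothetical data) of the comparison described above:
# flag a player for the broadcaster when tonight's output is well above
# their historical average, measured in standard deviations (a z-score).
from statistics import mean, stdev

def flag_highlight(history: list[float], tonight: float, threshold: float = 2.0) -> bool:
    """Return True when tonight's stat is `threshold` standard deviations above the player's norm."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return tonight > mu
    return (tonight - mu) / sigma >= threshold

# e.g. a winger who usually takes ~2 shots per game suddenly takes 7
shots_per_game = [2, 1, 3, 2, 2, 1, 3, 2]
print(flag_highlight(shots_per_game, tonight=7))  # True -> prompt the commentary
```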

Both of these examples illustrate the dual role artificial intelligence is playing in sports and show how powerful technology can be used to revolutionize the games we watch every day. There is a clear trend toward artificial intelligence in sports, and a growing market for AI sports companies. This can be a great field to get into, and one that many would find enjoyable.

The Weird, Weird World of Building the “Virtual Wall”

Photo: JR Peers/NPR

Restricting children and their families from crossing the border is not new for the United States. America has had a continuous, destructive history with immigration since its founding, marked by a constant pursuit of tighter border control. As we see photos of Haitian immigrants being terrorized by border patrol agents, or of families living in squalor in Customs and Border Protection (CBP) tents, it is not surprising that a country ingrained with such ideologies is seeking new developments to bring border patrol to the next level: a level whose foundation is artificial intelligence and technology.

“Virtual Border Wall,” “Smart Wall”: these phrases have bounced around different presidential administrations since the beginning of the ’90s. But what exactly do they mean?

“Every president since Bill Clinton has chased this technological dream”

J. Weston Phippen, Politico

The past couple of years have brought significantly more attention to building a “bigger,” “better,” more reliable wall, not necessarily a physical one, but one lined with security cameras, technology, artificial intelligence, and drones to monitor and deter refuge-seekers from entering. Older systems were not sufficient: cameras were equipped with night vision and thermal imaging, but people would still sneak through. Even Donald Trump’s administration, whose cries for a “Yuge, beautiful wall” could be heard from space, signed an agreement with Anduril, a military technology company, to build a smart wall. At the beginning of his administration, Joe Biden quietly included a section in the U.S. Citizenship Act of 2021 named “Deploying Smart Technology at the Southern Border,” focused on adding smart technologies to the border between Mexico and the U.S. Following this, more than 130 Anduril Sentry Towers were deployed at the southwestern border of the U.S.

What is Anduril? Named for the sword of a fictional “Lord of the Rings” character, it is now used in reality as a monitoring system on human lives. There is an extensive amount of reporting on Anduril’s founder, Palmer Luckey. Luckey founded a company called Oculus (yes, THE Oculus) and sold it to Facebook for $2 billion at 21 years old. He was pushed out of Facebook in 2017; many speculate it was because of the political controversy that surrounded him, after he was found to have donated $10,000 to a far-right smear campaign against Hillary Clinton. During that time, he was also accused of stealing code for his virtual reality system. Although he was cleared of theft, he was found to have violated his non-disclosure agreement and was ordered to pay $50 million to the company. In 2018, Luckey founded Anduril.

Palmer Luckey, a founder of Anduril, among the equipment at his company’s testing range.
Photo: Philip Cheung/The New York Times

The “Sentry Towers” that Anduril created are up to 33 feet tall and can see up to 1.5 miles. Their 360-degree cameras, equipped with facial recognition technology, detect the presence of a human and alert nearby CBP agents with the exact GPS location. The towers flaunt persistent autonomous awareness. Following the proposal of the “SMART Act,” a bill introduced by two Texas congressmen to curb the price of building a wall and implement these smart technologies at the border, Luckey was brought in as a consultant. Although the “SMART Act” was never passed, Luckey took Anduril to CBP’s INVNT (the innovation team of Customs and Border Protection), which was immediately impressed. Anduril got major aid in 2018 as well: Trump shut down the government to raise $5 billion for his “big beautiful wall,” but in the end the money was dedicated to building a smarter, technological wall, a great boost for Anduril. Anduril was also given seed money by Peter Thiel, a German-born billionaire who was part of Trump’s transition team and recently hosted a fundraiser for a Trump-backed challenger to Liz Cheney. Thiel’s disdain toward immigrants is extremely apparent; he has staffed his company with people who “savored the wit” of websites like VDARE (an anti-immigrant, white nationalist, alt-right group), and Anduril nourishes this push for an anti-immigrant stance. Now, in the Biden administration, Anduril is thriving. According to Anduril’s website, it is currently delivering “High-Tech Capacity” for Biden’s border security. Google Cloud was also reported to be working in tandem with Anduril on this virtual wall last October.

However, many migrants are adapting to Anduril’s technologies and camera systems, finding more dangerous routes to avoid them. As Matt Steckman, Anduril’s chief revenue officer, stated in an interview, “you’ll see traffic sort of squirting to the east and west of the systems”: migrants are finding detours to reach the refuge they seek, even if it means almost certain death in such rough conditions. Many debate whether this is still a reasonable way to approach border control, even setting Anduril’s technologies aside; the approach is known as “prevention through deterrence.” First introduced by the Clinton administration in 1994, the plan has several parallels to the current state of border control, calling both to “increase the number of agents on the line” and to “make effective use of technology” in order to raise “the risk of apprehension high enough to be an effective deterrent.” However, since the plan was put in place in 1994, immigrants have not been deterred; in fact, southwest border encounters in 2021 reached a record high.

Both sides of the aisle truly believe that a “Smart Wall” is an ethical, reliable way to control the border. Biden has stated that he thinks a virtual wall is the “humane alternative to a physical wall” and could be used as a way to safely identify migrants who are dangerously crossing the border. However, migrants are rerouting onto paths that are more treacherous and deadly, and deaths among refuge-seekers have reached record highs, undermining the safe identification and relocation of migrants that Biden hopes for. Creating a balance between safety and humanity is difficult, but it is far more difficult when those in charge of creating these technologies are anti-immigrant and anti-refugee. Does that create a fair foundation for a “humane” border? It is evident that border control and its technologies still need to be discussed and re-evaluated, in a conversation that includes diverse voices and perspectives, which it does not currently have enough of.

Below are links to learn more about the virtual wall. Programs like Encode Justice are constantly working with legislators to keep conversations about AI and ethics at the forefront.

HIIDE in Plain Sight

Photo: Tauseef Mustafa/AFP via Getty Images

In August 2021, the United States withdrew its troops from Afghanistan and left behind Handheld Interagency Identity Detection Equipment, also known as HIIDE. The device was built to identify terrorists using fingerprint and facial recognition as well as iris scans. Facial recognition technology (FRT) is software that analyzes, compares, and confirms an individual’s identity using available images. Iris scans work similarly, using the geometric pattern of a person’s eye the same way FRT uses the geometric pattern of a person’s face, except that infrared light is used to illuminate the unique characteristics of each iris (NIST). All of this advanced technology is boxed into a sleek, small, five-by-eight-inch portable device that appears completely harmless on the surface, but now, left to the Taliban, it has become one of their most powerful weapons.
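As a rough illustration of the matching step that face and iris systems share (not the actual HIIDE or ABIS pipeline, whose details are not described here), a capture is reduced to a numeric template and compared against enrolled templates with a similarity threshold. The hypothetical identify() helper below sketches that idea with made-up numbers.

```python
# Minimal sketch (hypothetical templates) of biometric matching:
# convert a capture into a numeric "template," then compare it against
# enrolled templates and accept the best match above a threshold.
import math
from typing import Optional

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Hypothetical enrolled templates (a real system derives these from images or scans).
enrolled = {
    "person_a": [0.12, 0.80, 0.33, 0.41],
    "person_b": [0.90, 0.10, 0.05, 0.60],
}

def identify(capture: list[float], threshold: float = 0.95) -> Optional[str]:
    """Return the enrolled identity whose template best matches the capture, if above threshold."""
    best_id, best_score = None, 0.0
    for person, template in enrolled.items():
        score = cosine_similarity(capture, template)
        if score > best_score:
            best_id, best_score = person, score
    return best_id if best_score >= threshold else None

print(identify([0.11, 0.82, 0.30, 0.40]))  # matches "person_a"
```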

The Taliban could use HIIDE to identify Afghans who assisted the U.S. military (Klippenstein). In an interview with NPR, investigative reporter Annie Jacobsen stated that the Defense Department had a goal of capturing biometrics on 80 percent of the Afghan population, creating a catalog of individuals who could possibly be linked to a crime (Inskeep). The enrollment of biometric data into the Automated Biometric Identification System (ABIS), the database where HIIDE’s data is processed and stored, originally began in Iraq to collect fingerprints found on bombs and match them to bombmakers. However, the Defense Department took it one step further in Afghanistan by collecting biometric data not only from Afghan special forces members, but also from civilians in patrolled villages, whether they were suspects or not.

The intense collection of biometric data is excused by Homeland Security’s privacy impact assessment of ABIS, which cites “its antiterrorism, special operations, stability operations, homeland defense, counterintelligence, and intelligence efforts around the world.” The invasiveness shows in odd questions asking Afghan individuals about their favorite fruits and vegetables or the names of their extended family members (Guo). Like HIIDE, the program appears harmless on the surface, yet it demonstrates the extent of information gathered on foreign individuals. The line between intrusive and precautionary is blurred.

Afghan National Police application data is stored in a U.S.-funded database, the “Afghan Personnel and Pay System.” United States Central Command did not respond to MIT Technology Review’s request for comment concerning the use of data about new recruits’ favorite foods and the like.

HIIDE’s data appears to be accessible to Inter-Services Intelligence, a Pakistani agency known to have worked closely with the Taliban in the past, creating an opportunity for Afghan civilians and military personnel to be hunted for their cooperation with the United States. In 2016, a mass kidnapping orchestrated by the Taliban took place in Kunduz, a city in northern Afghanistan. Witnesses report the Taliban using a device to scan everyone’s fingerprints, ultimately to identify the special forces members who were among the civilians. Ten were executed on the spot during the Kunduz kidnapping (TOLOnews). The possibility of a similar disaster occurring again is terrifying.

It is important to note that HIIDE is a tool, a neutral constituent in a political disaster. Advanced technologies such as those mentioned only pose a threat through their applications and through the dismissal of the concerns they raise. The lack of contingency plans and the carelessness seen during the evacuation of Afghanistan emphasize the need to remove biometric technology from the battlefield.

Deepfakes and the Spread of Misinformation

Image: Webroot

“Yellow journalism” is a phrase coined during the Spanish-American War to describe reporting based on dramatic exaggeration and, sometimes, flat-out fallacies. With its sensational headlines and taste for the theatrical, yellow journalism, or as it’s better known now, “fake news,” is particularly dangerous to those who regard it as the absolute truth. In the days of Hearst and Pulitzer, it was much easier to know what was fabricated and what was fact. However, in this age of technology, yellow journalism isn’t so “yellow” anymore. It’s the jarring blue glow emitted from millions of cell phones aimlessly scrolling through social media; it’s the rust-colored sand blowing against Silicon Valley complexes where out-of-touch tech giants reside; it’s the uneasy shifting of politicians’ black suits as their bodies fail to match up with their words; it’s the curling red lips of innocent women, their faces pasted onto figures entwined in unpleasant acts.

It’d simply be a redundancy to say the internet has allowed misinformation to spread, but it’s more necessary than ever to examine the media you’re consuming. Deepfakes, artificial images or videos made by overlaying someone’s face (usually a famous public figure’s) onto other footage so they can be manipulated to say anything, have begun to surface more and more frequently. Behind deepfakes is artificial intelligence, or AI: a machine exhibiting human-like intelligence by mimicking our behavior. This includes recognizing faces, making decisions, solving problems, and of course, driving a car, as we’ve seen with the emergence of Teslas. AI has been particularly eye-opening in revealing just how much trust we put into mere machines, and deepfakes are a perfect demonstration of how easily that trust can be shattered.

When you search “deepfakes,” some of the first results are websites where you can make your own. That’s how easy it is. The accessibility of such technology has long been seen as an asset, but now it’s as if Pandora’s box has been opened: once people realize virtually anything is possible, there’s no end to the irresponsible uses of the internet. Legally, however, many deepfake scandals can be considered defamation. A case recently came to light in Bucks County, PA, where a jealous mother created deepfakes of her daughter’s teammates, intended to frame them for inappropriate behavior. Police were first informed when one of the teammates’ parents reported that their daughter had been harassed with messages from an unknown number. The messages included “pictures from the girl’s social media accounts which had been doctored to look as if she was naked, drinking alcohol, or vaping.” These photos were intended to get the girl kicked off the team. Fortunately, police were able to trace the IP address and arrested the perpetrator. She now faces three misdemeanor counts each of cyber harassment of a child and harassment, proving that just because an act is done “anonymously” via the internet doesn’t mean you can’t get caught. In fact, the internet provides just as many opportunities for conviction as it does for narrow escape. As technology becomes more and more apt to cause damage, cyber harassment is treated as a serious crime; if convicted, the mother faces six months to a year in prison. Pennsylvania has active anti-cyberbullying legislation in place that emphasizes authorities’ right to intervene in incidents that occur off school property, and the state makes cyber harassment of a child a third-degree misdemeanor, punishable through a diversionary program.

Women have frequently been the victims of sexual violence via deepfakes. For example, “pornographic deepfakes exist [in] the realm of other sexually exploitative cybercrimes such as revenge porn and nonconsensual pornography.” According to the Fordham Law Review, one journalist described deepfakes as “a way for men to have their full, fantastical way with women’s bodies,” emphasizing that this is still a sexually abusive act, as it demeans and reduces women to nothing but fantastical objects. Additionally, given the uncertainty around how much of this new technology works, it’s easy for these videos to be leaked and for a woman to have her reputation ruined over something she herself never did. Deepfakes have been used to intimidate and invalidate powerful women as well; men who feel threatened by a woman’s rise in authority may see them as a means to bring her down.

In 2018, Rana Ayyub, a successful, budding journalist in Mumbai, fell under scrutiny after a deepfake of her face superimposed on a porn video came into circulation. The stress from the endless harassment sent Ayyub to the hospital, and she withdrew from public life, abandoning her aspirations of working in media. Forty-eight states as well as D.C. have laws against “revenge porn,” but there are still limitations on prosecuting websites that distribute this content. Section 230 of the Communications Decency Act is a federal law that shields websites from prosecution for content posted by third parties. Luckily, this immunity goes away if the website or webmaster actively takes part in distributing the content. Additionally, most states impose a fine and/or a prison sentence for the distribution of nonconsensual porn by a citizen. Federal legislation to address the problem of deepfake pornography, the Malicious Deepfake Prohibition Act of 2018, was introduced in 2018. Unfortunately, it didn’t advance, proving there’s still a long way to go in administering justice to victims of this heinous crime.

Most detrimental to American life as a whole — especially given our fiercely divided nation — is the use of deepfakes to spread political misinformation. With former President Trump’s social media presence considered a hallmark of his presidency, and the majority of citizens having access to presidential briefings on TV, our elected officials’ ideals are more available than ever. However, America has always allowed itself to be swept up in illusions. In the very first televised debate, Nixon versus Kennedy in 1960, Kennedy was widely believed to have been given an automatic edge because of his charisma and good looks. In this day and age, though, it’s crucial our country looks more than skin deep. A video of President Biden sticking his tongue out, and another of Biden making a statement that was proven to be fake, were both made from intricately spliced and edited audio clips. The second clip, reposted by one of Bernie Sanders’ campaign managers, showed Biden apparently saying “Ryan was right,” in reference to former Speaker of the House Paul Ryan’s desire to go after Social Security and Medicare. Even within the Democratic Party itself, fake media was being used to build support for a particular candidate, creating harmful disunity. However, change is on the horizon: the National Defense Authorization Act for Fiscal Year 2020 included deepfake legislation with three provisions. The first requires a comprehensive report on the foreign weaponization of deepfakes; the second requires the government to notify Congress of foreign deepfake misinformation targeting U.S. elections; and the third establishes a “Deepfake Prize” competition to incentivize the development of more deepfake recognition technologies.

In a world where media is so easily manipulable, it’s up to citizens to be smart consumers. By reading news from a variety of sources and closely examining the videos you watch, you have a better chance of not being “faked out” by deepfakes. Some tips for identifying deepfakes include: unnatural eye or mouth movement, lack of emotion, awkward posture, unnatural coloring, blurring, and inconsistent audio. Many people worry that in a world where anything can be fake, nothing is real. But there will always be journalists committed to reporting the facts and promoting justice rather than perpetuating lies. When the yellowed edges of tabloids crumple to dust, and the cell phone screens fade to black, truth — in its shining array of technicolor — will snuff out the dull lies.

AI In Hiring Processes: The Biases and Solutions

Image: World Economic Forum

Artificial intelligence (AI) is becoming increasingly prevalent in our everyday lives and has recently found its way into hiring practices all across the United States. In hopes of finding potential employees more efficiently, many employers are using machine-learning algorithms to search through social profiles and online resumes for the best person for the job based on set characteristics. As objective as these algorithms may seem, they can play a very negative role.

On the surface, the use of AI in hiring practices seems nothing but helpful. After all, what’s wrong with saving some time and money? However, studies have shown that this new technology raises several ethical issues, particularly by exacerbating unjust gender and racial biases.

When it comes to applying for positions at tech companies like Amazon, women face a significant disadvantage when applicants are selected using AI. Amazon’s “AI recruitment engine” was found to be biased against women’s applications: it would go through applications and penalize resumes that included the word “women’s,” as in “women’s chess club captain.” Additionally, it penalized graduates of all-women’s colleges, according to Reuters.

The selection of job applications using AI systems like Amazon’s recruitment engine further promotes gender bias in the workforce and excludes women from certain fields. In Big Tech companies like Microsoft and Google, women make up only a fraction of total employment, and AI involvement is only making this gender disparity worse.

Additionally, AI’s use in hiring may not be as efficient and helpful for applicants as it is for employers. When an applicant’s resume is processed through a tracking system, AI is used to detect keywords within the resume. If the resume lacks the keywords the system has been tuned to look for, it will most likely be rejected. This screening can negatively impact job seekers who are unaware that AI is evaluating their application.
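To illustrate, here is a minimal sketch of that keyword-screening step, using a hypothetical passes_screen() helper and invented keyword lists rather than any vendor’s actual system: a resume that describes the same experience in different words never reaches a human reader.

```python
# Minimal sketch (hypothetical keywords and resumes) of keyword screening:
# a resume that never uses the "expected" terms is filtered out before a
# human ever reads it, regardless of the applicant's actual experience.
REQUIRED_KEYWORDS = {"python", "sql", "machine learning"}  # assumed job-posting terms

def passes_screen(resume_text: str, min_matches: int = 2) -> bool:
    """Return True if the resume mentions enough of the expected keywords."""
    text = resume_text.lower()
    matches = sum(1 for kw in REQUIRED_KEYWORDS if kw in text)
    return matches >= min_matches

resume_a = "Built Python and SQL pipelines for reporting."
resume_b = "Automated data analysis and predictive modelling for reporting."  # same skills, different wording

print(passes_screen(resume_a))  # True
print(passes_screen(resume_b))  # False -> rejected despite equivalent experience
```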

A wide variety of solutions have been proposed by companies and legislators alike to counteract the negative effects of AI bias. Some companies have added programmed “nudges,” designed to remind a system that it should be considering certain groups equally. This helps correct bias but still does not fully account for the flaws of such software, and these practices have not been adopted by the majority of companies using AI in hiring.

Illinois has been quite progressive in encouraging transparency around hiring practices. Its legislation “requires companies to inform applicants that AI will be used to analyze their interview videos” (Waltz, 2019). However, it isn’t clear whether this actually protects applicants from the bias AI has been proven to exhibit (Jimenez, 2020). The law also protects applicants from immediate disqualification if they don’t want their applications subjected to AI examination. This legislation should absolutely be the model for moving toward a future with more transparency and concern for the wellbeing of people affected by AI’s biases.

As dystopian as it seems, AI and machine learning are advancing our world while leaving indelible marks on people’s lives, often changing them for the worse. We must work to change AI so that our globe continues to spin in the direction of prosperity and success.

Examining the Costs and Benefits of Biometrics


What are biometrics?

Biometrics are a means of identifying humans. These digital identifiers are unique to each individual and can be used to grant access to private networks. Commonly used biometric identifiers include fingerprints, voice, typing cadence, and other physiological or behavioral traits. Biometrics provide higher levels of security for data, systems, and other private networks, and these identifiers are more efficient and effective than standard security measures such as passwords and codes.

How are they used?

This form of identification plays a significant role in daily life. It is widely used in many situations, including unlocking a phone, airport security, criminal investigations, and more. The U.S. Department of Homeland Security utilizes biometric data on a large scale, filtering out travelers attempting to enter illegally, enforcing federal laws, and verifying the credentials of visa applicants. To date, its system has documented 260 million unique identities and processes over 350,000 cases per day. An individual may encounter biometric identification when using facial recognition or fingerprinting software to unlock their phone; most devices nowadays, including laptops, phones, and tablets, come equipped with high-tech identification tools such as facial recognition, iris scans, and fingerprinting.

Ethical Concerns

While these features offer convenience and efficiency that entice customers, activists argue over the ethics of storing biometric data. According to Andy Adler, a professor specializing in biometrics at Canada’s Carleton University, “[Biometric] systems are also vulnerable to all of the traditional security threats as well as all sorts of new ones and interactions between old and new ones.” The use of biometric software introduces new threats of identity theft and financial crime, as hackers target and steal biometric information to commit credit, tax, and insurance fraud, among other crimes.

Discrimination

Biometrics like facial recognition disproportionately misidentify people of color, women, and children. According to a study conducted at MIT, facial recognition technologies misidentify darker-skinned women as often as one in three times. This high error rate can lead to false accusations in criminal investigations, putting communities of color at risk. For example, it was recently discovered that a Black man, Robert Julian-Borchak Williams, was wrongfully accused of shoplifting $3,800 worth of goods from a store in Detroit. When law enforcement officers spotted a Black man in a red cap in a surveillance video, facial recognition software falsely identified Williams as the suspect. He was arrested and jailed for over 24 hours. His case was brought to public attention as a result of a hearing in Detroit focused on facial recognition and the increasing rates of false positives faced by darker-skinned people.

While these issues persist, biometric identifiers like facial recognition should not be widely used. It could take years to train artificial intelligence systems to meet the needs of modern-day society.

Examining the Role of the Indian American Tech Force in Algorithmic Injustice

Algorithmic injustice is representative of the programmers behind 21st-century technology. This op-ed is meant to dig into the potential involvement of Indian-American tech workers in producing discriminatory technology.

Image credit: The Economic Times.

Indian American men are foundational to the current global tech force. They hold thousands of software engineering roles at almost every major institution, yet their powerful influence on advancements in technology and potential complicity in creating injustice often go unnoticed and unchecked. Code is a reflection of the programmers who write it, and it is time we start taking a deeper look at the harms that high-caste, privileged Indian men perpetuate as the architects of our technology.

From the social media filters on our phones to the satellite systems that surround our planet, technology has become a ubiquitous part of human society. Built from some of the most complex algorithms and mathematical equations, technology is often perceived as a feat of innovation devoid of human error — when in reality, it reflects all the flaws ingrained in human society. Humans design A.I. systems and algorithms, and as a result, those systems are saturated with human biases. The discriminatory impacts of A.I. and algorithms have been shown and proven countless times, through harmful beauty artificial intelligence, criminal justice surveillance systems, discriminatory hiring and recruitment processes, and bias in healthcare tools. This technology adversely affects marginalized communities around the world in all facets of their lives.

While these examples are unsettling, they should not be shocking. If someone were to look into the demographics of the major tech companies and institutions, they would see that 92 percent of all software positions are held by whites and Asians, and 80 percent are held by men. Researchers and activists have shown that the disproportionate number of white men in the software industry is directly connected to bias in our algorithms (“Diversity in Tech by the Numbers”). The most common route to algorithmic bias is deep learning: deep learning algorithms make decisions based on trends found in large quantities of data, and that data reflects the racism, misogyny, and classism rampant in society.

Growing up as a woman interested in technology in a community of Indian-American software engineers, I was exposed to deeply ingrained forces of misogyny, casteism, and classism from my male peers. Experiencing and seeing these behaviors in individuals from my community brings me to question the role that the 3+ million Indian software engineers might play in perpetuating algorithmic injustice. It is time we start questioning how their values and ways of thinking feed into the code and technology frameworks they are responsible for.

Harvard professor Ajantha Subramanian states in her book “The Caste of Merit: Engineering Education in India” that current Indian American software engineers hold significant class and caste privileges in Indian society. She explains that those belonging to lower castes in India face structural barriers that prevent them from pursuing higher education, much as America’s long history of racism has created institutionalized barriers in education for low-income, Black and brown communities. Factors such as fewer academic resources, discrimination within classrooms, a lack of proper physical and emotional support, and a lack of meaningful familial engagement prevent students from lower-caste backgrounds from receiving the same quality of education as higher-caste students. The educational inequality in India’s schooling systems triggers a domino effect that leads to higher ratios of upper-caste individuals in elite engineering schools, further allowing them to pursue well-paying careers in technology and to use the H-1B visa to immigrate to the United States.

Equality Labs, a South Asian American human rights startup, reported that “two-thirds of members of the lowest caste, called Dalits, said they have faced workplace discrimination due to their caste. Forty-one percent have experienced discrimination in education because of it. And a quarter of Dalits say they’ve faced physical assault — all in the United States” (NPR). Unfortunately, it does not stop there. Many Indian Americans are fervent Modi loyalists, and privileged Indian Americans applaud Trump’s and Modi’s actions the way privileged white Americans uplift Trump.

What this essentially means is that when it comes to algorithmic injustice, we must hold the Indian American IT sector accountable alongside other professionals. Just because Indian Americans are not white or because they are immigrants does not mean they have not benefited from oppressive institutions. We must understand their individual relationship to various social forces such as caste, race, gender, and ethnicity and how their personal role within these systems of oppression negatively influences their point of view, ultimately leading to biased and discriminatory technology.

Our next steps are twofold: to increase awareness and understanding of the problem while actively pushing for true diversity in STEM. As aforementioned, many researchers have examined the effects of the disproportionate number of white men in technology and have drawn links between their overrepresentation and algorithmic injustice. We need more researchers and digital rights activists assessing how Indian Americans specifically contribute to algorithmic injustice with their unique set of biases and prejudices, and how each of us might as well. Furthermore, we need to create educational and professional pathways for underrepresented minorities to secure jobs in technology, to make sure our technology serves all Americans and not just the privileged.