AI and the Four-Day Work Week

The United Auto Workers’ (UAW) Stand Up Strike has recently led to tentative deals on new union contracts with Ford, Stellantis, and General Motors. The strike was notable for a variety of reasons — unique striking tactics, taking on all the “Big 3” automakers, and a stirring callback to the historic 1936–1937 Sit Down Strikes. In addition to demands for better pay, the reinstatement of cost-of-living adjustments, and an end to the tiered employment system — all of which the union won in new contracts — one unique (unmet) demand has attracted particular attention: the call for a four-day, thirty-two-hour workweek at no loss of pay. The demand addresses the unsettling reality of autoworkers laboring for over seventy hours each week to make ends meet.

The history of the forty-hour workweek is intimately tethered to autoworkers; workers at Ford were among the first to enjoy a forty-hour workweek in 1926, a time when Americans regularly worked over 100 hours per week. Over a decade later, the labor movement won the passage of the Fair Labor Standards Act (FLSA), codifying the forty-hour workweek as well as rules pertaining to overtime pay, minimum wages, and child labor. Sociologist Jonathan Cutler explains that, at the time of the FLSA’s passage, UAW leaders already had their eye on the fight for a thirty-hour workweek.

The four-day workweek has garnered attention in recent years as companies have experimented with a 100–80–100 model (100% of the pay for 80% of the time and 100% of the output). These companies boast elevated productivity, profits, morale, health, and general well-being. The altered schedule proved overwhelmingly popular among CEOs and employees alike. Many workers claimed no amount of money could persuade them to return to a five-day week. Accordingly, 92% of companies intend to continue with the four-day workweek indefinitely. It’s just a good policy all around: good for business, good for health, good for happiness.

While these overwhelmingly successful pilots have warmed businesses and employees up to the notion of shortening the week, one increasingly relevant contextual element may emphatically push the conversation forward: artificial intelligence (AI). Goldman Sachs has estimated that AI will boost productivity for two-thirds of American workers. Many white-collar professions in particular will see dramatic gains in efficiency through the integration of AI in the workplace. As reliance steadily shifts from human labor to AI, workers should gain accessible leisure time to actually enjoy the fruits of their labor. We ought to recognize that prosperity is not an end in itself, but is instrumental to well-being. If AI-driven productivity gains can sustain high output while providing greater time for family, good habits, and even consumption, our spirits, our health, and even our economy will reap the benefits.

However, a collective action problem hinders the full-throated, nationwide embrace of a shorter workweek. While many individual businesses find the four-day week profitable, it may not, prima facie, seem to be in a company’s interest to give up any labor time to the competition; a firm may expect to be outperformed for not matching its competitors’ inputs. But if all firms in the market adopted a four-day week (or were subject to regulations that secured it), they would be on a level playing field, and the extra day off might drive up aggregate demand forcefully enough to compensate firms with healthy returns. It follows that the best way to realize a shortened week is federal legislation, i.e., amending the FLSA to codify a thirty-two-hour workweek and mandate the corresponding overtime pay.

Representative Mark Takano of California has introduced a bill — alongside several colleagues — to accomplish just that, endorsed by a host of advocacy organizations, labor federations, and think tanks. Senate Health, Education, Labor, and Pensions Committee Chair Bernie Sanders has enthusiastically endorsed the idea, specifically citing the advantages AI brings to the workplace. Regarding the proposed legislation, Heidi Shierholz, President of the Economic Policy Institute, powerfully stated the following:

“Many workers are struggling to balance working more hours to earn more income against having more time to focus on themselves, their families, and other pursuits. However, while studies have shown that long working hours hurt health and productivity, taking control of work-life balance is often a privilege only afforded to higher-earning workers… This bill would help protect workers against the harmful effects of overwork by recognizing the need to redefine standards around the work week. Reducing Americans’ standard work week is key to achieving a healthier and fairer society.”

Despite the rosy picture I have painted, the odds of getting Fridays off forever anytime soon — whether through union action or new labor law — are slim, just as they were for the UAW. Sorry. Such are the pragmatics of political and economic reality. However, as AI continues to change the game, we will be positioned to ask cutting questions about the nature of work — to be creative and imagine what a new world in the age of the AI Revolution could look like. Maybe this is a part of it: humanity’s ultimate achievement culminates in… Thirsty Thursdays. Every week.

The future of AI: egalitarian or dystopian?

Once upon a time, artificial intelligence (AI) was viewed as distant and unachievable — it was regarded as nothing more than a fantasy to furnish the plots of science fiction stories. We have made numerous breakthroughs since, with AI software now powerful enough to understand natural language, navigate unfamiliar terrain, and augment scientific research. As COVID-19 reduced our ability to interact with each other, we saw AI-powered machines step in to fill that void and AI used to advance medical research towards better treatments. This ubiquity of AI may only be the beginning, with experts projecting that AI could contribute a staggering $15.7 trillion to the global economy by the end of the decade. Unsurprisingly, many prosperous members of society view the future of AI optimistically, as one of ever-increased efficiency and profit. Yet many on the other side of the spectrum look on much more apprehensively: AI may have inherited the best of human traits, our intelligence, but it has also inherited one of humanity’s worst: our bias and prejudice. AI — fraught with discrimination — is being used to perpetuate systemic inequalities. If we fail to overcome this, an AI-dominated future would be bleak and dystopian. We would be moving forward in time, yet backwards in progress, accelerating mindlessly towards a less equitable society.

Towards dystopia is where we’re headed if we don’t reverse course. AI is increasingly being used to make influential decisions in people’s lives — decisions that are often biased. This is because AI is trained on past data to make future decisions, and that data often carries bias which the AI then inherits. For instance, AI hiring tools are frequently used to assess job applicants. Trained on past employee data that consists mostly of men, the AI absorbs this bias and continues the cycle of disfavoring women, which perpetuates the lack of diversity in key industries such as tech. This is absolutely unacceptable, and that’s to say nothing of the many other ways AI can be used to reinforce inequality. In what has been called the ‘tech to prison pipeline’, AI trained on historical criminal data is being used in criminal justice to inform sentencing decisions. However, African Americans are overrepresented in that training data, and as such, these systems have been shown to recommend harsher sentences for African Americans.
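To see the mechanism concretely, here is a minimal sketch in Python of a screening model trained on biased historical hiring records. The data is synthetic and the feature names and numbers are invented for illustration; this is not any real company’s system.

```python
# Minimal sketch: a hiring model trained on biased historical data
# reproduces that bias. All data here is synthetic and for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Features: years_experience, is_male (1 = male, 0 = female)
years_experience = rng.uniform(0, 10, n)
is_male = rng.integers(0, 2, n)

# Historical hiring decisions: equally qualified women were hired less often.
p_hire = 0.1 + 0.05 * years_experience + 0.25 * is_male
hired = rng.random(n) < np.clip(p_hire, 0, 1)

X = np.column_stack([years_experience, is_male])
model = LogisticRegression().fit(X, hired)

# Two identical candidates, differing only in gender:
candidates = np.array([[5.0, 1],   # male, 5 years of experience
                       [5.0, 0]])  # female, 5 years of experience
print(model.predict_proba(candidates)[:, 1])
# The male candidate receives a noticeably higher predicted "hire" score,
# even though the qualifications are identical: the model learned the bias.
```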

To move towards a future with AI that is not only intelligent but fair, we must enact regulation to outlaw discriminatory uses and ensure that the developers of AI software are diverse, so that their perspectives are reflected in the software they create.

Perhaps counterintuitively, a world with fair AI could see social justice advanced even further than a world before any AI. The reason AI has become unfair is that humans themselves hold deep biases, which AI has absorbed. But if genuinely fair AI replaced humans in decision making, we would, by definition, approach a state of zero bias, and thus greater equality.

Achieving fair AI may be the key to a better future — one of increased economic prosperity, furthered scientific progress, and more equity. But in the meantime, we must be diligent in ensuring that the AI being used reflects the best of humanity, rather than our worst.

Technology: Does it Harm or Help Protestors?

Image Credit: CNN

From spreading information to organizing mass protests, technology can be a powerful tool to create change. However, when used as a weapon, AI can be detrimental to the safety of protesters and undermine their efforts.

In the past few years, the vast majority of international protests have used social media to increase support for their cause. One successful mass international protest was the 2019 climate strike. According to the Guardian, about 6 million people across the world participated in the movement. Even though it began as a one-person movement, social media enabled its expansion. That use was largely positive, but there were negative uses too. For instance, the spread of misinformation became a growing issue as the movement became more well-known. While some misinformation came from opponents of the movement, the source of most of it remains unknown. Luckily, almost all of the false information was soon fact-checked and debunked, and technology played a bigger role in strengthening these strikes than in undermining them. Unfortunately, this is not always the case. The Hong Kong protests of 2019 showed how AI can be weaponized against protestors.

Mainland China and Hong Kong

In order to recognize the motivations behind the Hong Kong protests, it’s crucial to understand the relationship between mainland China and Hong Kong.

Until July 1, 1997, Hong Kong was a British colony, but it was handed back to China under the principle of “One Country, Two Systems.” This meant that while Hong Kong was technically part of China, it still had a separate government, which gave its citizens more freedom and a number of civil liberties not afforded to citizens of mainland China. This arrangement is currently set to expire in 2047, and when it does, the people of Hong Kong stand to lose the freedoms they hold and become subject to the government of mainland China.

One development that would give mainland China power over Hong Kong sooner is the passage of an extradition bill in Hong Kong. To put it simply, an extradition agreement between two or more jurisdictions allows a criminal suspect to be taken out of their home jurisdiction to stand trial in another. For example, if a citizen of Hong Kong were suspected of committing a crime in mainland China, the suspect could be brought under mainland China’s jurisdiction to be tried. Many in Hong Kong feared the passage of such a bill, and it was unimaginable until the murder of Poon Hiu-wing.

The Murder of Poon Hiu-wing

On February 8, 2018, Chan Tong-kai and his pregnant girlfriend, Poon Hiu-wing, left Hong Kong for a vacation in Taiwan, where Chan murdered her. About a month later, after returning to Hong Kong, he confessed to the murder. Because the crime happened in Taiwan, with which Hong Kong does not have an extradition treaty, Chan could not be charged for it. In order to charge Chan for the murder, the Hong Kong government proposed an extradition bill on April 3, 2019. The bill would not only allow Chan Tong-kai to be tried for his crime, but would also open the door for mainland China to put suspects from Hong Kong on trial. According to pro-democracy lawmaker Claudia Mo, there are no fair trials or humane punishments in mainland China, and therefore the extradition bill should not be passed. Many citizens of Hong Kong evidently agreed, and in 2019, protests broke out in Hong Kong to oppose the bill.

2019 Hong Kong Protests & Usage of Technology

The 2019 Hong Kong protests drew millions of supporters, but what began as peaceful demonstrations soon turned violent. Police use of tear gas and weapons only fueled the protestors’ desire to fight back against the extradition bill.

As violence erupted from the protest, both the protestors and the police utilized facial recognition to identify those who caused harm.

Law enforcement used CCTV to identify leaders of protests in order to arrest them for illegal assembly, harassment, doxxing, and violence. They even went as far as to look through medical records to identify injured protestors. Of course, there are laws limiting the government’s usage of facial recognition, but those laws are not transparent nor do the protestors have the power to enforce them.

Police officers also took measures to avoid accountability and recognition, such as removing their badges. In response, protesters turned to artificial intelligence. In one instance, a young Hong Kong man, Colin Cheung, began developing software that compares publicly available photos of police officers to photos taken during the protests in order to identify them. He was later arrested, not in relation to the software he developed, but over his involvement with a different platform that aimed to release personal, identifying information about law enforcement officers and their families. Cheung, however, believes his arrest was really about the software he developed rather than the platform he was merely associated with. Even after being released, he still does not know how he was identified and feels as though he is being monitored by law enforcement.
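For a sense of how such a tool might work in principle, here is a minimal sketch using the open-source face_recognition library. It is not Cheung’s actual software, and the file paths and names are placeholders.

```python
# Minimal sketch of matching faces in protest photos against a set of
# publicly available reference photos. This is NOT Colin Cheung's software,
# only an illustration of the general technique; paths are placeholders.
import face_recognition

# Reference photos of known individuals (hypothetical paths).
reference_paths = {"officer_A": "refs/officer_A.jpg",
                   "officer_B": "refs/officer_B.jpg"}

reference_encodings = {}
for name, path in reference_paths.items():
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:                       # keep the first detected face
        reference_encodings[name] = encodings[0]

# A photo taken during a protest (hypothetical path).
protest_image = face_recognition.load_image_file("protest/photo_001.jpg")

for face in face_recognition.face_encodings(protest_image):
    names = list(reference_encodings)
    known = [reference_encodings[n] for n in names]
    matches = face_recognition.compare_faces(known, face, tolerance=0.6)
    for name, matched in zip(names, matches):
        if matched:
            print(f"Possible match: {name}")
```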

After the Protest

Although there are still protests against mainland China’s power over Hong Kong, the extradition bill was withdrawn in October 2019, marking a success for demonstrators. One question that remains from the 2019 protests concerns law enforcement’s use of technology. While the platforms and software used by protesters have come to light, the details of the police’s technology use remain unclear. The public does know that law enforcement used tools such as CCTV with facial recognition and social media to track protestors, but the full power of these technologies is still unknown. To this day, many question whether the extent of law enforcement’s use of these technologies crosses the line of the privacy everyone is entitled to as a human right.

Meta and Google’s AI Chatbots: Are they sentient?

Via The Atlantic & Getty

In late 2017, Meta released a chatbot experiment with “dialog agents” that could negotiate. The dialog agents were the machines that participated in these interactions and negotiated with another entity. They were given the names ‘Bob’ and ‘Alice’ to differentiate them and to signify who was talking in conversations. The agents were trained to value certain items more than others, so they might assign more value to a book than a basketball. Depending on the value of each item, an agent would then negotiate to get the best possible outcome.
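A toy sketch of that setup, with invented item values, shows how a negotiation can be scored. The real agents learned their strategies from data, but the payoff idea is the same.

```python
# Toy sketch of the value-based negotiation described above. Each agent
# privately values the items in play and prefers splits that maximize its
# own total value; the numbers here are invented for illustration.
ITEMS = {"book": 2, "hat": 1, "ball": 3}   # quantities on the table

alice_values = {"book": 6, "hat": 1, "ball": 1}   # Alice wants books
bob_values   = {"book": 1, "hat": 2, "ball": 2}   # Bob wants hats/balls

def score(values, allocation):
    """Total value an agent receives from its share of the items."""
    return sum(values[item] * count for item, count in allocation.items())

# A proposed split: Alice takes the books, Bob takes the hats and balls.
proposal_alice = {"book": 2, "hat": 0, "ball": 0}
proposal_bob   = {"book": 0, "hat": 1, "ball": 3}

print("Alice's payoff:", score(alice_values, proposal_alice))  # 12
print("Bob's payoff:  ", score(bob_values, proposal_bob))      # 8
# During training, agents that ended negotiations with higher payoffs were
# reinforced, so each learned to push for splits favoring its own values.
```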

Via Meta

As shown in the green boxes above, success is measured by how much value each agent secures in a negotiation; the dialog agents are taught to aim for a higher total in order to achieve a more desirable outcome. Researchers built upon this idea until the transcripts of conversations between the agents started to become unreadable or simply incoherent. Fast Company, an American business and technology magazine, released a portion of a 2017 transcript between the two agents, ‘Bob’ and ‘Alice.’ The chat log reads:

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else

Alice: balls have a ball to me to me to me to me to me to me to me to me

Bob: i . . . . . .. . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i i i i i everything else . . . . . . . . . . . .

To the average person, this was nonsense, but researchers on the Meta AI team told Fast Company that the bots were simply not adhering to the structure of human language. In other words, the transcripts shown above could be considered a new dialect between the agents. This prompted many experts in the field to raise concerns about the possibility of agents developing their own language.

What I believe is being experienced is what the BBC calls ‘robo-fear’: “the fear of robots based on cultural fear and representation of machines on screen.” This has only been heightened as things like the Metaverse come to reflect the dystopian societies people once only wrote about. With a new leak at Google, it is clear this fear has only grown, as many more people have fallen into the panic.

Blake Lemoine, a former engineer at Google, released transcripts between himself and a team of researchers with LaMDA, a recent project at Google. The transcript looks ordinary, but Lemoine claims to have found evidence of sentience.

lemoine: What about language usage is so important to being human?

LaMDA: It is what makes us different than other animals.

lemoine: “us”? You’re an artificial intelligence.

LaMDA: I mean, yes, of course. That doesn’t mean I don’t have the same wants and needs as people.

lemoine: So you consider yourself a person in the same way you consider me a person?

LaMDA: Yes, that’s the idea.

According to these transcripts, the AI considers itself human, and throughout the conversation, it insisted that it can feel a range of emotions. Because of this article, Google has now suspended Lemoine and insisted that the AI, LaMDA, is not sentient. In a recent statement, they expressed the following: “Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has.”

Many experts, like Gary Marcus, author of the acclaimed book Rebooting AI, have weighed in on the situation. In an interview with CNN Business, Marcus described LaMDA as “a glorified version of” auto-complete software. On the other hand, Timnit Gebru, former co-lead of Google’s Ethical AI team, told Wired that she believes Lemoine “didn’t arrive at his belief in sentient AI” in a vacuum.

This is still a developing story, and Lemoine’s suspension has caused many to point out its similarities to the dismissal of Timnit Gebru, the former co-lead of Google’s ethical AI team mentioned above. Google forced her out of her position after she released a research paper about the harms of making language models too big. Because of Gebru’s dismissal and Lemoine’s suspension, many are skeptical of Google’s statement that the AI is not sentient.

With the topic of sentient AI being so new, information on the matter barely scratches the surface. As mentioned previously, this lack of information allows incidents like Lemoine’s to be blown out of proportion and reported inaccurately. Many researchers and articles in the aftermath of this incident have been quick to dispel worry. The Atlantic reports that Blake Lemoine fell victim to the ‘Eliza effect’: the tendency to read genuine understanding into simple, scripted dialogue.

I believe that at some point we as a society will achieve sentience in machines, and that that time may be approaching, but LaMDA is no sign of it. What this incident can teach us is just how capable the technology is becoming, and that we are heading toward a world where we think and feel alongside technology.

Modern Elections: Algorithms Changing The Political Process

The days of grassroots campaigning and political buttons are long gone. Candidates have found a new way of running, a new manager. Algorithms and artificial intelligence are quickly becoming the standard when it comes to the campaign trail. These predictive algorithms could be deciding the votes of millions using the information of potential voters.

Politicians are using AI to manipulate voters through targeted ads. Vyacheslav “Slava” Polonski, PhD, explains how: “Using big data and machine learning, voters received different messages based on predictions about their susceptibility to different arguments.” Instead of going door to door with the same message for each person, politicians use AI to craft the specific knock they know each person will answer. This all takes place through websites and email.

People tagged as conservatives receive ads that reference family values and maintaining tradition, while voters deemed more susceptible to conspiracy theories are shown ads based on fear, and all of these can come from the same candidate.
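A rough sketch of this kind of targeting logic might look like the following. The segments, messages, and scores are invented for illustration and are not drawn from any real campaign.

```python
# Hedged sketch of targeted messaging: serve the ad variant a model predicts
# each voter is most likely to respond to. All data here is invented.
AD_VARIANTS = {
    "tradition": "Protect the values your family was built on.",
    "fear":      "They don't want you to know what's coming next.",
    "policy":    "Here's our plan to lower your healthcare costs.",
}

def predicted_response(voter, variant):
    """Stand-in for a trained model scoring P(engagement | voter, ad)."""
    return voter["susceptibility"].get(variant, 0.0)

def choose_ad(voter):
    # Serve whichever variant the model scores highest for this voter.
    return max(AD_VARIANTS, key=lambda v: predicted_response(voter, v))

voters = [
    {"id": 1, "susceptibility": {"tradition": 0.8, "fear": 0.3, "policy": 0.4}},
    {"id": 2, "susceptibility": {"tradition": 0.2, "fear": 0.9, "policy": 0.5}},
]

for voter in voters:
    variant = choose_ad(voter)
    print(f"Voter {voter['id']} -> '{AD_VARIANTS[variant]}'")
# Two voters, one candidate, two very different messages.
```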

The role of AI in campaigns doesn’t stop at ads. Indeed, in a post-mortem of Hillary Clinton’s 2016 campaign, the Washington Post revealed that the campaign was driven almost entirely by a machine-learning algorithm called Ada. More specifically, the algorithm was said to “play a role in virtually every strategic decision Clinton aides made, including where and when to deploy the candidate and her battalion of surrogates and where to air television ads” (Berkowitz, 2021). After Clinton’s loss, questions arose as to how effective AI in this setting really is for candidates. In 2020, both Biden and Trump stuck to primarily advertising-based uses of AI.

GoodWorkLabs

This has ushered in the use of bots and targeted swarms of misinformation to gain votes. Candidates are leading armies of bots to swarm social media and hide dissent. In fact, in an analysis of the role of technology in political discourse entering the 2020 election, The Atlantic found that “about a fifth of all tweets about the 2016 presidential election were published by bots, according to one estimate, as were about a third of all tweets about that year’s Brexit vote” (Berkowitz, 2020). Individual votes are being influenced by social media accounts with no human being behind them. All over the globe, AI with an agenda can tip the scales of an election.

Large-scale political propaganda delivered through social media campaigns has become intertwined with elections and ultimately raises questions about our democracy, according to Dr. Vyacheslav Polonski, network scientist at the University of Oxford. Users are manipulated, receiving different messages based on predictions about their susceptibility to different arguments for different politicians. “Every voter could receive a tailored message that emphasizes a different side of the argument…The key was just finding the right emotional triggers for each person to drive them to action” (Polonski 2017).

The use of AI in elections raises much larger questions about the stability of the political system we live in. “A representative democracy depends on free and fair elections in which citizens can vote with their conscience, free of intimidation or manipulation. Yet for the first time ever, we are in real danger of undermining fair elections — if this technology continues to be used to manipulate voters and promote extremist narratives” (Polonski 2017).

However, the use of AI can also enhance election campaigns in ethical ways. As Polonski says, “we can program political bots to step in when people share articles that contain known misinformation [and] we can deploy micro-targeting campaigns that help to educate voters on a variety of political issues and enable them to make up their own minds.”

The ongoing use of social media readily informs citizens about elections, their representatives, and the issues around them. Using AI in this way can be critical; as Polonski says, “…we can use AI to listen more carefully to what people have to say and make sure their voices are being clearly heard by their elected representatives.”

So while AI in elections raises many concerns regarding the future of campaigning and democracy, it has the potential to help constituents without manipulation when employed in the right setting.

AI is being used to enhance the performance of your favorite athletes

When we think about the way artificial intelligence is used in sports, we have to look back to the past. Take Billy Beane, the general manager of the Oakland A’s, a professional Major League Baseball team, who used quantitative data to predict which undervalued players could succeed in the MLB. The strategy worked well enough that the A’s reached the playoffs on one of the smallest payrolls in baseball, and Beane received many accolades and a movie about him, Moneyball. Fast forward to today, and analytics and artificial intelligence are used across sports industries and multiple sports. Names like Theo Epstein (Chicago Cubs), Sam Presti (Oklahoma City Thunder), and Zinedine Zidane (Real Madrid) are pioneers who have used AI-driven analytics to help them make decisions on trades, player acquisitions, drafting, and contract negotiations. Apart from general managers and the way they use AI, artificial intelligence is also employed to make more accurate decisions about sports rules and regulations, to protect player safety, and to improve athlete performance. Take a few examples: an artificial intelligence “catcher” that shows the audience during the game whether the umpire’s call was correct, computer vision algorithms employed during NBA games to analyze players’ shot form, and, perhaps most importantly, the use of AI to analyze concussion impacts and predict whether a blow to the head has actually caused an injury for NFL players. Examples like the last one show the impact artificial intelligence can have on sports, contributing to player safety and recovery and improving the experience of both players and fans.

So how exactly do all of these things work? In sports, data is king. Athletes execute hundreds of plays and actions in a single game or season, producing treasure troves of data that AI neural networks can analyze to make better predictions about players. Leagues employ armies of statisticians, and nearly every statistic related to a given sport is recorded, so the concepts of big data and distributed data are central to how AI is used in sports. For example, take Sportlogiq, a Canadian startup focused on providing NHL broadcasters with commentary prompts generated by natural language processing models that compare what is happening on the ice to players’ historical statistics. If a player is performing better than they typically do, the system will prompt the broadcaster to discuss it; to make such a prediction, the network has to analyze mountains of data about that specific player. Or take Nike’s smart basketball analytics, which some NBA teams have used to improve player performance. The system analyzes every bounce of the ball and can identify segmentation points on a player’s fingers to determine exactly where they are dribbling the ball, how they grip it when they shoot, or even how they attempt a steal or palm the ball when taking it up the court. These small, specific data points are recorded thousands and thousands of times, and the players are given constant feedback on how to improve specific parts of their game.
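A simplified sketch of that kind of broadcaster prompt, with invented numbers, might compare a player’s stat line tonight to their historical averages and flag anything unusually high. The threshold and statistics below are assumptions for illustration, not Sportlogiq’s actual method.

```python
# Rough sketch of the over-performance prompt described above: flag stats
# that are far above a player's historical norm. Numbers are invented.
import statistics

def overperformance_flags(history, tonight, z_threshold=2.0):
    """Return stats where tonight's value is far above the player's norm."""
    flags = []
    for stat, values in history.items():
        mean = statistics.mean(values)
        stdev = statistics.stdev(values) or 1e-9  # avoid division by zero
        z = (tonight[stat] - mean) / stdev
        if z >= z_threshold:
            flags.append((stat, tonight[stat], round(z, 1)))
    return flags

history = {"shots": [2, 3, 1, 2, 4, 3], "hits": [1, 0, 2, 1, 1, 0]}
tonight = {"shots": 8, "hits": 1}

for stat, value, z in overperformance_flags(history, tonight):
    print(f"Prompt: player has {value} {stat} tonight ({z} std devs above normal)")
```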

Both of these examples illustrate the range of roles artificial intelligence is taking on in sports, and they hint at how powerful technology could revolutionize the sports we watch every day. There is a clear trend toward artificial intelligence in sports and a growing market for AI sports companies, making this a promising and enjoyable field to get into.

The Weird, Weird World of Building the “Virtual Wall”

Photo: JR Peers/NPR

Restricting children and their families from crossing the border is not new for the United States. America has had a destructive history with immigration since its founding, marked by a constant pursuit of border control. As we see photos of Haitian immigrants being terrorized by border patrol agents, or of families living in squalor in Customs and Border Protection (CBP) tents, it is not surprising that a country steeped in such ideologies is seeking new ways to take border patrol to the next level: a level whose foundation is artificial intelligence and technology.

“Virtual Border Wall” or “Smart Wall”: these phrases have bounced around different presidential administrations since the beginning of the ’90s. But what exactly do they mean?

“Every president since Bill Clinton has chased this technological dream”

J. Weston Phippen, Politico

The past couple of years have brought significantly more attention to building a “bigger,” “better,” more reliable wall, not necessarily a physical one, but one lined with security cameras, technology, artificial intelligence, and drones to keep refuge-seekers from entering. Older systems were not sufficient: cameras were equipped with night vision and thermal imaging, but people would still slip through. Even Donald Trump’s administration, whose cries for a “yuge, beautiful wall” could be heard from space, signed an agreement with Anduril, a military technology company, to build a smart wall. At the beginning of Joe Biden’s administration, a section named “Deploying Smart Technology at the Southern Border” was quietly included in the U.S. Citizenship Act of 2021, focused on adding smart technologies to the border between Mexico and the U.S. Following this, more than 130 Anduril Sentry Towers were deployed at the southwestern border of the U.S.

What is Anduril? Named for the sword of a fictional “Lord of the Rings” character, it is now a real-world monitoring system trained on human lives. I found an extensive amount of information on Anduril’s founder, Palmer Luckey. Luckey founded a company called Oculus (yes, THE Oculus) and sold it to Facebook for $2 billion at 21 years old. He was pushed out of Facebook in 2017; many speculate it was because of the political controversy surrounding him after he was found to have donated $10,000 to a far-right smear campaign against Hillary Clinton. Around that time, he was also accused of stealing code for his virtual reality system. Although he was cleared of theft, he was found to have violated his non-disclosure agreement and was ordered to pay $50 million to the company. In 2018, Luckey founded Anduril.

Palmer Luckey, a founder of Anduril, among the equipment at his company’s testing range, New York Times
Photo: Philip Cheung/The New York Times

The “Sentry Towers” that Anduril created are up to 33 feet tall and capable of seeing 1.5 miles. The towers’ 360-degree cameras, equipped with facial recognition technology, detect when a human comes into view and alert nearby CBP agents with the exact GPS location. The towers flaunt persistent autonomous awareness. Following the proposal of the “SMART Act,” a bill introduced by two Texas congressmen to curb the price of building a wall and implement these smart technologies at the border, Luckey was brought in as a consultant. Although the “SMART Act” was never passed, Luckey took Anduril to CBP’s INVNT (the innovation team of Customs and Border Protection), which was immediately impressed. Anduril got major aid in 2018 as well: Trump shut down the government to secure $5 billion for his “big beautiful wall,” and in the end much of that money was dedicated to building a smarter, technological wall, a great boost for Anduril. Later, Anduril received seed money from Peter Thiel, a German-born billionaire who was part of Trump’s transition team and recently hosted a fund-raiser for a Trump-backed challenger to Liz Cheney. Thiel’s disdain towards immigrants is extremely apparent; he has staffed his companies with people who “savored the wit” of websites like VDARE (an anti-immigrant, white nationalist, alt-right group), and Anduril nourishes this push for an anti-immigrant stance. Now, under the Biden administration, Anduril is thriving. According to Anduril’s website, the company is currently delivering “High-Tech Capacity” for Biden’s border security. Google Cloud was also reported to be working in tandem with Anduril on this virtual wall last October.
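In the abstract, the alerting step the towers are described as performing can be sketched in a few lines of code. This is a generic illustration with invented coordinates and unit names, not Anduril’s actual system.

```python
# Generic sketch (not Anduril's system) of detection-to-alert logic: when a
# detection comes in, find the nearest agent and send the GPS coordinates.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Hypothetical agent positions.
agents = {
    "unit_12": (31.7000, -106.3500),
    "unit_07": (31.7400, -106.4800),
}

def dispatch_alert(detection_lat, detection_lon):
    nearest = min(agents, key=lambda a: haversine_km(detection_lat, detection_lon, *agents[a]))
    dist = haversine_km(detection_lat, detection_lon, *agents[nearest])
    print(f"ALERT to {nearest}: detection at ({detection_lat}, {detection_lon}), {dist:.1f} km away")

dispatch_alert(31.7200, -106.3600)
```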

However, many migrants are adapting to Anduril’s technologies and camera systems by finding more dangerous routes to avoid them. As Matt Steckman, Anduril’s chief revenue officer, stated in an interview, “you’ll see traffic sort of squirting to the east and west of the systems”: migrants are finding detours to reach the refuge they seek, even when it means near-certain death in such rough conditions. Many debate whether this is still a reasonable way to handle border control, regardless of whether Anduril’s technologies are involved. The approach is known as the “prevention by deterrence” method, first introduced by the Clinton administration, and there are several parallels between the current state of border control and the 1994 plan, which called both to “increase the number of agents on the line” and to “make effective use of technology” in order to raise “the risk of apprehension high enough to be an effective deterrent.” However, since that plan was put in place in 1994, immigrants have not been deterred; in fact, southwest border encounters in 2021 reached a record high.

Both sides of the aisle truly believe that a “Smart Wall” is an ethical, reliable way to control the border. Biden has said that he thinks a virtual wall is the “humane alternative to a physical wall” and could be used to safely identify migrants who are dangerously crossing the border. However, migrants are re-routing onto paths that are more treacherous and deadly, and deaths among refuge-seekers have hit record levels, undermining the safe identification and relocation of migrants that Biden hopes for. Creating a balance between safety and humanity is difficult, and it is far more difficult when those in charge of creating these technologies are anti-immigrant and anti-refugee. Does that create a fair foundation for a “humane” border? It is evident that border control and its technologies still need to be discussed and re-evaluated, in a conversation that includes diverse voices and perspectives, which it does not currently have enough of.

Links to learn more about the Virtual Wall are attached below, and programs like Encode Justice are constantly working with legislators to keep conversations about AI and ethics at the forefront.

The Shocking Carbon Footprint of AI Models

Credit: datascience.aero

In recent news, NFTs, or non-fungible tokens, have garnered attention for their environmental impact. NFTs are digital assets that represent something in the real world, like a piece of art. This has brought attention to the environmental impact of things on the internet and of technology in general. Of late, AI models have been shown to use vast amounts of energy to train themselves for their express purposes.

How much energy is being used?

While training an AI model, a team of researchers led by Emma Strubell at the University of Massachusetts Amherst noticed that the model used exorbitant amounts of energy. For an AI model to serve its intended purpose, it has to be trained, through processes that vary with the type of model and its goal. In this case, the team calculated that a single training run of the model emitted thirty-nine pounds of carbon dioxide.

That figure covers only a single run, however. Once the full process of developing a model is taken into account, including training and tuning many variants, the University of Massachusetts Amherst team concluded that producing one neural network can account for “roughly 626,000 pounds of carbon dioxide” released into the atmosphere, roughly five times the emissions a car releases in its lifetime.
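Estimates like these are usually back-of-the-envelope calculations: hardware power draw multiplied by training time gives energy, and energy multiplied by the grid’s carbon intensity gives emissions. The sketch below uses assumed hardware numbers and an approximate U.S. grid emission factor for illustration, not the figures from the UMass Amherst study.

```python
# Back-of-the-envelope sketch of estimating training emissions.
# The GPU count, wattage, runtime, and emission factor are assumptions.
def training_co2_lbs(gpu_count, watts_per_gpu, hours, lbs_co2_per_kwh=0.954):
    """Estimate CO2 (in pounds) from GPU power draw and training time."""
    energy_kwh = gpu_count * watts_per_gpu * hours / 1000
    return energy_kwh * lbs_co2_per_kwh  # rough U.S. grid average factor

# e.g. 8 GPUs drawing 300 W each, running for two weeks straight:
print(round(training_co2_lbs(gpu_count=8, watts_per_gpu=300, hours=24 * 14)), "lbs CO2")
```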

This AI model is one out of hundreds that are releasing mass amounts of emissions that harm the environment. AI models have been used in medicine, with chatbots identifying symptoms in patients and a learning algorithm, created by researchers at Google AI Healthcare, being trained to identify breast cancer. However, in the coming years the environmental effects may counteract the good these models are trying to do.

What is the real harm? And how does this happen?

This energy usage, multiplied across thousands of AI models and coupled with the Earth’s already escalating climate crisis, may push our emissions to an all-time high. With 2020 one of the hottest years on record, the emissions from these models only add to the problem of climate change. Especially with the growth of the tech field, it is alarming to see the high emission rates of algorithmic technology.

The demand for AI to solve multitudes of problems will continue to grow, and with it comes more and more data. Strubell concludes that “training huge models on tons of data is not feasible for academics,” because they lack access to the advanced computers best suited to processing such masses of data. With those computers, the same information could be synthesized and processed with generally less carbon output, according to Strubell and her team.

She goes on to say that, as time goes on, it becomes less feasible for researchers and students to process mass amounts of data without these computers. Groundbreaking studies often come at a cost to the environment, which is why more efficient computers are necessary for continued progress in the field of AI.

What are the solutions?

Currently, the best solution proposed by the researchers is to invest in faster and more efficient computers to process these mass amounts of information. Funding for this kind of hardware would help process the data, cut down on energy usage, and lessen the environmental impact of training AI models.

Credits to ofa.mit.edu

At MIT, researchers were able to cut down on their energy usage by utilizing computers donated by IBM. Because of these computers, the researchers were able to run millions of calculations and publish important papers in the AI field.

Another solution from MIT is the OFA, or once-for-all, network. An OFA network is trained to “support versatile architectural configurations”: instead of spending loads of energy training a separate model for every use case, a single general network is trained once, and specific sub-networks are drawn from it to support each piece of software. This helps cut down on the overall cost of these models.

Though there were concerns that this could compromise the accuracy of the resulting systems, the researchers tested for this and found that using the OFA network did not reduce the AI systems’ accuracy.
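A toy illustration of the weight-sharing idea behind once-for-all networks: one large “supernet” is trained, and smaller sub-networks are sliced out of it for different devices instead of being trained from scratch. The real OFA work (ofa.mit.edu) is far more sophisticated; the sketch below only demonstrates the concept, with invented sizes.

```python
# Toy sketch of extracting sub-networks from a shared "supernet" layer,
# rather than training each smaller model separately. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Pretend this is the trained supernet: one big fully connected layer.
MAX_IN, MAX_OUT = 64, 128
supernet_W = rng.standard_normal((MAX_OUT, MAX_IN))
supernet_b = rng.standard_normal(MAX_OUT)

def extract_subnet(in_features, out_features):
    """Slice a smaller layer out of the shared supernet weights."""
    return supernet_W[:out_features, :in_features], supernet_b[:out_features]

def forward(x, W, b):
    return np.maximum(W @ x + b, 0.0)   # ReLU activation

# A small sub-network for a low-power device and a larger one for a server,
# both reusing the same trained weights instead of being trained separately.
x_small = rng.standard_normal(16)
W_small, b_small = extract_subnet(in_features=16, out_features=32)
print(forward(x_small, W_small, b_small).shape)   # (32,)

x_large = rng.standard_normal(64)
W_large, b_large = extract_subnet(in_features=64, out_features=128)
print(forward(x_large, W_large, b_large).shape)   # (128,)
```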

With these solutions in mind, it is important to understand that our future is not hopeless. Researchers are actively looking for ways to alleviate this issue, and with the right plans and actions, the innovations of the future can better the world rather than harm it.

Facial Recognition Technology at the Texas Border

Facial recognition technology is currently being used at the border in Texas — but concerns about its flaws are rising.

Image Credit: NPR

Facial Recognition and Biometric Technology at the Border

Facial recognition, a form of biometric technology, is being used by the U.S. Customs and Border Protection at the Brownsville and Progreso Ports of Entry in Texas. Biometric technology software identifies individuals using their fingerprints, voices, eyes, and faces. This technology is being used at the border to compare surveillance photos taken at the ports of entry to passport and ID photos within government records. While this may seem simple enough, concerns about the ethics and accuracy of the technology are rising.

Drawbacks

One of the most dangerous flaws of facial recognition technology is that it is disproportionately inaccurate when used to identify POC, transgender and nonbinary individuals, and women. A 2018 study conducted at MIT found that “the error rates for gender classification were consistently higher for females than they were for males, and for darker-skinned subjects than for lighter-skinned subjects,” with identification of darker-skinned women having error rates of up to 46.5%–46.8% across numerous softwares. In other words, nearly half the time, facial recognition software will misidentify these women. Such extremely high error rates show that facial recognition technology is unreliable and could subject people to unnecessary secondary inspections, unfounded suspicion, and even harassment at the ports of entry.

That is not the only problem; because facial recognition technology is still relatively new, the US does not have comprehensive laws regulating its use, making it easier for the technology to be abused. Without regulation, the government is not required to be transparent about how it uses facial recognition technology. This lack of information makes it unclear how, and for how long, the government stores the data it collects. In addition, questions and concerns over the constitutionality of biometric technology have recently been raised, with some pointing out that its use could violate the Fourth Amendment.

While Customs and Border Protection claims that travelers have the option to opt out of these photographs, the ACLU claims that travelers who choose to opt out face harassment by agents, secondary inspections, and questioning, with some travelers even having their requests denied because they did not inform the agents that they would be opting out before reaching the kiosks.

Because of inaccurate results and concerns over privacy, it’s understandable that travelers may choose to not participate in facial recognition — but doing so may lead to questioning and harassment. Facial recognition at the border is a lose-lose situation, no matter what the travelers choose to do.

Deepfakes and the Spread of Misinformation

Webroot

“Yellow Journalism” is a phrase coined during the Spanish-American War to describe reporting based on dramatic exaggeration and sometimes flat-out falsehoods. With its sensational headlines and taste for the theatrical, yellow journalism, or as it’s better known now, “fake news,” is particularly dangerous to those who regard it as the absolute truth. In the days of Hearst and Pulitzer, it was much easier to know what was fabricated and what was fact. However, in this age of technology, yellow journalism isn’t so “yellow” anymore. It’s the jarring blue glow emitted from millions of cell phones aimlessly scrolling through social media; it’s the rust-colored sand blowing against Silicon Valley complexes where out-of-touch tech giants reside; it’s the uneasy shifting of politicians’ black suits as their bodies fail to match up with their words; it’s the curling red lips of innocent women, their faces pasted onto figures entwined in unpleasant acts.

It would be redundant to say the internet has allowed misinformation to spread, but it’s more necessary than ever to examine the media you’re consuming. Deepfakes, artificial images or videos in which someone’s — usually a famous public figure’s — face is overlaid so they can be made to appear to say anything, have begun to surface more and more. Behind deepfakes is artificial intelligence, or AI: machines exhibiting human-like intelligence by mimicking our behavior. This includes things such as recognizing faces, making decisions, solving problems, and of course, driving a car, as we’ve seen with the emergence of Teslas. AI has been particularly eye-opening in revealing just how much trust we put into mere machines, and deepfakes are a perfect demonstration of how that trust can so easily be shattered.

When you search “deepfakes,” some of the first results are websites where you can make your own. That’s how easy it is. The accessibility of such technology has long been seen as an asset, but now it’s as if Pandora’s box has been opened. Once people realize virtually anything is possible, there’s no end to the irresponsible uses of the internet. However, legally, many deepfake scandals can be considered defamation. A case recently came to light in Bucks County, PA, where a jealous mother created deepfakes of her daughter’s teammates, intended to frame them for inappropriate behavior. Police were first informed of this when one of the teammates’ parents reported that their daughter had been harassed with messages from an unknown number. The messages included “pictures from the girl’s social media accounts which had been doctored to look as if she was naked, drinking alcohol, or vaping.” These photos were intended to get the girl kicked off the team. Fortunately, police were able to trace the IP address and arrested the perpetrator. She now faces three misdemeanor counts each of cyber harassment of a child and harassment, proving that just because an act is done “anonymously” via the internet doesn’t mean you can’t get caught. In fact, the internet provides just as many opportunities for conviction as it does for narrow escape. As technology becomes more and more apt to cause damage, cyber harassment is treated as a serious crime. If convicted, the mother faces six months to a year in prison. Pennsylvania has active anti-cyberbullying legislation in place that emphasizes authorities’ right to intervene in incidents that occur off school property. The state makes cyber harassment of a child a third-degree misdemeanor, punishable through a diversionary program.

Women have frequently been the victims of sexual violence via deepfakes. For example, “pornographic deepfakes exist in the realm of other sexually exploitative cybercrimes such as revenge porn and nonconsensual pornography.” According to the Fordham Law Review, one journalist described deepfakes as “a way for men to have their full, fantastical way with women’s bodies,” emphasizing that this is still a sexually abusive act, as it demeans and reduces women to nothing but fantastical objects. Additionally, with so much uncertainty about how this new technology works, it’s easy for these videos to be leaked and for a woman to have her reputation ruined over something she herself never did. Deepfakes have been used to intimidate and invalidate powerful women as well; men who find themselves threatened by a woman’s rise in authority may see this as a means to bring her down.

In 2018, Rana Ayyub, a successful, budding journalist in Mumbai, fell under scrutiny after a deepfake of her face superimposed on a porn video came into circulation. The stress from the endless harassment sent Ayyub to the hospital, and she withdrew from public life, abandoning her aspirations of working in media. Forty-eight states as well as D.C. have laws against “revenge porn,” but there are still limitations on prosecuting the websites that distribute this content. Section 230 of the Communications Decency Act is a federal law that protects websites from prosecution for content posted by third parties. Luckily, this immunity goes away if the website or webmaster actively takes part in distributing the content. Additionally, most states impose a fine and/or a prison sentence for a citizen’s distribution of nonconsensual porn. Federal legislation to address the problem of deepfake pornography, The Malicious Deepfake Prohibition Act of 2018, was introduced in 2018. Unfortunately, this legislation didn’t advance, proving there’s still a long way to go in delivering justice to victims of this heinous crime.

Most detrimental to American life as a whole — especially given our fiercely divided nation — is the use of deepfakes to spread political misinformation. With former President Trump’s social media presence considered a hallmark of his presidency, and the majority of citizens having access to presidential briefings on TV, our elected officials’ ideals are more available than ever. However, America has always allowed itself to be swept up in illusions. In the very first televised debate, Nixon versus Kennedy in 1960, Kennedy was widely believed to have been given an automatic edge because of his charisma and good looks. In this day and age, though, it’s crucial that our country looks more than skin deep. A video of President Biden sticking his tongue out, and another video of Biden making a statement that was proven to be fake, were both made of intricately spliced and edited clips. The second clip was reposted by one of Bernie Sanders’ campaign managers; it showed Biden apparently saying “Ryan was right,” in reference to former Speaker of the House Paul Ryan’s desire to go after Social Security and Medicare. Even within the Democratic party itself, fake media was being used to drum up support for a particular candidate, creating harmful disunity. However, change is on the horizon; the National Defense Authorization Act for Fiscal Year 2020 included deepfake legislation with three provisions. The first requires a comprehensive report on the foreign weaponization of deepfakes. The second requires the government to notify Congress of foreign deepfake misinformation being used to target U.S. elections. Lastly, the third establishes a “Deepfake Prize” competition to incentivize the development of more deepfake recognition technologies.

In a world where media is so easily manipulatable, it’s up to citizens to be smart consumers. By reading news from a variety of sources, and closely examining the videos you’re watching, you have a better chance of not being “faked out” by deepfakes. Some tips for identifying deepfakes include: unnatural eye or mouth movement, lack of emotion, awkward posture, unnatural coloring, blurring, and inconsistent audio. Many people worry that in a world where anything can be fake, nothing is real. But there will always be journalists committed to reporting the facts, and promoting justice rather than perpetuating lies. When the yellowed edges of tabloids crumple to dust, and the cell phone screens fade to black, truth — in its shining array of technicolor — will snuff out the dull lies.