The Weird, Weird World of Building the “Virtual Wall”

Photo: JR Peers/NPR

Restricting children and their families from crossing the border is not new for the United States. America has had a destructive relationship with immigration since its founding, marked by a constant pursuit of ever-tighter border control. As we see photos of Haitian immigrants being terrorized by Border Patrol agents, or of families living in squalor in Customs and Border Protection (CBP) tents, it is not surprising that a country steeped in such ideologies is seeking new developments to bring border patrol to the next level: a level whose foundation is artificial intelligence and technology.

“Virtual Border Wall” or “Smart Wall”: these phrases have bounced around different presidential administrations since the beginning of the ’90s, but what exactly do they mean?

“Every president since Bill Clinton has chased this technological dream”

J. Weston Phippen, Politico

The past couple of years have brought significantly more attention to building a “bigger,” “better,” more reliable wall, not necessarily a physical one, but one lined with security cameras, artificial intelligence, and drones to detect refuge-seekers before they enter. Older systems were not sufficient: cameras were equipped with night vision and thermal imaging, but people would still sneak through. Even Donald Trump’s administration, whose cries for a “Yuge, beautiful wall” could be heard from space, signed an agreement with Anduril, a military technology company, to build a smart wall. Early in Joe Biden’s administration, a section quietly appeared in the U.S. Citizenship Act of 2021 titled “Deploying Smart Technology at the Southern Border,” focused on adding smart technologies to the border between Mexico and the U.S. Following this, more than 130 Anduril Sentry Towers were deployed at the southwestern border of the U.S.

What is Anduril? Named for the sword of a fictional “Lord of the Rings” character, the name now belongs to a real company whose systems monitor human lives. There is an extensive record on Anduril’s founder, Palmer Luckey. Luckey founded a company called Oculus (yes, THE Oculus) and sold it to Facebook for $2 billion at 21 years old. He was pushed out of Facebook in 2017. Many speculate it was because of the political controversy that surrounded him: he was found to have donated $10,000 to a far-right smear campaign against Hillary Clinton. During that time, he was also accused of stealing code for his virtual reality system. Although he was exonerated of theft, he was found liable for violating his non-disclosure agreement and was ordered to pay $50 million to the company. In 2018, Luckey founded Anduril.

Palmer Luckey, a founder of Anduril, among the equipment at his company’s testing range. Photo: Philip Cheung/The New York Times

The “Sentry Towers” that Anduril created are up to 33 feet tall and capable of seeing 1.5 miles. The towers’ 360-degree cameras, equipped with facial recognition technology, detect humans and alert nearby CBP agents with the exact GPS location of the sighting. The towers flaunt persistent autonomous awareness. Following the proposal of the “SMART Act,” a bill introduced by two Texas congressmen to curb the price of building a wall and implement smart technologies at the border, Luckey was brought in as a consultant. Although the “SMART Act” was never passed, Luckey took Anduril to CBP’s INVNT (the agency’s innovation team), and they were immediately impressed. Anduril got major aid in 2018 as well: Trump shut down the government to secure $5 billion for his “big beautiful wall,” but in the end the money was dedicated to building a smarter, technological wall, a great boost for Anduril. Anduril was also given seed money by Peter Thiel, a German-born billionaire who was part of Trump’s transition team and recently hosted a fund-raiser for a Trump-backed challenger to Liz Cheney. Thiel’s disdain for immigrants is extremely apparent: he staffed his company with people who “savored the wit” of websites like VDARE (an anti-immigrant, white nationalist, alt-right group), and Anduril nourishes this anti-immigrant stance. Now, in the Biden administration, Anduril is thriving. According to Anduril’s website, it is currently delivering “High-Tech Capacity” for Biden’s border security. Google Cloud was also reported last October to be working in tandem with Anduril on this virtual wall.
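Anduril’s actual software is proprietary, so the following is only a hypothetical sketch of the detect-and-alert loop the Sentry Towers are described as running: watch the cameras, detect a person, and push an alert with a GPS fix to nearby agents. Every name and number here is invented for illustration.

```python
import random
import time
from dataclasses import dataclass

@dataclass
class Detection:
    lat: float          # GPS fix of the sighting
    lon: float
    confidence: float   # how sure the vision model is

def detect_people(frame) -> list:
    """Stand-in for the tower's onboard vision model (entirely hypothetical)."""
    if random.random() < 0.1:  # pretend a person appears in ~10% of frames
        return [Detection(lat=31.7619, lon=-106.4850, confidence=0.92)]
    return []

def notify_agents(d: Detection) -> None:
    """Stand-in for the alert pushed to nearby CBP agents."""
    print(f"ALERT: person detected at ({d.lat}, {d.lon}), "
          f"confidence {d.confidence:.0%}")

# The autonomous loop: no operator needed until something is detected.
for _ in range(10):
    frame = None  # a real tower would grab a frame from its 360-degree cameras
    for detection in detect_people(frame):
        notify_agents(detection)
    time.sleep(1)
```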

However, many migrants are adapting to Anduril’s technologies and camera systems, finding more dangerous routes to avoid them. As Matt Steckman, Anduril’s chief revenue officer, stated in an interview, “you’ll see traffic sort of squirting to the east and west of the systems”: migrants are finding detours to reach the refuge they seek, even when those detours mean near-certain death in such rough conditions. Many debate whether this is still a reasonable approach to border control, even where Anduril’s technologies are not involved; the approach is known as “prevention through deterrence.” First introduced by the Clinton administration in its 1994 plan, it has several parallels to the current state of border control: the plan called both to “increase the number of agents on the line” and to “make effective use of technology” in order to raise “the risk of apprehension high enough to be an effective deterrent.” Yet since the plan was put in place in 1994, immigrants have not been deterred; in fact, encounters at the southwest border reached a record high in 2021.

Both sides of the aisle truly believe that a “Smart Wall” is an ethical, reliable way to control the border. Biden stated that he thinks a virtual wall is the “humane alternative to a physical wall” and could be used to safely identify migrants who are dangerously crossing the border. However, migrants are rerouting onto paths that are more treacherous and deadly, and deaths among refuge-seekers have reached record highs, undermining the safe identification and relocation of migrants that Biden hopes for. Striking a balance between safety and humanity is difficult, and it is far more difficult when those in charge of creating these technologies are anti-immigrant and anti-refugee. Does that create a fair foundation for a “humane” border? It is evident that border control and its technologies still need to be discussed and re-evaluated, in a conversation that includes diverse voices and perspectives, of which it currently has far too few.

Attached are links to learn more about the Virtual Wall; programs like Encode Justice are constantly working with legislators to keep conversations about AI and ethics at the forefront.

Deepfakes and the Spread of Misinformation

Webroot

“Yellow journalism” is a phrase coined during the Spanish-American War to describe reporting based on dramatic exaggeration and sometimes flat-out fallacies. With its sensational headlines and taste for the theatrical, yellow journalism, or as it’s better known now, “fake news,” is particularly dangerous to those who regard it as the absolute truth. In the days of Hearst and Pulitzer, it was much easier to know what was fabricated and what was fact. However, in this age of technology, yellow journalism isn’t so “yellow” anymore. It’s the jarring blue glow emitted from millions of cell phones aimlessly scrolling through social media; it’s the rust-colored sand blowing against Silicon Valley complexes where out-of-touch tech giants reside; it’s the uneasy shifting of politicians’ black suits as their bodies fail to match up with their words; it’s the curling red lips of innocent women, their faces pasted onto figures entwined in unpleasant acts.

It’d simply be redundant to say the internet has allowed misinformation to spread, but it’s more necessary than ever to examine the media you’re consuming. Deepfakes, artificial images or videos made by overlaying someone’s face, usually a famous public figure’s, onto another body so they can be manipulated to say anything, have begun to surface more and more. Behind deepfakes is artificial intelligence, or AI: a machine exhibiting human-like intelligence by mimicking our behavior. This includes recognizing faces, making decisions, solving problems, and of course, driving a car, as we’ve seen with the emergence of Teslas. AI has been particularly eye-opening in revealing just how much trust we put into mere machines, and deepfakes are a perfect demonstration of how easily that trust can be shattered.
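For a sense of how classic face-swap deepfakes work under the hood, many early tools trained one shared encoder, which learns pose and expression, alongside a separate decoder per identity; swapping is just decoding one person’s latent code with the other person’s decoder. Below is a minimal, untrained sketch of that idea in PyTorch. The dimensions and names are invented, and a real pipeline adds face detection, alignment, training, and blending.

```python
import torch
import torch.nn as nn

LATENT = 64
IMG = 3 * 32 * 32  # a tiny 32x32 RGB face crop, flattened

def mlp(inp: int, out: int) -> nn.Sequential:
    return nn.Sequential(nn.Linear(inp, 256), nn.ReLU(), nn.Linear(256, out))

encoder = mlp(IMG, LATENT)    # shared: learns pose/expression for both faces
decoder_a = mlp(LATENT, IMG)  # would be trained only on person A's faces
decoder_b = mlp(LATENT, IMG)  # would be trained only on person B's faces

face_of_a = torch.rand(1, IMG)  # stand-in for a real, aligned face crop
latent = encoder(face_of_a)     # capture A's current pose and expression
swapped = decoder_b(latent)     # render that pose with B's identity
print(swapped.shape)            # torch.Size([1, 3072]): the "deepfaked" frame
```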

When you search “deepfakes,” some of the first results are websites where you can make your own. That’s how easy it is. The accessibility of such technology has long been seen as an asset, but now it’s as if Pandora’s box has been opened. Once people realize virtually anything is possible, there’s no end to the irresponsible uses of the internet. Legally, though, many deepfake scandals can be considered defamation. A case recently came to light in Bucks County, PA, where a jealous mother created deepfakes of her daughter’s teammates, intending to frame them for inappropriate behavior. Police were first informed when one of the teammates’ parents reported that their daughter had been harassed with messages from an unknown number. The messages included “pictures from the girl’s social media accounts which had been doctored to look as if she was naked, drinking alcohol, or vaping.” The photos were intended to get the girl kicked off the team. Fortunately, police were able to trace the IP address and arrest the perpetrator. She now faces three misdemeanor counts each of cyber harassment of a child and harassment, proof that just because an act is done “anonymously” via the internet doesn’t mean you can’t get caught. In fact, the internet provides just as many opportunities for conviction as it does for narrow escape. As technology becomes more and more apt to cause damage, cyber harassment is treated as a serious crime: if convicted, the mother faces six months to a year in prison. Pennsylvania has active anti-cyberbullying legislation in place that emphasizes authorities’ right to intervene in incidents that occur off school property; the state makes cyber harassment of a child a third-degree misdemeanor, punishable through a diversionary program.

Women have frequently been the victims of sexual violence via deepfakes. For example, “pornographic deepfakes exist [in] the realm of other sexually exploitative cybercrimes such as revenge porn and nonconsensual pornography.” According to the Fordham Law Review, one journalist described deepfakes as “a way for men to have their full, fantastical way with women’s bodies,” emphasizing that this is still a sexually abusive act, as it demeans and reduces women to nothing but fantastical objects. Additionally, given the uncertainty around how much of this new technology works, it’s easy for these videos to be leaked and for a woman to have her reputation ruined over something she never did. Deepfakes have been used to intimidate and invalidate powerful women as well; men who find themselves threatened by a woman’s rise in authority may see this as a means to bring her down.

In 2018, Rana Ayyub, a successful, budding journalist in Mumbai, fell under scrutiny after a deepfake of her face superimposed on a porn video came into circulation. The stress from the endless harassment sent Ayyub to the hospital, and she withdrew from public life, abandoning her aspirations of working in media. Forty-eight states as well as D.C. have laws against “revenge porn,” but there are still limitations on prosecuting the websites that distribute this content. Section 230 of the Communications Decency Act is a federal law that shields websites from liability for content posted by third parties. Luckily, this immunity goes away if the website or webmaster actively takes part in distributing the content. Additionally, most states impose a fine and/or a prison sentence for a citizen’s distribution of nonconsensual porn. Federal legislation to address deepfake pornography, the Malicious Deep Fake Prohibition Act of 2018, was introduced that year. Unfortunately, the bill never advanced, showing there’s still a long way to go in administering justice to victims of this heinous crime.

Most detrimental to American life as a whole, especially given our fiercely divided nation, is the use of deepfakes to spread political misinformation. With former President Trump’s social media presence considered a hallmark of his presidency, and the majority of citizens having access to presidential briefings on TV, our elected officials’ ideals are more available than ever. However, America has always allowed itself to be swept up in illusions. In the very first televised debate, Nixon versus Kennedy in 1960, Kennedy was widely believed to have been given an automatic edge because of his charisma and good looks. In this day and age, though, it’s crucial our country looks more than skin deep. A video of President Biden sticking his tongue out, and another of Biden making a statement that was proven to be fake, were both made of intricately spliced and edited audio clips. The second clip, reposted by one of Bernie Sanders’ campaign managers, showed Biden apparently saying “Ryan was right,” a reference to former Speaker of the House Paul Ryan’s desire to go after Social Security and Medicare. Even within the Democratic party itself, fake media was being used to drum up support for a particular candidate, creating harmful disunity.

However, change is on the horizon: the National Defense Authorization Act for Fiscal Year 2020 included deepfake legislation with three provisions. The first requires a comprehensive report on the foreign weaponization of deepfakes. The second requires the government to notify Congress of foreign deepfake disinformation used to target U.S. elections. The third establishes a “Deepfake Prize” competition to incentivize the development of deepfake recognition technologies.

In a world where media is so easily manipulable, it’s up to citizens to be smart consumers. By reading news from a variety of sources and closely examining the videos you watch, you have a better chance of not being “faked out” by deepfakes. Some tips for identifying deepfakes include: unnatural eye or mouth movement, lack of emotion, awkward posture, unnatural coloring, blurring, and inconsistent audio. Many people worry that in a world where anything can be fake, nothing is real. But there will always be journalists committed to reporting the facts and promoting justice rather than perpetuating lies. When the yellowed edges of tabloids crumple to dust, and the cell phone screens fade to black, truth, in its shining array of technicolor, will snuff out the dull lies.

Artificial Intelligence and the Future of Elections

GoodWorkLabs

The days of grassroots campaigning and political buttons are long gone. Candidates have found a new way of running, and a new campaign manager: algorithms and artificial intelligence (AI) are quickly becoming the standard on the campaign trail. These predictive algorithms could be deciding the votes of millions using the personal information of potential voters.

Politicians use AI to manipulate voters through targeted ads. Slava Polonski, PhD, explains how: “Using big data and machine learning, voters received different messages based on predictions about their susceptibility to different arguments.” Instead of going door to door with the same message for each person, politicians use AI to craft the specific knocks they know each person will answer, all from a website or an email.

People tagged as conservative receive ads that reference family values and maintaining tradition; voters more susceptible to conspiracy theories are shown ads based on fear, and all of these ads can come from the same candidate. Through specialized ads, politicians can make themselves one-size-fits-all.
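To make the mechanism concrete, here is a minimal, entirely hypothetical sketch of this kind of targeting: a simple model is fit on voters’ past responses, then used to pick which ad framing a new voter sees. The features, labels, and ad copy are all invented for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy voter features: [age, weekly church attendance, daily social media hours]
X = np.array([[65, 2, 1], [23, 0, 6], [45, 1, 2],
              [30, 0, 5], [70, 3, 1], [19, 0, 7]])
# 1 = responded to a "tradition" framing in past message tests, 0 = did not
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

ADS = {
    1: "Protect family values and keep our traditions alive.",
    0: "They don't want you to know what's really going on.",  # fear framing
}

new_voter = np.array([[52, 2, 3]])
variant = int(model.predict(new_voter)[0])
print(ADS[variant])  # the same candidate, a different "knock" for each voter
```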

AI’s campaign capabilities don’t stop at ads. After Hillary Clinton’s defeat in 2016, the Washington Post revealed that her campaign was centered around a machine learning tool called ‘Ada.’ More specifically, “the algorithm was said to play a role in virtually every strategic decision Clinton aides made, including where and when to deploy the candidate and her battalion of surrogates and where to air television ads” (Berkowitz). After Clinton’s loss, questions arose about the effectiveness of using AI in this new way. In 2020, both Biden and Trump stuck to AI primarily for advertising purposes.

This means the utilization of bots and targeted swarms of misinformation to gain votes. Candidates are leading “armies of bots to swarm social media to hide dissent” (Berkowitz). A post-election analysis by The Atlantic found that around 20% of all tweets in the 2016 election were made by bots, as were over 30% of those surrounding the UK’s Brexit vote. Individual votes are susceptible to influence by social media accounts with no human being behind them. All over the globe, AI with an agenda can tip the scales of an election.

The use of social media campaigns to spread large-scale political propaganda is now intertwined with elections and ultimately raises questions about our democracy. Users are manipulated into receiving different messages from different politicians based on predictions about their susceptibility to different arguments. “Every voter could receive a tailored message that emphasised a different side of the argument…The key was just finding the right emotional triggers for each person to drive them to action” (Polonski).

The use of AI in elections raises much larger questions about the stability of the political system we live in. Our democracy rests upon the principle of free and fair elections, in which people can vote without intimidation or manipulation. AI can undermine this principle by manipulating voters or even promoting “extremist narratives” (Polonski).

However, the use of AI can also enhance election campaigns in ethical ways. Polonski says, “we can program political bots to step in when people share articles that contain known misinformation [and] we can deploy micro-targeting campaigns that help to educate voters on a variety of political issues and enable them to make up their own minds”.
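As a sketch of what Polonski’s corrective bots might look like at their simplest, the snippet below checks a shared link against a list of flagged domains and drafts a reply pointing to fact-checks. The domain list, function name, and reply text are all hypothetical; a real system would need vetted fact-checking sources and platform APIs.

```python
from typing import Optional
from urllib.parse import urlparse

# Hypothetical list of domains flagged by fact-checkers
KNOWN_MISINFO_DOMAINS = {"fake-news-example.com", "hoax-site-example.org"}

def draft_correction(url: str) -> Optional[str]:
    """Return a corrective reply if the shared URL is on the flagged list."""
    domain = urlparse(url).netloc.lower()
    if domain.startswith("www."):
        domain = domain[4:]
    if domain in KNOWN_MISINFO_DOMAINS:
        return ("Heads up: this article comes from a source flagged for "
                "misinformation. Independent fact-checks are available here: ...")
    return None

reply = draft_correction("https://www.fake-news-example.com/story")
if reply:
    print(reply)  # the bot would post this under the shared link
```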

The ongoing use of social media readily informs citizens about elections, their representatives, and the issues around them. AI can strengthen this, as Polonski says: “…we can use AI to listen more carefully to what people have to say and make sure their voices are being clearly heard by their elected representatives.”

So while AI in elections raises many concerns regarding the future of campaigning and democracy, it has the potential to help constituents without manipulation when employed in the right setting.