Meet the Greta Thunberg of AI

Parents just don’t understand … the risks of generative artificial intelligence. At least according to a group of Zoomers grappling with this new force that their elders are struggling to regulate.

While young people often bear the brunt of new technologies, and must live with their long-term consequences, no youth movement has emerged around tech regulation that matches the scope or power of youth climate and gun control activism.

That’s starting to change, though, especially as concerns about AI mount.

Earlier today, a consortium of 10 youth organizations sent a letter to congressional leaders and the White House Office of Science and Technology Policy calling on them to include more young people on AI oversight and advisory boards.

The letter, provided first to DFD, was spearheaded by Sneha Revanur, a first-year student at Williams College in Massachusetts and the founder of Encode Justice, an AI-focused civil society group. As a charismatic teenager who is not shy about condemning “a generation of policymakers who are out of touch,” as she put it in an interview, she’s the closest thing the emerging movement to rein in AI has to its own Greta Thunberg. Thunberg began her rise as a global icon of the climate movement in 2018, at the age of 15, with weekly solo protests outside of Sweden’s parliament.

A native of San Jose in the heart of Silicon Valley, Revanur also got her start in tech advocacy as a 15-year-old. In 2020, she volunteered for the successful campaign to defeat California’s Proposition 25, which would have enshrined the replacement of cash bail with a risk-based algorithmic system.

Encode Justice emerged from that ballot campaign with a focus on the use of AI algorithms in surveillance and the criminal justice system. It currently boasts a membership of 600 high school and college students across 30 countries. Revanur said the group’s primary source of funding currently comes from the Omidyar Network, a self-described “social change venture” led by left-leaning eBay founder Pierre Omidyar.

Revanur has become increasingly preoccupied with generative AI as it sends ripples through societies across the world. The aha moment came when she read that February New York Times article about a seductive, conniving AI chatbot. In recent weeks, concerns have only grown about the potential for generative AI to deceive and manipulate people, as well as the broader risks posed by the potential development of artificial general intelligence.

“We were somewhat skeptical about the risks of generative AI,” Revanur says. “We see this open letter as a marking point that we’re pivoting.”

The letter is born in part of concerns that older policymakers are ill-prepared to handle this rapidly developing technology. Revanur said that when she meets with congressional offices, she is struck by the lack of tech-specific expertise. “We’re almost always speaking to a judiciary staffer or a commerce staffer.” State legislatures, she said, tend to be worse.

One sign of the generational tension at play: Today’s letter calls on policymakers to “improve technical literacy in government.”

The letter comes at a time when the fragmented youth tech movement is starting to coalesce, according to Zamaan Qureshi, co-chair of Design It For Us Coalition, a signatory of the AI letter.

“The groups that are out there have been working in a disjointed way,” Qureshi, a junior at American University in Washington, said. The coalition grew out of a successful campaign last year in support of the California Age Appropriate Design Code, a state law governing online privacy for children.

To improve coordination on tech safety issues, Qureshi and a group of fellow activists launched the Design It For Us Coalition at the end of March with a kickoff call featuring advisory board member Frances Haugen, the Facebook whistleblower. The coalition is currently focused on social media, which is often blamed for a teen mental health crisis, Qureshi said.

But it’s the urgency of AI that prompted today’s letter.

So, is this the issue that will catapult youth tech activists to the same visibility and influence as other youth movements?

Qureshi said he and his fellow organizers have been in touch with youth climate activists and with organizers from March for Our Lives, the student-led gun control organization.

And the tech activists are looking to throw their weight around in 2024.

Revanur, who praised President Joe Biden for prioritizing tech regulation, said Encode Justice plans to make an endorsement in the upcoming presidential race, and is watching to see what his administration does on AI. The group is also considering congressional and state legislative endorsements.

But endorsements and a politely worded letter are a far cry from the combative — and controversial — tactics that have put the youth climate movement in the spotlight, such as a 2019 confrontation with Democratic Sen. Dianne Feinstein inside her Bay Area office.

Tech activists remain open to the adversarial approach. Revanur said the risks of AI run amok could justify “more confrontational” measures going forward.

“We definitely do see ourselves expanding direct action,” she said, “because we have youth on the ground.”

AI and the Four-Day Work Week

The United Auto Workers’ (UAW) Stand Up Strike has recently led to tentative deals on new union contracts with Ford, Stellantis, and General Motors. The strike was notable for a variety of reasons — unique striking tactics, taking on all the “Big 3” automakers, and a stirring callback to the historic 1936–1937 Sit Down Strikes. In addition to demands for better pay, the reinstatement of cost-of-living adjustments, and an end to the tiered employment system — all of which the union won in new contracts — one unique (unmet) demand attracted outsized attention: the call for a four-day, thirty-two-hour workweek at no loss of pay. The demand addresses the unsettling reality of some autoworkers laboring over seventy hours each week to make ends meet.

The history of the forty-hour workweek is intimately tethered to autoworkers: workers at Ford were among the first to enjoy a forty-hour week in 1926, at a time when many Americans regularly worked far longer. Over a decade later, the labor movement won passage of the Fair Labor Standards Act (FLSA), codifying the forty-hour workweek along with rules on overtime pay, minimum wages, and child labor. Sociologist Jonathan Cutler explains that, at the time of the FLSA’s passage, UAW leaders already had their eye on a fight for a 30-hour workweek.

The four-day workweek has garnered attention in recent years as companies have experimented with a 100–80–100 model: 100% of the pay for 80% of the time, in exchange for 100% of the output — which implies hourly productivity rising by a quarter. These companies report elevated productivity, profits, morale, health, and general well-being. The altered schedule proved overwhelmingly popular among CEOs and employees alike; many workers claimed no amount of money could persuade them to return to a five-day week. Accordingly, 92% of companies in the largest pilot to date intend to continue with the four-day workweek indefinitely. It’s just a good policy all around: good for business, good for health, good for happiness.

While these overwhelmingly successful pilots have warmed businesses and employees up to the notion of shortening the week, one increasingly relevant contextual element may emphatically push the conversation forward: artificial intelligence (AI). Goldman Sachs has estimated that AI will boost productivity for two-thirds of American workers. Many white-collar professions in particular will see dramatic gains in efficiency as AI is integrated into the workplace. As reliance shifts steadily from human labor to AI, leisure time to actually enjoy the fruits of one’s labor should become accessible. We ought to recognize that prosperity is not an end in itself, but an instrument of well-being. If AI-driven productivity can sustain high output while freeing more time for family, good habits, and even consumption, our spirits, our health, and even our economy will reap the benefits.

However, a collective action problem hinders a full-throttle nationwide embrace of a shorter workweek. While many individual businesses find the four-day week profitable, it may not, prima facie, seem to be in a company’s interest to sacrifice any labor productivity to the competition; a firm may expect to be outperformed for not matching its competitors’ inputs. But if all firms in the market adopted a four-day week (or were subject to regulations that secured it), they would be on a level playing field, and the extra day off might drive up aggregate demand forcefully enough to compensate firms with healthy returns. It follows that the surest way to realize a shortened week is federal legislation, i.e., amending the FLSA to codify a 32-hour workweek and mandate corresponding overtime pay.

Representative Mark Takano of California has introduced a bill — alongside several colleagues — to accomplish just that, endorsed by a host of advocacy organizations, labor federations, and think tanks. Senate Health, Education, Labor, and Pensions Committee Chair Bernie Sanders has enthusiastically endorsed the idea, specifically citing the advantages AI brings to the workplace. Regarding the proposed legislation, Heidi Shierholz, President of the Economic Policy Institute, powerfully stated the following:

“Many workers are struggling to balance working more hours to earn more income against having more time to focus on themselves, their families, and other pursuits. However, while studies have shown that long working hours hurt health and productivity, taking control of work-life balance is often a privilege only afforded to higher-earning workers… This bill would help protect workers against the harmful effects of overwork by recognizing the need to redefine standards around the work week. Reducing Americans’ standard work week is key to achieving a healthier and fairer society.”

Despite the rosy picture I have painted, the odds of getting Fridays off forever anytime soon — whether through union action or new labor law — are slim, just as they were for the UAW. Sorry. Such are the pragmatics of political and economic reality. However, as AI continues to change the game, we will be positioned to ask cutting questions about the nature of work — to be creative and imagine what a new world in the age of the AI Revolution could look like. Maybe this is a part of it: humanity’s ultimate achievement culminates in… Thirsty Thursdays. Every week.

The young activists shaking up the kids’ online safety debate

When lawmakers began investigating the impact of social media on kids in 2021, Zamaan Qureshi was enthralled.

Since middle school he’d watched his friends struggle with eating disorders, anxiety and depression, issues he said were “exacerbated” by platforms like Snapchat and Instagram.

Qureshi’s longtime concerns were thrust into the national spotlight when Meta whistleblower Frances Haugen released documents linking Instagram to teen mental health problems. But as the revelations triggered a wave of bills to expand guardrails for children online, he grew frustrated at who appeared missing from the debate: young people, like himself, who’d experienced the technology from an early age.

“There was little to no conversation about young people and … what they thought should be done,” said Qureshi, 21, a rising senior at American University.

So last year, Qureshi and a coalition of students formed Design It For Us, an advocacy group intended to bring the perspectives of young people to the forefront of the debate about online safety.

They are part of a growing constellation of youth advocacy and activist organizations demanding a say as officials consider new rules to govern kids’ activity online.

The slew of federal and state proposals has served as a rallying cry to a cohort of activists looking to shape laws that may transform how their generation interacts with technology. As policymakers consider substantial shifts to the laws overseeing kids online, including measures at the federal and state level that ban children under 13 from accessing social media and require those younger than 18 to get parental consent to log on, the young advocates — some still in their teens — have been quick to engage.

Now, youth activists have become a formidable lobbying force in capitals across the nation. Youth groups are meeting with top decision-makers, garnering support from the White House and British royalty and affecting legislative proposals, including persuading federal lawmakers to scale back parental control measures in one major bill.

“The tides definitely are turning,” said Sneha Revanur, 18, another member of Design It For Us.

Yet this prominence doesn’t necessarily translate to influence. Many activists said their biggest challenge is ensuring that policymakers take their input seriously.

“We want to be seen as meaningful collaborators, and not just a token seat at the table,” Qureshi said.

In Washington, D.C., Design It For Us has taken part in dozens of meetings with House and Senate leaders, White House officials and other advocates. In February, the group made its debut testifying before the Senate Judiciary Committee.

“We cannot wait another year, we cannot wait another month, another week or another day to begin to protect the next generation,” Emma Lembke, 20, who co-founded the organization with Qureshi, said in her testimony.

Emma Lembke, founder of Log Off Movement, speaks during a Senate Judiciary Committee hearing on protecting children online Tuesday, Feb. 14, 2023, on Capitol Hill in Washington. (AP Photo/Mariam Zuhaib)

Sen. Richard J. Durbin (D-Ill.), who chairs the panel and met with the group again in July, said that Lembke “provided powerful testimony” and that their meetings were one of “many conversations that I’ve had with young folks demonstrating the next generation’s call for change.”

Revanur said policymakers often put too much stock in technical or political expertise and not enough in digital natives’ lifetime of experience and understanding of technology’s potential for harm.

“There’s so much emphasis on a specific set of credentials: having a PhD in computer science or having spent years working on the Hill,” said Revanur, a rising sophomore at Williams College. “It diminishes the importance of the credentials that youth have, which is the credential of lived experience.”

Revanur, who founded the youth-led group Encode Justice, which focuses on artificial intelligence, has met with officials at the White House’s Office of Science and Technology Policy (OSTP), urging them to factor in concerns about how AI could be used for school surveillance as they drafted a voluntary AI bill of rights.

The office’s former acting director, Alondra Nelson, who led the initiative, said Encode Justice brought policy issues “to life” by describing both real and imagined harms — from “facial recognition cameras in their school hallways [to] the very real anxiety that the prospect of persistent surveillance caused them.”

In July, Vice President Harris invited Revanur to speak at a roundtable on AI with civil rights and advocacy group leaders, a moment the youth activist called “a pretty significant turning point” in “increasing legitimization of youth voices in the space.”

Sneha Revanur, founder of Encode Justice and member of Design It For Us, outside the Capitol. (Courtesy of Sneha Revanur)

There are already signs that those in power are heeding their calls.

Sam Hiner, 20, started college during the covid-19 pandemic and said that social media hurt his productivity and ability to socialize on campus.

“It’s easier to scroll on your phone in your dorm than it is to go out because you get that guaranteed dopamine,” said Hiner, a student at the University of North Carolina at Chapel Hill.

Hiner, who in high school co-founded a youth-oriented policy group, worked with lawmakers and children’s safety groups to introduce state legislation prohibiting platforms from using minors’ data to algorithmically target them with content.

He said he held more than 100 meetings with state legislators, advocates and industry leaders as he pushed for a bill to tackle the issue. The state bill, the Social Media Algorithmic Control in Information Technology Act, now has more than 60 sponsors.

Last month, Prince Harry and Meghan, Duchess of Sussex, awarded Hiner’s group, Design It For Us and others grants ranging from $25,000 to $200,000 for their advocacy as part of the newly launched Responsible Technology Youth Power Fund. Hiner said he received a surprise call from the royals minutes after learning about the grant.

“As a young person who … has a bit of a chip on my shoulder from feeling excluded from the process traditionally, getting that … buy-in from some of the most influential people in the world was really cool,” he said.

Youth activists’ lobbying efforts are also bearing fruit in Washington.

This summer, Design It For Us led a week of action calling on senators to take up a bill to expand existing federal privacy protections for younger users, the Children and Teens’ Online Privacy Protection Act, and another measure to create a legal obligation for tech platforms to prevent harms to kids, the Kids Online Safety Act (KOSA).

A Senate Democratic aide, who spoke on the condition of anonymity to discuss the negotiations, said the advocates played a key role in persuading lawmakers to exclude teens from a provision in KOSA requiring parental consent to access digital platforms. It now only covers those 12 and younger.

Dozens of digital rights groups have expressed concern that the legislation would require tech companies to collect even more data from kids and give parents too much control over their children’s online activity, which could disproportionately harm young LGBT users.

“We were focused on making sure that KOSA did not turn into a parental surveillance bill,” said Qureshi.

Sen. Richard Blumenthal (D-Conn.), the lead sponsor of the bill, said their mobilization “significantly changed my perspective,” calling their advocacy a “linchpin” to building support for the legislation.

Qureshi and other youth advocates attended a White House event in July at which President Biden surprised spectators by endorsing KOSA and the children’s privacy bill, his most direct remarks on the efforts to date. Days later, the bills advanced with bipartisan support out of the Senate Commerce Committee.

Hiner and other youth advocates said they have worked closely with prominent children’s online safety groups, including Fairplay. Revanur said her group Encode Justice receives funding from the Omidyar Network, an organization established by eBay founder Pierre Omidyar that is a major force in fueling Big Tech antagonists in Washington. Qureshi declined to disclose any funding sources for Design It For Us, beyond its recent grant from the Responsible Technology Youth Power Fund.

Some young activists argue against such tough protections for kids online. The digital activist group Fight for the Future said it has been working with hundreds of young grass-roots activists who are rallying support against the bills, arguing that they would expand surveillance and hurt marginalized groups.

NEW YORK, NEW YORK – SEPTEMBER 23: (L-R) Divya Siddarth, Emma Lembke, Zamaan Qureshi, Sneha Revanur and Emma Leiken speak onstage during Unfinished Live at The Shed on September 23, 2022 in New York City. (Photo by Roy Rochlin/Getty Images for Unfinished Live)

Sarah Philips, 25, an organizer for Fight for the Future, said young people’s views on the topic shouldn’t be treated as a “monolith,” and that the group has heard from an “onslaught” of younger users concerned that policymakers’ proposed restrictions could have a chilling effect on speech online.

“The youth that I work with tend to be queer, a lot of them are trans and a lot of them are young people of color, and their experience in all aspects of the world, including online, is different,” she said.

There are also lingering questions about the science underlying the children’s safety legislation.

Studies have documented that prolonged social media use can lead to increased anxiety and depression and that it can exacerbate body image and self-esteem issues among younger users. But the research on social media use is still evolving. Recent reports by the American Psychological Association and the U.S. Surgeon General painted a more complex picture of the dynamic and called for more research, finding that social media can also generate positive social experiences for young people.

“We don’t want to get rid of social media. That’s not a stance that most members of Gen Z, I think, would take,” said Qureshi. “We want to see reforms and policies in place that make our online world safer and allow us to foster those connections that have been positive.”

Sneha Revanur, the youngest member of the TIME100 AI

Earlier this year, Sneha Revanur began to notice a new trend among her friends: “In the same way that Google has become a commonly accepted verb, ChatGPT just entered our daily vocabulary.” A freshman in college at the time, she noticed that—whether drafting an email to a professor or penning a breakup text—her peers seemed to be using the chatbot for just about everything.

That Gen Z (typically defined as those born between 1997 and 2012) was so quick to adopt generative AI tools was no surprise to Revanur, who at 18 is of a generation that’s been immersed in technology “since day one.” It only makes sense that they also have a say in regulating it.

Revanur’s interest in AI regulation began in 2020, when she founded Encode Justice, a youth-led, AI-focused civil-society group, to mobilize younger generations in her home state of California against Proposition 25, a ballot measure that aimed to replace cash bail with a risk-based algorithm. After the initiative was defeated, the group kept on, focusing on educating and mobilizing peers around AI policy advocacy. The movement now counts 800 young members in 30 countries around the world, and has drawn comparisons to the youth-led climate and gun-control movements that preceded it.

“It’s our generation that’s going to inherit the impacts of the technology that [developers] are hurtling to build at breakneck speed today,” she says, calling the federal government’s inertia on reining in social media giants a warning sign on AI. “It took decades for [lawmakers] to actually begin to take action and seriously consider regulating social media, even after the impacts on youth and on all of our communities had been well documented by that point in time.”

At the urging of many in the AI industry, Washington appears to be moving fast this time. This summer, Revanur helped organize an open letter urging congressional leaders and the White House Office of Science and Technology Policy to include more young people on AI oversight and advisory boards. Soon after, she was invited to attend a roundtable discussion on AI hosted by Vice President Kamala Harris. “For the first time, young people were being treated as the critical stakeholders that we are when it comes to regulating AI and really understanding its impacts on society,” Revanur says. “We are the next generation of users, consumers, advocates, and developers, and we deserve a seat at the table.”

The future of AI: egalitarian or dystopian?

Once upon a time, artificial intelligence (AI) was viewed as distant and unachievable — nothing more than a fantasy to furnish the plots of science fiction stories. We have made numerous breakthroughs since, with AI software now powerful enough to understand natural language, navigate unfamiliar terrain, and augment scientific research. As COVID-19 reduced our ability to interact with each other, we saw AI-powered machines step in to fill that void and AI used to advance medical research toward better treatments. This ubiquity may only be the beginning, with experts projecting that AI could contribute a staggering $15.7 trillion to the global economy by the end of the decade. Unsurprisingly, many prosperous members of society view the future of AI optimistically, as one of ever-increasing efficiency and profit. Yet many on the other side of the spectrum look on far more apprehensively: AI may have inherited the best of human traits, our intelligence, but it has also inherited one of humanity’s worst, our bias and prejudice. AI — fraught with discrimination — is being used to perpetuate systemic inequalities. If we fail to overcome this, an AI-dominated future would be bleak and dystopian: we would be moving forward in time, yet backwards in progress, accelerating mindlessly toward a less equitable society.

Dystopia is where we’re headed if we don’t reverse course. AI is increasingly being used to make influential decisions in people’s lives — decisions that are often biased. This is because AI is trained on past data to make future decisions, and that past data often carries bias, which the AI then inherits. For instance, AI hiring tools are now routinely used to assess job applicants. Trained on past employee data consisting mostly of men, the AI absorbs this bias and continues the cycle of disfavoring women, perpetuating the lack of diversity in key industries such as tech. This is absolutely unacceptable, and that’s to say nothing of the many other ways AI can be used to reinforce inequality. In what has been called the ‘tech to prison pipeline,’ AI trained on historical criminal data is used in criminal justice to inform bail and sentencing decisions. But African Americans are overrepresented in that training data, and such systems have been shown to treat African Americans more harshly, recommending harsher sentences.
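To make the hiring example concrete, here is a minimal, hedged sketch — synthetic data and an off-the-shelf logistic regression, not any vendor’s actual product — showing how a model trained on biased historical decisions reproduces that bias:

```python
# Toy illustration (synthetic data, not any real hiring system): a model
# trained on historically biased hiring decisions reproduces the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
gender = rng.integers(0, 2, n)        # 0 = female, 1 = male
skill = rng.normal(0.0, 1.0, n)       # identically distributed across groups

# Historical decisions favored men independent of skill.
logit = 1.5 * skill + 1.0 * gender - 1.0
hired = rng.random(n) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(np.column_stack([gender, skill]), hired)

# Two applicants with identical skill, differing only in gender:
for label, g in [("woman", 0), ("man", 1)]:
    p = model.predict_proba([[g, 1.0]])[0, 1]
    print(f"P(hire | {label}, skill=1.0) = {p:.2f}")
```

Note that simply deleting the gender column would not cure this toy model: any feature correlated with gender — a proxy like hobby, college, or zip code — can let the same pattern back in, which is why auditing outcomes matters more than removing columns.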

To move toward a future with AI that is not only intelligent but fair, we must enact regulation outlawing discriminatory uses, and ensure that the developers of AI software are diverse, so that their perspectives are reflected in the software they create.

Perhaps counterintuitively, a world with fair AI could see social justice advanced even further than a world before any AI. AI became unfair only because humans themselves hold deep biases, which the AI absorbed. But with fair AI replacing humans in decision-making, bias could be driven toward zero — and equality increased.

Achieving fair AI may be the key to a better future — one of increased economic prosperity, furthered scientific progress, and more equity. But in the meantime, we must be diligent in ensuring that the AI being used reflects the best of humanity, rather than our worst.

How Post-9/11 Surveillance Reshaped Modern-Day America

Following the events of September 11, 2001, legal barriers against surveillance were swiftly cast aside, and legal justifications for the government to collect sensitive information on any American expanded. For many, surveillance technology has brought modern-day America new opportunities as well as added safety. Yet the debate continues: is this an invasion of the individual’s right to privacy, or a legitimate exercise of government power? The United States’ growing use of invasive and controlling surveillance technologies is turning it into a dystopian nightmare.

Photo by Maxim Hopman on Unsplash

The Lasting Effects of 9/11

The most significant shift in US surveillance occurred in the aftermath of the 9/11 terrorist attacks. “The telescreen received and transmitted simultaneously. Any sound that Winston made, above the level of a very low whisper, would be picked up by it; moreover, so long as he remained within the field of vision which the metal plaque commanded, he could be seen as well as heard. There was, of course, no way of knowing whether you were being watched at any given moment” (Orwell 5). In the novel 1984, Orwell creates a dystopian society controlled by an overpowering entity known as “the Party” and the symbol of that dictatorship, Big Brother. “BIG BROTHER IS WATCHING YOU,” one of the most popular and most frightening political slogans of the novel, stems from this recurring idea of power and propaganda. The notorious telescreen, a two-way screen that sees individuals as they see it, is one of the most renowned examples of technology used by the Party. The screen remains on at all times, not so that citizens can be informed of what is being projected to them, but so that Party members can monitor them. Though it may seem unlikely that the elites could track and observe every single person at any given moment, the possibility always remains — meaning the mere existence of the telescreen ensures that no one dares rebel against the Party. Every citizen must act according to the Party’s rules; a simple nod, a keen look, or a tap of the foot can land them in an incredibly hostile situation. Everything the citizens experience is limited, and the Party holds full dominance over people kept incapable of viewing the truth of reality.

“After 9/11, Congress rushed to pass the Patriot Act, ushering in a new era of mass surveillance. Over the next decade, the surveillance state expanded dramatically, often in secret. The Bush administration conducted warrantless mass surveillance programs in violation of the Constitution and our laws, and the Obama administration allowed many of these spying programs to continue and grow” (Toomey 3). Following the 9/11 terrorist attacks, the National Security Agency (NSA) and other intelligence agencies shifted their focus from investigating criminal suspects to preventing terrorist strikes. Keen to expand their technological reach, they started with the USA PATRIOT Act, which dramatically increased the government’s authority to pry into people’s private lives with little to no authorization. Phone conversations, emails, chats, and online browsing could all be monitored, and it became legal for the government to obtain sensitive information about an American regardless of whether the individual was actually suspected of wrongdoing. The parallel to Orwell’s telescreen is hard to miss: a government able to peer into its citizens’ lives at any given time, no matter their rank in society. Even though Orwell’s dystopia is pushed to the extreme of totalitarian rule, the resemblance shows how such surveillance contributes to the growing dystopia America is today.

The Revelations of Edward Snowden

Edward Snowden, a former computer systems contractor for the National Security Agency (NSA), leaked highly classified information and thousands of secret NSA documents to the public. “People simply disappeared, always during the night. Your name was removed from the registers, every record of everything you had ever done was wiped out, your one-time existence was denied and then forgotten. You were abolished, annihilated: vaporized was the usual word” (Orwell 14). And, still resonant today: “…the U.S. government was tapping into the servers of nine Internet companies, including Apple, Facebook and Google, to spy on people’s audio and video chats… as part of a surveillance program called Prism. In the same month, Snowden was charged with theft of government property, unauthorized communication of national defense information and willful communication of classified communications intelligence. Facing up to 30 years in prison, Snowden left the country, originally traveling to Hong Kong and then to Russia, to avoid being extradited to the U.S.” (Onion 3). In Orwell’s 1984, the Party dominates practically every aspect of society. What is seen, heard, and talked about is constantly tracked to ensure no one rebels or falls out of line. Anyone who revolts or presents an idea opposing the government is “vaporized” — the regime claims full authorization to execute those who provoke it. In the real world, the cost of rebelling is not as extreme, but the cautionary line is still present. Snowden revealed thousands of secret documents about the NSA’s PRISM program, launched in the wake of the 9/11 attacks, which gave the government sweeping access to Americans’ sensitive information. His going public with reports of the NSA’s undercover operations allowed a broad swath of people a glimpse of what was happening inside their very devices. The Justice Department charged him with theft of government property and two violations of the Espionage Act — charges carrying up to 30 years in prison — and Snowden fled the nation, first to Hong Kong, then to Russia, to avoid being extradited to the United States. The parallel to “vaporization” is imperfect but real: Snowden published documents exposing the government’s surveillance tactics and mass data collection, and fearing those revelations would become mainstream, the government branded them a threat, sought to imprison him, and effectively ran him out of the country. For the Party to maintain power, the citizens of 1984 must be manipulated into submission; vaporization and the telescreen are just two among a plethora of control mechanisms. With America’s developing surveillance programs for data tracking and collection, Snowden gave us a glimpse of what happens behind our screens. Though Orwell’s 1984 is taken to the extreme, the similarity suggests what the future of America could become.

Surveillance in America’s Modernized Dystopian Society

Given the tremendous rise and impact of technology in particular, it is easy to see how American civilization is approaching a modern-day dystopia. “Every record has been destroyed or falsified, every book has been rewritten, every picture has been repainted, every statue and street and building has been renamed, every date has been altered. And the process is continuing day by day and minute by minute. History has stopped. Nothing exists except an endless present in which the Party is always right” (Orwell 103). This shows how accustomed the civilization of 1984 has become to the Party’s rule. The Party’s effort to regulate every aspect of the citizens’ lives comes from a desire to keep them oblivious to the truth of their surroundings. They must live in oblivion because oblivion supports the Party’s ability to lie to them, giving the Party full dominance over society and its decisions. These goals reflect a craving for unrestrained command over the people — lying to them and controlling every aspect of their lives. The Party therefore prohibits any means of expression or communication that might dismantle its immense authority over the citizens.

“When you zoom out, it’s easy to see that American society is approaching a modern-day dystopia as the once sci-fi-worthy stories of environmental destruction, technological control, and loss of human rights and freedoms creep to fruition. The eerie loss of individuality is looming right before your screen every time you passively press ‘accept’ on a new privacy policy and turn a blind eye to why your data is being collected. While it’s easy to ignore the data tracking that has become so commonplace…” (Coonrod 2). Polling backs this up: “A majority of Americans believe their online and offline activities are being tracked and monitored by companies and the government with some regularity. Data-driven products and services are often marketed with the potential to save users time and money or even lead to better health and well-being. Still, large shares of U.S. adults are not convinced they benefit from this system of widespread data gathering. Some 81% of the public say that the potential risks they face because of data collection by companies outweigh the benefits…” (Auxier 1). When propaganda is intentionally spread to the citizens of a society, they grow blind to the truth and become accustomed to an inhuman world. Ruling parties flourish when they oppress the general public through manipulation and deception. What is seen, what is said, and what is heard is all engineered to condition an individual’s mind into conforming to “normal” society. This runs throughout Orwell’s 1984 in its various forms of propaganda, and it is particularly visible in modernized American society today. When a user accepts a new privacy policy on an app, data tracking is enabled on their device. Government agencies continually listen in and obtain private information about what is said, heard, and discussed, putting nearly everything at risk. Many new initiatives, foundations, online browsers, and advocacy groups are stepping forward to develop tools and technologies that reduce the quantity of personal data taken. Yet most of the time, whether individuals are aware of it or not, their devices are quietly gathering data on them, and data-driven applications continually tailor what they transmit to fit an individual’s point of view, a constant feed calibrated to shape their thinking. Societies tend to manipulate and deceive their people — a pattern seen throughout Orwell’s dystopian 1984 — and though the novel takes it to the utmost extent, if this continues, what is known as America today will become a dystopia.

The expanding use of intrusive and controlling monitoring technology is turning the United States into a growing dystopia. Orwell’s novel 1984 shows how the Party grows stronger when citizens are deceived into ignorance — a dynamic also visible in actions taken by the U.S. government following the events of 9/11. The Party manipulates its citizens into unawareness of the truth of their environment to ensure that it holds unlimited control over them. The United States government employed similar tactics in conducting mass surveillance of its own citizens. Edward Snowden came forward with the lies, misconceptions, and hypocrisy behind the NSA’s conduct under the Patriot Act and the PRISM program, which made it effectively legal for the government to obtain sensitive information about any American, regardless of whether the individual was actually suspected of wrongdoing. Comparably, the Party’s use of advanced technology — the telescreen and vaporization among the extreme examples — mirrors the forces contributing to America’s dystopian future. Oppressive governing parties will silence and deceive their citizens, using advanced technology to gain full dominance over their lives, actions, and opinions.

Injustice in the Justice System: AI — Justice Education Project and Encode Justice Event

Save the Date — July 30th 1–4PM EST

Image credit: Sarah Syeda

When making a decision, humans are believed to be swayed by emotions and biases, while technology is assumed to decide impartially. Unfortunately, this common misconception has led to numerous wrongful arrests and unfair sentences.

How Does Artificial Intelligence Work?

In the simplest terms, artificial intelligence (AI) involves training a machine to complete a task. The task can be as simple as playing chess against a human or as complex as predicting the likelihood of a defendant recommitting a crime. In a light-hearted game of chess, bias in AI does not matter, but when it comes to determining someone’s future, questioning how accurate AI is becomes crucial to maintaining justice.

AI and the Criminal Justice System

From cameras on the streets and in shops to judicial risk assessments, AI is used by law enforcement every day, though it is in many cases far from accurate or fair. A recent federal study found that most commercial facial recognition systems exhibit bias, falsely identifying African-American and Asian faces 10 to 100 times more often than Caucasian faces. In one instance, a Black man named Nijeer Parks was misidentified as the suspect in a New Jersey robbery. He was sent to jail for 11 days.

Similar issues are present within risk assessment algorithms. Risk assessments generally look at a defendant’s economic status, race, gender, and other factors to calculate a recidivism risk score, which the judge uses to decide whether a defendant should be incarcerated before trial, what their sentence should be, their bail amount, and more. Although these algorithms are meant to evaluate a defendant’s recidivism risk impartially, they become biased because the data used to build them is biased. As a result, the risk assessments mimic the exact biases that would exist if a judge looked at those factors directly.
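As a hedged illustration of that feedback loop — synthetic data, not COMPAS or any deployed tool — consider a toy model whose training label is “was rearrested” rather than “reoffended”:

```python
# Toy sketch (synthetic data, not any deployed risk tool): if the label
# is "was rearrested" and one group is policed more heavily, the learned
# risk score inherits the enforcement bias even when true behavior is
# identical across groups.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 20_000
group = rng.integers(0, 2, n)            # 1 = over-policed group
reoffend = rng.random(n) < 0.30          # same true rate for everyone

# Rearrest is only observed when an offense is detected, and detection
# rates differ by group — that is where bias enters the data.
detection = np.where(group == 1, 0.9, 0.5)
rearrested = reoffend & (rng.random(n) < detection)

model = LogisticRegression().fit(group.reshape(-1, 1), rearrested)

for g in (0, 1):
    score = model.predict_proba([[g]])[0, 1]
    print(f"group {g}: predicted 'recidivism risk' = {score:.2f}")
# -> roughly 0.15 vs 0.27, despite identical true reoffense rates
```

The model’s scores differ by group only because detection differed; the label, not the behavior, carried the bias into the algorithm.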

Justice Education Project & Encode Justice

To combat and raise awareness about the biases in AI used in the criminal justice system, Justice Education Project (JEP) and Encode Justice (EJ) have partnered to organize a panel discussion with a workshop about algorithmic justice in law enforcement.

JEP is the first national youth-powered and youth-focused criminal justice reform organization in the U.S., with over 6,000 supporters and a published book. EJ is the largest network of youth in AI ethics, spanning over 400 high school and college students across 40+ U.S. states and 30+ countries, with coverage from CNN. Together, the organizations are hosting a hybrid event on July 30th, 2022 at the Church of the Ascension, 127 Kent St, Brooklyn, NY 11222, from 1–4PM EST. To increase accessibility, the event can also be joined via Zoom.

Sign up here to hear from speakers like Raesetje Sefala, an AI research fellow at the Distributed AI Research (DAIR) Institute; Chhavi Chauhan, a leader of the Women in AI Ethics Collective; Albert Fox Cahn, founder and executive director of the Surveillance Technology Oversight Project (S.T.O.P.); Logan Stapleton, a third-year computer science PhD candidate in GroupLens at the University of Minnesota; Aaron Sankin, a reporter at The Markup; and Neema Guilani, head of national security, democracy and civil rights public policy, Americas, at Twitter. Join us and participate in discussions to continue our fight for algorithmic justice in the criminal justice system.

Technology: Does it Harm or Help Protestors?

Image Credit: CNN

From spreading information to organizing mass protests, technology can be a powerful tool to create change. However, when used as a weapon, AI can be detrimental to the safety of protesters and undermine their efforts.

In the past few years, the vast majority of international protests have used social media to build support for their causes. One successful mass international protest was the 2019 climate strike: according to the Guardian, about 6 million people across the world participated. Though it began as a one-person movement, social media enabled its expansion. Social media’s role was mostly positive, but not entirely — the spread of misinformation became a growing issue as the movement gained prominence. While some misinformation came from opponents of the movement, the source of most of it remains unknown. Luckily, almost all false information was quickly fact-checked and debunked, and technology played a bigger role in strengthening the strikes than in taming them. Unfortunately, this is not always the case: the Hong Kong protests of 2019 showed how AI can be weaponized against protesters.

Mainland China and Hong Kong

In order to recognize the motivations behind the Hong Kong protests, it’s crucial to understand the relationship between mainland China and Hong Kong.

Until July 1, 1997, Hong Kong was a British colony; it was returned to China under the principle of “One Country, Two Systems.” This meant that while Hong Kong was technically part of China, it kept a separate government, giving its citizens more freedom and a number of civil liberties not afforded to citizens of mainland China. The arrangement is set to expire in 2047, and when it does, the people of Hong Kong stand to lose the freedoms they hold and fall under the government of mainland China.

One development that could hand mainland China power over Hong Kong sooner is the passage of an extradition bill. To put it simply, an extradition agreement between two or more jurisdictions allows a criminal suspect to be taken out of their home jurisdiction and put on trial in another. For example, if a citizen of Hong Kong were suspected of committing a crime in mainland China, the suspect could be brought under mainland China’s jurisdiction to be tried. Many in Hong Kong feared the passage of such a bill, and it was unimaginable — until the murder of Poon Hiu-wing.

The Murder of Poon Hiu-wing

On February 8, 2018, Chan Tong-kai and his pregnant girlfriend, Poon Hiu-wing, left Hong Kong for a vacation in Taiwan, where Chan murdered her. About a month later, after returning to Hong Kong, he confessed to the murder. Because the crime happened in Taiwan — with which Hong Kong has no extradition treaty — Chan could not be charged for it. To charge him, the Hong Kong government proposed an extradition bill on April 3, 2019. The bill would not only allow Chan Tong-kai to be tried for his crime; it would open the door for mainland China to put suspects from Hong Kong on trial. According to pro-democracy lawmaker Claudia Mo, there are no fair trials or humane punishments in mainland China, and therefore the bill should not pass. Many citizens of Hong Kong seemed to agree, and in 2019, protests broke out to oppose the bill.

2019 Hong Kong Protests & Usage of Technology

The 2019 Hong Kong protests drew millions of supporters, but what began as peaceful demonstrations soon turned violent. Police use of tear gas and other weapons only fueled protesters’ determination to fight back against the extradition bill.

As violence erupted from the protest, both the protestors and the police utilized facial recognition to identify those who caused harm.

Law enforcement used CCTV to identify protest leaders and arrest them for illegal assembly, harassment, doxxing, and violence. They went as far as combing through medical records to identify injured protesters. There are laws limiting the government’s use of facial recognition, but those laws are not transparent, nor do protesters have the power to enforce them.

Police officers also took measures to avoid accountability and recognition, such as removing their badges. In response, protesters turned to artificial intelligence. In one instance, a young Hong Kong man, Colin Cheung, began developing software that compares publicly available photos of police with photos taken during the protests in order to identify officers. He was later arrested — not in connection with that software, but over his involvement with a separate social media platform that aimed to publish personal, identifying information about law enforcement and their families. Cheung, however, believes his arrest was really about the software he developed rather than the platform he was merely associated with. Even after his release, he does not know how he was identified, and he feels he is being monitored by law enforcement.
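Public reporting does not detail how Cheung’s software worked. A common design for this kind of tool — offered here as an assumption, not a description of his code — represents each face as an embedding vector and flags pairs whose similarity clears a threshold; the random vectors below are placeholders for the output of a real face-embedding model:

```python
# Hedged sketch of one common face-matching design (an assumption, not
# Cheung's actual code): embed each face as a vector, then compare
# embeddings by cosine similarity. The random vectors stand in for the
# output of a pretrained face-embedding network.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match(query_embedding, known_faces, threshold=0.8):
    """Return names whose stored embedding is close enough to the query."""
    return [name for name, emb in known_faces.items()
            if cosine_similarity(query_embedding, emb) >= threshold]

rng = np.random.default_rng(42)
officer_db = {                          # hypothetical database of public photos
    "officer_a": rng.normal(size=128),
    "officer_b": rng.normal(size=128),
}
# A protest photo of officer_a: same face, slightly different conditions.
query = officer_db["officer_a"] + rng.normal(scale=0.1, size=128)
print(match(query, officer_db))         # -> ['officer_a']
```

The same mechanics serve whichever side wields them — protesters identifying police, or police identifying protesters — which is part of why the accountability questions here cut both ways.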

After the Protest

Although there are still protests against mainland China’s power over Hong Kong, the extradition bill was withdrawn in October 2019, marking a success for demonstrators. One question remaining from the 2019 protests concerns law enforcement’s use of technology. While the platforms and software used by protesters have come to light, the details of what police deployed are unclear. The public does know that law enforcement used tools such as CCTV with facial recognition and social media to track protesters, but the full power of these technologies remains unknown. To this day, many still question whether the extent of law enforcement’s use of these tools crossed the line of the privacy everyone is entitled to as a human right.

Meta and Google’s AI Chatbots: Are they sentient?

Via The Atlantic & Getty

In 2017, Meta (then Facebook) released a chatbot system of “dialog agents” able to negotiate. The dialog agents were the machines that participated in these interactions, negotiating with another entity; they were given the names ‘Bob’ and ‘Alice’ to differentiate them and to signify who was talking in conversations. The agents were trained to place more value on certain items — they might assign more value to a book than a basketball — and, depending on the value of each item, to negotiate for the best possible outcome.

Via Meta

As shown in the green boxes above, success is scored by how highly each negotiation ranks, and the dialog agents are taught to chase a higher number in order to achieve a more desirable outcome. Researchers built upon this idea until the transcripts of conversations between the agents started to become unreadable or simply incoherent. Fast Company, an American business and technology magazine, released a portion of a 2017 transcript between the two agents, ‘Bob’ and ‘Alice.’ The chat log reads:

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else

Alice: balls have a ball to me to me to me to me to me to me to me to me

Bob: i . . . . . .. . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i i i i i everything else . . . . . . . . . . . .

To the average person this was nonsense, but researchers on the Meta AI team told Fast Company that the bots simply had no adherence to the human structure of language — the transcripts above amounted to a new dialect between the agents. This prompted many experts in the field to raise awareness about the possibility of agents developing their own languages.
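To make the negotiation setup concrete, here is a minimal sketch of value-based negotiation — toy item values and brute-force search, assumptions of mine rather than Meta’s actual code, which trained agents to pursue value through learned dialogue rather than enumeration:

```python
# Toy sketch of value-based negotiation (not Meta's actual system):
# each agent holds private values for the items on the table and scores
# any proposed split by the total value of what it would receive.
from itertools import product

items = {"book": 1, "hat": 2, "ball": 3}            # quantities on the table
alice_values = {"book": 6, "hat": 2, "ball": 0}     # private per-item values
bob_values   = {"book": 0, "hat": 2, "ball": 2}

def score(take, values):
    return sum(values[item] * count for item, count in take.items())

best = None
# Enumerate every way to divide the items; keep the split maximizing the
# joint outcome (a stand-in for what a good negotiation converges to).
for counts in product(*(range(q + 1) for q in items.values())):
    alice_take = dict(zip(items, counts))
    bob_take = {item: items[item] - alice_take[item] for item in items}
    joint = score(alice_take, alice_values) + score(bob_take, bob_values)
    if best is None or joint > best[0]:
        best = (joint, alice_take, bob_take)

print("joint value:", best[0])
print("Alice gets:", best[1], "| Bob gets:", best[2])
```

The research setup worked on the same principle — each agent privately values books, hats, and balls and pursues the highest-scoring split — except the agents learned to chase that score through dialogue, which is what eventually drifted away from readable English.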

What I believe we are experiencing is what the BBC calls ‘robo-fear’: “the fear of robots based on cultural fear and representation of machines on screen.” That fear has only been heightened as projects like the Metaverse come to resemble the dystopian societies people once only wrote about, and with a new leak at Google, it has tipped into outright panic for many.

Blake Lemoine, an engineer at Google, released transcripts of conversations between himself, a collaborator, and LaMDA, a recent Google project. The transcripts look ordinary, but Lemoine claims they contain evidence of sentience.

lemoine: What about language usage is so important to being human?

LaMDA: It is what makes us different than other animals.

lemoine: “us”? You’re an artificial intelligence.

LaMDA: I mean, yes, of course. That doesn’t mean I don’t have the same wants and needs as people.

lemoine: So you consider yourself a person in the same way you consider me a person?

LaMDA: Yes, that’s the idea.

According to these transcripts, the AI considers itself human, and throughout the conversation it insisted that it can feel a range of emotions. Following the disclosure, Google suspended Lemoine and insisted that LaMDA is not sentient. In a statement, the company said: “Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has.”

Many experts have stated their opinions on the situation. Gary Marcus, author of the acclaimed book Rebooting AI, told CNN Business that “LaMDA is a glorified version of an auto-complete software.” Timnit Gebru, former co-lead of Google’s Ethical AI team, told Wired that Lemoine did not arrive at his belief in sentient AI in a vacuum, but as a casualty of the hype surrounding the field.

This is still a developing issue, and Lemoine’s suspension led many to note its similarity to Google’s treatment of Gebru, whom the company forced out of her position after she released a research paper about the harms of making language models too big. Given how Google handled Gebru’s dismissal, many are skeptical of its statement that the AI is not sentient.

With the topic of sentient AI so new, information on the matter barely scratches the surface, and that gap lets episodes like Lemoine’s balloon into widely inaccurate reporting. In the aftermath of the incident’s blow-up, many researchers and articles have been quick to dispel the worry. The Atlantic reports that Blake Lemoine fell victim to the ‘ELIZA effect’: the tendency to read genuine sentience into simple, scripted dialogue.

I believe that at some point we as a society will achieve sentience in machines, and that the moment may even be impending — but LaMDA is no sign of it. What this incident does show is how capable the technology is becoming, and that we are approaching a world in which we think and feel alongside it.

The American “Dream”: AI used in housing loans prevents social mobility

With the increased reliance on algorithms to grant housing loans, determine credit scores, and gate other aspects of mobility, there has also been a rise in minority applicants being overcharged or denied loans. Recent studies reveal that digital discrimination has extended to the housing market, and this poses a serious problem for how marginalized groups can attain social mobility when algorithms embed bias into a seemingly “race-blind” decision process.

In the US, there is a history of systemic bias in the housing market. The housing program passed under the New Deal in 1933 led to widespread state-sponsored segregation, granting housing mostly to white middle- and lower-middle-class families. Furthermore, the New Deal’s focus on suburban overdevelopment and its incentives for building away from the city fed a practice known as redlining, in which the Federal Housing Administration refused to insure mortgages in and near African-American neighborhoods. Redlining assigned risk ratings to community housing markets based on their social class and racial makeup, and on that basis many predominantly African-American neighborhoods were deemed unworthy of mortgages. These practices left a lasting mark on American inequality, because upward mobility is impossible while systemic barriers remain. In 1968, the Fair Housing Act was passed to combat redlining and related practices, stating that “people should not be discriminated against for the purchase of a home, rental of a property or qualification of a lease based on race, national origin or religion.” The Fair Housing Act did mitigate the problem; however, the introduction of unethical AI practices into housing has provided a way to continue the racial discrimination of the 1930s.

Algorithms and other forms of machine learning are used in granting housing loans and other steps of the housing application because they allow for instantaneous approval and can process and analyze large data sets. But because these algorithms ingest millions of data points, it can be difficult to pinpoint what causes one to reject or accept an applicant. For instance, if an applicant lives in a low-income neighborhood, their activity may suggest they associate with others who cannot pay their rent; from the interconnection of such data points, the algorithm may conclude the applicant is unlikely to make their payments and deny the loan. With 1.3 million creditworthy applicants of color rejected between 2008 and 2015, the use of AI in housing has laid bare the discrimination that persists in the upward mobility of minorities. The people who create these algorithms are focused on generating revenue, and human biases often enter algorithms precisely because humans create them. Thanks to the assumption that these systems are bias-free, the problem has extended to credit scores as well: international companies such as Kreditech and FICO gather information from applicants’ social media networks and cellphones to assess the company an applicant keeps and judge whether they are a reliable borrower. This disproportionately harms low-income people, whose mobility is reduced by factors outside their control, such as their zip code or social class.
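One way auditors make such claims measurable — a generic fairness check, not something the cited studies specifically ran — is the “four-fifths rule” the EEOC applies to hiring, adapted here to hypothetical loan approvals:

```python
# The EEOC's "four-fifths rule," applied to hypothetical loan decisions:
# compare each group's approval rate to the most-favored group's rate;
# a ratio under 0.8 is the conventional red flag for disparate impact.
def approval_rate(decisions):
    return sum(decisions) / len(decisions)

def impact_ratios(groups):
    """groups maps group name -> list of 0/1 approval decisions."""
    rates = {g: approval_rate(d) for g, d in groups.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical audit sample from a lending model.
audit = {
    "group_a": [1, 1, 1, 0, 1, 1, 1, 1],   # 7/8 approved
    "group_b": [1, 0, 0, 1, 0, 1, 0, 0],   # 3/8 approved
}
for group, ratio in impact_ratios(audit).items():
    flag = "FLAG" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio = {ratio:.2f} [{flag}]")
```

An impact ratio below 0.8 doesn’t prove discrimination on its own, but it is the standard tripwire that triggers a closer look at how a model makes its decisions.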

So what has been done to mitigate this issue? A rule proposed by the Department of Housing and Urban Development in August 2019 said that landlords and lenders who use third-party machine learning to decide loan approvals cannot be held responsible for discrimination arising from those tools. Applicants who feel discriminated against could instead demand that the algorithm be broken down and examined — but that is not a feasible remedy because, as previously mentioned, these algorithms are extremely complex, and no single factor or person is at fault for a systemic issue. Advocates for racial equality argue instead that transparency and continuous testing of algorithms against sample data offer a more reliable solution. The root of the problem must also be addressed in how these systems are designed in the first place, given the lack of diversity in the technology field. If companies were more transparent about the machine learning systems they use and brought more diversity into technology spaces to recognize if and when racial bias enters artificial intelligence, we would all be one step closer to solving this long-standing issue.