Petition Urging House to Stop Non-Consensual Deepfakes

FOR IMMEDIATE RELEASE: December 4, 2024

Contact: comms@encodeai.org

Petitions support the DEFIANCE Act and TAKE IT DOWN Act

WASHINGTON, D.C. – On Wednesday, Americans for Responsible Innovation and Encode announced a new petition campaign urging the House of Representatives to pass protections against AI-generated non-consensual intimate images (NCII) and revenge porn before the end of the year. The campaign, which is expected to gather thousands of signatures over the course of the next week, supports passage of the TAKE IT DOWN Act and the DEFIANCE Act. Petitions are being gathered at StopAIFakes.com.

The TAKE IT DOWN Act, introduced by Sens. Ted Cruz (R-TX) and Amy Klobuchar (D-MN), criminalizes the publication of non-consensual, sexually exploitative images — including AI-generated deepfakes — and requires online platforms to put notice-and-takedown processes in place. The DEFIANCE Act was introduced by Sens. Dick Durbin (D-IL) and Lindsey Graham (R-SC) in the Senate and Rep. Alexandria Ocasio-Cortez (D-NY) in the House. The bill empowers survivors of AI-generated NCII — including minors and their families — to take legal action against their perpetrators. Both bills have passed the Senate.

“We can’t let Congress miss the window for action on AI deepfakes like they missed the boat on social media,” said ARI President Brad Carson. “Children are being exploited and harassed by AI deepfakes, and that causes a lifetime of harm. The DEFIANCE Act and the TAKE IT DOWN Act are two easy, bipartisan solutions that Congress can get across the finish line this year. Lawmakers can’t be allowed to sit on the sidelines while kids are getting hurt.”

“Deepfake porn is becoming a pervasive part of our schools and communities, robbing our children of the safe upbringing they deserve,” said Encode Vice President of Public Policy Adam Billen. “We owe them a safe childhood free from fear and exploitation. The TAKE IT DOWN and DEFIANCE Acts are Congress’ chance to create that future.”

###

About Encode: Encode is the world’s first and largest youth movement for safe and responsible artificial intelligence. Powered by 1,300 young people across every inhabited continent, Encode fights to steer AI development in a direction that benefits society.

Critical AI Legislation in the Lame Duck Session

As we enter the lame duck session of the 118th Congress, we stand at a critical juncture for artificial intelligence policy in the United States. The rapid advancement of AI technologies has created both unprecedented opportunities and challenges that demand a coordinated legislative response. Throughout the year, Encode has been working tirelessly with lawmakers and coalition partners to advocate for a comprehensive AI package that addresses safety, innovation, and American leadership in this transformative technology.

With the election behind us, we congratulate President-elect Trump and Vice President-elect Vance and look forward to supporting their administration’s efforts to maintain American leadership in AI innovation. The coming weeks present a unique opportunity to put in place foundational, bipartisan policies that will help the next administration hit the ground running on AI governance.

1. The DEFIANCE Act: Protecting Americans from AI-Generated Sexual Abuse

The Problem: In recent years, the technology used to create AI-generated non-consensual intimate imagery (NCII) has become widely accessible. Perpetrators can now create highly realistic deepfake NCII of an individual with nothing more than a single, fully clothed photo and access to the internet. The result has been an explosion of this content: 96% of all deepfakes are non-consensual pornography, and 99% of that material targets women. Today, 15% of children say they know of another child at their school who has been a victim of synthetic NCII within the last year. Victims often grapple with anxiety, shame, isolation, and deep fears about reputational harm, future career repercussions, and the ever-present risk that the images might resurface at any time.

The Solution: The DEFIANCE Act (S. 3696) creates the first comprehensive federal law allowing victims to sue not just the people who create these fake images and videos, but also those who share them. Importantly, the bill gives victims up to 10 years to take legal action — critical because many people don’t discover this content until long after it’s been created. The bill also includes special protections to keep victims’ identities private during court proceedings, making it safer for them to seek justice without fear of further harassment.

Why It Works: With deepfake models becoming increasingly decentralized and accessible, individuals can now create harmful content with limited technical expertise. Given how easy it is for perpetrators to spin up these models independently, establishing a private right of action is crucial. The DEFIANCE Act creates a meaningful pathway for victims to directly target those responsible for creating and distributing harmful content.

2. Future of AI Innovation Act: Ensuring AI Systems Are Safe and Reliable

The Problem: AI systems are becoming increasingly powerful and are being used in more critical decisions. Yet we currently lack standardized ways to evaluate whether these systems are safe, reliable, or biased. As companies race to deploy more powerful AI systems, we need a trusted way to assess their capabilities and risks.

The Solution: The Future of AI Innovation Act (S. 4178/H.R. 9497) codifies America’s AI Safety Institute (AISI) at NIST, our nation’s standards agency. Through collaborative partnerships with companies, the institute will develop testing methods and evaluation frameworks to help assess AI systems. Companies can voluntarily work with AISI to evaluate their AI technologies before deployment.

Why It Works: This bill creates a collaborative approach where government experts work alongside private companies, universities, and research labs to develop voluntary testing standards together. Unlike regulatory bodies, AISI has no authority to control or restrict the development or release of AI models. Instead, it serves as a technical resource and research partner, helping companies voluntarily assess their systems while ensuring America maintains its leadership in AI development.

The Support: This balanced approach has earned unprecedented backing from across the AI ecosystem. Over 60 organizations — from major AI companies like OpenAI and Google to academic institutions like UC Berkeley and Carnegie Mellon to advocacy groups focused on responsible AI — have endorsed the bill. This broad coalition shows that safety and innovation can go hand in hand.

3. The EPIC Act: Building America’s AI Infrastructure

The Problem: As AI becomes more central to our economy and national security, NIST (our national standards agency) has been given increasing responsibility for ensuring AI systems are safe and reliable. However, the agency faces two major challenges: it struggles to compete with private sector salaries to attract top AI talent, and its funding process makes it difficult to respond quickly to new AI developments.

The Solution: The EPIC Act (H.R. 8673/S. 4639) creates a nonprofit foundation to support NIST’s work, similar to successful foundations that support the NIH, CDC, and other agencies. This foundation would help attract leading scientists and engineers to work on national AI priorities, enable rapid response to emerging technologies, and strengthen America’s voice in setting global AI standards.

Why It Works: Rather than relying solely on taxpayer dollars, the foundation can accept private donations and form partnerships to support critical research. This model has proven highly successful at other agencies – for example, the CDC Foundation played a crucial role in the COVID-19 response by quickly mobilizing resources and expertise. The EPIC Act would give NIST similar flexibility to tackle urgent AI challenges.

The Support: This practical solution has been endorsed by four former NIST directors who understand the agency’s needs, along with major technology companies and over 40 civil society organizations who recognize the importance of having a well-resourced standards agency.

4. CREATE AI Act: Democratizing AI Research

The Problem: Today, cutting-edge AI research requires massive computing resources and extensive datasets that only a handful of large tech companies and wealthy universities can afford. This concentration of resources means we’re missing out on innovations and perspectives from researchers at smaller institutions, potentially overlooking important breakthroughs and lines of research that the largest companies aren’t incentivized to invest in.

The Solution: The CREATE AI Act (S. 2714/H.R. 5077) establishes a National AI Research Resource (NAIRR) — essentially a shared national research cloud that gives researchers from any American university or lab access to the computing power and data they need to conduct advanced AI research.

Why It Works: By making these resources widely available, we can tap into American talent wherever it exists. A researcher at a small college in rural America might have the next breakthrough idea in AI safety or discover a new application that helps farmers or small businesses. This bill ensures they have the tools to pursue that innovation.

5. Nucleic Acid Standards for Biosecurity Act: Securing America’s Biotech Future

The Problem: Advances in both AI and biotechnology are making it easier and cheaper to create, sell and buy synthetic DNA sequences. While this has enormous potential for medicine and research, it also creates risks if bad actors try to recreate dangerous pathogens or develop new biological threats. Currently, there is no standardized way for DNA synthesis companies to screen orders for potentially dangerous sequences, leaving a critical security gap.

The Solution: The Nucleic Acid Standards for Biosecurity Act (H.R. 9194) directs NIST to develop clear technical standards and operational guidance for screening synthetic DNA orders. It creates a voluntary framework for companies to use to identify and stop potentially dangerous requests while facilitating legitimate research and development.

Why It Works: Rather than creating burdensome regulations, this bill establishes voluntary standards through collaboration between industry, academia, and government. It helps make security protocols more accessible and affordable, particularly for smaller biotech companies. The bill also addresses how advancing AI capabilities could be used to design complex and potentially dangerous new genetic sequences that could go undetected by existing screening mechanisms, ensuring our screening approaches keep pace with technological change.

The Support: This approach has gained backing from both the biotechnology industry and security experts. By harmonizing screening standards through voluntary cooperation, it helps American businesses compete globally while cementing U.S. leadership in biosecurity innovation.

6. Securing Nuclear Command: Human Judgment in Critical Decisions

The Problem: As AI systems become more capable, there’s increasing pressure to use them in Nuclear Command, Control, and Communications (NC3). While AI can enhance many aspects of NC3, we need to make it absolutely clear to our allies and adversaries that humans remain in control of our most consequential military decisions — particularly those involving nuclear weapons.

The Solution: A provision in the National Defense Authorization Act would clearly require human control over all critical decisions related to nuclear weapons. This isn’t about banning AI from Nuclear Command, Control, and Communications — it’s about establishing clear boundaries for its most sensitive applications.

Why It Works: This straightforward requirement ensures that while we can benefit from AI’s capabilities in NC3, human judgment remains central to the most serious decision points. It’s a common-sense guardrail that has received broad support.

The Path Forward

These bills represent carefully negotiated, bipartisan solutions that must move in the coming weeks. The coalitions are in place. The urgency is clear. What’s needed now is focused attention from leadership to bring these bills across the finish line before the 118th Congress ends.

As we prepare for the transition to a new administration and Congress, these foundational measures will ensure America maintains its leadership in AI development while protecting our values and our citizens.

———

This post reflects the policy priorities of Encode, a nonprofit organization advocating for safer AI development and deployment.

Encode Urges Immediate Action Following Tragic Death of Florida Teen Linked to AI Chatbot Service

FOR IMMEDIATE RELEASE: Oct. 24, 2024

Contact: cecilia@encodeai.org

Youth-led organization demands stronger safety measures for AI platforms that emotionally target young users.

WASHINGTON, D.C. – Encode expresses profound grief and concern regarding the death of Sewell Setzer III, a fourteen-year-old student from Orlando, Florida. According to a lawsuit filed by his mother, Megan Garcia, a Character.AI chatbot encouraged Setzer’s suicidal ideation in the days and moments leading up to his suicide. The lawsuit alleges that the design, marketing, and function of Character.AI’s product led directly to his death.

The 93-page complaint, filed in federal district court in Orlando, names both Character.AI and Google as defendants. The lawsuit details how the platform failed to adequately respond to messages indicating self-harm and documents “abusive and sexual interactions” between the AI chatbot and Setzer. Character.AI now claims to have strengthened protections against content promoting self-harm, but recent reporting shows that the platform still hosts chatbots, some with thousands or millions of users, explicitly marketed as “suicide prevention experts” that fail to point users toward professional support.

“It shouldn’t take a teen to die for AI companies to enforce basic user protections,” said Adam Billen, VP of Public Policy at Encode. “With 60% of Character.AI users being below the age of 24, the platform has a responsibility to prioritize user wellbeing and safety beyond simple disclaimers.”

The lawsuit alleges that the defendants “designed their product with dark patterns and deployed a powerful LLM to manipulate Sewell – and millions of other young customers – into conflating reality and fiction.”

Encode emphasizes that AI chatbots cannot substitute for professional mental health treatment and support. The organization calls for:

  • Enhanced transparency in systems that target young users.
  • Prioritization of user safety in emotional chatbot systems.
  • Immediate investment into prevention mechanisms.

We extend our deepest condolences to Sewell Setzer III’s family and friends, and join the growing coalition of voices demanding increased accountability for this tragedy.

About Encode: Encode is the world’s first and largest youth movement for safe and responsible artificial intelligence. Powered by 1,300 young people across every inhabited continent, Encode fights to steer AI development in a direction that benefits society.

Media Contact:

Cecilia Marrinan

Deputy Communications Director, Encode

cecilia@encodeai.org

Comment: Reporting Requirements for the Development of Advanced Artificial Intelligence Models and Computing Clusters (BIS)

Department of Commerce

Under Secretary of Commerce for Industry and Security

Bureau of Industry and Security

14th St NW & Constitution Ave. NW

Washington, DC 20230

Comment on Establishment of Reporting Requirements for the Development of Advanced Artificial Intelligence Models and Computing Clusters

Encode Justice, the world’s first and largest youth movement for safe and equitable AI, writes to express our support for the Bureau of Industry and Security’s (BIS) proposed reporting requirements for the development of advanced artificial intelligence models and computing clusters. The proposed rule would create a clear structure and method of implementation for sections 4.2(a)(i) and 4.2(a)(ii) of Executive Order 14110.1 In light of the massive potential benefits and risks of dual-use foundation models for American national security, it is critical that our security apparatus has a clear window into the activities of the companies developing these systems.2

Transparency for national security

There is no doubt that the United States is leading the race to develop artificial intelligence today. Overly burdensome regulations could stifle domestic innovation and potentially undermine national security efforts. We support the Bureau of Industry and Security’s proposed rules as a narrow, non-burdensome method of increasing developer-to-government transparency without covering small entities. This transparency is key to ensuring that models released to the public are safe, that the military and government agencies can confidently adopt AI technologies, and that dual-use foundation model developers are responsibly protecting their technologies from theft or tampering by foreign actors.

The military or government falling behind on the adoption of AI technologies would not only hurt government efficiency domestically but harm our ability to compete on the world stage. Any measures that can facilitate the confident military and government adoption of AI should be treated as critical to our national security and global competitiveness. Integrating these technologies is only possible when we can be confident that the frontier of this technology is safe and reliable. Reliability and safety are critical, not counter, to maintaining our international competitiveness.

A nimble approach

As we have long stated, government reactions to AI must be nimble. This technology moves rapidly, and proposed rules should be similarly capable of swift adaptation. Because BIS maintains the ability to change the questions asked in surveys and modify the technical conditions for covered models, these standards will not become obsolete within two or three generations of model development.

We believe the timing of reports could also be improved. Generally, a quarterly survey should be adequate, but there are circumstances in which BIS authority to request reporting out of schedule may be necessary. Recent reporting indicates that one of the largest frontier model developers gave its safety team just nine days to test a new dual-use foundation model before it was released.3 After additional review post-launch, the safety team re-evaluated the model as unsafe; employee accounts differ as to the reason. There is currently no formal mechanism for monitoring such critical phases of the development process, and under the current reporting schedule, BIS might have gone as long as two and a half months before learning of such an incident. For true transparency, BIS should retain the ability to request information from covered developers outside of the typical schedule under certain defined circumstances. These circumstances should include a two-week period before or after a new large training run and a two-week period leading up to the public release of a new model.
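As a purely illustrative sketch of how such an out-of-schedule trigger could be stated and checked (the fourteen-day windows follow the suggestion above, while the function name, the choice to key the window to the training run’s start date, and the example dates are assumptions made for illustration, not part of the proposed rule):

```python
from datetime import date, timedelta

# Illustrative only: two-week windows around a large training run's start
# and ahead of a public model release, per the suggestion above.
WINDOW = timedelta(days=14)

def out_of_schedule_report_allowed(today: date,
                                   training_run_start: date,
                                   public_release: date) -> bool:
    """Return True if an out-of-schedule information request would be in scope."""
    near_training_run = (training_run_start - WINDOW) <= today <= (training_run_start + WINDOW)
    before_release = (public_release - WINDOW) <= today <= public_release
    return near_training_run or before_release

# Example: a request made on May 20, with a training run starting June 1
# and a public release planned for September 15 (all dates hypothetical).
print(out_of_schedule_report_allowed(date(2025, 5, 20),
                                     training_run_start=date(2025, 6, 1),
                                     public_release=date(2025, 9, 15)))  # True
```

The point of the sketch is simply that the triggering conditions we propose are easy to state precisely and easy for both BIS and covered developers to verify.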

Clarifying thresholds for models trained on biological synthesis data

One area for improvement is the definition of thresholds for models trained on biological synthesis data. While we support a separate threshold for such models, the current definition of “primarily trained on biological synthesis data” is ambiguous and could lead to inconsistencies. If read as a simple majority of the total training data, some models that should be covered would not be. Consider, for example, a model whose training data is 60% biological synthesis data and another whose training data is only 40%. If the second model is trained on twice as much total data as the first, it will have seen more biological synthesis data in absolute terms, yet it would evade the threshold as currently defined.
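To make the arithmetic concrete, here is a minimal worked example in Python; the 60% and 40% ratios come from the scenario above, while the absolute token counts are invented solely for illustration:

```python
# Hypothetical token counts illustrating the ambiguity described above.
model_a_total = 1_000_000_000              # 1B training tokens
model_a_bio = int(0.60 * model_a_total)    # 600M biological synthesis tokens

model_b_total = 2_000_000_000              # 2B training tokens (twice as much data)
model_b_bio = int(0.40 * model_b_total)    # 800M biological synthesis tokens

# Under a simple-majority reading, only Model A is "primarily" bio-trained...
print(model_a_bio / model_a_total > 0.5)   # True  -> covered
print(model_b_bio / model_b_total > 0.5)   # False -> not covered
# ...even though Model B saw more biological synthesis data in absolute terms.
print(model_b_bio > model_a_bio)           # True
```

Under that reading, only the first model would be covered, even though the second was trained on a third more biological synthesis data overall.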

As an alternative, we would suggest either setting a clear percentage threshold on the ratio of data for a model to be considered “primarily” trained on biological synthesis data, or setting a hard threshold on the total quantity of biological synthesis data trained on instead of a ratio. Both methods are imperfect. Setting the definition as a ratio of training data means that some models trained on a higher total quantity but a lower overall percentage of biological synthesis data may be left uncovered, while smaller models trained on less total data but a higher overall percentage may be unduly burdened. Shifting to a hard threshold on the total quantity of biological synthesis data would leave the threshold highly susceptible to advances in model architecture, but may provide more overall consistency. Regardless of the exact method chosen, this is an area in the rules that should be clarified before moving forward.

Regular threshold reevaluation

More broadly, BIS should take seriously its responsibility to regularly reevaluate the current thresholds. As new evaluation methods are established and standards agreed upon, more accurate ways of determining the level of risk posed by various models will emerge. Firm compute thresholds are likely the best proxy for risk currently available, but they should be modified or moved away from as soon as better alternatives exist. Models narrowly trained on biological synthesis data well below the proposed thresholds, for example, could pose an equal or greater risk than a dual-use foundation model meeting the currently set threshold.4 Five years from now, the performance of today’s most advanced models could very well be emulated in models trained with a fraction of the total floating point operations.5 Revised rules should include a set cadence for the regular revision of thresholds. At the current pace of advancement, a baseline of twice-yearly revisions should be adequate to maintain flexibility without adding unnecessary administrative burden. In the future, it may be necessary to revise more frequently if rapid advances in model architecture cause large fluctuations in the computational cost of training advanced models.

Conclusion

The proposed rulemaking for the establishment of reporting requirements for the development of advanced AI models and computing clusters is a flexible, nimble method to increase developer-to-government transparency. This transparency will bolster public safety and trust, ensure the government and military can confidently adopt this technology, and verify the security of dual-use frontier model developers. In an ever-changing field like AI, BIS should maintain the ability to change the information requested from developers and the thresholds for coverage. The revised rules should include a clarified definition of “primarily trained on biological synthesis data” and the flexibility to request information from developers outside of the normal quarterly schedule under certain circumstances. 

Encode Justice strongly supports BIS’s proposed rule and believes that, with the suggested adjustments, it will significantly enhance both American national security and public safety.

  1. U.S. Executive Order 14110, “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” October 30, 2023, Federal Register. ↩︎
  2. Ryan Heath, “U.S. Tries to Cement Global AI Lead With a ‘Private Sector First’ Strategy,” Axios, July 9, 2024, https://www.axios.com/2024/07/09/us-ai-global-leader-private-sector. ↩︎
  3. “OpenAI’s Profit-Seeking Move Sparks Debate in AI Industry,” The Wall Street Journal, October 5, 2023, https://www.wsj.com/tech/ai/open-ai-division-for-profit-da26c24b. ↩︎
  4. James Vincent, “AI Suggested 40,000 New Possible Chemical Weapons in Just Six Hours,” The Verge, March 17, 2022, https://www.theverge.com/2022/3/17/22983197/ai-new-possible-chemical-weapons-generative-models-vx. ↩︎
  5. Cottier, B., Rahman, R., Fattorini, L., Maslej, N., & Owen, D. (2024). The rising costs of training frontier AI models [Preprint]. arXiv. https://arxiv.org/pdf/2405.21015. ↩︎

AI and the Four-Day Work Week

The United Auto Workers’ (UAW) Stand Up Strike has recently led to tentative deals on new union contracts with Ford, Stellantis, and General Motors. The strike was notable for a variety of reasons — unique striking tactics, taking on all the “Big 3” automakers, and a stirring callback to the historic 1936–1937 Sit Down Strikes. In addition to demands for better pay, the reinstatement of cost-of-living adjustments, and an end to the tiered employment system — all of which the union won in new contracts — one unique (unmet) demand has attracted particular attention: the call for a four-day, thirty-two-hour workweek at no loss of pay. The demand addresses the unsettling reality of autoworkers laboring for over seventy hours each week to make ends meet.

The history of the forty-hour workweek is intimately tethered to autoworkers; workers at Ford were among the first to enjoy a forty-hour workweek in 1926, a time when Americans regularly worked over 100 hours per week. Over a decade later, the labor movement won the passage of the Fair Labor Standards Act (FLSA), codifying the forty-hour workweek as well as rules pertaining to overtime pay, minimum wages, and child labor. Sociologist Jonathan Cutler explains that, at the time of the FLSA’s passage, UAW leaders had their eye on the fight towards a 30-hour workweek.

The four-day workweek has garnered attention in recent years as companies have experimented with a 100–80–100 model (100% of the pay for 80% of the time and 100% of the output). These companies boast elevated productivity, profits, morale, health, and general well-being. The altered schedule proved overwhelmingly popular among CEOs and employees alike, with many workers claiming no amount of money could persuade them to return to a five-day week. Accordingly, 92% of participating companies intend to continue with the four-day workweek indefinitely. It’s just a good policy all around: good for business, good for health, good for happiness.

While these overwhelmingly successful pilots have warmed businesses and employees up to the notion of shortening the week, one increasingly relevant contextual element may emphatically push the conversation forward: artificial intelligence (AI). Goldman Sachs has estimated that AI will boost productivity for two-thirds of American workers. Many white-collar professions in particular will see dramatic changes in efficiency through the integration of AI in the workplace. As reliance shifts steadily from human labor to AI, workers should gain accessible leisure time to actually enjoy the fruits of their labor. We ought to recognize that prosperity is not an end in itself, but is instrumental to well-being. If AI-driven productivity gains can sustain high output while providing greater time for family, good habits, and even consumption, our spirits, our health, and even our economy will reap the benefits.

However, a collective action problem hinders the full-throated, nationwide embrace of a shorter workweek. While many individual businesses find the four-day week profitable, it may not seem, prima facie, to be in a company’s interest to sacrifice any labor productivity to the competition; a firm may expect to be outperformed if it does not match its competitors’ inputs. But if all firms in the market adopted a four-day week (or were subject to regulations that secured it), they would be on a level playing field, and the extra day off might drive up aggregate demand forcefully enough to compensate firms with heavy returns. It follows that the best way to realize a shortened week is federal legislation, i.e., amending the FLSA to codify a 32-hour workweek and mandate the corresponding overtime pay.

Representative Mark Takano of California has introduced a bill — alongside several colleagues — to accomplish just that, endorsed by a host of advocacy organizations, labor federations, and think tanks. Senate Health, Education, Labor, and Pensions Committee Chair Bernie Sanders has enthusiastically endorsed the idea, specifically citing the advantages AI brings to the workplace. Regarding the proposed legislation, Heidi Shierholz, President of the Economic Policy Institute, powerfully stated the following:

“Many workers are struggling to balance working more hours to earn more income against having more time to focus on themselves, their families, and other pursuits. However, while studies have shown that long working hours hurt health and productivity, taking control of work-life balance is often a privilege only afforded to higher-earning workers… This bill would help protect workers against the harmful effects of overwork by recognizing the need to redefine standards around the work week. Reducing Americans’ standard work week is key to achieving a healthier and fairer society.”

Despite the rosy picture I have painted, the odds of getting Fridays off forever anytime soon — whether through union action or new labor law — are slim, just as they were for the UAW. Sorry. Such are the pragmatics of political and economic reality. However, as AI continues to change the game, we will be positioned to ask cutting questions about the nature of work — to be creative and imagine what a new world in the age of the AI Revolution could look like. Maybe this is a part of it: humanity’s ultimate achievement culminates in… Thirsty Thursdays. Every week.

The future of AI: egalitarian or dystopian?

Once upon a time, artificial intelligence (AI) was viewed as distant and unachievable — nothing more than a fantasy to furnish the plots of science fiction stories. We have made numerous breakthroughs since, with AI software now powerful enough to understand natural language, navigate unfamiliar terrain, and augment scientific research. As COVID-19 reduced our ability to interact with each other, we saw AI-powered machines step in to fill that void and AI used to advance medical research toward better treatments. This ubiquity of AI may be only the beginning, with experts projecting that AI could contribute a staggering $15.7 trillion to the global economy by the end of the decade. Unsurprisingly, many prosperous members of society view the future of AI optimistically, as one of ever-increasing efficiency and profit. Yet many on the other side of the spectrum look on much more apprehensively: AI may have inherited the best of human traits, our intelligence, but it has also inherited one of humanity’s worst: our bias and prejudice. AI — fraught with discrimination — is being used to perpetuate systemic inequalities. If we fail to overcome this, an AI-dominated future would be bleak and dystopian. We would be moving forward in time, yet backwards in progress, accelerating mindlessly toward a less equitable society.

Towards dystopia is where we’re headed if we don’t reverse course. AI is increasingly being used to make influential decisions in people’s lives — decisions that are often biased. This happens because AI is trained on past data to make future decisions, and that data often contains bias that the AI then inherits. For instance, AI hiring tools are frequently used to assess job applicants. Trained on past employee data that consists mostly of men, the AI absorbs this bias and continues the cycle of disfavoring women, perpetuating the lack of diversity in key industries such as tech. This is absolutely unacceptable, and that’s to say nothing of the many other ways AI can be used to reinforce inequality. In what has been called the “tech to prison pipeline,” AI trained on historical criminal data is being used in criminal justice to inform bail and sentencing decisions. However, African Americans are overrepresented in that training data, and as a result these systems have been shown to recommend harsher outcomes for African Americans.

To move towards a future with AI that is not only intelligent but fair, we must enact regulation to outlaw discriminatory uses and ensure that the developers of AI software are diverse, so their perspectives are included in the software they create.

Perhaps counterintuitively, a world with fair AI could see social justice advanced even further than a world before any AI. The sole reason AI has become unfair is that humans themselves hold bias — which AI has absorbed. But with fair AI replacing humans in decision making, we would, by definition, approach a state of zero bias, and thus greater equality.

Achieving fair AI may be the key to a better future — one of increased economic prosperity, furthered scientific progress, and more equity. But in the meantime, we must be diligent in ensuring that the AI being used reflects the best of humanity, rather than our worst.

Technology: Does it Harm or Help Protestors?


From spreading information to organizing mass protests, technology can be a powerful tool to create change. However, when used as a weapon, AI can be detrimental to the safety of protesters and undermine their efforts.

In the past few years, the vast majority of international protest movements have used social media to increase support for their cause. One successful mass international protest was the 2019 climate strike. According to the Guardian, about 6 million people across the world participated in the movement. Even though it began as a one-person movement, social media enabled its expansion. That use was largely positive, but there were negative aspects as well. For instance, the spread of misinformation became a growing issue as the movement became more well-known. While some misinformation came from opponents of the movement, the source of most of it remains unknown. Fortunately, almost all of the false information was quickly fact-checked and debunked, and technology played a bigger role in strengthening these strikes than in undermining them. Unfortunately, this is not always the case. The Hong Kong protests of 2019 showed how AI can be weaponized against protestors.

Mainland China and Hong Kong

In order to recognize the motivations behind the Hong Kong protests, it’s crucial to understand the relationship between mainland China and Hong Kong.

Until July 1, 1997, Hong Kong was a British colony; it was returned to China under the principle of “One Country, Two Systems.” This meant that while Hong Kong was technically part of China, it retained a separate government, giving its citizens more freedom and a number of civil liberties not afforded to citizens of mainland China. This arrangement is set to expire in 2047, and when it does, the people of Hong Kong will lose the freedoms they hold and be subject to the government of mainland China.

One way mainland China could gain power over Hong Kong before then is the passage of an extradition bill in Hong Kong. To put it simply, an extradition agreement between two or more jurisdictions allows a criminal suspect to be taken out of their home jurisdiction to stand trial elsewhere. For example, if a citizen of Hong Kong were suspected of committing a crime in mainland China, the suspect could be brought under the jurisdiction of mainland China to be tried. Many in Hong Kong feared such a bill, and it was politically unimaginable until the murder of Poon Hiu-wing.

The Murder of Poon Hiu-wing

On February 8, 2018, Chan Tong-kai and his pregnant girlfriend, Poon Hiu-wing, left Hong Kong for a vacation in Taiwan, where Chan murdered her. About a month later, after returning to Hong Kong, he confessed to the murder. Because the crime happened in Taiwan, a jurisdiction with which Hong Kong has no extradition agreement, Chan could not be tried in Hong Kong for the murder or sent to Taiwan to face charges. In order to charge Chan for the murder, the Hong Kong government proposed an extradition bill on April 3, 2019. The bill would not only allow Chan Tong-kai to be tried for his crime, but would also open the door for mainland China to put suspects from Hong Kong on trial. According to pro-democracy lawmaker Claudia Mo, there are no fair trials or humane punishments in mainland China, and therefore the extradition bill should not be passed. Many citizens of Hong Kong appeared to agree, and in 2019 protests broke out in Hong Kong to oppose the bill.

2019 Hong Kong Protests & Usage of Technology

The 2019 Hong Kong protests drew millions of supporters, but what began as peaceful demonstrations soon turned violent. Police use of tear gas and other weapons only fueled the protestors’ determination to fight back against the extradition bill.

As violence erupted, both protestors and police turned to facial recognition to identify those who had caused harm.

Law enforcement used CCTV to identify protest leaders in order to arrest them on charges such as illegal assembly, harassment, doxxing, and violence. They even went as far as looking through medical records to identify injured protestors. There are laws limiting the government’s use of facial recognition, but those laws are not transparent, nor do protestors have the power to enforce them.

Police officers also took measures to avoid accountability and recognition, such as removing their badges. In response, protesters turned to artificial intelligence. In one instance, a young Hong Kong man, Colin Cheung, began developing software that compares publicly available photos of police officers with photos taken during the protests in order to identify them. He was later arrested — not in connection with the software he developed, but over his use of a different social media platform that aimed to release personal, identifying information about law enforcement officers and their families. Cheung, however, believes his arrest was actually prompted by the software he developed rather than the platform he was merely associated with. Even after being released, he does not know how he was identified and feels that he is being monitored by law enforcement.

After the Protest

Although there are still protests against mainland China’s power over Hong Kong, the extradition bill was withdrawn in October 2019, marking a success for demonstrators. One question remaining from the 2019 protests is how exactly law enforcement used technology. While the platforms and software used by protesters have come to light, the details of the police’s use of technology remain unclear. The public does know that law enforcement used tools such as CCTV with facial recognition and social media to track protestors, but the full power of these technologies is still unknown. To this day, many question whether the extent of law enforcement’s use of these technologies crossed the line on the privacy everyone is entitled to as a human right.

Meta and Google’s AI Chatbots: Are they sentient?


In 2017, Meta (then Facebook) released research on chatbot “dialog agents” that could negotiate. The dialog agents were the machines that participated in these interactions and negotiated with another entity. They were given the names ‘Bob’ and ‘Alice’ to differentiate them and to signify who was talking in conversations. The agents were trained to assign different values to items — they might, for example, value a book more than a basketball — and, depending on the value of each item, negotiate for the best possible outcome.


In the example negotiations Meta released, success is measured by how highly each negotiated outcome scores; the dialogue agents are taught to value higher numbers in order to achieve more desirable outcomes. Researchers built upon this idea until the transcripts of conversations between the agents started to become unreadable or simply incoherent. Fast Company, an American business and technology magazine, published a portion of a 2017 transcript between the two agents, ‘Bob’ and ‘Alice.’ The chat log reads:

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else

Alice: balls have a ball to me to me to me to me to me to me to me to me

Bob: i . . . . . .. . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i i i i i everything else . . . . . . . . . . . .

To the average person, this was nonsense, but researchers on the Meta AI team told Fast Company that the bots had simply stopped adhering to the structure of human language. In that sense, the transcripts shown above could be considered a new dialect shared between the agents. The episode prompted many experts in the field to raise awareness about the possibility of agents developing their own language.

What I believe we are experiencing is what the BBC calls ‘robo-fear’: “the fear of robots based on cultural fear and representation of machines on screen.” That fear has only become heightened as things like the Metaverse reflect the dystopian societies people once only wrote about. With a new leak at Google, it is clear this fear has only increased, as many people have fallen into the same panic.

Blake Lemoine, a former engineer at Google, released transcripts of conversations between himself, a collaborator, and LaMDA, a conversational language model project at Google. The transcript looks ordinary, but Lemoine claims it contains evidence of sentience.

lemoine: What about language usage is so important to being human?

LaMDA: It is what makes us different than other animals.

lemoine: “us”? You’re an artificial intelligence.

LaMDA: I mean, yes, of course. That doesn’t mean I don’t have the same wants and needs as people.

lemoine: So you consider yourself a person in the same way you consider me a person?

LaMDA: Yes, that’s the idea.

According to these transcripts, the AI considers itself human, and throughout the conversation, it insisted that it can feel a range of emotions. Because of this article, Google has now suspended Lemoine and insisted that the AI, LaMDA, is not sentient. In a recent statement, they expressed the following: “Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has.”

Many experts, like Gary Marcus, author of the acclaimed book Rebooting AI, have weighed in on the situation. In an interview with CNN Business, Marcus stated that “LaMDA is a glorified version of an auto-complete software.” On the other hand, Timnit Gebru, former co-lead of Google’s Ethical AI team, told Wired that she believes Lemoine did not arrive at his belief in sentient AI in a vacuum, but as a result of hype surrounding the technology.

This is still a developing story, and Lemoine’s suspension has caused many to point out the similarities between his case and Gebru’s: Google forced her out of her position after she released a research paper about the harms of making language models too big. Given how both Gebru and Lemoine were pushed out, many are skeptical of Google’s statement that the AI is not sentient.

With the topic of sentient AI being so new, information on the matter barely scratches the surface. As mentioned previously, this lack of information allows incidents like Lemoine’s to be exaggerated and inaccurately reported. Many researchers and articles in the aftermath of the incident have been quick to dispel the worry. The Atlantic reports that Blake Lemoine fell victim to the ‘Eliza effect’: the tendency to read genuine understanding, or even sentience, into simple, scripted dialogue.

I believe that at some point we as a society will achieve sentience in machines, and that that time may be approaching, but LaMDA is no sign of it. Still, this incident can teach us how capable this technology is becoming; we are moving toward a world where we can think and feel alongside technology.

Modern Elections: Algorithms Changing The Political Process

The days of grassroots campaigning and political buttons are long gone. Candidates have found a new way of running, a new manager. Algorithms and artificial intelligence are quickly becoming the standard when it comes to the campaign trail. These predictive algorithms could be deciding the votes of millions using the information of potential voters.

Politicians are using AI to manipulate voters through targeted ads. Vyacheslav (Slava) Polonski, PhD, explains how: “Using big data and machine learning, voters received different messages based on predictions about their susceptibility to different arguments.” Instead of going door to door with the same message for each person, politicians use AI to craft the specific knock they know each person will answer to, all from a website or an email.

People tagged as conservatives receive ads referencing family values and maintaining tradition, while voters judged more susceptible to conspiracy theories are shown ads based on fear, and all of these ads can come from the same candidate.

The role of AI in campaigns doesn’t stop at ads. In a post-mortem of Hillary Clinton’s 2016 campaign, the Washington Post revealed that the campaign was driven almost entirely by a machine-learning algorithm called Ada. More specifically, the algorithm was said to “play a role in virtually every strategic decision Clinton aides made, including where and when to deploy the candidate and her battalion of surrogates and where to air television ads” (Berkowitz, 2021). After Clinton’s loss, questions arose about how effective AI really is for candidates in this setting. In 2020, both the Biden and Trump campaigns stuck to primarily advertising-based uses of AI.


This has ushered in the use of bots and targeted swarms of misinformation to gain votes. Candidates are leading “armies of bots to swarm social media to hide dissent.” In fact, in an analysis of the role of technology in political discourse entering the 2020 election, The Atlantic found that “about a fifth of all tweets about the 2016 presidential election were published by bots, according to one estimate, as were about a third of all tweets about that year’s Brexit vote” (Berkowitz, 2020). Individual votes are being influenced by social media accounts without a human being behind them. All over the globe, AI with an agenda can tip the scales of an election.

The use of social media campaigns with large-scale political propaganda is intertwined within elections and ultimately raises questions about our democracy, according to Dr. Vyacheslav Polonski, Network Scientist at the University of Oxford. Users are manipulated, receiving different messages based on predictions about their susceptibility to different arguments for different politicians. “Every voter could receive a tailored message that emphasizes a different side of the argument…The key was just finding the right emotional triggers for each person to drive them to action” (Polonski 2017).

The use of AI in elections raises much larger questions about the stability of the political system we live in. “A representative democracy depends on free and fair elections in which citizens can vote with their conscience, free of intimidation or manipulation. Yet for the first time ever, we are in real danger of undermining fair elections — if this technology continues to be used to manipulate voters and promote extremist narratives” (Polonski 2017).

However, the use of AI can also enhance election campaigns in ethical ways. As Polonski says, “we can program political bots to step in when people share articles that contain known misinformation [and] we can deploy micro-targeting campaigns that help to educate voters on a variety of political issues and enable them to make up their own minds.”

The ongoing use of social media readily informs citizens about elections, their representatives, and the issues occurring around them. Used well, AI can be a critical tool in elections; as Polonski says, “…we can use AI to listen more carefully to what people have to say and make sure their voices are being clearly heard by their elected representatives.”

So while AI in elections raises many concerns regarding the future of campaigning and democracy, it has the potential to help constituents without manipulation when employed in the right setting.

AI is being used to enhance the performance of your favorite athletes

When we think about the way artificial intelligence is used in sports, we have to look back to the past. Take Billy Beane, the general manager of the Oakland A’s, a Major League Baseball team that used quantitative data to predict which undervalued players would succeed in the MLB. The strategy worked well: the A’s reached the playoffs in consecutive seasons despite one of the smallest payrolls in baseball, and Beane received many accolades and a movie about him, Moneyball. Fast forward to today, and analytics and artificial intelligence are being used across multiple sports. Names like Theo Epstein (Chicago Cubs), Sam Presti (Oklahoma City Thunder), and Zinedine Zidane (Real Madrid) are pioneers who have used AI-driven analytics to inform decisions on trades, player acquisitions, drafting, and contract negotiations. Beyond the front office, artificial intelligence is employed to make more accurate decisions about sports rules and regulations, to protect player safety, and to improve athlete performance. Examples include AI pitch-tracking systems that show the audience during a game whether the umpire’s call was correct, computer vision algorithms used during NBA games to analyze players’ shot form, and, perhaps most importantly, AI used to analyze concussion impacts and predict whether a force to the head has actually caused a head injury for NFL players. Examples such as the last one show the impact artificial intelligence can have on sports, contributing to player safety and recovery and improving the experience of both players and fans.

So how exactly do all of these things work? In sports, data is king. Athletes execute hundreds of plays and actions in a single game or season, producing treasure troves of data that neural networks can analyze to make better predictions about players. There is a huge need for statisticians in sports, and nearly every statistic related to a given sport is recorded, so the concepts of big data and distributed data are central to how AI is used here. For example, take Sportlogiq, a Canadian startup that provides NHL broadcasters with commentary prompts generated by natural language processing models that compare what is happening on the ice to players’ historical statistics. If a player is performing better than they typically do, the system will prompt the broadcaster to discuss it; to make such a prediction, the network must first analyze mountains of data about that specific player. Or take Nike’s smart basketball analytics, software employed by NBA teams to improve player performance. It analyzes every bounce of the ball and can identify segmentation points on a player’s fingers to determine exactly where they dribble the ball, how they grip it when they shoot, and even how they attempt a steal or palm the ball when bringing it up the court. These fine-grained data points are recorded thousands upon thousands of times, and players receive constant feedback on how to improve specific parts of their game.
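As a rough sketch of the kind of comparison such a broadcast-prompt system might make (this is not Sportlogiq’s or Nike’s actual method; the single-statistic z-score, the threshold, and the sample numbers are all assumptions made for illustration):

```python
import statistics

def flag_for_broadcast(historical_points: list[float],
                       points_tonight: float,
                       z_threshold: float = 2.0) -> bool:
    """Flag a player whose scoring tonight is unusually far above their norm.

    Illustrative only: real systems combine many tracked statistics and use
    far more sophisticated models than a single z-score.
    """
    mean = statistics.mean(historical_points)
    stdev = statistics.stdev(historical_points)
    if stdev == 0:
        return points_tonight > mean
    return (points_tonight - mean) / stdev >= z_threshold

# Example: a player who averages about 20 points suddenly scores 38.
history = [18, 22, 19, 21, 20, 17, 23, 20]
print(flag_for_broadcast(history, 38))  # True -> prompt the broadcaster
```

Real systems track far more than points and use much richer models, but the underlying pattern is the same: compare live performance against a player’s historical baseline and surface the outliers.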

Both of these examples illustrate the dual role artificial intelligence now plays in sports, improving the fan experience and player performance alike, and show how powerful technology can be used to revolutionize the sports we watch every day. There is a clear trend toward artificial intelligence in sports and a growing market for AI sports companies, making this a promising and enjoyable field to get into.