Critical AI Legislation in the Lame Duck Session

As we enter the lame duck session of the 118th Congress, we stand at a critical juncture for artificial intelligence policy in the United States. The rapid advancement of AI technologies has created unprecedented opportunities as well as challenges that demand a coordinated legislative response. Throughout the year, Encode has been working tirelessly with lawmakers and coalition partners to advocate for a comprehensive AI package that addresses safety, innovation, and American leadership in this transformative technology.

With the election behind us, we congratulate President-elect Trump and Vice President-elect Vance and look forward to supporting their administration’s efforts to maintain American leadership in AI innovation. The coming weeks present a unique opportunity to put in place foundational, bipartisan policies that will help the next administration hit the ground running on AI governance.

1. The DEFIANCE Act: Protecting Americans from AI-Generated Sexual Abuse

The Problem: In recent years, the technology used to create AI-generated non-consensual intimate imagery (NCII) has become widely accessible. With a single, fully clothed photo and an internet connection, perpetrators can now create highly realistic deepfake NCII of almost anyone. The result has been an explosion of this content: 96% of all deepfakes are nonconsensual pornography, and 99% of that content targets women. Today, 15% of children say they know of other children at their own school who have been victims of synthetic NCII in the last year alone. Victims often grapple with anxiety, shame, isolation, and deep fears about reputational harm, future career repercussions, and the ever-present risk that the content might reappear at any time.

The Solution: The DEFIANCE Act (S. 3696) creates the first comprehensive federal civil remedy, allowing victims to sue not just the people who create these fake images and videos, but also those who share them. Importantly, the bill gives victims up to 10 years to take legal action, which is critical because many people don’t discover this content until long after it’s been created. The bill also includes special protections to keep victims’ identities private during court proceedings, making it safer for them to seek justice without fear of further harassment.

Why It Works: With deepfake models becoming increasingly decentralized and accessible, individuals can now create harmful content with limited technical expertise. Given how easy it is for perpetrators to spin up these models independently, establishing a private right of action is crucial. The DEFIANCE Act creates a meaningful pathway for victims to directly target those responsible for creating and distributing harmful content.

2. Future of AI Innovation Act: Ensuring AI Systems Are Safe and Reliable

The Problem: AI systems are becoming increasingly powerful and are being used in more critical decisions. Yet we currently lack standardized ways to evaluate whether these systems are safe, reliable, and free of bias. As companies race to deploy more powerful AI systems, we need a trusted way to assess their capabilities and risks.

The Solution: The Future of AI Innovation Act (S. 4178/H.R. 9497) codifies America’s AI Safety Institute (AISI) at NIST, our nation’s standards agency. Through partnerships with companies, the institute will develop testing methods and evaluation frameworks to help assess AI systems. Companies can voluntarily work with AISI to evaluate their AI technologies before deployment.

Why It Works: This bill creates a collaborative approach in which government experts work alongside private companies, universities, and research labs to develop voluntary testing standards. Unlike regulatory bodies, AISI has no authority to control or restrict the development or release of AI models. Instead, it serves as a technical resource and research partner, helping companies voluntarily assess their systems while ensuring America maintains its leadership in AI development.

The Support: This balanced approach has earned unprecedented backing from across the AI ecosystem. Over 60 organizations — from major AI companies like OpenAI and Google to academic institutions like UC Berkeley and Carnegie Mellon to advocacy groups focused on responsible AI — have endorsed the bill. This broad coalition shows that safety and innovation can go hand in hand.

3. The EPIC Act: Building America’s AI Infrastructure

The Problem: As AI becomes more central to our economy and national security, NIST has been given increasing responsibility for ensuring AI systems are safe and reliable. However, the agency faces two major challenges: it struggles to compete with private sector salaries to attract top AI talent, and its funding process makes it difficult to respond quickly to new AI developments.

The Solution: The EPIC Act (H.R. 8673/S. 4639) creates a nonprofit foundation to support NIST’s work, similar to successful foundations that support the NIH, CDC, and other agencies. This foundation would help attract leading scientists and engineers to work on national AI priorities, enable rapid response to emerging technologies, and strengthen America’s voice in setting global AI standards.

Why It Works: Rather than relying solely on taxpayer dollars, the foundation can accept private donations and form partnerships to support critical research. This model has proven highly successful at other agencies – for example, the CDC Foundation played a crucial role in the COVID-19 response by quickly mobilizing resources and expertise. The EPIC Act would give NIST similar flexibility to tackle urgent AI challenges.

The Support: This practical solution has been endorsed by four former NIST directors who understand the agency’s needs, along with major technology companies and over 40 civil society organizations that recognize the importance of a well-resourced standards agency.

4. CREATE AI Act: Democratizing AI Research

The Problem: Today, cutting-edge AI research requires massive computing resources and extensive datasets that only a handful of large tech companies and wealthy universities can afford. This concentration of resources means we’re missing out on innovations and perspectives from researchers at smaller institutions, potentially overlooking important breakthroughs and lines of research that the largest companies aren’t incentivized to invest in.

The Solution: The CREATE AI Act (S. 2714/H.R. 5077) establishes a National AI Research Resource (NAIRR) — essentially a shared national research cloud that gives researchers from any American university or lab access to the computing power and data they need to conduct advanced AI research.

Why It Works: By making these resources widely available, we can tap into American talent wherever it exists. A researcher at a small college in rural America might have the next breakthrough idea in AI safety or discover a new application that helps farmers or small businesses. This bill ensures they have the tools to pursue that innovation.

5. Nucleic Acid Standards for Biosecurity Act: Securing America’s Biotech Future

The Problem: Advances in both AI and biotechnology are making it easier and cheaper to create, sell, and buy synthetic DNA sequences. While this has enormous potential for medicine and research, it also creates risks if bad actors try to recreate dangerous pathogens or develop new biological threats. Currently, there is no standardized way for DNA synthesis companies to screen orders for potentially dangerous sequences, leaving a critical security gap.

The Solution: The Nucleic Acid Standards for Biosecurity Act (H.R. 9194) directs NIST to develop clear technical standards and operational guidance for screening synthetic DNA orders. It creates a voluntary framework that companies can use to identify and stop potentially dangerous requests while facilitating legitimate research and development.

Why It Works: Rather than creating burdensome regulations, this bill establishes voluntary standards through collaboration between industry, academia, and government. It helps make security protocols more accessible and affordable, particularly for smaller biotech companies. The bill also addresses the risk that advancing AI capabilities could be used to design novel, potentially dangerous genetic sequences that evade existing screening mechanisms, ensuring our screening approaches keep pace with technological change.

The Support: This approach has gained backing from both the biotechnology industry and security experts. By harmonizing screening standards through voluntary cooperation, it helps American businesses compete globally while cementing U.S. leadership in biosecurity innovation.

6. Securing Nuclear Command: Human Judgment in Critical Decisions

The Problem: As AI systems become more capable, there’s increasing pressure to use them in Nuclear Command, Control, and Communications (NC3). While AI can enhance many aspects of NC3, we need to make it absolutely clear to our allies and adversaries that humans remain in control of our most consequential military decisions — particularly those involving nuclear weapons.

The Solution: A provision in the National Defense Authorization Act would require human control over all critical decisions related to nuclear weapons. This isn’t about banning AI from NC3; it’s about establishing clear boundaries for its most sensitive applications.

Why It Works: This straightforward requirement ensures that while we can benefit from AI’s capabilities in NC3, human judgment remains central to the most serious decision points. It’s a common-sense guardrail that has received broad support.

The Path Forward

These bills represent carefully negotiated, bipartisan solutions that must move in the coming weeks. The coalitions are in place. The urgency is clear. What’s needed now is focused attention from leadership to bring these bills across the finish line before the 118th Congress ends.

As we prepare for the transition to a new administration and Congress, these foundational measures will ensure America maintains its leadership in AI development while protecting our values and our citizens.

———

This post reflects the policy priorities of Encode, a nonprofit organization advocating for safer AI development and deployment.