Twenty years ago, social media was expected to be the great democratizer, making us all more ‘open and connected’ and toppling autocratic governments around the world. Those early optimistic visions simply missed the downside. We watched as it transformed our daily lives, elections, and the mental health of an entire generation. By the time its harms were well-understood, it was too late: the platforms were entrenched and the problems endemic. California Senate Bill 1047 aims to ensure we don’t repeat this same mistake with artificial intelligence.
AI is advancing at breakneck speed. Both of us are strong believers in the power of technology, including AI, to bring great benefits to society. We don’t think that the progress of AI can be stopped, or that it should be. But leading AI researchers warn of imminent dangers—from facilitating the creation of biological weapons to enabling large-scale cyberattacks on critical infrastructure. It’s not a far-off future: today’s AI systems are already flashing warning signs of dangerous capabilities. OpenAI just released a powerful new system that it rated as “medium” risk for enabling chemical, biological, radiological, and nuclear weapons creation — up from the “low” risk posed by its previous system. A handful of AI companies are significantly increasing the risk of major societal harms, without our society’s consent, and without meaningful transparency or accountability. They are asking us to trust them to manage that risk, on our behalf and by themselves.
We have a chance right now to say that the people have a stake and a voice in protecting the public interest. SB 1047, recently passed by the California state legislature, would help us get ahead of the most severe risks posed by advanced AI systems. Governor Gavin Newsom now has until September 30th to sign or veto the bill. With California home to many leading AI companies, his decision will reverberate globally.
SB 1047 has four core provisions: testing, safeguards, accountability, and transparency. The bill would require developers of the most powerful AI models to test for the potential to cause catastrophic harm and to implement reasonable safeguards. And it would hold them accountable if they cause harm by failing to take these common-sense measures. The bill would also provide vital transparency into AI companies’ safety plans and protect employees who blow the whistle on unsafe practices.
To see why these requirements are common sense, consider car safety. Electric vehicle batteries can sometimes explode, so the first electric vehicles were tested extensively to develop procedures for safely preventing explosions. Without such testing, electric vehicles might have been involved in many disasters on the road — damaging consumer trust in the technology for years to come. The same is true of AI. The need for safeguards, too, is straightforward. It would be irresponsible for a company to sell a car designed to drive as fast as possible if it lacked basic safety features like seatbelts. Why should we treat AI developers differently?
Governor Newsom has already signed several other AI-related bills this session, such as a pair of bills protecting the digital likeness of performers. While those bills are important, they are not designed to prevent the very serious risks that SB 1047 addresses – risks that affect all of us.
If Governor Newsom signs SB 1047, it won’t be the first time that California has led the country in protecting the public interest. From data privacy to emissions standards, California has consistently moved ahead of the federal government to protect its residents against major societal threats. This opportunity lies on the Governor’s desk once more.
The irony is that AI developers have already — voluntarily — committed to many of the common-sense testing and safeguard protocols required by SB 1047, at summits convened by the White House and in Seoul. But strangely, these companies resist being held accountable if they fail to keep their promises. Some have threatened to leave California if the bill is passed. That’s nonsense. As Dario Amodei, the CEO of Anthropic, has said, such talk is just “theater” and “bluster” that “bears no relationship to the actual content of the bill.” The story is depressingly familiar: the tech industry has made such empty threats before to coerce California into sparing it from regulation. It’s the worst kind of déjà vu. But California hasn’t caved to such brazen attempts at coercion before, and Governor Newsom shouldn’t cave to them now.
SB 1047 isn’t a panacea for all AI-related risks, but it represents a meaningful, proactive step toward making this technology safe and accountable to our democracy and to the public interest. Governor Newsom has the opportunity to lead the nation in governing the most critical technology of our time. And as this issue only grows in importance, this decision will become increasingly important to his legacy. We urge him to seize this moment and sign SB 1047.
Jason Winston George is an actor best known for his role as Dr. Ben Warren on Grey’s Anatomy and Station 19. A member of the SAG-AFTRA National Board, he helped negotiate the union’s contract on AI provisions.
Sneha Revanur is the Founder and President of Encode Justice.