Every year, the Bulletin of the Atomic Scientists sets the hands of its iconic Doomsday Clock—a graphic representation of how close the world is to global disaster. And every year, there’s a huge influx of comments from readers despairing: “This is awful; what can I do?”

So, one of the things that we did for this issue of the magazine was to look for people who are making an effort—whether on a local, regional, or national scale—to keep the Clock from striking midnight.

The candidates we looked at came from all walks of life, with all sorts of backgrounds and interests, tackling many different threats. They were old and young—and some of them were quite young indeed. At the age of 15, Sneha Revanur founded an organization, Encode Justice, to deal with the harmful implications of artificial intelligence (AI). Now a college sophomore, she made Time Magazine’s September list of the 100 most influential people in AI and has been described by Politico as the “Greta Thunberg of AI.”

In this interview with Bulletin executive editor Dan Drollette Jr., Revanur describes how she became interested in the problem and what led her to found a youth-led, AI-focused civil-society group. She recounts how her friends went from thinking “Sneha does this AI thing and just, like, skips class and goes to D.C. sometimes” to expressing genuine concern about some of the problems associated with AI—which include rendering their dream jobs obsolete, surveilling them around the clock, and flooding their social media with deepfakes. And all that is in addition to outright AI-enhanced catastrophes.

Encode Justice now has 900 young members in 30 countries around the world and has drawn comparisons to earlier youth-led climate and gun-control movements. Revanur and her peers were invited to the White House; participated in efforts to legislate a better, safer AI future; wrote op-eds for The Hill; and helped defeat a state ballot initiative that would have inserted biased risk-assessment algorithms into the justice process—showing how much just one person can accomplish at the grass-roots level.

(Editor’s note: This interview has been condensed and edited for brevity and clarity.)

Dan Drollette Jr.: Where are you from?

Sneha Revanur: I’m originally from California—San Jose, right in the heart of Silicon Valley.

Drollette: Did that have an influence on your interest in artificial intelligence?

Revanur: I definitely would say so. My older sister works in tech, and both my parents are software engineers. So tech was always right in my face, and it got me to thinking about how to guide it in the right direction. My parents are always making jokes about how I’m out to regulate them.

But seriously, this all meant that I was exposed early on to a culture of thinking that every problem in society can be fixed with some sort of computational solution—whether that’s a mobile app, a machine-learning model, or some other mechanism to respond to something. I think that there was always this view that innovation was some sort of silver bullet.

And I think that view has exploded in recent years, with the rise of AI.

I often say that, had I been born anywhere else, Encode Justice would not exist. I really think that growing up in Silicon Valley, in the kind of household I did, was really, really pivotal in shaping me—and shaping the formation of this organization.

Drollette: How did the organization come to be called “Encode Justice”?

Revanur: I came up with the name. And I gave a lot of thought to its connotations—everything about it was very intentional. I mean, I could have chosen a name for the organization that conveyed a very negative view of technology.

But what I think is so powerful about the name “Encode Justice” is that it captures the sense that our organization’s goal is not about stopping all technology, nor is it about putting an end to innovation. Instead, we are trying to re-imagine what we do have and build justice into the frameworks of these systems from the very beginning. It actually is a call to action, instead of a call to shut things down or give up.

And I think that is a really powerful approach. Even as we are on the brink of potentially catastrophic threats, I remain grounded in the belief that if we act fast, we can get this right; we just need to move to set some rules of the road. If we do that, then AI still can be a force for good; it can open up realms of possibility.

So, I do believe that the message captured in the name Encode Justice still holds true and has as much meaning all these years later.

Drollette: When was the organization started?

Revanur: In the summer of 2020.

Drollette: So, if you’re in your second year of college right now, then at that time you would have been …

Revanur: Fifteen years old.

Drollette: Can you tell me how it came about?

Revanur: It’s an interesting story. A few years ago, I came across an investigation conducted by ProPublica that uncovered pretty staggering rates of racial bias in an algorithm that was being used by justice systems nationwide to evaluate the risk of a given person breaking the law again.[1]

The problem with the algorithm was that it was twice as likely to falsely predict that black defendants would recidivate—that is, re-offend—as it was for white defendants.

That was a rude awakening for me, because like most people, I tended to view algorithms as perfectly objective, perfectly scientific, and perfectly neutral. It’s difficult for us to conceptualize that these seemingly impenetrable mathematical formulae could actually perpetuate injustice or reflect some of the worst aspects of human society.

It became clear to me that this was an issue I wanted to work on.

If California voters had approved Proposition 25, it would have meant the end of cash bail in the state and ushered in an era of risk assessment algorithms that would supposedly measure a person’s likelihood of re-offending or skipping out on court. Image courtesy of Thomas Hawk/Flickr

A couple of years later, I came across a ballot measure in my home state, California Proposition 25, which sought to enshrine the use of very similar algorithms statewide in our legal system. If it had passed, it would have replaced the already unjust system of cash bail with an algorithmic tool much like the one that had been essentially indicted in the ProPublica investigation, and it would have been used in pretty much all pre-trial cases.

I realized there was very little public awareness of the potential dangers of introducing algorithmic tools into the pre-trial process.[2] And there was even less youth participation in the conversation about this technology—and I think youth participation is so critical, because whatever the system is, that’s the system we’re going to inherit as we enter adulthood.

So I began to rally my peers.

Together, we contacted voters, created informational content, partnered with community organizations across California, ran phone banks, and were eventually able to defeat the measure by a roughly 13-point margin.

Now at that point, we were just a campaign focused on a single ballot measure. But after our initial victory, I realized that we had this incredible network of youth in place, not just from California but from all over the world—about 900 high school and college students from 30 different countries—who were fired up and thinking more critically about the implications of AI.

I began thinking that we could take that and really make something.

That’s the point where we became a more formal, full-fledged organization, able to take on other issues: facial recognition technology, surveillance and privacy, democratic erosion, and all sorts of other risks from AI, from disinformation to labor displacement.

And I think that over the last year, it’s become apparent that there are many new and unanticipated risks of catastrophic harm from AI; for example, GPT-4 has already been “jailbroken” to generate bomb-making instructions.[3] And an AI system intended for drug discovery has already been re-purposed to design tens of thousands of potentially lethal chemical compounds in just hours.[4]

We’re seeing systems slipping out of our control; we are moving towards increasingly powerful, increasingly sophisticated systems that could pose grave, potentially existential, harm to humanity.

Consequently, organizations like Encode Justice are shifting towards making sure we can prepare for those threats, while at the same time not losing sight of the realities that we’re already face-to-face with.

Drollette: Did you have any idea that Encode Justice was going to be this successful? How many folks were a part of it when you started?

Revanur: I think it was really just 20 or 30 kids in the beginning. And a lot of them were from my school or neighboring schools, all in California.

And at that point, it was pretty skewed towards California, because we’d been working on a California issue.

To be quite frank, I honestly never envisioned it would grow as large as it has.

And I think the reason why it’s been so well received is that it is really a product of the times. Over the last year, we’ve seen an absolute explosion of interest—almost hysteria—in AI. Had that not taken place, then there really wouldn’t have been all this attention and visibility around the work that we’re doing. We just jumped into the space at the exact right moment.

It’s pretty astonishing.

Drollette: What’s next for Encode Justice? I believe you folks are working on something called the “Blueprint for an AI Bill of Rights”?

Revanur: That’s actually a project that the White House Office of Science and Technology Policy [OSTP] released last year, in 2022. My involvement was in advising OSTP on crafting those principles and making sure that they reflected youth priorities.

We first came in contact with OSTP in early 2022, when the agency was just beginning to mull over that project. There was a brief lull when the project fell by the wayside amid some changes in agency leadership, but over the summer of 2022 we did a lot of advocacy—contacting senators and ensuring that it moved back to the top of the priority list. We wrote an op-ed in The Hill, in collaboration with Marc Rotenberg of the Center for AI and Digital Policy, calling on the new OSTP director nominee to reprioritize the AI Bill of Rights.

And eventually the framework was released. It definitely is a great starting point—but at the moment, the framework and the principles in the AI Bill of Rights are not enforceable; they’re merely a blueprint. So obviously, there’s a lot of work that has to be done to ensure that we follow up on that critical work by actually passing regulations with real teeth.[5]

I think it really does speak to the urgency of this issue, that Washington is finally summoning up the political will to take meaningful action. And so I really do hope that we can translate some of the very promising ideas and principles in the Blueprint for an AI Bill of Rights into actually enforceable regulations.[6]

Drollette: As a journalist, I think I’ve noticed a change in how Big Tech is typically covered in the press. About 10 or so years ago, everything that I ran across in the popular press about computing tended to be along the lines of a fawning “rah, rah, everything is wonderful, tech can never do anything wrong” kind of coverage. Do you get the impression among your peers that there’s more of a realization that tech can bring problems as well as benefits?

Sneha Revanur. Image courtesy of Sneha Revanur

Revanur: I think the tides are turning—but to be quite frank, I don’t think we’re there yet. I think there is still a residual, prevailing sense of almost unchecked and unqualified optimism.

To be clear, I share some of that optimism as well, in the sense that I believe that technology has the potential to be a force for the positive transformation of society. I think that there’s so much that technology could do for humanity.

But I think we’ve seen firsthand that problems come up.

So, while I think that the tide is turning, the shift will take a while yet to complete.

And I think that if the tide is turning, it is turning unevenly. It is taking place in my generation in particular, whereas older generations are more removed from the frontline impacts of technology—and so are less skeptical of it. So yeah, I think it’s slowly but surely trending in the direction of a more qualified rather than unqualified optimism, but not uniformly.

Drollette: So maybe it could be said that technology can be a force for good but needs to be actively steered in that direction—especially by those affected?

Revanur: I think that framing this as a matter of steering is apt, because it implies that there is a human duty and a moral responsibility on our part to do that steering. And I think that steering will come in the form of really meaningful rules of the road for artificial intelligence, and in the form of urgent action to ensure that we address both immediate risks and longer-term, still-unrealized risks from AI. So it really is going to depend on our swift action.

And I think that we are at a point in time right now where it very well could be possible that we lose meaningful human control of AI systems. And I think that’s why it’s incredibly important that we act fast—because the costs could be unrecoverable otherwise.

Drollette: One of the things that I’ve been hearing is that generally youth these days are kind of apathetic, that they don’t feel that they can really make much of a difference. By that, I mean that youth can see what the problems are, but don’t think they can help contribute to a solution. Is that the kind of thing you’ve observed? Or are you the exception that proves the rule—the anomaly on campus, who’s been down to Washington to push for change?

Revanur: I definitely wouldn’t say that I’m anomalous at all. I think it’s really important to highlight that while I lead this organization, I don’t do it as just one individual—there’s a movement of 900 high school and college students around me.

So I think that it would not be right to describe my generation as apathetic when obviously, all the work that I’m doing is supported and fortified by this incredibly large coalition of youth from literally all over the world.

Though I do think that a couple of years ago, there definitely was a sense of apathy, because people weren’t able to conceptualize the impact that AI was having on their daily lives. I think that in the past, people tended to view artificial intelligence as an entirely abstract technical phenomenon, completely detached from our lived reality.

But I think that over the last year in particular, people everywhere, especially youth, have begun to recognize the impacts that AI could have on us—the risk of hallucinations from large language models, the effects of social media algorithms. I mean, AI is becoming a part of every aspect of our everyday lives: If you apply for a job, there are algorithms curating lists of jobs for you and effectively telling you what to apply for. There are algorithms screening your resume once you actually do submit the application. If you stand trial, there are algorithms evaluating you as a defendant. And there is surveillance everywhere, with facial recognition-enabled cameras that can track your whereabouts and your identity.[7] So I really think that people are becoming more and more cognizant of the ubiquity of artificial intelligence.

And with that recognition, I think that more people are becoming anxious about a dystopian future, and that anxiety is translating into concrete political action. It’s no longer an abstract issue discussed only by academics and PhDs.

And that’s part of the reason why, I think, youth-led organizations like Encode Justice are picking up steam.

Drollette: And you’re doing all this while still being a full-time college student—you’re juggling this activism while still dealing with classes, exams, internships, and everything else?

Revanur: It’s definitely challenging. Right now, I’m speaking to you by Zoom from the Williams College library, and then I’ve got a problem set that I’ve got to get to immediately after this call. So, you know, there’s a lot to do, and it’s definitely difficult.

But I think that what has helped get me through it all is the fact that I’m doing this work alongside so many of my peers who are in a very similar position: Every member of Encode Justice is either in high school or college. What that means is that all of us are navigating a very similar set of responsibilities: a full course load, internships, jobs, family obligations, and commitments outside of our schoolwork.

We are taking on this duty because we realize that our collective future depends on it. We are coming at this issue from different walks of life, different backgrounds, different vantage points, because the clock is ticking and our generation needs to act. So, we are taking some time out of our lives to really put our heads together and work on this. And I think that’s really powerful—and really beautiful.

[1] See “Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks,” by Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, ProPublica, May 23, 2016. Available at https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

[2] For more, see “Californians Should Reject Proposition 25: Reform Pretrial System Without Using Racially Biased Risk Assessments.” Human Rights Watch, July 29, 2020. Available at https://www.hrw.org/news/2020/07/29/us-californians-should-reject-proposition-25

[3] So-called “jailbreaking” is a method of removing restrictions and limitations so that prohibited modifications can be made and harmful content accessed. See “Research: GPT-4 Jailbreak Easily Defeats Safety Guardrails,” by Roger Montti, Search Engine Journal, October 23, 2023. Available at https://www.searchenginejournal.com/research-gpt-4-jailbreak-easily-defeats-safety-guardrails/498386/

[4] See “AI Drug Discovery Systems Might Be Repurposed to Make Chemical Weapons, Researchers Warn,” by Rebecca Sohn, Scientific American, April 21, 2022. Available at https://www.scientificamerican.com/article/ai-drug-discovery-systems-might-be-repurposed-to-make-chemical-weapons-researchers-warn/

[5] See “A first take on the White House executive order on AI: A great start that perhaps doesn’t go far enough,” Bulletin of the Atomic Scientists, October 30, 2023. Available at https://thebulletin.org/2023/10/a-first-take-on-the-white-house-executive-order-on-ai-a-great-start-that-perhaps-doesnt-go-far-enough/

[6] There are essentially five principles to the Blueprint for an AI Bill of Rights. It says that “You should be protected from unsafe or ineffective systems;” “You should not face discrimination by algorithms and systems should be used and designed in an equitable way;” “You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used;” “You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you;” and “You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.”

The full text of the AI Bill of Rights is available at https://www.whitehouse.gov/wp-content/uploads/2022/10/Blueprint-for-an-AI-Bill-of-Rights.pdf

[7] See “The high-tech surveillance state is not restricted to China: Interview with Maya Wang of Human Rights Watch,” Bulletin of the Atomic Scientists, September 8, 2022. Available at https://thebulletin.org/premium/2022-09/the-high-tech-surveillance-state-is-not-restricted-to-china-interview-with-maya-wang-of-human-rights-watch/