
In November 2019, Silicon Valley and the Pentagon shared their first Thanksgiving. The newly christened National Security Commission on Artificial Intelligence invited all the biggest names in tech and foreign policy to discuss their visions for the future. Senate and House party leaders showed up, along with White House staffers, professors, a gaggle of corporate executives, and Pentagon officials.

Lt. General Jack Shanahan had spent just one year in his seat as the inaugural Director of the Joint Artificial Intelligence Center: in other words, as the Pentagon’s new AI czar. He spoke on a panel about public-private partnerships for the military’s use of AI. To his left sat Google’s Senior Vice President for Global Affairs. To his right, Google’s former CEO, who was now the Commission’s chairman. Shanahan was easily one of the most important men there: not just on the panel, but at the whole conference. He was responsible for the integration of AI systems across the entire Defense Department. Despite his prestige, he was charmingly self-deprecating. “This is certainly the first and last time I will serve as a warm-up act to Dr. Henry Kissinger!” he joked.

Private sector partnership was, and continues to be, a touchy subject. Google had just pulled out of a Defense Department AI project that Shanahan himself oversaw. Shanahan pushed back against Google’s justification for the move, citing the exhaustive public comment period and list of ethical rules governing AI use. The Google execs seemed to agree. But later in the discussion, his position subtly shifted. Ethical concerns aside, he explained the gravity of the AI threat to national security. Soon, he said, AI will be processing battlefield information and handing down marching orders at breakneck speeds. To keep up with our enemies, we will have to adopt these technologies or “risk losing the fight.”

Lt. Gen. Shanahan is right that AI will reshape the nature of warfighting. China in particular has dedicated immense resources to achieving AI supremacy: the Politburo has poured billions of dollars, and even its own group study sessions, into improving the country’s AI capabilities. As a result, American policymakers are growing anxious. The RAND Corporation has written that while America’s lead in semiconductor design, paired with our allies’ superior semiconductor manufacturing, has kept us ahead thus far, that lead will not last unless we pick up the pace on research and development.

China achieving AI dominance could be catastrophic for the free world. Lethal Autonomous Weapons (LAWs), also known as ‘killer robots,’ could turn out to be overwhelmingly powerful tools in conventional warfare. But they are only one part of the larger AI arms race. As Shanahan suggested, computers may take on the role of commanding officers. AI may even be used to help top brass design geopolitical grand strategies. It is not clear how powerful these technologies will be, nor how long we have to wait. But if Shanahan is right to say that we must begin to think of war as “algorithm versus algorithm,” with armies commanded faster than human comprehension, then how could a country like Taiwan be secure? Even larger powers such as India could be left with no choice but appeasement. The liberal world order could be on the way out.

With this in mind, we must consider the cost of supremacy. China’s main advantage over the United States is its wealth of data. The more data you have, the better you can train AI. With more than a billion people and no right to privacy, China has a clear lead on that front. This has put pressure on our own privacy protections. Already, our policymakers are talking about boosting private sector AI as a sort of laboratory for military tech. This could lead us further down the road of “surveillance capitalism” in the name of national defense. For American supremacy to be worth saving, we must find a way to balance the competing values of privacy and security.

Faulty AI systems can also be very dangerous. Bias in data can lead to bias in algorithms. AI policy expert Osonde Osoba writes that this can be checked by keeping AI decisions transparent and easy to appeal. However, this approach may not work for national security applications. First, consider the ‘killer robots.’ Their decisions are inherently impossible to appeal. LAWs must get it right the first time, or we must be prepared to justify our mistakes. Second, consider algorithms that will design grand strategy. These algorithms would likely be based on classified data, and would themselves be state secrets. There will simply not be any room for transparency.

Even if we do outperform our competitors, do we really want to live in a world of “algorithms versus algorithms?” Most people are probably comfortable with AI taking on the menial tasks of life: driving them to work, treating them for the flu, organizing their schedules. But there is something unsettling about giving computers the choice between war and peace. Defense experts have raised concerns that AI may not fear escalation as much as we do. It makes sense: AIs are not elected leaders, or commanders with a sense of responsibility to their troops. They are machines. Shanahan’s joke that this was “the first and last time I will serve as a warm-up act for Dr. Henry Kissinger” was probably right, but not because of Kissinger’s illustrious career. If Shanahan is successful, Kissinger and his kin will become obsolete.

Concerns about trigger-happy AI have driven even Chinese leaders to entertain the idea of AI arms control agreements. But American analysts are skeptical, pointing out that the Chinese proposals would do little to constrain China’s own weapons development programs. Even if a stronger agreement were reached, recent events suggest the great powers would not be interested in keeping their word.

So is AI really just a “damned if we do, damned if we don’t” situation? Not necessarily. AI has the potential to save the military money and lives, but we must be willing to move slowly or risk destroying the very democracy we are trying to protect. Congress must take the lead on AI and stop delegating these hard ethical questions to obscure Pentagon appointees and corporate executives. Ideally, a Joint Committee on AI would investigate and regulate these new technologies. Hopefully, elected officials will see that there is more to hegemony than computing power. Our economic, geographic, and diplomatic superiority has bought us time to deal with AI carefully. As the supercomputer in the movie WarGames concluded, this arms race is “a strange game. The only winning move is not to play.”
