How AI Is Used to Fight Human Exploitation

Image credit: Equal Times

Written by Chaira Harder and Zoe Tomlinson

Edited by Alexandra Raphling

It began like an ordinary day for Linh Tran*; her friend had invited her to go shopping with a group of people in Vietnam’s capital, Hanoi. When two vehicles arrived to take the group to the city, Linh’s friend instructed her to ride in a separate car in order to “balance” the seating. What should have been a relaxing afternoon trip became a three-year nightmare. Linh found herself across the Chinese border, her friend’s vehicle nowhere in sight. Linh had been sold, becoming one of the many victims of sex trafficking.

For the next few years, Linh was locked inside a dog cage, wearing nothing but a collar. When she was brought out, it was to be teased and raped.

Today, an estimated 40 million people around the world are subject to human trafficking every year. Forty million, without the slightest bit of freedom. If those 40 million people were to link arms, the line would stretch roughly three times the width of the United States.

While the Covid-19 pandemic has had a detrimental effect on most businesses and industries, human trafficking has been thriving. Its recent rise is due, in part, to climbing unemployment rates. Today it is the second-largest criminal industry in the world, behind only the drug trade, and it is expanding faster than any other. Yet despite these alarming numbers, public discussion of this modern form of slavery remains scarce and shallow.

Despite its low profile, counter-trafficking work has progressed significantly thanks to artificial intelligence (AI). The technology provides the missing piece: strong evidence. In the past, even highly suspected traffickers could escape consequences, and further investigation, for lack of compelling evidence.

Tech companies such as IBM, Amazon, AT&T, and Microsoft have been on the front lines of the fight against human trafficking. They’ve been actively using AI to identify patterns of suspicious activity and trace potential sources of human trafficking.

Often through the development of software and the formation of tech coalitions such as Tech Against Trafficking, these major tech companies push other businesses, organizations, and banks to take part in counter-trafficking. The key is data sharing. Companies such as Western Union and Liberty Global work with IBM, providing information to analysts who trace trafficking-related flows of funds through a new IBM Cloud-hosted data hub. These institutions train machine learning models to recognize terms and incidents specific to human trafficking. AI also helps IBM identify other hallmarks of human exploitation, such as payments to the associates who help transport victims.
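The partners’ actual models are proprietary, but the underlying idea, scoring free-text transaction records against language seen in past trafficking cases, can be sketched in a few lines. The snippet below is only an illustration: the memos, labels, and risk threshold are invented, and this is not the system IBM or its partners actually run.

```python
# Minimal sketch: flagging transaction notes that resemble language from past
# trafficking-linked transactions. All example data here is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled transaction memos (1 = previously linked to a trafficking case).
memos = [
    "payment for hostess visa and travel escort",
    "weekly wire for driver, border crossing fee",
    "monthly rent for apartment",
    "grocery store purchase",
]
labels = [1, 1, 0, 0]

# TF-IDF turns free-text memos into term features; a simple classifier then scores
# how closely a new memo matches language from earlier flagged transactions.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(memos, labels)

new_memo = ["transport fee for two girls, paid to driver at border"]
risk = model.predict_proba(new_memo)[0][1]
print(f"risk score: {risk:.2f}")
```

In a real deployment, transactions scoring above some threshold would be routed to human analysts for review rather than acted on automatically.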

Other companies have taken different approaches, such as conducting research to map the landscape of human trafficking issues and solutions.

Image classification, and facial recognition in particular, makes hunting down traffickers more feasible. Traffic Jam is one successful example: it uses pattern-matching algorithms to comb through online advertisements and listings, detecting potential victims and criminal networks.
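Traffic Jam’s internals are not public, but the kind of facial-similarity search such tools rely on can be sketched with the open-source face_recognition library. The file names below are placeholders, and the 0.6 threshold is a common rule of thumb for that library, not Traffic Jam’s actual setting.

```python
# Minimal sketch of facial-similarity search across scraped ad photos.
# This is an illustration only, not the real Traffic Jam system.
import face_recognition

# Encode the face of a missing person from a reference photo.
reference = face_recognition.load_image_file("missing_person.jpg")
reference_encoding = face_recognition.face_encodings(reference)[0]

# Compare it against faces found in online advertisements.
ad_photos = ["ad_001.jpg", "ad_002.jpg", "ad_003.jpg"]
for path in ad_photos:
    image = face_recognition.load_image_file(path)
    for encoding in face_recognition.face_encodings(image):
        distance = face_recognition.face_distance([reference_encoding], encoding)[0]
        if distance < 0.6:  # smaller distance means the faces look more alike
            print(f"possible match in {path} (distance {distance:.2f})")
```

A production tool would compare encodings against millions of stored ad images, but the matching logic is essentially this: embed each face, measure distance, and surface likely matches for investigators.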

Global tracking and satellite technology also use AI to detect suspicious activity. In 2019, a months-long investigation finally concluded when a high-resolution satellite image captured two fishing vessels off Papua New Guinea’s coast. Some 2,000 victims found on the two boats were freed from human trafficking.

Machine learning and artificial intelligence have played a crucial role in slowing the growth of the human trafficking industry. The Covid-19 pandemic brings unique threats to victims of modern-day slavery, threats that can be fought and defeated with artificial intelligence. We need AI now more than ever.

Can We Escape an AI Arms Race?

Credit: Federal Computer Week

In November 2019, Silicon Valley and the Pentagon shared their first Thanksgiving. The newly christened National Security Commission on Artificial Intelligence invited all the biggest names in tech and foreign policy to discuss their visions for the future. Senate and House party leaders showed up, along with White House staffers, professors, a gaggle of corporate executives, and Pentagon officials.

Lt. General Jack Shanahan had spent just one year in his seat as the inaugural Director of the Joint Artificial Intelligence Center: in other words, as the Pentagon’s new AI czar. He spoke on a panel about public-private partnerships for the military’s use of AI. To his left sat Google’s Senior Vice President for Global Affairs. To his right, Google’s former CEO, who was now the Commission’s chairman. Shanahan was easily one of the most important men there: not just on the panel, but at the whole conference. He was responsible for the integration of AI systems across the entire Defense Department. Despite his prestige, he was charmingly self-deprecating. “This is certainly the first and last time I will serve as a warm-up act to Dr. Henry Kissinger!” he joked.

Private sector partnership was, and continues to be, a touchy subject. Google had just pulled out of a Defense Department AI project that Shanahan himself oversaw. Shanahan pushed back against Google’s justification for the move, citing the exhaustive public comment period and list of ethical rules governing AI use. The Google execs seemed to agree. But later in the discussion, Shanahan’s position subtly shifted. Ethical concerns aside, he explained the gravity of the AI threat to national security. Soon, he said, AI will be processing battlefield information and handing down marching orders at breakneck speeds. To keep up with our enemies, we will have to adopt these technologies or “risk losing the fight.”

Lt. Gen. Shanahan is right that AI will reshape the nature of warfighting. China in particular has dedicated immense resources to achieving AI supremacy. The Politburo has committed billions of dollars, and even group study sessions, to improving its AI capabilities. As a result, American policymakers are growing quite anxious. The RAND Corporation has written that while America’s lead in semiconductor design, paired with our allies’ superior semiconductor manufacturing, has kept us ahead thus far, it will not last unless we pick up the pace on research and development.

China achieving AI dominance could be catastrophic for the free world. Lethal Autonomous Weapons (LAWs), also known as ‘killer robots,’ could turn out to be overwhelmingly powerful tools in conventional warfare. But they are only one part of the larger AI arms race. As Shanahan suggested, computers may take on the role of commanding officers. AI may even be used to help top brass design geopolitical grand strategies. It is not clear how powerful these technologies will be, nor how long we have to wait. But if Shanahan is right that we must begin to think of war as “algorithm versus algorithm,” with machines commanding armies faster than human comprehension allows, then how could a country like Taiwan be secure? Even larger powers such as India could be left with no choice but appeasement. The liberal world order could be on the way out.

With this in mind, we must consider the cost of supremacy. China’s main advantage over the United States is its wealth of data. The more data you have, the better you can train AI. With a billion people without any right to privacy, China has a clear lead on that front. This has put pressure on our own privacy protections. Already, our policymakers are talking about boosting private sector AI as a sort of laboratory for military tech. This could lead us further down the road of “surveillance capitalism” in the name of national defense. For American supremacy to be worth saving, we must find a way to balance the competing values of privacy and security.

Faulty AI systems can also be very dangerous. Bias in data can lead to bias in algorithms. AI policy expert Osonde Osoba writes that this can be checked by keeping AI decisions transparent and easy to appeal. However, this approach may not work for national security applications. First, consider the ‘killer robots.’ Their decisions are inherently impossible to appeal. LAWs must get it right the first time, or we must be prepared to justify our mistakes. Second, consider algorithms that will design grand strategy. These algorithms would likely be based on classified data, and would themselves be state secrets. There will simply not be any room for transparency.

Even if we do outperform our competitors, do we really want to live in a world of “algorithm versus algorithm”? Most people are probably comfortable with AI taking on the menial tasks of life: driving to work, treating you for the flu, organizing your schedule. But there is something unsettling about giving computers the choice between war and peace. Defense experts have raised concerns that AI may not fear escalation as much as we do. It makes sense: an algorithm is not an elected leader, or a commander with a sense of responsibility to its troops. It is a machine. Shanahan’s joke that this was “the first and last time I will serve as a warm-up act for Dr. Henry Kissinger” was probably right, but not because of Kissinger’s illustrious career. If Shanahan is successful, Kissinger and his kin will become obsolete.

Concerns about trigger-happy AI have driven even Chinese leaders to entertain the idea of AI arms control agreements. But American analysts are skeptical, pointing out that the Chinese proposals would do little to constrain China’s own weapons development programs. Even if a stronger agreement were reached, recent events suggest the great powers wouldn’t be interested in keeping their word.

So is AI really just a “damned if we do, damned if we don’t” situation? Not necessarily. AI has the potential to save the military money and lives, but we must be willing to move slowly or risk destroying the very democracy we are trying to protect. Congress must take the lead on AI and stop delegating these hard ethical questions to obscure Pentagon appointees and corporate executives. Ideally, a Joint Committee on AI would investigate and regulate these new technologies. Hopefully, elected officials will see that there is more to hegemony than computing power. Our economic, geographic, and diplomatic superiority has bought us time to deal with AI carefully. As the computer in the movie WarGames put it, this arms race is “a strange game. The only winning move is not to play.”