Artificial Intelligence (AI) has made headlines for its use in everything from education to healthcare. Less well known, however, is its use by the U.S. Department of Defense. Autonomous machines can complete many tasks without human supervision, from conducting missions to translating languages, making AI highly useful in warfare. However, this emerging technology comes with significant shortcomings.
AI can be particularly useful for gathering intelligence about adversaries because of the large data sets available for analysis. For example, Project Maven, a Pentagon AI project, uses algorithms to comb through footage taken by aerial vehicles to identify and target hostile activity. Such a system helps produce more accurate and timely decisions, aiding analysts who would otherwise spend hours reviewing the footage themselves.
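To make the idea concrete, the sketch below shows, in broad strokes, how an off-the-shelf object detector might flag frames of interest in aerial footage. It is an illustration only, not the actual Project Maven pipeline; the pretrained model, confidence threshold, and video source are all assumptions.

```python
# Illustrative sketch only: scan video frames with a generic pretrained
# object detector and flag frames that contain high-confidence detections.
# This is NOT the real Project Maven system; model choice and threshold
# are assumptions made for the example.
import cv2
import torch
import torchvision

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def flag_frames(video_path: str, score_threshold: float = 0.8) -> list[int]:
    """Return indices of frames containing at least one confident detection."""
    flagged = []
    capture = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        # Convert BGR (OpenCV) to an RGB float tensor in [0, 1], shape (C, H, W).
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        tensor = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
        with torch.no_grad():
            detections = model([tensor])[0]
        if (detections["scores"] > score_threshold).any():
            flagged.append(index)
        index += 1
    capture.release()
    return flagged
```

In practice, an analyst would still review the flagged frames; the point of such a pipeline is to narrow hours of footage down to the segments most likely to matter.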
With regard to combat itself, AI has the capability to fundamentally shift the structure of warfare. As we move beyond the Industrial Era of warfare, in which weaponry was most consequential, information is emerging as the most vital aspect of combat operations.
However, the use of AI in warfare has real limitations. First, obtaining datasets that can be used to generate intelligence can be a challenge, especially when organizations prefer to restrict access to their data. Even when developers gain access to this data, most software systems' image-processing capabilities are not yet advanced enough to overcome flaws in the imagery, such as poor lighting or blur.
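A minimal sketch, assuming OpenCV, of the kind of quality screening that weak imagery would fail: the variance of the Laplacian serves as a rough blur proxy and mean intensity as a lighting proxy. The thresholds here are illustrative, not tuned values from any real system.

```python
import cv2

def frame_is_usable(frame, blur_threshold: float = 100.0,
                    min_brightness: float = 40.0) -> bool:
    """Reject frames that are too blurry or too dark for reliable analysis."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Low Laplacian variance indicates few sharp edges, i.e. a blurry frame.
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    # Low mean intensity indicates a poorly lit frame.
    brightness = gray.mean()
    return sharpness >= blur_threshold and brightness >= min_brightness
```

Frames that fail such a check would contribute little to the detection pipeline above, which is one reason poor imagery remains a bottleneck.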
Although AI and machine learning software can be used to make decisions more efficiently, it does not adapt well to unfamiliar situations. Unlike humans, who can improvise solutions to new problems on the spot, AI systems work effectively only in the specific situations they were programmed to handle.
The debate over AI’s place in warfare is contentious. Supporters believe that autonomous systems can increase combat efficiency and limit harm to civilians. Critics argue that AI should not have the discretion to take human life. Ethical analyses have tended to assign culpability for any resulting harm to the inventor of a lethal system. Many world powers have engaged with the NGO-led Campaign to Stop Killer Robots, and no fully autonomous weapons are currently used by the United States.
Although AI in warfare is a controversial topic, looking at it holistically shows that it carries considerable dangers. Technology is our future, so it is important that we continue to prioritize safety and reliability.