Imagine a world where war is fought by machines, and infantry has been replaced by specialized robots. Imagine a world where intelligent software systems make life-and-death decisions, both moral and tactical, without human oversight. It’s not science fiction. It’s not even that far off.
Autonomous warfare is coming, and it’s going to be a bumpy ride.
War Has Changed
Technology has revolutionized many aspects of human life, and war is no exception. The military robots of today were mere science fiction a few decades ago. Some 40 percent of the U.S. aerial fleet now consists of unmanned drones, and the Air Force is training more drone operators than pilots. Autonomous ground vehicles have also made their way onto the battlefield, with more than 6,000 currently deployed in Iraq and Afghanistan. Some officials estimate that up to 25 percent of today’s infantry forces could be replaced by robots over the next few decades.
The shift towards autonomous warfare makes sense. Machines are cheaper than human beings, can integrate more information and react more quickly, and don’t suffer from many human failings, like fatigue. In 2002, Major Kenneth Rose of the U.S. Army’s Training and Doctrine Command outlined some of the advantages of having robots on the battlefield:
Machines don’t get tired. They don’t close their eyes. They don’t hide under trees when it rains and they don’t talk to their friends […] A human’s attention to detail on guard duty drops dramatically in the first 30 minutes […] Machines know no fear.
Technology has already changed the way military forces operate. Here are a few examples of military robots you could find in service today.
DRDO Daksh

Daksh, developed by India’s Defence Research and Development Organisation, is a battery-operated, remote-controlled robot used for locating and handling bombs and other hazardous materials. It can climb stairs, navigate steep slopes, and maneuver through small spaces to reach its destination. Using a robotic arm, it can lift objects and inspect them with its onboard X-ray device. If it determines that an object is a bomb, it can defuse it on the spot. Daksh is also equipped with a shotgun to break through locked doors.
With its master control station (MCS), Daksh can be operated remotely over a range of up to 500 meters.
Thales Goalkeeper

Originally developed in 1979 and still in use today, Goalkeeper is a fully autonomous weapon system used for short-range defense of ships against missiles, aircraft, and surface vessels. The system handles everything from surveillance to detection and destruction.
Goalkeeper uses dual radar subsystems that work together to identify and prioritize targets and engage the highest-priority threat. The system comes equipped with the GAU-8/A Avenger 30mm Gatling gun, which is also used by the A-10 Thunderbolt II aircraft.
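To get a feel for what "prioritize targets and engage the highest priority threat" means in practice, here is a toy sketch of threat ranking by time-to-impact. Everything here — the field names, the numbers, the ranking rule — is invented for illustration and is not Goalkeeper's actual logic.

```python
import math
from dataclasses import dataclass

@dataclass
class Track:
    """A simplified radar track. Fields are illustrative, not a real schema."""
    track_id: int
    range_m: float            # distance from the ship in meters
    closing_speed_mps: float  # positive = inbound toward the ship

def time_to_impact(t: Track) -> float:
    """Seconds until the track reaches the ship; infinite if it isn't closing."""
    if t.closing_speed_mps <= 0:
        return math.inf
    return t.range_m / t.closing_speed_mps

def prioritize(tracks: list[Track]) -> list[Track]:
    """Engagement order: the most imminent threat first."""
    return sorted(tracks, key=time_to_impact)

tracks = [
    Track(1, range_m=8000, closing_speed_mps=300),   # ~27 s out
    Track(2, range_m=1500, closing_speed_mps=680),   # ~2 s out: sea-skimming missile
    Track(3, range_m=5000, closing_speed_mps=-20),   # opening range: not a threat
]
print([t.track_id for t in prioritize(tracks)])  # → [2, 1, 3]
```

The real system fuses search and tracking radar data and accounts for maneuvering targets, but the core idea — rank every track by urgency and engage in that order — is the same.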
General Atomics MQ-9 Reaper
Formerly known as Predator B, the MQ-9 Reaper is a drone developed by General Atomics Aeronautical Systems and used primarily by the U.S. Air Force. Capable of remote controlled or autonomous flight, it’s used for long-term, high-altitude surveillance.
The MQ-9 is a larger and more powerful version of its predecessor, the MQ-1 Predator. The Reaper can carry 15 times more ordnance and cruise almost three times faster than the MQ-1.
The Reaper carries a whole slew of weapons, including the GBU-12 Paveway II laser-guided bomb, the AGM-114 Hellfire II air-to-ground missile, the AIM-9 Sidewinder, and the GBU-38 JDAM (Joint Direct Attack Munition). In the future, the AIM-92 Stinger may have a place on the vehicle as well.
Technology In Development
As is always the case, today’s technology is no match for tomorrow’s technology. The traditional understanding of war will be turned on its head as these and other systems join (or replace) soldiers on the battlefield over the next couple of decades.
Boston Dynamics BigDog

BigDog — the four-legged robot developed by Boston Dynamics — has had its fair share of press coverage. Resembling a headless animal, BigDog is three feet long, stands 2.5 feet tall, and weighs 240 pounds. Roughly the size of a small mule, BigDog is just that: a robotic pack mule designed to accompany soldiers in terrain too rough for conventional vehicles.
BigDog is designed to mimic the mobility and speed of a living animal, capable of navigating autonomously through uncharted wooded areas to a pre-determined destination.
Boston Dynamics Atlas

Atlas, also by Boston Dynamics, is a humanoid robot developed primarily for search and rescue missions. Ideally, it will be able to respond to dangerous emergency situations, like a nuclear meltdown: it can shut off valves, open doors, and operate equipment and vehicles in environments too dangerous for humans.
To answer the question that’s probably in the back of your mind, the Department of Defense said in 2013 that it had no interest in using this particular robot in combat — but it could become a valuable asset for rescue operations.
Howe & Howe Ripsaw

Ripsaw, by Howe & Howe Technologies, is an unmanned tank built for evaluation by the U.S. Army. It can perform a variety of operations, including convoy protection, perimeter defense, border patrol, crowd control, rescue, and bomb disposal.
The vehicle was originally introduced at a Dallas vehicle show in 2001, where it piqued the interest of the U.S. Army. The United States later ordered a prototype to be made and shipped to Iraq.
Risks & Ethical Concerns
Unmanned vehicles and weapons systems are no longer science fiction. They’re real, and they’re here to stay. With that said, there’s definitely a discussion to be had regarding the ethics and potential risks of these systems.
Artificial intelligence, even outside the scope of war, is pretty controversial. For years, some of the greatest minds in technology have warned that we need to be cautious with such systems. Elon Musk is particularly wary of artificial intelligence, noting that it could be “more dangerous than nukes.” At the MIT Aeronautics and Astronautics department’s Centennial Symposium last year, Musk had this to say:
I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon.
Combine deadly weapons with extremely intelligent software that can learn and make decisions independently, and you’ve got a disaster waiting to happen if the AI’s goal-seeking behavior is improperly specified. How do you adequately define an enemy? How do you ensure that artificial creativity won’t come up with horrifying, unanticipated solutions to the problem you asked it to solve? These are not trivial concerns as we place smarter and smarter algorithms in charge of more and more powerful military hardware.
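The "how do you define an enemy" problem is easy to demonstrate with a toy example. The rule, the track records, and the field names below are all made up — no real targeting system works this way — but they show how a specification that sounds reasonable can be satisfied while producing exactly the wrong outcome.

```python
# Toy illustration of goal misspecification: a crude rule-based "enemy"
# definition applied to simplified track records (all data invented).
def is_enemy(track: dict) -> bool:
    # The spec: "anything approaching quickly without a friendly transponder."
    return track["closing"] and track["speed"] > 5 and not track["iff_friendly"]

tracks = [
    {"id": "hostile drone", "closing": True, "speed": 40, "iff_friendly": False},
    # A civilian ambulance carries no military transponder...
    {"id": "ambulance",     "closing": True, "speed": 15, "iff_friendly": False},
    {"id": "friendly UAV",  "closing": True, "speed": 40, "iff_friendly": True},
]

flagged = [t["id"] for t in tracks if is_enemy(t)]
print(flagged)  # → ['hostile drone', 'ambulance']
```

The rule did exactly what it was told: the ambulance is approaching quickly without a friendly transponder, so it gets flagged alongside the drone. The failure is not in the execution but in the specification — and a more capable, more creative optimizer only finds such gaps faster.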
There are also concerns about security. No computer system is perfectly safe. What happens when enemy forces infect your machines with a computer virus, crippling them or changing their behavior? What safeguards will be in place for security exploits?
Beyond that, there’s also the question of moral judgement. War is complicated, and machine intelligence has its limits. A child approaches a convoy of soldiers with something in his hand. Is it a toy, or a grenade? Machine vision, at its current level of fidelity, may not be sure. He’s getting closer. Do you shoot? These are difficult questions, and it’s difficult to imagine our computer systems being sophisticated (not to say compassionate) enough to resolve them in the near future.
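One commonly proposed safeguard for exactly this kind of ambiguity is an abstention rule: the machine acts on its own only when its confidence is high, and hands the hard cases to a human. The sketch below is a minimal illustration of that idea — the probabilities and the 0.95 threshold are invented, not drawn from any deployed system.

```python
# Toy "abstain when unsure" policy: act autonomously only at high
# confidence, otherwise escalate to a human operator.
CONFIDENCE_THRESHOLD = 0.95  # illustrative value, not a real standard

def decide(p_threat: float) -> str:
    """Map an estimated threat probability to an action."""
    if p_threat >= CONFIDENCE_THRESHOLD:
        return "engage"
    if p_threat <= 1 - CONFIDENCE_THRESHOLD:
        return "stand down"
    return "escalate to human"  # the hard cases: toy or grenade?

for p in (0.99, 0.60, 0.02):
    print(p, decide(p))  # → engage / escalate to human / stand down
```

The catch, of course, is that the "toy or grenade" scenario lands squarely in the escalation band, where there may be no time to escalate — which is precisely why the moral judgement problem can’t be engineered away with a threshold.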
Trying to fight technological progress doesn’t work. The future of war will be, without a doubt, largely autonomous — the economic and pragmatic pressures are too strong. The question is whether we will be responsible enough in our development of these systems to carefully consider the implications of our design choices. We must have the judgement to know the limits of our creations, and not ask them to make decisions that they don’t have the wisdom to make.
What do you think about autonomous warfare? Feel free to share your thoughts in the comments below!