DARPA is funding AI to help make battlefield decisions

Military drone with a bomb at sunset.

The U.S. Defense Advanced Research Projects Agency (DARPA) is spending millions on research to use artificial intelligence (AI) in strategic battlefield decisions.

The military research agency is funding a project — called Strategic Chaos Engine for Planning, Tactics, Experimentation and Resiliency (SCEPTER) — to develop AI technology that will cut through the fog of war. The agency is betting that more-advanced AI models will simplify the complexities of modern warfare, pick out key details from a background of irrelevant information, and ultimately speed up real-time combat decisions.

"A tool to help fill in missing information is useful in many aspects of the military, including in the heat of battle. The key challenge is to recognize the limitations of the prediction machines," said Avi Goldfarb, Rotman chair in artificial intelligence and health care at the University of Toronto's Rotman School of Management and chief data scientist at the Creative Destruction Lab. Goldfarb is not associated with the SCEPTER project.

"AI does not provide judgment, nor does it make decisions. Instead, it provides information to guide decision-making," Goldfarb told Live Science. "Adversaries will try to reduce the accuracy of the information, making full automation difficult in some situations."

AI support could be especially useful for operations that span land, sea, air, space or cyberspace. DARPA's SCEPTER project aims to advance AI war games beyond existing techniques. By combining expert human knowledge with AI's computational power, DARPA hopes military simulations will become less computationally intensive, which, in turn, could lead to better, quicker war strategies.

Three companies — Charles River Analytics, Parallax Advanced Research, and BAE Systems — have received funding through the SCEPTER project.

Machine learning (ML) is a key area where AI could improve battlefield decision-making. ML is a type of AI in which computers are shown examples, such as past wartime scenarios, and "learn" from that data to make predictions.
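
The "learn from examples" idea can be made concrete with a toy sketch. The snippet below is illustrative only and is not drawn from the SCEPTER project; the scenario features and outcomes are invented, and it assumes the scikit-learn library is available.

```python
# Toy sketch of machine learning: a model is shown labeled examples and
# then predicts the outcome of an unseen case. All data is invented.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical past scenarios: [force_ratio, days_of_supply, open_terrain]
X = [[1.5, 10, 1],
     [0.8,  3, 0],
     [2.0,  7, 1],
     [0.6,  2, 0]]
y = ["advance", "hold", "advance", "hold"]  # recorded outcome for each

model = DecisionTreeClassifier().fit(X, y)  # "learn" from the examples
print(model.predict([[1.2, 5, 1]]))         # predict for a new scenario
```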

"It is where the core advances have been over the past few years," Goldfarb said.

Toby Walsh, chief scientist at the University of New South Wales AI Institute in Australia and an advocate for placing limits on autonomous weapons, agreed. But machine learning won't be enough, he added. "Battles rarely repeat — your foes quickly learn not to make the same mistakes," Walsh, who has not received SCEPTER funding, told Live Science in an email. "Therefore, we need to combine ML with other AI methods."

SCEPTER will also focus on improving heuristics — shortcuts that produce quick, workable answers to otherwise impractical problems, even if those answers are not perfect — and causal AI, which can infer cause and effect, allowing it to approximate human decision-making.
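
The speed-versus-perfection trade-off behind heuristics can be seen in a toy load-planning problem. The sketch below is illustrative only; the supplies, weights and values are invented, and the greedy rule stands in for heuristics in general.

```python
# Toy sketch of a heuristic: greedily load the supplies with the best
# value per unit of weight. Fast, but not guaranteed optimal.
supplies = [("ammo", 6, 60), ("fuel", 5, 45), ("rations", 5, 45)]  # (name, weight, value)
CAPACITY = 10

def greedy_load(items, cap):
    load, total = [], 0
    # Heuristic rule: highest value per unit weight first.
    for name, weight, value in sorted(items, key=lambda t: t[2] / t[1], reverse=True):
        if weight <= cap:
            load.append(name)
            cap -= weight
            total += value
    return load, total

print(greedy_load(supplies, CAPACITY))
# -> (['ammo'], 60); exhaustive search finds fuel + rations, worth 90.
```

The greedy rule finishes in a single pass, while checking every combination grows exponentially with the number of items; that trade-off, scaled up, is part of what makes exhaustive war-game planning computationally impractical.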

However, even the most advanced, groundbreaking AI technologies have limitations, and none will operate without human intervention. The final say will always come from a human, Goldfarb added.

"These are prediction machines, not decision machines," Goldfarb said. "There is always a human who provides the judgment of which predictions to make, and what to do with those predictions when they arrive."

The U.S. isn't the only country banking on AI to improve wartime decision-making.

"China has made it clear that it seeks military and economic dominance through its use of AI," Walsh told Live Science. "And China is catching up with the U.S. Indeed, by various measures — patents, scientific papers — it is already neck and neck with the U.S."

The SCEPTER project is separate from AI-based projects to develop lethal autonomous weapons (LAWs), which have the capacity to independently search for and engage targets based on preprogrammed constraints and descriptions. Such robots, Walsh noted, have the potential to cause catastrophic harm.

"From a technical perspective, these systems will ultimately be weapons of mass destruction, allowing killing to be industrialized," Walsh said. "They will also introduce a range of problems, such as lowering barriers to war and increasing uncertainty (who has just attacked me?). And, from a moral perspective, we cannot hold machines accountable for their actions in war. They are not moral beings."