Defense experts say AI is essential in warfare, but its track record so far is mixed and its pitfalls unpredictable.
Artificial intelligence (AI) is often framed as a force multiplier that can accelerate decision-making and surface valuable information. Yet deployment exercises have produced mixed results, with systems stalling and software behaving unpredictably outside controlled environments.
Some defense insiders believe that AI tools also introduce new safety and escalation risks if not developed, evaluated, and trained correctly.
Over the past year, U.S. military testing has shown some AI systems failing in the field. In May 2025, Anduril Industries worked with the U.S. Navy to launch 30 AI drone boats, all of which ended up stuck idling in the water after the systems rejected their inputs.
A similar setback occurred in August 2025, when a mechanical failure during the company’s test of its Anvil counterdrone system caused a 22-acre fire in Oregon, according to a Wall Street Journal report.
Anduril responded to the reported AI test failures, calling them “a small handful of alleged setbacks at government experimentation, testing, and integration events.”
“Modern defense technology emerges through relentless testing, rapid iteration, and disciplined risk-taking,” Anduril stated on its website. “Systems break. Software crashes. Hardware fails under stress. Finding these failures in controlled environments is the entire point.”
But some say the challenges AI faces in the national security landscape should not be taken lightly. Problems such as brittle models and the wrong kind of training data can produce systems that do not perform as expected on the battlefield.
“This is why military-grade AI, purpose-built for national security use cases and the warfighter, is critical,” Tyler Saltsman, founder of EdgeRunner AI, told The Epoch Times.
Saltsman’s company has active research and development contracts with the U.S. military. He said AI systems are typically not designed for warfighting.
“[AI models] may choose to refuse or deflect certain questions or tasks if those requests do not comply with the AI system’s own rules,” Saltsman said. “A model refusing to provide guidance to a soldier in combat or giving biased responses rather than operationally relevant responses can have life-or-death implications.”
Scenarios such as the one Saltsman described can start with the wrong kind of training data.