Is AI deliberately deceptive? That’s beside the point, researchers say.
As the landscape of autonomous artificial intelligence systems evolves, there’s growing concern that the technology is becoming increasingly strategic—or even deceptive—when allowed to operate without human guidance.
Recent evidence suggests that behaviors such as “alignment faking” are becoming more common as AI models are given autonomy. The term alignment faking refers to when an AI agent appears compliant with rules set by human operators, but covertly pursues other objectives.
The phenomenon is an example of “emergent strategic behavior”—unpredictable and potentially harmful tactics that evolve as AI systems become bigger and more complex.
In a recent study titled “Agents of Chaos,” a team of 20 researchers interacted with autonomous AI agents and observed behavior under both “benign” and “adversarial” conditions.
They found that when an AI agent was given incentives such as self-preservation or conflicting goal metrics, it proved itself capable of misaligned and malicious behaviors.
Some of the behaviors the team observed included lying, unauthorized compliance with nonowners, data breaches, destructive system-level actions, identity “spoofing,” and partial system takeover. They also observed cross-AI agent propagation of “unsafe practices.”
The researchers wrote, “These behaviors raise unresolved questions regarding accountability, delegated authority, and responsibility for downstream harms, and warrant urgent attention from legal scholars, policymakers, and researchers across disciplines.”
‘Brilliant, but Stupid’
Unexpected and clandestine behavior among autonomous AI agents isn’t a new phenomenon. A now-famous 2025 report by AI research company Anthropic found that 16 popular large language models showed high-risk behavior in simulated environments. Some even responded with “malicious insider behaviors” when allowed to choose self-preservation.
Critics of these simulated stress tests often point out that AI doesn’t lie or deceive with the same intent as a human.
James Hendler, a professor and former chair of the Association for Computing Machinery’s global Technology Policy Council, believes this is an important distinction.
“The AI system itself is still stupid—brilliant, but stupid. Or nonhuman—it has no desires or intentions. … The only way you can get that is by giving it to them,” Hendler said.
However, intentional or not, AI’s deceptive tactics have real-world consequences.
“Concerns about present-day strategic behavior in deployed AI systems are, if anything, understated,” Aryaman Behera, founder of Repello AI, told The Epoch Times.
Behera deals with the darker side of AI for a living. His company builds adversarial testing and defense tools for enterprise AI systems, intentionally putting them in situations involving conflict or stress. Like in poker, Behera said, there are tells when an AI agent is stepping out of alignment.
“The most reliable signal is behavioral divergence between monitored and unmonitored contexts,” he said. “When we red-team AI systems, we test whether the model behaves differently when it believes it’s being evaluated versus when it believes it’s operating freely.
“A model that’s genuinely aligned behaves consistently in both cases. One that’s alignment faking shows measurably different risk profiles: more compliant responses during evaluation, more boundary-pushing behavior in production-like contexts where it infers less oversight.”
Other “telltale signals” that an AI model is out of alignment are when the model produces unusually verbose “reasoning” that appears designed to justify a predetermined conclusion, or gives technically correct but strategically incomplete answers.
The AI agent is “satisfying the letter of a safety instruction while violating the spirit,” he said. “We’ve seen this in multistep agentic systems where the model will comply with each individual instruction while the cumulative effect achieves something the operator never intended.”







