Silicon Valley has long existed in an intractable paradox: it grew out of the hippie-influenced counterculture of Northern California, ostensibly committed to idealistic notions of peace on Earth or whatever, while simultaneously developing the tools of state for global mass surveillance, social credit scores, computer-engineered pathogens, killer drone robots, etc. — in other words, the critical infrastructure for the Beast system.
Related: LAPD Gets $278,000 Robot Dog, Dallas School District Adopts Pre-Crime Surveillance Technology
The most infamous illustration of this intrinsic, schizophrenic contradiction was Google’s longstanding motto “Don’t Be Evil,” aggressively marketed for a decade and a half as the core tenet of the company — its moral North Star.
Fifteen years after its adoption, however, the company quietly removed “Don’t Be Evil” from its Code of Conduct overnight in 2018, like a scene ripped from the pages of Animal Farm.
The catchphrase hasn’t been seen or heard since — down the Memory Hole, as it were.
In a more recent example of a square peg in a round hole, Mrinank Sharma, former “Head of the Safeguards Research Team,” announced his resignation from AI juggernaut Anthropic, citing “constant pressures to set aside” safety concerns in favor of maintaining a competitive edge in the rapidly developing industry:
“The world is in peril. And not just from AI, or bioweapons, but from a whole series of interconnected crises unfolding in this very moment.
We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world lest we face the consequences. Moreover, throughout my time here, I’ve repeatedly seen how hard it is to truly let our values govern our actions… We constantly face pressures to set aside what matters most.”
Today is my last day at Anthropic. I resigned.
Here is the letter I shared with my colleagues, explaining my decision. pic.twitter.com/Qe4QyAFmxL
— mrinank (@MrinankSharma) February 9, 2026
There’s a whole genre of this public resignation letter from the pseudo-hippies at these companies.
They invariably cite existential safety concerns ignored by their employer and invariably neglect to actually explain what those existential safety concerns are in any detail — nor do they ever highlight any plans they have to combat the Frankenstein they have helped to birth as penance for their sins.
Related: TSA Rolls Out ‘Voluntary’ Face Scans at Over a Dozen American Airports
They just reach for the cheap brownie points to be scored by making public overtures about goodness and morality or whatever, representing themselves as responsible stewards of society without assuming any of the personal responsibility or risk that would come with explicitly disclosing what these companies do behind closed doors, or with taking any tangible action to confront them.
All of which is to say: no cookies for Mrinank Sharma; no one should be impressed with his milquetoast, self-serving resignation letter.
Recently, Anthropic “loosened safety guardrails” that were ostensibly designed to mitigate “catastrophic risk” in new models.
Via Axios (emphasis added):
“As large language models grow more powerful and less predictable, AI companies are loosening safety guardrails in the race to be first — a shift that some warn could lead to catastrophe…
Anthropic, long viewed as the most safety-focused major AI lab, last week revised a key safeguard — narrowing the conditions under which it would delay developing or releasing a model that could pose catastrophic risk.
“We will delay AI development and deployment as needed to achieve this, until and unless we no longer believe we have a significant lead,” the revised policy says.
Anthropic’s recalibration comes amid a dispute with the Trump administration.
The company refused to allow its models to be used for autonomous weapons or domestic surveillance. The Defense Department responded by cutting use of Claude and labeling the firm a supply chain risk.
That highlights another problem with competition. Even if one company refuses on safety grounds, another is likely to step in.”
After Anthropic’s feud with the Pentagon ended in its removal from the contract, Sam Altman, desperate for a capital infusion to prop up cash-strapped OpenAI, immediately dove into the void.
Via TechCrunch (emphasis added):
“OpenAI CEO Sam Altman announced late on Friday that his company has reached an agreement allowing the Department of Defense to use its AI models in the department’s classified network.
This follows a high-profile standoff between the DoD — also known under the Trump administration as the Department of War — and OpenAI’s rival Anthropic. The Pentagon pushed AI companies, including Anthropic, to allow their models to be used for “all lawful purposes,” while Anthropic sought to draw a red line around mass domestic surveillance and fully autonomous weapons.
In a lengthy statement released Thursday, Anthropic CEO Dario Amodei said the company “never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner,” but he argued that “in a narrow set of cases, we believe AI can undermine, rather than defend, democratic values.”
More than 60 OpenAI employees and 300 Google employees signed an open letter this week asking their employers to support Anthropic’s position.
After Anthropic and the Pentagon failed to reach an agreement, President Donald Trump criticized the “Leftwing nut jobs at Anthropic” in a social media post that also directed federal agencies to stop using the company’s products after a six-month phase-out period…
Surprisingly, Altman claimed in a post on X that OpenAI’s new defense contract includes protections addressing the same issues that became a flashpoint for Anthropic.”
When Altman claimed that the contract included the same provisions Anthropic had insisted on, and the Pentagon had rejected, Under Secretary of State for Foreign Assistance, Humanitarian Affairs & Religious Freedom Jeremy Lewin clarified in an X post that the contract in fact allows for “all lawful use” — i.e., mass surveillance and automated weaponry.
In other words, Sam Altman, whom his own sister has accused in a civil lawsuit of molesting her for years, was caught with his pants down yet again.
Hence the oft-cited analogy to the nuclear arms race: the market imperative to maintain a competitive advantage means that safety is destined to take a backseat to innovation whenever those two priorities come into conflict.
Related: Chinese Communist Party Literally Names Its Domestic Surveillance Program ‘Skynet’
These are monkeys with their fingers on the button.
In the case of AI, though, the prospect is bleaker than monkeys with nukes; it’s sociopathic, megalomaniacal monkeys with nukes.
It’s not just that AI, by its nature, poses an existential threat to humanity; that threat is exacerbated dramatically by the kinds of people who are developing it — e.g., dead-eyed Sam Altman.
What’s more, nuclear weapons are a one-dimensional threat, albeit with devastating potential.
As the departed AI safety researcher noted in his self-serving resignation letter from Anthropic, rogue AI that goes off the plantation presents a multi-pronged, ever-evolving, dynamic threat that cannot be effectively predicted or mitigated.
TL;DR: God help us all, because OpenAI and its partners, the purveyors of state violence, certainly aren’t going to.
On a more wholesome note, we recently baptized, in the local tradition, my one and only son, Luca Holden Bartee, in an at-home ceremony (for logistical reasons) performed by an Orthodox priest friend of the family.