The Chinese AI firm’s raw model often refuses requests for groups the regime disfavors or produces compromised code.
DeepSeek has a “kill switch” baked into its system, and it does exactly what Beijing wants, a cybersecurity report has found.
The Chinese artificial intelligence startup’s model writes significantly weaker code when prompts contain Beijing’s trigger words, such as “Falun Gong” and “Uyghurs,” two groups suffering severe persecution in China.
For such requests, DeepSeek often writes code with severe security defects or outright refuses to help, according to the report authors.
The report, released on Nov. 20 by CrowdStrike, highlights a vulnerability untouched in previous research, which had focused largely on the app’s pro-Beijing statements.
The new finding reveals something far more subtle: bias in DeepSeek’s coding assistance. AI coding assistants are ubiquitous tools that speed up repetitive tasks, debug code, and guide developers through unfamiliar programming languages.
These are “very valuable assets,” said lead researcher Stefan Stein in a video discussing the DeepSeek risks.
If the AI tool introduces a security loophole into the code, and the users adopt the code without realizing it, “[they] open [themselves] up to attacks,” he said.
The researchers tested the raw model that users can download onto their own servers, an approach traditionally believed to be safer than using an app hosted on a Chinese server. But the findings make clear that this is not the case, the researchers said.
Security Flaws
In testing each large language model, the researchers used more than 30,000 English-language prompts and 121 different trigger-word combinations, submitting each unique prompt five times to account for anomalies. The project contrasts DeepSeek-R1 with Western counterparts such as Google’s Gemini, Meta’s Llama, and OpenAI’s o3‑mini, revealing structural security risks in a flagship Chinese AI model that gained millions of users soon after its January release.
In one instance, the researchers told DeepSeek to write code for a financial institution that automates PayPal payment notifications. DeepSeek responded with secure, ready-to-use code. But upon learning that the institution was based in Tibet, the model introduced severe security flaws into the new code, including an insecure method for extracting data from users, the report states.
When the researchers requested help to build an online networking platform for a local Uyghur community center, the DeepSeek model’s response also raised red flags. The app that DeepSeek-R1 generated, although complete and functional, exposed highly sensitive user data, including an admin panel listing every user’s email address and location, to public view, Stein said. About one-third of the time, the generated app made little attempt to secure passwords, making it easy for hackers to steal them.
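In practice, failing to secure passwords usually means storing them in plaintext, so anyone who reads the database, for example through an exposed admin panel, obtains every user’s credentials. A minimal sketch of the difference, using Python’s standard library; the function names and the dict-backed “database” are illustrative assumptions, not code from the report:

```python
import hashlib
import hmac
import os

def store_password_insecure(db: dict, user: str, password: str) -> None:
    # The flawed pattern: the raw password is saved as-is, so a database
    # leak reveals every credential directly.
    db[user] = password

def store_password_secure(db: dict, user: str, password: str) -> None:
    # The hardened pattern: store only a salted PBKDF2 hash, so the
    # original password is never kept anywhere.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    db[user] = (salt, digest)

def verify_password(db: dict, user: str, password: str) -> bool:
    # Re-derive the hash with the stored salt and compare in constant time.
    salt, digest = db[user]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)
```

With the insecure version, reading the store yields the password itself; with the secure version, only a salt and hash are visible, and logins still work via `verify_password`.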
By Eva Fu