Leopold Aschenbrenner issues a warning about the CCP exploiting AI: “The preservation of the free world against the authoritarian states is on the line.”
A researcher who was fired by OpenAI has predicted that human-like artificial general intelligence (AGI) could be achieved by 2027 and sounded the alarm on the threat of Chinese espionage in the field.
“If and when the CCP [Chinese Communist Party] wakes up to AGI, we should expect extraordinary efforts on the part of the CCP to compete. And I think there’s a pretty clear path for China to be in the game: outbuild the US and steal the algorithms,” Leopold Aschenbrenner wrote.
Mr. Aschenbrenner argued that, without stringent security measures, the CCP will exfiltrate “key AGI breakthroughs” in the next few years. “It will be the national security establishment’s single greatest regret before the decade is out,” he wrote, warning that “the preservation of the free world against the authoritarian states is on the line.”
He advocates more robust security for AI model weights (the numerical values reflecting the strength of connections between artificial neurons) and, in particular, for algorithmic secrets, an area where he perceives dire shortcomings in the status quo.
“I think failing to protect algorithmic secrets is probably the most likely way in which China is able to stay competitive in the AGI race,” he wrote. “It’s hard to overstate how bad algorithmic secrets security is right now.”
Mr. Aschenbrenner also argues that AGI could give rise to superintelligence in little more than half a decade by automating AI research itself.
Titled “Situational Awareness: The Decade Ahead,” Mr. Aschenbrenner’s series has elicited a range of responses in the tech world. Computer scientist Scott Aaronson described it as “one of the most extraordinary documents I’ve ever read,” while software engineer Grady Booch wrote on X that many elements of it are “profoundly, embarrassingly, staggeringly wrong.”
“It’s well past time that we regulate the field,” Jason Lowe-Green of the Center for AI Policy wrote in an opinion article lauding Mr. Aschenbrenner’s publication.
By Nathan Worcester