The Future of Life Institute has spent the past decade sounding the alarm over the existential risks posed by advanced artificial intelligence.
Hundreds of people, from conservative commentators to prominent tech executives, have signed a letter seeking a ban on “the development of superintelligence.”
This year, leading technology firms such as Google, Meta Platforms, and OpenAI have accelerated efforts to build artificial intelligence systems capable of outperforming humans across a broad spectrum of tasks, from the elementary to the complex.
A growing chorus of prominent people thinks that it is time to hit the brakes—at least temporarily.
The letter, organized by the Future of Life Institute, calls for a prohibition on the development of superintelligent AI until science charts a safe path for the technology and the public broadly supports it.
“We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in,” the brief statement, released on Oct. 22, reads.
The institute's petition has drawn thousands of signatures, including support from hundreds of high-profile figures aligned with its mission, among them AI pioneers Yoshua Bengio and Geoffrey Hinton.
Bengio said AI systems could outperform most individuals in various cognitive tasks in the next few years. While they will bring advancements, they could also “carry significant risks,” Bengio wrote in a personal note released with the letter.
“To safely advance toward superintelligence, we must scientifically determine how to design AI systems that are fundamentally incapable of harming people, whether through misalignment or malicious use,” he wrote.
“We also need to make sure the public has a much stronger say in decisions that will shape our collective future.”
The letter warns of increasing threats to the world, including the loss of freedom, civil liberties, and “human economic obsolescence and disempowerment.”
Among the other signatories are conservative media personality Glenn Beck, Virgin Group founder Sir Richard Branson, Apple cofounder Steve Wozniak, former national security adviser Susan Rice, and political commentator Steve Bannon.
The letter expresses consternation over the rapid development and deployment of AI, and its signatories span a wide array of industries, political ideologies, and religious backgrounds.
“The future of AI should serve humanity, not replace it,” Prince Harry, who signed alongside his wife, Meghan, said in a personal note released with the letter. “The true test of progress will be not how fast we move, but how wisely we steer.”
Stuart Russell, an AI pioneer and computer science professor at the University of California–Berkeley, said the statement is not a prohibition or moratorium “in the usual sense.” Instead, he wrote, it is a proposal to install the necessary safeguards for a technology that “has a significant chance to cause human extinction.”
“Is that too much to ask?” Russell wrote.
In a 2015 blog post, OpenAI CEO Sam Altman wrote that the rise of “superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity.”
Elon Musk, CEO of Tesla and SpaceX, told podcast host Joe Rogan earlier this year that there is “only a 20 percent chance of annihilation.”
“The probability of a good outcome is like 80 percent,” the billionaire entrepreneur said.
Experts and celebrities are not the only ones voicing caution.
The Future of Life Institute cited a recent national survey of 2,000 adults that found only 5 percent support for “the status quo of fast, unregulated development.” Close to two-thirds (64 percent), meanwhile, think that superhuman AI either should not be created until it is proven safe and controllable or “should never be developed.”
By Andrew Moran