Existential Safety Is AI Industry’s Core Weakness, Study Warns


‘By really shedding light on what companies are doing, we give them an incentive to do better,’ the Future of Life Institute’s president said.

Eight major artificial intelligence (AI) developers are failing to plan for how they would manage extreme risks posed by future AI models that match or surpass human capabilities, according to a study by the Future of Life Institute (FLI) published Dec. 3.

FLI’s Winter 2025 AI Safety Index assessed U.S. companies Anthropic, OpenAI, Google DeepMind, Meta, and xAI, and Chinese companies Z.ai, DeepSeek, and Alibaba Cloud across six themes, which included current harms, safety frameworks, and existential safety.

The independent panel of experts who conducted the review found that even among the highest-scoring developers, “existential safety remains the industry’s core structural weakness.”

Artificial narrow intelligence, or weak AI, is the level of AI that exists today, according to IBM.

Tech giants are working toward developing artificial general intelligence (AGI), or strong AI, which IBM defines as AI that can “use previous learnings and skills to accomplish new tasks in a different context without the need for human beings to train the underlying models.”

Artificial superintelligence, or super AI, if realized, “would think, reason, learn, make judgements and possess cognitive abilities that surpass those of human beings,” according to IBM.

“All of the companies reviewed are racing toward AGI/superintelligence without presenting any explicit plans for controlling or aligning such smarter-than-human technology, thus leaving the most consequential risks effectively unaddressed,” FLI said in the report.

All Fail on Existential Safety

The findings of the evaluation were presented in the form of report cards with letter grades from A to F, accompanied by a corresponding numerical grade point average (GPA).

For the existential safety metric, which “examines companies’ preparedness for managing extreme risks from future AI systems that could match or exceed human capabilities, including stated strategies and research for alignment and control,” not one developer scored higher than D.

Anthropic, OpenAI, and Google DeepMind all achieved a D, which, according to FLI, indicates a weak strategy that contains “vague or incomplete plans for alignment and control” or shows “minimal evidence of technical rigor.”

The remaining five developers scored Fs, meaning they were judged to have “no credible strategy,” to lack safeguards, or to be increasing their exposure to catastrophic risk.

By Victoria Friedman

