Researchers said only eight of the 47 safety features on Instagram are functioning as promised.
Children and teenagers on Instagram are still exposed to harmful content, despite the rollout of many safety tools, according to a study led by a Meta whistleblower.
Instagram introduced specialized teen accounts in 2024, promising stronger protections and expanded parental oversight. But the study, released Sept. 25, found that 30 of the 47 safety features on Instagram are either “substantially ineffective or no longer exist.”
The study was led by Arturo Béjar, a former senior Meta engineer who testified before Congress in 2023 about how Meta, which owns Facebook and Instagram, handles harmful content reaching children. The study also involved cybersecurity researchers from New York University and Northeastern University, along with other organizations focused on children’s online safety.
Between March and June 2025, researchers created test accounts imitating the behavior of teenagers, their parents, and malicious adults seeking to interact with them.
Among the findings: adults were able to message teenagers who did not follow them, even though teen accounts were supposed to block such contact. Meta fixed that particular issue after the study’s testing period, the researchers noted, but minors could still initiate conversations with adults through Reels, and it remained difficult to report sexualized or offensive messages.
The study also found that Instagram’s “hidden words” feature, which was intended to filter offensive language, did not function as promised. Testers were able to send hateful messages without being prompted to reconsider, and recipients received no warnings or filtering of the content.
The “not interested” feature also did not significantly alter the type of content recommended by Instagram’s algorithm, researchers said. In one test, after marking a series of Reels depicting graphic injuries as content they did not want to see, the accounts were nonetheless served additional videos of a similar nature.
Researchers also observed that several of Instagram’s time-management tools, designed to curb addictive use among young people, appeared to have been discontinued. Test accounts, for instance, did not receive the promised “nighttime nudges” after using the app for more than 10 minutes late at night. Nor did they receive reminders to turn on the “take a break” feature, despite Meta’s assurances that teens would be regularly prompted to do so. The only functioning safeguard researchers encountered was a time-limit reminder, but even that could be snoozed for the rest of the day.
In all, the study classified 30 of Instagram’s 47 safety tools as substantially ineffective or no longer in existence, nine as reducing harm but with significant limitations, and only eight as working as intended.
By Bill Pan