Researchers said only a few of the 47 safety features on Instagram are functioning as promised.
Children and teenagers on Instagram are still exposed to harmful content, despite the rollout of many safety tools, according to a study led by a Meta whistleblower.
Instagram introduced specialized teen accounts in 2024, promising stronger protections and expanded parental oversight. But the study, released Sept. 25, found that 30 of the 47 safety features on Instagram are either "substantially ineffective or no longer exist."
The study was led by Arturo Béjar, a former senior Meta engineer who testified before Congress last year about how Meta, which owns Facebook and Instagram, handles online content aimed at children. The study also involved cybersecurity experts from New York University and Northeastern University, and other organizations focusing on children's safety in online spaces.
Between March and June 2025, researchers created test accounts imitating the behavior of teenagers, their parents, and malicious adults seeking to interact with them.
Among the findings: adults were able to message teenagers who did not follow them, even though teen accounts were supposed to block such contact. Meta fixed this particular issue after the study's testing period, the researchers noted, but minors could still initiate conversations with adults through Reels, and it remained difficult to report sexualized or offensive messages.
The study also found that Instagram's "hidden words" feature, which was intended to filter offensive language, did not function as promised. Testers were able to send hateful messages without being prompted to reconsider, and recipients received no warnings or filtering of the content.
The "not interested" feature also did not significantly alter the type of content recommended by Instagram's algorithm, researchers said. In one test, after marking a series of Reels depicting graphic injuries as content they did not want to see, the accounts were nonetheless served additional videos of a similar nature.
Researchers also observed that several of Instagram's time-management tools, designed to curb addictive use among young people, appeared to have been discontinued. Test accounts, for instance, did not receive the promised "nighttime nudges" after using the app for more than 10 minutes late at night. Nor did they receive reminders to turn on the "take a break" feature, despite Meta's assurances that teens would be regularly prompted to do so. The only functioning safeguard researchers encountered was a time-limit reminder, but that could be snoozed for the rest of the day.
In total, the study concluded that 30 of Instagram's 47 safety tools were either ineffective or no longer functioning. Nine were found to reduce harm but with significant limitations, and only eight of the 47 tools functioned as intended.
By Bill Pan