A new report led by former Meta executive Arturo Béjar, together with multiple nonprofits, accuses Instagram of making deliberate design choices that put teens in danger. It argues that many of Instagram's safety tools for minors do not work as promised and that the company's public claims about teen safety mislead users and parents.
What the Report Found
The authors tested 47 of the 53 safety features Instagram claims to offer and found that most were missing, outdated, or ineffective. Only eight worked fully as intended, without major limitations; the rest either failed, had been disabled, or placed unfair burdens on users. The report stresses that the focus should not be on content moderation alone but on how Instagram's design itself channels harmful content to teens.
Problematic Features and Design Patterns
One major concern is that Instagram still allows adult strangers to contact minors through features built into its design. Even though the company claims to limit such interactions, the report found that adults could still bypass those restrictions and reach teen accounts.
Another design issue involves disappearing messages, which the platform encourages teens to use through gamified prompts. Because the messages vanish, it becomes harder for teens to report abuse or document wrongdoing after the fact.
The study also showed that Instagram still recommends sexual, violent, self-harm, and disturbing content to teen and underage accounts, even when Meta claims it avoids doing so.
Meta’s Response
Meta rejected the report's conclusions, calling them misleading and speculative. The company says many of its safety tools do function, that Teen Accounts already reduce exposure to harmful content, and that parental controls give families the means to shield teens. The report's authors counter that Meta's claims rely on cherry-picked data and lack transparency about how these tools perform in real life.
Why Design Matters More Than Moderation
Design choices shape how people use a platform. Even when moderation exists, design becomes the weak point if the paths users follow funnel them toward risky content. The report argues that Instagram prioritizes engagement and growth over teen safety, choosing features that boost usage even when they are harmful.
Because many safety tools demand action from the teen, such as manually hiding content or filing reports, the burden falls on the most vulnerable users. Worse, features meant to protect them may be hidden or ineffective, undermining trust.
What Should Happen Next
The authors urge Meta to redesign Instagram with safety as the default rather than as an optional add-on. They stress the need for clear and effective reporting paths for minors, alongside greater transparency about how safety tools function in practice. In addition, they recommend that Meta test new product changes against scenarios that simulate real cases of abuse to identify flaws before release.
Conclusion
The new report presents a sobering view: Instagram's underlying design may do more to undermine teen safety than to protect against harmful content. While Meta promotes Teen Accounts and parental tools, the reality appears more complex. Unless Instagram rethinks its design rather than patching individual features, teens may remain exposed even when safety is promised.
Bonus Read: Parents of Teens Testify to Congress After AI Chatbot Links in Teen Suicides