Meta Rolls Out Teen Safety Features, Removes 635,000 Accounts Involved in Child Sexualization
Meta Introduces New Safety Measures to Protect Teen Users on Instagram
Social media giant Meta has rolled out a series of new safety measures on Instagram, specifically designed to safeguard teen users from online predators and inappropriate content. These measures focus on direct messaging (DM) and accounts interacting with children.
One of the key updates strengthens protections in DMs. Teens now see clearer safety prompts when opening new messages, along with additional information about the sender, such as when the account was created. This helps teens spot potentially suspicious or newly created accounts before engaging.
Another significant change is the introduction of a combined ‘Block and Report’ feature. This simplifies the process for teens to quickly remove and flag inappropriate or suspicious users in DMs.
Adult-run accounts that primarily feature children will also face stricter default message settings, automatic filtering of offensive comments via Hidden Words, and restrictions against contact from adults flagged as potentially suspicious by the platform.
In addition, Meta has improved automated filters to reduce the exposure of teens to inappropriate images and content.
These measures are in response to increased scrutiny over social media's impact on younger users' mental health and well-being. In 2024, Meta made teen accounts on Instagram private by default, and private messages are now restricted to people the teen follows or is already connected to.
Meta has detailed these new safety features in a blog post. The company is also testing artificial intelligence to determine whether kids are lying about their ages on Instagram, a platform intended for users aged 13 and older.
Meta also recently removed some 635,000 accounts that were leaving sexualized comments on, or requesting sexual images from, adult-run accounts featuring kids under 13. Of these, 135,000 were leaving sexualized comments, and another 500,000 were linked to accounts that "interacted inappropriately."
Meta revealed that teen users on its platforms have blocked more than a million accounts and reported another million after seeing a safety notice reminding them to be cautious in private messages and to block and report anything that makes them uncomfortable.
However, Meta is facing lawsuits from dozens of U.S. states, which accuse the company of harming young people and contributing to the youth mental health crisis by knowingly and deliberately designing features on Instagram and Facebook that addict children to its platforms.
These new safety measures also matter for compliance with evolving child-safety legislation, particularly in Australia, where an under-16 social media ban is set to take effect by the end of 2025.