Meta introduces new teen safety features, removes 635,000 accounts over child sexualization
In a bid to protect younger users on its platform, Meta, the parent company of Instagram, has introduced new safety features for teen accounts. These measures aim to safeguard children from predatory adults and scammers who may seek to exploit them.
One of the key features is the provision of contextual information about the accounts that message teen users. When chatting with someone, teens can tap a "Safety Tips" icon that offers options to restrict, block, or report the user. This feature helps teenagers recognize and respond to potentially harmful contacts[1][3][5].
Meta has also introduced a combined block and report option in direct messages (DMs), allowing teens to quickly take action against accounts making them uncomfortable. This simplifies the process of stopping unwanted or predatory interactions[2][3].
The company has also been proactive in removing accounts involved in sexualizing children. Over 635,000 accounts have been removed, including accounts that left sexualized comments on, or solicited sexual images from, adult-managed accounts featuring children under 13. The crackdown extends to hundreds of thousands of additional accounts linked to that behavior[1][4].
To further protect teen users, accounts primarily sharing photos or videos of children are now automatically placed under the strictest message settings to block unsolicited messages. The "Hidden Words" feature is enabled on such accounts to filter offensive comments, and these accounts receive notifications encouraging them to review privacy and safety settings[2].
Meta is also testing the use of artificial intelligence to verify users' ages on Instagram. Since 2024, teen accounts on Instagram have been private by default[1][4].
These initiatives demonstrate Meta's multi-faceted approach to protecting younger users on Instagram by combining automated enforcement, enhanced user controls, and preventive settings for vulnerable accounts. The company is responding amid increased scrutiny regarding the safety and well-being of teen users on social media platforms[1][4].
It is worth noting that Meta is currently facing lawsuits from dozens of U.S. states, accusing it of harming young people and contributing to the youth mental health crisis. The lawsuits claim that the company knowingly and deliberately designed features on Instagram and Facebook that addict children to its platforms[6].
Despite these challenges, Meta's efforts to improve safety for teen users on Instagram are showing measurable engagement. After seeing safety notices, teen users blocked over a million accounts and reported another million, demonstrating uptake of the new protective tools[1][4]. The notices remind users to "be cautious in private messages" and to "block and report anything that makes them uncomfortable."
References:
[1] Meta Press Release: [Link to the press release]
[2] TechCrunch: [Link to the TechCrunch article]
[3] The Verge: [Link to The Verge article]
[4] Wired: [Link to the Wired article]
[5] The Guardian: [Link to The Guardian article]
[6] CNN Business: [Link to the CNN Business article]