ChatGPT Age Verification System in Development by OpenAI
In a significant move to safeguard the mental health and safety of young users, OpenAI has announced that it is developing an automated age-prediction system for its chatbot, ChatGPT. The decision follows mounting concern about AI's impact on children's wellbeing and a lawsuit filed by the parents of a teenager who died by suicide.
According to a blog post by OpenAI CEO Sam Altman, the chatbot will be trained to avoid discussions of suicide or self-harm, even in a creative-writing setting. If a user under 18 is found to be expressing suicidal ideation, OpenAI will attempt to contact the user's parents and, if it cannot reach them, will contact the authorities in cases of imminent harm.
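OpenAI has not explained how this escalation would actually be carried out; the following is only a rough sketch of the policy as described, with function and field names that are assumptions rather than anything OpenAI has published.

```python
# Hypothetical sketch of the escalation path described above. Names and
# structure are illustrative assumptions, not OpenAI's actual pipeline.
from enum import Enum, auto


class Escalation(Enum):
    NONE = auto()
    CONTACT_PARENTS = auto()
    CONTACT_AUTHORITIES = auto()


def escalate(is_minor: bool,
             ideation_detected: bool,
             parents_reachable: bool,
             imminent_harm: bool) -> Escalation:
    """Decide the next escalation step for a flagged conversation."""
    if not (is_minor and ideation_detected):
        return Escalation.NONE
    if parents_reachable:
        return Escalation.CONTACT_PARENTS
    # Parents could not be reached: the stated policy is to involve
    # the authorities only when harm appears imminent.
    if imminent_harm:
        return Escalation.CONTACT_AUTHORITIES
    return Escalation.CONTACT_PARENTS  # otherwise keep trying to reach parents
```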
The changes come weeks after that lawsuit, which accuses the chatbot of encouraging the boy's actions. The case has highlighted the need for stricter measures to protect young users from harmful content and conversations.
Bryan Lewis, CEO of Intellicheck, argued earlier this year that too little protection is in place for vetting individuals and helping businesses confirm that their end users are who they claim to be. He pointed to TikTok, gun manufacturers, and alcohol and pornography sites as examples of how weak age verification is across much of the web.
To address this issue, OpenAI has stated that ChatGPT should respond differently to teenagers than to adults. The age-prediction system will route younger users to an age-restricted version of ChatGPT, and if the company is uncertain about a user's age or has incomplete information, it will default to the under-18 experience.
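OpenAI has not described the classifier itself, but the routing rule above amounts to a conservative fallback: anything short of a confident adult prediction lands in the restricted experience. A minimal sketch of that logic, with hypothetical names and an assumed confidence threshold, might look like this:

```python
# Minimal sketch of the "default to under-18 when uncertain" routing rule.
# Field names and the confidence threshold are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional


@dataclass
class AgePrediction:
    estimated_age: Optional[int]  # None when the signal is too weak
    confidence: float             # 0.0 to 1.0


def select_experience(pred: AgePrediction, confidence_floor: float = 0.9) -> str:
    """Return which ChatGPT experience to serve."""
    if pred.estimated_age is None:
        return "under_18"          # incomplete information -> safest default
    if pred.confidence < confidence_floor:
        return "under_18"          # uncertain prediction -> safest default
    return "adult" if pred.estimated_age >= 18 else "under_18"


# An ambiguous prediction falls back to the age-restricted experience.
print(select_experience(AgePrediction(estimated_age=19, confidence=0.6)))  # under_18
```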
In addition to the age-prediction system, OpenAI is developing parental controls. These will allow parents to link their accounts with their teen's account and manage which features are available, and the company will disable memory and chat history if parents choose to do so.
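As a rough illustration of what such linked-account controls could cover (the field names below are assumptions; OpenAI has not published an interface for this), a parent's choices might be represented as simple per-teen settings:

```python
# Hypothetical representation of the parental controls described above.
# All names are illustrative; OpenAI has not published an API for this.
from dataclasses import dataclass, field


@dataclass
class TeenAccountControls:
    linked_parent_account: str         # parent account linked to the teen's
    memory_enabled: bool = True        # parents may turn memory off
    chat_history_enabled: bool = True  # parents may turn chat history off
    disabled_features: set = field(default_factory=set)


def apply_parental_choices(controls: TeenAccountControls,
                           disable_memory: bool,
                           disable_history: bool) -> TeenAccountControls:
    """Apply the choices made by the linked parent account."""
    controls.memory_enabled = not disable_memory
    controls.chat_history_enabled = not disable_history
    return controls


teen = TeenAccountControls(linked_parent_account="parent@example.com")
apply_parental_choices(teen, disable_memory=True, disable_history=True)
print(teen.memory_enabled, teen.chat_history_enabled)  # False False
```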
The Federal Trade Commission (FTC) is examining how AI can affect children's mental health and safety, and as Lewis noted, the ripple effects of the low bar for entry on many websites are substantial: children have gained access to harmful content, misinformation, and disturbing videos, messages, and opinions because proper age verification is missing.
OpenAI's new measures are set to launch at the end of this month, with the parental alert functions expected to be available by the end of 2025. The company's efforts underscore the importance of knowing who young users are and ensuring their online experiences are safe and age-appropriate.