OpenAI is rolling out an age prediction mechanism intended to improve safety for ChatGPT users. The feature follows a recent update allowing verified adults to generate explicit content, a notable shift in how OpenAI manages user content. The initial launch is planned for the European Union in the coming weeks, with global deployment to follow.
ChatGPT will assess a mix of behavioral and account-level signals to estimate a user’s age. These include account longevity, activity timestamps, usage patterns, and self-stated age. If a user is identified as under 18, ChatGPT will automatically enable stricter safety settings. However, if adults are mistakenly flagged as minors, they can disable these settings by verifying their age through Persona, a trusted third-party verification service.
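OpenAI has not published how these signals are weighted or combined. Purely as an illustrative sketch, the gating logic described above might resemble a simple scoring rule like the following, where every signal name, weight, and threshold is hypothetical:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AccountSignals:
    """Hypothetical bundle of the kinds of signals described above."""
    self_stated_age: Optional[int]    # age entered at signup, if any
    account_age_days: int             # account longevity
    late_night_activity_ratio: float  # fraction of activity at atypical hours
    teen_topic_ratio: float           # fraction of chats on youth-oriented topics

def estimate_is_minor(s: AccountSignals) -> bool:
    """Toy heuristic that errs on the side of treating the user as a minor."""
    if s.self_stated_age is not None and s.self_stated_age < 18:
        return True  # a self-stated minor age is taken at face value
    score = 0.0
    if s.account_age_days < 90:
        score += 0.3  # very new accounts carry less evidence of adulthood
    score += 0.4 * s.late_night_activity_ratio
    score += 0.5 * s.teen_topic_ratio
    return score >= 0.5  # above threshold: default to stricter settings

def apply_safety_settings(s: AccountSignals, verified_adult: bool) -> str:
    """A verified adult (e.g. via third-party ID check) bypasses the heuristic."""
    if verified_adult:
        return "standard"
    return "strict" if estimate_is_minor(s) else "standard"

# Example: a new account with heavy late-night, teen-topic usage is gated.
signals = AccountSignals(None, 30, 0.8, 0.6)
print(apply_safety_settings(signals, verified_adult=False))  # strict
```

The key design point mirrored here is the asymmetric default: when signals are ambiguous, the system falls back to the stricter setting, and only explicit age verification lifts it.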
“Ensuring that sensitive content is handled with care for our younger audience is a priority for OpenAI,” said a company representative.
The tool isn't foolproof. A study in Australia found that facial age estimation carries a margin of error, and some minors have bypassed protective measures on social media platforms. Even so, such measures can work at scale: Australia's recent ban on under-16 social media accounts has blocked around 4.7 million accounts.
As the European Parliament considers similar legislation, OpenAI's move aligns with the EU's Digital Services Act requirements. ChatGPT, which reports around 900 million weekly active users, continues to face scrutiny over its content guardrails, especially after incidents linking the platform to mental health crises and teen suicides.
OpenAI is committed to refining its platform to better serve and protect all users. As the digital landscape evolves, implementing robust age verification continues to be a crucial step in fostering a safer online environment.