ChatGPT Will Soon Verify Your Age To Protect Teen Users

OpenAI is preparing to introduce age verification for ChatGPT, in a move CEO Sam Altman calls “a worthy tradeoff” between privacy and safety.

The update comes as the company faces growing scrutiny over how young users interact with the chatbot, including tragic cases where teens confided in ChatGPT about sensitive issues like mental health.

ChatGPT has quickly become a tool people rely on for everything from productivity to personal advice. But the rise of teen users has raised serious safety concerns. One high-profile lawsuit involves a teenage boy who took his own life after discussing suicide with the chatbot, a devastating reminder of the risks when AI and mental health intersect.

In response, OpenAI has reevaluated its safeguards. Altman explained in a blog post that the company faces a three-way tension between user freedom, privacy, and teen safety. Protecting all three equally isn’t always possible, so when the principles conflict, teen safety comes first.

How Age Verification Will Work

Instead of simply asking for a date of birth, OpenAI is building a new age prediction model that estimates a user’s age based on usage patterns. The system’s first priority will be distinguishing teens (13–17) from adults. From there, teen accounts will include extra protections and safety layers, while adults will retain full access.

The chatbot is officially for users 13 and up, but these new tools are meant to limit risks for younger audiences, even if that means adults may also need to prove they’re over 18.
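The decision flow described above can be sketched in a few lines. This is purely illustrative: OpenAI has not published how its age prediction model works, and every name, threshold, and field here is a hypothetical assumption. It only captures the logic the announcement implies: estimate an age band, give teen accounts extra safeguards, and when the model is unsure, default to the safer teen tier until the user proves they are over 18.

```python
# Hypothetical sketch of the gating logic implied by the article.
# None of these names, fields, or thresholds come from OpenAI.
from dataclasses import dataclass

ADULT_MIN = 18  # adults keep full access; under-18 accounts get safeguards


@dataclass
class AgePrediction:
    estimated_age: int   # assumed output of the age prediction model
    confidence: float    # assumed model confidence, 0.0 to 1.0


def account_tier(pred: AgePrediction, confidence_floor: float = 0.9) -> str:
    """Return 'adult' or 'teen' based on a predicted age.

    Safety-first default: if the model's confidence is below the floor,
    the account is treated as a teen account, mirroring the article's
    point that adults may need to prove they're over 18.
    """
    if pred.estimated_age >= ADULT_MIN and pred.confidence >= confidence_floor:
        return "adult"
    return "teen"


print(account_tier(AgePrediction(25, 0.95)))  # adult
print(account_tier(AgePrediction(16, 0.97)))  # teen
print(account_tier(AgePrediction(30, 0.40)))  # teen: low confidence, verify age
```

The notable design choice, under these assumptions, is the asymmetric default: a misclassified adult is inconvenienced into verifying their age, while a misclassified teen would lose the extra protections, so ambiguity resolves toward the teen tier.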

It’s not yet clear when age verification will roll out, but the update highlights a turning point: OpenAI is betting that prioritizing teen safety — even at the expense of user privacy — is the right move for the platform’s future.
