OpenAI is rolling out age prediction for ChatGPT

Daily News Article   —   Posted on January 21, 2026

(by Ashley Capoot, CNBC) – OpenAI on Tuesday said it is rolling out an age prediction model to its ChatGPT consumer plans to help the artificial intelligence company identify accounts that belong to users under 18 years old.

The model relies on a combination of account-level signals and behavioral signals, OpenAI said. Some of those signals include usage patterns over time, how long an account has existed, the typical times of day when a user is active and the user’s stated age.

OpenAI has rolled out several new safety features in recent months as it faces mounting scrutiny over how it protects users, particularly minors.

The AI startup and other tech companies are facing a probe from the FTC over how their AI chatbots may negatively affect children and teenagers, and OpenAI is named in several wrongful death lawsuits, including one that centers on a teenager…

Once OpenAI’s age prediction model suggests that a user is under 18, OpenAI said ChatGPT will automatically apply protections designed to reduce exposure to “sensitive content.”…

If users are incorrectly identified as under 18, they will be able to use the identity-verification service Persona to restore their full access.

Persona is used by other tech companies, including Roblox, which has also faced pressure from lawmakers to better protect children on its platform.

In August, OpenAI said it would release parental controls to help parents understand and shape how their teens are using ChatGPT. The following month, OpenAI rolled out its parental controls and said it was working to build an age prediction system.

In October, the company also convened a council of eight experts who will provide insight into how AI could affect users’ mental health, emotions and motivation.

OpenAI said Tuesday that it will continue to improve the accuracy of its age prediction model over time.

The model will roll out in the European Union in the coming weeks “to account for regional requirements.”

Published at CNBC on Jan. 20, 2026. Reprinted here for educational purposes only. May not be reproduced on other websites without permission.



Background

In August 2025, the Associated Press reviewed more than three hours of interactions between ChatGPT and researchers posing as vulnerable teens. The chatbot typically provided warnings against risky activity but went on to deliver startlingly detailed and personalized plans for [harmful behavior].

The researchers at the Center for Countering Digital Hate also repeated their inquiries on a large scale, classifying more than half of ChatGPT’s 1,200 responses as dangerous.

Chatbots affect kids and teens differently than a search engine because they are “fundamentally designed to feel human,” said Robbie Torney, senior director of AI programs at Common Sense Media, which was not involved in the AP's report.

Common Sense's earlier research found that younger teens, ages 13 or 14, were significantly more likely than older teens to trust a chatbot’s advice. (From an AP report Aug. 5, 2025 on NY1 News)