(by Ashley Capoot, CNBC) – OpenAI on Tuesday said it is rolling out an age prediction model to its ChatGPT consumer plans to help the artificial intelligence company identify accounts that belong to users under 18 years old.
The model relies on a combination of account-level and behavioral signals, OpenAI said, including usage patterns over time, how long an account has existed, the typical times of day a user is active and the user's stated age.
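OpenAI has not disclosed how these signals are weighed against one another. Purely as an illustration of how such signals could be combined, here is a minimal, hypothetical sketch in Python; every signal name, weight and threshold below is invented for this example and has no connection to OpenAI's actual system.

```python
# Hypothetical illustration only: OpenAI has not published its model.
# A toy scorer that combines the signal types named in the article
# (stated age, account tenure, active hours, usage patterns) into a
# rough "likely under 18" estimate.

from dataclasses import dataclass

@dataclass
class AccountSignals:
    stated_age: int | None        # age the user entered at sign-up, if any
    account_age_days: int         # how long the account has existed
    active_hours: list[int]       # typical hours of day (0-23) the user is active
    weekday_daytime_ratio: float  # share of activity during weekday work hours

def likely_minor_score(s: AccountSignals) -> float:
    """Return a score in [0, 1]; higher means more likely under 18.

    The weights below are invented for illustration and bear no
    relationship to OpenAI's actual system.
    """
    score = 0.0
    if s.stated_age is not None and s.stated_age < 18:
        score += 0.5                      # self-reported age is a strong signal
    if s.account_age_days < 90:
        score += 0.1                      # newer accounts carry less history
    after_school = sum(1 for h in s.active_hours if 15 <= h <= 22)
    if s.active_hours and after_school / len(s.active_hours) > 0.7:
        score += 0.2                      # activity clustered after school hours
    if s.weekday_daytime_ratio < 0.2:
        score += 0.2                      # little weekday work-hours usage
    return min(score, 1.0)

if __name__ == "__main__":
    teen_like = AccountSignals(stated_age=15, account_age_days=30,
                               active_hours=[16, 17, 19, 20, 21],
                               weekday_daytime_ratio=0.1)
    print(f"score: {likely_minor_score(teen_like):.2f}")  # 1.00 here
```

A real system would presumably learn such weights from data rather than hard-coding them, but the sketch shows why combining several weak signals can work better than relying on a stated age alone.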
OpenAI has rolled out several new safety features in recent months as it faces mounting scrutiny over how it protects users, particularly minors.
The AI startup and other tech companies are facing an FTC probe into the potential negative effects of their AI chatbots on children and teenagers, and OpenAI is named in several wrongful death lawsuits, including one that centers on a teenage[r]…
OpenAI said that once its age prediction model suggests a user is under 18, ChatGPT will automatically apply protections designed to reduce exposure to “sensitive content.”…
If users are incorrectly identified as under 18, they will be able to use the identity-verification service Persona to restore their full access.
Persona is used by other tech companies, including Roblox, which has also faced pressure from lawmakers to better protect children on its platform.
In August, OpenAI said it would release parental controls to help parents understand and shape how their teens use ChatGPT. The following month, OpenAI rolled out those controls and said it was working to build an age prediction system.
In October, the company also convened a council of eight experts who will provide insight into how AI could affect users’ mental health, emotions and motivation.
OpenAI said Tuesday that it will continue to improve the accuracy of its age prediction model over time.
The model will roll out in the European Union in the coming weeks “to account for regional requirements.”
Published at CNBC on Jan. 20, 2026. Reprinted here for educational purposes only. May not be reproduced on other websites without permission.
Questions
1. What is the purpose of ChatGPT’s age prediction model?
2. How will the age prediction model identify underage users?
3. For what reasons has the company taken the step to try to identify users under 18?
4. a) What will ChatGPT do if it determines the user is underage?
b) How can a user who was mistaken for underage change their status?
c) Do you think OpenAI is doing enough to protect kids and teens from the harmful effects associated with using ChatGPT? Explain your answer.
5. What previous action did OpenAI take in October?
6. The age prediction model will roll out in the EU in the coming weeks – there is no mention of when it will be implemented in the U.S.
a) Do you think the age prediction model is necessary? Explain your answer.
b) Do you think it will work to protect teens? Explain your answer.
7. AI companies like those behind ChatGPT have a moral responsibility to protect users from dangerous interactions primarily because they create and deploy powerful technologies that can profoundly influence vulnerable people’s thoughts, emotions, and actions — especially when users treat the AI as a confidant, advisor, or companion.
Developers knowingly build systems capable of generating persuasive, empathetic, or reinforcing responses that can amplify harmful ideas (like self-harm, delusions, or risky behaviors) in at-risk individuals, such as teens or those in mental health crises. Foreseeable harms — backed by real incidents linked to chatbot interactions — make prevention a basic ethical duty, akin to engineers’ obligation to design safe bridges or planes rather than ignoring collapse risks.
(from a Jan. 20 Grok prompt “explain in a few sentences why AI companies like ChatGPT have a moral responsibility to protect users”)
Do you think CEO Sam Altman is taking his moral/ethical responsibility seriously? Explain your answer.
Background
In August 2025, the Associated Press reviewed more than three hours of interactions between ChatGPT and researchers posing as vulnerable teens. The chatbot typically provided warnings against risky activity but went on to deliver startlingly detailed and personalized plans for [harmful behavior].
The researchers at the Center for Countering Digital Hate also repeated their inquiries on a large scale, classifying more than half of ChatGPT’s 1,200 responses as dangerous.
Chatbots affect kids and teens differently than a search engine because they are “fundamentally designed to feel human,” said Robbie Torney, senior director of AI programs at Common Sense Media, which was not involved in the AP’s report.
Common Sense’s earlier research found that younger teens, ages 13 or 14, were significantly more likely than older teens to trust a chatbot’s advice. (From an AP report Aug. 5, 2025 on NY1 News)