Daily News Article - January 21, 2026
1. What is the purpose of ChatGPT's age prediction model?
2. How will the age prediction model identify underage users?
3. For what reasons is the company trying to identify users under 18?
4. a) What will ChatGPT do if it determines the user is underage?
b) How can a user who is mistakenly identified as underage correct their status?
c) Do you think OpenAI is doing enough to protect kids and teens from the harmful effects associated with using ChatGPT? Explain your answer.
5. What previous action did OpenAI take in October?
6. The age prediction model will roll out in the EU in the coming weeks; there is no mention of when it will be implemented in the U.S.
a) Do you think the age prediction model is necessary? Explain your answer.
b) Do you think it will work to protect teens? Explain your answer.
7. "AI companies like those behind ChatGPT have a moral responsibility to protect users from dangerous interactions primarily because they create and deploy powerful technologies that can profoundly influence vulnerable people's thoughts, emotions, and actions — especially when users treat the AI as a confidant, advisor, or companion.
Developers knowingly build systems capable of generating persuasive, empathetic, or reinforcing responses that can amplify harmful ideas (like self-harm, delusions, or risky behaviors) in at-risk individuals, such as teens or those in mental health crises. Foreseeable harms — backed by real incidents linked to chatbot interactions — make prevention a basic ethical duty, akin to engineers' obligation to design safe bridges or planes rather than ignoring collapse risks."
(Grok's Jan. 20 response to the prompt "explain in a few sentences why AI companies like ChatGPT have a moral responsibility to protect users")
Do you think CEO Sam Altman is taking his moral/ethical responsibility seriously? Explain your answer.