
OpenAI and Anthropic are making tweaks to their chatbots that they say will make them safer for teens. OpenAI has updated its guidelines on how ChatGPT should interact with users between the ages of 13 and 17, while Anthropic is working on a new way to identify whether someone might be underage.
On Thursday, OpenAI announced that ChatGPT’s Model Spec – the guidelines for how its chatbot should behave – will include four new principles for users under 18. With the update, OpenAI aims to have ChatGPT “put teen safety first, even when it may conflict with other goals.” That means guiding teens toward safer options when other user interests, like “maximum intellectual …
Read the full story at The Verge.
