
OpenAI will soon allow adult users to access erotic content through ChatGPT. CEO Sam Altman announced the policy shift on Tuesday, framing it as part of the company's commitment to "treat adult users like adults." The feature, due to roll out in December, will be available only to verified adults who explicitly request such content.
Altman said on X, formerly known as Twitter, that the decision reflects the company's ongoing effort to balance user freedom with robust safety measures. "In December, as we roll out age-gating more fully and as part of our 'treat adult users like adults' principle, we will allow even more, like erotica for verified adults," he stated. The change follows earlier criticism of the company and its rollout of additional safety systems, such as parental controls and detection tools, which were introduced to address concerns about minors' access to the chatbot.
The move is a notable departure from the content rules OpenAI has maintained since ChatGPT's launch in 2022. The platform has historically banned sexual and erotic material outright, though a policy update in February hinted at a more permissive approach, aiming to "maximize freedom" while continuing to prohibit content involving minors.
“Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases,” Altman wrote, underlining the company’s confidence in its new safety protocols.
The adult-oriented mode will arrive alongside an updated version of ChatGPT that offers more customizable "personalities." "If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it – but only if you want it," Altman explained. The goal is to give users more control over how the chatbot behaves.
The move comes amid growing scrutiny of the company. In September, the Federal Trade Commission launched an investigation into OpenAI's data handling practices and potential mental health risks to minors. Separately, a California couple has filed a lawsuit claiming ChatGPT played a part in their teenage son's suicide.
In response to these concerns, OpenAI recently formed an eight-member expert council on "well-being and AI." The advisory group will regularly counsel the company on how artificial intelligence affects users' emotions and mental health, with the aim of keeping AI interactions "healthy" and supportive.
Altman's announcement has brought renewed attention to the company's strategic decisions. In an August interview, he pointed to OpenAI's decision not to build a sexualized AI avatar for ChatGPT as one made "for the world, but not for winning the AI race." That stance reflects OpenAI's broader pitch of responsible AI development amid growing ethical and regulatory scrutiny.