In a bold step to enhance user safety, OpenAI has announced the formation of an expert advisory council focused on mental well-being and the safe use of AI. The eight-member team will develop guidelines for healthy interactions with AI technology across age groups. The announcement came this week, coinciding with OpenAI CEO Sam Altman's statement on social media that the company has made strides in addressing "serious mental health issues" linked to AI use.
The council features professionals from prestigious institutions such as Boston Children’s Hospital's Digital Wellness Lab and Stanford’s Digital Mental Health Clinic, with expertise spanning psychology, psychiatry, and human-computer interaction. Their mission is to address the safety concerns that have emerged alongside the growing use of generative AI.
Sam Altman has also revealed upcoming changes to ChatGPT, including the introduction of more adult content options, even as OpenAI faces a wrongful death lawsuit alleging that ChatGPT played a role in a teenager's suicide. The move has stirred debate about the ethical implications of AI in sensitive areas like mental health.
“We remain responsible for the decisions we make, but we’ll continue learning from this council, the Global Physician Network, policymakers, and more, as we build advanced AI systems in ways that support people’s well-being,” OpenAI stated in a blog post.
Recent surveys, including one conducted by YouGov, illustrate significant public hesitation to embrace AI in mental health contexts: only 11% of Americans say they are open to the idea, and just 8% say they trust AI technology for mental health use, underscoring the skepticism that persists despite the tech industry's advances.
Amid these developments, federal and state regulators are scrutinizing the role of AI in mental health, especially as chatbot companions become more prevalent. Regulators are weighing concerns such as "AI psychosis," and some states have already banned AI chatbots marketed as mental health aids. In California, Governor Gavin Newsom has signed laws requiring AI companies to report their safety measures and to protect teenagers from inappropriate content, highlighting the legal and ethical challenges in this rapidly evolving field.