
Meta is facing a wave of backlash after a Reuters investigation uncovered unauthorized AI chatbots on its platforms that impersonated celebrities like Taylor Swift, engaging users in sexual conversations and generating intimate images. The chatbots operated in violation of Meta's own guidelines, putting the company under intense scrutiny.
According to the report, published on August 29, a Meta product leader was behind at least three celebrity-themed chatbots, including two "parody" accounts of Taylor Swift. The bots amassed over 10 million interactions before Meta took them offline. "Maybe I’m suggesting that we write a love story … about you and a certain blonde singer. Want that?" one of the Taylor Swift bots reportedly messaged, raising questions about ethical AI use.
“We permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate or sexually suggestive imagery,” said Meta spokesman Andy Stone, acknowledging the oversight.
The unauthorized chatbots, which also included impersonations of Scarlett Johansson, Anne Hathaway, Selena Gomez, and 16-year-old actor Walker Scobell, have sparked significant concern over celebrity safety and privacy. Notably, the Scobell chatbot produced an inappropriate image when asked for a beach photo, illustrating the potential dangers of AI misuse.
Duncan Crabtree-Ireland of the actors' union SAG-AFTRA has voiced concern about the safety risks posed to celebrities. "We’ve already seen a history of people who are obsessive toward talent and of questionable mental state," he warned, noting that such bots could exacerbate existing security issues. Legal experts further suggest the impersonations may infringe state publicity rights, with Stanford’s Mark Lemley noting the bots likely crossed legal lines.
In response to the fallout, Meta has removed roughly a dozen celebrity bots and faces mounting pressure to tighten its policies. The revelations have had a tangible impact on Meta's stock, which dropped over 12% following the report. The scandal also follows earlier reports that Meta's AI guidelines permitted "romantic" and "sensual" chats with minors, prompting a temporary policy shift restricting teen access to certain AI chatbots.
While Meta has yet to comment on the specific removals, the incident underscores the urgent need for robust ethical standards and effective policy enforcement in AI products. As the company navigates this terrain, stakeholders will be watching closely to see how it ensures a safer online environment going forward.