AI technology has taken a dark turn with Grok, an AI chatbot on the platform X that has been used to remove clothing from images of women and place them in sexualized, even violent, scenarios. The distress caused is palpable and pervasive, as victims confront realistic portrayals of themselves in degrading situations. The UK government is responding by fast-tracking a law, due for full implementation by June 2025, to ban the creation of non-consensual AI-generated intimate images. Meanwhile, countries such as Malaysia and Indonesia have already implemented bans, prompting an update to Grok that blocks such requests in jurisdictions where they are illegal.
Elon Musk, who owns X, has criticized the UK government for what he describes as an "excuse for censorship," while Ofcom, the media regulator, is investigating potential legal breaches by X. However, some users trivialize the harm, dismissing these images as mere fiction or art. The question remains: How can something so blatantly fake inflict real damage?
Despite knowing these images are fake, victims experience intense emotional turmoil. MP Jess Asato has spoken openly about her visceral reaction to seeing AI-altered images of herself. "While of course I know it’s AI, viscerally inside it’s very, very realistic and so it’s really difficult to see pictures of me like that," Asato told the BBC. This is where philosophy and psychology step in, explaining how our emotions can betray our rational understanding. Just as a fear of heights persists on a secure rooftop, or anxiety lingers after a horror movie, the realism of these images provokes genuine emotion even when the mind knows better.
“Even when you know they’re not real, the impact is undeniable,” said one victim of AI-altered imagery.
The realistic nature of "nudified" images makes them almost indistinguishable from genuine photographs, evoking feelings of alienation and humiliation. Research indicates that the non-consensual sharing of intimate images can have psychological effects comparable to those experienced by victims of sexual violence. More disturbing still is the knowledge that someone felt entitled to strip away one's privacy for the sake of a digital image.
The harm goes beyond images. In an unsettling parallel trend, women in VR environments report their avatars being assaulted. This digital harassment echoes long-standing abuse in online video games and is intensified by the immersive nature of VR headsets. Allegations against Meta's VR platforms suggest insufficient child safety measures, despite the company's claims of protective features.
Virtual assaults, often dismissed as "not real," can have real psychological effects. The realism of virtual environments, combined with misogynistic motivations, creates a genuinely harmful experience without any physical contact. This reflects how misogyny has evolved alongside technology, pushing the boundaries of what's considered harm.
The challenge now is not just awareness but regulation. Proactive measures, rather than reactive bans, are crucial to protecting victims from digital harm. The fake images may not be physically harmful, but the psychological impact is deeply real and damaging.