Meta finds itself in the spotlight, facing serious allegations from Strike 3 Holdings, a producer of adult films. The lawsuit claims that Meta used the studio's videos as training data for its AI tool Movie Gen and seeks $350 million in damages. Meta firmly denies these allegations, asserting that any video downloads by employees were for personal use and not linked to AI development.
The tech giant says it has stringent policies against using adult content in AI training. According to Meta, roughly 22 adult videos were downloaded from its network each year, a figure it attributes to individual user behavior rather than company policy or AI training practices. A Meta spokesperson emphasized, "We don’t want this kind of content and work to avoid it."
Meta has asked the US court to dismiss the lawsuit, citing a lack of evidence that its AI models were trained on adult content. The legal tussle comes as Meta grapples with broader challenges concerning its AI systems, which have reportedly generated unsafe or inaccurate outputs; in response, the company has tightened its rules around AI development.
Earlier this year, Meta secured a legal victory in a related case, with a US court upholding the company's use of books for training its Llama AI models as fair use under US law. This ongoing legal scrutiny highlights the complexities of AI training practices in the tech industry.