A new report from AI Forensics (AIF) has shed light on serious deficiencies in AgeVerif, an AI-powered age-verification tool currently used by adult content platforms in France. The report, released on October 28, scrutinizes how well AgeVerif performs legally required age checks and uncovers a pattern of bias, high error rates, and potential conflicts of interest.
AgeVerif, which launched in 2020, uses AI biometric analysis to check that users are of legal age, ostensibly offering compliance with regulations aimed at shielding minors from adult content. However, AIF's findings suggest the software could let up to 10% of minors slip through its checks. This margin of error is attributed partly to the system's reliance on a dataset in which only 7% of images feature minors, leaving it with comparatively few examples of underage faces to draw on.
The discrepancies in AgeVerif's performance don't stop there. The study also highlights a stark racial bias in the system's accuracy: a 27% misidentification rate for Black minors compared with 11% for white minors. Such disparities point to the broader problem of uneven accuracy across skin tones in AI facial-analysis systems.
“We are strongly questioning the deployment of existing 'age verification' solutions on an even broader scale,” stated Paul Bouchaud, AIF's lead computational researcher.
More concerning still is how easily the software can be bypassed: simple code run in the browser can defeat the checks, raising questions about the system's real-world efficacy. AIF also identified a conflict of interest, noting that one of AgeVerif's directors has ties to the very pornography companies whose sites the software is supposed to police.
The European Union views age verification as essential for protecting minors online, with the 2022 Digital Services Act requiring such measures for platforms operating within the EU. Individual member states such as France are also crafting their own legal frameworks, slated for 2024. However, as Bouchaud points out, there is still no certification process for age-verification providers.
Despite these setbacks, more robust systems that align with EU principles are in development, though unresolved issues persist. AIF stresses the importance of addressing these challenges, noting that workarounds such as VPNs and ad-blockers further undermine the effectiveness of current solutions.
Meanwhile, the Danish presidency of the Council of the EU is pushing for age-verification clauses within the Child Sexual Abuse (CSA) Regulation and Directive, advocating for technology that respects privacy and avoids discrimination. Civil society groups, however, continue to raise concerns about user privacy and the ethics of such checks.