By now, you might be clocking that somewhere behind closed doors, there's a bureaucratic effort underway to determine your age. Not because there's a surprise party in the works, but because a group of nine regulatory heavyweights believe a "safe" internet starts with flashing your ID – all in the name of online safety.
The initiative, called "age assurance," is about as lively as a corporate PowerPoint presentation, and every bit as dull in practice. It's a concoction of biometric data, document verification, and artificial intelligence, all rolled into a global effort under the Global Online Safety Regulators Network, known as GOSRN.
Launched in 2022, GOSRN is a coalition of regulators from nine nations, including the UK, France, Australia, and Fiji, that aims to align policies on age verification. Ireland's Coimisiún na Meán currently chairs the group. Committed to ensuring that nobody underage sneaks a peek at adult content, the network this month released a "Position Statement on Age Assurance and Online Safety Regulation."
"It’s like asking for a 'gentle' chainsaw," critics say. "They promise it will be accurate, fair, and non-intrusive. But that’s easier said than done."
Their blueprint advocates for cross-border standards, biometrics, official ID checks, and ultimately, the erosion of online anonymity in the name of child safety. But can a system that promises to be "accurate, reliable, fair, and non-intrusive" truly exist?
The pitch is all about protecting kids. Critics, however, warn that it looks more like a surveillance framework in disguise. The technology behind age assurance largely involves facial recognition, third-party credentials, and databases that hold your age information permanently.
Handing over your ID data opens the door to misuse, indefinite storage, and even marketing exploitation. Once deployed, these systems rarely stay limited to their initial purpose. The power to block access to certain content can easily expand to cover anything deemed "psychologically harmful" or "financially risky."
GOSRN’s plan for "interoperability" means your digital footprint – once scanned – could be shared across a network of platforms. The aim? To prevent companies from "forum shopping" to dodge stringent regulations.
Imagine telling someone back in 1996 about today’s internet – patrolled by a global safety committee ensuring you're old enough to watch a cooking show with adult language. It sounds absurd, yet here we are.
Ofcom, the UK's regulatory body, is already enforcing the Online Safety Act, investigating and fining websites for not meeting "highly effective age assurance" standards. This is no gentle suggestion – it's a regulatory mantra that could reshape the internet as we know it.
Dubbed "Safety by Design," this philosophy envisions an internet where every interaction is pre-approved and sanitized. It's a move some say could reduce the web to a bland corporate handbook, devoid of privacy and freedom of speech.
The real concern isn't just the tech's shortcomings or its paternalistic tone – it's the normalization of digital ID checks as a gateway to online life. Once in place, dismantling such an integrated system could prove impossible.
In this brave new world, anonymity becomes a relic reserved for the "suspicious." Real people, it seems, will be expected to log in with verified identities and behave in ways a committee deems suitable.
GOSRN may point to its commitment to human rights and democracy, yet its vague definition of "online harm" leaves ample room for subjective interpretation. Agreeing to identity-based age gates could mean signing away privacy in the belief that it's all "for the children."