Matthew Green published the second part of his series about anonymous credentials. Seems like this is one of those writing projects that is just more of a chore than expected. It was months between part 1 and part 2, and apparently it took much longer to get part 1 off the ground. So any relation to current events is coincidental.
But also, unfortunate. Over the last several years there has been a growing push around the world to deanonymise the Internet. We’re all supposed to now be held accountable for what we say online, and some of us are to be prevented from saying anything at all. There are many new laws on the books in this vein, with Australia’s social media ban for under-16s being an especially groundbreaking example. But even Australia is only important as a trailblazer. The announcement just a few days ago of a single standard EU age verification app is much more significant in practical impact.
This is because the EU is a large enough market to force behavioural changes in large tech firms. If Australia implements a government age verification service, very few large firms will devote serious resources to supporting it. But in the European market, it’s worth spending money to make money. More than that, the EU is in practice the only jurisdiction that combines technical skill and political will competently enough to make meaningful technology laws. There’s pretty much no-one else stepping up here. Eventually even the USA will by default end up copying this precedent.
The EU proposal is bad in ways subtle enough that it could well settle in as an inescapable part of the digital landscape. It uses government ID as the foundation for accessing services. This excludes people without ID, in particular refugees. That, of course, is explicitly in line with the anti-refugee agenda of current European politics. But it also fundamentally doesn’t work for any immigrant such as myself, who can’t always rely on a connection to one particular country. If my ID is taken away, my access to services would disappear too. That also is fully in line with global trends to enforce strict national loyalty.
It is, however, open source, and appears to conform to open standards. It does explicitly take anonymity as a core aspiration. These are things we want to encourage. So it’s hard to argue against. And it doesn’t help much if respected cryptographers pop up with long technical descriptions saying “age verification? what a fun project! we can totally do that.”
Meanwhile, there’s the usual crowd of cringeworthy techno-anarchists loudly rejecting any attempt to rein in the bad actors online. Technical solutions that get proposed, such as blocklists, are based on the assumption that it’s up to individuals to build their own defenses. This empathy-free approach just makes you look like an asshole. When the debate is shoe-horned into the “both sides” narrative, any neutral observer is bound to sympathise with the reasonable-sounding side.
Between all these voices, I find it hard to formulate an objection to online age verification that is both concise and convincing. So instead, I want to describe an online accountability mechanism that I would support.
Principles
I’d lay out the design principles as follows:
- Individual people must be permitted to have multiple identities. Without this, the digital world becomes stifling and constraining – and for many people, dangerous.
- An identity must originate from an individual making a conscious choice. When society creates an identity for you, that is always disempowering.
- Bad behaviour from an identity must impose a significant cost on the real individual behind it.
- No-one must ever be permanently excluded from participating online.
Reputation
The difficulty comes when imagining a mechanism to impose significant costs on bad actors, when those actors can always just create a new identity.
The answer should come down to the concept of reputation. Simply put, the default behaviour for most services should be to prevent an unknown user from doing anything damaging. For example, you can like posts but not reply. You can start threads, but only after review from a moderator. You can only follow people who follow you. That sort of thing. Once you’ve built up enough “reputation points”, these restrictions can gradually fall away. Time is an important factor here. Simply requiring active, problem-free participation over years makes it very hard for people with bad intent to break in.
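The idea above can be sketched in a few lines. This is a minimal illustration, not a real site's policy: the action names, point thresholds, and time requirements are all assumptions made up for the example.

```python
# A toy sketch of reputation-gated permissions: each action requires both a
# minimum reputation and a minimum account age, so points alone can't be
# rushed. All thresholds here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class User:
    reputation: int   # points earned through problem-free participation
    days_active: int  # time is a factor in its own right

# (minimum reputation, minimum days active) required for each action
THRESHOLDS = {
    "like":         (0, 0),       # anyone can like posts
    "reply":        (50, 30),     # replying needs some history
    "start_thread": (200, 180),   # starting threads needs more
    "unmoderated":  (1000, 365),  # posting without review takes a year
}

def may(user: User, action: str) -> bool:
    min_rep, min_days = THRESHOLDS[action]
    return user.reputation >= min_rep and user.days_active >= min_days
```

Note that both conditions must hold: a user who farms points quickly still has to wait out the clock before the heavier restrictions fall away.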
The problem here is that the Internet is big, and I like it that way. I want to participate in hundreds of communities, and I only have good intentions to all of them. But it’s too much to expect years of commitment to each of those.
Instead, I should be able to borrow my reputation from one site and use it to jump the queue on another site. I should be able, at my discretion, to reveal that this Matthew Exon is the same Matthew Exon who is a respected and supportive member of, for example, my local gardening club. This requires some digital certification that includes membership duration, number of comments, number of complaints, etc., that I can use as a credential on other sites.
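Such a certification could look something like the following sketch: the issuing site signs a summary of my standing, and another site verifies the signature before waiving its newcomer restrictions. This is only an illustration of the shape of the data; a real deployment would use public-key signatures or anonymous-credential schemes rather than the shared HMAC secret used here, and the claim fields are made up.

```python
# A toy reputation credential: a signed summary of standing on one site,
# verifiable by another. The HMAC shared secret is purely illustrative;
# real systems would use public-key or anonymous-credential cryptography.
import hashlib
import hmac
import json

def issue_credential(site_key: bytes, claims: dict) -> dict:
    """The home community signs a summary of the member's standing."""
    payload = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(site_key, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": tag}

def verify_credential(site_key: bytes, cred: dict) -> bool:
    """Another site checks the signature before trusting the claims."""
    payload = json.dumps(cred["claims"], sort_keys=True).encode()
    expected = hmac.new(site_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, cred["signature"])

# The gardening club vouches for my standing...
key = b"gardening-club-secret"
cred = issue_credential(key, {
    "member_since_days": 1400,
    "comments": 310,
    "complaints": 0,
})
```

Any tampering with the claims invalidates the signature, so the receiving site can trust the membership duration and complaint count without contacting the issuer for every field.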
Of course, the downside of this is that it allows sites to collaborate. If one of them sells their data to an advertiser, they can say that I am both a gardener and a skydiver. This fine-grained segmentation is exactly what marketers want, and exactly what I don’t want them to have.
What’s more, this kind of cross-certification is only useful between communities that are reasonably aligned. If I’m going to start posting on a support network for the local queer community, they want to ensure that I’m not going to harass, stalk, or troll their members. If I show up with a certification that I am a long-standing member of truth.social, that only makes it more likely that I am a bad actor.
So this mechanism has to be very selective. I probably want to have several online personas, and I want to keep them carefully segregated between sites. I might want those personas to overlap, for example, my professional persona and my family persona might independently need accounts on PayPal to pay for things. But I would not want PayPal to be able to correlate those, because I’m sure they would sell that information to advertisers. Meanwhile, I would expect sites to only allow borrowing reputation from a curated list of other sites. This is not something that would be invoked automatically. It would happen rarely, and only when explicitly desired both by myself and the service I’m using.
Children
How does this apply to age verification? Let’s take as our premise that under-16s really should not be permitted to see posts on our social media site. In that case the site should be locked down. If you turn up to the site as a new user, you can’t create an account and you can’t view any posts.
To create an account, one mechanism could be to get a testimonial from an existing member. That member would certify that you are over 16. If it subsequently turns out that you are under 16, that other member would face having their own account banned. So it might be difficult for you to get an account! Any member would be taking on a significant risk by providing you an invitation. On the other hand, any existing member would have a strong incentive to bring all their friends into the community. Remember that Facebook used a similar mechanism in their early days. Clearly this is enough openness to allow a site to be successful.
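The incentive structure of that testimonial mechanism can be made concrete with a small sketch. Everything here is an illustrative assumption about how such a scheme might be wired up, not a description of any existing system.

```python
# A toy sketch of invitation-by-testimonial: an existing member stakes their
# own account on an invitee's age claim. If the claim was false, both
# accounts are banned. Names and structure are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Member:
    name: str
    banned: bool = False
    sponsor: "Member | None" = None
    vouched_for: list = field(default_factory=list)

def invite(sponsor: Member, newcomer_name: str) -> Member:
    """Sponsor certifies the newcomer is over 16 and takes on the risk."""
    newcomer = Member(newcomer_name, sponsor=sponsor)
    sponsor.vouched_for.append(newcomer)
    return newcomer

def report_underage(member: Member) -> None:
    """The claim turns out to be false: member and sponsor both lose out."""
    member.banned = True
    if member.sponsor is not None:
        member.sponsor.banned = True
```

The point of the shared ban is the deterrent: vouching is cheap for people you actually know, and expensive to extend to strangers.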
But we should not allow the Internet to exclude those who are socially isolated – these are exactly the people the Internet can and should benefit the most. Allowing users to borrow their reputations from other sites provides one important mechanism to open a community beyond natural cliques. It is reasonable to ask someone who has never used the Internet before to spend a year or so building up reputation in a safe sandpit, before they start exploring the darker corners. This is necessary anyway to build up a sense of what behaviour is appropriate and what is dangerous.
These sandpits would be open to anyone of any age, entirely anonymously. But they would also allow a user to assert that they are above a particular age. This is just a claim, with no proof required to back it up. But if it turns out the user is lying, there goes their reputation. And there are many ways a young user can slip up that can be noticed. It’s hard to prove someone is mature during a 30 second test. Keeping up the pretense over the course of a year is much harder. A user could join at 15, claim to be 15, and after a year have a credential that they are 16.
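The arithmetic of a maturing claim is simple enough to sketch. The claim is never verified directly; it just ages alongside the account, and lying forfeits the reputation built up. The function below is a hypothetical illustration, not part of any real credential scheme.

```python
# A toy sketch of an unverified age claim maturing inside a sandpit
# community: the age the site will attest to is simply the claimed age
# at joining plus the years that have passed since.
from datetime import date

def attested_age(claimed_age_at_join: int, joined: date, today: date) -> int:
    """Age the sandpit will attest to now, given the age claimed on joining."""
    years_elapsed = (today - joined).days // 365
    return claimed_age_at_join + years_elapsed
```

So a user who joins at 15 and claims 15 can, a year later, carry away a credential asserting they are 16, with no document ever having been checked.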
But what if the user was really 14 when they claimed to be 15? The answer is – they’ll probably get away with it! There’s still a risk they might get caught out, e.g. when discussing which subjects they are studying at school. And if you’re trying to invest a whole year building up a reputation, so that you can leverage that reputation for decades on social media, is it really worth lying? For some kids it certainly will be, and in some of those cases that’s because the user really needs the support of the adults they are reaching out to. It’s all about being reasonable. Most kids won’t bother trying to run an elaborate long con. And so our policy of banning social media for under 16s will mostly achieve its goals. But we’ve achieved this without imposing heavy-handed, bureaucratic, arbitrary rules on everyone.
That was the second way a new user can prove they are 16. Finally, there’s a third way – use the EU digital identity app! Our social media site can simply choose to allow that as one alternative mechanism. Wasn’t I complaining that this app is a privacy nightmare? Yes, it would be, if it was the primary way to prove your authenticity online. And it’s pretty clear that is how it is intended to be used. But if the usual way to sign up to a service is an invite or a borrowed reputation, then using government ID as an alternative for young people just getting started in the world is an entirely reasonable choice.
Keep in mind that our social network is somewhat interested in not doing harm to children, but it is far more interested in keeping out trolls and stalkers. If the government says you’re a real person and are over 16, that doesn’t tell us anything very interesting. We’d probably still want to limit what you can do on our site, including preventing you posting publicly or sending DMs. You’d need to spend some more time building up a reputation first. That’s a delay you could skip if you got an invite or borrowed your reputation from another site. So the government ID approach could naturally evolve into a relatively disfavoured method, unused by the vast majority of users. And with that status, it could be very valuable as a safety net for those with no other options, rather than a standard everyone is forced to accept.
The last of the fundamental principles I laid out is inclusivity. Everyone must be given the opportunity to bootstrap an identity with a positive reputation. This not only includes young people coming online for the first time, but also people who have actively ruined their reputation and want a fresh start. It also must include people excluded from mainstream society – refugees, the dispossessed, the abused and fearful. No system that adopts government ID as its foundation can be inclusive in this way.
Rather, the system of identity should be free-standing. Its structural integrity should derive entirely from the connections within itself. But it should provide a variety of mechanisms to bootstrap access in the first place. Some of those mechanisms should rightfully be provided by our democratically-elected governments. But we should not strive for anything resembling the rigidity of the formal legal system. We can and should do better than that.
Existing Law
How does this approach compare to the law Australia actually passed? It’s hard to say! One of the biggest problems with the law is that it’s extremely unclear how it will be interpreted and applied.
The basic idea, that under-16s shouldn’t be using Instagram, is something that we might accept. The law as it stands asks Instagram to take reasonable measures to ensure under-16s don’t sign up. In principle those reasonable measures could be exactly what I described.
In practice though, Meta would be taking on a huge risk by relying on invitations or borrowed reputation. The potential fines are huge. They would be stuck in court arguing about these complex mechanisms and the relative incentives of every participant. They would also be forced to admit that they know full well that some kids will easily slip through the net.
The alternative is for Meta to insist on knowing everything about everyone, and normalising this as just part of the way society works now. As it happens, that also aligns with its commercial interest as an advertiser. They can use formal technical mechanisms aimed at verifying each individual account holder’s identity directly. And if kids are finding it absurdly easy to work around age verification, Meta can simply say they are breaking the law and so it’s out of its hands.
It seems that the less effective Meta’s attempts to protect kids are, and the more damage they do in practice, the more likely they are to stand up in court.
Developmental Harm
Meanwhile, the law as it stands forbids kids from participating in any kind of social media at all until they are 16, regardless of content or moderation policies. And this is one of the most important problems with the legislation. Kids should have a well-moderated space in which to express themselves. I’ve heard many good arguments for this. Under-16s still have political opinions, especially about the problems they are about to inherit, and their voices should be heard and respected. Everyone wants to communicate with their peers; if we try to prevent kids talking to each other, they will only find more subversive and dangerous ways. And cutting off social media damages kids excluded from the mainstream, such as refugees or queer kids.
But I think another significant point is that kids should have a place to learn the social skills they will need as adults. That’s one of the important roles of public education today. Kids should be learning how to socialise with each other in the playground, with trusted adults available as a safety mechanism and to establish the basic norms. But face-to-face interaction is very different to online interaction, and the latter is now an inescapable part of adult life. We shouldn’t expect kids to dive into the deep end when they hit 16 without ever having stepped into the paddling pool. And honestly, I don’t really want such 16 year-olds turning up in my social media feed either. Personally, I started using social media at 17, and I do regret that some of the choices I made at the time are a permanent part of Internet history.
A better idea is to have a very well-moderated social media network for kids. This might be limited to one school, or perhaps a catchment area, to provide some shared context and limit the number of people to a manageable crowd. It would require round-the-clock moderation, which is an expense, not a burden you could place on teachers. But the point is that social media exists in the world, and kids shouldn’t be expected to face that on their own. It should be one of the duties of our society to provide support where kids need it.
But notice, while I’ve framed this idea as something for kids to use, it could equally well apply to adults. The technical costs of running a social media site are trivial, and it really would be good bang-for-buck for a government department to provide this as a public service. But the true cost is human moderation. Perhaps children are more at risk than adults, and need this help more urgently, but we all need professionals keeping us safe from the worst of the Internet. I certainly need defense against spammers and scammers. The American big tech companies resolutely refuse to provide that for free. But we are not helpless, we could provide those services for ourselves.
The Australian legislation ignores every social media problem except the effect on children, and by providing a blanket ban it forbids providing any support to this group. And that’s the problem I can’t reconcile myself to. I also expect this to be the model adopted by every other jurisdiction. And that will only make the already dire state of social media even worse.