Federating Failure

Twitter’s got problems. I hope I don’t need to convince you of this. If you haven’t been the victim of harassment yourself, I hope you follow some people who have. They’re the ones who are challenging the frogs, gators, and eggs.

For me, there’s really no point in leaving Twitter for something else if it doesn’t also have those voices. I’m not saying that I wouldn’t chat with friends I already know in a smaller space, but if something is going to replace Twitter for me, it needs to be a space that includes people who don’t see any appeal in technology for technology’s sake. That means a replacement needs to scale, because it’s hard to be challenged by new and unexpected perspectives in a small, highly interconnected community.

Nerds love federated and distributed systems. They’re challenging to engineer and they play to the techno-anarcho-capitalist narrative of programming culture. They also tend to fail rather spectacularly at protecting individual users from actual malice. Spend five minutes and see if you can find someone with abhorrent, hateful views on the web. That distributed system will happily connect you to Holocaust deniers, genocide advocates, and all sorts of other nastiness. Write a controversial article under a woman’s name and include your email address; watch as the distributed system perfectly delivers all kinds of horrible invective to your inbox.

These are old systems, of course. In the 80s and 90s, we cared more about protecting the ability of anybody to say anything they wanted than protecting someone who’s just figuring out their identity from a phone full of demeaning push notifications. Is it really the case that any federated system will have this problem? Sadly, yes. It’s not about the mechanism; it’s about people. If the complaint is that Twitter hasn’t used the power it has to protect its users, then breaking up the network doesn’t solve that problem. It doesn’t empower the victims of harassment; it means that everyone who runs a server needs to use their authority well, and to agree on what that means.

What happens when the administrators of half the servers on the network decide deadnaming is abuse and the other half decide they won’t censor something that is (legally, but not morally) the truth? Suppose Google decides to turn Google++ into the newest member of your network, then approaches abuse as half-heartedly as Twitter has today. Who’s going to disconnect the eight-hundred-pound, well-intentioned but borderline incompetent gorilla? When somebody gets the bright idea to rewrite their server in C for performance and then gets thoroughly owned, will disconnecting them from the network after the fact help the person who suddenly receives a thousand messages from seemingly real people urging them to commit suicide? If the answer to any of these scenarios is just to filter the instances you don’t like, how are people who don’t follow the politics of the network supposed to know who to filter, and more importantly, what’s the point of federation then?

These are the questions I can think of off the top of my head. Protecting people who are most at risk of abuse requires more than just answering the easy questions. Those who have power over the network need to actively listen to those who are at risk and demonstrate an affirmative commitment to protecting them, because the next form of harassment might not look like what we know about today. This is a process that requires judgement, empathy, and humility, and it’s just too much to expect that this can be replicated across hundreds or thousands of different instances.

But frankly, if even the easy questions don’t have good answers at scale, then I’m not interested. If the space you’ve built is only nice as long as it’s populated by people who see some appeal in running their own server, then you’re just running away from the problem – and from the unexpected and challenging voices that you can find in a large network. That doesn’t sit right with me.