Spurred on by the problems at Twitter, a lot of my social media timeline has “moved out” of Twitter/“X” onto what people mostly describe as Mastodon or the “fediverse”. The fediverse is the collection of decentralised “instances” (aka servers), sometimes running different software, but all speaking a common protocol called ActivityPub.
Since people are now posting social media updates on a system that is a lot more decentralised than Twitter ever was, I was interested in finding out where these instances are hosted, to see just how decentralised it really is!
To get any kind of numbers, we first need to know where all of the instances are. Luckily, Mastodon instances have a useful endpoint that will tell you all of the other instances a given instance knows about (known as its “peers”).
$ curl -s 'https://xxx/api/v1/instance/peers' | jq . | wc -l
12294
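Each instance only knows about its own peers, so to build a fuller list you can crawl outwards from a seed instance. A rough sketch of such a crawl in Go (the seed instance and the stopping condition here are arbitrary illustrations, not necessarily what I used):

// A rough sketch of a peers crawl: start from a seed instance and walk the
// /api/v1/instance/peers endpoint breadth-first. Rate limiting, timeouts,
// and politeness are all elided for brevity.
package main

import (
    "encoding/json"
    "fmt"
    "net/http"
)

func peers(host string) ([]string, error) {
    resp, err := http.Get("https://" + host + "/api/v1/instance/peers")
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()

    var out []string // the endpoint returns a plain JSON array of hostnames
    return out, json.NewDecoder(resp.Body).Decode(&out)
}

func main() {
    seen := map[string]bool{"mastodon.social": true} // arbitrary seed
    queue := []string{"mastodon.social"}

    for len(queue) > 0 && len(seen) < 50000 {
        host := queue[0]
        queue = queue[1:]

        list, err := peers(host)
        if err != nil {
            continue // dead instance, or software without this endpoint
        }
        for _, p := range list {
            if !seen[p] {
                seen[p] = true
                queue = append(queue, p)
            }
        }
    }
    fmt.Println("found", len(seen), "instances")
}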
Now we can look up their DNS records (in this case, a single A record per instance), match the resulting addresses to an AS number, and then, using the bgp.tools naming export, rank them by hosting provider name:
Here we can see a large market share held by OVH, Cloudflare, Hetzner, and DigitalOcean.
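For reference, the per-instance lookup step could be sketched like this (the whois interface on bgp.tools port 43 and its pipe-separated reply format are assumptions based on bgp.tools’ public documentation, so double-check before relying on it):

// A minimal sketch of the lookup step: resolve an instance's A record, then
// ask bgp.tools which AS announces that address.
package main

import (
    "bufio"
    "fmt"
    "log"
    "net"
)

func main() {
    host := "example.social" // hypothetical instance

    ips, err := net.LookupIP(host)
    if err != nil || len(ips) == 0 {
        log.Fatalf("no address for %s: %v", host, err)
    }

    // Query bgp.tools' whois server for the first address.
    conn, err := net.Dial("tcp", "bgp.tools:43")
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    // One IP per line; the reply is roughly
    // "ASN | IP | Prefix | CC | Registry | Allocated | AS Name".
    fmt.Fprintf(conn, "%s\n", ips[0])

    sc := bufio.NewScanner(conn)
    for sc.Scan() {
        fmt.Println(sc.Text())
    }
}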
However, this does not show the full picture, because instances vary wildly in size: some have large numbers of users (50k+), while others exist for a single person or company.
So we can use another common endpoint on those instances, called nodeinfo:
$ curl -s 'https://xxxx/nodeinfo/2.0' | jq .
{
  "version": "2.0",
  "software": {
    "name": "mastodon",
    "version": "4.1.4"
  },
  "protocols": [
    "activitypub"
  ],
  "services": {
    "outbound": [],
    "inbound": []
  },
  "usage": {
    "users": {
      "total": 187,
      "activeMonth": 146,
      "activeHalfyear": 186
    },
    "localPosts": 11491
  },
  "openRegistrations": true,
  "metadata": {}
}
This response gives us the total number of users on an instance and, depending on the software on the other side, also how many of those users are still active!
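Pulling the interesting numbers out of that response is straightforward; a minimal sketch, assuming the nodeinfo 2.0 schema shown above:

// A small sketch of extracting user counts from a nodeinfo 2.0 response.
package main

import (
    "encoding/json"
    "fmt"
    "net/http"
)

type nodeInfo struct {
    Software struct {
        Name string `json:"name"`
    } `json:"software"`
    Usage struct {
        Users struct {
            Total       int `json:"total"`
            ActiveMonth int `json:"activeMonth"`
        } `json:"users"`
    } `json:"usage"`
}

func main() {
    resp, err := http.Get("https://mastodon.social/nodeinfo/2.0") // example instance
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()

    var ni nodeInfo
    if err := json.NewDecoder(resp.Body).Decode(&ni); err != nil {
        panic(err)
    }
    fmt.Printf("%s: %d users (%d active this month)\n",
        ni.Software.Name, ni.Usage.Users.Total, ni.Usage.Users.ActiveMonth)
}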
Using that data we can get a much better picture of where all of the users really are:
Here we can see that Fastly and Cloudflare make up over 50% of the entire fediverse network. Except this is not the whole story: Fastly and Cloudflare are (typically) reverse-proxy CDN services that people put in front of their sites for speed and/or DDoS mitigation, so we are not actually seeing where the data is stored, only who is fronting the site itself.
So we will have to get creative to uncover the true hosting providers, by convincing the fediverse to talk to us!
ActivityPub is a reasonably easy spec to half implement. What we want is to send something to the frontend endpoint (which may or may not be behind Cloudflare or Fastly) and cause the instance to need to fetch something from us!
The simplest way to do this is to use the authentication system that ActivityPub requires. In ActivityPub, almost all requests between servers are signed using HTTP Signatures. However, for the other side to validate a signature, it must first fetch the signing user’s public key from the origin server. Most of the time this key is cached, but it is a mechanism we can use to make an ActivityPub server call back to us!
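To make that concrete, here is a rough sketch of the trick (illustrative only, and deliberately not my actual probing code; the probe.example.com domain, the actor document layout, and the exact header set are assumptions based on how Mastodon-style signature verification generally works):

// A rough sketch of the callback trick: sign a POST to a target's inbox with
// an HTTP Signature whose keyId points at a domain we control. To verify the
// signature, the target must fetch our key document, revealing its backend IP.
package main

import (
    "bytes"
    "crypto"
    "crypto/rand"
    "crypto/rsa"
    "crypto/sha256"
    "encoding/base64"
    "fmt"
    "log"
    "net/http"
    "time"
)

var key, _ = rsa.GenerateKey(rand.Reader, 2048)

// keyHandler serves the actor/key document. Whoever fetches it is the
// instance's backend (or its outbound proxy), not its CDN frontend.
func keyHandler(w http.ResponseWriter, r *http.Request) {
    log.Printf("key fetched by %s (UA %q)", r.RemoteAddr, r.UserAgent())
    w.Header().Set("Content-Type", "application/activity+json")
    // A real actor document with a publicKeyPem field would go here.
    fmt.Fprint(w, `{"id": "https://probe.example.com/actor"}`)
}

// probe sends a signed (but otherwise meaningless) activity to the target's
// inbox. Even if the activity is rejected, the signature is checked first,
// and that check is what triggers the key fetch.
func probe(target string) error {
    body := []byte(`{}`)
    req, err := http.NewRequest("POST", "https://"+target+"/inbox", bytes.NewReader(body))
    if err != nil {
        return err
    }

    date := time.Now().UTC().Format(http.TimeFormat)
    sum := sha256.Sum256(body)
    digest := "SHA-256=" + base64.StdEncoding.EncodeToString(sum[:])

    // Sign the headers Mastodon expects, per the draft-cavage HTTP
    // Signatures scheme.
    signingString := fmt.Sprintf(
        "(request-target): post /inbox\nhost: %s\ndate: %s\ndigest: %s",
        target, date, digest)
    hashed := sha256.Sum256([]byte(signingString))
    sig, err := rsa.SignPKCS1v15(rand.Reader, key, crypto.SHA256, hashed[:])
    if err != nil {
        return err
    }

    req.Header.Set("Date", date)
    req.Header.Set("Digest", digest)
    req.Header.Set("Content-Type", "application/activity+json")
    req.Header.Set("Signature", fmt.Sprintf(
        `keyId="https://probe.example.com/actor#main-key",algorithm="rsa-sha256",headers="(request-target) host date digest",signature="%s"`,
        base64.StdEncoding.EncodeToString(sig)))

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return err
    }
    return resp.Body.Close()
}

func main() {
    http.HandleFunc("/actor", keyHandler)
    go probe("mastodon.example.org") // hypothetical target; wait for the listener in real use

    // A real deployment needs a valid certificate for the probe domain.
    log.Fatal(http.ListenAndServeTLS(":443", "cert.pem", "key.pem", nil))
}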
What I then set up was an HTTPS server (on a fresh domain name and a fresh server, to avoid mixing things) that would make signed requests to all ActivityPub servers and record which IP address each one fetched the key from. This is not a perfect mechanism, as larger instances often have large Sidekiq pools to perform all of their fetching/pushing, and in theory those could be hosted in a different location to the database.
After waiting around 30 minutes for the software to run, I had convinced almost all Mastodon, GoToSocial, and Akkoma instances to talk to my fake fediverse instance, gathering their backend IP addresses as well as which DNS resolver they use (since fetching the key means resolving my fresh domain name, my authoritative DNS server also gets to see each instance’s resolver).
Since the risk of abuse with this method is quite high (one could use it to find the “real” IP of an instance to DDoS, for example), I will not be open sourcing any of the code to do this. However, it should be known that a competent actor would be able to write something like this without too much stress, meaning the protection offered by such fronting services is not that great.
Since this technique was built with Cloudflare in mind, I began by filtering exclusively for instances that have their frontends on Cloudflare. In general, their backends are either on hosting providers without a great DDoS mitigation offering (i.e. a provider that will respond to a DDoS attack by “null routing” you), or on what appear to be residential connections; I aggregated the latter into a single entity called “Generic Home ISP”, based on bgp.tools’ Home ISP tagging:
As we can see, of the population using Cloudflare, a fair number (30%) appear to be hosting their instances behind a home broadband connection.
Overall, however, if we now look at the backend hosting providers and weight them by users, we can see that the German hosting provider Hetzner hosts over 51% of the entire network!
A reasonable thing to point out is that the network has one very, very large instance, mastodon.social, and since it also appears to be hosted on Hetzner, it inflates the Hetzner number by a decent amount. But even if we remove mastodon.social from the list entirely, Hetzner still holds the biggest share by far!
Now, having 51% of the users in your ecosystem behind a single hosting provider is not exactly great. While I believe that Hetzner and OVH are better picks than, say, Amazon/Azure/Oracle/Google, large-scale disasters at hosting providers have happened in the past. That being said, given that almost all of these instances are run by different people, all of this can be migrated depending on who is offering the best deal on compute and bandwidth, and smaller instances are preferable to single services that are too big to fail. So overall, I believe the fediverse is not doing terribly.
If you want to stay up to date with the blog, you can use the RSS feed, or you can follow me on the fediverse at @benjojo@benjojo.co.uk
Until next time!