zlacker

[return to "So this guy is now S3. All of S3"]
1. smcl+wx1[view] [source] 2023-05-05 07:26:13
>>aendru+(OP)
Last night I opened this, saw the HTTP 429 and figured "ah, too many requests, I'll check the comments and try again in the morning". The comments were all people expressing shock that some non-specific "they" (S3? Amazon? Someone else?) didn't use ".well-known", and others complaining about Mastodon and/or the fediverse. I had to read multiple comments to piece together the story, I swear it was like Elden Ring[0].

What this is actually about: BlueSky is Jack Dorsey's new Twitter clone. It is eventually intended to be some sort of fediverse thing, but it's not there yet and it's not the source of the fediverse gripes here. You can authenticate your BlueSky user as the owner of a given domain or subdomain by placing a file with a given content somewhere under that domain/subdomain. However, that "somewhere" was just a location one of the devs at BlueSky chose, rather than somewhere relatively standardised like the ".well-known" path (which you might recognise from things like OpenID Connect, where the configuration doc is located at example.com/.well-known/openid-configuration). So one user exploited this and became the "owner" of the Amazon S3 domain by setting up a storage account on Amazon S3 and following BlueSky's setup instructions. That is the main story here - some non-Amazon rando is now officially the Amazon S3 guy on Bluesky.
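The ".well-known" convention people were asking for is simple enough to sketch. This is a hypothetical illustration, not Bluesky's actual protocol - the path and token scheme below are made up:

```python
# Hypothetical sketch of domain verification via an RFC 8615 well-known path.
# The path and token names here are illustrative, not Bluesky's real protocol.

def verification_url(domain: str) -> str:
    # /.well-known/ is reserved for exactly this kind of site metadata,
    # so every verifier knows where to look without per-app conventions.
    return f"https://{domain}/.well-known/example-domain-proof"

def is_owner(served_token: str, issued_token: str) -> bool:
    # The account is treated as the domain owner iff the token served at
    # the well-known path matches the one issued to that account.
    return served_token.strip() == issued_token
```

The relevance to a shared domain like S3's is that a reserved prefix such as "/.well-known/" is something a host can refuse to let users control, whereas an arbitrary app-chosen path can collide with names users are free to create themselves.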

The next part is that someone posted about it on the https://chaos.social Mastodon instance, which got overwhelmed. The admins saved their server by returning a 429 response for that specific post to users who don't belong to chaos.social, and that is why people are upset about Mastodon.
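The gate they put up can be sketched as a status decision keyed on the request path and whether the session belongs to a local account - a minimal sketch with invented details, not the instance's actual code:

```python
def status_for(path: str, is_local_member: bool, overloaded_paths: set[str]) -> int:
    """Decide the HTTP status for a page view.

    Local members still get the page; everyone else hitting one of the
    paths that is melting the server gets 429 Too Many Requests.
    Illustrative only - the real admins likely did this at the webserver
    or proxy layer rather than in application code.
    """
    if path in overloaded_paths and not is_local_member:
        return 429
    return 200
```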

Interesting story, but I'm not interested in Dorsey's version of Twitter 2.0 unless it actually allows you to sign up[1] and brings something compelling that Twitter didn't and Mastodon doesn't.

[0] - game with an intricate story that does its damnedest to not actually tell you. If you want to know the story you have to piece it together yourself by picking up dozens of items scattered throughout the game and reading all their descriptions. Or you can do what I did - watch a video on YouTube.

[1] - they're doing an open beta and letting a little trickle of users on, who post about it on their Twitter/Mastodon/whatever. Feels a bit deliberate, like they're trying to build anticipation and frankly I detest little manipulative things like that so I'm out

2. accoun+aC1[view] [source] 2023-05-05 08:11:19
>>smcl+wx1
> The next part is that someone posted about it on this https://chaos.social Mastodon instance, which got overwhelmed, the owners decided to save their server by electing to return a 429 response for that specific post if users don't belong to chaos.social, and that is why people are upset about Mastodon.

It's like all these newfangled webapps don't understand the concept of caching static pages for anonymous users. There is absolutely no reason that something like this should result in more than one request (plus a handful more for static resources) per person linked from other sites, all handled entirely by the frontend webserver's in-memory cache. But instead it's all dynamic and the page shoots off more API requests before being able to show anything.

3. smcl+wE1[view] [source] 2023-05-05 08:38:02
>>accoun+aC1
So the thing is that in one respect they actually do get caching, almost to a fault. One of the complaints I've seen among Mastodon instance operators is that they end up storing pretty hefty amounts of data locally, as their instance caches remote posts, images and profiles from other instances that its members follow. One source of problems, which may have been resolved, was that even though there's a job that cleans out this cache, the banner images from external profiles stick around. I saw this a while back and it seems like an easy fix, so I imagine it's been addressed.
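How a cleanup job can miss one media type is easy to sketch. The entry shape and the set of prunable kinds below are invented for illustration, not Mastodon's actual retention logic:

```python
from datetime import datetime, timedelta

# Illustrative only: note that "banner" is absent from the prunable set.
PRUNABLE_KINDS = frozenset({"post_image", "avatar"})

def prune_remote_media(entries, now, retention=timedelta(days=7)):
    """entries: iterable of (kind, fetched_at, path) for cached remote media.

    Returns (kept, removed_paths). Anything whose kind the job doesn't
    cover - like "banner" here - survives forever, which is the shape of
    bug that would let external profile banners pile up on disk.
    """
    kept, removed = [], []
    for kind, fetched_at, path in entries:
        if kind in PRUNABLE_KINDS and now - fetched_at > retention:
            removed.append(path)
        else:
            kept.append((kind, fetched_at, path))
    return kept, removed
```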

I don't think I am equipped to diagnose what the root cause was here. It's even possible that this instance wasn't intended to host viral posts (i.e. high-profile posts that would get shared to many external users) and they didn't want to invest in hardware/services to facilitate this.
