It's funny that the Bluesky devs say they implemented "something like webfinger" but left out the only important part of WebFinger that protects against these attacks in the first place. Weird oversight, and something something don't come up with your own standards.
Maybe I'm old, but what are some popular use cases for WebFinger? (I'm just learning about it now.)
WebFinger is for when you want to multiplex multiple identities on a single domain.
E.g. https://example.com/.well-known/webfinger?resource=nick@exam...
Will serve the challenge proving your handle is @nick@example.com
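For the curious, a lookup is just an HTTP GET plus JSON parsing. A minimal sketch in Python (the host and handle are placeholders, not a real service):

    import json
    import urllib.parse
    import urllib.request

    def webfinger(acct):
        """Resolve an acct: URI via the host's /.well-known/webfinger."""
        _, _, host = acct.partition("@")
        query = urllib.parse.urlencode({"resource": "acct:" + acct})
        url = "https://" + host + "/.well-known/webfinger?" + query
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    # The response is a JRD document; its "links" array maps the handle
    # to concrete resources (profile page, ActivityPub actor, etc.).
    doc = webfinger("nick@example.com")
    print(doc["subject"])  # "acct:nick@example.com"
    for link in doc.get("links", []):
        print(link.get("rel"), link.get("href"))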
A few things are effectively grandfathered in due to their vintage: /favicon.ico, /sitemap.xml and /robots.txt are the three that occur to me—so if you’re running something vaguely like S3, you’ll want to make sure users can’t create files at the top level of your domain matching at least those names.
But nothing new should use anything other than /.well-known/ for domain-scoped stuff, or else you run into exactly this problem.
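If you're hosting user-named files, the defensive check is cheap. A rough sketch (the reserved list here is illustrative, not exhaustive):

    # Names that must stay under the site operator's control.
    RESERVED = {"favicon.ico", "sitemap.xml", "robots.txt"}

    def is_allowed_key(key):
        """Reject top-level keys that shadow reserved files, plus
        anything under /.well-known/, which belongs to the operator."""
        top = key.lstrip("/").split("/", 1)[0].lower()
        return top not in RESERVED and top != ".well-known"

    assert not is_allowed_key("robots.txt")
    assert not is_allowed_key(".well-known/webfinger")
    assert is_allowed_key("users/alice/photo.png")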
Also the penalty isn't very high here. Someone impersonated a domain on a burgeoning protocol for a short while. So what?
We're talking about folks setting up a custom domain for a personal social media presence. If they can handle nameservers and DNS records, they can handle a folder with a dot in the name.
This is not how Mastodon does verification (at least, not the main method). Mastodon doesn't just link user -> domain; it can link user -> webpage, for example to link social profiles between sites.
If you have a website with user generated content, and a user can set an arbitrary html attribute (rel="me") in a link pointing back to their profile, they can claim ownership of the page on Mastodon. Likewise, if they can set a link tag in the head element of the page for some reason.
Presumably this is somewhat harder to exploit than a (new, poorly thought out) dependency on a static file under /xrpc, but Mastodon does introduce more authentication footguns for sites than just .well-known! https://docs.joinmastodon.org/user/profile/#verification
Edit: authentication -> verification, since Mastodon distinguishes between the two (see below)
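To make the attack concrete, here's a rough sketch of the backlink check this style of verification boils down to. It's a simplification of what Mastodon actually does, and the URLs are placeholders:

    from html.parser import HTMLParser
    import urllib.request

    class RelMeParser(HTMLParser):
        """Collect href values from <a>/<link> tags carrying rel="me"."""
        def __init__(self):
            super().__init__()
            self.hrefs = []
        def handle_starttag(self, tag, attrs):
            a = dict(attrs)
            if tag in ("a", "link") and "me" in (a.get("rel") or "").split():
                self.hrefs.append(a.get("href"))

    def page_links_back(page_url, profile_url):
        with urllib.request.urlopen(page_url) as resp:
            parser = RelMeParser()
            parser.feed(resp.read().decode("utf-8", "replace"))
        return profile_url in parser.hrefs

    # If user-generated HTML on the page can include
    # <a rel="me" href="https://mastodon.example/@attacker">, the
    # attacker's profile displays that page as a verified link.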
I also recall /crossdomain.xml as an important one; allowing users to create an arbitrary file matching that name could enable certain kinds of cross-site attacks against your site.
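For context, the danger was the wildcard policy: Flash would grant any origin the file allowed cross-origin read access to your site. A quick sketch of checking for it, assuming the classic namespace-free crossdomain.xml format:

    import urllib.request
    import xml.etree.ElementTree as ET

    def has_wildcard_policy(site):
        """True if the site's crossdomain.xml allows every domain."""
        url = "https://" + site + "/crossdomain.xml"
        with urllib.request.urlopen(url) as resp:
            root = ET.fromstring(resp.read())
        return any(el.get("domain") == "*"
                   for el in root.iter("allow-access-from"))

    # <allow-access-from domain="*"/> meant any Flash movie anywhere
    # could read authenticated responses from your site.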
You're thinking of how Mastodon does verified links. You could do something similar, providing a verified link on your profile to a file in an S3 bucket, but there's very little utility (or risk) in that.
Mastodon also allows you to be discoverable via a custom domain, using .well-known as the parent comment mentioned: https://docs.joinmastodon.org/spec/webfinger/ https://www.hanselman.com/blog/use-your-own-user-domain-for-...
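The nice part is that the custom-domain setup can be a single static file. A sketch of the JRD document you'd serve (domains and account names are placeholders):

    import json

    # A handle on your own domain pointing at an account hosted on
    # someone else's Mastodon instance.
    JRD = {
        "subject": "acct:me@yourdomain.com",
        "links": [{
            "rel": "self",
            "type": "application/activity+json",
            "href": "https://mastodon.example/users/me",
        }],
    }

    # Serve this at https://yourdomain.com/.well-known/webfinger
    # (Content-Type: application/jrd+json); a static file or a redirect
    # to your instance's own webfinger endpoint both work.
    print(json.dumps(JRD, indent=2))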
I'm not sure what Bluesky was attempting to do here, but what they achieved in practice was allowing a user to claim control of a domain by claiming control of a page. But if you allow user-generated content on the home page of your site, there's no distinction (from a Mastodon user's point of view) between the two. It's effectively the same problem if I can "verify" yourdomain.com on Mastodon, and my point is that you can do that without using .well-known.
That's the problem with expecting people to agree with and follow standards.
You've effectively described what happens when people don't agree.
You'd hope that people doing job X would seek at least some insight into whether there are best practices for doing X, even if it's not a regulated job where you're required by law to have proper training. Not so much unfortunately.
Example: Many years ago now, early CA/B Forum rules allowed CAs to issue certificates for DNS names under TLDs which don't exist on the Internet. So e.g. back then you could buy a cert for some.random.nonsense and that was somehow OK, and people actually paid for that. It's worthless obviously, nobody owns these names, but until it was outlawed they found customers.

But even though the list of TLDs is obviously public information, some CAs actually didn't know which ones existed. As a result some companies were able to tell a real public CA, "Oh we use .int for our internal services, so just give us a certificate for like www.corp-name.int" and that worked. The CAs somehow didn't realise .int exists, it's for International Organisations, like ISO or the UN, and so they issued these garbage certificates.
[Today the rules require that publicly trusted CAs issue only for names which do exist on the public Internet, or which if they did exist would be yours, and only after seeing suitable Proof of Control over the name(s) on the certificate.]
https://wiki.archlinux.org/title/XDG_Base_Directory
I support both .well-known and XDG because I think the benefit outweighs the fact that they could perhaps have been designed better. But I don't think that those who opt out could only be doing so out of ignorance.
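For reference, the XDG lookup rule is tiny, which is part of why it's frustrating when apps skip it. A sketch (the app name is a placeholder):

    import os
    from pathlib import Path

    def config_dir(app):
        """$XDG_CONFIG_HOME if set, else the ~/.config fallback."""
        base = os.environ.get("XDG_CONFIG_HOME") or Path.home() / ".config"
        return Path(base) / app

    print(config_dir("myapp"))  # e.g. /home/you/.config/myapp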
Adobe Reader could have an optional Flash plugin, and better yet, you could configure the PDF's interactive plugin to dynamically download the SWF file to run.
I built an entire Flex-based rich UI that was dynamically loaded by the 1kb PDF you'd receive in email; the Flex app retrieved and posted data via HTTP APIs.
Because reasons.
That product was live for years. I think we shut it down as recently as 2 years ago.
To be 100% clear, wasn’t my idea.
But it was my mistake to joke about the absurd possibility to build such a thing in front of some biz folks.
Only if you allow UGC with *arbitrary HTML* or explicitly support generating rel=me. Both of those amount to you explicitly giving someone control of the site (or at least letting them claim they have it).
in an interesting coincidence, I found this today!