That's at least how I think a federated system should work. Not sure if reality matches that.
really... surprised it got submitted here
incidentally i'm running pleroma, not mastodon. minor detail but you know
which is what led me to block all other IPs - it's not the hardest thing to just open a TLS connection to an IP (e.g. with openssl s_client) and get the common names of the certificate returned
especially if you know the hosting provider, which narrows down the ip space significantly
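As a sketch of that probe (the helper names are mine, and this assumes the target serves a CA-signed certificate, e.g. from Let's Encrypt, so chain validation still succeeds even though we connect by bare IP):

```python
import socket
import ssl

def names_from_cert(cert):
    """Collect the commonName and DNS SANs from ssl's getpeercert() dict."""
    names = set()
    for rdn in cert.get("subject", ()):
        for key, value in rdn:
            if key == "commonName":
                names.add(value)
    for kind, value in cert.get("subjectAltName", ()):
        if kind == "DNS":
            names.add(value)
    return sorted(names)

def probe(ip, port=443):
    """Connect to a bare IP without SNI and read the names off its cert."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False  # we only have an IP, so skip the hostname match
    with socket.create_connection((ip, port), timeout=5) as sock:
        # no server_hostname argument -> no SNI is sent
        with ctx.wrap_socket(sock) as tls:
            return names_from_cert(tls.getpeercert())
```

Run `probe()` across a provider's address range and any certificate that names your instance gives the game away.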
it's right at the end of the article - the attacker was abusing the "create a preview card of any posted URL" feature - he'd post a link, wait for pleroma to go and grab the url to preview it, then narrow down which one was mine based on user agent
i added an upstream proxy and anonymised the user agent, so even if he were to do that, the most he'd find was my proxy box
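A sketch of the narrowing-down step from the attacker's side (the log lines, IPs and hostnames here are made up for illustration; by default Pleroma's fetcher advertises the instance URL in its User-Agent, which is exactly what the anonymised user agent removes):

```python
# Hypothetical access-log lines from the attacker's bait server.
LOG_LINES = [
    '203.0.113.5 "GET /bait HTTP/1.1" "Mozilla/5.0 (generic browser)"',
    '198.51.100.9 "GET /bait HTTP/1.1" "Pleroma 1.1.1; https://myinstance.example <admin@myinstance.example>"',
]

def fetcher_ips(lines, instance_host):
    """Return the source IPs whose User-Agent names the target instance."""
    return [line.split()[0] for line in lines if instance_host in line]

# The one preview-card fetch that mentions the instance reveals its real IP.
print(fetcher_ips(LOG_LINES, "myinstance.example"))
```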
If you set up Cloudflare properly, then you only see a CF-issued certificate, not the actual hostnames. Since you didn't send a proper hostname via SNI (unless you guess one from PTR records, which isn't reliable either) it'll use whatever default hostname it has configured (or just close the connection).
Or in a case like my setup, you'll get an empty 0-byte response if no Host: header is present. The certificate is a wildcard for the primary domain the server runs on, not even related to the mastodon service.
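One way to get roughly that behaviour is a catch-all server block like this sketch (paths and names are placeholders; nginx's non-standard 444 code closes the connection without sending a response):

```nginx
# Hypothetical catch-all: requests that don't match a real server_name land here.
server {
    listen 443 ssl default_server;
    server_name _;
    ssl_certificate     /etc/ssl/wildcard.example.pem;  # wildcard for the primary domain
    ssl_certificate_key /etc/ssl/wildcard.example.key;
    return 444;  # close the connection, send nothing back
}
```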
And of course, this post contains enough information to probably nail it down but on the other hand, mass scanning the internet is a lot of trouble.
Unless you're, yanno, EMI Records, Sky TV or someone with political sway.
The most productive (best outcome) way of handling it tends to be to turn your OPSEC up to eleven and put all your XP into defence. Again, based on experience.
If a customer wants to hide their IP, then the best way to do it is:
1. Onboard onto Cloudflare
2. Audit your app and ensure you aren't leaking your IP (are you sending email directly? making web calls directly? - adjust to use other providers' APIs, e.g. send email via the SendGrid API, etc)
3. Change your IP (it was previously public knowledge in your DNS records)
At this point your IP should be unknown, so...
4. Use `cloudflared` and https://www.cloudflare.com/en-gb/products/argo-tunnel/ to have your server call us, rather than us call you (via DNS A / AAAA records)
Because this connects a tunnel from your server, you can configure iptables and your firewall to close everything :)
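"Close everything" could look like this firewall sketch (illustrative fragment, run as root; persist it with whatever your distro uses):

```shell
# Default-deny inbound: the tunnel dials out, so nothing needs to listen publicly.
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT                                       # keep loopback working
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT  # replies to outbound connections (incl. the tunnel)
```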
Here's the help info: https://developers.cloudflare.com/argo-tunnel/quickstart/
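For a recent cloudflared, the server side of step 4 boils down to a small config like this sketch (tunnel name, hostname, port and paths are placeholders; check the linked docs for the exact syntax your version expects):

```yaml
# /etc/cloudflared/config.yml - minimal sketch
tunnel: my-tunnel                        # created with `cloudflared tunnel create my-tunnel`
credentials-file: /etc/cloudflared/my-tunnel.json
ingress:
  - hostname: social.example.com         # public hostname served via Cloudflare
    service: http://localhost:4000       # your app, listening on localhost only
  - service: http_status:404             # catch-all for anything else
```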
PS: to the OP I tried to contact you via keybase, feel free to ping my email. We are working to improve the DDoS protection for attacks in the range you were impacted by, and the product manager would enjoy your feedback if you're willing to share it in the new year.
Would you prefer the title were modified? The mods can do that. I thought that specifying what the DDoS mitigation was applied to would be helpful, though my presumption of Mastodon was in error, apologies.
Cloudflare logs also suck for enterprise customers
Other Fediverse protocols -- I believe either Friendica or ... I think Hubzilla? -- have some level of account portability.
There's a fairly long-standing request for Mastodon to support this. For now, you can have accounts on other instances forwarded to your primary.
While you can export and import your own follows, followers of your account won't automatically redirect to the new home.
Mastodon content, however, will syndicate across the Fediverse, and even some of my posts from a now-dead instance can (occasionally) be found.
Unfortunately, this becomes increasingly likely as any community grows. Equivalent raging occurred fairly early in the life of other social networks such as Usenet and The WELL.
The DDoS protection is the same across all tiers - it's built in and you aren't charged for that. You even see other features (like the Rate Limit feature cited in the article) explicitly structure their pricing so that you are not charged for attack traffic even if you are on a paid plan or feature.
For small denial of service attacks the Security Level switch is very good at stopping the vast majority of attack traffic, and then the IP blocking and User Agent blocking is good too - this is available on the free plan, as are a handful of Firewall Rules that can allow complex expressions to match and drop traffic.
So you can get a very long way on the free plan.
Paid features I'd recommend if you want to stay on the free plan month-to-month yet go paranoid for a small cost:
1. Rate Limit, configure it on your dynamic endpoints to minimise the costs to you but have it highly effective against attacks. Predicted cost is relative to how many requests for dynamic endpoints you have... you can be smart here and combine with Firewall Rules to drop traffic that does not have auth credentials.
2. Argo Tunnel, to hide your IP.
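The "drop traffic without auth credentials" idea from point 1 might look like this as a Firewall Rules expression with the action set to block (the path and cookie name are hypothetical; use whatever your app actually sets):

```
(http.request.uri.path contains "/api/" and not http.cookie contains "_session_id=")
```

Anything matching the rule is blocked at the edge, so the rate limiter only ever counts requests that carry a session cookie.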
There are other plan level benefits, and the most notable is the quantity of Firewall Rules per plan level and the complexity they allow: https://www.cloudflare.com/en-gb/plans/
I'm no longer on keybase, deleted it a few days ago - but I'm more than happy to share what I found if you want
pretty sure it's nothing groundbreaking though
other contact methods are listed on my profile
(edit: by OP I mean article author)
I also pull-requested a user agent anonymisation setting (pleroma.http.user_agent) to make this better
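In config terms that setting looks roughly like this (sketch only; check the Pleroma docs for the exact key and accepted values):

```elixir
# config/prod.secret.exs - override the default fetcher User-Agent (value is illustrative)
config :pleroma, :http,
  user_agent: "Mozilla/5.0 (compatible; generic fetcher)"
```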
Even worse is the pattern of requesting Let's Encrypt certificates for multiple domains on one certificate. Now all of a sudden you're leaking development server hostnames, peeling the white label off multi-tenant setups, and making things easier for automated scanners.
I get that security by hostname obscurity is a poor practice on its own, but there's also something to be said for cutting down a large amount of malicious traffic with some common best practices.
If a decentralised system is to stay decentralised, it needs to consider spammy bad actors.
What if I own a server and connect it to an ISP under an agreement where the ISP is accountable for clearly malicious behavior coming from its connection (regardless of origin)?
Then, that ISP requires the same agreement from me, and everyone connecting to that ISP, and on down the chain.
Wouldn't we all be very active in policing bad actors in the networks we manage?
This information has been outdated since October 11, 2019: Mastodon 3.0 lets you move followers from one account to another.
It also requires de-anonymisation (so you can identify who the bad actor actually is!) - you wouldn't be allowed a Tor exit node on this network, for example.
2) This would require ISPs to do even more invasive monitoring of all traffic to be in compliance. They'd essentially have to DPI everything, or even break TLS between you and your destination, to know if your traffic was malicious. No thank you.
3) Many ISPs simply don't care. A lot of malicious traffic comes from countries where ISPs will just look the other way for a bit of cash. I suppose we could come up with a system that depeers bad ISPs, but this would have tons of collateral damage to innocents as well as reintroducing the exact centralization we're trying to avoid (where's the "master list" of bad ISPs to depeer?)
Whatever the solution to bad actors online is, it isn't ISPs.
> It also requires de-anonymisation (so you can identify who the bad actor actually is!) - you wouldn't be allowed a Tor exit node on this network, for example.
I think that "because it is a lot of work" can't be a reason for not taking preventative measures when something you own is doing harm to another party.
As far as available technologies, "it isn't perfect" does not mean it isn't better.
If an automated ride-sharing service has customers who are shooting guns out their windows and damaging property, then maintaining the anonymity of the attackers is not a reason to permit them to engage in this behavior.
If Tor is permitting criminals to use the Tor tool, itself, to do harm to others, then it is up to the Tor project to remedy this. If I, as a network operator, do not want them damaging my network, then that is my choice. If I, as a customer of a network operator inform the network operator that I am willing to accept Tor and all liability, then this exception can be written into a contract.
A single fediverse instance blocking Tor users doesn't make much of a difference: my instance still allows them, and I know of many that do.
Federated systems, like email, have used these anti-spam techniques for a long time.
Federated systems always evolve into an oligarchy, like Gmail/Hotmail/Yahoo, etc., or like the banks, JPMorgan Chase/Goldman Sachs/etc.
If you want decentralization, you should more go for something like https://notabug.io/ (P2P Reddit), which uses the GUN protocol (mine). Or any WebTorrent-based approach.
> 2) This would require ISPs to do even more invasive monitoring of all traffic to be in compliance. They'd essentially have to DPI everything, or even break TLS between you and your destination, to know if your traffic was malicious. No thank you.
> 3) Many ISPs simply don't care. A lot of malicious traffic comes from countries where ISPs will just look the other way for a bit of cash. I suppose we could come up with a system that depeers bad ISPs, but this would have tons of collateral damage to innocents as well as reintroducing the exact centralization we're trying to avoid (where's the "master list" of bad ISPs to depeer?)
> Whatever the solution to bad actors online is, it isn't ISPs.
Yes, if something I own were, unbeknownst to me, harming others (beyond some de minimis level) through their service, then per their contract they certainly have the right to refuse me service until the condition is rectified. Anyone relying on my service will either suffer or be owed something, by me. Note that this isn't some arbitrary shuttering of a service. This is a harmful activity being blocked from doing harm, as spelled out in the contract clauses.
You make it sound as if this stuff is so hard, yet here we are discussing this in the comment section of a post by a person who doesn't seem to be employing highly sophisticated tools to identify the bad behaviour. All he would have to do in my dream world is show this behaviour to his (contracted) service providers, who would push it on up the chain.
But notice that this option is not available, thus the only option is to use a centralized provider that is effectively big enough to completely absorb a huge percentage of bad activity. He even comments that owners of networks are only voluntarily providing responses and actions to these activities. They could just as well not be bothered and what then?
If the ISPs don't care and people who don't want this traffic on their networks disconnect from them, this is bad? And, yes, whole countries may have problems connecting anywhere. Mind you, even those countries had some reason to connect to the World Wide Web (itself with a mountain of even just protocol requirements) in the first place, and it likely has to do with some minimal amount of trade with the outside world. To keep that trade flowing, they will have to provide communications services that others are willing to connect to.
It wasn't until this post that I realized the italics of your upstream post wasn't your original content. I don't find it nice to squint at the text to see when it stops being italicized to know when you've started your post. But a final ">" is easy to see.
Also, it was only up until the very early 2000s that researchers of decentralized systems mostly ignored the existence of malicious actors; after that everyone became well aware of them and started considering how to deal with them.
Well, don't leave us hanging, do enlighten us how
But in practice datacenters, uplinks and internet exchanges often are able to do flowspec, firewall rules, block all UDP for a subnet in all networks they have relationships with, etc. So plenty of those nodes can be behind ISPs that mitigate volumetric attacks automatically, so even simple DNS failover might be good enough to protect from such attacks. It's not that hard. Layer 7 is where the hard part is.
1. Having had ~25 servers per datacenter
2. 5 Datacenters (1 in Texas, 1 in Utah, 2 in California, 1 in Chicago)
3. 1 server in each location connected at 10Gbit, the rest at 1Gbit.
I got to watch first hand as DNS reflection attacks crippled our infrastructure one server at a time. Only 2 of the datacenters (1 in LA, 1 in Chicago) had the infrastructure to mitigate the DDoS without significantly affecting their operations. Even post mitigation, the 2 datacenters that didn't end up blackholing our IPs at the edge still let so much malicious traffic through that only the 2 10Gbit servers remained online, and they were nearly CPU-bound across 24 cores just handling the SENDQ/RECVQ for the NIC.
I mention this because it's sometimes easy to dismiss until you're in the situation and the realities of what you have control over are vastly different from technically feasible. The size and scope of modern DDoS attacks can easily overwhelm entire uplinks to a datacenter, even after pushing mitigations upstream. The reason these reverse proxies from companies like Cloudflare have become so popular is because most will not have the raw resources required to mitigate this themselves. Even some larger datacenters don't have the resources.
I wonder if there are any other ways to defend against DDoS?
Maybe looking for a host that helps with such matters? But then they will probably be more expensive, too?
I understand, but you are still talking about a situation where surviving a volumetric DDoS attack without a global centralized provider was possible. It wasn't smooth for you, but it could have been if things were done a bit differently.
Anyway, here on the other side of the world it's not like that; DDoS protection is more common. In the early days of DDoS attacks, with all the dreadful blackholing, one of the big European providers, OVH, invested in DDoS protection and kind of pushed the whole market to provide it too instead of blackholing.