I've seen similar bugs in the systems I oversee, because network libraries love to retry requests without sane limits by default. But I've never seen them make our rate limiters sweat. It's slightly more annoying when they hit an API that actually does some expensive work before returning an error, but that's why we have rate limits on all public endpoints.
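For contrast, here's a minimal sketch of the kind of bounded retry policy I'd want a client library to default to (the helper name and parameters are illustrative, not from any particular library):

```python
import random
import time

def call_with_bounded_retries(request_fn, max_attempts=3, base_delay=0.1, max_delay=2.0):
    """Hypothetical helper: retry a callable with a hard attempt cap and
    exponential backoff plus jitter, instead of retrying indefinitely."""
    for attempt in range(max_attempts):
        try:
            return request_fn()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # give up after the final attempt
            # exponential backoff with full jitter, capped at max_delay
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(random.uniform(0, delay))
```

The jitter matters as much as the cap: without it, every client that failed at the same moment retries at the same moment, which is exactly the thundering herd a rate limiter ends up absorbing.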
I also suspect the webapp accounts for the smallest share of Twitter's traffic, and the native apps probably don't have this problem.
In fact, it got so bad, with all those retries at multiple levels from upstream callers, that requests were essentially timing out in the TCP buffer/queue before the application could even process them.
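The nasty part about retries at multiple levels is that they multiply: in the worst case, each of N layers turns one incoming call into its full attempt count downstream. A quick back-of-the-envelope (the numbers are made up for illustration):

```python
def retry_amplification(attempts_per_layer: int, layers: int) -> int:
    """Worst-case request multiplication when every layer in a call chain
    retries independently: each layer turns 1 call into `attempts_per_layer`
    calls to the layer below it."""
    return attempts_per_layer ** layers

# 3 attempts per layer (1 try + 2 retries) across a 3-layer call chain:
# one user action can become 27 requests at the bottom of the stack.
print(retry_amplification(3, 3))  # → 27
```

That exponential blow-up is why the bottom of the stack falls over first, with requests queuing at the TCP level while the layers above keep generating more of them.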
Don’t know if the Twitter homepage backend is at similar scale.
If IPs or IP ranges get really annoying, we block them at the network level.
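The application-side equivalent of that range check is simple enough with Python's stdlib `ipaddress` module; a sketch with made-up deny-list ranges (real blocking would live in the firewall or at the edge, not in app code):

```python
import ipaddress

# Hypothetical deny list using reserved documentation ranges as stand-ins.
BLOCKED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),   # TEST-NET-3
    ipaddress.ip_network("198.51.100.0/24"),  # TEST-NET-2
]

def is_blocked(ip: str) -> bool:
    """Return True if the address falls inside any blocked range."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in BLOCKED_NETWORKS)
```

Doing it at the network level instead means the offending traffic never costs you an application thread at all, which is the whole point when the caller is a runaway retry loop.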
Big public sites like Twitter obviously need this kind of tooling. Given their political content, they probably also need sophisticated DDoS protection.