I have seen similar bugs in the systems I oversee because network libraries love to retry requests without sane limits by default. But I've never seen them make our rate limiters sweat. It's slightly more annoying when they hit an API that actually does some expensive work before returning an error, but that's why we have rate limits on all public endpoints.
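For anyone curious what "sane limits" looks like in practice, here's a minimal sketch: capped exponential backoff with jitter, honoring Retry-After, instead of the hot retry loop some clients default to. The function name, attempt cap, and status-code list are all illustrative, not from any particular library.

```python
import random
import time

import requests  # any HTTP client works the same way; requests is just familiar

def get_with_backoff(url: str, max_attempts: int = 5) -> requests.Response:
    """Retry with capped exponential backoff + jitter instead of a hot loop."""
    resp = None
    for attempt in range(max_attempts):
        resp = requests.get(url, timeout=10)
        # Only retry on throttling or transient server errors.
        if resp.status_code not in (429, 500, 502, 503, 504):
            return resp
        # Honor Retry-After when it's a plain seconds value; otherwise back off
        # exponentially, capped at 30s.
        retry_after = resp.headers.get("Retry-After")
        if retry_after and retry_after.isdigit():
            delay = float(retry_after)
        else:
            delay = min(2 ** attempt, 30)
        time.sleep(delay + random.uniform(0, 1))  # jitter avoids thundering herds
    return resp  # give up and surface the last response
```

The jitter matters: without it, every client that failed at the same moment retries at the same moment, which is exactly how a retry storm turns into a self-inflicted DDoS.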
I also suspect the webapp accounts for the smallest share of Twitter's traffic, and the native apps probably don't have this problem.
Seems like either my quota reset or they changed the policy, because I'm able to access the site again.
Elon can be a monumental asshat, and he can be self-DDoSing, and he can be accurate about scraping, all at the same time. It's why every single social media platform is heading toward becoming a walled garden.
A real scraper would be stopped just as well by a rate limit of, say, 100 tweets/minute. 600 tweets/day works out to less than one tweet every two minutes, which is a completely pointless, punitive limit for actual users.
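To make the comparison concrete, the kind of limit being argued for is a token bucket: it throttles sustained bulk extraction while letting a human reader burst through a feed unimpeded. A rough sketch, with made-up numbers rather than anything Twitter actually uses:

```python
import time

class TokenBucket:
    """Allows short bursts but caps sustained throughput at rate_per_min."""

    def __init__(self, rate_per_min: float = 100, burst: int = 20):
        self.rate = rate_per_min / 60.0   # tokens refilled per second
        self.capacity = burst
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit: serve a 429 instead of content
```

At 100/minute that's 144,000 tweets/day per account, plenty for any human and still a meaningful ceiling on naive scraping, versus the 600/day cap that only a human would ever notice.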
I'm guessing you've never played an offensive or defensive role in scraping, because what you've described is in no way a problem for a serious scraping effort; anyone serious just rotates through pools of accounts and residential proxies. I agree the rate limits are stupid: they fuck over users, they stop amateur scrapers, and they do nothing whatsoever to impede professional scraping.
If you want to stop most scraping, employ device attestation techniques and TLS fingerprinting.
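For the TLS fingerprinting half, the common approach is a JA3-style hash: concatenate the ClientHello's version, cipher suites, extensions, curves, and point formats, MD5 the string, and compare against fingerprints known to come from real browser builds. A minimal sketch, assuming the ClientHello fields have already been parsed out upstream (e.g. by the terminating proxy); the allowlist hash below is made up:

```python
import hashlib

def ja3_hash(tls_version: int, ciphers: list[int], extensions: list[int],
             curves: list[int], point_formats: list[int]) -> str:
    """Join the ClientHello fields in JA3 order and MD5 the result."""
    fields = [
        str(tls_version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, curves)),
        "-".join(map(str, point_formats)),
    ]
    return hashlib.md5(",".join(fields).encode()).hexdigest()

# Hypothetical allowlist of fingerprints observed from real browsers.
KNOWN_BROWSER_JA3 = {"579ccef312d18482fc42e2b822ca2430"}

def looks_like_browser(hello_fields: tuple) -> bool:
    return ja3_hash(*hello_fields) in KNOWN_BROWSER_JA3
```

A default Python or Go HTTP client produces a very different ClientHello than Chrome does, so this catches the lazy scrapers that per-account quotas never will, though determined ones impersonate browser TLS stacks too.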
There are definitely more and more sites doing TLS/TCP/etc. fingerprinting or device attestation on their mobile APIs, but it's still pretty rare. Meanwhile Twitter is trying to limit requests by IP, so it's definitely amateur hour over there.