Though it’s hard to know for sure what really went down; it could be any number of things, including a lack of subject-matter experts (Elon recently admitted to laying off some people they shouldn't have).
On that note, the 10 requests/second figure in the post is also negligible for the same reason: only requests that actually reach the backend servers matter.
Personally I'd just cache HTTP 429 responses for 1 minute, but you could also implement rate limiting inside the load balancer with an in-memory KV store or a bloom filter if you wanted to.
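
Roughly what I mean, as a minimal Go sketch (hypothetical upstream URL and limits; a real deployment would just use the LB's built-in machinery, e.g. nginx's limit_req or proxy_cache): a fixed-window counter per source IP kept in an in-memory map, checked before the request is ever proxied upstream.

    package main

    import (
        "net"
        "net/http"
        "net/http/httputil"
        "net/url"
        "sync"
        "time"
    )

    // limiter is a fixed-window rate limiter keyed by source IP.
    // Counters live in an in-memory map, so checking a request
    // never touches a backend server.
    type limiter struct {
        mu     sync.Mutex
        window time.Time
        counts map[string]int
        limit  int
    }

    func (l *limiter) allow(ip string) bool {
        l.mu.Lock()
        defer l.mu.Unlock()
        // Reset every counter once a minute -- the "cache the
        // verdict for 1 minute" idea, stored as counts instead
        // of cached responses.
        if time.Since(l.window) >= time.Minute {
            l.window = time.Now()
            l.counts = make(map[string]int)
        }
        l.counts[ip]++
        return l.counts[ip] <= l.limit
    }

    func main() {
        // Hypothetical upstream; stands in for the backend pool.
        upstream, _ := url.Parse("http://backend.internal:8080")
        proxy := httputil.NewSingleHostReverseProxy(upstream)
        lim := &limiter{counts: make(map[string]int), limit: 600} // ~10 req/s per IP

        http.ListenAndServe(":8000", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            ip, _, _ := net.SplitHostPort(r.RemoteAddr)
            if !lim.allow(ip) {
                // Rejected at the edge: the backend never sees this request.
                w.Header().Set("Retry-After", "60")
                http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
                return
            }
            proxy.ServeHTTP(w, r)
        }))
    }

Point being, a rejected request costs one map lookup at the edge and never fans out to the backend, which is why the extra load from rate-limited clients is negligible.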
Perhaps the context you're missing is that all large sites use ECMP routing and consistent hashing to ensure that requests from the same IP hit the same load balancer. Twitter only has ~238 million daily active users, and 10 requests/second over keepalive TCP+TLS connections can be handled by a couple of nginx servers. The linked "Full-stack Drupal developer" has no idea how any of this works, and it's kinda sad how most people in this thread took his post at face value.
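
If the consistent-hashing part is unfamiliar, the idea is roughly this (a toy sketch, not Twitter's actual topology; node names are made up): hash each source IP onto a ring of LB nodes, so the same client deterministically lands on the same box and its in-memory counters stay valid, and when a node dies only the keys that hashed to it move.

    package main

    import (
        "fmt"
        "hash/fnv"
        "sort"
    )

    // ring maps hashed client IPs onto a sorted circle of LB node
    // hashes, so a given source IP deterministically picks the
    // same node.
    type ring struct {
        hashes []uint32
        nodes  map[uint32]string
    }

    func hash32(s string) uint32 {
        h := fnv.New32a()
        h.Write([]byte(s))
        return h.Sum32()
    }

    func newRing(nodes []string, vnodes int) *ring {
        r := &ring{nodes: make(map[uint32]string)}
        for _, n := range nodes {
            // Virtual nodes smooth the distribution across the ring.
            for i := 0; i < vnodes; i++ {
                h := hash32(fmt.Sprintf("%s#%d", n, i))
                r.hashes = append(r.hashes, h)
                r.nodes[h] = n
            }
        }
        sort.Slice(r.hashes, func(i, j int) bool { return r.hashes[i] < r.hashes[j] })
        return r
    }

    // pick returns the LB node for a client IP: the first node hash
    // clockwise from the IP's hash, wrapping around the ring.
    func (r *ring) pick(clientIP string) string {
        h := hash32(clientIP)
        i := sort.Search(len(r.hashes), func(i int) bool { return r.hashes[i] >= h })
        if i == len(r.hashes) {
            i = 0
        }
        return r.nodes[r.hashes[i]]
    }

    func main() {
        r := newRing([]string{"lb1", "lb2", "lb3"}, 100)
        fmt.Println(r.pick("203.0.113.7")) // same IP -> same LB, every time
    }

The virtual nodes matter: without them, one unlucky gap between node hashes could dump a big slice of the IP space onto a single box.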