zlacker

[return to "Twitter Is DDOSing Itself"]
1. Topfi+Ew[view] [source] 2023-07-01 21:09:47
>>ZacnyL+(OP)
Speaking from very painful, personal experience, few things are more agitating than being forced to execute on something you fully know is a horrible idea, especially when you tried and failed to communicate this fact to the individual pushing you to go against your best judgement.

Even more so when that person later loudly proclaims that they never made such a request, even when provided with written proof.

I of course cannot say whether the people currently working at Twitter warned that the recent measures could have such major side effects, but I would not be surprised in the slightest, considering their leadership's mode of operation.

Even as someone who very much detests what Twitter has become over the last few months, and who in fact did not like Twitter before the acquisition (partly because the short format makes nuance impossible, but mostly for the effect Tweets' easy embeddability had on reporting; three Tweets from random people should not serve as the main basis for an article, in my opinion), I must say I feel very sorry for the people forced to work at that company under that management.

◧◩
2. goalie+PX[view] [source] 2023-07-02 00:43:23
>>Topfi+Ew
I’ll play the devil’s advocate here, but frontend devs need to smarten up. This is basic error handling that should have been in place for years. Blocking tweets with a 403 or whatever they chose shouldn’t trigger endless retries on short intervals... ever!
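
A minimal sketch of what that baseline looks like (the endpoint handling is hypothetical, obviously not Twitter's actual client code):

    // Sketch: treat 4xx responses as terminal instead of retrying.
    async function fetchTweets(url: string): Promise<unknown> {
      const res = await fetch(url);
      if (res.ok) return res.json();
      if (res.status >= 400 && res.status < 500) {
        // Client errors (403, 429, ...) won't succeed on an identical
        // retry, so surface them instead of looping.
        throw new Error(`request rejected (${res.status}), not retrying`);
      }
      // Only 5xx / network failures are candidates for a retry path.
      throw new Error(`transient failure (${res.status}), caller may retry`);
    }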
◧◩◪
3. sheeps+501[view] [source] 2023-07-02 01:02:51
>>goalie+PX
...and they should be using exponential back-off, if not limiting the number of retries.

Though it’s hard to know for sure what really went down. It could be a number of things, including a lack of subject-matter experts (Elon recently admitted to laying off some people they shouldn’t have).
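
For reference, a back-off loop along those lines might look like this (just a sketch; the base delay and retry cap are made-up numbers, not anything from a real Twitter client):

    // Capped exponential back-off with jitter; the limits are illustrative.
    async function fetchWithBackoff(url: string, maxRetries = 5): Promise<Response> {
      for (let attempt = 0; ; attempt++) {
        const res = await fetch(url);
        // Retry only transient server errors; hand anything else back.
        if (res.status < 500) return res;
        if (attempt >= maxRetries) throw new Error("retry budget exhausted");
        // 1s, 2s, 4s, 8s, ... plus jitter so clients don't retry in lockstep.
        const delayMs = 1000 * 2 ** attempt + Math.random() * 1000;
        await new Promise<void>((resolve) => setTimeout(resolve, delayMs));
      }
    }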

◧◩◪◨
4. Dextro+I01[view] [source] 2023-07-02 01:07:41
>>sheeps+501
Devil's advocate here: did we consider that any such exponential back-off goes out the window when users, faced with a non-working site, just refresh the page, thereby resetting the whole process?
◧◩◪◨⬒
5. NavinF+W51[view] [source] 2023-07-02 02:03:15
>>Dextro+I01
The server load from that is negligible since those requests stop at the load balancer.

On that note, the 10 requests/second in the post is also negligible for the same reason. Only requests that hit backend servers matter.

◧◩◪◨⬒⬓
6. 8n4vid+qv1[view] [source] 2023-07-02 07:03:30
>>NavinF+W51
How does the load balancer know whether they hit the tweet limit or not? Sounds like they'd need to query a DB for that.
◧◩◪◨⬒⬓⬔
7. NavinF+zC4[view] [source] 2023-07-03 10:22:00
>>8n4vid+qv1
There are a million ways to skin that cat.

Personally I'd just cache HTTP 429 responses for 1 minute, but you could also implement rate-limiting inside the load balancer with an in-memory KV store or bloom filter if you wanted to.
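
As an illustration of the in-memory option, a toy fixed-window counter per client IP (made-up limits, and obviously not Twitter's actual setup):

    // Toy fixed-window rate limiter keyed by client IP; the kind of
    // check a load balancer can run entirely in memory.
    const WINDOW_MS = 60_000;
    const MAX_REQUESTS = 600; // ~10 req/s averaged over one minute

    const windows = new Map<string, { start: number; count: number }>();

    function allowRequest(ip: string, now = Date.now()): boolean {
      const w = windows.get(ip);
      if (!w || now - w.start >= WINDOW_MS) {
        windows.set(ip, { start: now, count: 1 });
        return true;
      }
      w.count += 1;
      // Past the limit: answer 429 straight from the edge, and keep doing
      // so until the window rolls over, without touching a backend.
      return w.count <= MAX_REQUESTS;
    }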

Perhaps the context you're missing is that all large sites use ECMP routing and consistent hashing to ensure that requests from the same IP hit the same load balancer. Twitter has only ~238 million daily active users. 10 requests/second on keepalive TCP+TLS connections can be handled by a couple of nginx servers. The linked "Full-stack Drupal developer" has no idea how any of this works, and it's kinda sad how most people in this thread took his post at face value.
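
For the curious, the consistent-hashing step reduces to something like this (simplified; a real router hashes more fields and uses many virtual nodes per balancer):

    import { createHash } from "node:crypto";

    // Simplified consistent-hash ring mapping a client IP to one load
    // balancer, so repeat requests from an IP land on the same node and
    // see the same in-memory rate-limit state.
    function ringPoint(key: string): number {
      return createHash("md5").update(key).digest().readUInt32BE(0);
    }

    function pickBalancer(clientIp: string, balancers: string[]): string {
      const ring = balancers
        .map((node) => ({ node, point: ringPoint(node) }))
        .sort((a, b) => a.point - b.point);
      const h = ringPoint(clientIp);
      // First ring point at or past the key's hash, wrapping to the start.
      return (ring.find((p) => p.point >= h) ?? ring[0]).node;
    }

    // e.g. pickBalancer("203.0.113.7", ["lb-1", "lb-2", "lb-3"]) returns
    // the same balancer for that IP until the node list changes.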
