zlacker

[parent] [thread] 4 comments
1. camero+(OP)[view] [source] 2023-07-02 03:09:53
> So about the only thing they could do was issue multiple, parallel requests and hope that at least one of them was fast.

lol nobody would do this to solve this problem, because it doesn't even remotely solve it or give the appearance of solving it; if anything it's guaranteed to make things slower

replies(1): >>kortil+h1
2. kortil+h1[view] [source] 2023-07-02 03:21:42
>>camero+(OP)
Google did this a long time ago. It’s more nuanced than you think. Two requests would be issued, and the first one to be acknowledged would cause the other to be cancelled. There was a public paper on this from Dean iirc, and the method is a decade old.
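A minimal sketch of the idea in Python asyncio (the replica names and simulated latencies are made up for illustration; a real hedged client would also typically delay the duplicate request rather than firing both at once):

```python
import asyncio

async def fake_fetch(replica: str, delay: float) -> str:
    # Stand-in for a backend call; `delay` simulates server latency.
    await asyncio.sleep(delay)
    return f"response from {replica}"

async def hedged_request(delays: dict[str, float]) -> str:
    """Send the same request to every replica, return the first
    response, and cancel the stragglers."""
    tasks = [asyncio.create_task(fake_fetch(name, d))
             for name, d in delays.items()]
    done, pending = await asyncio.wait(
        tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()  # the losing duplicate is abandoned
    return done.pop().result()

async def main() -> str:
    # Replica B is slow this time; hedging hides its latency.
    return await hedged_request({"A": 0.01, "B": 0.5})

print(asyncio.run(main()))  # -> response from A
```

The tail-latency win comes from the fastest-of-N race: one slow replica no longer delays the caller, at the cost of extra load from the duplicate.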
replies(3): >>sagarm+N2 >>dmazzo+3v >>camero+761
3. sagarm+N2[view] [source] [discussion] 2023-07-02 03:43:28
>>kortil+h1
The technique is called "request hedging".
4. dmazzo+3v[view] [source] [discussion] 2023-07-02 09:21:09
>>kortil+h1
I think the fundamental difference is that Google was using request hedging within their own network, sending the same request to two different internal servers in case one was slow, while Twitter appears to be sending the same request to the same server over the public Internet.
5. camero+761[view] [source] [discussion] 2023-07-02 15:02:59
>>kortil+h1
This was very illuminating, thanks. Learned something new.