zlacker

[return to "Show HN: Hatchet – Open-source distributed task queue"]
1. fcsp+t41[view] [source] 2024-03-08 22:26:28
>>abelan+(OP)
> Hatchet is built on a low-latency queue (25ms average start)

That seems pretty long - am I misunderstanding something? My understanding is that this means the time from enqueue to the job being processed; maybe someone can enlighten me.

◧◩
2. mhh__+t51[view] [source] 2024-03-08 22:33:09
>>fcsp+t41
It's only a few billion instructions on a decent sized server these days
◧◩◪
3. spencz+De1[view] [source] 2024-03-08 23:41:39
>>mhh__+t51
Damn, I want one of these 100GHz CPUs you have, that sounds great.

I think you mean million :)

◧◩◪◨
4. jlokie+AH1[view] [source] 2024-03-09 05:43:46
>>spencz+De1
You'd be surprised. 1 billion instructions in 25ms is realistic these days.

My laptop can execute about 400 billion CPU instructions per second on battery.

That's about 10 billion instructions in 25ms.

That's the CPU alone, i.e. not including the GPU, which would increase the total considerably. It's also not counting SIMD lanes as separate: the count is of bona fide assembly-language instructions.

It comes from cores running at ~4GHz, issuing 8 instructions per clock, times 12 cores, plus 4 additional "efficiency" cores adding a bit more. People have confirmed by measurement that 8 instructions per clock is achievable (or close to it) in well-optimised code. Average code is more like 2-3 per cycle.
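A quick sketch of that arithmetic (peak figures as claimed above; the efficiency cores are left out, so this is a slight underestimate of the 400 billion total):

```python
# Back-of-envelope check of the comment's numbers.
ghz = 4.0     # core clock in GHz (~4GHz per the comment)
ipc = 8       # claimed peak instructions issued per clock
cores = 12    # performance cores only; efficiency cores ignored here

per_second = ghz * 1e9 * ipc * cores   # instructions per second
in_25ms = per_second * 0.025           # instructions in a 25ms window

print(f"{per_second / 1e9:.0f} billion instructions/s")      # 384 billion
print(f"{in_25ms / 1e9:.1f} billion instructions in 25 ms")  # 9.6 billion
```

Even at a pessimistic 2 instructions per clock, 12 cores at 4GHz still retire ~96 billion instructions per second, i.e. over 2 billion in 25ms.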

That's only for short periods, as the CPU is likely to get hot and thermally throttle even with its fan. But even when it throttles it'll still exceed 1 billion instructions in 25ms.

For perspective on how far silicon has come, the GPU on my laptop is reported to do about 14 trillion floating-point 32-bit calculations per second.
