zlacker

[parent] [thread] 4 comments
1. hughrr+(OP)[view] [source] 2021-06-12 10:00:12
It’s only the same problem until someone dumps 25,000 entries in your queue.
replies(2): >>kasey_+B8 >>crazyg+wC
2. kasey_+B8[view] [source] 2021-06-12 11:34:20
>>hughrr+(OP)
I’ve seen all manner of hokey queue implementations in SQL going back about 20 years, and all of them could handle 25k-entry enqueue bursts. That wasn’t a problem for a Sybase database on a commodity host circa 2000.

I think if I were going to argue against using DBs as queues, it would be around heavy parallel-write use cases, read and write latency concerns, and scaling to millions of events per second.

If you don’t have those concerns, then using a properly normalized and protected schema for your queues (which you are doing anyway, right? Because if not, you are already shooting your toes off) goes a very long way, removes a big operational burden, and eliminates tons of failure modes.

replies(2): >>justso+Je >>hughrr+Bn
3. justso+Je[view] [source] [discussion] 2021-06-12 12:53:15
>>kasey_+B8
I agree.

Going from a single database to a database plus a queue means two server processes to manage, maintain, observe, test, etc.

I actually start with SQLite to reduce distributed state as much as possible, then move to something else once it’s proven that it won’t work.

4. hughrr+Bn[view] [source] [discussion] 2021-06-12 14:18:13
>>kasey_+B8
It wasn't a problem for Sybase on a commodity host circa 2000 because clearly that host wasn't doing a whole lot of other stuff. It's a big problem for our 48 core nodes with 2TiB of RAM and a metric shit ton of DAS NVMe. Ergo anecdotes don't scale either.

To clarify we just moved this entire problem to SQS.

5. crazyg+wC[view] [source] 2021-06-12 16:41:03
>>hughrr+(OP)
What database can't handle an extra 25,000 entries?

That's... nothing. Databases handle billions of rows effortlessly with a b-tree index, so I'm not really sure what point you're trying to make?
