zlacker

[parent] [thread] 2 comments
1. kasey_+(OP)[view] [source] 2021-06-12 11:34:20
I’ve seen all manner of hokey queue implementations in SQL going back about 20 years, and all of them could handle 25k enqueue bursts. That wasn’t a problem for a Sybase database on a commodity host circa 2000.

I think if I were going to argue against using DBs as queues, it would be around heavy parallel-write use cases, latency concerns for both reads and writes, and scaling to millions of events per second.

If you don’t have those concerns, a properly normalized and protected schema (which you’re doing anyway, right? Because if not, you’re already shooting your toes off) goes a very long way for queues, removes a very big operational burden, and eliminates tons of failure modes.
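A minimal sketch of that pattern, using Python's sqlite3 as a stand-in for any SQL database. The table name, status column, and `BEGIN IMMEDIATE` locking are illustrative assumptions, not a prescription; bigger databases have their own idioms for this (e.g. row-level locking on the claim):

```python
import sqlite3

# Autocommit mode so we can issue explicit BEGIN/COMMIT ourselves.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute(
    "CREATE TABLE jobs ("
    "  id INTEGER PRIMARY KEY,"
    "  payload TEXT NOT NULL,"
    "  status TEXT NOT NULL DEFAULT 'pending')"
)

def enqueue(payload):
    conn.execute("INSERT INTO jobs (payload) VALUES (?)", (payload,))

def claim():
    # BEGIN IMMEDIATE takes the write lock up front, so the
    # select-then-update below is atomic against other writers.
    conn.execute("BEGIN IMMEDIATE")
    row = conn.execute(
        "SELECT id, payload FROM jobs"
        " WHERE status = 'pending' ORDER BY id LIMIT 1"
    ).fetchone()
    if row is None:
        conn.execute("COMMIT")
        return None
    conn.execute("UPDATE jobs SET status = 'done' WHERE id = ?", (row[0],))
    conn.execute("COMMIT")
    return row

enqueue("a")
enqueue("b")
print(claim())  # (1, 'a')
```

The same three statements (insert, select oldest pending, mark done) are the whole queue; everything else — retries, dead-lettering, visibility — is just more columns on the same table, covered by the same backups and the same transactions as the rest of your data.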

replies(2): >>justso+86 >>hughrr+0f
2. justso+86[view] [source] 2021-06-12 12:53:15
>>kasey_+(OP)
I agree.

Going from a single database to a (database + queue) means two server processes to manage, maintain, observe, test, etc.

I actually start with SQLite to reduce distributed state as much as possible, then move to something else only once it’s proven that it won’t work.

3. hughrr+0f[view] [source] 2021-06-12 14:18:13
>>kasey_+(OP)
It wasn't a problem for Sybase on a commodity host circa 2000 because clearly that host wasn't doing a whole lot else. It's a big problem for our 48-core nodes with 2 TiB of RAM and a metric shit ton of DAS NVMe. Ergo anecdotes don't scale either.

To clarify we just moved this entire problem to SQS.
