zlacker

[parent] [thread] 6 comments
1. MattIP+(OP)[view] [source] 2023-09-25 00:13:26
We process around 1 million events a day using a queue like this in Postgres, and have processed over 400 million events since the system it's part of went live. The only issue we've had was slow queries due to the table size, since we keep an archive of all the events processed, but scheduled vacuums have kept that under control.
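For reference, a minimal sketch of that kind of maintenance, assuming a queue table named jobs (table name and settings are illustrative):

    -- Tighten per-table autovacuum so dead tuples from processed
    -- events are reclaimed before the table bloats:
    ALTER TABLE jobs SET (
        autovacuum_vacuum_scale_factor = 0.01,
        autovacuum_vacuum_cost_delay = 0
    );

    -- Or run an explicit pass from a scheduled job (e.g. cron):
    VACUUM (ANALYZE) jobs;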
replies(2): >>djbusb+B4 >>andrew+W7
2. djbusb+B4[view] [source] 2023-09-25 01:19:44
>>MattIP+(OP)
Active Queue table and then archive jobs to a JobDone table? I do that. The Queue table stays small, but the archive goes back many months.
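A single statement can do that move atomically; a sketch with assumed names and a status column, and assuming both tables share the same layout:

    -- Move finished jobs from the active queue into the archive:
    WITH done AS (
        DELETE FROM queue
        WHERE status = 'done'
        RETURNING *
    )
    INSERT INTO jobdone
    SELECT * FROM done;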
replies(2): >>MattIP+C7 >>pauldd+L7
3. MattIP+C7[view] [source] [discussion] 2023-09-25 01:47:53
>>djbusb+B4
We just have a single table, with a column indicating if the job has been taken by a worker or not. Probably could get a bit more performance out of it by splitting into two tables, but it works as it is for now.
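A sketch of that shape (schema and query are illustrative, not the actual system):

    CREATE TABLE jobs (
        id       bigserial PRIMARY KEY,
        payload  jsonb NOT NULL,
        taken    boolean NOT NULL DEFAULT false,
        taken_at timestamptz
    );

    -- Workers claim the next job; SKIP LOCKED keeps concurrent
    -- workers from blocking on each other's locked rows.
    UPDATE jobs
    SET taken = true, taken_at = now()
    WHERE id = (
        SELECT id FROM jobs
        WHERE NOT taken
        ORDER BY id
        FOR UPDATE SKIP LOCKED
        LIMIT 1
    )
    RETURNING *;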
4. pauldd+L7[view] [source] [discussion] 2023-09-25 01:49:50
>>djbusb+B4
In modern PG you can use a partitioned table for a similar effect.
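For example (illustrative DDL, using PostgreSQL's declarative partitioning):

    CREATE TABLE jobs (
        id      bigserial,
        status  text NOT NULL,
        payload jsonb NOT NULL
    ) PARTITION BY LIST (status);

    CREATE TABLE jobs_pending PARTITION OF jobs FOR VALUES IN ('pending');
    CREATE TABLE jobs_done    PARTITION OF jobs FOR VALUES IN ('done');

An UPDATE that changes status then moves the row between partitions automatically (PostgreSQL 11 and later), so archiving is just a status change and the hot partition stays small.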
5. andrew+W7[view] [source] 2023-09-25 01:51:31
>>MattIP+(OP)
Partial indexes might help.
replies(2): >>emilse+Oh >>runeks+lN1
6. emilse+Oh[view] [source] [discussion] 2023-09-25 03:47:24
>>andrew+W7
Exactly. A partial index should make things fly here.
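For the taken-flag table described upthread, a sketch might be:

    -- Index only the rows workers actually scan; the index stays
    -- small no matter how many finished jobs accumulate.
    CREATE INDEX jobs_pending_idx ON jobs (id) WHERE NOT taken;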
7. runeks+lN1[view] [source] [discussion] 2023-09-25 15:21:09
>>andrew+W7
Also: an ordered index that matches the ordering clause in your job-grabbing query. This is useful if you have lots of pending jobs.
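A sketch, assuming hypothetical priority and created_at columns:

    -- A partial index matching the grab query's ORDER BY lets
    -- Postgres read just the first eligible row instead of sorting.
    CREATE INDEX jobs_grab_idx
        ON jobs (priority DESC, created_at)
        WHERE NOT taken;

    SELECT id FROM jobs
    WHERE NOT taken
    ORDER BY priority DESC, created_at
    FOR UPDATE SKIP LOCKED
    LIMIT 1;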