
[return to "Postgres is a great pub/sub and job server (2019)"]
1. aneesh+qO 2021-12-18 06:33:24
>>anonu+(OP)
I think the key concept here is atomicity. If some API is responsible for creating a job, then storing it in the database AND publishing it can never be a single atomic operation. The database and the pub/sub server are separate network connections from the application server. For example, if you save the record first and then publish, it's quite possible to save the record in the database and then lose the connection to the pub/sub server when publishing. If that happens, you can never know whether the pub/sub server received the request and published it. In systems where it's critical to guarantee both that a record was saved and that it was published, the only way to do that is to use a single external connection - in this case to a Postgres DB. We've used the same setup on an AWS RDS t2.medium instance to process over 600 records/second.
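
A minimal sketch of that single-connection approach in Go, assuming the pgx driver and hypothetical names (a jobs table and a job_created channel - none of these are specified in the comment): the job row and the notification go through one Postgres transaction, so either both happen or neither does.

    package main

    import (
        "context"
        "log"

        "github.com/jackc/pgx/v5"
    )

    // enqueueJob stores the job and emits the notification in one transaction,
    // so listeners are only notified about rows that actually got committed.
    func enqueueJob(ctx context.Context, conn *pgx.Conn, payload string) error {
        tx, err := conn.Begin(ctx)
        if err != nil {
            return err
        }
        defer tx.Rollback(ctx) // safe even after Commit; the error is ignored here

        if _, err := tx.Exec(ctx,
            `INSERT INTO jobs (payload, status) VALUES ($1, 'pending')`, payload); err != nil {
            return err
        }

        // NOTIFY is transactional: it is only delivered if the COMMIT below succeeds.
        if _, err := tx.Exec(ctx, `SELECT pg_notify('job_created', $1)`, payload); err != nil {
            return err
        }

        return tx.Commit(ctx)
    }

    func main() {
        ctx := context.Background()
        conn, err := pgx.Connect(ctx, "postgres://localhost/app")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close(ctx)

        if err := enqueueJob(ctx, conn, `{"task":"send_email"}`); err != nil {
            log.Fatal(err)
        }
    }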
2. kgeist+GR 2021-12-18 07:16:05
>>aneesh+qO
>If some API is responsible for creating a job, then storing it in the database AND publishing it can never be a single atomic operation.

We use a transactional outbox for that - we insert a record about the event into an event table in the same transaction as the rest of the operation (which provides atomicity), and then a dedicated goroutine reads this table and pushes new events to the message broker on a different server. In our design there are multiple services that might want to subscribe to the event, and the rule is that they shouldn't share their DBs (for proper scaling), so we can't handle event dispatch in a single central app instance. Of course we could implement our own pub/sub server in Go on top of a DB like Postgres if we wanted, but what's the point of reinventing the wheel when there are already battle-tested tools for that, considering you'd have to reimplement queues, exchanges, topics, delivery guarantees, proper error handling, monitoring, etc.
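
A minimal sketch of such an outbox relay goroutine in Go, again using the pgx driver; the outbox table layout and the publishToBroker stub are assumptions for illustration, not the commenter's actual code. Publishing first and marking the row as sent second gives at-least-once delivery, so consumers have to be idempotent.

    package main

    import (
        "context"
        "log"
        "time"

        "github.com/jackc/pgx/v5/pgxpool"
    )

    // publishToBroker is a stand-in for the real broker client (RabbitMQ, Kafka, ...).
    func publishToBroker(topic string, payload []byte) error { return nil }

    type event struct {
        id      int64
        topic   string
        payload []byte
    }

    // relayOutbox polls the outbox table and forwards unsent rows to the broker.
    func relayOutbox(ctx context.Context, pool *pgxpool.Pool) {
        for {
            select {
            case <-ctx.Done():
                return
            case <-time.After(time.Second):
            }

            rows, err := pool.Query(ctx,
                `SELECT id, topic, payload FROM outbox
                 WHERE sent_at IS NULL ORDER BY id LIMIT 100`)
            if err != nil {
                continue // transient DB error; try again on the next tick
            }
            var batch []event
            for rows.Next() {
                var e event
                if rows.Scan(&e.id, &e.topic, &e.payload) == nil {
                    batch = append(batch, e)
                }
            }
            rows.Close()

            for _, e := range batch {
                // Publish first, mark as sent second: a crash in between means the
                // event is delivered again, so consumers must be idempotent.
                if err := publishToBroker(e.topic, e.payload); err != nil {
                    break
                }
                if _, err := pool.Exec(ctx,
                    `UPDATE outbox SET sent_at = now() WHERE id = $1`, e.id); err != nil {
                    break
                }
            }
        }
    }

    func main() {
        ctx := context.Background()
        pool, err := pgxpool.New(ctx, "postgres://localhost/app")
        if err != nil {
            log.Fatal(err)
        }
        defer pool.Close()

        go relayOutbox(ctx, pool) // the relay runs as a background goroutine
        select {}                 // block forever; real code would handle shutdown
    }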
