zlacker

[return to "The part of Postgres we hate the most: Multi-version concurrency control"]
1. cybera+kL 2023-04-26 20:57:45
>>andren+(OP)
Yup. A lot of heavy users of Postgres eventually hit the same barrier. Here's another take from Uber: https://www.uber.com/blog/postgres-to-mysql-migration/

I had a similar personal experience. In my previous job we used Postgres to implement a task queuing system, and it created a major bottleneck, resulting in tons of concurrency failures and bloat.

And most dangerously, the system failed catastrophically under load. As the load increased, most transactions ended in concurrency failures, so very little actual work got committed. That grew the backlog of outstanding tasks, which in turn drove the concurrency-failure rate even higher.

And this can happen suddenly: one moment the system behaves well, with tasks being processed at a good rate, and the next moment the queue blows up and nothing works.

I re-implemented this system using pessimistic locking, and it turned out to work much better. Even under very high load, the system could still make forward progress.

The downside was having to make sure that no deadlocks could happen.
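
For reference, here's a minimal sketch of the pattern (not our actual code; it assumes a hypothetical tasks table with id/payload/status columns). It uses SELECT ... FOR UPDATE SKIP LOCKED, which also sidesteps most of the deadlock risk, since workers simply skip rows they can't lock instead of waiting on each other:

    import psycopg2

    def process(payload):
        pass  # placeholder for the actual task work

    # Hypothetical connection string.
    conn = psycopg2.connect("dbname=queue")

    def claim_and_run_one():
        with conn:  # commit on success, roll back on exception
            with conn.cursor() as cur:
                # Pessimistically lock one pending row; SKIP LOCKED makes
                # concurrent workers skip rows that are already claimed
                # instead of blocking or aborting.
                cur.execute("""
                    SELECT id, payload FROM tasks
                    WHERE status = 'pending'
                    ORDER BY id
                    LIMIT 1
                    FOR UPDATE SKIP LOCKED
                """)
                row = cur.fetchone()
                if row is None:
                    return False  # nothing claimable right now
                task_id, payload = row
                process(payload)
                cur.execute(
                    "UPDATE tasks SET status = 'done' WHERE id = %s",
                    (task_id,),
                )
        return True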

2. osigur+hc1 2023-04-27 00:01:49
>>cybera+kL
>> In my previous job we used Postgres to implement a task queuing system, and it created a major bottleneck, resulting in tons of concurrency failures and bloat

Yet, every month or two an article about doing exactly this is upvoted to near the top of HN. It can of course work, but it might be hard to replace years later once "barnacles" have grown on it. Every situation is different of course.
