zlacker

[return to "Postgres Postmaster does not scale"]
1. paulkr+Nj2[view] [source] 2026-02-05 07:33:31
>>davidg+(OP)
Can’t believe they needed this investigation to realize they need a connection pooler. It’s a fundamental component of every large-scale Postgres deployment, especially for serverless environments.
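For instance, a pooler such as PgBouncer in transaction mode multiplexes thousands of client connections onto a few dozen actual Postgres backends. A minimal, illustrative config sketch (host, database name, and sizes are assumptions, not recommendations):

```ini
; pgbouncer.ini -- illustrative values only
[databases]
appdb = host=127.0.0.1 port=5432 dbname=appdb

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
pool_mode = transaction   ; return the server connection to the pool at transaction end
max_client_conn = 5000    ; client connections the pooler will accept
default_pool_size = 50    ; actual Postgres backends per database/user pair
```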
◧◩
2. jstron+tl2[view] [source] 2026-02-05 07:52:38
>>paulkr+Nj2
can't believe postgres still uses a process-per-connection model that leads to endless problems like this one.
◧◩◪
3. IsTom+wL2[view] [source] 2026-02-05 11:30:27
>>jstron+tl2
You can't process significantly more queries at the same time than you've got CPU cores anyway.
◧◩◪◨
4. namibj+DO2[view] [source] 2026-02-05 12:01:01
>>IsTom+wL2
Much of the time in a transaction can reasonably be non-db-CPU time, be it I/O wait or client CPU processing between queries. Note I'm not talking about transactions that run >10 seconds, just ones where the queries themselves are technically quite cheap. At 10% db-CPU usage, you get a 1-second transaction from just 100 ms of CPU.
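A back-of-envelope sketch of that arithmetic (the 100 ms / 10% figures are from the comment above; the core count is an assumed example):

```python
# A transaction that needs only 100 ms of database CPU but spends 90% of
# its wall time on I/O waits or client-side processing still occupies its
# connection for a full second.
db_cpu_ms = 100          # actual CPU work on the database side
db_cpu_fraction = 0.10   # share of wall time that is db CPU
txn_wall_ms = db_cpu_ms / db_cpu_fraction  # total wall time per transaction

# By Little's law, keeping N cores busy then takes far more than N open
# connections: concurrency = cores / cpu_fraction.
cores = 16  # assumed machine size for illustration
needed_connections = cores / db_cpu_fraction

print(txn_wall_ms)          # 1000.0
print(needed_connections)   # 160.0
```

So even with cheap queries, a core-count-sized connection limit leaves the CPUs mostly idle.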
◧◩◪◨⬒
5. vbezhe+al3[view] [source] 2026-02-05 15:29:32
>>namibj+DO2
In a properly optimized database, the vast majority of queries will hit indices and most data will be in the memory cache, so the majority of transactions will be CPU- or RAM-bound. So increasing the number of concurrent transactions will reduce throughput. There will be a few transactions waiting for I/O, but if the majority of transactions are waiting for I/O, it's either a horrifically inefficient database or very non-standard usage.
◧◩◪◨⬒⬓
6. CodesI+Nz3[view] [source] 2026-02-05 16:50:32
>>vbezhe+al3
Your arguments make sense for concurrent queries (though high-latency storage like S3 is becoming increasingly popular, especially for analytic loads).

But transactions aren't processing queries all the time. Often the application will do processing between sending queries to the database. During that time a transaction is open, but doesn't do any work on the database server.

◧◩◪◨⬒⬓⬔
7. vbezhe+l14[view] [source] 2026-02-05 18:52:07
>>CodesI+Nz3
That's bad application architecture. Database work should be concentrated in minimal transactional units, and the connection should be released between these units. All data should be prepared before the unit starts, and additional processing should take place after the transaction has ended. Long transactions cause locks, even deadlocks, and should generally be avoided. That's my experience at least. Sometimes a business transaction has to be split into several database transactions.
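A sketch of that pattern, using Python's stdlib sqlite3 as a stand-in for a Postgres driver (the accounts table and transfer values are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 50)")
conn.commit()

# 1. Prepare all inputs *before* the transactional unit starts.
transfer = {"src": 1, "dst": 2, "amount": 30}

# 2. Keep the transactional unit minimal: two updates, then commit.
with conn:  # opens a transaction; commits on success, rolls back on error
    conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                 (transfer["amount"], transfer["src"]))
    conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                 (transfer["amount"], transfer["dst"]))

# 3. Any further processing (notifications, reports) happens after the
#    commit, with no transaction open and the connection free for reuse.
balances = dict(conn.execute("SELECT id, balance FROM accounts ORDER BY id"))
print(balances)  # {1: 70, 2: 80}
```

The point is that nothing between `with conn:` and the commit waits on the network or the application, so locks are held for microseconds rather than for the whole business operation.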
◧◩◪◨⬒⬓⬔⧯
8. namibj+Ei4[view] [source] 2026-02-05 20:04:51
>>vbezhe+l14
Your database usage shouldn't involve application-level locks; MVCC will restart your transaction if needed to resolve concurrency conflicts.