How connection pooling actually works
Published March 11, 2026 · 8 min read
Opening a database connection is expensive. The TCP handshake, TLS negotiation, and PostgreSQL authentication sequence can take 30–100ms — longer than many queries themselves. Connection pooling solves this by keeping a set of established connections ready for reuse.
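To put a number on that cost in your own environment, you can time a cold connection. Below is a minimal sketch in Go using the pgx driver (github.com/jackc/pgx/v5); the DSN is a placeholder, not a recommendation.

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/jackc/pgx/v5"
)

func main() {
	ctx := context.Background()
	// Placeholder DSN; substitute your own host and credentials.
	dsn := "postgres://app:secret@localhost:5432/app?sslmode=require"

	start := time.Now()
	conn, err := pgx.Connect(ctx, dsn) // TCP handshake, TLS, and auth all happen here
	if err != nil {
		panic(err)
	}
	defer conn.Close(ctx)

	fmt.Printf("cold connect took %v\n", time.Since(start))
}
```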
The pool lifecycle
When your application starts, the pool opens a configurable number of connections — the minimum pool size. As load increases, the pool grows up to a maximum. Connections that sit idle longer than the idle timeout are closed and removed.
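Here is how those three knobs might look with pgx's built-in pool (github.com/jackc/pgx/v5/pgxpool); the values and DSN are illustrative assumptions, not recommendations.

```go
package main

import (
	"context"
	"log"
	"time"

	"github.com/jackc/pgx/v5/pgxpool"
)

func newPool(ctx context.Context, dsn string) (*pgxpool.Pool, error) {
	cfg, err := pgxpool.ParseConfig(dsn)
	if err != nil {
		return nil, err
	}
	cfg.MinConns = 4                      // pool maintains at least this many connections
	cfg.MaxConns = 16                     // hard ceiling under load
	cfg.MaxConnIdleTime = 5 * time.Minute // connections idle longer than this are closed
	return pgxpool.NewWithConfig(ctx, cfg)
}

func main() {
	ctx := context.Background()
	pool, err := newPool(ctx, "postgres://app:secret@localhost:5432/app") // placeholder DSN
	if err != nil {
		log.Fatal(err)
	}
	defer pool.Close()
}
```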
Choosing pool size
A common mistake is setting the pool size to match your application's thread count. This ignores how PostgreSQL handles connections on the server side: each connection is backed by a dedicated backend process and consumes roughly 5–10 MB of server memory. Better heuristics include:
- Start with `2 × CPU cores` on the database host
- Monitor `pg_stat_activity` in production (see the sketch after this list)
- Tune down if most connections are sitting in the idle state
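For the monitoring step, one simple approach is to group `pg_stat_activity` by state from the application itself. A sketch, again using pgxpool with a placeholder DSN:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/jackc/pgx/v5/pgxpool"
)

// countConnStates groups server-side connections by state, so a pool that is
// mostly idle shows up immediately.
func countConnStates(ctx context.Context, pool *pgxpool.Pool) (map[string]int, error) {
	rows, err := pool.Query(ctx, `
		SELECT COALESCE(state, 'unknown'), count(*)
		FROM pg_stat_activity
		WHERE datname = current_database()
		GROUP BY 1`)
	if err != nil {
		return nil, err
	}
	defer rows.Close()

	counts := make(map[string]int)
	for rows.Next() {
		var state string
		var n int
		if err := rows.Scan(&state, &n); err != nil {
			return nil, err
		}
		counts[state] = n
	}
	return counts, rows.Err()
}

func main() {
	ctx := context.Background()
	pool, err := pgxpool.New(ctx, "postgres://app:secret@localhost:5432/app") // placeholder DSN
	if err != nil {
		log.Fatal(err)
	}
	defer pool.Close()

	counts, err := countConnStates(ctx, pool)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(counts)
}
```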
"The right pool size is the smallest one that keeps your p99 latency acceptable under peak load. Bigger pools cause more contention, not less."
Before reaching for an external pooling proxy, instrument your application. Add connection acquisition time to your traces. If p99 acquisition time is under 5ms, your current setup is likely fine. Optimize the queries, not the pool.
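As a rough sketch of that instrumentation, you can time each acquire yourself and also read the pool's cumulative counters; pgxpool exposes both. Where the numbers end up (traces, histograms) is left as an assumption here.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/jackc/pgx/v5/pgxpool"
)

// timedAcquire measures how long a caller waits to get a connection from the
// pool. In a real service you would record this duration in your tracing or
// metrics backend instead of returning it.
func timedAcquire(ctx context.Context, pool *pgxpool.Pool) (*pgxpool.Conn, time.Duration, error) {
	start := time.Now()
	conn, err := pool.Acquire(ctx)
	return conn, time.Since(start), err
}

func main() {
	ctx := context.Background()
	pool, err := pgxpool.New(ctx, "postgres://app:secret@localhost:5432/app") // placeholder DSN
	if err != nil {
		log.Fatal(err)
	}
	defer pool.Close()

	conn, waited, err := timedAcquire(ctx, pool)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Release()
	fmt.Printf("acquisition took %v\n", waited)

	// Cumulative counters: total acquires, total time spent acquiring, and how
	// often an acquire found no idle connection available.
	s := pool.Stat()
	fmt.Printf("acquires=%d total_wait=%v empty_acquires=%d\n",
		s.AcquireCount(), s.AcquireDuration(), s.EmptyAcquireCount())
}
```

The cumulative counters only give you an average; percentiles like p99 come from recording the individual acquire timings in whatever histogram your tracing stack provides.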