Heading

h1–h6 headings with aesthetic-appropriate sizing, weight, and line height.

View standalone →

The future of infrastructure is serverless

Rethinking how teams deploy software

Continuous delivery without the overhead

Configure your deployment pipeline

Environment variables and secrets
Setting a production secret

Paragraph

Body text with proper line height, measure, and muted secondary variant.

View standalone →

Lead

Most outages aren't caused by exotic failure modes. They happen because a configuration change was applied to production before staging, a retry loop overwhelmed a downstream service, or a migration ran without a rollback plan.

Body

Rate limiting protects your API from both accidental and intentional overuse. A well-designed rate limiter communicates its limits clearly through response headers, returns meaningful error messages when the limit is reached, and uses a sliding window algorithm to avoid the thundering herd problem that fixed-window approaches can cause at interval boundaries.
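A minimal sliding-window limiter along those lines might look like the sketch below. The class and field names are illustrative, not from any specific library, and the injectable clock exists only to make the behavior easy to test:

```javascript
// Sliding-window rate limiter sketch. Tracks request timestamps per key
// and allows a request only if fewer than `limit` landed inside the
// trailing window — so limits drain gradually instead of resetting all
// at once at a fixed-window boundary.
class SlidingWindowLimiter {
  constructor({ limit = 100, windowMs = 60_000, now = Date.now } = {}) {
    this.limit = limit;
    this.windowMs = windowMs;
    this.now = now;          // injectable clock, for testing
    this.hits = new Map();   // key -> timestamps of recent requests
  }

  // Returns { allowed, remaining, retryAfterMs } so a caller can set
  // rate-limit response headers and a meaningful error body.
  check(key) {
    const t = this.now();
    const cutoff = t - this.windowMs;
    const recent = (this.hits.get(key) || []).filter(ts => ts > cutoff);
    if (recent.length >= this.limit) {
      this.hits.set(key, recent);
      return { allowed: false, remaining: 0, retryAfterMs: recent[0] + this.windowMs - t };
    }
    recent.push(t);
    this.hits.set(key, recent);
    return { allowed: true, remaining: this.limit - recent.length, retryAfterMs: 0 };
  }
}
```

Because the window slides with each request, a burst that exhausts the limit at the end of one minute cannot be immediately followed by a second full burst at the start of the next.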

Small / Muted

Billing is calculated at the end of each calendar month based on your peak active connections during that period. Charges appear on your invoice within 3 business days of month close. For questions about your bill, contact billing support.

Prose

Long-form content container for articles and blog posts with full typographic rhythm.

View standalone →

How connection pooling actually works

Published March 11, 2026 · 8 min read

Opening a database connection is expensive. The TCP handshake, TLS negotiation, and PostgreSQL authentication sequence can take 30–100ms — longer than many queries themselves. Connection pooling solves this by keeping a set of established connections ready for reuse.

The pool lifecycle

When your application starts, the pool opens a configurable number of connections — the minimum pool size. As load increases, the pool grows up to a maximum. Connections that sit idle longer than the idle timeout are closed and removed.
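That lifecycle can be sketched as a toy pool. This is illustrative only: `createConn` and `closeConn` stand in for a real driver, and a production pool would queue waiting callers rather than throw when exhausted:

```javascript
// Toy connection pool demonstrating the lifecycle: warm to `min` at
// startup, grow under load up to `max`, and reap connections that sit
// idle past the timeout.
class Pool {
  constructor({ min = 2, max = 10, idleTimeoutMs = 30_000, createConn, closeConn }) {
    Object.assign(this, { min, max, idleTimeoutMs, createConn, closeConn });
    this.idle = [];          // { conn, since } entries, oldest first
    this.inUse = new Set();
    // Warm the pool to its minimum size at startup.
    for (let i = 0; i < min; i++) this.idle.push({ conn: createConn(), since: Date.now() });
  }

  acquire() {
    const entry = this.idle.pop();
    if (entry) { this.inUse.add(entry.conn); return entry.conn; }
    if (this.inUse.size < this.max) {     // grow under load, up to max
      const conn = this.createConn();
      this.inUse.add(conn);
      return conn;
    }
    throw new Error("pool exhausted");    // real pools queue the caller instead
  }

  release(conn) {
    this.inUse.delete(conn);
    this.idle.push({ conn, since: Date.now() });
  }

  // Close connections idle longer than the timeout, but never shrink
  // below the configured minimum.
  reap(now = Date.now()) {
    while (this.idle.length + this.inUse.size > this.min &&
           this.idle.length > 0 && now - this.idle[0].since > this.idleTimeoutMs) {
      this.closeConn(this.idle.shift().conn);
    }
  }
}
```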

Choosing pool size

A common mistake is setting the pool size to match your thread count. This ignores how PostgreSQL handles connections on the server side. Each connection consumes roughly 5–10 MB of server memory. Better heuristics include:

  • Start with 2 × CPU cores on the database host
  • Monitor pg_stat_activity in production
  • Tune down if most connections are in the idle state
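The first heuristic reduces to a one-line helper. The clamp bounds here are illustrative defaults, and the core count must be passed in because the application host cannot see the database host's hardware:

```javascript
// Starting-point pool size: twice the CPU core count on the *database*
// host, clamped to a sane range. Tune from here using live metrics.
function startingPoolSize(dbCpuCores, { min = 2, max = 50 } = {}) {
  return Math.max(min, Math.min(max, 2 * dbCpuCores));
}
```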

"The right pool size is the smallest one that keeps your p99 latency acceptable under peak load. Bigger pools cause more contention, not less."

— Brandur Leach, Crunchy Data

Before reaching for a proxy, instrument your application. Add connection acquisition time to your traces. If p99 acquisition time is under 5ms, your current setup is likely fine. Optimize the queries, not the pool.
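The instrumentation suggested above can be as small as a wrapper around the pool's acquire call. This sketch assumes only that `pool` exposes an async `acquire()`; the percentile calculation is the illustrative part:

```javascript
// Wrap a pool to record connection acquisition time, then report p99
// before deciding whether a proxy is worth adding.
function instrumentAcquire(pool, samples = []) {
  return {
    async acquire() {
      const start = performance.now();
      const conn = await pool.acquire();
      samples.push(performance.now() - start);  // record acquisition latency
      return conn;
    },
    p99() {
      if (samples.length === 0) return 0;
      const sorted = [...samples].sort((a, b) => a - b);
      return sorted[Math.min(sorted.length - 1, Math.floor(sorted.length * 0.99))];
    },
  };
}
```

In a real service these samples would feed a tracing span or histogram metric rather than an in-memory array.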

Code Block

Monospaced code display with syntax-highlighted appearance and copy affordance.

View standalone →
JavaScript
async function fetchWithRetry(url, options = {}) {
  const { maxRetries = 3, baseDelay = 200 } = options;

  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const delay = baseDelay * Math.pow(2, attempt); // exponential backoff
    try {
      const response = await fetch(url, options);
      if (!response.ok && attempt < maxRetries) {
        await new Promise(r => setTimeout(r, delay));
        continue;
      }
      return response;
    } catch (err) {
      if (attempt === maxRetries) throw err;
      await new Promise(r => setTimeout(r, delay)); // back off on network errors too
    }
  }
}