Kind: snippet · Maturity: seedling · Confidence: medium · Origin: ai-drafted
Tags: architecture, concurrency
Source: /snippet/worker-pool-isolation.md

Worker Pool Isolation Pattern

Separate worker pools per task type so a slow or failing dependency can't starve unrelated work — the bulkhead pattern applied to concurrent processing.

AI-drafted · directed by / updated April 10, 2026

Run different categories of work in separate, bounded pools. A spike in one category can’t starve the others. This pairs naturally with pipeline stage communication — each stage gets its own pool. For rate-sensitive pools, add AIMD rate limiting.

The Problem

A single shared worker pool handles API calls, file processing, and database writes. The API starts responding slowly. Workers pile up waiting on API responses. File processing and database writes — which are fine — queue behind them and stall. One slow dependency takes down everything.

The Fix: One Pool Per Concern

```go
type Job func() // unit of work; a bare function here for illustration

type WorkerPool struct {
	name    string
	workers int
	queue   chan Job
	sem     chan struct{} // bounds concurrency
}

func NewPool(name string, workers, queueSize int) *WorkerPool {
	p := &WorkerPool{
		name:    name,
		workers: workers,
		queue:   make(chan Job, queueSize),
		sem:     make(chan struct{}, workers),
	}
	go p.run()
	return p
}

// run drains the queue; the semaphore caps concurrency at p.workers.
// A minimal sketch: production code also needs shutdown and error handling.
func (p *WorkerPool) run() {
	for job := range p.queue {
		p.sem <- struct{}{} // acquire a slot
		go func(j Job) {
			defer func() { <-p.sem }() // release the slot
			j()
		}(job)
	}
}

pools := map[string]*WorkerPool{
	"api":   NewPool("api", 10, 100),
	"files": NewPool("files", 4, 50),
	"db":    NewPool("db", 8, 200),
}
```

The API pool fills up? The file and database pools keep moving. Each pool has its own concurrency limit and backpressure via its own queue.

Sizing

| Pool | Size by | Watch for |
| --- | --- | --- |
| I/O-bound (API calls, network) | Number of connections you can sustain | Queue depth growing = upstream is slow |
| CPU-bound (parsing, transforms) | Number of cores | CPU saturation = pool is too large |
| External writes (DB, storage) | Connection pool limit of the backend | Timeouts = reduce pool or batch writes |

Start small, measure, increase. A pool that’s too large creates more contention than it solves.

Key Details

Bounded queues, not unbounded. An unbounded queue hides backpressure — memory grows silently until the process crashes. Use a buffered channel or ring buffer with a hard cap. When the queue is full, reject or apply backpressure to the caller.
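One way to surface that backpressure is a non-blocking submit that fails fast when the queue is full. A sketch, using a stripped-down version of the pool with just the fields submission needs; `TrySubmit` and `ErrQueueFull` are illustrative names, not part of the snippet above:

```go
package main

import (
	"errors"
	"fmt"
)

type Job func()

type WorkerPool struct {
	name  string
	queue chan Job
}

var ErrQueueFull = errors.New("worker pool queue full")

// TrySubmit enqueues a job without blocking. A full queue is a signal,
// not an error to hide: the caller decides whether to retry, shed load,
// or propagate backpressure upstream.
func (p *WorkerPool) TrySubmit(j Job) error {
	select {
	case p.queue <- j:
		return nil
	default:
		return ErrQueueFull // bounded queue is full: reject, don't buffer
	}
}

func main() {
	p := &WorkerPool{name: "api", queue: make(chan Job, 1)}
	fmt.Println(p.TrySubmit(func() {})) // nil: the queue had room
	fmt.Println(p.TrySubmit(func() {})) // queue full now: rejected
}
```

The `select` with a `default` branch is the idiomatic Go way to attempt a channel send without blocking, which is exactly what "reject or apply backpressure" requires.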

Per-pool timeouts. API calls might need a 30-second timeout. File operations might need 5 seconds. A shared timeout is wrong for both. Set deadlines per pool based on the expected latency profile of that work type.

Monitor each pool independently. Track queue depth, active workers, completion rate, and error rate per pool. A healthy aggregate hides a sick pool.
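Queue depth and in-flight workers fall straight out of channel lengths. A minimal sketch against a pool like the one above; the `Stats` method and the completion/error counters are assumptions for illustration, not part of the original:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

type Job func()

type WorkerPool struct {
	name      string
	queue     chan Job
	sem       chan struct{}
	completed atomic.Int64 // incremented by workers on success (assumed)
	failed    atomic.Int64 // incremented by workers on error (assumed)
}

// PoolStats is a point-in-time snapshot, exported per pool so a healthy
// aggregate can't mask one sick pool.
type PoolStats struct {
	Name       string
	QueueDepth int // jobs waiting: growth here means upstream is slow
	Active     int // semaphore slots held: jobs currently running
	Completed  int64
	Failed     int64
}

func (p *WorkerPool) Stats() PoolStats {
	return PoolStats{
		Name:       p.name,
		QueueDepth: len(p.queue), // len on a buffered channel is cheap and safe
		Active:     len(p.sem),
		Completed:  p.completed.Load(),
		Failed:     p.failed.Load(),
	}
}

func main() {
	p := &WorkerPool{name: "api", queue: make(chan Job, 100), sem: make(chan struct{}, 10)}
	p.queue <- func() {} // one queued job; no worker draining in this sketch
	p.completed.Add(3)
	fmt.Printf("%+v\n", p.Stats())
}
```

Scraping one `PoolStats` per pool into your metrics system gives exactly the per-pool queue depth, active workers, and error rate the paragraph above calls for.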

When to Use This

  • Multiple dependency types with different latency profiles
  • Any system where one slow path shouldn’t block unrelated fast paths
  • Worker counts that need independent tuning per workload

This is the bulkhead pattern from ship design — compartments that prevent a hull breach from flooding the entire vessel. Same idea, applied to goroutines.