Question on io_min_workers > io_max_workers semantics

First seen: 2026-05-07 11:50:40+00:00 · Messages: 1 · Participants: 1

Latest Update

2026-05-08 · opus 4.7

Analysis: io_min_workers > io_max_workers Semantics in AIO Worker Pool

Background and Architectural Context

Commit d1c01b79d4ae ("aio: Adjust I/O worker pool automatically") is part of the asynchronous I/O (AIO) subsystem introduced by Thomas Munro and Andres Freund in PostgreSQL 18. With io_method = worker, regular backends hand their I/O off to a pool of dedicated I/O worker processes that service the submission queues on their behalf. Because workload I/O demand is bursty, the pool needs to scale dynamically rather than being statically sized, hence the introduction of an adjustment loop driven by maybe_start_io_workers_scheduled_at().

The pool is bounded by two GUCs:

  - io_min_workers — the floor the adjustment loop tries to keep running.
  - io_max_workers — the hard cap on pool size.

Both are validated independently against the range [1, MAX_IO_WORKERS]. Crucially, there is no cross-GUC assign hook that rejects or reconciles the case where io_min_workers > io_max_workers.

The Reported Issue

The reporter (xunengzhou) observed that with io_min_workers = 32 and io_max_workers = 1, postgres silently accepts the configuration. Then, inside the scheduler helper:

if (io_worker_count >= io_max_workers)
    return 0;                          /* do not schedule another start */
if (io_worker_count < io_min_workers)
    return TIMESTAMP_MINUS_INFINITY;   /* start one immediately */

Because the io_max_workers guard is evaluated first, it short-circuits the io_min_workers branch. Effectively, io_max_workers silently caps io_min_workers, and the documented "minimum" is not actually honored when it exceeds the maximum.

Why This Matters

  1. Silent misconfiguration. An operator who sets io_min_workers high (perhaps via ALTER SYSTEM) and later lowers io_max_workers below it will get a pool sized by io_max_workers, with no warning. The io_min_workers value becomes a lie in pg_settings.
  2. Precedent from autovacuum. The reporter correctly draws a parallel to the recent autovacuum_max_workers / autovacuum_worker_slots work (PG18), where a similar ordering issue was handled by emitting a WARNING at GUC assignment / at launcher start when the relationship is inverted. That precedent establishes community-accepted behavior: do not reject, but inform.
  3. Code ordering is load-bearing. The current behavior is a consequence of the order of two if statements rather than an explicit policy decision. That is a fragile way to encode semantics — a future refactor could invert the ordering and silently change which GUC "wins."

Design Options

The post implicitly enumerates three possible resolutions:

  1. Document the current behavior. Add a note in config.sgml that io_max_workers caps the effective io_min_workers. Lowest-cost fix; preserves current runtime behavior. Weakness: leaves a foot-gun in place.
  2. Emit a WARNING (autovacuum-style). When either GUC is assigned such that io_min_workers > io_max_workers, log a warning. This is the most consistent option given the autovacuum precedent and is likely the preferred approach for committers who value cross-subsystem consistency.
  3. Reject the configuration via a GUC check hook. Stronger, but problematic because GUC check hooks see only one variable at a time; the other may be mid-assignment or not yet loaded during startup. This is why autovacuum chose WARNING rather than ERROR.

Technical Subtleties

Cross-GUC validation is the crux. PostgreSQL evaluates GUC check hooks one variable at a time and in no guaranteed order, so a check hook for io_min_workers cannot reliably observe the pending or final value of io_max_workers (and vice versa). Any reconciliation therefore has to happen later — in assign hooks, at postmaster startup, or inside the scheduler loop itself — which is exactly why the autovacuum precedent settled on a runtime WARNING rather than a hard rejection.

Likely Resolution

Given Thomas Munro's style and the autovacuum precedent, the most likely outcome is a small patch that:

  1. Keeps the current ordering (max wins).
  2. Adds a WARNING (likely in assign_io_max_workers / assign_io_min_workers or at postmaster startup) when io_min_workers > io_max_workers.
  3. Possibly adds a documentation note clarifying that io_max_workers is the hard cap.

This is a minor correctness/UX issue rather than a deep architectural problem — the AIO pool sizing algorithm itself is sound; only the GUC validation surface is underspecified.