pg_stream — Benchmark Guide

This document explains how the database-level refresh benchmarks work and how to interpret their output.


Overview

The benchmark suite in tests/e2e_bench_tests.rs measures wall-clock refresh time for FULL vs INCREMENTAL mode across a matrix of table sizes, change rates, and query complexities. Each benchmark spawns an isolated PostgreSQL 18.x container via Testcontainers, ensuring reproducible and interference-free measurements.

The core question the benchmarks answer:

How much faster is an INCREMENTAL refresh compared to a FULL refresh, given a specific workload?


Prerequisites

Build the E2E test Docker image before running any benchmarks:

./tests/build_e2e_image.sh

Docker must be running on the host.


Running Benchmarks

All benchmark tests are tagged #[ignore] so they are skipped during normal CI. The --nocapture flag is required to see the printed output tables.

Quick Spot Checks (~5–10 seconds each)

# Simple scan, 10K rows, 1% change rate
cargo test --test e2e_bench_tests --features pg18 -- --ignored --nocapture bench_scan_10k_1pct

# Aggregate query, 100K rows, 1% change rate
cargo test --test e2e_bench_tests --features pg18 -- --ignored --nocapture bench_aggregate_100k_1pct

# Join + aggregate, 100K rows, 10% change rate
cargo test --test e2e_bench_tests --features pg18 -- --ignored --nocapture bench_join_agg_100k_10pct

Zero-Change Latency (~5 seconds)

cargo test --test e2e_bench_tests --features pg18 -- --ignored --nocapture bench_no_data_refresh_latency

Full Matrix (~15–30 minutes)

Runs all 30 combinations and prints a consolidated summary:

cargo test --test e2e_bench_tests --features pg18 -- --ignored --nocapture bench_full_matrix

Run All Benchmarks in Parallel

cargo test --test e2e_bench_tests --features pg18 -- --ignored --nocapture

Note: each test starts its own container, so parallel execution requires sufficient Docker resources.


Benchmark Dimensions

Table Sizes

| Size   | Rows    | Purpose                                         |
|--------|---------|-------------------------------------------------|
| Small  | 10,000  | Fast iteration; measures per-row overhead       |
| Medium | 100,000 | More realistic; reveals scaling characteristics |

Change Rates

| Rate | Description                                            |
|------|--------------------------------------------------------|
| 1%   | Low churn — the sweet spot for incremental refresh     |
| 10%  | Moderate churn — tests delta query scalability         |
| 50%  | High churn — stress test; approaches full-refresh cost |

Query Complexities

| Scenario  | Defining Query                                                                           | Operators Tested           |
|-----------|------------------------------------------------------------------------------------------|----------------------------|
| scan      | SELECT id, region, category, amount, score FROM src                                      | Table scan only            |
| filter    | SELECT id, region, amount FROM src WHERE amount > 5000                                   | Scan + filter (WHERE)      |
| aggregate | SELECT region, SUM(amount), COUNT(*) FROM src GROUP BY region                            | Scan + group-by aggregate  |
| join      | SELECT s.id, s.region, s.amount, d.region_name FROM src s JOIN dim d ON ...              | Scan + inner join          |
| join_agg  | SELECT d.region_name, SUM(s.amount), COUNT(*) FROM src s JOIN dim d ON ... GROUP BY ...  | Scan + join + aggregate    |

DML Mix per Cycle

Each change cycle applies a realistic mix of operations:

| Operation | Fraction | Example at 10K rows, 10% rate     |
|-----------|----------|-----------------------------------|
| UPDATE    | 70%      | 700 rows have amount incremented  |
| DELETE    | 15%      | 150 rows removed                  |
| INSERT    | 15%      | 150 new rows added                |
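
The per-operation counts above follow directly from the table size, the change rate, and the 70/15/15 split. This is an illustrative sketch of that arithmetic, not the benchmark's actual code:

```rust
// Derive the per-cycle DML counts from table size and change rate.
// The 70/15/15 split matches the table above; rounding is illustrative.
fn dml_counts(rows: u64, change_rate: f64) -> (u64, u64, u64) {
    let changed = (rows as f64 * change_rate).round() as u64;
    let updates = (changed as f64 * 0.70).round() as u64; // 70% UPDATE
    let deletes = (changed as f64 * 0.15).round() as u64; // 15% DELETE
    let inserts = changed - updates - deletes;            // remaining 15% INSERT
    (updates, deletes, inserts)
}

fn main() {
    // 10K rows at a 10% change rate → 1,000 changed rows per cycle.
    let (u, d, i) = dml_counts(10_000, 0.10);
    println!("UPDATE {u}, DELETE {d}, INSERT {i}"); // UPDATE 700, DELETE 150, INSERT 150
}
```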

What Each Benchmark Does

1. Start a fresh PostgreSQL 18.x container
2. Install the pg_stream extension
3. Create and populate the source table (10K or 100K rows)
4. Create dimension table if needed (for join scenarios)
5. ANALYZE for stable query plans

── FULL mode ──
6. Create a Stream Table in FULL refresh mode
7. For each of 3 cycles:
   a. Apply random DML (updates + deletes + inserts)
   b. ANALYZE
   c. Time the FULL refresh (TRUNCATE + re-execute entire query)
   d. Record refresh_ms and ST row count
8. Drop the FULL-mode ST

── INCREMENTAL mode ──
9. Reset source table to same starting state
10. Create a Stream Table in INCREMENTAL refresh mode
11. For each of 3 cycles:
    a. Apply random DML (same parameters)
    b. ANALYZE
    c. Time the INCREMENTAL refresh (delta query + MERGE)
    d. Record refresh_ms and ST row count

12. Print results table and summary

Both modes start from the same data to ensure a fair comparison. The 3-cycle design captures warm-up effects (cycle 1 may be slower due to plan caching).
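
The per-cycle measurement in steps 7c and 11c amounts to wall-clock timing around the refresh statement. A minimal sketch of that timing, with a closure standing in for the real SQL call (hypothetical stand-in, not the actual test harness):

```rust
use std::time::Instant;

// Time one refresh and return elapsed wall-clock milliseconds,
// mirroring the refresh_ms column in the output tables.
fn time_refresh_ms<F: FnMut()>(mut refresh: F) -> f64 {
    let start = Instant::now();
    refresh(); // e.g., execute the FULL or INCREMENTAL refresh statement
    start.elapsed().as_secs_f64() * 1000.0
}

fn main() {
    let ms = time_refresh_ms(|| {
        // Placeholder workload standing in for a database refresh.
        std::thread::sleep(std::time::Duration::from_millis(5));
    });
    println!("refresh_ms = {ms:.1}");
}
```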


Reading the Output

Detail Table

╔═════════════════════════════════════════════════════════════════════════════════════╗
║                         pg_stream Refresh Benchmark Results                         ║
╠════════════╤══════════╤════════╤═════════════╤═══════╤════════════╤═════════════════╣
║ Scenario   │ Rows     │ Chg %  │ Mode        │ Cycle │ Refresh ms │ ST Rows         ║
╠════════════╪══════════╪════════╪═════════════╪═══════╪════════════╪═════════════════╣
║ aggregate  │    10000 │     1% │ FULL        │     1 │       22.1 │               5 ║
║ aggregate  │    10000 │     1% │ FULL        │     2 │        4.8 │               5 ║
║ aggregate  │    10000 │     1% │ FULL        │     3 │        5.3 │               5 ║
║ aggregate  │    10000 │     1% │ INCREMENTAL │     1 │        8.4 │               5 ║
║ aggregate  │    10000 │     1% │ INCREMENTAL │     2 │        4.4 │               5 ║
║ aggregate  │    10000 │     1% │ INCREMENTAL │     3 │        4.6 │               5 ║
╚════════════╧══════════╧════════╧═════════════╧═══════╧════════════╧═════════════════╝

| Column     | Meaning                                                          |
|------------|------------------------------------------------------------------|
| Scenario   | Query complexity level (scan, filter, aggregate, join, join_agg) |
| Rows       | Number of rows in the base table                                 |
| Chg %      | Percentage of rows changed per cycle                             |
| Mode       | FULL (truncate + recompute) or INCREMENTAL (delta + merge)       |
| Cycle      | Which of the 3 measurement rounds (cycle 1 often includes warm-up) |
| Refresh ms | Wall-clock time for the refresh operation                        |
| ST Rows    | Row count in the Stream Table after refresh (sanity check)       |

Summary Table

┌─────────────────────────────────────────────────────────────────────────┐
│                        Summary (avg ms per cycle)                       │
├────────────┬──────────┬────────┬─────────────────┬──────────────────────┤
│ Scenario   │ Rows     │ Chg %  │ FULL avg ms     │ INCREMENTAL avg ms   │
├────────────┼──────────┼────────┼─────────────────┼──────────────────────┤
│ aggregate  │    10000 │     1% │            10.7 │         5.8 (  1.8x) │
└────────────┴──────────┴────────┴─────────────────┴──────────────────────┘

The Speedup value in parentheses is FULL avg / INCREMENTAL avg — how many times faster the incremental refresh is compared to a full refresh.
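
Using the sample summary row above, the computation is simply the ratio of the two averages:

```rust
// Speedup as reported in the summary table: FULL avg / INCREMENTAL avg.
fn speedup(full_avg_ms: f64, incremental_avg_ms: f64) -> f64 {
    full_avg_ms / incremental_avg_ms
}

fn main() {
    // Sample values from the summary table above.
    let s = speedup(10.7, 5.8);
    println!("{s:.1}x"); // 1.8x, matching the sample row
}
```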


Interpreting the Speedup

What to Expect

| Change Rate | Table Size | Expected Speedup | Explanation                                      |
|-------------|------------|------------------|--------------------------------------------------|
| 1%          | 10K        | 1.5–5x           | Small table; overhead is similar, delta is tiny  |
| 1%          | 100K       | 5–50x            | Larger table amplifies full-refresh cost         |
| 10%         | 100K       | 2–10x            | Moderate delta; still significantly faster       |
| 50%         | any        | 1–2x             | Delta is nearly as large as full table           |

Rules of Thumb

| Speedup | Interpretation                                                             |
|---------|----------------------------------------------------------------------------|
| > 10x   | Strong win for INCREMENTAL — typical at low change rates on larger tables  |
| 5–10x   | Clear advantage for INCREMENTAL                                            |
| 2–5x    | Moderate advantage — INCREMENTAL is the right choice                       |
| 1–2x    | Marginal gain — either mode is acceptable                                  |
| ~1x     | Break-even — change rate is too high for incremental to help               |
| < 1x    | INCREMENTAL is slower — would indicate overhead exceeds savings (investigate) |

Key Patterns to Look For

  1. Scaling with table size: For the same change rate, speedup should increase with table size. FULL must re-process all rows; INCREMENTAL processes only the delta.

  2. Degradation with change rate: As the change rate rises from 1% → 50%, speedup should decrease. At 50%, INCREMENTAL processes half the table, which approaches FULL cost.

  3. Query complexity amplifies speedup: Aggregate and join queries benefit more from INCREMENTAL because they avoid expensive re-computation. A join_agg at 1% changes should show higher speedup than a simple scan at the same parameters.

  4. Cycle 1 warm-up: The first cycle in each mode may be slower due to PostgreSQL plan cache population. Use cycles 2–3 for the steadiest numbers.

  5. ST Rows consistency: The ST row count should be similar between FULL and INCREMENTAL for the same scenario (accounting for random DML). Large discrepancies indicate a correctness issue.
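
The warm-up guidance in point 4 can be applied mechanically: drop cycle 1 and average the remaining cycles. A sketch using the sample FULL-mode times from the detail table above:

```rust
// Average the steady-state cycles, skipping cycle 1 (warm-up).
fn steady_state_avg(cycle_ms: &[f64]) -> f64 {
    let tail = &cycle_ms[1..]; // cycles 2..N
    tail.iter().sum::<f64>() / tail.len() as f64
}

fn main() {
    // Sample FULL-mode cycle times from the detail table.
    let full_cycles = [22.1, 4.8, 5.3];
    println!("steady-state avg: {:.2} ms", steady_state_avg(&full_cycles));
}
```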


Zero-Change Latency

The bench_no_data_refresh_latency test measures the overhead of a refresh when no data has changed — the NO_DATA code path.

┌──────────────────────────────────────────────┐
│ NO_DATA Refresh Latency (10 iterations)      │
├──────────────────────────────────────────────┤
│ Avg:     3.21 ms                             │
│ Max:     5.10 ms                             │
│ Target: < 10 ms                              │
│ Status: ✅ PASS                              │
└──────────────────────────────────────────────┘

| Metric | Meaning                                           |
|--------|---------------------------------------------------|
| Avg    | Average wall-clock time across 10 no-op refreshes |
| Max    | Worst-case single iteration                       |
| Target | The PLAN.md goal: < 10 ms per no-op refresh       |
| Status | PASS if avg < 10 ms, SLOW otherwise               |

A passing result confirms the scheduler's per-cycle overhead is negligible. Values > 10 ms in containerized environments may be acceptable due to Docker overhead; bare-metal PostgreSQL should comfortably meet the target.
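
The Avg/Max/Status fields reduce to simple statistics over the 10 iteration timings. An illustrative sketch (the sample latencies here are made up, not measured):

```rust
// Summarize no-op refresh latencies the way the report above does:
// average, worst case, and a PASS/SLOW verdict against the 10 ms target.
fn latency_report(samples: &[f64]) -> (f64, f64, &'static str) {
    let avg = samples.iter().sum::<f64>() / samples.len() as f64;
    let max = samples.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let status = if avg < 10.0 { "PASS" } else { "SLOW" };
    (avg, max, status)
}

fn main() {
    // Hypothetical per-iteration latencies in milliseconds.
    let samples = [3.0, 3.1, 2.9, 3.4, 5.1, 3.0, 3.2, 2.8, 3.3, 3.3];
    let (avg, max, status) = latency_report(&samples);
    println!("Avg {avg:.2} ms, Max {max:.2} ms, {status}");
}
```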


Available Tests

Individual Tests (10K rows)

| Test Name                | Scenario  | Change Rate |
|--------------------------|-----------|-------------|
| bench_scan_10k_1pct      | scan      | 1%          |
| bench_scan_10k_10pct     | scan      | 10%         |
| bench_scan_10k_50pct     | scan      | 50%         |
| bench_filter_10k_1pct    | filter    | 1%          |
| bench_aggregate_10k_1pct | aggregate | 1%          |
| bench_join_10k_1pct      | join      | 1%          |
| bench_join_agg_10k_1pct  | join_agg  | 1%          |

Individual Tests (100K rows)

| Test Name                  | Scenario  | Change Rate |
|----------------------------|-----------|-------------|
| bench_scan_100k_1pct       | scan      | 1%          |
| bench_scan_100k_10pct      | scan      | 10%         |
| bench_scan_100k_50pct      | scan      | 50%         |
| bench_aggregate_100k_1pct  | aggregate | 1%          |
| bench_aggregate_100k_10pct | aggregate | 10%         |
| bench_join_agg_100k_1pct   | join_agg  | 1%          |
| bench_join_agg_100k_10pct  | join_agg  | 10%         |

Special Tests

| Test Name                     | Description                                          |
|-------------------------------|------------------------------------------------------|
| bench_full_matrix             | All 30 combinations (5 queries × 2 sizes × 3 rates)  |
| bench_no_data_refresh_latency | Zero-change overhead (10 iterations)                 |

In-Process Micro-Benchmarks (Criterion.rs)

In addition to the E2E database benchmarks, the project includes two Criterion.rs benchmark suites that measure pure Rust computation time without database overhead. These are useful for tracking performance regressions in the internal query-building and IVM differentiation logic.

Benchmark Suites

refresh_bench — Utility Functions

benches/refresh_bench.rs benchmarks the low-level helper functions used during refresh operations:

| Benchmark Group   | What It Measures                                |
|-------------------|-------------------------------------------------|
| quote_ident       | PostgreSQL identifier quoting speed             |
| col_list          | Column list SQL generation                      |
| prefixed_col_list | Prefixed column list generation (e.g., NEW.col) |
| expr_to_sql       | AST expression → SQL string conversion          |
| output_columns    | Output column extraction from parsed queries    |
| source_oids       | Source table OID resolution                     |
| lsn_gt            | LSN comparison expression generation            |
| frontier_json     | Frontier state JSON serialization               |
| canonical_period  | Interval parsing and canonicalization           |
| dag_operations    | DAG topological sort and cycle detection        |
| xxh64             | xxHash-64 hashing throughput                    |

diff_operators — IVM Operator Differentiation

benches/diff_operators.rs benchmarks the delta SQL generation for every IVM operator. Each benchmark creates a realistic operator tree and measures differentiate() throughput:

| Benchmark Group     | What It Measures                                               |
|---------------------|----------------------------------------------------------------|
| diff_scan           | Table scan differentiation (3, 10, 20 columns)                 |
| diff_filter         | Filter (WHERE) differentiation                                 |
| diff_project        | Projection (SELECT subset) differentiation                     |
| diff_aggregate      | GROUP BY aggregate differentiation (simple + complex)          |
| diff_inner_join     | Inner join differentiation                                     |
| diff_left_join      | Left outer join differentiation                                |
| diff_distinct       | DISTINCT differentiation                                       |
| diff_union_all      | UNION ALL differentiation (2, 5, 10 children)                  |
| diff_window         | Window function differentiation                                |
| diff_join_aggregate | Composite join + aggregate pipeline                            |
| differentiate_full  | Full differentiate() call for scan-only and filter+scan trees  |

Running Micro-Benchmarks

# Run all Criterion benchmarks
just bench

# Run only refresh utility benchmarks
cargo bench --bench refresh_bench --features pg18

# Run only IVM diff operator benchmarks
just bench-diff
# or equivalently:
cargo bench --bench diff_operators --features pg18

# Output in Bencher-compatible format (for CI integration)
just bench-bencher

Output and Reports

Criterion produces statistical analysis for each benchmark including:

  • Mean and standard deviation of execution time
  • Throughput (iterations/sec)
  • Comparison with previous run — reports improvements/regressions with confidence intervals

HTML reports are generated in target/criterion/ with interactive charts showing distributions and regression history. Open target/criterion/report/index.html to browse all results.

Sample output:

diff_scan/3_columns   time:   [11.834 µs 12.074 µs 12.329 µs]
diff_scan/10_columns  time:   [16.203 µs 16.525 µs 16.869 µs]
diff_aggregate/simple time:   [21.447 µs 21.862 µs 22.301 µs]
diff_inner_join       time:   [25.919 µs 26.421 µs 26.952 µs]

Continuous Benchmarking with Bencher

Bencher provides continuous benchmark tracking in CI, detecting performance regressions on pull requests before they merge.

How It Works

The .github/workflows/benchmarks.yml workflow:

  1. On main pushes — runs both Criterion suites and uploads results to Bencher as the baseline. This establishes the expected performance for each benchmark.

  2. On pull requests — runs the same benchmarks and compares against the main baseline using a Student's t-test with a 99% upper confidence boundary. If any benchmark regresses beyond the threshold, the PR check fails.

Setup

To enable Bencher for your fork or deployment:

  1. Create a Bencher account at bencher.dev and create a project.

  2. Add the API token as a GitHub Actions secret:

    • Go to Settings → Secrets and variables → Actions
    • Add BENCHER_API_TOKEN with your Bencher API token
  3. Update the project slug in .github/workflows/benchmarks.yml if your Bencher project name differs from pg-stream.

The workflow gracefully degrades — if BENCHER_API_TOKEN is not set, benchmarks still run and upload artifacts but skip Bencher tracking.

Local Bencher-Format Output

To see what Bencher would receive from CI:

just bench-bencher

This runs both suites with --output-format bencher, producing JSON output compatible with bencher run.

Dashboard

Once configured, the Bencher dashboard shows:

  • Historical trends for every benchmark across commits
  • Statistical thresholds with configurable alerting
  • PR annotations highlighting which benchmarks regressed and by how much

Troubleshooting

| Issue                        | Resolution                                                                 |
|------------------------------|----------------------------------------------------------------------------|
| docker: command not found    | Install Docker Desktop and ensure it is running                            |
| Container startup timeout    | Increase Docker memory allocation (≥ 4 GB recommended)                     |
| image not found              | Run ./tests/build_e2e_image.sh to build the test image                     |
| Highly variable timings      | Close other workloads; use --test-threads=1 to avoid container contention  |
| SLOW status on latency test  | Expected in Docker; bare-metal should pass < 10 ms                         |