
Performance

This guide shows how to measure Regor performance with stable, repeatable workflows:

  1. Browser DOM benchmark (real rendering path)
  2. Minidom benchmark (engine-only, no paint/layout noise)
  3. CPU profiling (find real hotspots before changing code)

In this repo’s benchmark suite, Regor is the update-throughput winner in many real mutation patterns.

Highlights:

  1. Regor is consistently strong in mutation-heavy scenarios.
  2. Regor often leads total update cost when interaction drives many list changes.
  3. Regor can lead on total update cost even when its mount time is not the fastest.

In the 1000-row snapshot on this page, Regor wins 10 of 11 scenario medians.

Regor median wins:

  1. swap_second_last
  2. rotate_head_tail
  3. splice_middle_replace
  4. mutate_stride_fields
  5. increment_odd_values_by_5
  6. insert_head_32
  7. remove_every_5th
  8. shuffle_deterministic
  9. replace_all_objects_same_keys
  10. toggle_class_all_rows
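
Each scenario name above describes one deterministic mutation applied per run; shuffle_deterministic, for instance, implies a seeded shuffle so every framework receives the identical permutation. A minimal sketch of that idea (the PRNG choice and seed here are illustrative assumptions, not the suite's actual code):

```typescript
// Small seeded PRNG (mulberry32) so the shuffle is reproducible across runs.
function mulberry32(seed: number): () => number {
  let a = seed >>> 0;
  return () => {
    a = (a + 0x6d2b79f5) >>> 0;
    let t = Math.imul(a ^ (a >>> 15), 1 | a);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

// Fisher-Yates shuffle driven by the seeded PRNG: same seed, same permutation.
function shuffleDeterministic<T>(rows: T[], seed = 42): T[] {
  const rand = mulberry32(seed);
  const out = rows.slice();
  for (let i = out.length - 1; i > 0; i--) {
    const j = Math.floor(rand() * (i + 1));
    [out[i], out[j]] = [out[j], out[i]];
  }
  return out;
}
```

Seeding matters because an unseeded shuffle would hand each framework different work, making the medians incomparable.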

Verification paths:

  1. /benchmarks/index.html for mount + mutation totals
  2. Scenario table in the same page for per-pattern winners

Benchmark settings:

  1. Row count: 1000
  2. Warmups: 6
  3. Samples: 20
1000-row snapshot:

Framework     Mount Median   Mount P90   Mutate Median   Mutate P90   Total Median   Total P90
Regor         49.40 ms       60.20 ms    217.40 ms       256.40 ms    270.60 ms      312.20 ms
Vue@latest    27.20 ms       34.70 ms    325.10 ms       343.70 ms    354.10 ms      371.90 ms
Per-scenario medians:

Scenario                         Regor Median (ms)   Vue Median (ms)
swap_second_last                 5.20                16.60
reverse_rows                     26.00               25.90
rotate_head_tail                 10.30               14.20
splice_middle_replace            9.90                16.60
mutate_stride_fields             8.00                16.60
increment_odd_values_by_5        13.50               16.60
insert_head_32                   8.50                16.70
remove_every_5th                 106.10              124.50
shuffle_deterministic            4.20                28.60
replace_all_objects_same_keys    8.00                30.80
toggle_class_all_rows            14.30               14.40
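
The 10-of-11 tally quoted above follows mechanically from these medians (lower is better); as a sanity check:

```typescript
// Scenario medians copied from the table above (Regor ms, Vue ms).
const scenarios: Array<[name: string, regorMs: number, vueMs: number]> = [
  ["swap_second_last", 5.2, 16.6],
  ["reverse_rows", 26.0, 25.9],
  ["rotate_head_tail", 10.3, 14.2],
  ["splice_middle_replace", 9.9, 16.6],
  ["mutate_stride_fields", 8.0, 16.6],
  ["increment_odd_values_by_5", 13.5, 16.6],
  ["insert_head_32", 8.5, 16.7],
  ["remove_every_5th", 106.1, 124.5],
  ["shuffle_deterministic", 4.2, 28.6],
  ["replace_all_objects_same_keys", 8.0, 30.8],
  ["toggle_class_all_rows", 14.3, 14.4],
];

// Count scenarios where Regor's median beats Vue's.
const regorWins = scenarios.filter(([, regor, vue]) => regor < vue).length;
// regorWins is 10 of 11; reverse_rows is the lone exception.
```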

Interpretation:

Regor shows strong wins on several real mutation patterns.

Browser DOM benchmark

Use this when you care about user-visible page performance.

Steps:

  1. Run: yarn bench:serve
  2. Open:
    1. /benchmarks/index.html for mount + mutation benchmarks
    2. /benchmarks/initial-load.html for mount-only benchmarks
  3. Use the controls:
    1. row count dropdown (500, 1000, 2000, 5000)
    2. warmups
    3. samples
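
Warmups and samples map to the standard measurement loop: warmup iterations run unrecorded so JIT and caches settle, then each sample is timed individually. A rough sketch of that pattern (the function shape is illustrative, not the suite's actual harness):

```typescript
// Generic benchmark loop: discard warmup runs, then record timed samples.
function runBenchmark(fn: () => void, warmups: number, samples: number): number[] {
  for (let i = 0; i < warmups; i++) fn(); // unrecorded: lets JIT and caches settle
  const times: number[] = [];
  for (let i = 0; i < samples; i++) {
    const start = performance.now();
    fn();
    times.push(performance.now() - start);
  }
  return times; // per-sample durations in ms, ready for median/P90
}
```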

Interpretation:

  1. Median is your primary number.
  2. P90 reflects tail behavior and jitter.
  3. Compare scenario tables, not only one aggregate total.
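
Median and P90 are just order statistics over the recorded samples. One common way to compute them is the nearest-rank method (the suite's exact index convention may differ):

```typescript
// Percentile over sorted samples using the nearest-rank method.
function percentile(samples: number[], p: number): number {
  const sorted = samples.slice().sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

const median = (samples: number[]) => percentile(samples, 50); // typical run
const p90 = (samples: number[]) => percentile(samples, 90);    // tail behavior
```

The median resists outliers, which is why it is the primary number; P90 surfaces the jittery tail that the median hides.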

Minidom benchmark

Use this for fast local iteration and CI-like guardrails.

Commands:

  1. Run suite: yarn perf (default rows: 500)
  2. Run with rows: yarn perf 1000
  3. Record baseline: yarn perf:record 1000
  4. Check against baseline: yarn perf:check 1000

Notes:

  1. Baselines are row-specific (benchmarks/minidom/perf-baseline.<rows>.json).
  2. Output includes run metrics and unmount metrics (UnmMed, UnmP90).
  3. In interactive run mode: Enter reruns, q quits.

Useful environment variables:

  1. PERF_SAMPLES (default 20)
  2. PERF_WARMUPS (default 5)
  3. PERF_MAX_REGRESSION_PCT (default 5)
  4. PERF_FAIL_ON_REGRESSION (1 fail, 0 warn)

Example:

PERF_SAMPLES=30 PERF_WARMUPS=8 PERF_MAX_REGRESSION_PCT=3 yarn perf:check 1000
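
Conceptually, the check only compares the current median against the recorded baseline and fails when the delta exceeds the threshold. A simplified sketch of that comparison (the real script's baseline format and metric names may differ):

```typescript
// Compare a current median against a baseline median and flag regressions.
function checkRegression(
  baselineMs: number,
  currentMs: number,
  maxRegressionPct: number,
): { regressed: boolean; deltaPct: number } {
  const deltaPct = ((currentMs - baselineMs) / baselineMs) * 100;
  return { regressed: deltaPct > maxRegressionPct, deltaPct };
}
```

In the real script the threshold comes from PERF_MAX_REGRESSION_PCT (default 5, per the list above), and PERF_FAIL_ON_REGRESSION decides whether a regression fails the run or only warns.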

CPU profiling

Use this before optimization work. It prevents random tuning and points to real hotspots.

Quick start:

  1. yarn perf:profile (default rows: 2000, top lines: 30)
  2. yarn perf:profile 2000 40 (custom rows + hotspot line count)

What the command does:

  1. Bundles benchmarks/minidom/perf.ts to plain Node JS.
  2. Runs Node CPU profiler on that bundle.
  3. Produces:
    1. .cpuprofile file for Chrome DevTools
    2. .summary.txt hotspot summary
  4. Saves artifacts under benchmarks/minidom/profiles/.
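
The .cpuprofile file is JSON in the V8 CPU-profile format, so a hotspot summary like .summary.txt can be derived by ranking profile nodes by hit count. A rough sketch (the actual script may weight by sample time deltas instead):

```typescript
// Minimal view of the V8 .cpuprofile node shape.
interface ProfileNode {
  callFrame: { functionName: string; url: string };
  hitCount?: number;
}

// Rank functions by how many profiler ticks landed in them (a self-time proxy).
function topHotspots(nodes: ProfileNode[], limit: number): string[] {
  return nodes
    .filter((n) => (n.hitCount ?? 0) > 0)
    .sort((a, b) => (b.hitCount ?? 0) - (a.hitCount ?? 0))
    .slice(0, limit)
    .map((n) => `${n.callFrame.functionName || "(anonymous)"}: ${n.hitCount} ticks`);
}
```

For full flame-graph analysis, load the .cpuprofile into the Chrome DevTools Performance panel instead.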

Profile artifacts:

  1. benchmarks/minidom/profiles/perf-<rows>-<timestamp>.cpuprofile
  2. benchmarks/minidom/profiles/perf-<rows>-<timestamp>.summary.txt
Optimization workflow:

  1. Reproduce with yarn perf <rows> and the browser benchmark.
  2. Capture profile with yarn perf:profile <rows>.
  3. Change one thing only.
  4. Re-run yarn perf:check <rows>.
  5. Validate functional correctness with tests.
  6. Re-check browser benchmark to confirm user-visible impact.
Principles:

  1. Optimize measured hotspots, not assumptions.
  2. Prefer algorithmic wins over micro-tweaks.
  3. Keep both median and P90 healthy.
  4. Use row counts that match your real product usage.