<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://www.memcp.org/index.php?action=history&amp;feed=atom&amp;title=Performance_Measurement</id>
	<title>Performance Measurement - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://www.memcp.org/index.php?action=history&amp;feed=atom&amp;title=Performance_Measurement"/>
	<link rel="alternate" type="text/html" href="https://www.memcp.org/index.php?title=Performance_Measurement&amp;action=history"/>
	<updated>2026-04-04T15:39:43Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.39.1</generator>
	<entry>
		<id>https://www.memcp.org/index.php?title=Performance_Measurement&amp;diff=251&amp;oldid=prev</id>
		<title>Carli: Created page with &quot; == Performance Testing Framework == MemCP includes an auto-calibrating performance test framework for regression detection and benchmarking.  === Running Performance Tests ===  &lt;code&gt;# Run performance tests (uses calibrated baselines)  PERF_TEST=1 make test    # Calibrate for your machine (run ~10 times to reach target)  PERF_TEST=1 PERF_CALIBRATE=1 make test    # Freeze row counts for bisecting regressions  PERF_TEST=1 PERF_NORECALIBRATE=1 make test    # Show query exe...&quot;</title>
		<link rel="alternate" type="text/html" href="https://www.memcp.org/index.php?title=Performance_Measurement&amp;diff=251&amp;oldid=prev"/>
		<updated>2026-01-26T00:00:10Z</updated>

		<summary type="html">&lt;p&gt;Created page with &amp;quot; == Performance Testing Framework == MemCP includes an auto-calibrating performance test framework for regression detection and benchmarking.  === Running Performance Tests ===  &amp;lt;code&amp;gt;# Run performance tests (uses calibrated baselines)  PERF_TEST=1 make test    # Calibrate for your machine (run ~10 times to reach target)  PERF_TEST=1 PERF_CALIBRATE=1 make test    # Freeze row counts for bisecting regressions  PERF_TEST=1 PERF_NORECALIBRATE=1 make test    # Show query exe...&amp;quot;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;&lt;br /&gt;
== Performance Testing Framework ==&lt;br /&gt;
MemCP includes an auto-calibrating performance test framework for regression detection and benchmarking.&lt;br /&gt;
&lt;br /&gt;
=== Running Performance Tests ===&lt;br /&gt;
 &amp;lt;code&amp;gt;# Run performance tests (uses calibrated baselines)&lt;br /&gt;
 PERF_TEST=1 make test&lt;br /&gt;
 &lt;br /&gt;
 # Calibrate for your machine (run ~10 times to reach target)&lt;br /&gt;
 PERF_TEST=1 PERF_CALIBRATE=1 make test&lt;br /&gt;
 &lt;br /&gt;
 # Freeze row counts for bisecting regressions&lt;br /&gt;
 PERF_TEST=1 PERF_NORECALIBRATE=1 make test&lt;br /&gt;
 &lt;br /&gt;
 # Show query execution plans&lt;br /&gt;
 PERF_TEST=1 PERF_EXPLAIN=1 make test&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== How Auto-Calibration Works ===&lt;br /&gt;
&lt;br /&gt;
# Target query time: &amp;#039;&amp;#039;&amp;#039;10-20 seconds&amp;#039;&amp;#039;&amp;#039;&lt;br /&gt;
# Each calibration run scales row counts by &amp;#039;&amp;#039;&amp;#039;30%&amp;#039;&amp;#039;&amp;#039; up or down&lt;br /&gt;
# Baselines are machine-specific, stored in &amp;lt;code&amp;gt;.perf_baseline.json&amp;lt;/code&amp;gt;&lt;br /&gt;
# Tests include 2 warmup runs before the measured run&lt;br /&gt;
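&lt;br /&gt;
The ~10 calibration runs can be scripted; this loop is only a sketch of the procedure above, not part of the framework itself:&lt;br /&gt;
 &amp;lt;code&amp;gt;# Repeat calibration until row counts converge on the 10-20s target&lt;br /&gt;
 # (each run scales row counts by 30% toward the target)&lt;br /&gt;
 for i in $(seq 1 10); do&lt;br /&gt;
   PERF_TEST=1 PERF_CALIBRATE=1 make test&lt;br /&gt;
 done&amp;lt;/code&amp;gt;&lt;br /&gt;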
&lt;br /&gt;
=== Output Format ===&lt;br /&gt;
 &amp;lt;code&amp;gt;✅ Perf: COUNT (4.3s / 13s, 100,000,000 rows, 0.04µs/row, 25GB heap, 1522%/2400% CPU)&lt;br /&gt;
          │      │     │              │            │           └─ CPU utilization&lt;br /&gt;
          │      │     │              │            └─ Memory usage&lt;br /&gt;
          │      │     │              └─ Time per row&lt;br /&gt;
          │      │     └─ Calibrated row count&lt;br /&gt;
          │      └─ Threshold&lt;br /&gt;
          └─ Actual time&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Bisecting Performance Regressions ===&lt;br /&gt;
 &amp;lt;code&amp;gt;# 1. Calibrate on known-good commit&lt;br /&gt;
 git checkout &amp;lt;good-commit&amp;gt;&lt;br /&gt;
 PERF_TEST=1 PERF_CALIBRATE=1 make test  # run 10x&lt;br /&gt;
 &lt;br /&gt;
 # 2. Save baseline&lt;br /&gt;
 cp .perf_baseline.json .perf_baseline_good.json&lt;br /&gt;
 &lt;br /&gt;
 # 3. Run git bisect, restoring the saved baseline before each run&lt;br /&gt;
 #    so every bisect step measures against the same row counts&lt;br /&gt;
 git bisect start HEAD &amp;lt;good-commit&amp;gt;&lt;br /&gt;
 git bisect run bash -c &amp;#039;cp .perf_baseline_good.json .perf_baseline.json &amp;amp;&amp;amp; PERF_TEST=1 PERF_NORECALIBRATE=1 make test&amp;#039;&amp;lt;/code&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Environment Variables ===&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
!Variable&lt;br /&gt;
!Values&lt;br /&gt;
!Description&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;PERF_TEST&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;0&amp;lt;/code&amp;gt;/&amp;lt;code&amp;gt;1&amp;lt;/code&amp;gt;&lt;br /&gt;
|Enable performance tests&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;PERF_CALIBRATE&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;0&amp;lt;/code&amp;gt;/&amp;lt;code&amp;gt;1&amp;lt;/code&amp;gt;&lt;br /&gt;
|Update baselines&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;PERF_NORECALIBRATE&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;0&amp;lt;/code&amp;gt;/&amp;lt;code&amp;gt;1&amp;lt;/code&amp;gt;&lt;br /&gt;
|Freeze row counts&lt;br /&gt;
|-&lt;br /&gt;
|&amp;lt;code&amp;gt;PERF_EXPLAIN&amp;lt;/code&amp;gt;&lt;br /&gt;
|&amp;lt;code&amp;gt;0&amp;lt;/code&amp;gt;/&amp;lt;code&amp;gt;1&amp;lt;/code&amp;gt;&lt;br /&gt;
|Show query plans&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>Carli</name></author>
	</entry>
</feed>