Bench v2: redesigning the performance testing workflow
Bench v1 was built in a weekend. It worked, but the UX was clearly a weekend project. Version 2 is the real thing.
What changed
The biggest addition: distributed test runners. You can now spin up workers across regions and aggregate results in real time. The latency histogram updates live as tests run.
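The fan-out/aggregate shape described above can be sketched roughly like this. All names here are illustrative, not Bench's actual API or wire protocol: workers push latency samples onto a shared queue, and an aggregator drains it into one merged view that a live histogram could re-render from.

```python
import queue
import threading

def worker(region, samples, out):
    # A real worker would issue requests and time them; here we
    # replay canned samples tagged with the worker's region.
    for ms in samples:
        out.put((region, ms))

def aggregate(out, expected):
    merged = []
    for _ in range(expected):
        region, ms = out.get()
        merged.append(ms)  # a live histogram would re-render from this
    return sorted(merged)

out = queue.Queue()
regions = {"us-east": [12, 15, 90], "eu-west": [20, 22, 25]}
threads = [threading.Thread(target=worker, args=(r, s, out))
           for r, s in regions.items()]
for t in threads:
    t.start()
for t in threads:
    t.join()

merged = aggregate(out, expected=6)
```

In a real deployment the queue would be a network stream rather than in-process, but the ordering-independent merge is the same idea: results arrive from regions in any order and the aggregate stays consistent.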
The UI was rebuilt from scratch with a clearer mental model: configure → run → analyze. No more jumping between tabs.
Why the original design failed
V1 treated configuration as a one-time setup. In practice, you're constantly tweaking — adjusting concurrency, changing payload size, re-running to verify a fix. The interface needed to support that loop, not just the initial setup.
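That tweak-and-rerun loop is easier to see as data. A minimal sketch, assuming a frozen config object (the `RunConfig` dataclass and its fields are hypothetical, not Bench's real schema): each iteration is a small delta on the previous run, not a from-scratch setup.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class RunConfig:
    concurrency: int
    payload_bytes: int
    duration_s: int

base = RunConfig(concurrency=50, payload_bytes=1024, duration_s=60)

# Each loop iteration tweaks one knob and re-runs:
heavier = replace(base, concurrency=200)            # bump concurrency
bigger = replace(heavier, payload_bytes=64 * 1024)  # then payload size
```

Immutable configs make the loop safe to repeat: every tweak yields a new run definition, and the one you just verified a fix against is still there to diff.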
The histogram
The latency histogram is the part I'm most proud of. P50, P95, and P99 render live as the test runs, with a color band that shifts from green to red as variance increases. You can see the moment your backend starts struggling without waiting for the run to finish.
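The percentile and color-band logic can be sketched as follows. The nearest-rank percentile and the coefficient-of-variation thresholds are illustrative assumptions; the post doesn't state Bench's actual formulas or cutoffs.

```python
import statistics

def percentile(samples, p):
    # Nearest-rank percentile over a sorted copy of the samples.
    s = sorted(samples)
    k = max(0, min(len(s) - 1, round(p / 100 * len(s)) - 1))
    return s[k]

def band_color(samples):
    # Shift green -> red as relative spread (coefficient of
    # variation = stdev / mean) grows; thresholds are made up.
    cv = statistics.pstdev(samples) / statistics.mean(samples)
    if cv < 0.25:
        return "green"
    if cv < 0.75:
        return "yellow"
    return "red"

latencies = [12, 13, 13, 14, 15, 16, 18, 22, 35, 90]
p50, p95, p99 = (percentile(latencies, p) for p in (50, 95, 99))
```

Using relative rather than absolute spread is what lets the band work across wildly different baselines: a backend hovering at 5 ms and one at 500 ms both stay green as long as their latencies are tight around the mean.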
What's next
Persistent test suites — save a configuration and run it on every deploy. The groundwork is in v2; the UI is next.