capnwasm render bench
End-to-end pipeline: request → wire → decode → field reads →
DOM mutation → forced layout. capnweb × capnwasm crossed with WebSocket × HTTP-batch,
each at three sizes, with cold & warm timings. No averages between transports — you see every cell.
Summary
How to read this page
These are live end-to-end timings from this browser to the current Worker server.
A cell includes the client call, Worker dispatch, server-side response encoding,
transport, browser-side decode/deserialization, the field reads used by that
workload, DOM mutation, and one forced layout via offsetHeight.
capnwasm is expected to lose some rows. Even with Immer-style draft()
projections, DOM work, transport scheduling, and capnweb's eager JSON materialization
can dominate small cells. capnwasm tends to show up when the workload is
binary-heavy, sparse-field, or benefits from batched Cap'n Proto wire reads.
The useful result is the shape of the tradeoff, not a single winner.
Fairness model: each workload asks both libraries to produce the same DOM output.
The code uses each library's intended fast path: capnweb returns eagerly materialized
JS objects; capnwasm uses generated readers plus draft() projections when the app
wants to materialize fields. If an optimization requires capnwasm-specific syntax,
that is called out below.
Workload 1 — List render
Server returns N user records in one RPC reply. Client decodes, builds a
<ul> with one <li> per row, forces layout via
offsetHeight. Realistic shape: list view, table page, search results.
Same app output: render id, name, email, and active for every row.
capnweb reads plain object properties; capnwasm uses one draft() projection with
normal list .map(...) syntax to batch per-row field materialization.
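The two fast paths can be sketched as follows. This is illustrative only: the reader shape and `draft()` signature are assumptions based on the description above, backed by a minimal stand-in implementation so the example runs on its own; real capnwasm readers decode fields from the wire on access.

```typescript
type UserRow = { id: number; name: string; email: string; active: boolean };

// Stand-in lazy reader: wraps a plain object with getters to show the
// call shape (real readers would decode from Cap'n Proto wire bytes).
function makeReader(row: UserRow) {
  return {
    get id() { return row.id; },
    get name() { return row.name; },
    get email() { return row.email; },
    get active() { return row.active; },
  };
}

// Stand-in draft(): batches per-row field materialization into one pass.
function draft<T, R>(items: T[], project: (item: T) => R): R[] {
  return items.map(project);
}

// capnweb path: eagerly materialized plain objects, direct property reads.
function renderCapnweb(rows: UserRow[]): string[] {
  return rows.map(r => `${r.id} ${r.name} ${r.email} ${r.active}`);
}

// capnwasm path: one draft() projection over the reader list, using
// ordinary .map(...) syntax inside the projection.
function renderCapnwasm(readers: ReturnType<typeof makeReader>[]): string[] {
  return draft(readers, r => `${r.id} ${r.name} ${r.email} ${r.active}`);
}
```

Both functions produce identical strings per row, which is the fairness contract: same DOM output, each library on its intended path.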
Workload 2 — Sparse field access (read 3 of 32)
Server returns a 32-field metadata struct. Client reads only 3 fields and renders them.
Cap'n Proto’s schema-evolved layout lets capnwasm skip the other 29 and batch the
selected reads through draft(); capnweb's JSON round-trip materializes all 32 upfront.
The size axis here is the number of consecutive requests fired in the same tick
(testing batching).
This is intentionally a partial-read workload. Same app output: render only
field0, field5, and field10. capnwasm uses
draft(m => ({ field0: m.field0, field5: m.field5, field10: m.field10 }));
capnweb still pays for parsing the full JSON object.
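The asymmetry can be made concrete with a counting stand-in (not capnwasm's real reader): a Proxy that tallies field materializations shows the sparse path touching exactly 3 fields, while a JSON round-trip builds all 32 properties up front no matter what the app reads afterwards.

```typescript
// Stand-in lazy reader that counts how many fields were materialized.
function makeCountingReader(fields: Record<string, string>) {
  let touched = 0;
  const reader = new Proxy(fields, {
    get(target, prop) {
      touched += 1; // each access stands in for one lazy field decode
      return target[prop as string];
    },
  });
  return { reader, touchedCount: () => touched };
}

// 32-field metadata struct (field0 ... field31).
const fields: Record<string, string> = {};
for (let i = 0; i < 32; i++) fields[`field${i}`] = `v${i}`;

// Sparse path: only the 3 selected fields are ever materialized.
const { reader: m, touchedCount } = makeCountingReader(fields);
const projected = { field0: m.field0, field5: m.field5, field10: m.field10 };

// JSON round-trip: all 32 properties exist after parsing, read or not.
const eager = JSON.parse(JSON.stringify(fields)) as Record<string, string>;
```

The 29 skipped fields are never decoded on the sparse path; the eager object pays for all of them during parse.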
Workload 3 — Dense field access (read all 32)
Same metadata struct, but the client reads every field and renders all 32 to the DOM.
capnweb's eager-decode model turns every field into a plain JS property read.
capnwasm uses draft() here so the 32 reads are batched before rendering.
This is the full-read counterpart to sparse. Same app output: render all 32 fields.
capnwasm uses one draft() projection over all 32 fields; capnweb reads all object properties.
Workload 4 — Re-read storm (10 fields × N reads, animation simulation)
After one fetch, read 10 fields N times each, simulating a re-render loop or per-frame animation. Both paths materialize the 10 fields once, then re-read plain JS values in the hot loop. The size axis picks N = small/medium/large. Each timed cell includes one fetch followed by the repeated reads; increasing N makes the post-fetch read cost dominate the single fetch.
Same app output: repeatedly read the same 10 logical fields after one response.
capnwasm first materializes those 10 fields with draft(), then rereads the cached JS values;
capnweb rereads the JS object values it already materialized during JSON decode.
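The materialize-once-then-reread pattern looks like this. Names are illustrative, not capnwasm's API: in the real benchmark the one-time step would be a single `draft()` projection over the 10 logical fields; here a plain decode function stands in so the sketch is self-contained.

```typescript
type Cached = Record<string, number>;

// One-time step: materialize the fields into plain JS values.
// (Stand-in for a single draft() projection over 10 fields.)
function materializeOnce(decode: () => Cached): Cached {
  return decode();
}

// Hot loop: only ordinary JS property reads, no wire decoding.
function rereadStorm(cached: Cached, n: number): number {
  let acc = 0;
  const keys = Object.keys(cached);
  for (let i = 0; i < n; i++) {
    for (const k of keys) acc += cached[k]; // plain cached-value reads
  }
  return acc;
}
```

After the one-time materialization, both libraries' hot loops are identical plain-object reads, which is why this workload mostly measures the single fetch plus raw JS property access.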
Workload 5 — Binary blob round-trip
Server echoes K bytes of binary data. Client decodes, renders the byte length to DOM (no canvas paint — we're measuring wire + decode, not GPU work). capnwasm sends raw bytes; capnweb base64-encodes (1.33× bandwidth + parse cost both directions).
Same app output: display the echoed byte length. This is not a field-read benchmark; it isolates binary transport and decode/deserialize overhead.
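The 1.33× figure follows from base64 arithmetic: every 3 input bytes become 4 output characters, padded up to a multiple of 4. A minimal sketch of the wire-size math:

```typescript
// Base64 output length for n input bytes: 4 chars per 3-byte group,
// with the final partial group padded to a full 4 chars.
function base64Length(nBytes: number): number {
  return Math.ceil(nBytes / 3) * 4;
}

// Inflation factor relative to sending raw bytes.
function inflation(nBytes: number): number {
  return base64Length(nBytes) / nBytes;
}
```

For any large blob the factor converges to 4/3 ≈ 1.33, and capnweb pays it in both directions on this workload, plus the encode/decode CPU cost.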
Methodology
- Cold = the very first call against a transport (includes WS handshake / first POST connect, and for capnwasm the wasm fetch + compile if not cached).
- Warm = median of the configured iteration count, after a warmup pass.
- End-to-end includes client request construction, Worker dispatch, server-side response encode/serialize, transport, browser-side decode/deserialize, every .field access used by the render path, DOM mutation, and forced layout via offsetHeight.
- Both libraries hit the same Worker endpoints in the deployed path (or the matching Vite shim during frontend-only dev), with matching method semantics, so the only thing varying is the library + transport.
- Same app output, different API shape: capnweb exposes eager JS objects; capnwasm exposes lazy readers and uses Immer-style draft() projections where batching is needed. The benchmark uses those intended paths rather than forcing both libraries through the same syntax.
- Partial vs full reads are separated: sparse reads 3/32 fields; dense reads 32/32 fields; reread materializes 10 fields once and then rereads cached JS values.
- capnwasm's sparse-field bench actually skips reads: we don't materialize the 29 fields we don't need, and the selected fields are read through one batched draft() projection. capnweb has no equivalent (the JSON is fully parsed upfront), so it's an architectural win for capnp.
- capnweb base64-inflates binary; capnwasm sends raw bytes: on the blob workload the wire-bytes column tells the story.
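The warm-timing rule above can be sketched as a small helper (the function name and shape are illustrative, not the benchmark's actual harness): run once untimed as warmup, then take the median of the timed iterations, which resists outliers like a stray GC pause better than a mean.

```typescript
// One warmup pass, then the median of `iters` timed runs.
// performance.now() is available in both browsers and modern Node.
function warmMedianMs(run: () => void, iters: number): number {
  run(); // warmup pass, not timed
  const samples: number[] = [];
  for (let i = 0; i < iters; i++) {
    const t0 = performance.now();
    run();
    samples.push(performance.now() - t0);
  }
  samples.sort((a, b) => a - b);
  const mid = Math.floor(samples.length / 2);
  return samples.length % 2 === 1
    ? samples[mid]
    : (samples[mid - 1] + samples[mid]) / 2;
}
```

Cold cells, by contrast, are the single first call and are reported as-is, so handshake and wasm-compile costs stay visible instead of being averaged away.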