capnwasm render bench

End-to-end pipeline: request → wire → decode → field reads → DOM mutation → forced layout. capnweb × capnwasm crossed with WebSocket × HTTP-batch, each at three sizes, with cold & warm timings. No averaging across transports — you see every cell.

Summary

Run a bench to see the rollup.

How to read this page

These are live end-to-end timings from this browser to the current Worker server. A cell includes the client call, Worker dispatch, server-side response encoding, transport, browser-side decode/deserialization, the field reads used by that workload, DOM mutation, and one forced layout via offsetHeight.
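A timed cell can be pictured as one stopwatch around the whole pipeline. This is an illustrative sketch, not the bench's actual harness; `timeCell` and `runWorkload` are hypothetical names standing in for the real fetch → decode → field-read → DOM-mutation sequence.

```javascript
// Minimal sketch of how one timed cell could be measured.
// `runWorkload` is a hypothetical stand-in for the real pipeline:
// client call, transport, decode, field reads, DOM mutation.
async function timeCell(runWorkload) {
  const t0 = performance.now();
  await runWorkload();            // everything listed above happens in here
  return performance.now() - t0;  // milliseconds for the whole cell
}
```

In the real bench the workload ends by reading `offsetHeight`, which forces the browser to flush layout before the timer stops, so layout cost is charged to the cell rather than deferred past it.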

capnwasm is expected to lose some rows. Even with Immer-style draft() projections, DOM work, transport scheduling, and capnweb's eager JSON materialization can dominate small cells. capnwasm tends to show up when the workload is binary-heavy, sparse-field, or benefits from batched Cap'n Proto wire reads. The useful result is the shape of the tradeoff, not a single winner.

Fairness model: each workload asks both libraries to produce the same DOM output. The code uses each library's intended fast path: capnweb returns eagerly materialized JS objects; capnwasm uses generated readers plus draft() projections when the app wants to materialize fields. If an optimization requires capnwasm-specific syntax, that is called out below.

Workload 1 — List render

Server returns N user records in one RPC reply. Client decodes, builds a <ul> with one <li> per row, forces layout via offsetHeight. Realistic shape: list view, table page, search results.

Same app output: render id, name, email, and active for every row. capnweb reads plain object properties; capnwasm uses one draft() projection with normal list .map(...) syntax to batch per-row field materialization.
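The shared render target can be sketched with plain objects standing in for what either library ultimately hands the app; `draft()` and the generated readers are capnwasm-specific and not reproduced here.

```javascript
// Both libraries must produce this exact markup: one <li> per user
// record, rendering id, name, email, and active.
function renderRows(rows) {
  return rows
    .map(u => `<li>${u.id} ${u.name} ${u.email} ${u.active}</li>`)
    .join('');
}

// Illustrative rows; in the bench these come from the RPC reply.
const rows = [
  { id: 1, name: 'Ada', email: 'ada@example.com', active: true },
  { id: 2, name: 'Lin', email: 'lin@example.com', active: false },
];
const html = renderRows(rows);
```

In the bench this string becomes a `<ul>`'s contents, followed by one `offsetHeight` read to force layout.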

Workload 2 — Sparse field access (read 3 of 32)

Server returns a 32-field metadata struct. Client reads only 3 fields and renders them. Cap'n Proto’s schema-evolved layout lets capnwasm skip the other 29 and batch the selected reads through draft(); capnweb's JSON round-trip materializes all 32 upfront. The size axis here is the number of consecutive requests fired in the same tick (testing batching).

This is intentionally a partial-read workload. Same app output: render only field0, field5, and field10. capnwasm uses draft(m => ({ field0: m.field0, field5: m.field5, field10: m.field10 })); capnweb still pays for parsing the full JSON object.
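The cost asymmetry can be modeled without either library: a lazy reader pays decode cost only for the fields actually touched, while an eager JSON decode pays for all 32 up front. This is an illustrative model, not the real capnwasm API.

```javascript
// Lazy-reader model: each field decodes only when accessed.
let lazyDecodes = 0;
function makeLazyReader() {
  const r = {};
  for (let i = 0; i < 32; i++) {
    Object.defineProperty(r, `field${i}`, {
      get() { lazyDecodes++; return `value${i}`; }, // decode on access
    });
  }
  return r;
}

// The workload's projection touches exactly three fields.
const m = makeLazyReader();
const projected = { field0: m.field0, field5: m.field5, field10: m.field10 };
// lazyDecodes is now 3; an eager decode would have paid for all 32.
```

The real `draft()` projection batches these three reads through one boundary crossing, which is the second half of the win on this workload.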

Workload 3 — Dense field access (read all 32)

Same metadata struct, but the client reads every field and renders all 32 to the DOM. capnweb's eager-decode model turns every field into a plain JS property read. capnwasm uses draft() here so the 32 reads are batched before rendering.

This is the full-read counterpart to sparse. Same app output: render all 32 fields. capnwasm uses one draft() projection over all 32 fields; capnweb reads all object properties.

Workload 4 — Re-read storm (10 fields × N reads, animation simulation)

After one fetch, read 10 fields N times each (simulating a re-render loop or per-frame animation). Both paths materialize the 10 fields once, then re-read plain JS values in the hot loop. The size axis sets N to small/medium/large. Each timed cell includes one fetch followed by repeated reads; increasing N makes the post-fetch read cost dominate the single fetch.

Same app output: repeatedly read the same 10 logical fields after one response. capnwasm first materializes those 10 fields with draft(), then rereads the cached JS values; capnweb rereads the JS object values it already materialized during JSON decode.
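The pattern both libraries converge on looks like this sketch (hypothetical names; `decodeField` stands in for whichever decode path produced the values): decode once, then the hot loop touches only cached plain JS values.

```javascript
// Materialize the 10 logical fields exactly once after the fetch.
function materializeOnce(decodeField) {
  const cache = {};
  for (let i = 0; i < 10; i++) cache[`f${i}`] = decodeField(i);
  return cache;
}

let decodes = 0;
const snapshot = materializeOnce(i => { decodes++; return i * 2; });

// Simulated per-frame loop: N re-reads per field, zero further decodes.
let sum = 0;
for (let n = 0; n < 1000; n++) {
  for (let i = 0; i < 10; i++) sum += snapshot[`f${i}`];
}
// decodes stays at 10 no matter how large N gets.
```

This is why the re-read storm mostly measures plain JS property access: after the one-time materialization, the libraries' wire formats no longer participate.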

Workload 5 — Binary blob round-trip

Server echoes K bytes of binary data. Client decodes, renders the byte length to DOM (no canvas paint — we're measuring wire + decode, not GPU work). capnwasm sends raw bytes; capnweb base64-encodes (1.33× bandwidth + parse cost both directions).

Same app output: display the echoed byte length. This is not a field-read benchmark; it isolates binary transport and decode/deserialize overhead.
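The 1.33× figure is just base64 arithmetic: every 3 input bytes become 4 output characters, so the encoded size of K raw bytes is 4 · ⌈K/3⌉.

```javascript
// Wire size of K raw bytes after base64 encoding (padding included).
function base64Length(k) {
  return 4 * Math.ceil(k / 3);
}

const K = 64 * 1024;                  // e.g. a 64 KiB blob
const overhead = base64Length(K) / K; // approaches 4/3 ≈ 1.333 for large K
```

capnwasm avoids this entirely by putting the raw bytes on the wire, so its blob cost scales with K rather than 1.33·K, before counting the encode/parse passes base64 adds on each side.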

Methodology