Criterion v0.5 · Apple Silicon

Benchmarks

Statistical benchmarks collected with Criterion against Wasmtime 18.0 release builds. Every figure includes the full host↔guest memory-bridge overhead.

- 1 KB parse: 12.7 μs
- 10 KB parse: 12.9 μs
- Cold start: 0.50 ms
- Precompile win: 8.5×

Parse Throughput

| Payload | Median | Throughput | Typical Use |
|---|---|---|---|
| 1 KB | 12.7 μs | 78.7 MB/s | Modbus RTU frame |
| 10 KB | 12.9 μs | 775 MB/s | OPC UA ReadResponse |
| 100 KB | ~15 μs | ~6.5 GB/s | EDIFACT BAPLIE |
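The throughput column follows directly from payload size and median latency: because 1 MB = 10⁶ bytes and 1 s = 10⁶ μs, bytes per microsecond is numerically equal to MB/s. A minimal sketch of that arithmetic, using the sizes and medians from the table above (the 100 KB row rounds ~6.67 GB/s down to ~6.5):

```rust
/// Throughput in MB/s from a payload size in bytes and a median latency
/// in microseconds. The two factors of 10^6 cancel, so bytes per
/// microsecond *is* MB per second.
fn throughput_mb_per_s(payload_bytes: f64, median_us: f64) -> f64 {
    payload_bytes / median_us
}

fn main() {
    // (payload bytes, median microseconds) pairs from the table above.
    for (bytes, us) in [(1_000.0, 12.7), (10_000.0, 12.9), (100_000.0, 15.0)] {
        println!("{bytes:>9} B @ {us:>5} us -> {:.1} MB/s",
                 throughput_mb_per_s(bytes, us));
    }
}
```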

Cold Start

| Operation | Mean | Description |
|---|---|---|
| Module::new() | 0.35 ms | JIT compile from .wasm bytes |
| Module::deserialize() | 0.11 ms | Memory-map from .cwasm |
| Full pool create | 4.41 ms | Engine + compile + first instance |
| Pre-compiled pool | 0.50 ms | Engine + deserialize + first instance |
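The gap between the two module-load rows reflects Wasmtime's standard ahead-of-time pattern: compile once, persist the `.cwasm` artifact, and memory-map it at startup. A hedged sketch of that pattern, assuming the `wasmtime` and `anyhow` crates and hypothetical `parser.wasm` / `parser.cwasm` paths (not VECSLATE's actual loader):

```rust
use wasmtime::{Engine, Module};

fn main() -> anyhow::Result<()> {
    let engine = Engine::default();

    // Slow path (~0.35 ms in the table): JIT-compile raw .wasm bytes.
    let wasm = std::fs::read("parser.wasm")?; // hypothetical module path
    let module = Module::new(&engine, &wasm)?;

    // One-time step: persist the compiled artifact as .cwasm.
    std::fs::write("parser.cwasm", module.serialize()?)?;

    // Fast path (~0.11 ms in the table): load the precompiled artifact.
    // `deserialize_file` is `unsafe` because the bytes must come from a
    // trusted `serialize` call with a compatible Engine configuration.
    let _fast = unsafe { Module::deserialize_file(&engine, "parser.cwasm")? };
    Ok(())
}
```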

Runtime Comparison

1 KB payload parse latency:

| Runtime | Latency | Safety | Binary | Isolation |
|---|---|---|---|---|
| VECSLATE (Rust/Wasm) | 12.7 μs | ✓ Compile-time | 77 KB | ✓ Sandbox |
| Node.js (Buffer) | ~250 μs | GC | ~45 MB | ✗ Process |
| Python (struct) | ~1,000 μs | GC | ~90 MB | ✗ Process |
Methodology

Statistical benchmarks collected via Criterion v0.5 on an async Tokio runtime. The parse benchmark measures the full critical path: pool checkout → Wasm alloc → memcpy → parse → result read → dealloc.
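The pool-checkout bookends of that critical path can be sketched with a plain stdlib pool. This is an illustrative stand-in, not VECSLATE's actual pool type: a Mutex-guarded free list whose checkout guard returns the instance on drop.

```rust
use std::sync::{Arc, Mutex};

/// Minimal instance pool: `checkout` pops a ready instance, and the
/// returned guard pushes it back onto the free list when dropped.
struct Pool<T> {
    free: Arc<Mutex<Vec<T>>>,
}

struct Checked<T> {
    item: Option<T>,
    free: Arc<Mutex<Vec<T>>>,
}

impl<T> Pool<T> {
    fn new(items: Vec<T>) -> Self {
        Pool { free: Arc::new(Mutex::new(items)) }
    }

    /// Pool checkout: `None` when every instance is in use.
    fn checkout(&self) -> Option<Checked<T>> {
        let item = self.free.lock().unwrap().pop()?;
        Some(Checked { item: Some(item), free: Arc::clone(&self.free) })
    }
}

impl<T> std::ops::Deref for Checked<T> {
    type Target = T;
    fn deref(&self) -> &T {
        self.item.as_ref().unwrap()
    }
}

impl<T> Drop for Checked<T> {
    fn drop(&mut self) {
        // Return the instance to the free list (the "dealloc" bookend).
        if let Some(item) = self.item.take() {
            self.free.lock().unwrap().push(item);
        }
    }
}

fn main() {
    let pool = Pool::new(vec!["instance-a", "instance-b"]);
    {
        let a = pool.checkout().unwrap();
        let b = pool.checkout().unwrap();
        assert!(pool.checkout().is_none()); // pool exhausted
        println!("checked out: {} and {}", *a, *b);
    } // both guards dropped -> instances returned
    assert!(pool.checkout().is_some());
}
```

Returning instances via `Drop` keeps checkout leak-free even if the parse path returns early with an error.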

Reproduce locally: `cargo bench -p vecslate-core`