## Summary
A tight loop calling `buf.readInt32BE(i*4)` completes normally up to ~250k iterations, then crashes silently (no output, no error message) somewhere between 250k and 300k iterations. The exit code is 0, so the failure also propagates invisibly through shell pipelines, making this very easy to miss.
Discovered while benchmarking #92 — trivial to reproduce on `main` at v0.5.179.
## Repro
```ts
console.log('N=' + process.argv[2]);
const N = parseInt(process.argv[2] || '100000');
const buf = Buffer.alloc(N * 4);
console.log('alloc ok, len=' + buf.length);
for (let i = 0; i < N; i++) buf.writeInt32BE(i * 37, i * 4);
console.log('fill ok');
let sum = 0;
for (let i = 0; i < N; i++) sum += buf.readInt32BE(i * 4);
console.log('read ok, sum=' + (sum & 0xFFFF));
```
```
$ perry compile bench92_crash.ts -o bench92_crash && for n in 100000 200000 250000 300000 500000 1000000; do echo "--- N=$n ---"; ./bench92_crash $n; done
--- N=100000 ---
N=100000
alloc ok, len=400000
fill ok
read ok, sum=49008
--- N=200000 ---
N=200000
alloc ok, len=800000
fill ok
read ok, sum=29408
--- N=250000 ---
N=250000
alloc ok, len=1000000
fill ok
read ok, sum=18456
--- N=300000 ---
N=300000
alloc ok, len=1200000
fill ok # <-- no "read ok" line; process exits silently
--- N=500000 ---
N=500000
alloc ok, len=2000000
fill ok # <-- same silent failure
--- N=1000000 ---
N=1000000
alloc ok, len=4000000
# <-- crashes even during the fill loop now
```
Node and Bun complete all sizes in single-digit ms.
## What seems to be happening
- `Buffer.alloc` is fine up to at least 4 MB (1M × 4 bytes) — the "alloc ok" line prints at every size.
- The `writeInt32BE` fill loop crashes only at very large sizes (first visible at N=1M, where even `fill ok` never prints).
- The `readInt32BE` loop's threshold is ~250k iterations: 250k works, 300k fails. The threshold appears iteration-count dependent, not buffer-size dependent: N=300k and N=500k both silently stop after `fill ok`, even though the N=500k run had just completed 500k `writeInt32BE` calls without issue — so the trigger is something about the ~300k `readInt32BE` call count, not the buffer size.
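One way to separate the two variables: read a constant tiny buffer N times, so the `readInt32BE` call count grows while the allocation size stays fixed. A hypothetical follow-up repro (same compile-and-run flow as above; this script is a sketch, not something I've run under Perry yet):

```typescript
// Sketch: fixed 8-byte buffer, variable call count. If this also dies
// near ~300k iterations under Perry, the trigger is the cumulative
// readInt32BE call count, not the buffer size.
const N = parseInt(process.argv[2] || '300000', 10);
const tiny = Buffer.alloc(8);
tiny.writeInt32BE(1234, 0);
let sum = 0;
for (let i = 0; i < N; i++) sum += tiny.readInt32BE(0);
console.log(`tiny-buffer reads ok, sum=${sum}`);
```

Node prints the final line for any N; under Perry, a silent exit here would confirm the call-count theory.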
Plausible causes to investigate:
- GC triggering mid-loop and corrupting something (the mark-sweep GC has an arena block allocation threshold)
- Arena bounds being exceeded
- A counter wrap or signed overflow in the read path (2^18 = 262,144 sits squarely inside the failing 250k–300k window)
- The 0 exit code: the crash happens after a flushed `console.log`, and the process appears to go down via a silent `process::exit(0)`-style path or SIGKILL rather than an abort, so failure is never reported
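On the last point, a quick shell sanity check (independent of Perry) shows why the 0 exit code is informative: a process killed by a signal reports 128+signum to the shell, so a clean 0 after truncated output points at an explicit exit(0)-style path rather than an OS-level kill:

```shell
# A signal-killed process reports 128+signum to the shell (KILL = 9 -> 137);
# a process that calls exit(0) reports 0.
sh -c 'kill -KILL $$'; echo "signal exit=$?"   # prints: signal exit=137
sh -c 'exit 0';        echo "clean  exit=$?"   # prints: clean  exit=0
```

If Perry were dying on SIGSEGV/SIGKILL without a custom handler, we'd expect 139/137 here, not 0.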
## Impact
This crash currently blocks accurate benchmarking of #92 (bulk decode throughput). Any measurement past ~250k Buffer reads hits this instead of measuring the primitive cost.
Also potentially affects any Perry app doing bulk Buffer work — large Postgres result sets, file-to-buffer processing, binary codecs. The silent-exit-0 shape is the worst possible failure mode.
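Until this is fixed, benchmark harnesses can't trust the exit status. A sketch of a sentinel-line guard for CI/bench scripts — the `printf` here stubs in for the real `./bench92_crash 300000` output from this issue:

```shell
# Gate on the expected final line instead of the (useless) exit code.
# Stand-in for: out=$(./bench92_crash 300000)
out=$(printf 'N=300000\nalloc ok, len=1200000\nfill ok\n')
if ! printf '%s\n' "$out" | grep -q 'read ok'; then
  echo 'silent failure detected'   # prints, since "read ok" never appeared
fi
```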
## Environment
- Perry 0.5.179
- macOS arm64 (Apple Silicon)
Related: #92 (bulk decode perf).