fix(cli-kit): batch large output chunks to prevent event loop blocking#7008
Conversation
When a POS UI extension throws a large stack trace (3MB+), all lines arrive as a single write to ConcurrentOutput's Writable stream. The synchronous stripAnsi + split + React state update causes a long render cycle that blocks the Node.js event loop, making keyboard shortcuts (q, p) unresponsive.

Fix: split chunks exceeding 100 lines into batches and schedule each via setImmediate, yielding to the event loop between renders so Ink's useInput hook can process keypresses between batches.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
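A minimal sketch of the batching approach this commit describes: split an oversized chunk into fixed-size line batches and schedule each render on a separate event-loop tick via setImmediate. The names here (`writeInBatches`, `renderBatch`) are illustrative stand-ins, not the actual cli-kit identifiers; only `MAX_LINES_PER_BATCH` and the setImmediate scheduling come from the commit message.

```javascript
const MAX_LINES_PER_BATCH = 100

// Split a chunk into line batches and yield to the event loop between them,
// so input handling (Ink's useInput) can run between renders.
function writeInBatches(chunk, renderBatch, done) {
  const lines = chunk.toString('utf8').split('\n')
  let index = 0
  const processNext = () => {
    if (index >= lines.length) return done()
    renderBatch(lines.slice(index, index + MAX_LINES_PER_BATCH))
    index += MAX_LINES_PER_BATCH
    // setImmediate defers the next batch to a later event-loop tick.
    setImmediate(processNext)
  }
  processNext()
}
```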
Coverage report
Test suite run success: 3807 tests passing in 1462 suites. Report generated by the 🧪 jest coverage report action from 1afb5c2.
Benchmarking with a 3.85MB stack trace (30k lines) shows:

- batch=100: 121ms max event loop block
- batch=20: 44ms max event loop block (vs 27,266ms on main)
- batch=10: 35ms max event loop block

Batch=20 hits the sweet spot: 4x lower max block than batch=100 with essentially the same total flush time (~380ms vs ~430ms).

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
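For context, a "max event loop block" figure like the ones above can be measured with a simple harness: schedule a timer every tickMs, and treat any delay beyond that interval as time the loop spent blocked. This is a hedged sketch of my own, not the benchmark script actually used in this PR.

```javascript
// Returns a stop() function that reports the longest observed event-loop
// stall, measured as extra delay beyond the expected timer interval.
function trackEventLoopBlocking(tickMs = 10) {
  let last = Date.now()
  let maxBlock = 0
  const timer = setInterval(() => {
    const now = Date.now()
    maxBlock = Math.max(maxBlock, now - last - tickMs)
    last = now
  }, tickMs)
  return () => {
    clearInterval(timer)
    return maxBlock
  }
}
```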
nice job! the current approach is in the right direction, but using setImmediate with recursion isn't going to give you backpressure (something node is good at managing), it's just spreading work across event loop ticks. i'm pretty sure you can leverage node's streams to split the work and get backpressure automatically with something like this:
const {Transform, Writable} = require('node:stream')

const splitter = new Transform({
transform(chunk, _encoding, callback) {
const lines = chunk.toString('utf8').replace(/\n$/, '').split('\n')
for (let i = 0; i < lines.length; i += MAX_LINES_PER_BATCH) {
this.push(lines.slice(i, i + MAX_LINES_PER_BATCH).join('\n'))
}
callback()
},
})
const writable = new Writable({
write(chunk, _encoding, next) {
// existing logic, unchanged — always gets small chunks now
const log = chunk.toString('utf8')
// ...
next()
},
})
splitter.pipe(writable)
// hand `splitter` to the process as its stdout/stderr
that way, when the writer hasn't called next() yet, the internal buffer fills up and node automatically pauses the producer. that lets us stay mostly blind to the chunking management in the main flow and lets node stream according to the local machine's memory limits.
…form+pipe

Replace the manual recursive-setImmediate approach with a proper Node.js stream pipeline:

- Transform (splitter): reads outputContextStore while still in the writer's async context, strips ANSI, and splits large chunks into MAX_LINES_PER_BATCH (20) line pieces. Single-batch writes pass through unchanged.
- Writable (sink): renders each batch into React state. For large-chunk batches, setImmediate(next) yields the event loop between renders so keyboard shortcuts (q, p) can fire. It also creates real Node.js backpressure: while next() is pending, the pipe pauses the splitter, capping memory use from fast producers without manual bookkeeping. Single-batch writes call next() synchronously to preserve existing rendering behaviour.

Benchmark (3.85 MB / 30k-line stack trace):

- main: 27,266ms max event loop block
- recursive fix: 44ms max event loop block
- stream fix: 32ms max event loop block

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

Problem
When a POS UI extension throws a large stack trace, the entire output arrives as a single write() call to ConcurrentOutput's Writable stream. The synchronous processing of large chunks (stripAnsi(), split(/\n/), and setProcessOutput with a spread of the full accumulated array) causes a render cycle expensive enough to block the Node.js event loop. During that time, Ink's useInput hook cannot process keypresses, so q (quit) and p (preview) become unresponsive.

Fix

The writableStream write handler now uses a two-stage pipeline: a Transform stream that splits large chunks into batches of MAX_LINES_PER_BATCH (20) lines, followed by a Writable sink that renders each batch. For batches from large chunks, setImmediate(next) yields to the event loop between renders so Ink's input handling can run.

- Single-batch writes: next() called synchronously (existing behaviour preserved)
- Large-chunk batches: setImmediate scheduling between each batch

The Transform stream preserves the existing outputContextStore context (prefix and stripAnsi overrides) and maintains proper Node.js backpressure by deferring next() until each batch is processed.

Test

Added a test that writes 250 lines as a single chunk and verifies all lines appear in the rendered output without being dropped during batch processing.