For an application that receives a continuous stream of very large (multi-gigabyte) binary data, which is the most memory-efficient approach to concatenating the incoming chunks?

A Node.js interview question for advanced-level practice.

Answer

Pipe the incoming readable stream directly to a writable stream (e.g., a file or network socket).
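
A minimal sketch of this approach, assuming the data arrives as a Node.js Readable stream and is persisted to disk; the `incoming` parameter name and the `output.bin` path are illustrative:

```js
const { pipeline } = require('node:stream/promises');
const fs = require('node:fs');

// `incoming` stands in for any Readable delivering the binary data
// (an HTTP request body, a TCP socket, another file, ...).
async function persist(incoming) {
  // pipeline() moves data chunk by chunk and propagates backpressure,
  // so only a small, bounded amount of data is in memory at any time.
  await pipeline(incoming, fs.createWriteStream('output.bin'));
}
```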

Explanation

Piping a readable stream to a writable stream is by far the most memory-efficient method: data is processed chunk by chunk, and backpressure pauses the source whenever the destination falls behind, so only a small, bounded amount of data is ever held in memory. Collecting every chunk in an array and concatenating at the end consumes memory proportional to the total data size. Calling Buffer.concat on every incoming chunk is even worse, since each call allocates a new, larger buffer and copies everything received so far, causing significant memory churn. Converting chunks to strings and concatenating them adds encoding overhead (and risks corrupting binary data split across chunk boundaries) on top of the same copying costs.
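
For contrast, a sketch of the repeated-concatenation anti-pattern described above; the `incoming` stream here is a tiny illustrative stand-in for the real data source:

```js
const { Readable } = require('node:stream');

// Illustrative stand-in for the real multi-gigabyte source.
const incoming = Readable.from([Buffer.from('chunk1'), Buffer.from('chunk2')]);

// Anti-pattern: Buffer.concat allocates a fresh, larger buffer and
// copies every byte received so far on each chunk, so copying cost is
// quadratic and the full dataset ends up resident in memory.
let accumulated = Buffer.alloc(0);
incoming.on('data', (chunk) => {
  accumulated = Buffer.concat([accumulated, chunk]);
});
```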