A {@link ChannelHandler} that adds support for writing a large data stream asynchronously, without spending a lot of memory or risking a {@link java.lang.OutOfMemoryError}. Large data streaming, such as file transfer, requires complicated state management in a {@link ChannelHandler} implementation. {@link ChunkedWriteHandler} manages such complicated states so that you can send a large data stream without difficulties.
To use {@link ChunkedWriteHandler} in your application, you have to insert a new {@link ChunkedWriteHandler} instance into your {@link ChannelPipeline}:
{@link ChannelPipeline} p = ...;
p.addLast("streamer", new {@link ChunkedWriteHandler}());
p.addLast("handler", new MyHandler());
Once inserted, you can write a {@link ChunkedInput} so that the {@link ChunkedWriteHandler} can pick it up, fetch the content of the stream chunk by chunk, and write the fetched chunk downstream:
{@link Channel} ch = ...;
ch.write(new {@link ChunkedFile}(new File("video.mkv")));
Sending a stream which generates a chunk intermittently
Some {@link ChunkedInput} implementations generate a chunk only on a certain event or timing. Such an implementation often returns {@code null} from {@link ChunkedInput#nextChunk()}, resulting in an indefinitely suspended transfer. To resume the transfer when a new chunk becomes available, you have to call {@link #resumeTransfer()}.
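For example, a handler that keeps a reference to the {@link ChunkedWriteHandler} can wake the suspended transfer from an event callback (the callback name below is hypothetical; only {@link #resumeTransfer()} is part of this API):

{@link ChunkedWriteHandler} streamer = new {@link ChunkedWriteHandler}();
p.addLast("streamer", streamer);
...
// Hypothetical notification that the underlying {@link ChunkedInput}
// can now produce another chunk (i.e. nextChunk() would no longer
// return {@code null}):
public void onNewDataAvailable() {
    streamer.resumeTransfer();
}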
@apiviz.landmark
@apiviz.has io.netty.handler.stream.ChunkedInput oneway - - reads from