A general-purpose batcher that combines messages into batches. Callers of process do not block. Configurable parameters control the number of messages that may be queued awaiting processing, the maximum size of a batch, the maximum time a message waits to be combined with others in a batch, and the size of the pool of threads that process batches.
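A minimal sketch of the non-blocking entry point, assuming the batcher is backed by a bounded queue; the class and member names here are illustrative, not the actual fields of this class:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    class EnqueueSketch<T> {
        // Bounded queue: its capacity is the "number of messages that may be
        // queued awaiting processing" knob described above (10_000 is an
        // arbitrary example value).
        private final BlockingQueue<T> queue = new ArrayBlockingQueue<>(10_000);

        // Non-blocking: offer() returns immediately even when the queue is full.
        // Whether a rejected message is dropped or handled otherwise is an
        // assumption of this sketch, not a statement about the real class.
        boolean process(T message) {
            return queue.offer(message);
        }
    }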
The implementation aims to avoid congestion by working more efficiently as load increases. As messages arrive faster, the collector executes less code and batch sizes increase (up to the configured maximum). It should be more efficient to process a batch than to process its messages individually.
The implementation works by adding arriving messages to a queue. The collector thread takes messages from the queue and gathers them into batches. When a batch is big enough or old enough, the collector passes it to the processor, which hands the batch to the target stream.
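A rough sketch of that collector loop; the thresholds and helper names (maxBatchSize, maxDelayMs, processor) are assumptions chosen for illustration:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import java.util.concurrent.TimeUnit;
    import java.util.function.Consumer;

    class CollectorSketch<T> implements Runnable {
        private final BlockingQueue<T> queue = new LinkedBlockingQueue<>();
        private final int maxBatchSize = 250;        // "big enough" (example value)
        private final long maxDelayMs = 100;         // "old enough" (example value)
        private final Consumer<List<T>> processor;   // hands the batch onward

        CollectorSketch(Consumer<List<T>> processor) {
            this.processor = processor;
        }

        @Override
        public void run() {
            List<T> batch = new ArrayList<>(maxBatchSize);
            long batchStart = System.currentTimeMillis();
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    // Wait briefly for the next message so the age check still fires.
                    T msg = queue.poll(10, TimeUnit.MILLISECONDS);
                    if (msg != null) {
                        if (batch.isEmpty()) {
                            batchStart = System.currentTimeMillis();
                        }
                        batch.add(msg);
                    }
                    boolean bigEnough = batch.size() >= maxBatchSize;
                    boolean oldEnough = !batch.isEmpty()
                            && System.currentTimeMillis() - batchStart >= maxDelayMs;
                    if (bigEnough || oldEnough) {
                        processor.accept(batch);              // pass to the processor
                        batch = new ArrayList<>(maxBatchSize);
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();       // exit the loop
                }
            }
        }
    }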
The processor maintains a thread pool. If there is more work than threads, the collector participates in processing by default and consequently stops collecting more batches.
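That caller-participates behaviour can be illustrated with a standard java.util.concurrent thread pool; this is a sketch of the idea, not necessarily how this class wires it up:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.ThreadPoolExecutor;
    import java.util.concurrent.TimeUnit;

    class ProcessorPoolSketch {
        // When every worker thread is busy and the hand-off queue is full,
        // CallerRunsPolicy makes the submitting thread (the collector) execute
        // the batch itself, which naturally pauses collection until it finishes.
        static ThreadPoolExecutor newProcessorPool(int threads) {
            return new ThreadPoolExecutor(
                    threads, threads,
                    0L, TimeUnit.MILLISECONDS,
                    new ArrayBlockingQueue<>(threads),           // small hand-off buffer
                    new ThreadPoolExecutor.CallerRunsPolicy());  // collector helps out
        }
    }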
@author Karthik Ranganathan