PoolingFilter is a no-op, pass-through filter that hands all events down the Mina filter chain by default. As it adds no behaviour of its own to the filter chain, it is abstract.
PoolingFilter provides sub-classes with the ability to handle events in the chain asynchronously, by adding them to a job. If the job is not active, adding an event to it activates it. If it is already active, the event is queued on the job, which will run to completion and eventually process the event. The queue on the job itself acts as a buffer between stages of the pipeline.
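As a rough illustration of this add-then-activate behaviour, here is a minimal sketch. The class name SketchJob and its methods are hypothetical stand-ins and do not mirror the real {@link Job} or {@link Event} API.

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch only: a job buffers events and activates itself on a thread
// pool when it goes from idle to having work; an already active job simply keeps
// draining its queue.
final class SketchJob implements Runnable {
    private final ConcurrentLinkedQueue<Runnable> events = new ConcurrentLinkedQueue<>();
    private final AtomicBoolean active = new AtomicBoolean(false);
    private final ExecutorService pool;

    SketchJob(ExecutorService pool) {
        this.pool = pool;
    }

    /** Buffers the event; if the job was idle, activates it by submitting it to the pool. */
    void add(Runnable event) {
        events.add(event);

        if (active.compareAndSet(false, true)) {
            pool.execute(this);
        }
    }

    /** Drains and processes buffered events in order, then goes idle. */
    public void run() {
        Runnable event;

        while ((event = events.poll()) != null) {
            event.run();
        }

        active.set(false);

        // Re-activate if an event arrived between the last poll() and going idle.
        if (!events.isEmpty() && active.compareAndSet(false, true)) {
            pool.execute(this);
        }
    }
}
```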
There are two convenience methods, {@link #createAynschReadPoolingFilter} and {@link #createAynschWritePoolingFilter}, for obtaining pooling filters that handle 'messageReceived' and 'filterWrite' events, making it possible to process these event streams separately.
Pooling filters have a name, in order to distinguish different filter types. They set up a {@link Job} on the Mina session they are working with, and store it in the session against their identifying name. This allows multiple filters with different names to be set up on the same filter chain, against the same Mina session, each batching its workload into a separate job.
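For example, the job could be looked up or created against the filter's name as a session attribute, roughly as in the sketch below. A plain map stands in for the Mina session's attribute table, and SketchJob from the previous example stands in for the real {@link Job}; all names here are hypothetical.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;

// Hypothetical sketch only: a map plays the role of the Mina session attributes.
final class SketchSession {
    private final Map<String, Object> attributes = new ConcurrentHashMap<>();

    Object getAttribute(String key) {
        return attributes.get(key);
    }

    void setAttribute(String key, Object value) {
        attributes.put(key, value);
    }
}

final class SketchPoolingFilter {
    private final String name; // e.g. a read filter and a write filter would use different names

    SketchPoolingFilter(String name) {
        this.name = name;
    }

    /**
     * Each filter stores its own job under its own name, so several filters can share
     * one session without their jobs colliding.
     */
    SketchJob jobFor(SketchSession session, ExecutorService pool) {
        SketchJob job = (SketchJob) session.getAttribute(name);

        if (job == null) {
            job = new SketchJob(pool);
            session.setAttribute(name, job);
        }

        return job;
    }
}
```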
CRC Card

Responsibilities | Collaborations
---|---
Implement default, pass-through filter. |
Create pooling filters and a specific thread pool. | {@link ReferenceCountingExecutorService}
Provide the ability to batch Mina events for asynchronous processing. | {@link Job}, {@link Event}
Provide a terminal continuation to keep jobs running till empty. | {@link Job}, {@link Job.JobCompletionHandler}
@todo This seems a bit bizarre. ReadWriteThreadModel creates separate pooling filters for read and write events. The pooling filters themselves batch read and write events into jobs, but hand these jobs to a common thread pool for execution. So the same thread pool ends up handling read and write events, albeit with many threads, so there is concurrency. But why go to the trouble of separating out the read and write events in that case? Why not just batch them into jobs together? Perhaps it is so that separate thread pools could be used for these stages.
@todo Why set an event limit of 10 on the Job? This also seems bizarre, as the job can have more than 10 events in it. It just runs them 10 at a time, and the completion handler checks whether there are more to run and trips off another batch of 10 until they are all done. Why not just have a straightforward producer/consumer queue without the batches of 10? Instead of having many jobs with batches of 10 in them, there would be one queue of events with worker threads taking the next event. There will be coordination between worker threads and new events arriving on the job anyway, so the simpler scenario may have the same amount of contention. I can see that the batching of 10 is done so that no job is allowed to hog the worker pool for too long, but I'm not convinced this fairly complex scheme actually adds anything, and it might be better to encapsulate it under a Queue interface anyway, so that different queue implementations can easily be substituted in.
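To make the scheme being questioned here concrete, a rough sketch of a batch-limited job whose completion handler re-submits it until its queue is empty follows. The names are hypothetical and this is not the real {@link Job} or {@link Job.JobCompletionHandler} API; it only illustrates how capping a run at a fixed number of events stops any one job from hogging a pool thread.

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.atomic.AtomicBoolean;

// Hypothetical sketch only: each run processes at most maxEvents events, then a
// completion callback re-submits the job if events remain.
final class BatchedJob implements Runnable {
    interface CompletionHandler {
        void completed(BatchedJob job, boolean moreEvents);
    }

    private final ConcurrentLinkedQueue<Runnable> events = new ConcurrentLinkedQueue<>();
    private final AtomicBoolean active = new AtomicBoolean(false);
    private final ExecutorService pool;
    private final int maxEvents;
    private final CompletionHandler onComplete;

    BatchedJob(ExecutorService pool, int maxEvents, CompletionHandler onComplete) {
        this.pool = pool;
        this.maxEvents = maxEvents; // the limit questioned above would be 10
        this.onComplete = onComplete;
    }

    /** Buffers the event; if the job was idle, activates it on the pool. */
    void add(Runnable event) {
        events.add(event);

        if (active.compareAndSet(false, true)) {
            pool.execute(this);
        }
    }

    /** Processes at most maxEvents events, then hands control to the completion handler. */
    public void run() {
        Runnable event;
        int processed = 0;

        while ((processed < maxEvents) && ((event = events.poll()) != null)) {
            event.run();
            processed++;
        }

        onComplete.completed(this, !events.isEmpty());
    }

    /** A terminal continuation that keeps the job running until its queue is empty. */
    static final CompletionHandler RESUBMIT_UNTIL_EMPTY = (job, more) -> {
        if (more) {
            job.pool.execute(job); // another batch is still waiting
        } else {
            job.active.set(false); // queue drained; job goes idle

            // Re-activate if an event raced in while the job was going idle.
            if (!job.events.isEmpty() && job.active.compareAndSet(false, true)) {
                job.pool.execute(job);
            }
        }
    };
}
```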
@todo The static helper methods are pointless. Could just call new.