Incoming events arrive asynchronously and are routed into two queues: one for server-initiated events, and one for the results of client-requested background jobs.
Each queue is serviced by its own thread pool (so that server-initiated events are delivered with the lowest possible latency), and each queue is guaranteed to be processed, and its listeners notified, in the order in which events are received off the wire.
This design ensures that incoming event processing is never blocked by a long-running listener. However, the listeners on a given queue are notified sequentially, so one slow listener can add latency for the other listeners.

@author david varnes
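The two-queue design described above can be sketched as a pair of single-threaded executors, one per queue. This is an illustrative sketch, not the library's actual implementation; the class and method names (`EventDispatcher`, `onServerEvent`, `onBackgroundJobResult`) are hypothetical. A single-threaded executor is a simple way to get the stated guarantees: per-queue FIFO ordering of listener notification, and isolation so that a slow listener on one queue cannot delay the other queue.

```java
import java.util.*;
import java.util.concurrent.*;

// Hypothetical sketch of the two-queue dispatch model: one
// single-threaded executor per queue preserves arrival order
// within each queue, while keeping the queues independent.
public class EventDispatcher {

    // Server-initiated events get their own thread so they are
    // never queued behind background-job result processing.
    private final ExecutorService serverEventPool = Executors.newSingleThreadExecutor();
    private final ExecutorService backgroundJobPool = Executors.newSingleThreadExecutor();

    public void onServerEvent(Runnable notifyListeners) {
        serverEventPool.execute(notifyListeners);
    }

    public void onBackgroundJobResult(Runnable notifyListeners) {
        backgroundJobPool.execute(notifyListeners);
    }

    public void shutdown() throws InterruptedException {
        serverEventPool.shutdown();
        backgroundJobPool.shutdown();
        serverEventPool.awaitTermination(5, TimeUnit.SECONDS);
        backgroundJobPool.awaitTermination(5, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        EventDispatcher dispatcher = new EventDispatcher();
        List<Integer> observedOrder = Collections.synchronizedList(new ArrayList<>());

        // Submit five "events"; the single-threaded executor
        // guarantees listeners see them in submission order.
        for (int i = 0; i < 5; i++) {
            final int seq = i;
            dispatcher.onServerEvent(() -> observedOrder.add(seq));
        }
        dispatcher.shutdown();
        System.out.println(observedOrder); // prints [0, 1, 2, 3, 4]
    }
}
```

Note that within one queue the listeners still run on a single thread, which is exactly why one slow listener delays the others on that queue; ordering and parallel listener execution are in direct tension here.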