The HashJoin pipe allows two or more tuple streams to join into a single stream via a {@link Joiner}, when all but one tuple stream is considered small enough to fit into memory.
When planned onto MapReduce, this is effectively a non-blocking "asymmetrical join" or "replicated join", where the left-most side will not block (accumulate into memory) in order to complete the join, but the right-most sides will. See below...
No aggregations can be performed with a HashJoin pipe, as there is no guarantee all the values associated with a given grouping key will be encountered together. In fact, an Aggregator would see the same grouping many times, each with a partial set of values.
For every incoming {@link Pipe} instance, a {@link Fields} instance must be specified that denotes the field names or positions that should be joined with the other given Pipe instances. If the incoming Pipe instances declare one or more fields with the same name, the {@code declaredFields} argument must be given to name the outgoing Tuple stream fields and overcome the field name collisions.
By default HashJoin performs an inner join via the {@link cascading.pipe.joiner.InnerJoin} {@link cascading.pipe.joiner.Joiner} class.
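For example, a minimal sketch of joining a large streamed pipe with a small accumulated pipe (pipe and field names here are illustrative, not part of the API):
<pre>{@code
Pipe lhs = new Pipe( "events" ); // larger stream, streamed (left-most side)
Pipe rhs = new Pipe( "users" );  // smaller stream, accumulated into memory

// both streams declare a field named "id", so declaredFields is required
Fields declared = new Fields( "id1", "event", "id2", "name" );

Pipe join = new HashJoin( lhs, new Fields( "id" ), rhs, new Fields( "id" ), declared, new InnerJoin() );
}</pre>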
Self joins can be achieved by using a constructor that takes a single Pipe and a numSelfJoins value. A value of 1 for numSelfJoins will join the Pipe with itself once. Note that a self join will block until all data is accumulated thus the stream must be reasonably small.
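For example, a sketch of a single self join (pipe and field names are illustrative):
<pre>{@code
Pipe users = new Pipe( "users" ); // assume the stream declares fields "id" and "name"

// joining the stream with itself duplicates every field name,
// so declaredFields must rename the outgoing fields
Fields declared = new Fields( "id1", "name1", "id2", "name2" );

Pipe join = new HashJoin( users, new Fields( "id" ), 1, declared );
}</pre>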
Note "outer" joins on the left most side will not behave as expected. All observed keys on the right most sides will be emitted with {@code null} for the left most stream, thus when running distributed, duplicate values willemerge from every Map task split on the MapReduce platform.
HashJoin does not scale well to large data sizes, thus the stream with more data should be placed on the left-hand side and joined with sparser data on the right-hand side. That is, always attempt to effect M x N joins where M is large and N is small, instead of where M is small and N is large. Right-hand side streams will be accumulated, and spilled to disk if the collection reaches a given threshold when using Hadoop.
If spills are happening, consider increasing the spill thresholds; see {@link cascading.tuple.collect.SpillableTupleMap}.
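For example, a sketch of raising the threshold through Flow properties, assuming the {@code MAP_THRESHOLD} property key constant on {@link cascading.tuple.collect.SpillableTupleMap} (the value shown is arbitrary):
<pre>{@code
Properties properties = new Properties();

// number of tuples the accumulated side may hold in memory before spilling to disk
properties.setProperty( SpillableTupleMap.MAP_THRESHOLD, "100000" ); // arbitrary value

FlowConnector flowConnector = new HadoopFlowConnector( properties );
}</pre>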
If one of the right-hand side streams starts out larger than memory but is filtered (likely by a {@link cascading.operation.Filter} implementation) down to the point it fits into memory, it may be useful to use a {@link Checkpoint} Pipe to persist the stream and force a new FlowStep (MapReduce job) to read the data from disk, instead of applying the filter redundantly. This will minimize the amount of data "replicated" across the network.
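For example, a sketch of filtering and then checkpointing the right-hand stream (the filter, pipe, and field names are illustrative):
<pre>{@code
Pipe lhs = new Pipe( "events" );
Pipe rhs = new Pipe( "users" );

// reduce the over-sized stream to a memory-sized one (illustrative filter)
rhs = new Each( rhs, new Fields( "status" ), new RegexFilter( "active" ) );

// force a new FlowStep so the filtered result is persisted and re-read from disk,
// instead of the filter being re-applied in every task that replicates the stream
rhs = new Checkpoint( "filteredUsers", rhs );

Pipe join = new HashJoin( lhs, new Fields( "userId" ), rhs, new Fields( "id" ) );
}</pre>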
See the {@link cascading.tuple.collect.TupleCollectionFactory} and {@link cascading.tuple.collect.TupleMapFactory} for a means to use alternative spillable types.
@see cascading.pipe.joiner.InnerJoin
@see cascading.pipe.joiner.OuterJoin
@see cascading.pipe.joiner.LeftJoin
@see cascading.pipe.joiner.RightJoin
@see cascading.pipe.joiner.MixedJoin
@see cascading.tuple.Fields
@see cascading.tuple.collect.SpillableTupleMap