Implementation:
Note that this implementation is not synchronized. Internally it uses a {@link cern.colt.map.OpenIntObjectHashMap}, which is a compact and performant hashing technique.
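Because the class is not synchronized, concurrent access in which at least one thread modifies the matrix must be coordinated externally. A minimal sketch of one way to do this, assuming the standard cern.colt.matrix.impl.SparseObjectMatrix3D constructor; the wrapper class and method names are illustrative only and not part of the library:

    import cern.colt.matrix.impl.SparseObjectMatrix3D;

    public class SynchronizedAccessSketch {
        private final SparseObjectMatrix3D matrix = new SparseObjectMatrix3D(10, 10, 10);

        // All reads and writes funnel through synchronized methods, so the
        // unsynchronized matrix is never touched by two threads at once.
        public synchronized void put(int slice, int row, int column, Object value) {
            matrix.set(slice, row, column, value);
        }

        public synchronized Object read(int slice, int row, int column) {
            return matrix.get(slice, row, column);
        }
    }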
Memory requirements:
Cells that are never set to non-zero values do not use any memory; only the non-zero cells are stored.
worst case: memory [bytes] = (1/minLoadFactor) * nonZeros * 13.
best case: memory [bytes] = (1/maxLoadFactor) * nonZeros * 13.
Where nonZeros = cardinality() is the number of non-zero cells. Thus, a 100 x 100 x 100 matrix with minLoadFactor=0.25 and maxLoadFactor=0.5 and 1000000 non-zero cells consumes between 25 MB and 50 MB. The same 100 x 100 x 100 matrix with 1000 non-zero cells consumes between 25 and 50 KB.
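The example figures above follow directly from the two formulas. A small sketch evaluating them in Java (variable names are illustrative only):

    public class MemoryEstimateSketch {
        public static void main(String[] args) {
            double minLoadFactor = 0.25;
            double maxLoadFactor = 0.5;
            long nonZeros = 1000000;  // cardinality() of the 100 x 100 x 100 example

            double worstCaseBytes = (1 / minLoadFactor) * nonZeros * 13;  // 52,000,000 bytes, ~50 MB
            double bestCaseBytes  = (1 / maxLoadFactor) * nonZeros * 13;  // 26,000,000 bytes, ~25 MB

            System.out.println("best case:  " + bestCaseBytes + " bytes");
            System.out.println("worst case: " + worstCaseBytes + " bytes");
        }
    }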
Time complexity:
This class offers expected time complexity O(1) (i.e. constant time) for the basic operations get, getQuick, set, setQuick and size, assuming the hash function disperses the elements properly among the buckets. Otherwise, pathological cases, although highly improbable, can occur, degrading performance to O(N) in the worst case. As such, this sparse class is expected to have no worse time complexity than its dense counterpart {@link DenseObjectMatrix3D}. However, constant factors are considerably larger.
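A minimal usage sketch of these basic operations, assuming the standard constructor SparseObjectMatrix3D(slices, rows, columns); the stored string values are placeholders:

    import cern.colt.matrix.impl.SparseObjectMatrix3D;

    public class BasicUsageSketch {
        public static void main(String[] args) {
            SparseObjectMatrix3D matrix = new SparseObjectMatrix3D(100, 100, 100);

            matrix.set(0, 1, 2, "someValue");     // bounds-checked write, expected O(1)
            Object value = matrix.get(0, 1, 2);   // bounds-checked read, expected O(1)
            matrix.setQuick(3, 4, 5, "other");    // unchecked write, expected O(1)

            int nonZeros = matrix.cardinality();  // number of non-zero (non-null) cells
            System.out.println(value + ", cardinality = " + nonZeros);
        }
    }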
Cells are internally addressed in (in decreasing order of significance) slice major, row major, column major order. Applications demanding utmost speed can exploit this fact. Setting/getting values in a loop slice-by-slice, row-by-row, column-by-column is quicker than, for example, column-by-column, row-by-row, slice-by-slice. Thus
for (int slice=0; slice < slices; slice++) {
   for (int row=0; row < rows; row++) {
      for (int column=0; column < columns; column++) {
         matrix.setQuick(slice,row,column,someValue);
      }
   }
}
is quicker than
for (int column=0; column < columns; column++) {
   for (int row=0; row < rows; row++) {
      for (int slice=0; slice < slices; slice++) {
         matrix.setQuick(slice,row,column,someValue);
      }
   }
}
@see cern.colt.map
@see cern.colt.map.OpenIntObjectHashMap
@author wolfgang.hoschek@cern.ch
@version 1.0, 09/24/99