A fast reference management system. The least recently used items move to the back of the list and are spooled to disk if the cache hub is configured to use a disk cache. Most cache bottlenecks are in I/O; there are no I/O bottlenecks here, so performance is bound by processing power. Although only a few pointer adjustments are needed to maintain the doubly linked list, a more efficient memory manager might be worthwhile for large cache regions. The LRUMemoryCache is most efficient when the first element is selected; the smaller the region, the better the chance that this will be the case. Performance: less than 0.04 ms per put on a Pentium III 866 MHz, and roughly one tenth of that per get.
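The eviction semantics described above can be sketched with a capacity-bounded `LinkedHashMap` in access order. This is an illustrative sketch only, not the actual implementation: the real LRUMemoryCache maintains its own doubly linked list and spools evicted items to a disk cache, whereas this sketch simply discards the eldest entry; the class name `LruSketch` is hypothetical.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/**
 * Minimal sketch of LRU eviction, assuming a fixed capacity.
 * Evicted entries are discarded here; the real cache would
 * spool them to disk at the eviction point instead.
 */
public class LruSketch<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruSketch(int capacity) {
        // accessOrder = true: iteration runs from least- to most-recently used,
        // so the "eldest" entry is the LRU candidate for eviction.
        super(16, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // In the real cache, this is where the LRU item would be
        // handed to the disk cache before being dropped from memory.
        return size() > capacity;
    }
}
```

For example, with capacity 2, putting "a" and "b", touching "a" with a get, and then putting "c" evicts "b", because "b" is now the least recently used entry.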
@author
Aaron Smuts
@author
James Taylor
@author
John McNally
@created May 13, 2002
@version $Id: LRUMemoryCache.java,v 1.17 2002/07/27 06:07:12 jmcnally Exp $