/**
 * A {@code FastCache} is a map implemented with soft references,
 * optimistic copy-on-write updates, and approximate count-based
 * pruning. It is intended for scalable multi-threaded caches. It
 * sacrifices recall of mappings for speed of put and get operations
 * along with conservative memory guarantees.
 *
 * <p><i>Note:</i> The class {@link HardFastCache} is nearly identical
 * to this class, but with no soft references around hash buckets.
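 *
 * <p>For example, a cache from strings to feature vectors might be
 * used as follows. This is only an illustrative sketch: the
 * two-argument constructor taking a number of hash buckets and a load
 * factor is assumed here, and {@code expensiveComputation} stands in
 * for whatever work is being cached:
 *
 * <pre>{@code
 * FastCache<String,double[]> cache
 *     = new FastCache<String,double[]>(5000, 0.75);
 *
 * double[] result = cache.get("feature");
 * if (result == null) {
 *     result = expensiveComputation("feature");
 *     cache.put("feature", result);  // may be dropped or lost in a race
 * }
 * }</pre>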
 *
 * <p>The basis of the cache is a fixed-size hash map, based on values
 * returned by objects' {@code hashCode()} and
 * {@code equals(Object)} methods.
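 *
 * <p>For illustration, one standard way to reduce a key's hash code to
 * an index into a fixed-size bucket array is sketched below; this is
 * not necessarily the exact formula used by this class:
 *
 * <pre>{@code
 * // mask off the sign bit, then take the remainder modulo the
 * // number of buckets
 * int bucketId = (key.hashCode() & 0x7FFFFFFF) % numBuckets;
 * }</pre>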
 *
 * <p>The map entries in the hash map are stored in buckets held by
 * soft references. Thus entries in the mapping may be garbage
 * collected. In the implementation, whole hash buckets are
 * collected, which is safe and highly efficient but may require some
 * additional recomputation of values that were removed by pruning.
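 *
 * <p>Conceptually, the storage can be pictured as an array of
 * {@link java.lang.ref.SoftReference} objects, each holding the head
 * of a chain of records; the type and field names below are only an
 * illustrative sketch, not this class's actual internals:
 *
 * <pre>{@code
 * // one soft reference per bucket; when its referent is collected,
 * // the entire chain of records in that bucket disappears at once
 * SoftReference<Record<K,V>>[] buckets;
 *
 * class Record<K,V> {
 *     final K key;
 *     final V value;
 *     final Record<K,V> next;  // rest of this bucket's chain
 *     int count = 1;           // approximate usage count
 *     Record(K key, V value, Record<K,V> next) {
 *         this.key = key;
 *         this.value = value;
 *         this.next = next;
 *     }
 * }
 * }</pre>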
 *
 * <p>Entries are stored in the map using a very optimistic update.
 * No synchronization at all is performed on the cache entries or
 * their counts. A copy-on-write strategy, coupled with Java's memory
 * model for references, ensures that the cache remains consistent, if
 * not complete. What this means is that two threads may both try to
 * cache a mapping, and only one will be saved and/or incremented in
 * count.
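 *
 * <p>In terms of the sketch above, an optimistic update builds a new
 * record in front of the current chain and publishes it with a single
 * reference assignment. Readers see either the old chain or the new
 * one, never a half-built chain, but one of two racing writers may
 * simply have its write lost:
 *
 * <pre>{@code
 * // copy-on-write insert into a bucket (illustrative only)
 * SoftReference<Record<K,V>> ref = buckets[bucketId];
 * Record<K,V> head = (ref == null) ? null : ref.get();
 * buckets[bucketId]
 *     = new SoftReference<Record<K,V>>(new Record<K,V>(key, value, head));
 * }</pre>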
 *
 * <p>When the table approximately exceeds the specified load factor,
 * the thread performing the add will perform a garbage collection by
 * reducing reference counts by half, rounding down, and removing
 * entries with zero counts. Pruning is subject to the caveats
 * mentioned in the last paragraph. Counts are not guaranteed to be
 * accurate. Pruning itself is synchronized and more conservatively
 * copy-on-write. By setting the load factor to
 * {@code Double.POSITIVE_INFINITY}, there will never be any
 * pruning done by this class; all pruning will take place by soft
 * reference garbage collection.
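 *
 * <p>Again in terms of the sketch above, a pruning pass can rebuild
 * each bucket, halving counts and dropping records whose halved count
 * is zero; this is an illustration of the technique, not this class's
 * exact code, and in this class the pass runs under synchronization:
 *
 * <pre>{@code
 * for (int i = 0; i < buckets.length; ++i) {
 *     Record<K,V> chain = (buckets[i] == null) ? null : buckets[i].get();
 *     Record<K,V> pruned = null;
 *     for (Record<K,V> r = chain; r != null; r = r.next) {
 *         if (r.count / 2 > 0) {                // survives the halving
 *             Record<K,V> copy = new Record<K,V>(r.key, r.value, pruned);
 *             copy.count = r.count / 2;
 *             pruned = copy;
 *         }
 *     }
 *     buckets[i] = new SoftReference<Record<K,V>>(pruned);  // publish
 * }
 * }</pre>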
 *
 * <p>A fast cache acts as a mapping with copy-on-write semantics.
 * Equality and iteration are defined as usual, with the caveat that
 * the snapshot taken of the elements may not be up to date. Iterators
 * may be used concurrently, but their remove methods do not affect
 * the underlying cache.
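 *
 * <p>For example, a point-in-time copy of the mapping can be taken
 * with the usual collection constructors, with the understanding that
 * updates made after the copy is taken will not be reflected in it:
 *
 * <pre>{@code
 * Map<String,double[]> snapshot
 *     = new HashMap<String,double[]>(cache);  // cache as in the example above
 * }</pre>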
 *
 * <h3>Serialization</h3>
 *
 * <p>A fast cache may be serialized if the keys and values it
 * contains are serializable. It may always be serialized if
 * it is first cleared using {@link #clear()}.
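 *
 * <p>For instance, a cache whose keys and values are serializable, or
 * any cache that has first been cleared, can be written out with the
 * standard object streams (the file name here is just an example):
 *
 * <pre>{@code
 * ObjectOutputStream out
 *     = new ObjectOutputStream(new FileOutputStream("cache.ser"));
 * try {
 *     out.writeObject(cache);
 * } finally {
 *     out.close();
 * }
 * }</pre>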
 */

/**
 * Returns the value of the specified key, or {@code null} if
 * there is no value attached. Note that the argument is not
 * the generic {@code <K>} key type, but {@code Object},
 * to match the requirements of {@code java.util.Map}.
 *
 * <p><b>Warning:</b> Because of the approximate cache-like
 * behavior of this class, key-value pairs previously added
 * by the {@link #put(Object,Object)} method may disappear.
 *
 * @param key Mapping key.
 * @return The value for the specified key.
 */
@Override
public V get(Object key) {
int bucketId = bucketId(key);
    // scan the bucket's record chain for a matching key; the record
    // type, its fields, and the getFirstRecord() helper are assumed
    // here, as the original source is truncated at this point
    for (Record<K,V> record = getFirstRecord(bucketId);
         record != null; record = record.next) {
        if (record.key.equals(key))
            return record.value;
    }
    return null;
}

/**
 * Sets the value for the specified key in the cache.
 *
 * <p><b>Warning:</b> Because of the approximate cache-like
* behavior of this class, setting the value of a key with this
* method is not guaranteed to replace older values or remain in
* the mapping until the next call to {@link #get(Object)}.
*
* @param key Mapping key.
* @param value New value for the specified key.
 * @return {@code null}, even if there is an existing
 * value for the specified key.
*/
@Override
public V put(K key, V value) {
int bucketId = bucketId(key);
Record