Least Recently Used (LRU) Cache
Java Solution
- HashMap for O(1) reads and writes.
- Maintain a doubly linked list in which the head is the oldest entry and the tail is the most recent.
- In both get() and set(), move the accessed node to the tail of the list.
- When capacity is reached, remove the node at the head.
import java.util.HashMap;
import java.util.Map;

public class LRU {
    class Node {
        Node prev = null;
        Node next = null;
        int key;
        int val;

        public Node(int key, int val) {
            this.key = key;
            this.val = val;
        }
    }

    // Sentinel nodes: head.next is the least recently used entry,
    // tail.prev is the most recently used one.
    private Node head = new Node(0, 0);
    private Node tail = new Node(0, 0);
    private int capacity;
    private Map<Integer, Node> map;

    // @param capacity, an integer
    public LRU(int capacity) {
        head.next = tail;
        tail.prev = head;
        this.capacity = capacity;
        this.map = new HashMap<>(capacity);
    }

    // @return the value for key, or -1 if the key is absent
    public int get(int key) {
        if (!map.containsKey(key)) {
            return -1;
        }
        Node n = map.get(key);
        moveToTail(n); // a read counts as a use
        return n.val;
    }

    // @param key, an integer
    // @param value, an integer
    public void set(int key, int value) {
        if (map.containsKey(key)) {
            Node n = map.get(key);
            n.val = value;
            moveToTail(n);
        } else {
            if (map.size() == capacity) {
                // Evict the least recently used node, right after head.
                map.remove(head.next.key);
                head.next = head.next.next;
                head.next.prev = head;
            }
            Node newNode = new Node(key, value);
            tail.prev.next = newNode;
            newNode.prev = tail.prev;
            newNode.next = tail;
            tail.prev = newNode;
            map.put(key, newNode);
        }
    }

    // Unlink n and re-insert it just before tail (most recent position).
    private void moveToTail(Node n) {
        n.prev.next = n.next;
        n.next.prev = n.prev;
        tail.prev.next = n;
        n.prev = tail.prev;
        n.next = tail;
        tail.prev = n;
    }
}
set() and get() should be synchronized to ensure thread safety.
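One minimal sketch of that idea (an illustrative class, not part of the solution above): back the cache with java.util.LinkedHashMap and mark both public methods synchronized, so concurrent callers cannot observe or corrupt a half-updated structure.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative sketch: a thread-safe LRU cache guarded by intrinsic locks.
class SynchronizedLRU<K, V> {
    private final Map<K, V> map;

    SynchronizedLRU(final int capacity) {
        // accessOrder = true gives LRU iteration order; the eldest entry
        // is dropped once the size exceeds the capacity.
        this.map = new LinkedHashMap<K, V>(capacity + 1, 1.0f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity;
            }
        };
    }

    public synchronized V get(K key) {
        return map.get(key);
    }

    public synchronized void set(K key, V value) {
        map.put(key, value);
    }

    public static void main(String[] args) {
        SynchronizedLRU<Integer, Integer> cache = new SynchronizedLRU<>(2);
        cache.set(1, 1);
        cache.set(2, 2);
        cache.get(1);    // key 1 becomes the most recently used
        cache.set(3, 3); // evicts key 2
        System.out.println(cache.get(2)); // null
    }
}
```

Intrinsic locks serialize every call; for read-heavy workloads a ReadWriteLock is not an option here, because even get() mutates the access order.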
Modified LinkedHashMap
Java has a built-in LinkedHashMap. Extend it and override its removeEldestEntry() method.
import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCache<K, V> extends LinkedHashMap<K, V> {
    private int capacity; // Maximum number of items in the cache.

    public LRUCache(int capacity) {
        super(capacity + 1, 1.0f, true); // Pass 'true' for accessOrder.
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > this.capacity;
    }
}
To make it thread-safe, wrap it with Collections.synchronizedMap:
Map<String, String> example = Collections.synchronizedMap(new LRUCache<String, String>(CACHE_SIZE));
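The eviction behavior of the subclass can be checked with a short demo (the class is repeated inline so the snippet compiles on its own):

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

class LRUCacheDemo {
    // Same idea as the subclass above, inlined for a self-contained example.
    static class LRUCache<K, V> extends LinkedHashMap<K, V> {
        private final int capacity;

        LRUCache(int capacity) {
            super(capacity + 1, 1.0f, true); // accessOrder = true
            this.capacity = capacity;
        }

        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > capacity;
        }
    }

    public static void main(String[] args) {
        Map<String, String> cache =
                Collections.synchronizedMap(new LRUCache<String, String>(2));
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");      // "a" is now the most recently used
        cache.put("c", "3"); // evicts "b", the least recently used
        System.out.println(cache.keySet()); // [a, c]
    }
}
```

Note that even through the synchronizedMap wrapper, a get() still reorders the backing LinkedHashMap, since the wrapper simply delegates.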
The relevant LinkedHashMap constructor (from the JDK source):
/**
 * Constructs an empty <tt>LinkedHashMap</tt> instance with the
 * specified initial capacity, load factor and ordering mode.
 *
 * @param initialCapacity the initial capacity
 * @param loadFactor      the load factor
 * @param accessOrder     the ordering mode - <tt>true</tt> for
 *                        access-order, <tt>false</tt> for insertion-order
 * @throws IllegalArgumentException if the initial capacity is negative
 *                                  or the load factor is nonpositive
 */
public LinkedHashMap(int initialCapacity,
                     float loadFactor,
                     boolean accessOrder) {
    super(initialCapacity, loadFactor);
    this.accessOrder = accessOrder;
}
- access-order: every access (reading, updating, or inserting) moves the entry to the end of the linked list.
- insertion-order: only insertions change the iteration order.
- load factor: how full the hash table may get before it is resized. With an initial capacity of capacity+1 and a load factor of 1.0f, the table never needs to grow.
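The difference between the two ordering modes is easy to observe directly:

```java
import java.util.LinkedHashMap;
import java.util.Map;

class OrderDemo {
    public static void main(String[] args) {
        // accessOrder = true: a get() moves the entry to the end
        // of the iteration order.
        Map<Integer, String> access = new LinkedHashMap<>(16, 0.75f, true);
        access.put(1, "a");
        access.put(2, "b");
        access.put(3, "c");
        access.get(1); // moves key 1 to the end
        System.out.println(access.keySet()); // [2, 3, 1]

        // accessOrder = false (insertion order): get() does not reorder.
        Map<Integer, String> insertion = new LinkedHashMap<>(16, 0.75f, false);
        insertion.put(1, "a");
        insertion.put(2, "b");
        insertion.put(3, "c");
        insertion.get(1);
        System.out.println(insertion.keySet()); // [1, 2, 3]
    }
}
```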
Redis Approximate LRU
Redis LRU implementation: http://antirez.com/news/109
The 24 bits in the object are enough to store the least significant bits of the current unix time in seconds. This representation, called “LRU clock” inside the source code of Redis, takes 194 days to overflow.
The initial Redis algorithm was simple: when a key had to be evicted, select 3 random keys and evict the one with the highest idle time. Basically, Redis samples the key space at random and evicts the key that happens to be the best candidate. Later the “3 random keys” became a configurable “N random keys”, and the algorithm was sped up so that the default could be raised to 5 sampled keys without losing performance.
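That sampling loop can be sketched in Java (a toy illustration with invented names, not the actual Redis C code): track a last-access timestamp per key, pick N random candidates, and evict the one idle the longest.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Random;

// Toy sketch of approximate LRU via random sampling.
class ApproxLRU {
    // key -> last-access timestamp (a stand-in for Redis's 24-bit LRU clock)
    private final Map<String, Long> lastAccess = new HashMap<>();
    private final Random random = new Random();

    void touch(String key, long now) {
        lastAccess.put(key, now);
    }

    int size() {
        return lastAccess.size();
    }

    // Sample up to `samples` random keys (with replacement) and evict
    // the one with the highest idle time among the sampled candidates.
    String evictOne(int samples, long now) {
        List<String> keys = new ArrayList<>(lastAccess.keySet());
        String victim = null;
        long maxIdle = -1;
        for (int i = 0; i < samples && !keys.isEmpty(); i++) {
            String candidate = keys.get(random.nextInt(keys.size()));
            long idle = now - lastAccess.get(candidate);
            if (idle > maxIdle) {
                maxIdle = idle;
                victim = candidate;
            }
        }
        lastAccess.remove(victim);
        return victim;
    }
}
```

The appeal of this scheme is that it needs no linked list at all: eviction cost depends only on the sample size, and larger samples approximate exact LRU more closely.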