Implementing LRU Cache

The LRU (Least Recently Used) cache algorithm maintains a fixed-capacity cache by evicting the least recently used items when new items need to be added. This ensures that frequently accessed data remains readily available while discarding less frequently accessed data, optimizing both memory usage and access performance.

Key Components of an LRU Cache

Data Structures

  1. Hash Map (Unordered Map)

    • Purpose: Provides O(1) lookup time for cache keys
    • Stores: Key → Iterator mapping, where each key maps to its position in the doubly-linked list
    • Why needed: Enables instant access to any cached item without traversing the list
  2. Doubly-Linked List

    • Purpose: Maintains the access order of cached items
    • Structure: Each node stores a key-value pair
    • Order: Most Recently Used (MRU) at the front → Least Recently Used (LRU) at the back
    • Why doubly-linked: Allows O(1) insertion, deletion, and repositioning of nodes
    • Key advantage: The splice() operation can move a node to the front in O(1) time by simply rewiring pointers

Core Operations

Get(key) - O(1) Time Complexity

  1. Look up the key in the hash map
  2. If not found: Return -1 (cache miss)
  3. If found:
    • Retrieve the value from the linked list node
    • Move the node to the front of the list (mark as most recently used)
    • Return the value

Put(key, value) - O(1) Time Complexity

  1. Check if the key already exists in the hash map:

    • If yes: Update the value and move the node to the front
  2. If the key is new:

    • Check if cache is full (size == capacity):
      • If yes: Evict the LRU item (remove the back node from the list and its key from the hash map)
    • Insert the new key-value pair at the front of the list
    • Add the key and its list iterator to the hash map

Time & Space Complexity

  • Get Operation: O(1)
  • Put Operation: O(1)
  • Space Complexity: O(capacity) - stores at most capacity items

Why This Design?

The combination of a hash map and doubly-linked list provides:

  • Fast lookups via hash map (O(1))
  • Fast updates to access order via list operations (O(1))
  • Efficient eviction by removing from the back of the list (O(1))
  • No traversal needed for any operation

Step-by-Step Example: LRU Cache Operations

Below is a detailed walkthrough of an LRU cache with capacity = 2.

Initial State

Capacity: 2
LRU List: []
Cache:    {}

Step 1: cache.Put(1, 1)

Operation: Insert key 1 with value 1

  • Key 1 doesn’t exist, cache not full
  • Insert [1,1] at front

State after operation:

LRU List: [1,1]
          ^MRU/LRU

Cache: {1 → iter[1,1]}

Step 2: cache.Put(2, 2)

Operation: Insert key 2 with value 2

  • Key 2 doesn’t exist, cache not full (size=1 < capacity=2)
  • Insert [2,2] at front

State after operation:

LRU List: [2,2] → [1,1]
          ^MRU    ^LRU

Cache: {1 → iter[1,1], 2 → iter[2,2]}

Step 3: cache.Get(1)

Operation: Access key 1

  • Key 1 found! Move [1,1] to front (most recently used)
  • Output: 1

State after operation:

LRU List: [1,1] → [2,2]
          ^MRU    ^LRU

Cache: {1 → iter[1,1], 2 → iter[2,2]}

Step 4: cache.Put(3, 3)

Operation: Insert key 3 with value 3

  • Key 3 doesn’t exist
  • ⚠️ Cache is FULL (size=2 == capacity=2)
  • Evict LRU: lru.back() is [2,2] → Remove key 2
  • Insert [3,3] at front

State after operation:

LRU List: [3,3] → [1,1]
          ^MRU    ^LRU

Cache: {1 → iter[1,1], 3 → iter[3,3]}

🗑️ Key 2 was evicted!


Step 5: cache.Get(2)

Operation: Access key 2

  • Key 2 NOT found (was evicted in Step 4)
  • Output: -1

State after operation:

LRU List: [3,3] → [1,1]
          ^MRU    ^LRU

Cache: {1 → iter[1,1], 3 → iter[3,3]}

Step 6: cache.Put(4, 4)

Operation: Insert key 4 with value 4

  • Key 4 doesn’t exist
  • ⚠️ Cache is FULL (size=2 == capacity=2)
  • Evict LRU: lru.back() is [1,1] → Remove key 1
  • Insert [4,4] at front

State after operation:

LRU List: [4,4] → [3,3]
          ^MRU    ^LRU

Cache: {3 → iter[3,3], 4 → iter[4,4]}

🗑️ Key 1 was evicted!


Step 7: cache.Get(1)

Operation: Access key 1

  • Key 1 NOT found (was evicted in Step 6)
  • Output: -1

State after operation:

LRU List: [4,4] → [3,3]
          ^MRU    ^LRU

Cache: {3 → iter[3,3], 4 → iter[4,4]}

Step 8: cache.Get(3)

Operation: Access key 3

  • Key 3 found! Move [3,3] to front
  • Output: 3

State after operation:

LRU List: [3,3] → [4,4]
          ^MRU    ^LRU

Cache: {3 → iter[3,3], 4 → iter[4,4]}

Step 9: cache.Get(4)

Operation: Access key 4

  • Key 4 found! Move [4,4] to front
  • Output: 4

State after operation:

LRU List: [4,4] → [3,3]
          ^MRU    ^LRU

Cache: {3 → iter[3,3], 4 → iter[4,4]}

Final Output

1 -1 -1 3 4


C++ program implementing the LRU cache.

#include <unordered_map>
#include <list>
#include <iostream>

using namespace std;

class LRUCache {
    int capacity;
    list<pair<int,int>> lru;  // (key, value) nodes, ordered MRU (front) -> LRU (back)
    unordered_map<int, list<pair<int,int>>::iterator> cache; // key -> iterator into lru

public:
    LRUCache(int cap) : capacity(cap) {}

    int Get(int key) {
        auto it = cache.find(key);
        if (it == cache.end())
            return -1;
        // Move the accessed node to the front (MRU position).
        // list.splice(destination_position, source_list, element_to_move)
        // splice moves (not copies) an element to another position in O(1)
        // by rewiring pointers: detach the node, reattach it at the front.
        // No traversal, no copying.
        lru.splice(lru.begin(), lru, it->second);
        return it->second->second;
    }

    void Put(int key, int value) {
        if (capacity == 0) return;

        auto it = cache.find(key);
        if (it != cache.end()) {
            // Just update the value and move to front as it is the most recently used
            it->second->second = value;
            //  list.splice(destination_position, source_list, element_to_move)
            lru.splice(lru.begin(), lru, it->second);
            return;
        }

        // Evict the LRU item if the cache is full
        if (cache.size() == static_cast<size_t>(capacity)) {
            int lru_key = lru.back().first;
            // Remove the last element from the list as it is the least recently used
            // and remove the corresponding key from the cache
            lru.pop_back();
            cache.erase(lru_key);
        }

        // Insert the new (key, value) pair at the front of the list (MRU position)
        lru.emplace_front(key, value);
        // Map the key to the iterator pointing at the new front node
        cache[key] = lru.begin();
    }
};

int main() {

    LRUCache cache(2);
    cache.Put(1,1);
    cache.Put(2,2);
    cout << cache.Get(1) << " ";
    cache.Put(3,3);
    cout << cache.Get(2) << " ";
    cache.Put(4,4);
    cout << cache.Get(1) << " ";
    cout << cache.Get(3) << " ";
    cout << cache.Get(4) << " ";
    return 0;
}

Output

1 -1 -1 3 4


© 2019-2026 Algotree.org | All rights reserved.
