Cache eviction algorithms (LRU, SIEVE, FIFO) need to find the 'nth oldest' entry to evict when the cache reaches capacity.
Here's a simplified LRU (Least Recently Used) cache using a Map:
```js
class LRUCache {
  constructor(limit) {
    this.limit = limit;
    this.cache = new Map();
  }

  get(key) {
    if (!this.cache.has(key)) return null;
    const value = this.cache.get(key);
    // Refresh: delete and re-insert to update order
    this.cache.delete(key);
    this.cache.set(key, value);
    return value;
  }

  set(key, value) {
    if (this.cache.has(key)) {
      this.cache.delete(key);
    } else if (this.cache.size >= this.limit) {
      // Evict the FIRST (oldest) entry
      const firstKey = this.cache.keys().next().value;
      this.cache.delete(firstKey);
    }
    this.cache.set(key, value);
  }
}
```
Currently, getting the 'first' key is easy (.next().value). But getting the 'nth' key (e.g., for SIEVE, which needs to traverse from a 'hand' position) requires either:
[...cache.keys()][n] (O(n) memory)
Or manual iteration with a counter (verbose)
map.at(n) would make this cleaner and more memory-efficient, especially for caches with millions of entries where spreading to an array is prohibitive.
ok, but the only reason to use a Map there - instead of a normal object - is if you can't represent the keys as strings. I've never seen a cache that doesn't use strings as keys - even if only for logging/debugging.
Even when the keys are strings, Map has concrete advantages over a plain object here:
Insertion order is guaranteed for all keys (objects have edge cases with numeric keys)
.size is directly available
Iteration is more ergonomic
Deletion is faster
No risk of prototype pollution (__proto__ keys)
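A quick illustration of the numeric-key edge case: plain objects enumerate integer-like string keys in ascending numeric order first, while Map preserves pure insertion order:

```javascript
// Plain object: integer-like keys are reordered numerically.
const obj = {};
obj["10"] = "first";
obj["2"] = "second";
console.log(Object.keys(obj)); // ["2", "10"] — not insertion order

// Map: insertion order is preserved for every key type.
const map = new Map();
map.set("10", "first");
map.set("2", "second");
console.log([...map.keys()]); // ["10", "2"]
```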
That's why many modern cache implementations (including LRU caches in popular libraries) use Map even when keys are strings. It's not about key type — it's about the guaranteed order, cleaner API, and better performance for deletions.
If Maps were only for non-string keys, the committee wouldn't have added guaranteed insertion order for all keys. The fact that they did suggests Maps are intended as a general-purpose ordered collection — not just a 'non-string object replacement.'
Anytime I've had to implement an LRU or similar cache, I've just kept another structure for the order. Removing and re-inserting into a map/set is just too expensive. And as noted a lot of cache cases also need weak keys, which means using Weak collections that aren't iterable.
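For reference, the separate order structure I mean looks roughly like this: a Map for O(1) lookup plus an intrusive doubly linked list for recency, so a refresh is pointer surgery instead of a Map delete + re-insert. This is only a sketch; `LinkedLRU` and the node shape are illustrative, not from any particular library:

```javascript
class LinkedLRU {
  constructor(limit) {
    this.limit = limit;
    this.map = new Map(); // key -> list node
    this.head = null;     // most recently used
    this.tail = null;     // least recently used
  }
  _unlink(node) {
    if (node.prev) node.prev.next = node.next; else this.head = node.next;
    if (node.next) node.next.prev = node.prev; else this.tail = node.prev;
  }
  _pushFront(node) {
    node.prev = null;
    node.next = this.head;
    if (this.head) this.head.prev = node;
    this.head = node;
    if (!this.tail) this.tail = node;
  }
  get(key) {
    const node = this.map.get(key);
    if (!node) return null;
    this._unlink(node);   // O(1) relink, no Map churn
    this._pushFront(node);
    return node.value;
  }
  set(key, value) {
    let node = this.map.get(key);
    if (node) {
      node.value = value;
      this._unlink(node);
    } else {
      if (this.map.size >= this.limit) {
        this.map.delete(this.tail.key); // evict least recently used
        this._unlink(this.tail);
      }
      node = { key, value, prev: null, next: null };
      this.map.set(key, node);
    }
    this._pushFront(node);
  }
}
```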
I honestly don't see many use cases for indexing map/set.
Thanks for sharing your experience. A few responses:
Separate order structure — That works, but it means maintaining a second structure (array or linked list) in lockstep with the Map, at a real cost in memory and bookkeeping for large caches. Map already tracks insertion order internally — .at(n) would just expose it without that overhead.
Re-insertion cost — Map delete + set is O(1) and very fast in practice. Many popular LRU implementations (like lru-cache) use exactly this pattern. If re-insertion were 'too expensive,' they wouldn't exist.
Weak collections — You're right, WeakMap/WeakSet aren't iterable and this proposal doesn't apply to them. That's fine — .at(n) is only for iterable collections (Map, Set).
The use cases exist even if they don't match your specific needs. The SIEVE algorithm example earlier in this thread is one real implementation that would benefit from .at(n) access.