As the comments note, you’d do this if you need constant-time insertion/deletion from the list.
There are two ways `Array` could be reasonably implemented that allow for populating non-contiguous indices (a quick demo of that sparseness follows the list):
- As an actual C-like contiguous block of memory large enough to contain all the indices used; unpopulated indices would contain a reference to a dummy value so they wouldn’t be treated as populated entries (and excess capacity beyond the max index could be left as garbage, since the `length` says it’s not important)
- As a hash table keyed by integers (based on a C-like array)
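For concreteness, here’s the kind of sparseness either layout has to support; this much is just observable `Array` behavior:

```js
// A sparse Array: only two indices are populated, but length reports
// one past the highest index used.
const a = [];
a[0] = "first";
a[1000] = "last";
console.log(a.length); // 1001
console.log(500 in a); // false: index 500 is a hole, not a stored undefined
```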
In either case, the cost to insert at the end is going to be amortized O(1), but with spikes of O(n) work done whenever the capacity of the underlying C-like array is exhausted (or the hash load threshold exceeded) and a reallocation is necessary.
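As a rough sketch of where those spikes come from (a hypothetical `GrowableArray`, not any engine’s actual code), appends are cheap until the buffer fills, and then everything gets copied at once:

```js
// Amortized-O(1) append over a fixed-capacity buffer with doubling growth.
class GrowableArray {
  constructor() {
    this.buf = new Array(4); // the underlying "C-like" storage
    this.length = 0;
  }
  push(value) {
    if (this.length === this.buf.length) {
      // Capacity exhausted: the occasional O(n) spike.
      const bigger = new Array(this.buf.length * 2);
      for (let i = 0; i < this.length; i++) bigger[i] = this.buf[i];
      this.buf = bigger;
    }
    this.buf[this.length++] = value; // the common O(1) path
  }
}
```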
If you insert at the beginning or in the middle (and the index in question is in use), you have the same possible work spikes as before, plus new problems. No, the data for the existing indices doesn’t change, but all the keys above the entry you’re forcing in have to be incremented (actually or logically). If it’s a plain C-like array implementation, that’s mostly just a `memmove` (modulo the needs of garbage collection and the like). If it’s a hash table implementation, you need to essentially read every element (from the highest index to the lowest, which means either sorting the keys or looking up every index below the current length, even if it’s a sparse `Array`), pop it out, and reinsert it with a key value that is one higher. For a big `Array`, the cost could be enormous. It’s possible the implementation in some browsers might do some clever nonsense by using an “offset” that would internally use negative “keys” relative to the offset to avoid the rehash while still inserting before index 0, but it would make all operations more expensive in exchange for making `shift`/`unshift` cheaper.
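To make the hash-table cost concrete, here’s a hypothetical sketch using a `Map` to stand in for the integer-keyed table (not how any engine actually spells it); note that one insertion at the front touches every populated index:

```js
// Insert `value` at `index` in a hash-backed sparse array by re-keying
// every populated entry above it. Walk from the highest index down so
// no entry is overwritten before it has been moved.
function insertAt(entries, length, index, value) {
  for (let i = length - 1; i >= index; i--) {
    if (entries.has(i)) { // sparse: skip the holes
      entries.set(i + 1, entries.get(i)); // pop it out, reinsert one higher
      entries.delete(i);
    }
  }
  entries.set(index, value);
  return length + 1; // the new logical length
}

const entries = new Map([[0, "a"], [2, "c"]]); // sparse: index 1 is a hole
const newLength = insertAt(entries, 3, 0, "z"); // O(length) for one insert
```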
Point is, a linked list written in JavaScript would incur overhead for being JS (which usually runs more slowly than built-ins doing work of similar magnitude). But (ignoring the possibility of the memory manager itself introducing work spikes) it’s predictable:
- If you want to insert or delete from the beginning or the end, it’s fixed work (one allocation or deallocation, and reference fixups)
- If you are iterating and inserting/deleting as you go, aside from the cost of iteration, it’s the same fixed work
- If it turns out that offsets aren’t used to implement `shift`/`unshift` in your browser’s implementation (with them, `shift` would usually be cheap, and `unshift` cheap until you’ve `unshift`-ed more than you’ve `shift`-ed), then you’d definitely want to use a linked list when working with a FIFO queue of potentially unbounded size (see the sketch after this list)
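A bare-bones sketch of such a list (hypothetical, but the costs are the point): every insert or remove at an end is one node allocation or detachment plus reference fixups, no matter how long the list is:

```js
// Minimal doubly linked list suitable as a FIFO queue: push() appends,
// shift() removes from the front, both in O(1) with no re-indexing.
class LinkedList {
  constructor() {
    this.head = null;
    this.tail = null;
  }
  push(value) {
    const node = { value, prev: this.tail, next: null };
    if (this.tail) this.tail.next = node;
    else this.head = node;
    this.tail = node;
  }
  shift() {
    if (!this.head) return undefined;
    const node = this.head;
    this.head = node.next;
    if (this.head) this.head.prev = null;
    else this.tail = null;
    return node.value;
  }
}

const q = new LinkedList();
q.push(1); q.push(2);
console.log(q.shift()); // 1, with no work proportional to queue size
```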
It’s wholly possible all browsers use offsets to optimize (avoiding `memmove` or re-keying under certain conditions, though they can’t avoid the occasional `realloc`-and-`memmove` or rehashing without wasting memory). I don’t know one way or the other what the major browsers do, but if you’re trying to write portable code with consistent performance, you probably don’t want to assume that they sacrificed general performance to make the `shift`/`unshift` case faster with huge `Array`s; they might have preferred to make all other operations faster and assumed `shift`/`unshift` would only be used with small `Array`s where the cost is trivial.
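For what it’s worth, the offset trick itself is simple to sketch (hypothetical code, not what any particular engine does): `shift` just advances a start offset, and `unshift` stays cheap only while there are vacated slots at the front to reuse:

```js
// The "offset" trick: logical index 0 lives at items[offset].
class OffsetArray {
  constructor() {
    this.items = [];
    this.offset = 0;
  }
  push(value) { this.items.push(value); }
  shift() { // O(1): no memmove or re-keying, just bump the offset
    if (this.offset >= this.items.length) return undefined;
    const value = this.items[this.offset];
    this.items[this.offset++] = undefined; // release the reference
    return value;
  }
  unshift(value) {
    if (this.offset > 0) {
      this.items[--this.offset] = value; // cheap: reuse a vacated slot
    } else {
      this.items.unshift(value); // no slack left: O(n) re-indexing again
    }
  }
}
```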