Nevron Open Vision Documentation
Framework / Data Structures / Lists, Deques, Stacks and Queues
    Lists, Deques, Stacks and Queues

    The most common data structures (List, Deque, Stack and Queue) deal with item sets that have no particular graph or tree relation, no ordering and no particular uniqueness constraints.

    In NOV a specific class that implements a common data structure interface may also implement other such interfaces - for example, both the NList<T> and the NDeque<T> classes implement the INList<T> and INDeque<T> interfaces. By design the default implementation (the one with the best performance for the most common usage) is named after the interface, which helps you easily spot the default implementation for a particular interface - NList for INList, NDeque for INDeque, etc.

    There are also cases in which a particular interface can be implemented with a different underlying storage organization - for example NQueue<T> and NBlocksQueue<T>. In such cases the name of the non-default implementation aims to highlight the underlying storage organization - in the case of the queue, the default implementation (NQueue<T>) is based on a dynamic array ring, while the NBlocksQueue<T> is based on a blocks ring.

    Each implementation class may also contain specific functionality that is not part of any interface (for example, NList has methods for sorting its items).
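
    For example, a sort is requested through the concrete NList<T> class rather than through INList<T>. The following sketch assumes Add and Sort members on NList<T>; consult the class reference for the exact signatures:

        // Sketch only - the Add and Sort member names are assumptions; check the NList<T> class reference.
        NList<int> numbers = new NList<int>();
        numbers.Add(30);
        numbers.Add(10);
        numbers.Add(20);
        numbers.Sort(); // implementation-specific functionality, not part of INList<T>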

    The following matrix shows the common data structures implementation classes and the interfaces that they implement:

    INList<T> INDeque<T> INArray<T> INStack<T> INQueue<T>

    Dynamic Array Implementations

    NList<T> X X X
    NStack<T> X X

    Dynamic Array Ring Implementations

    NDeque<T> X X X
    NQueue<T> X X

    Blocks Ring Implementations

    NBlocksDeque<T> X X X
    NBlocksQueue<T> X X
    NBlocksStack<T> X X

    Linked List Implementations

    NChainList<T> X X X


    Because each implementation has its pros and cons for a specific usage, all implementation classes provide initializing constructors that let you initialize the object with the items of another set or the items provided by an iterator or iterable object. This helps you easily swap between the implementations.
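
    For example, you can populate a list through the INList<T> interface and later switch the implementation behind it without changing the consuming code. The constructor overload used below (one that accepts another set) is assumed; consult the class reference for the exact overloads:

        // Sketch only - the constructor that accepts another item set is assumed; check the class reference.
        INList<string> names = new NList<string>();
        // ... populate and use the list only through the INList<string> interface ...

        // Later, swap the underlying implementation while preserving the items:
        INList<string> chainNames = new NChainList<string>(names);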

    The rest of this topic discusses the advantages and disadvantages of the particular implementations.

     Dynamic Array Implementations

    The NList and NStack classes derive from the NDynamicArray abstract class, which implements a dynamic array (an automatically growing array). It does so by automatically maintaining a generic CLR array, accessible from its Items property (the backing array). The valid items in the backing array begin at index 0 and end at index Count - 1.

    When you add or insert items, the dynamic array automatically creates a new backing array if the current array does not have enough capacity to hold the additional items. The backing array grows in geometric progression with a common ratio of 2. The default capacity of the backing array is 4, which means that if you do not explicitly set a capacity when you construct a list or stack, the backing array size grows in this sequence: 0, 4, 8, 16, 32, etc.
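
    The growth policy itself is not specific to NOV. The following minimal sketch only illustrates the doubling behavior described above; it is not the actual NOV source code:

        // Illustrative sketch of the growth policy (not the actual NOV source code):
        int EnsureCapacity(int currentCapacity, int requiredCount)
        {
            int capacity = currentCapacity == 0 ? 4 : currentCapacity;
            while (capacity < requiredCount)
            {
                capacity *= 2; // geometric progression with a common ratio of 2
            }
            return capacity;
        }
        // EnsureCapacity(0, 1) == 4, EnsureCapacity(4, 5) == 8, EnsureCapacity(8, 9) == 16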

    Pros:

    • Constant append time, except when the backing array needs to be resized. A resize takes O(n) time, but the dynamic array quickly "warms up" to large sets due to the geometric progression of the resizes.
    • Constant indexed access time - the fastest among all implementations, because no additional operations are needed to compute the actual index in the backing data structure.
    • It is faster to use direct indexing rather than an iterator to iterate through the items of a dynamic array.
    • No memory overhead per item.

    Cons:

    • Constant to O(n) RemoveAt time, with worst case (O(n) time) experienced when you remove the first item and best case (constant) experienced when you remove the last item.
    • Constant to O(n) InsertAt time, with worst case (O(n) time) experienced when you insert at the start (prepend) and best case (constant) experienced when you insert at the end (except in the cases when the backing array needs to be resized).
    • Max unused items memory is Count - 1.

    Because of these factors the dynamic array is a good performer for INList<T> and INStack<T> implementations and generally bad for deques and queues.

    The most notable optimization is to create NList<T> and NStack<T> instances with an estimated initial capacity, which reduces the number of backing array resizes when you add items to them.
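
    For example, when you know that roughly 10,000 items are about to be added, constructing the list with that capacity avoids all intermediate resizes. The capacity constructor and the Add method used below are assumed; consult the class reference for the exact signatures:

        // Sketch only - the capacity constructor and the Add method are assumed; check the class reference.
        NList<double> samples = new NList<double>(10000);
        for (int i = 0; i < 10000; i++)
        {
            samples.Add((double)i); // no backing array resizes occur
        }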

     Dynamic Array Ring Implementations

    The NDeque<T> and NQueue<T> classes derive from the NDynamicArrayRing<T> abstract class, which implements a dynamic array with a circular (ring) items organization. It does so by automatically maintaining a generic CLR array, accessible from its Items property (the backing array). The valid items in the backing array begin at HeadIndex and end at TailIndex. When HeadIndex is larger than TailIndex, the valid items are from HeadIndex to Capacity - 1 and from 0 to TailIndex.
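
    The mapping from a logical item index to a backing array index is what makes the ring organization work. Conceptually it looks like the following sketch (the actual NOV implementation may differ in details):

        // Conceptual sketch of how a ring maps a logical item index to a backing array index
        // (the actual NOV implementation may differ in details):
        int ToBackingIndex(int headIndex, int capacity, int logicalIndex)
        {
            return (headIndex + logicalIndex) % capacity;
        }
        // With HeadIndex = 6 and Capacity = 8: logical index 0 -> 6, 1 -> 7, 2 -> 0 (wraps around)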

    When you add or insert items, the array ring automatically creates a new backing array if the current array does not have enough capacity to hold the additional items. The backing array grows in geometric progression with a common ratio of 2. The default capacity of the backing array is 4, which means that if you do not explicitly set a capacity when you construct a deque or queue, the backing array size grows in this sequence: 0, 4, 8, 16, 32, etc.

    Pros:

    • Constant append/prepend times (add to end, add to start), except when the backing array needs to be resized. A resize takes O(n) time, but the array ring quickly "warms up" to large sets due to the geometric progression of the resizes.
    • Constant remove first/remove last times.
    • Constant indexed access time.

    • No memory overhead per item.

    Cons:

    • Although constant, the indexed access time is slower compared to dynamic array implementations, because of the additional operations needed by the ring organization.
    • Constant to O(n / 2) RemoveAt time, with worst case (O(n / 2) time) experienced when you remove the middle item and best case (constant) experienced when you remove the first/last item.
    • Constant to O(n / 2) InsertAt time, with worst case (O(n / 2) time) experienced when you insert a middle item and best case (constant) experienced when you append/prepend items.
    • Max unused items memory is Count - 1.

    Because of these factors the array ring is a good performer for INDeque<T> and INQueue<T> implementations and generally bad for lists, where fast direct indexing is needed.

    As with the dynamic arrays, the most notable optimization is to create NDeque<T> and NQueue<T> instances with an estimated initial capacity, which reduces the number of backing array resizes when you add items to them.
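
    For example (the capacity constructor and the Enqueue method used below are assumed; consult the class reference for the exact signatures):

        // Sketch only - the capacity constructor and the Enqueue method are assumed; check the class reference.
        NQueue<int> pending = new NQueue<int>(1024);
        for (int i = 0; i < 1000; i++)
        {
            pending.Enqueue(i); // stays within the initial capacity, so no backing array resizes occur
        }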

     Blocks Ring Implementations

    The NBlocksDeque<T>, NBlocksQueue<T> and NBlocksStack<T> classes derive from the NBlocksRing<T> abstract class, which implements an ordered sequence of fixed-size array blocks with a circular (ring) items organization. It does so by automatically maintaining a doubly linked list (chain) of fixed-size array blocks. The size of the array blocks is specified at object construction time.

    When you add or insert items, the blocks ring automatically creates a new fixed-size array block and adds it to the chain if it currently does not have enough capacity to hold the additional items. This means that the blocks ring grows in arithmetic progression with a common difference equal to the block size with which it was constructed.
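
    The following minimal sketch only illustrates how the required capacity follows the item count in steps of one block; it is not the actual NOV source code:

        // Illustrative sketch of the arithmetic growth (not the actual NOV source code):
        int blockSize = 32;
        int RequiredCapacity(int itemCount)
        {
            // a new fixed-size block is appended to the chain whenever the current blocks are full
            int blockCount = (itemCount + blockSize - 1) / blockSize;
            return blockCount * blockSize; // 32, 64, 96, 128, ... (common difference = blockSize)
        }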

    Pros:

    • Constant append/prepend times (add to end, add to start). When a new block needs to be added, the complexity is O(BlockSize), which is still constant with respect to the number of items.
    • Constant remove first/remove last times.
    • Max unused items memory is BlockSize - 1.
    • No memory overhead per item.

    Cons:

    • Constant to O(n / BlockSize) indexed access time, because of the need to cycle through the chain to find a block. It is recommended to use an iterator to iterate through the items (iteration is performed at constant time).
    • Constant to O(n / 2) RemoveAt time, with worst case (O(n / 2) time) experienced when you remove the middle item and best case (constant) experienced when you remove the first/last item.
    • Constant to O(n / 2) InsertAt time, with worst case (O(n / 2) time) experienced when you insert a middle item and best case (constant) experienced when you append/prepend items.

    Because of these factors the blocks ring is a good performer for INDeque<T>, INQueue<T> and INStack<T> implementations that need to handle large sets and generally bad for lists, where fast direct indexing is needed.

    The most notable optimization is to create an NBlocksDeque<T>, NBlocksQueue<T> or NBlocksStack<T> with a larger fixed block size, which reduces the overhead introduced by the internal blocks chain.
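
    For example (the block size constructor parameter used below is assumed; consult the class reference for the exact overload):

        // Sketch only - the block size constructor parameter is assumed; check the class reference for the exact overload.
        NBlocksQueue<int> largeBlocks = new NBlocksQueue<int>(1024); // fewer, larger blocks - shorter chain for large sets
        NBlocksQueue<int> smallBlocks = new NBlocksQueue<int>(16);   // more, smaller blocks - less unused memory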

     Linked List Implementations

    The NChainList<T> class implements the INList<T> and INDeque<T> interfaces via a linked list (see Chains (Doubly Linked Lists)), each node of which holds a distinct item of the list's generic type. The chain created for the chain list is accessible from the Chain property. Modifications that you make to the chain are directly reflected in the items of the list and vice versa.

    When you add or insert items, the chain list will automatically add/insert new nodes in its backing chain.
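
    The node-based mechanics are what make the constant-time end operations possible. The following sketch uses the standard .NET LinkedList<T> only as a stand-in for the backing chain; the NOV chain API differs (see Chains (Doubly Linked Lists)):

        // Illustration of the node-based mechanics, using the standard .NET LinkedList<T> as a
        // stand-in for the backing chain (the NOV chain API differs - see Chains (Doubly Linked Lists)):
        using System.Collections.Generic;

        LinkedList<string> chain = new LinkedList<string>();
        LinkedListNode<string> first = chain.AddFirst("a"); // constant time
        chain.AddLast("c");                                 // constant time
        chain.AddAfter(first, "b");                         // constant time, given the reference node
        chain.Remove(first);                                // constant time, given the reference node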

    Pros:

    • Constant append/prepend times (add to end, add to start).
    • Constant remove first/remove last times.
    • Constant insert/remove times, if you know the reference chain node.
    • No unused items memory.

    Cons:

    • Constant to O(n / 2) indexed access time, because of the need to cycle through the chain. It is recommended to use an iterator to iterate through the items (iteration is performed at constant time).
    • Constant to O(n / 2) RemoveAt time, with worst case (O(n / 2) time) experienced when you remove the middle item and best case (constant) experienced when you remove the first/last item.
    • Constant to O(n / 2) InsertAt time, with worst case (O(n / 2) time) experienced when you insert a middle item and best case (constant) experienced when you append/prepend items.
    • Large memory overhead per item (because each item is associated with a chain node).

    Because of these factors the chain list is a good performer for very dynamic but small lists and deques. It is generally used in hybrid data structures such as the MRU (Most Recently Used) map.
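
    To illustrate the hybrid idea, the following sketch builds an MRU map from standard .NET collections: a dictionary maps each key to its chain node, so touching an entry can move its node to the chain head in constant time, and eviction simply drops the chain tail. This is a generic illustration, not the NOV implementation:

        using System.Collections.Generic;

        // Generic MRU (Most Recently Used) map sketch built from standard .NET collections.
        // It only illustrates the hybrid idea - it is not the NOV implementation.
        class MruMap<TKey, TValue>
        {
            readonly int m_Capacity;
            readonly Dictionary<TKey, LinkedListNode<KeyValuePair<TKey, TValue>>> m_Map = new();
            readonly LinkedList<KeyValuePair<TKey, TValue>> m_Chain = new();

            public MruMap(int capacity)
            {
                m_Capacity = capacity;
            }

            public void Set(TKey key, TValue value)
            {
                if (m_Map.TryGetValue(key, out var node))
                {
                    m_Chain.Remove(node); // constant time - we hold a reference to the node
                }
                else if (m_Map.Count == m_Capacity)
                {
                    // evict the least recently used entry - the chain tail
                    m_Map.Remove(m_Chain.Last.Value.Key);
                    m_Chain.RemoveLast();
                }

                // the most recently used entry always sits at the chain head
                m_Map[key] = m_Chain.AddFirst(new KeyValuePair<TKey, TValue>(key, value));
            }

            public bool TryGet(TKey key, out TValue value)
            {
                if (m_Map.TryGetValue(key, out var node))
                {
                    // move the touched node to the chain head - constant time
                    m_Chain.Remove(node);
                    m_Chain.AddFirst(node);
                    value = node.Value.Value;
                    return true;
                }

                value = default;
                return false;
            }
        }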