Searching and sorting are algorithmic primitives. It's helpful for the standard library to provide fast, reliable implementations; otherwise, developers waste time reinventing the wheel.
In the case of the .NET Framework, however, the specific choice of algorithms makes them less useful than they might be. In some cases, their behaviour is simply not defined:
List<T>.BinarySearch
If the List contains more than one element with the same value, the method returns only one of the occurrences, and it might return any one of the occurrences, not necessarily the first one.
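For instance, in this hypothetical snippet (the values are mine, chosen to show the problem), the index you get back for a duplicated value is anyone's guess:

    using System;
    using System.Collections.Generic;

    class BinarySearchDemo
    {
        static void Main()
        {
            // Sorted input containing duplicates.
            var xs = new List<int> { 1, 3, 3, 3, 7 };

            // The contract only guarantees *some* index of a matching
            // element: 1, 2 or 3 are all legal answers here.
            int i = xs.BinarySearch(3);
            Console.WriteLine(i);
        }
    }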
List<T>.Sort
This implementation performs an unstable sort; that is, if two elements are equal, their order might not be preserved. In contrast, a stable sort preserves the order of elements that are equal.
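If you need stability today, LINQ's Enumerable.OrderBy is documented to perform a stable sort, so it can substitute for List<T>.Sort at the cost of allocating a new sequence. A minimal sketch (the example data is mine):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class StableSortDemo
    {
        static void Main()
        {
            var items = new List<(string Name, int Key)>
            {
                ("a", 2), ("b", 1), ("c", 2), ("d", 1),
            };

            // List<T>.Sort may reorder equal keys; OrderBy is documented
            // to be stable, so elements with equal keys keep their
            // original relative order: b, d, a, c.
            var sorted = items.OrderBy(x => x.Key).ToList();

            foreach (var item in sorted)
                Console.WriteLine(item.Name);
        }
    }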
That's a shame, because there are deterministic algorithms that are just as fast, and those would make much better building blocks. It's notable that the standard binary searches in Python (bisect_left), Ruby (Array#bsearch) and Go (sort.Search) all find the first matching element; a sketch of an equivalent for .NET follows.
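A deterministic replacement is easy to write. The sketch below (LowerBound is my name, not a framework API) returns the index of the first element not less than the target, so duplicates always resolve to the first occurrence:

    using System;
    using System.Collections.Generic;

    static class SearchExtensions
    {
        // Index of the first element >= target, or list.Count if every
        // element is smaller. With duplicates this always finds the
        // first occurrence, like Python's bisect_left.
        public static int LowerBound<T>(this IReadOnlyList<T> list, T target)
            where T : IComparable<T>
        {
            int lo = 0, hi = list.Count;
            while (lo < hi)
            {
                int mid = lo + (hi - lo) / 2;
                if (list[mid].CompareTo(target) < 0)
                    lo = mid + 1;
                else
                    hi = mid;
            }
            return lo;
        }
    }

On the list from the earlier example, new List<int> { 1, 3, 3, 3, 7 }.LowerBound(3) is always 1; callers can then test list[i] against the target to distinguish a hit from a plain insertion point.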