Working with the PriorityQueue<TElement, TPriority> collection, I frequently need to preserve the insertion order of elements that have the same priority (FIFO order), and as far as I know the only way to do it is by coupling the TPriority with an extra incremented int index: PriorityQueue<TElement, (TPriority, int)>. This works well, but it makes me feel uncomfortable knowing that eventually, if the queue is used for a long enough time, the int indices will wrap around the Int32.MaxValue limit, breaking the FIFO functionality. I can patch this problem by switching from int to long, making it practically impossible for the index to wrap, but it still feels dirty. I wonder if it's possible to do better, by rebuilding the indices of the TElement+TPriority pairs when the index is about to wrap on the next Enqueue operation. So I wrote this extension method for indexed FIFO priority queues:
public static void Reindex<TElement, TPriority>(
    this PriorityQueue<TElement, (TPriority, int)> source)
{
    ArgumentNullException.ThrowIfNull(source);

    // Snapshot all entries in priority order, then re-enqueue them
    // with fresh sequential indices starting from zero.
    (TElement, (TPriority, int))[] entries = source.UnorderedItems
        .OrderBy(e => e.Priority, source.Comparer)
        .ToArray();

    source.Clear();
    for (int i = 0; i < entries.Length; i++)
        source.Enqueue(entries[i].Item1, (entries[i].Item2.Item1, i));
}
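A lower-allocation variant could rent the scratch buffer from ArrayPool<T> and sort it in place with Array.Sort instead of going through LINQ's OrderBy(...).ToArray(), which builds intermediate buffers. This is my own sketch (the ReindexPooled name is made up, not part of the question's code), and it still pays a small allocation for the comparer wrapper:

```csharp
using System;
using System.Buffers;
using System.Collections.Generic;

public static class PriorityQueueExtensions
{
    // Sketch: same semantics as Reindex, but the snapshot lives in a
    // pooled array instead of a freshly allocated one.
    public static void ReindexPooled<TElement, TPriority>(
        this PriorityQueue<TElement, (TPriority, int)> source)
    {
        ArgumentNullException.ThrowIfNull(source);

        var pool = ArrayPool<(TElement, (TPriority, int))>.Shared;
        var buffer = pool.Rent(source.Count);
        try
        {
            // Copy the unordered entries into the rented buffer.
            int count = 0;
            foreach (var entry in source.UnorderedItems)
                buffer[count++] = entry;

            // Sort only the filled prefix, using the queue's own comparer
            // on the (TPriority, int) part of each entry.
            var comparer = source.Comparer;
            Array.Sort(buffer, 0, count,
                Comparer<(TElement, (TPriority, int))>.Create(
                    (x, y) => comparer.Compare(x.Item2, y.Item2)));

            // Re-enqueue with fresh sequential indices.
            source.Clear();
            for (int i = 0; i < count; i++)
                source.Enqueue(buffer[i].Item1, (buffer[i].Item2.Item1, i));
        }
        finally
        {
            // Clear references so the pool doesn't keep elements alive.
            pool.Return(buffer, clearArray: true);
        }
    }
}
```

After the first call, the pooled array is reused, so steady-state allocations drop to the small comparer and delegate objects rather than a fresh Count-sized array per call.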
My problem with this implementation is that it allocates a disproportionate amount of memory during the re-indexing. For example, 61,456 bytes are allocated to rebuild the indices of a PriorityQueue<object, (char, int)> with 1,000 elements:
PriorityQueue<object, (char, int)> queue = new(Comparer<(char, int)>.Create((x, y) =>
{
    int result = x.Item1.CompareTo(y.Item1);
    if (result == 0) result = x.Item2.CompareTo(y.Item2);
    return result;
}));

Random random = new(0);
for (int i = 0; i < 100_000; i++)
{
    char key = (char)random.Next(65, 91);
    queue.Enqueue(new object(), (key, i));
}
while (queue.Count > 1_000) queue.Dequeue();

long mem0 = GC.GetTotalAllocatedBytes(true);
queue.Reindex();
long mem1 = GC.GetTotalAllocatedBytes(true);
Console.WriteLine($"Count: {queue.Count:#,0}, Allocated: {mem1 - mem0:#,0} bytes");
Output:
Count: 1,000, Allocated: 61,456 bytes
I would like to ask if it's possible to rebuild the indices with zero allocations (in-place), or at least with no more than System.Runtime.CompilerServices.Unsafe.SizeOf<(TElement, TPriority)>() * queue.Count allocated bytes (16,000 bytes in the above example).
Comments:

[…] long like you did. – Prue

[…] long values as it would take to represent every "Tick" from now until the year 9999, and on my machine it consistently takes at least a tick to put an item into a PriorityQueue. I'd say you're fine just using a long, and if that "feels dirty" then you can detect an overflow and only re-index when that (never) happens. – Heman

[…] Enqueue operations per second, it would take almost 300 years to wrap a long value, which makes it practically safe. But in order to get this statistical safety you have to increase the per-element state by 4 bytes, which is not ideal. And this question transcends practicality. It's about ideals! :-) – Carousel
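The detect-and-reindex idea from the comments could be sketched as an enqueue wrapper that keeps the int index but rebuilds the indices only when the counter is about to wrap. The EnqueueFifo name and the ref-counter design are my own assumptions, not from the question:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class FifoPriorityQueueExtensions
{
    // Sketch: enqueue with the next FIFO index; when the counter would hit
    // int.MaxValue, rebuild indices 0..Count-1 first (same idea as Reindex),
    // so the counter restarts near zero.
    public static void EnqueueFifo<TElement, TPriority>(
        this PriorityQueue<TElement, (TPriority, int)> source,
        TElement element, TPriority priority, ref int nextIndex)
    {
        ArgumentNullException.ThrowIfNull(source);

        if (nextIndex == int.MaxValue)
        {
            // Re-index: snapshot in priority order, re-enqueue with 0..Count-1.
            var entries = source.UnorderedItems
                .OrderBy(e => e.Priority, source.Comparer)
                .ToArray();
            source.Clear();
            for (int i = 0; i < entries.Length; i++)
                source.Enqueue(entries[i].Element, (entries[i].Priority.Item1, i));
            nextIndex = entries.Length;
        }

        source.Enqueue(element, (priority, nextIndex++));
    }
}
```

In the common case this costs nothing over a plain Enqueue; the O(n log n) rebuild runs at most once per ~2 billion insertions, so its amortized cost is negligible.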