Is Queue.Synchronized faster than using a lock()?
Asked Answered

3

8

I have a Queue on which the Enqueue operation would be performed by one thread and Dequeue would be performed by another. Needless to say, I had to implement some thread safety for it.

I first tried using a lock on the queue before each Enqueue/Dequeue, as it gives better control over the locking mechanism. It worked well, but my curious mind led me to test some more.

I then tried using the Queue.Synchronized wrapper, keeping everything else the same. Now, I am not sure if it's real, but performance does seem a tad faster with this approach.
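For illustration, the two variants look roughly like this (a simplified sketch of what I mean, not my actual code):

using System.Collections;

class QueueVariants
{
    private readonly object _syncObj = new object();
    private readonly Queue _queue = new Queue();
    private readonly Queue _syncQueue = Queue.Synchronized(new Queue());

    // Variant 1: plain Queue guarded by an explicit lock
    public void EnqueueWithLock(object item)
    {
        lock (_syncObj) _queue.Enqueue(item);
    }

    public object DequeueWithLock()
    {
        lock (_syncObj) return _queue.Dequeue();
    }

    // Variant 2: the synchronized wrapper takes its own lock on each call
    public void EnqueueSynchronized(object item)
    {
        _syncQueue.Enqueue(item);
    }

    public object DequeueSynchronized()
    {
        return _syncQueue.Dequeue();
    }
}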

Do you think there actually is some difference in performance between the two, or am I just imagining things here? :)

Housley answered 27/1, 2011 at 15:20 Comment(1)
Just taking out a lock using either method is insanely quick. The performance hit comes from how much contention there is for the locks.Universalism
18

When you call Queue.Synchronized you get a SynchronizedQueue back, which takes a lock very minimally around calls to Enqueue and Dequeue on an inner queue. Therefore, the performance should be the same as using a Queue and managing the locking yourself, taking your own lock around Enqueue and Dequeue.

You are indeed imagining things - they should be the same.
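Conceptually the wrapper amounts to little more than this (a simplified sketch of the idea, not the actual BCL source):

using System.Collections;

sealed class SketchSynchronizedQueue
{
    private readonly Queue _inner;
    private readonly object _root;

    public SketchSynchronizedQueue(Queue inner)
    {
        _inner = inner;
        _root = inner.SyncRoot; // lock on the wrapped queue's SyncRoot
    }

    public void Enqueue(object item)
    {
        lock (_root) _inner.Enqueue(item);
    }

    public object Dequeue()
    {
        lock (_root) return _inner.Dequeue();
    }
}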

Update

There is one caveat: when using a SynchronizedQueue you are adding a layer of indirection, as every call has to go through the wrapper methods to reach the inner queue it manages. If anything this should slow things down very fractionally, as there is an extra stack frame to manage for each call. God knows whether inlining cancels this out. Either way, it's minimal.

Update 2

I have now benchmarked this, and as predicted in my previous update:

"Queue.Synchronized" is slower than "Queue+lock"

I carried out a single-threaded test as they both use the same locking technique (i.e. lock) so testing pure overhead in a "straight line" seems reasonable.

My benchmark produced the following results for a Release build:

Iterations      :10,000,000

Queue+Lock      :539.14ms
Queue+Lock      :540.55ms
Queue+Lock      :539.46ms
Queue+Lock      :540.46ms
Queue+Lock      :539.75ms
SynchronizedQueue:578.67ms
SynchronizedQueue:585.04ms
SynchronizedQueue:580.22ms
SynchronizedQueue:578.35ms
SynchronizedQueue:578.57ms

Using the following code:

private readonly object _syncObj = new object();

[Test]
public object measure_queue_locking_performance()
{
    const int TestIterations = 5;
    const int Iterations = (10 * 1000 * 1000);

    Action<string, Action> time = (name, test) =>
    {
        for (int i = 0; i < TestIterations; i++)
        {
            TimeSpan elapsed = TimeTest(test, Iterations);
            Console.WriteLine("{0}:{1:F2}ms", name, elapsed.TotalMilliseconds);
        }
    };

    object itemOut, itemIn = new object();
    Queue queue = new Queue();
    Queue syncQueue = Queue.Synchronized(queue);

    Action test1 = () =>
    {
        lock (_syncObj) queue.Enqueue(itemIn);
        lock (_syncObj) itemOut = queue.Dequeue();
    };

    Action test2 = () =>
    {
        syncQueue.Enqueue(itemIn);
        itemOut = syncQueue.Dequeue();
    };

    Console.WriteLine("Iterations:{0:0,0}\r\n", Iterations);
    time("Queue+Lock", test1);
    time("SynchonizedQueue", test2);

    return itemOut;
}

[SuppressMessage("Microsoft.Reliability", "CA2001:AvoidCallingProblematicMethods", MessageId = "System.GC.Collect")]
private static TimeSpan TimeTest(Action action, int iterations)
{
    Action gc = () =>
    {
        GC.Collect();
        GC.WaitForFullGCComplete();
    };

    Action empty = () => { };

    Stopwatch stopwatch1 = Stopwatch.StartNew();

    for (int j = 0; j < iterations; j++)
    {
        empty();
    }

    TimeSpan loopElapsed = stopwatch1.Elapsed;

    gc();
    action(); //JIT
    action(); //Optimize

    Stopwatch stopwatch2 = Stopwatch.StartNew();

    for (int j = 0; j < iterations; j++) action();

    gc();

    TimeSpan testElapsed = stopwatch2.Elapsed;

    return (testElapsed - loopElapsed);
}
Gaspar answered 27/1, 2011 at 15:24 Comment(4)
Not if the OP is not aggressive enough in releasing the lock, or is too eager in acquiring it.Dowser
@Jason I am making the assumption that the OP is using locking minimally around Enqueue and Dequeue calls, but I see your point. I have clarified a little in my answer.Gaspar
thanks for investing time and effort in getting an authoritative answer to this problem. Kudos to you.Housley
How do the values in the tests above compare to using a ConcurrentQueue?Fuselage
6

We can't answer this for you. Only you can answer it for yourself by getting a profiler and testing both scenarios (Queue.Synchronized vs. Lock) on real-world data from your application. It might not even be a bottleneck in your application.

That said, you should probably just be using ConcurrentQueue.
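A minimal sketch of ConcurrentQueue<T> usage (System.Collections.Concurrent in .NET 4.0; the producer/consumer split here is purely illustrative):

using System;
using System.Collections;
using System.Collections.Concurrent;
using System.Threading;

class ConcurrentQueueDemo
{
    static void Main()
    {
        var queue = new ConcurrentQueue<int>();

        // Producer thread: no explicit lock needed
        var producer = new Thread(() =>
        {
            for (int i = 0; i < 1000; i++) queue.Enqueue(i);
        });
        producer.Start();

        // Consumer thread: TryDequeue returns false instead of throwing when the queue is empty
        int taken = 0;
        while (taken < 1000)
        {
            int item;
            if (queue.TryDequeue(out item)) taken++;
        }

        producer.Join();
        Console.WriteLine("Dequeued {0} items", taken);
    }
}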

Dowser answered 27/1, 2011 at 15:23 Comment(7)
@Danish: Check out Rx. It has a backport of ConcurrentQueue<T> for .NET 3.5.Andizhan
@Dan Is that version of ConcurrentQueue<T> actually generally faster though? I have seen that advice a few times, but no actual performance figures. Any figures?Gaspar
@chibacity: Are you asking if it's as fast as the ConcurrentQueue<T> in .NET 4.0, or if it's faster than a plain vanilla Queue<T> guarded by lock statements? In the former case, I don't know. In the latter case, it depends on the level of contention (with little to no contention, Queue<T> with locks is better; as potential contention increases, the concurrent version leads more and more). I could probably dig up some numbers somewhere if you want.Andizhan
@Dan My main point is I know there is a version in Rx (3.5), but I'm guessing this is not the same version that is in .Net 4.0.Gaspar
@chibacity: Well, it's hardly an authoritative answer, but a quick look in Reflector at the decompiled versions of both types suggests they are identical. For what it's worth, Rx has a lot of the parallelization-related functionality from .NET 4.0, such as the entire System.Threading.Tasks namespace for example, as well.Andizhan
@Dan Cheers for having a dig around, I knew Rx had it, but I thought the implementation could have easily changed between then and .Net 4.0. And you are quite correct, with no contention "Queue+Lock" is faster than "ConcurrentQueue<T>".Gaspar
@Dan thanks for a great suggestion, I would sure check it out. This also gives me a good opportunity to get my hands dirty with Rx in implementing a real workable scenario. :)Housley
0
Queue.Synchronized wraps the queue in a new synchronized (thread-safe) queue, whereas Queue.SyncRoot gives you an object you can lock on to access the queue in a synchronized way yourself. Either way, you can ensure thread safety for the queue when Enqueue and Dequeue are called simultaneously from different threads.
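
Locking on SyncRoot yourself looks roughly like this (an illustrative sketch only):

using System.Collections;

class SyncRootExample
{
    private readonly Queue _queue = new Queue();

    // Called from the producer thread
    public void Produce(object item)
    {
        lock (_queue.SyncRoot) _queue.Enqueue(item);
    }

    // Called from the consumer thread
    public object Consume()
    {
        lock (_queue.SyncRoot)
            return _queue.Count > 0 ? _queue.Dequeue() : null;
    }
}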
Nonsuit answered 18/11, 2014 at 14:7 Comment(0)
