ManualResetEvent vs. Thread.Sleep

I implemented the following background processing thread, where Jobs is a Queue<T>:

static void WorkThread()
{
    while (working)
    {
        object job = null;

        lock (Jobs)
        {
            if (Jobs.Count > 0)
                job = Jobs.Dequeue();
        }

        if (job == null)
        {
            Thread.Sleep(1);
        }
        else
        {
            // [snip]: Process job.
        }
    }
}

This produced a noticeable delay between when the jobs were entered and when they actually started running (batches of jobs are entered at once, and each job is only [relatively] small). The delay wasn't a huge deal, but I got to thinking about the problem and made the following change:

static ManualResetEvent _workerWait = new ManualResetEvent(false);
// ...
    if (job == null)
    {
        lock (_workerWait)
        {
            _workerWait.Reset();
        }
        _workerWait.WaitOne();
    }

The thread adding jobs now locks _workerWait and calls _workerWait.Set() when it's done adding jobs. With this change, jobs (seemingly) start processing instantly, and the delay is gone altogether.

My question is partly "Why does this happen?", granted that Thread.Sleep(int) can very well sleep for longer than you specify, and partly "How does the ManualResetEvent achieve this level of performance?".

EDIT: Since someone asked about the function that's queueing items, here it is, along with the full system as it stands at the moment.

public void RunTriggers(string data)
{
    lock (this.SyncRoot)
    {
        this.Triggers.Sort((a, b) => { return a.Priority - b.Priority; });

        foreach (Trigger trigger in this.Triggers)
        {
            lock (Jobs)
            {
                Jobs.Enqueue(new TriggerData(this, trigger, data));
                _workerWait.Set();
            }
        }
    }
}

static private ManualResetEvent _workerWait = new ManualResetEvent(false);
static void WorkThread()
{
    while (working)
    {
        TriggerData job = null;

        lock (Jobs)
        {
            if (Jobs.Count > 0)
                job = Jobs.Dequeue();

            if (job == null)
            {
                _workerWait.Reset();
            }
        }

        if (job == null)
            _workerWait.WaitOne();
        else
        {
            try
            {
                foreach (Match m in job.Trigger.Regex.Matches(job.Data))
                    job.Trigger.Value.Action(job.World, m);
            }
            catch (Exception ex)
            {
                job.World.SendLineToClient("\r\n\x1B[32m -- {0} in trigger ({1}): {2}\x1B[m",
                    ex.GetType().ToString(), job.Trigger.Name, ex.Message);
            }
        }
    }
}
Karalynn answered 12/7, 2009 at 15:34 Comment(0)

Events are kernel primitives provided by the OS/kernel, designed for exactly this sort of thing. The kernel provides a boundary across which it can guarantee atomic operations, which is important for synchronization (some atomicity can be achieved in user space too, with hardware support).

In short, when a thread waits on an event, it is put on that event's waiting list and marked as non-runnable. When the event is signaled, the kernel wakes up the threads on the waiting list, marks them as runnable, and they can continue to run. It is naturally a huge benefit that a thread can wake up immediately when the event is signaled, instead of sleeping for a long time and rechecking the condition every now and then.

Even one millisecond is a really, really long time; you could have processed thousands of events in that time. Also, the timer resolution is traditionally 10 ms, so sleeping for less than 10 ms usually just results in a 10 ms sleep anyway. With an event, a thread can be woken up and scheduled immediately.
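
To see the difference concretely, here is a rough measurement sketch (the class name and exact figures are illustrative; the numbers you get depend on OS timer resolution and load):

using System;
using System.Diagnostics;
using System.Threading;

class WakeLatencyDemo
{
    static readonly ManualResetEvent _signal = new ManualResetEvent(false);

    static void Main()
    {
        // How long does Thread.Sleep(1) actually sleep?
        var sw = Stopwatch.StartNew();
        Thread.Sleep(1);
        Console.WriteLine("Thread.Sleep(1) took {0:F3} ms", sw.Elapsed.TotalMilliseconds);

        // How quickly does a blocked thread resume after Set()?
        var waiter = new Thread(() => { _signal.WaitOne(); });
        waiter.Start();
        Thread.Sleep(100);             // give the waiter time to reach WaitOne()

        sw = Stopwatch.StartNew();
        _signal.Set();                 // wake the waiter
        waiter.Join();                 // returns once the waiter has finished
        Console.WriteLine("Event wake-up took {0:F3} ms", sw.Elapsed.TotalMilliseconds);
    }
}

On a typical machine the event wake-up figure is far smaller than the Sleep(1) figure, which is bounded below by the system timer resolution.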

Undercoat answered 12/7, 2009 at 15:58 Comment(1)
Newer info: the 10 ms minimum resolution is an XP-and-earlier thing, as the OS used static 10 ms increments for scheduling. I think Vista, and I know Win7, uses a dynamic "tickless" time slice. With Win7, I can start a high-res timer, issue a Sleep(1), and the timing is extremely close to 1 ms, sometimes less. – Theresa

First, locking on _workerWait is pointless: an Event is a system (kernel) object designed for signaling between threads (and heavily used in the Win32 API for asynchronous operations), so it is quite safe for multiple threads to set or reset it without additional synchronization.
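
For example, the lock around Reset() in the question's worker can simply be dropped; this is the same snippet with the lock removed and nothing else changed:

    if (job == null)
    {
        _workerWait.Reset();   // safe without a surrounding lock
        _workerWait.WaitOne();
    }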

As to your main question, we need to see the logic that places things on the queue as well, plus some information on how much work is done for each job (is the worker thread spending more time processing work or waiting for work?).

Likely the best solution would be to use an object instance to lock on and use Monitor.Pulse and Monitor.Wait as a condition variable.

Edit: Having seen the enqueue code, it appears that answer #1116297 has it right: a 1 ms delay is too long to wait, given that many of the work items will be extremely quick to process.

The approach of having a mechanism to wake up the worker thread is correct (as there is no .NET concurrent queue with a blocking dequeue operation). However, rather than using an event, a condition variable is going to be a little more efficient (in non-contended cases it does not require a kernel transition):

private readonly object sync = new Object();
private readonly Queue<TriggerData> queue = new Queue<TriggerData>();
private volatile bool exit;   // set to true to stop the worker

public void EnqueueTriggers(IEnumerable<TriggerData> triggers) {
  lock (sync) {
    foreach (var t in triggers) {
      queue.Enqueue(t);
    }
    Monitor.Pulse(sync);  // Use PulseAll if there are multiple worker threads
  }
}

void WorkerThread() {
  while (!exit) {
    TriggerData job = DequeueTrigger();
    // Do work
  }
}

private TriggerData DequeueTrigger() {
  lock (sync) {
    if (queue.Count > 0) {
      return queue.Dequeue();
    }
    while (queue.Count == 0) {
      Monitor.Wait(sync);
    }
    return queue.Dequeue();
  }
}

Monitor.Wait releases the lock on its parameter, waits until Pulse() or PulseAll() is called on that object, then re-acquires the lock and returns. You need to recheck the wait condition, because some other thread could have taken the item off the queue.
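
As an aside, lock (sync) { ... } is just compiler shorthand for Monitor.Enter/Monitor.Exit on the same object, which is why Monitor.Wait(sync) can be called inside a lock (sync) block; roughly:

Monitor.Enter(sync);
try {
  // body of the lock block, including any Monitor.Wait(sync) calls
}
finally {
  Monitor.Exit(sync);
}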

Guam answered 12/7, 2009 at 15:44 Comment(3)
The majority of jobs will just be matching a (precompiled) Regex and quitting (because the match failed). How many depends on how many the user puts in, and how much data is being received by the application (it's a network app). It's very possible it might peak at several hundred a second at maximum load, perhaps as many as a thousand. I wasn't sure if anyone would be interested in the code queueing items, but I'm editing it in now, since you asked so nicely :) – Karalynn
I thought I read somewhere that Monitor was the backing behind the lock() {} construct? How is it then that you can use lock() and Monitor on the same sync object like that? – Karalynn
Oh wait, I just read and understood the last paragraph there. – Karalynn
