C# Events and Thread Safety

I frequently hear/read the following advice:

Always make a copy of an event before you check it for null and fire it. This will eliminate a potential problem with threading where the event becomes null at the location right between where you check for null and where you fire the event:

// Copy the event delegate before checking/calling
EventHandler copy = TheEvent;

if (copy != null)
    copy(this, EventArgs.Empty); // Call any handlers on the copied list

Updated: I thought from reading about optimizations that this might also require the event member to be volatile, but Jon Skeet states in his answer that the CLR doesn't optimize away the copy.

But meanwhile, in order for this issue to even occur, another thread must have done something like this:

// Better delist from event - don't want our handler called from now on:
otherObject.TheEvent -= OnTheEvent;
// Good, now we can be certain that OnTheEvent will not run...

The actual sequence might be this mixture:

// Copy the event delegate before checking/calling
EventHandler copy = TheEvent;

// Better delist from event - don't want our handler called from now on:
otherObject.TheEvent -= OnTheEvent;    
// Good, now we can be certain that OnTheEvent will not run...

if (copy != null)
    copy(this, EventArgs.Empty); // Call any handlers on the copied list

The point being that OnTheEvent runs after the author has unsubscribed, and yet they just unsubscribed specifically to avoid that happening. Surely what is really needed is a custom event implementation with appropriate synchronisation in the add and remove accessors. And in addition there is the problem of possible deadlocks if a lock is held while an event is fired.

So is this Cargo Cult Programming? It seems that way - a lot of people must be taking this step to protect their code from multiple threads, when in reality it seems to me that events require much more care than this before they can be used as part of a multi-threaded design. Consequently, people who are not taking that additional care might as well ignore this advice - it simply isn't an issue for single-threaded programs, and in fact, given the absence of volatile in most online example code, the advice may be having no effect at all.

(And isn't it a lot simpler to just assign the empty delegate { } on the member declaration so that you never need to check for null in the first place?)
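For illustration, a minimal sketch of that empty-delegate initialization (the class and event names here are hypothetical, not from the question):

```csharp
using System;

public class Worker
{
    // Initializing with an empty anonymous method means the backing field
    // is never null, so the raise site needs no null check and no copy.
    public event EventHandler Finished = delegate { };

    public void Run()
    {
        // ... do the work ...
        Finished(this, EventArgs.Empty); // safe even with zero real subscribers
    }
}
```

In practice `-=` never removes the initial empty delegate (each `delegate { }` expression produces a distinct instance), so normal unsubscription cannot drive the field back to null.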

Updated: In case it wasn't clear, I did grasp the intention of the advice - to avoid a null reference exception under all circumstances. My point is that this particular null reference exception can only occur if another thread is delisting from the event, and the only reason for doing that is to ensure that no further calls will be received via that event, which clearly is NOT achieved by this technique. You'd be concealing a race condition - it would be better to reveal it! That null exception helps to detect an abuse of your component. If you want your component to be protected from abuse, you could follow the example of WPF - store the thread ID in your constructor and then throw an exception if another thread tries to interact directly with your component. Or else implement a truly thread-safe component (not an easy task).
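As a rough sketch of that WPF-style guard (all names here are hypothetical; WPF itself exposes this idea as DispatcherObject.VerifyAccess):

```csharp
using System;
using System.Threading;

public class SingleThreadedComponent
{
    // Capture the creating thread's id in the constructor.
    private readonly int _ownerThreadId = Thread.CurrentThread.ManagedThreadId;

    protected void VerifyAccess()
    {
        if (Thread.CurrentThread.ManagedThreadId != _ownerThreadId)
            throw new InvalidOperationException(
                "Component accessed from a thread other than the one that created it.");
    }

    public void Interact()
    {
        VerifyAccess(); // reveal the abuse instead of concealing the race
        // ... normal single-threaded work ...
    }
}
```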

So I contend that merely doing this copy/check idiom is cargo cult programming, adding mess and noise to your code. To actually protect against other threads requires a lot more work.

Update in response to Eric Lippert's blog posts:

So there's a major thing I'd missed about event handlers: "event handlers are required to be robust in the face of being called even after the event has been unsubscribed", and obviously therefore we only need to care about the possibility of the event delegate being null. Is that requirement on event handlers documented anywhere?

And so: "There are other ways to solve this problem; for example, initializing the handler to have an empty action that is never removed. But doing a null check is the standard pattern."

So the one remaining fragment of my question is, why is explicit-null-check the "standard pattern"? The alternative, assigning the empty delegate, requires only = delegate {} to be added to the event declaration, and this eliminates those little piles of stinky ceremony from every place where the event is raised. It would be easy to make sure that the empty delegate is cheap to instantiate. Or am I still missing something?

Surely it must be that (as Jon Skeet suggested) this is just .NET 1.x advice that hasn't died out, as it should have done in 2005?


UPDATE

As of C# 6, the answer to this question is:

SomeEvent?.Invoke(this, e);
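For context, a complete raise method using the operator (the class and method names here are hypothetical). The null-conditional operator reads the field once into a temporary, so it compiles to the same copy-then-check idiom in a single expression:

```csharp
using System;

public class Publisher
{
    public event EventHandler SomeEvent;

    protected virtual void OnSomeEvent(EventArgs e)
    {
        // Reads SomeEvent once; invokes only if the snapshot is non-null.
        SomeEvent?.Invoke(this, e);
    }

    public void Trigger() => OnSomeEvent(EventArgs.Empty);
}
```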
Methenamine answered 24/4, 2009 at 15:37

This question came up in an internal discussion a while back; I've been intending to blog this for some time now. My post on the subject is here: Events and Races – Subphylum

Stephen Cleary has a CodeProject article which examines this issue, and he concludes that a general-purpose "thread-safe" solution does not exist. Basically, it's up to the event invoker to ensure the delegate is not null, and up to the event handler to be able to handle being invoked after it's been unsubscribed. – Riles

@Riles - actually, the second issue sometimes has to be dealt with by the event handler even if threads are not involved. It can happen if one event handler tells another handler to unsubscribe from the event currently being handled; that second subscriber will then receive the event anyway (because it unsubscribed while handling was in progress). – Methenamine

Adding a subscription to an event with zero subscribers, removing the only subscription from an event, invoking an event with zero subscribers, and invoking an event with exactly one subscriber are all much faster operations than add/remove/invoke scenarios involving other numbers of subscribers. Adding a dummy delegate slows down the common case. The real problem with C# is that its creators decided to have EventName(arguments) invoke the event's delegate unconditionally, rather than having it invoke the delegate only if non-null (and do nothing if null). – Fribourg

I moved the answer from the top to the bottom of the question. Ideally the question should not include an answer, IMHO. – Rout

The JIT isn't allowed to perform the optimization you're talking about in the first part, because of the condition. I know this was raised as a spectre a while ago, but it's not valid. (I checked it with either Joe Duffy or Vance Morrison a while ago; I can't remember which.)

Without the volatile modifier it's possible that the local copy taken will be out of date, but that's all. It won't cause a NullReferenceException.

And yes, there's certainly a race condition - but there always will be. Suppose we just change the code to:

TheEvent(this, EventArgs.Empty);

Now suppose that the invocation list for that delegate has 1000 entries. It's perfectly possible that the action at the start of the list will have executed before another thread unsubscribes a handler near the end of the list. However, that handler will still be executed because it'll be a new list. (Delegates are immutable.) As far as I can see this is unavoidable.

Using an empty delegate certainly avoids the nullity check, but doesn't fix the race condition. It also doesn't guarantee that you always "see" the latest value of the variable.
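The race is easy to demonstrate in a single thread by forcing the interleaving by hand: the copied delegate still holds the old invocation list (a contrived sketch, names invented for illustration):

```csharp
using System;

static class RaceDemo
{
    static event EventHandler TheEvent;

    public static int HandlerRuns; // counts invocations of the handler

    public static void Show()
    {
        EventHandler handler = (s, e) => HandlerRuns++;
        TheEvent += handler;

        EventHandler copy = TheEvent; // snapshot taken...
        TheEvent -= handler;          // ...then unsubscribed (the "race")

        // Delegates are immutable: the snapshot still contains the handler,
        // so it runs even though it was just unsubscribed.
        if (copy != null)
            copy(null, EventArgs.Empty);
    }
}
```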

Truculent answered 24/4, 2009 at 15:53

Is that a "yes"? :) I'm inclined to accept it as such. Do people really try to write systems in which events are dynamically enlisted/delisted from multiple threads as they are firing? (I'm mostly a GUI man on the CLR so far.) If so, I'd imagine this null reference problem is only the start of their problems. – Methenamine

It depends on exactly what you're trying to protect against. The simple form you object to gets rid of the issue of making the code go "bang!" at inconvenient moments, but that's all. Yes, it takes more effort to give a fully thread-safe event. Yes, the "delegate{}" syntax is a more convenient way of working in C# 2, if you only want basic protection. (A lot of the advice for events is probably pre-C#-2.) – Truculent

Joe Duffy's "Concurrent Programming on Windows" covers the JIT optimization and memory model aspects of the question; see code.logos.com/blog/2008/11/events_and_threads_part_4.html – Bradlybradman

I've accepted this based on the comment about the "standard" advice being pre-C# 2, and I'm not hearing anyone contradicting that. Unless it is really expensive to instantiate your event args, just put '= delegate {}' on the end of your event declaration and then call your events directly as if they were methods; never assign null to them. (The other stuff I brought in about ensuring a handler is not called after delisting was all irrelevant, and is impossible to ensure, even for single-threaded code, e.g. if handler 1 asks handler 2 to delist, handler 2 will still get called next.) – Methenamine

The only problem case (as always) is structs, where you cannot ensure that they will be instantiated with anything other than null values in their members. But structs suck. – Methenamine

About the empty delegate, see also this question: stackoverflow.com/questions/170907/… – Dimerous

Especially the comment from @supercat: "If the event would otherwise have exactly one subscriber (a very common case), the dummy handler will make it have two. Events with one handler are handled much more efficiently than those with two." – Dimerous

@JonSkeet, what do you think if we create a wrapper with a custom implementation of the invocation list, like I did in an answer below: https://mcmap.net/q/16675/-c-events-and-thread-safety I tried it, and it seems a bit slower to me, but fully thread safe. – Turning

@Tony: There's still fundamentally a race condition between something subscribing/unsubscribing and the delegate being executed. Your code (having just browsed through it briefly) reduces that race condition by allowing subscription/unsubscription to take effect while it's being raised, but I suspect that in most cases where the normal behaviour isn't good enough, this isn't either. – Truculent

@Turning The only way to truly remove the race conditions would be to ensure that subscription, unsubscription and invocation all happen within (the same) lock. This of course invites deadlocks, and would still require the handlers to be tolerant of unexpected invocations (after all, it would just mean you're blocking on the -= call - the handler can launch at this time as well). And now we're talking about a significant performance and correctness hit. Your code is still relatively naïve multi-threading - you're trying too hard, and losing safety. – Plum

Hi @Jon Skeet, what would happen if LITERALLY a variable is updated by one thread and another thread reads it EXACTLY AT THE SAME TIME (considering there is no synchronization)? I know maybe this is not the right question, but it has some relation. Thanks in advance. – Poland

@Daniel: It's not clear whether you're specifically asking about events or general variables - and the answer can vary. I suggest you ask a new question with more details. – Truculent

Hi @Jon Skeet, thanks; general variables, the simple assignment statement. – Poland

@Daniel: Right, that should definitely be in a different question, rather than one which is 13 years old and explicitly about events (which have their own thread-safety aspects). – Truculent

I see a lot of people going toward the extension method of doing this ...

public static class Extensions   
{   
  public static void Raise<T>(this EventHandler<T> handler, 
    object sender, T args) where T : EventArgs   
  {   
    if (handler != null) handler(sender, args);   
  }   
}

That gives you nicer syntax to raise the event ...

MyEvent.Raise( this, new MyEventArgs() );

And also does away with the local copy since it is captured at method call time.

Werth answered 24/4, 2009 at 15:56

I like the syntax, but let's be clear... it doesn't solve the problem of a stale handler getting invoked even after it has been unregistered. It only solves the null dereference problem. While I like the syntax, I question whether it is really any better than: public event EventHandler<T> MyEvent = delegate {}; ... MyEvent(this, new MyEventArgs()); That is also a very low-friction solution that I like for its simplicity. – Removable

@Simon I see different people saying different things on this. I've tested it, and what I've done indicates to me that this does handle the null handler issue. Even if the original sink unregisters from the event after the handler != null check, the event is still raised and no exception is thrown. – Werth

Yup, cf. this question: stackoverflow.com/questions/192980/… – See

+1. I just wrote this method myself, started thinking about thread-safety, did some research and stumbled upon this question. – Lomeli

How can this be called from VB.NET? Or does 'RaiseEvent' already cater for multi-threading scenarios? – Undies

@SimonGillbee Handlers are supposed to gracefully handle being called even after being unregistered. – Libradalibrarian

"Why is explicit-null-check the 'standard pattern'?"

I suspect the reason for this might be that the null-check is more performant.

If you always subscribe an empty delegate to your events when they are created, there will be some overheads:

  • Cost of constructing the empty delegate.
  • Cost of constructing a delegate chain to contain it.
  • Cost of invoking the pointless delegate every single time the event is raised.

(Note that UI controls often have a large number of events, most of which are never subscribed to. Having to create a dummy subscriber to each event and then invoke it would likely be a significant performance hit.)

I did some cursory performance testing to see the impact of the subscribe-empty-delegate approach, and here are my results:

Executing 50000000 iterations . . .
OnNonThreadSafeEvent took:      432ms
OnClassicNullCheckedEvent took: 490ms
OnPreInitializedEvent took:     614ms <--
Subscribing an empty delegate to each event . . .
Executing 50000000 iterations . . .
OnNonThreadSafeEvent took:      674ms
OnClassicNullCheckedEvent took: 674ms
OnPreInitializedEvent took:     2041ms <--
Subscribing another empty delegate to each event . . .
Executing 50000000 iterations . . .
OnNonThreadSafeEvent took:      2011ms
OnClassicNullCheckedEvent took: 2061ms
OnPreInitializedEvent took:     2246ms <--
Done

Note that for the case of zero or one subscribers (common for UI controls, where events are plentiful), the event pre-initialised with an empty delegate is notably slower (over 50 million iterations...)

For more information and source code, visit this blog post on .NET Event invocation thread safety that I published just the day before this question was asked (!)

(My test set-up may be flawed so feel free to download the source code and inspect it yourself. Any feedback is much appreciated.)

Snuggery answered 11/5, 2009 at 6:15

I think you make the key point in your blog post: there is no need to worry about the performance implications until it is a bottleneck. Why let the ugly way be the recommended way? If we wanted premature optimisation instead of clarity, we'd be using assembler - so my question remains, and I think the likely answer is that the advice simply predates anonymous delegates, and it takes a long time for human culture to shift old advice, like in the famous "pot roast story". – Methenamine

And your figures prove the point very well: the overhead comes down to just two and a half NANOSECONDS(!!!) per event raised (pre-init vs. classic-null). This would be undetectable in almost all apps with real work to do, but given that the vast majority of event usage is in GUI frameworks, you'd have to compare this with the cost of repainting parts of a screen in WinForms, etc., so it becomes even more invisible in the deluge of real CPU work and waiting around for resources. You get +1 from me for the hard work, anyway. :) – Methenamine

@DanielEarwicker said it right; you have moved me to be a believer in the public event WrapperDoneHandler OnWrapperDone = (x,y) => {}; model. – Yalu

It might also be good to time a Delegate.Combine/Delegate.Remove pair in cases where the event has zero, one, or two subscribers; if one repeatedly adds and removes the same delegate instance, the cost differences among the cases will be especially pronounced, since Combine has fast special-case behavior when one of the arguments is null (just return the other), and Remove is very fast when the two arguments are equal (just return null). – Fribourg

I truly enjoyed this read - not! Even though I need it to work with the C# feature called events!

Why not fix this in the compiler? I know there are MS people who read these posts, so please don't flame this!

1 - the null issue) Why not make events be .Empty instead of null in the first place? How many lines of code would be saved from null checks, or from having to stick = delegate {} onto the declaration? Let the compiler handle the Empty case, i.e. do nothing! If it matters to the creator of the event, they can check for .Empty and do whatever they care with it! Otherwise all the null checks / dummy delegate adds are hacks around the problem!

Honestly I'm tired of having to do this with every event - aka boilerplate code!

public event Action<thisClass, string> Some;
protected virtual void DoSomeEvent(string someValue)
{
  var e = Some; // avoid race condition here! 
  if(null != e) // avoid null condition here! 
     e(this, someValue);
}

2 - the race condition issue) I read Eric's blog post, and I agree that the H (handler) should handle the case where it has already delisted itself, but couldn't the event be made immutable/thread-safe? I.e., set a lock flag on its creation, so that whenever it is called, it locks all subscribing and unsubscribing to it while it is executing?

Conclusion,

Are not modern day languages supposed to solve problems like these for us?

Impeach answered 27/4, 2011 at 19:47

Agreed, there should be better support for this in the compiler. Until then, I created a PostSharp aspect which does this in a post-compile step. :) – Hydrotropism

Having thread subscription/unsubscription requests block while waiting for arbitrary outside code to complete is far worse than having subscribers receive events after subscriptions are canceled, especially since the latter "problem" can be solved easily by simply having event handlers check a flag to see if they're still interested in receiving their event, but deadlocks resulting from the former design may be intractable. – Fribourg

@supercat. IMO, the "far worse" comment is pretty application-dependent. Who wouldn't want very strict locking without extra flags when it's an option? A deadlock should only occur if the event-handling thread is waiting on yet another thread (that is subscribing/unsubscribing), since locks are same-thread re-entrant, and a subscribe/unsubscribe within the original event handler would not be blocked. If there is cross-thread waiting as part of an event handler, that would be a part of the design I'd prefer to rework. I'm coming from a server-side app angle that has predictable patterns. – Chuu

@crokusek: The analysis required to prove that a system is free of deadlock is easy if there would be no cycles in a directed graph connecting each lock to all of the locks that might be needed while it's held [the lack of cycles proves the system deadlock-free]. Allowing arbitrary code to be invoked while a lock is held will create an edge in the "might-be-needed" graph from that lock to any lock which arbitrary code might acquire (not quite every lock in the system, but not far from it). The consequent existence of cycles wouldn't imply that deadlock would occur, but... – Fribourg

...it would greatly increase the level of analysis necessary to prove that it couldn't. – Fribourg

With C# 6 and above, the code can be simplified using the new ?. operator, as in:

TheEvent?.Invoke(this, EventArgs.Empty);

Here is the MSDN documentation.

Anima answered 29/8, 2017 at 23:47

According to Jeffrey Richter in the book CLR via C#, the correct method is:

// Copy a reference to the delegate field now into a temporary field for thread safety
EventHandler<EventArgs> temp =
    Interlocked.CompareExchange(ref NewMail, null, null);
// If any methods registered interest with our event, notify them
if (temp != null) temp(this, e);

Because it forces a reference copy. For more information, see his Event section in the book.
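Since .NET 4.5, Volatile.Read expresses the same "take a fresh snapshot" intent more directly than the CompareExchange trick, and an acquire read is cheaper than a full interlocked operation. A sketch (the surrounding class is hypothetical; only the NewMail field name follows Richter's example, and the accessors omit the locking a production version would have):

```csharp
using System;
using System.Threading;

public class MailChecker
{
    // Backing delegate field, as in Richter's example.
    private EventHandler<EventArgs> NewMail;

    public event EventHandler<EventArgs> NewMailArrived
    {
        add { NewMail += value; }       // locking elided for brevity
        remove { NewMail -= value; }
    }

    protected void OnNewMail(EventArgs e)
    {
        // Forces a fresh read of the reference, like
        // Interlocked.CompareExchange(ref NewMail, null, null).
        EventHandler<EventArgs> temp = Volatile.Read(ref NewMail);
        if (temp != null) temp(this, e);
    }

    public void Check() => OnNewMail(EventArgs.Empty);
}
```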

Calvano answered 11/11, 2010 at 0:18

Maybe I'm missing something, but Interlocked.CompareExchange throws a NullReferenceException if its first argument is null, which is exactly what we want to avoid. msdn.microsoft.com/en-us/library/bb297966.aspx – Protist

Interlocked.CompareExchange would fail if it were somehow passed a null ref, but that's not the same thing as being passed a ref to a storage location (e.g. NewMail) which exists and which initially holds a null reference. – Fribourg

I've been using this design pattern to ensure that event handlers aren't executed after they're unsubscribed. It's working pretty well so far, although I haven't tried any performance profiling.

private readonly object eventMutex = new object();

private event EventHandler _onEvent = null;

public event EventHandler OnEvent
{
  add
  {
    lock(eventMutex)
    {
      _onEvent += value;
    }
  }

  remove
  {
    lock(eventMutex)
    {
      _onEvent -= value;
    }
  }

}

private void HandleEvent(EventArgs args)
{
  lock(eventMutex)
  {
    if (_onEvent != null)
      _onEvent(this, args); // EventHandler takes (sender, args)
  }
}

I'm mostly working with Mono for Android these days, and Android doesn't seem to like it when you try to update a View after its Activity has been sent to the background.
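If the deadlock concern raised in the question matters, one common variation snapshots the delegate under the lock but invokes it outside the lock, at the cost of re-admitting the "handler runs just after unsubscribing" window. A self-contained sketch of that variation (class and member names invented for illustration):

```csharp
using System;

public class SaferPublisher
{
    private readonly object eventMutex = new object();
    private event EventHandler _onEvent;

    public event EventHandler OnEvent
    {
        add { lock (eventMutex) { _onEvent += value; } }
        remove { lock (eventMutex) { _onEvent -= value; } }
    }

    public void Raise(EventArgs args)
    {
        EventHandler copy;
        lock (eventMutex)
        {
            copy = _onEvent; // snapshot under the same mutex as add/remove
        }

        // Invoking outside the lock means arbitrary handler code can no longer
        // deadlock against a blocked add/remove on another thread, but a
        // handler may again run just after it was unsubscribed elsewhere.
        if (copy != null)
            copy(this, args);
    }
}
```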

Civism answered 16/7, 2013 at 21:49

Actually, I see someone else is using a very similar pattern here: stackoverflow.com/questions/3668953/… – Civism

This practice is not about enforcing a certain order of operations. It's actually about avoiding a null reference exception.

The reasoning behind people caring about the null reference exception and not the race condition would require some deep psychological research. I think it has something to do with the fact that fixing the null reference problem is much easier. Once that is fixed, they hang a big "Mission Accomplished" banner on their code and unzip their flight suit.

Note: fixing the race condition probably involves using a synchronization flag to track whether the handler should run.

Pavel answered 24/4, 2009 at 16:04

I'm not asking for a solution to that problem. I'm wondering why there is widespread advice to spray extra code-mess around event firing, when it only avoids a null exception when there is a hard-to-detect race condition, which will still exist. – Methenamine

Well, that was my point. They don't care about the race condition. They only care about the null reference exception. I will edit that into my answer. – Pavel

And my point is: why would it make sense to care about the null reference exception and yet not care about the race condition? – Methenamine

"I think it has something to do with the fact that fixing the null reference problem is much easier." – Pavel

I don't think they care about the NullRef only; rather, this is best practice and this is what they know. When I found out about the race condition (especially considering the multithreading work I do) I was appalled, and am glad to use the new methodology and never have it break. – Yalu

A properly-written event handler should be prepared to handle the fact that any particular request to raise an event, whose processing might overlap a request to add or remove it, may or may not end up raising the event which was being added or removed. The reason programmers don't care about the race condition is that in properly-written code it won't matter who wins. – Fribourg

supercat, I think what you're saying is that programmers should care about it in their handler (properly written), and so they don't care about it in the invocation of the handler. But that is the point of the question. A lot of programmers don't care about it in their handler (i.e. their handler is not properly written). I rarely see advice to make the handler resilient against calls after unsubscribing, but I often see advice to check if the handler is null (apparently the OP sees the same). – Pavel

@dss539: Not only is it generally easy to deal with the problem in the handler - it is in general impossible to robustly deal with it anywhere else. It's no harder for an event handler to deal with the fact that it's invoked minutes after the event was unsubscribed than to deal with the possibility that it might be invoked while the event is subscribed but not manage to get anything done until after the event is unsubscribed. The only way that event dispatch code could guard against the latter scenario would be to not allow events to be unsubscribed once dispatched. – Fribourg

@dss539: While one could design an event framework which would block unsubscription requests until after pending event invocations had finished, such a design would make it impossible for any event (even something like an Unload event) to safely cancel an object's subscriptions to other events. Nasty. Better is to simply say that event unsubscription requests will cause events to be unsubscribed "eventually", and that event subscribers should check when they are invoked whether there's anything useful for them to do. – Fribourg

@Fribourg Yes, guarding against calls after unsubscription is easy. I never advocated an elaborate mechanism for handling "late" calls that happen after unsubscription. My comments are entirely about the psychology of typical programmers and not about techniques to address specific technical issues. If you believe that making event handlers safe is equally as easy as null-ref checking, then you will arrive at a different conclusion about programmer psychology. – Pavel

@dss539: In many cases, the authors of event subscribers will know more about the threading situations in which the events would be raised than would the authors of event publishers. For example, code subscribing to collection-changed events will often know more about the threading scenarios of code that changes the collection than would the author of the collection itself. The collection's author should generally not try to impose thread-safety on its consumers, but should avoid posing any needless impediments to consumers wishing to implement it themselves. – Fribourg

@Fribourg We are talking about extremely different things. – Pavel

So I'm a little late to the party here. :)

As for the use of null rather than the null object pattern to represent events with no subscribers, consider this scenario. You need to invoke an event, but constructing the object (EventArgs) is non-trivial, and in the common case your event has no subscribers. It would be beneficial to you if you could optimize your code to check to see if you had any subscribers at all before you committed processing effort to constructing the arguments and invoking the event.

With this in mind, a solution is to say "well, zero subscribers is represented by null." Then simply perform the null check before performing your expensive operation. I suppose another way of doing this would have been to have a Count property on the Delegate type, so you'd only perform the expensive operation if myDelegate.Count > 0. Using a Count property is a somewhat nice pattern that solves the original problem of allowing optimization, and it also has the nice property of being able to be invoked without causing a NullReferenceException.
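A sketch of that optimization as it works today, with null standing in for "zero subscribers" (class, event, and method names are invented for illustration):

```csharp
using System;

public class ReportSource
{
    public event EventHandler<string> ReportReady;

    public void Generate()
    {
        EventHandler<string> handlers = ReportReady; // one read of the field
        if (handlers == null)
            return; // zero subscribers: skip building the expensive payload

        string report = BuildExpensiveReport(); // only pay this cost when needed
        handlers(this, report);
    }

    private static string BuildExpensiveReport() => "...large report...";
}
```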

Keep in mind, though, that since delegates are reference types, they are allowed to be null. Perhaps there was simply no good way of hiding this fact under the covers and supporting only the null object pattern for events, so the alternative may have been forcing developers to check both for null and for zero subscribers. That would be even uglier than the current situation.

Note: This is pure speculation. I'm not involved with the .NET languages or CLR.

Corydon answered 1/5, 2009 at 5:33

I assume you mean "the use of empty delegate rather than..." You can already do what you suggest, with an event initialized to the empty delegate. The test (MyEvent.GetInvocationList().Length == 1) will be true if the initial empty delegate is the only thing on the list. There would still be no need to make a copy first. Though I think the case you describe would be extremely rare anyway. – Methenamine

I think we're conflating the ideas of delegates and events here. If I have an event Foo on my class, then when external users call MyType.Foo += / -=, they're actually calling the add_Foo() and remove_Foo() methods. However, when I reference Foo from within the class where it's defined, I'm actually referencing the underlying delegate directly, not the add_Foo() and remove_Foo() methods. And with the existence of types like EventHandlerList, there's nothing mandating that the delegate and the event even be in the same place. This is what I meant by the "Keep in mind" paragraph in my response. – Corydon

(cont.) I admit that this is a confusing design, but the alternative may have been worse. Since in the end all you have is a delegate - you might reference the underlying delegate directly, you might get it from a collection, you might instantiate it on the fly - it may have been technically infeasible to support anything other than the "check for null" pattern. – Corydon

As we're talking about firing the event, I can't see why the add/remove accessors are important here. – Methenamine

@Levi: I really dislike the way C# handles events. If I had my druthers, the delegate would have been given a different name from the event. From outside a class, the only permissible operations on an event name would be += and -=. Within the class, the permissible operations would also include invocation (with a built-in null check), testing against null, or setting to null. For anything else, one would have to use the delegate, whose name would be the event name with some particular prefix or suffix. – Fribourg

For single-threaded applications, you are correct: this is not an issue.

However, if you are making a component that exposes events, there is no guarantee that a consumer of your component is not going to go multithreaded, in which case you need to prepare for the worst.

Using the empty delegate does solve the problem, but it also causes a performance hit on every call to the event, and could possibly have GC implications.

You are right that the consumer tried to unsubscribe in order for this to happen, but if they made it past the temp copy, then consider the message already in transit.

If you don't use the temporary variable, and don't use the empty delegate, and someone unsubscribes, you get a null reference exception, which is fatal, so I think the cost is worth it.

Lordling answered 24/4, 2009 at 15:46

I've never really considered this to be much of an issue because I generally only protect against this sort of potential threading badness in static methods (etc) on my reusable components, and I don't make static events.

Am I doing it wrong?

Chesney answered 24/4, 2009 at 17:31 Comment(3)
If you allocate an instance of a class with mutable state (fields that change their values) and then let several threads access the same instance simultaneously, without using locking to protect those fields from being modified by two threads at the same time, you're probably doing it wrong. If all your threads have their own separate instances (sharing nothing) or all your objects are immutable (once allocated, their fields' values never change) then you're probably okay.Methenamine
My general approach is to leave synchronization up to the caller except in static methods. If I'm the caller, then I'll synchronize at that higher level. (with the exception of an object whose sole purpose is handling synchronized access, of course. :) )Chesney
@GregD that depends on how complex the method is and what data it uses. If it affects internal members, and you decide to run in a threaded/Tasked state, you are in for a lot of hurtYalu
S
0

Wire all your events at construction and leave them alone. The design of the Delegate class cannot possibly handle any other usage correctly, as I will explain in the final paragraph of this post.

First of all, there's no point in trying to intercept an event notification when your event handlers must already make a synchronized decision about whether/how to respond to the notification.

Anything that may be notified, should be notified. If your event handlers are properly handling the notifications (i.e. they have access to an authoritative application state and respond only when appropriate), then it will be fine to notify them at any time and trust they will respond properly.

The only time a handler shouldn't be notified that an event has occurred, is if the event in fact hasn't occurred! So if you don't want a handler to be notified, stop generating the events (i.e. disable the control or whatever is responsible for detecting and bringing the event into existence in the first place).

Honestly, I think the Delegate class is unsalvageable. The merger/transition to a MulticastDelegate was a huge mistake, because it effectively changed the (useful) definition of an event from something that happens at a single instant in time, to something that happens over a timespan. Such a change requires a synchronization mechanism that can logically collapse it back into a single instant, but the MulticastDelegate lacks any such mechanism. Synchronization should encompass the entire timespan or instant the event takes place, so that once an application makes the synchronized decision to begin handling an event, it finishes handling it completely (transactionally). With the black box that is the MulticastDelegate/Delegate hybrid class, this is near impossible, so adhere to using a single-subscriber and/or implement your own kind of MulticastDelegate that has a synchronization handle that can be taken out while the handler chain is being used/modified. I'm recommending this, because the alternative would be to implement synchronization/transactional-integrity redundantly in all your handlers, which would be ridiculously/unnecessarily complex.
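A minimal sketch of the kind of synchronization handle proposed here might look like the following (`LockedEvent` is a hypothetical name, not the answer author's code; note that invoking handlers while holding the lock is precisely the deadlock risk raised in the comments on this answer):

```csharp
using System;

// Sketch: a delegate chain guarded by a single synchronization handle,
// so the chain cannot be modified while handlers are running.
public class LockedEvent
{
    private readonly object gate = new object();
    private EventHandler handlers;

    public void Add(EventHandler h)    { lock (gate) handlers += h; }
    public void Remove(EventHandler h) { lock (gate) handlers -= h; }

    public void Raise(object sender, EventArgs e)
    {
        lock (gate) // modification must wait until all handlers complete
        {
            handlers?.Invoke(sender, e);
        }
    }
}
```

This gives the "transactional" raise the answer asks for, at the cost of blocking subscribers during a raise and inviting lock-ordering deadlocks if a handler takes further locks.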

Spoondrift answered 27/5, 2009 at 18:49 Comment(15)
[1] There is no useful event handler that happens at "a single instant in time". All operations have a timespan. Any single handler can have a non-trivial sequence of steps to perform. Supporting a list of handlers changes nothing.Methenamine
[2] Holding a lock while an event is fired is total madness. It leads inevitably to deadlock. The source takes out lock A, fires event, the sink takes out lock B, now two locks are held. What if some operation in another thread results in the locks being taken out in the reverse order? How can such deadly combinations be ruled out when responsibility for locking is divided between separately designed/tested components (which is the whole point of events)?Methenamine
[3] None of these issues in any way reduce the all-pervasive usefulness of normal multicast delegates/events in single-threaded composition of components, especially in GUI frameworks. This use case covers the vast majority of uses of events. Using events in a free-threaded way is of questionable value; this does not in any way invalidate their design or their obvious usefulness in the contexts where they make sense.Methenamine
[4] Threads + synchronous events is essentially a red herring. Queued asynchronous communication is the way to go.Methenamine
[1] I wasn't referring to measured time...I was talking about atomic operations, which logically occur instantaneously... and by that I mean nothing else involving the same resources they're using can change while the event is happening because it's serialized with a lock.Spoondrift
[2] For an event to be fired, a thread must be running that reads values (mouse_xy, time, status, etc.) and then calls an event handler chain. If, before calling that event chain, a lock on the event chain is taken out, another thread cannot modify the event chain. If another thread wants to modify the event chain, it has to wait for the event to complete. This simply prevents disturbances to the event chain, and ensures all handlers complete instantaneously from the perspective of anything wanting to modify the event chain.Spoondrift
[3] In a single-threaded application, MulticastDelegate (MCD) is fine, because calling a MCD with handlers A,B, and C, results in A,B, and C called in order, without disruption (i.e. instantaneously from the perspective of other code in the thread). But in multi-threaded apps, assumptions held just before calling the same MCD no longer hold. With no lock, the authoritative event chain may change while the immutable original is in the middle of calling events in the chain; hence the confusion when updating the original chain doesn't produce the desired effect. The convention is misleading.Spoondrift
[4] In order for anything that runs in parallel to produce any kind of coherent result, synchronization must occur. In the brain, we call it consciousness. Synchronization must occur, otherwise you just have a set of unrelated things performing unrelated tasks.Spoondrift
Avoiding deadlock is easy anyway, just don't assume the "event generating and processing" thread is single-threaded; don't assume it's never blocked. Problems appear when single-threaded assumptions enter a multi-threaded design.Spoondrift
Given your comment [3], it sounds like you generally agree with me. Events/MCD are fine for single-threaded, not much use for multi-threaded. But in the specifics, there are problems. The idea that handlers A, B, C happen "instantaneously" from the perspective of "other code in the thread" - what does that mean? They execute sequentially, and each can see partial changes in state caused by the ones executing before. Also, any one of them can replace the event chain during this, leading to a single-threaded version of exactly the same issues.Methenamine
"Avoiding deadlock is easy..." Oh dear.Methenamine
Yep, agreement on [3]. Executing "instantaneously" from the perspective of "other code in the thread" means that the MCD call (to all the handlers) completes without "timing"-related side effects from external (other thread) activity. The handlers in the MCD chain can do whatever they want but that's designed into the handlers and into the call as a whole, and would not be considered external or time-sensitive activity. Since there are no timing issues during the call, time loses all meaning so the call can be said to occur instantaneously from outside (before/after) the call to the MCD.Spoondrift
Deadlock is easy to avoid, because it's easy to understand and create. Spawn two threads and join them to each other -- each waiting for the other to complete... [something]. To avoid deadlock, just make sure you don't (assume/depend on/write code as-if) another thread that shares resources with you won't ever be blocked. For example, don't ask someone to run a marathon for you in the same breath you ask to borrow their shoes.Spoondrift
@Spoondrift - you're looking at a problem that occurs purely because of a locally recognisable contradiction. Those deadlocks are easy to spot. What is harder is when they arise due to a combination of two separate modules, each of which looks perfectly correct and harmless by itself. Holding a lock while firing an event is inviting such combinatory problems to arise. To run a marathon I need both shoes, which I share with my neighbour. He acquires the left shoe, I acquire the right shoe, and then we both wait for eternity for the other shoe to become available.Methenamine
@Earwicker - Code location doesn't change the condition under which deadlock occurs; therefore, the difficulty in spotting it is the same, but more files are involved. Identify threads first - the basic unit of concurrency - and code they may execute. Enumerate the resources each thread's code locks. Deadlock is possible if and only if the sets intersect (i.e. they share resources). If the sets intersect AND either thread asks the other to complete an operation involving a shared resource, then deadlock is probable; otherwise, it cannot occur. Tracking your threads and resources works.Spoondrift
A
0

Please take a look here: http://www.danielfortunov.com/software/%24daniel_fortunovs_adventures_in_software_development/2009/04/23/net_event_invocation_thread_safety This is the correct solution and should always be used instead of all other workarounds.

“You can ensure that the internal invocation list always has at least one member by initializing it with a do-nothing anonymous method. Because no external party can have a reference to the anonymous method, no external party can remove the method, so the delegate will never be null” — Programming .NET Components, 2nd Edition, by Juval Löwy

public static event EventHandler<EventArgs> PreInitializedEvent = delegate { };  

public static void OnPreInitializedEvent(EventArgs e)  
{  
    // No check required - event will never be null because  
    // we have subscribed an empty anonymous delegate which  
    // can never be unsubscribed. (But causes some overhead.)  
    PreInitializedEvent(null, e);  
}  
Attach answered 26/4, 2013 at 16:10 Comment(0)
C
0

I don't believe the question is constrained to the C# event type. Removing that restriction, why not re-invent the wheel a bit and do something along these lines?

Raise event thread safely - best practice

  • Ability to sub/unsubscribe from any thread while within a raise (race condition removed)
  • Operator overloads for += and -= at the class level.
  • Generic caller-defined delegate
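A sketch of what such a wheel might look like, assuming C# 7.3's `Delegate` generic constraint (`SafeEvent` is a hypothetical name matching the bullet points above, not the linked implementation):

```csharp
using System;

// Immutable wrapper around a delegate chain. Each +/- produces a new
// instance, so a raise sees a consistent snapshot of the chain and the
// null-check race disappears. The operator overloads make += and -= work
// at the call site with any caller-defined delegate type.
public sealed class SafeEvent<TDelegate> where TDelegate : Delegate
{
    private readonly TDelegate chain;

    public SafeEvent() : this(null) { }
    private SafeEvent(TDelegate chain) { this.chain = chain; }

    public static SafeEvent<TDelegate> operator +(SafeEvent<TDelegate> ev, TDelegate h)
        => new SafeEvent<TDelegate>((TDelegate)Delegate.Combine(ev.chain, h));

    public static SafeEvent<TDelegate> operator -(SafeEvent<TDelegate> ev, TDelegate h)
        => new SafeEvent<TDelegate>((TDelegate)Delegate.Remove(ev.chain, h));

    // Snapshot raise: this instance's chain cannot change mid-invocation.
    public void Raise(params object[] args) => chain?.DynamicInvoke(args);
}
```

Note the caveat: a shared field holding a `SafeEvent` is still reassigned by `+=`/`-=`, so fully lock-free subscription from any thread would additionally need `Interlocked.CompareExchange` on that field, and `DynamicInvoke` is slower than a direct delegate call.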
Chuu answered 22/10, 2013 at 1:56 Comment(0)
T
0

Thanks for a useful discussion. I was working on this problem recently and made the following class, which is a bit slower but avoids calls to disposed objects.

The main point here is that the invocation list can be modified even while the event is being raised.

/// <summary>
/// Thread safe event invoker
/// </summary>
public sealed class ThreadSafeEventInvoker
{
    /// <summary>
    /// Dictionary of delegates
    /// </summary>
    readonly ConcurrentDictionary<Delegate, DelegateHolder> delegates = new ConcurrentDictionary<Delegate, DelegateHolder>();

    /// <summary>
    /// List of delegates to be called; we need it because it is relatively easy to implement a loop with list
    /// modification inside of it
    /// </summary>
    readonly LinkedList<DelegateHolder> delegatesList = new LinkedList<DelegateHolder>();

    /// <summary>
    /// locker for delegates list
    /// </summary>
    private readonly ReaderWriterLockSlim listLocker = new ReaderWriterLockSlim();

    /// <summary>
    /// Add delegate to list
    /// </summary>
    /// <param name="value"></param>
    public void Add(Delegate value)
    {
        var holder = new DelegateHolder(value);
        if (!delegates.TryAdd(value, holder)) return;

        listLocker.EnterWriteLock();
        delegatesList.AddLast(holder);
        listLocker.ExitWriteLock();
    }

    /// <summary>
    /// Remove delegate from list
    /// </summary>
    /// <param name="value"></param>
    public void Remove(Delegate value)
    {
        DelegateHolder holder;
        if (!delegates.TryRemove(value, out holder)) return;

        Monitor.Enter(holder);
        holder.IsDeleted = true;
        Monitor.Exit(holder);
    }

    /// <summary>
    /// Raise an event
    /// </summary>
    /// <param name="args"></param>
    public void Raise(params object[] args)
    {
        DelegateHolder holder = null;

        try
        {
            // get root element
            listLocker.EnterReadLock();
            var cursor = delegatesList.First;
            listLocker.ExitReadLock();

            while (cursor != null)
            {
                // get its value and a next node
                listLocker.EnterReadLock();
                holder = cursor.Value;
                var next = cursor.Next;
                listLocker.ExitReadLock();

                // lock holder and invoke if it is not removed
                Monitor.Enter(holder);
                if (!holder.IsDeleted)
                    holder.Action.DynamicInvoke(args);
                else if (!holder.IsDeletedFromList)
                {
                    listLocker.EnterWriteLock();
                    delegatesList.Remove(cursor);
                    holder.IsDeletedFromList = true;
                    listLocker.ExitWriteLock();
                }
                Monitor.Exit(holder);

                cursor = next;
            }
        }
        catch
        {
            // clean up
            if (listLocker.IsReadLockHeld)
                listLocker.ExitReadLock();
            if (listLocker.IsWriteLockHeld)
                listLocker.ExitWriteLock();
            if (holder != null && Monitor.IsEntered(holder))
                Monitor.Exit(holder);

            throw;
        }
    }

    /// <summary>
    /// helper class
    /// </summary>
    class DelegateHolder
    {
        /// <summary>
        /// delegate to call
        /// </summary>
        public Delegate Action { get; private set; }

        /// <summary>
        /// flag shows if this delegate was removed from the list of calls
        /// </summary>
        public bool IsDeleted { get; set; }

        /// <summary>
        /// flag shows if this instance was removed from all lists
        /// </summary>
        public bool IsDeletedFromList { get; set; }

        /// <summary>
        /// Constructor
        /// </summary>
        /// <param name="d"></param>
        public DelegateHolder(Delegate d)
        {
            Action = d;
        }
    }
}

And the usage is:

    private readonly ThreadSafeEventInvoker someEventWrapper = new ThreadSafeEventInvoker();
    public event Action SomeEvent
    {
        add { someEventWrapper.Add(value); }
        remove { someEventWrapper.Remove(value); }
    }

    public void RaiseSomeEvent()
    {
        someEventWrapper.Raise();
    }

Test

I tested it in the following manner. I have a thread which creates and destroys objects like this:

var objects = Enumerable.Range(0, 1000).Select(x => new Bar(foo)).ToList();
Thread.Sleep(10);
objects.ForEach(x => x.Dispose());

In a Bar (a listener object) constructor I subscribe to SomeEvent (which is implemented as shown above) and unsubscribe in Dispose:

    public Bar(Foo foo)
    {
        this.foo = foo;
        foo.SomeEvent += Handler;
    }

    public void Handler()
    {
        if (disposed)
            Console.WriteLine("Handler is called after object was disposed!");
    }

    public void Dispose()
    {
        foo.SomeEvent -= Handler;
        disposed = true;
    }

Also I have a couple of threads which raise the event in a loop.

All these actions are performed simultaneously: many listeners are created and destroyed while the event is being fired at the same time.

If there were a race condition I would see a message in the console, but it is empty. If I use CLR events as usual, I see it full of warning messages. So I conclude that it is possible to implement thread-safe events in C#.

What do you think?

Turning answered 18/10, 2014 at 7:18 Comment(1)
Looks good enough to me. Although I think it might be possible (in theory) for disposed = true to happen before foo.SomeEvent -= Handler in your test application, producing a false positive. But apart from that, there's a few things you might want to change. You really want to use try ... finally for the locks - this will help you make this not just thread-safe, but also abort-safe. Not to mention that you could then get rid of that silly try catch. And you're not checking the delegate passed in Add/Remove - it could be null (you should throw straight away in Add/Remove).Plum

© 2022 - 2024 — McMap. All rights reserved.