What is the real overhead of try/catch in C#?

So, I know that try/catch does add some overhead and therefore isn't a good way of controlling process flow, but where does this overhead come from and what is its actual impact?

Oliveolivegreen answered 9/9, 2008 at 16:36 Comment(0)

I'm not an expert in language implementations (so take this with a grain of salt), but I think one of the biggest costs is unwinding the stack and storing it for the stack trace. I suspect this happens only when the exception is thrown (but I don't know), and if so, it would be a decently sized hidden cost every time an exception is thrown... so it's not like you are just jumping from one place in the code to another; there is a lot going on.

I don't think it's a problem as long as you are using exceptions for EXCEPTIONAL behavior (so not your typical, expected path through the program).

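If you want a feel for the magnitude, a quick experiment is to time a loop that fails by throwing against one that fails via a return value. A minimal sketch (the method names and iteration count are invented for the illustration, not taken from the answer):

using System;
using System.Diagnostics;

class ThrowCostDemo
{
    static void FailWithException() => throw new InvalidOperationException();

    static bool FailWithReturnCode() => false;

    static void Main()
    {
        const int Iterations = 100_000;

        var sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; i++)
        {
            try { FailWithException(); }
            catch (InvalidOperationException) { /* swallowed: simulating flow control */ }
        }
        Console.WriteLine($"throw/catch: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        int failures = 0;
        for (int i = 0; i < Iterations; i++)
        {
            if (!FailWithReturnCode()) failures++;   // the non-exceptional failure path
        }
        Console.WriteLine($"return code: {sw.ElapsedMilliseconds} ms ({failures} failures)");
    }
}

On typical hardware the thrown-exception loop is slower by orders of magnitude, which is consistent with the "lot going on" described above.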
Cyprio answered 9/9, 2008 at 16:41 Comment(3)
More precisely: try is cheap, catch is cheap, throw is expensive. If you avoid try and catch, throw is still expensive.Geof
Hmmm - mark-up doesn't work in comments. To try again - exceptions are for errors, not for "exceptional behaviour" or conditions: blogs.msdn.com/kcwalina/archive/2008/07/17/…Varnado
@Windows programmer Stats / source please?Could

Three points to make here:

  • Firstly, there is little or NO performance penalty in merely having try-catch blocks in your code, so performance should not be a reason to avoid them in your application. The performance hit only comes into play when an exception is actually thrown.

  • When an exception is thrown, in addition to the stack-unwinding operations that others have mentioned, you should be aware that a whole bunch of runtime/reflection-related work happens in order to populate the members of the exception class, such as the stack trace object and the various type members.

  • I believe that this is one of the reasons why the general advice, if you are going to rethrow an exception, is to just throw; rather than throw the caught exception again or construct a new one: in those cases all of that stack information is regathered, whereas with a bare throw it is all preserved (see the sketch after these points).

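A short sketch of that rethrow advice (the method names are invented for the illustration):

using System;

class RethrowDemo
{
    static void Inner() => throw new InvalidOperationException("original failure");

    static void BadRethrow()
    {
        try { Inner(); }
        catch (Exception ex)
        {
            // "throw ex;" resets the stack trace: it now starts here,
            // hiding the fact that Inner() was the real source.
            throw ex;
        }
    }

    static void GoodRethrow()
    {
        try { Inner(); }
        catch
        {
            // A bare "throw;" preserves the original stack trace,
            // so Inner() still appears in it.
            throw;
        }
    }

    static void Main()
    {
        try { BadRethrow(); }
        catch (Exception ex) { Console.WriteLine("throw ex:\n" + ex.StackTrace + "\n"); }

        try { GoodRethrow(); }
        catch (Exception ex) { Console.WriteLine("throw:\n" + ex.StackTrace); }
    }
}

Running this shows Inner() in the stack trace only for the bare throw; variant.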
Gertudegerty answered 9/9, 2008 at 17:19 Comment(2)
Also: When you rethrow an exception as "throw ex" then you lose the original stack trace and replace it with the CURRENT stack trace; rarely what's wanted. If you just "throw" then the original stack trace in the Exception is preserved.Damiondamita
@Damiondamita Or throw new Exception("Wrapping layer’s error", ex);Toluca

Are you asking about the overhead of using try/catch/finally when exceptions aren't thrown, or the overhead of using exceptions to control process flow? The latter is somewhat akin to using a stick of dynamite to light a toddler's birthday candle, and the associated overhead falls into the following areas:

  • You can expect additional cache misses due to the thrown exception accessing code and data not normally in the cache.
  • You can expect additional page faults due to the thrown exception accessing non-resident code and data not normally in your application's working set.

    • for example, throwing the exception will require the CLR to find the location of the finally and catch blocks based on the current IP and the return IP of every frame until the exception is handled plus the filter block.
    • additional construction cost and name resolution in order to create the frames for diagnostic purposes, including reading of metadata etc.
    • both of the above items typically access "cold" code and data, so hard page faults are probable if you have memory pressure at all:

      • the CLR tries to put code and data that is used infrequently far from data that is used frequently to improve locality, so this works against you because you're forcing the cold to be hot.
      • the cost of the hard page faults, if any, will dwarf everything else.
  • Typical catch situations are often deep, therefore the above effects would tend to be magnified (increasing the likelihood of page faults).

As for the actual impact of the cost, this can vary a lot depending on what else is going on in your code at the time. Jon Skeet has a good summary here, with some useful links. I tend to agree with his statement that if you get to the point where exceptions are significantly hurting your performance, you have problems in terms of your use of exceptions beyond just the performance.

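The point about deep catch situations can be checked directly: the deeper the throw point sits below the handler, the more frames the CLR has to walk. A minimal sketch (the depths and iteration counts are arbitrary choices for the illustration):

using System;
using System.Diagnostics;

class UnwindDepthDemo
{
    // Recurse to the requested depth, then throw from the bottom.
    static void ThrowAtDepth(int depth)
    {
        if (depth <= 0) throw new InvalidOperationException();
        ThrowAtDepth(depth - 1);
    }

    static void Main()
    {
        foreach (int depth in new[] { 1, 100, 1000 })
        {
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < 10_000; i++)
            {
                try { ThrowAtDepth(depth); }
                catch (InvalidOperationException) { }
            }
            Console.WriteLine($"depth {depth}: {sw.ElapsedMilliseconds} ms");
        }
    }
}

The per-throw cost grows with the distance the exception has to unwind.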
Varnado answered 20/9, 2008 at 14:59 Comment(0)

Contrary to commonly accepted theories, try/catch can have significant performance implications, and that's whether an exception is thrown or not!

  1. It disables some automatic optimisations (by design), and in some cases injects debugging code, as you can expect from a debugging aid. There will always be people who disagree with me on this point, but the language requires it and the disassembly shows it, so those people are, by dictionary definition, delusional.
  2. It can negatively impact maintenance. This is actually the most significant issue here, but since my last answer (which focused almost entirely on it) was deleted, I'll try to focus on the less significant issue (the micro-optimisation) as opposed to the more significant issue (the macro-optimisation).

The former has been covered in a couple of blog posts by Microsoft MVPs over the years, and I trust you can find them easily enough.

There's also this answer, which shows the difference between disassembled code with and without try/catch.
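If you want to reproduce that comparison yourself, a minimal pair like the following sketch can be compiled and inspected in a disassembler such as ildasm or ILSpy (the class and method names are invented for the example):

class WithoutTry
{
    public static int Parse(string s) => int.Parse(s);
}

class WithTry
{
    public static int Parse(string s)
    {
        // The IL for this method gains a .try/.catch region and extra
        // 'leave' instructions that WithoutTry.Parse does not have.
        try { return int.Parse(s); }
        catch { return 0; }
    }
}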

It seems so obvious that there is an overhead: it is blatantly observable in code generation, and it even seems to be acknowledged by people whom Microsoft values! Yet here I am, repeating it to the internet...

Yes, there are dozens of extra MSIL instructions for one trivial line of code, and that doesn't even cover the disabled optimisations, so technically it's a micro-optimisation.


I posted an answer years ago which got deleted as it focused on the productivity of programmers (the macro-optimisation).

This is unfortunate, as saving a few nanoseconds of CPU time here and there is unlikely to make up for many accumulated hours of manual optimisation by humans. Which does your boss pay more for: an hour of your time, or an hour with the computer running? At what point do we pull the plug and admit that it's time to just buy a faster computer?

Clearly, we should be optimising our priorities, not just our code! In my last answer I drew upon the differences between two snippets of code.

Using try/catch:

int x;
try {
    x = int.Parse("1234");
}
catch {
    return;
}
// some more code here...

Not using try/catch:

int x;
if (int.TryParse("1234", out x) == false) {
    return;
}
// some more code here

Consider, from the perspective of a maintenance developer, which is more likely to waste your time: if not profiling and optimisation (covered above), which likely wouldn't even be necessary were it not for the try/catch problem, then simply scrolling through source code... One of these has four extra lines of boilerplate garbage!

As more and more fields are introduced into a class, all of this boilerplate garbage accumulates (both in source and in disassembled code) well beyond reasonable levels. Four extra lines per field, and they're always the same lines... Were we not taught to avoid repeating ourselves? I suppose we could hide the try/catch behind some home-brewed abstraction (a sketch follows), but... then we might as well just avoid exceptions (i.e. use int.TryParse).
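For what it's worth, such a home-brewed abstraction might look like the sketch below (the Safe.Try helper is an invention for this example, not a BCL API), though at that point int.TryParse is simpler and cheaper:

using System;

static class Safe
{
    // Hypothetical helper: converts any exception from the supplied
    // function into a boolean failure result.
    public static bool Try<T>(Func<T> func, out T result)
    {
        try
        {
            result = func();
            return true;
        }
        catch
        {
            result = default(T);
            return false;
        }
    }
}

class Demo
{
    static void Main()
    {
        // Same shape as int.TryParse, but paying the try/catch costs inside.
        if (!Safe.Try(() => int.Parse("1234"), out int x))
            return;
        Console.WriteLine(x);
    }
}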

This isn't even a complex example; I've seen attempts at instantiating new classes in try/catch. Consider that all of the code inside the constructor might then be disqualified from certain optimisations that would otherwise be applied automatically by the compiler. What better way to give rise to the theory that the compiler is slow, when in fact the compiler is doing exactly what it's told to do?

Assuming an exception is thrown by said constructor, and some bug is triggered as a result, the poor maintenance developer then has to track it down. That might not be an easy task: unlike the spaghetti code of the goto nightmare, try/catch can make a mess in three dimensions, because it can move control up the stack, not just into other parts of the same method, but into other classes and methods as well, all of which the maintenance developer will discover the hard way! Yet we are told that "goto is dangerous", heh!

At the end, I'll mention that try/catch does have its benefit, which is: it's designed to disable optimisations! It is, if you will, a debugging aid! That's what it was designed for, and that's what it should be used as...

I guess that's a positive point too. It can be used to disable optimizations that might otherwise cripple safe, sane message passing algorithms for multithreaded applications, and to catch possible race conditions ;) That's about the only scenario I can think of to use try/catch. Even that has alternatives.


What optimisations do try, catch and finally disable?

A.K.A

How are try, catch and finally useful as debugging aids?

In short: they're write-barriers. This comes from the standard:

12.3.3.13 Try-catch statements

For a statement stmt of the form:

try try-block
catch ( ... ) catch-block-1
... 
catch ( ... ) catch-block-n
  • The definite assignment state of v at the beginning of try-block is the same as the definite assignment state of v at the beginning of stmt.
  • The definite assignment state of v at the beginning of catch-block-i (for any i) is the same as the definite assignment state of v at the beginning of stmt.
  • The definite assignment state of v at the end-point of stmt is definitely assigned if (and only if) v is definitely assigned at the end-point of try-block and every catch-block-i (for every i from 1 to n).

In other words, at the beginning of each try statement:

  • all assignments made to visible objects prior to entering the try statement must be complete, which requires a thread lock for a start, making it useful for debugging race conditions!
  • the compiler isn't allowed to:
    • eliminate unused variable assignments which have definitely been assigned to before the try statement
    • reorganise or coalesce any of its inner assignments (i.e. see my first link, if you haven't already done so).
    • hoist assignments over this barrier, to delay assignment to a variable which it knows won't be used until later (if at all) or to pre-emptively move later assignments forward to make other optimisations possible...

A similar story holds for each catch statement; suppose within your try statement (or a constructor or function it invokes, etc.) you assign to an otherwise pointless variable (let's say, garbage = 42;). The compiler can't eliminate that statement, no matter how irrelevant it is to the observable behaviour of the program: the assignment needs to have completed before the catch block is entered. See the sketch below.

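A concrete sketch of the pattern described above (whether a given JIT would otherwise have eliminated the store is implementation-dependent; this just shows the shape the argument is about):

using System;

class WriteBarrierDemo
{
    static int Compute() => 42;

    static void Main()
    {
        int garbage = 0;              // the "otherwise pointless variable"
        try
        {
            garbage = Compute();      // per the argument above, this store must be
                                      // complete before the catch block can run
            throw new InvalidOperationException("force the catch path");
        }
        catch (InvalidOperationException)
        {
            // If control arrives here, 'garbage' must reflect whichever
            // assignments had completed before the throw.
            Console.WriteLine(garbage);
        }
    }
}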
For what it's worth, finally tells a similarly degrading story:

12.3.3.14 Try-finally statements

For a try statement stmt of the form:

try try-block
finally finally-block

  • The definite assignment state of v at the beginning of try-block is the same as the definite assignment state of v at the beginning of stmt.
  • The definite assignment state of v at the beginning of finally-block is the same as the definite assignment state of v at the beginning of stmt.
  • The definite assignment state of v at the end-point of stmt is definitely assigned if (and only if) either:
    • v is definitely assigned at the end-point of try-block
    • v is definitely assigned at the end-point of finally-block

    If a control flow transfer (such as a goto statement) is made that begins within try-block and ends outside of try-block, then v is also considered definitely assigned on that control flow transfer if v is definitely assigned at the end-point of finally-block. (This is not an "only if": if v is definitely assigned for another reason on this control flow transfer, then it is still considered definitely assigned.)

12.3.3.15 Try-catch-finally statements

Definite assignment analysis for a try-catch-finally statement of the form:

try try-block
catch ( ... ) catch-block-1
... 
catch ( ... ) catch-block-n
finally finally-block

is done as if the statement were a try-finally statement enclosing a try-catch statement:

try {
    try try-block
    catch ( ... ) catch-block-1
    ...
    catch ( ... ) catch-block-n
}
finally finally-block
Unkindly answered 15/8, 2017 at 20:50 Comment(0)

In my experience, the biggest overhead is in actually throwing an exception and handling it. I once worked on a project where code similar to the following was used to check whether someone had the right to edit some object. This HasRight() method was used everywhere in the presentation layer and was often called for hundreds of objects.

bool HasRight(string rightName, DomainObject obj) {
  try {
    CheckRight(rightName, obj);
    return true;
  }
  catch (Exception) {
    // any exception from CheckRight means "no right"
    return false;
  }
}

void CheckRight(string rightName, DomainObject obj) {
  if (!_user.Rights.Contains(rightName))
    throw new Exception();
}

When the test database filled up with test data, this led to a very visible slowdown while opening new forms, etc.

So I refactored it to the following, which - according to later quick 'n dirty measurements - is about 2 orders of magnitude faster:

bool HasRight(string rightName, DomainObject obj) {
  return _user.Rights.Contains(rightName);
}

void CheckRight(string rightName, DomainObject obj) {
  if (!HasRight(rightName, obj))
    throw new Exception();
}

So in short, using exceptions in normal process flow is about two orders of magnitude slower than similar process flow without exceptions.

Peculiar answered 9/9, 2008 at 16:48 Comment(2)
Why would you want to throw an exception here? You could handle the case of not having the rights on the spot.Tobias
@Tobias that's actually what I changed, making it two orders of magnitude faster.Peculiar

Not to mention if it's inside a frequently-called method it may affect the overall behavior of the application.
For example, I consider the use of Int32.Parse a bad practice in most cases, since it throws an exception for something that can easily be handled otherwise.

So to conclude everything written here:
1) Use try..catch blocks to catch unexpected errors - almost no performance penalty.
2) Don't use exceptions for expected errors if you can avoid it.

Ganister answered 12/9, 2008 at 18:6 Comment(0)

I wrote an article about this a while back because there were a lot of people asking about this at the time. You can find it and the test code at http://www.blackwasp.co.uk/SpeedTestTryCatch.aspx.

The upshot is that there is a tiny amount of overhead for a try/catch block, but so small that it should be ignored. However, if you are running try/catch blocks in loops that are executed millions of times, you may want to consider moving the block outside the loop if possible.

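A sketch of the two placements (DoWork is a placeholder for whatever the loop body actually does):

using System;

class LoopPlacementDemo
{
    // Placeholder for the real per-iteration work.
    static void DoWork(int i)
    {
        if (i < 0) throw new InvalidOperationException();
    }

    static void Main()
    {
        // Handler inside the loop: a handler scope per iteration,
        // and a failure only skips that one iteration.
        for (int i = 0; i < 1_000_000; i++)
        {
            try { DoWork(i); }
            catch (InvalidOperationException) { /* handle and continue */ }
        }

        // Handler outside the loop: one scope for the whole loop, but a
        // thrown exception abandons all remaining iterations.
        try
        {
            for (int i = 0; i < 1_000_000; i++)
                DoWork(i);
        }
        catch (InvalidOperationException) { /* handle once */ }

        Console.WriteLine("done");
    }
}

Note the semantics differ: with the handler outside the loop, a single failure abandons all remaining iterations, so this is only a like-for-like move when that behaviour is acceptable.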
The key performance issue with try/catch blocks is when you actually catch an exception. This can add a noticeable delay to your application. Of course, when things are going wrong, most developers (and a lot of users) recognise the pause as an exception that is about to happen! The key here is not to use exception handling for normal operations. As the name suggests, they are exceptional and you should do everything you can to avoid them being thrown. You should not use them as part of the expected flow of a program that is functioning correctly.

Bake answered 9/3, 2009 at 0:17 Comment(0)

I made a blog entry about this subject last year. Check it out. Bottom line is that there is almost no cost for a try block if no exception occurs; on my laptop, an exception was about 36μs. That might be less than you expected, but keep in mind that those results were on a shallow stack. Also, first exceptions are really slow.

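The "first exceptions are really slow" point is easy to observe, since the first throw pays one-time costs that subsequent throws don't. A minimal sketch:

using System;
using System.Diagnostics;

class FirstExceptionDemo
{
    static void Main()
    {
        var sw = Stopwatch.StartNew();
        try { throw new InvalidOperationException(); }
        catch { }
        Console.WriteLine($"first throw:  {sw.Elapsed.TotalMilliseconds:F3} ms");

        sw.Restart();
        try { throw new InvalidOperationException(); }
        catch { }
        Console.WriteLine($"second throw: {sw.Elapsed.TotalMilliseconds:F3} ms");
    }
}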
Broyles answered 9/9, 2008 at 17:6 Comment(2)
I couldn't reach your blog (the connection is timing out; are you using try/catch too much? heh heh), but you seem to be arguing with the language spec and some MS MVPs who have also written blogs on the subject, providing measurements to the contrary of your advice... I'm open to the suggestion that the research I've done is wrong, but I'll need to read your blog entry to see what it says.Unkindly
In addition to @Hafthor's blog post, here's another blog post with code specifically written to test speed performance differences. According to the results, if you have an exception occur even just 5% of the time, Exception Handling code runs 100x slower overall than non-exception handling code. The article specifically targets the try-catch block vs tryparse() methods, but the concept is the same.Warden

It is vastly easier to write, debug, and maintain code that is free of compiler error messages, code-analysis warning messages, and routinely accepted exceptions (particularly exceptions that are thrown in one place and accepted in another). Because it is easier, the code will on average be better written and less buggy.

To me, that programmer and quality overhead is the primary argument against using try-catch for process flow.

The computer overhead of exceptions is insignificant in comparison, and usually tiny in terms of the application's ability to meet real-world performance requirements.

Exeat answered 22/10, 2008 at 21:6 Comment(1)
@Ritchard T, why? In comparison to the programmer and quality overhead it is insignificant.Tobias

I really like Hafthor's blog post, and to add my two cents to this discussion, I'd like to say that it's always been easy for me to have the DATA LAYER throw only one type of exception (DataAccessException). This way my BUSINESS LAYER knows what exception to expect and catches it. Then, depending on further business rules (i.e. whether my business object participates in the workflow etc.), I may throw a new exception (BusinessObjectException) or proceed without re-throwing.

I'd say don't hesitate to use try..catch whenever it is necessary and use it wisely!

For example, this method participates in a workflow...

Comments?

public bool DeleteGallery(int id)
{
    try
    {
        using (var transaction = new DbTransactionManager())
        {
            try
            {
                transaction.BeginTransaction();

                _galleryRepository.DeleteGallery(id, transaction);
                _galleryRepository.DeletePictures(id, transaction);

                FileManager.DeleteAll(id);

                transaction.Commit();
            }
            catch (DataAccessException ex)
            {
                Logger.Log(ex);
                transaction.Rollback();                        
                throw new BusinessObjectException("Cannot delete gallery. Ensure business rules and try again.", ex);
            }
        }
    }
    catch (DbTransactionException ex)
    {
        Logger.Log(ex);
        throw new BusinessObjectException("Cannot delete gallery.", ex);
    }
    return true;
}
Heine answered 24/2, 2009 at 4:50 Comment(2)
David, would you wrap the call to 'DeleteGallery' in a try/catch block?Eventuate
Since DeleteGallery is a Boolean function, it seems to me that throwing an exception in there is not useful. This would require the call to DeleteGallery to be enclosed in a try/catch block. An if(!DeleteGallery(theid)) { //handle } looks more meaningful to me in that specific example.Tobias

We can read in Programming Language Pragmatics by Michael L. Scott that modern compilers add no overhead in the common case, i.e. when no exceptions occur: all of the work is done at compile time. But when an exception is thrown at run time, the runtime needs to perform a binary search of the handler tables to find the correct handler, and this happens for every new throw.

But exceptions are exceptional, and this cost is perfectly acceptable. If you tried to do exception handling without exceptions, using return error codes instead, you would probably need an if statement after every subroutine call, and that incurs a very real overhead: an if statement compiles to a few assembly instructions that are executed every time you enter your subroutines. A sketch of the contrast follows.

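A sketch of the trade-off described above (the names are invented for the illustration): with error codes, every call site pays a branch even on success, whereas with exceptions the success path carries no per-call check and the whole cost is concentrated at the throw.

using System;

class ErrorCodeVsException
{
    // Error-code style: the caller must branch after every call.
    static bool TryStep(out int result)
    {
        result = 42;
        return true;
    }

    // Exception style: the success path has no branch at the call site.
    static int Step() => 42;

    static void Main()
    {
        if (!TryStep(out int a)) return;   // a branch on every call, even on success
        if (!TryStep(out int b)) return;

        int c = Step();                    // no per-call check; failure would throw
        Console.WriteLine(a + b + c);
    }
}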
Hope this helps. This information is based on the cited book; for more information refer to Chapter 8.5, Exception Handling.

Lotti answered 17/1, 2010 at 12:38 Comment(1)
The compiler is out of the picture at runtime; there has to be overhead in the try/catch machinery so that the CLR can handle the exceptions. C# runs on the .NET CLR (a virtual machine). It seems to me that the overhead of the block itself is minimal when there is no exception, but the cost of the CLR handling the exception is very significant.Tobias

Let us analyse one of the biggest possible costs of a try/catch block when used where it shouldn't need to be used:

int x;
try {
    x = int.Parse("1234");
}
catch {
    return;
}
// some more code here...

And here's the one without try/catch:

int x;
if (int.TryParse("1234", out x) == false) {
    return;
}
// some more code here

Not counting the insignificant white-space, one might notice that these two equivalent pieces of code are almost exactly the same length in bytes. The latter contains 4 bytes less indentation. Is that a bad thing?

To add insult to injury, a student decides to loop while the input can be parsed as an int. The solution without try/catch might be something like:

while (int.TryParse(...))
{
    ...
}

But how does this look when using try/catch?

try {
    for (;;)
    {
        x = int.Parse(...);
        ...
    }
}
catch
{
    ...
}

Try/catch blocks are magical ways of wasting indentation, and we still don't even know the reason the parse failed! Imagine how the person doing the debugging feels when code continues to execute past a serious logical flaw rather than halting with a nice, obvious exception error. Try/catch blocks are a lazy man's data validation/sanitation.

One of the smaller costs is that try/catch blocks do indeed disable certain optimizations: http://msmvps.com/blogs/peterritchie/archive/2007/06/22/performance-implications-of-try-catch-finally.aspx. I guess that's a positive point too. It can be used to disable optimizations that might otherwise cripple safe, sane message passing algorithms for multithreaded applications, and to catch possible race conditions ;) That's about the only scenario I can think of to use try/catch. Even that has alternatives.

Terry answered 4/5, 2010 at 13:44 Comment(2)
I am pretty sure that TryParse does a try {int x = int.Parse("xxx"); return true;} catch{ return false; } internally. Indentation is not a concern in the question, only performance and overhead.Tobias
@Tobias Alternatively, read the new answer I posted. It contains more links, one of which is an analysis of the massive performance boost when you avoid Int.Parse in favour of Int.TryParse.Unkindly
