Which is faster, try-catch or if-else in Java (with respect to performance)?

Which one is faster:

Either this

try {
  n.foo();
} catch (NullPointerException ex) {
}

or

if (n != null) n.foo();
Ibbie answered 16/8, 2010 at 5:21 Comment(9)
How about "Which is more readable?" Performance is your last concern.Bandog
Using a try-catch clause this way is definitely a bad idea. Also, try-catch results in instantiating a new object (the exception). Thus I'd say that (n != null) is faster in case you have a lot of cases where n == null. Also, n != null is a super-fast construct.Projector
Performance is very much different depending on whether n is null or not.Panoptic
"Also try-catch results in instantinating of new object (the exception)". No, it does not. Throwing an exception does.Panoptic
Hmm, funny that this is the subject of the latest Javaspecialists newsletter quiz. Heinz M. Kabutz does argue that the first form is indeed very bad for performance. He does however provide one additional piece of information: n is null 10% of the time.Archaeological
They're different... the first form will also catch NPEs thrown from within foo()Musicology
@Thilo: "No, it does not. Throwing an exception does." No, it does not. ;) Constructing an exception to be thrown does. Some code may choose to purposely re-throw the same exception over and over again (as crazy as that may be). Sorry, I just had to add that. :)Overall
In the very old days exceptions were slow (because they captured the stack trace when generated). Today they are faster, but it is still a lot of work which is unnecessary in this case.Beecher
I came here wondering the same thing. What about the case where you're looking for a condition that happens once, and once only (because your catch-block makes an initialization based on information in an argument), on a program that could run for hours? Basically, is it faster to pass through a TRY that always succeeds than an IF that is always true? Is there any long-term cost at all to a TRY that always succeeds? In Python, they encourage this sort of thing - why is the story different in Java?Noonan

It's not a question of which is faster, rather one of correctness.

An exception is for circumstances which are exactly that, exceptional.

If it is possible for n to be null as part of normal business logic, then use an if..else; otherwise, throw an exception.

Karakoram answered 16/8, 2010 at 5:23 Comment(4)
As usual, the first poster who says "don't do premature optimization" gets the most upvotes. This question is not about what is correct, best, or most readable, but what is fastest. The OP most likely found this question in the last issue of the Java Specialist Newsletter, which means they're probably interested in learning the reason why the fastest version is faster (and so deepening their understanding of the VM internals), rather than thinking about implementing this in production code.Pleochroism
@gustafc: Whilst being first sometimes gets the most upvotes, my daily limit was reached some time ago. I still stand by my answer. I'm sure you're aware that the question asked is not always the real question. There was no mention of VM internals. Had there been I might have phrased my answer differently.Karakoram
Yes, it was a question of which is faster -- look at the original OP's question. It may not be "best practice" to prematurely optimize the solution, but maybe the OP was interested in learning something about how these things are implemented.Northey
@Ralph: sometimes you move past what is asked, and answer the real question.Karakoram
if (n != null) n.foo();

is faster.

Sato answered 16/8, 2010 at 5:25 Comment(9)
@Ibbie Correctness is always more important than speed. A program that runs fast but is incorrect is useless.Shank
You can still get an NPE if something in foo() throws it. Since I don't know what's inside foo(), asserting this is incorrect. So, if inside foo() you have to do a lot of "if (x != null)" checks, then catching the exception here is faster.Coeval
@Shank correctness alone does not answer the question. It is unimportant whether it is premature optimisation or not. Answer the question and add a warning to your answer instead of just repeating the "premature optimisation" gospel to get upvoted.Phylogeny
@Mitch Wheat yes, because it gets the people who copy-paste it upvoted for something which was not asked. There is no reason why answering the question and giving good advice would be mutually exclusive. Not to say that premature optimisation is not evil, but standing alone it is not a valid answer to any question about performance.Phylogeny
@josefx: you are way off mark. From 25 years of programming experience, I can personally assure you that Knuth's advice on premature optimisation is just as relevant now as when he made it 20+ years ago.Karakoram
@Mitch Wheat I never said that it wasn't valid or good advice; what I wanted to point out is that it does not answer the question asked - it is only part of the answer.Phylogeny
@Ibbie Juyal: Good comment! It really bothers me when someone asks a question and the responses all include "What is the use case?", even when the question is clear. Isn't it enough to want to know the answer, without having to justify the question?Northey
@Emerald214 The question was Which is faster?, not Why is it faster?. You shouldn't give him -1 because you are on the wrong question.Vlaminck
What if you do the test ten trillion times and you just expect a null value a couple of times? Does it still take time just to enter and exit the try statement, or would that be the faster method in this instance?Marabout

Explicitly testing for a null pointer is much faster than using exception handling.

For the record, most of the overhead in using exceptions is incurred in the instantiation of the exception object, in particular in the call to fillInStackTrace(), which has to:

  • examine every stack frame on the current thread's stack, and
  • create a data structure to capture the stack frame details.

In some cases, you can reduce this by reusing the exception object, or by overriding an application-specific exception's fillInStackTrace() method to make it a no-op. The downside in both cases is that proper stack traces will no longer be available to help you debug unexpected exceptions. (And neither of these is applicable to the OP's example.)
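
As an illustration of the second option, here is a minimal sketch of an application-specific exception that suppresses stack trace capture (the class name is invented for this example):

// Illustrative sketch only: an application-specific exception that skips
// the expensive stack walk. The trade-off is that instances carry no
// stack trace, which makes unexpected occurrences much harder to debug.
public class FastFailureException extends RuntimeException {

    public FastFailureException(String message) {
        super(message);
    }

    @Override
    public synchronized Throwable fillInStackTrace() {
        return this; // do nothing: skip stack trace capture entirely
    }
}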

While exception instantiation is expensive, exception throwing, propagation and catching are not exactly cheap either.

(I should add that I agree with @Mitch's general point about premature optimization. However, the cost of an exception that actually occurs is large enough that it is best to avoid using them for routine null checks; i.e. if you intend to catch and recover from the NPE.)


There is a second reason why explicit null testing is a better idea. Consider this:

try {
    doSomething(a.field);
} catch (NullPointerException ex) {
    System.err.println("a.field is null");
}

What happens if an NPE is thrown within the call to doSomething(...) instead of during the evaluation of the a.field expression? Sure, we'll catch an NPE, but we will misdiagnose it and then attempt to continue ... incorrectly assuming that a.field is unset or something.

Distinguishing an "expected" NPE from an "unexpected" NPE is theoretically possible, but in practice very difficult. A much simpler and more robust approach is to explicitly test for the null values that you are expecting (e.g. with an if statement), and treat all NPEs as bugs.
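
For contrast, here is a minimal sketch of the explicit-test version of the same fragment (reusing the placeholder names from the snippet above):

// Test explicitly for the null we expect (assuming a itself is non-null).
if (a.field != null) {
    doSomething(a.field);
} else {
    System.err.println("a.field is null");
}
// Any NPE that still escapes doSomething(...) is now a genuine bug in that
// method, not a misdiagnosed "expected" case.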

(I'm sure that this is what @Mitch means by "treating exceptions as exceptional", but I think it helps to spell things out with an illustrative example ...)


Finally, it is worth noting that in

if (n != null) n.foo();

there are actually two null tests:

  • There is an explicit test, n != null.
  • There is also an implicit check in n.foo().

However, the JIT compiler should be able to optimize away the second check and the associated native code that throws the NPE. In fact, the addition of the if (n != null) is likely to add zero runtime overhead once the code has been compiled to native code.
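
Annotated, the fragment from the question looks like this (the comments simply restate the points above):

if (n != null) {   // explicit test: a simple compare-and-branch
    n.foo();       // implicit JVM null check on the receiver; after the
                   // explicit test the JIT can prove n is non-null here
                   // and elide the check (and the NPE-throwing path)
}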

Addington answered 16/8, 2010 at 5:43 Comment(1)
This should be the chosen answer since it actually answers the question and goes into detail about what is faster and why.Efrem

The answer to this is not as simple as it looks, because it depends on the percentage of times that the object is really null. When that is very uncommon (say, 0.1% of the time), the try-catch version might even be faster. To test this I've done some benchmarking, with the following results (with the Java 1.6 client VM):

Benchmarking with factor 1.0E-4
Average time of NullIfTest: 0.44 seconds
Average time of NullExceptionTest: 0.45 seconds
Benchmarking with factor 0.0010
Average time of NullIfTest: 0.44 seconds
Average time of NullExceptionTest: 0.46 seconds
Benchmarking with factor 0.01
Average time of NullIfTest: 0.42 seconds
Average time of NullExceptionTest: 0.52 seconds
Benchmarking with factor 0.1
Average time of NullIfTest: 0.41 seconds
Average time of NullExceptionTest: 1.30 seconds
Benchmarking with factor 0.9
Average time of NullIfTest: 0.07 seconds
Average time of NullExceptionTest: 7.48 seconds

This seems pretty conclusive to me: NPEs are just very slow. (I can post the benchmarking code if wanted.)
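
For reference, here is a minimal sketch of what such a benchmark might look like (this is not the author's actual code; the names, iteration count, and the hashCode() call standing in for foo() are all illustrative):

import java.util.Random;

public class NullCheckBenchmark {

    static final int SIZE = 1000000;

    // Version using an explicit null test.
    static long runIfTest(Object[] data) {
        long start = System.nanoTime();
        long hits = 0;
        for (Object o : data) {
            if (o != null) {
                hits += o.hashCode();      // stand-in for n.foo()
            }
        }
        long elapsed = System.nanoTime() - start;
        System.out.println("if-test:        " + elapsed / 1000000 + " ms (" + hits + ")");
        return elapsed;
    }

    // Version that dereferences blindly and swallows the NPE, as in the question.
    static long runExceptionTest(Object[] data) {
        long start = System.nanoTime();
        long hits = 0;
        for (Object o : data) {
            try {
                hits += o.hashCode();      // throws NPE when o is null
            } catch (NullPointerException e) {
                // ignore, as in the question
            }
        }
        long elapsed = System.nanoTime() - start;
        System.out.println("exception-test: " + elapsed / 1000000 + " ms (" + hits + ")");
        return elapsed;
    }

    public static void main(String[] args) {
        double factor = 0.1;               // fraction of nulls in the data
        Random rnd = new Random(42);
        Object[] data = new Object[SIZE];
        for (int i = 0; i < data.length; i++) {
            data[i] = rnd.nextDouble() < factor ? null : new Object();
        }
        // Run each test several times so the JIT has compiled the loops
        // before the later, more representative measurements.
        for (int run = 0; run < 5; run++) {
            runIfTest(data);
            runExceptionTest(data);
        }
    }
}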

edit: I've just made an interesting discovery: when benchmarking using the server JVM, the results change drastically:

Benchmarking with factor 1.0E-4
Average time of NullIfTest: 0.33 seconds
Average time of NullExceptionTest: 0.33 seconds
Benchmarking with factor 0.0010
Average time of NullIfTest: 0.32 seconds
Average time of NullExceptionTest: 0.33 seconds
Benchmarking with factor 0.01
Average time of NullIfTest: 0.31 seconds
Average time of NullExceptionTest: 0.32 seconds
Benchmarking with factor 0.1
Average time of NullIfTest: 0.28 seconds
Average time of NullExceptionTest: 0.30 seconds
Benchmarking with factor 0.9
Average time of NullIfTest: 0.05 seconds
Average time of NullExceptionTest: 0.04 seconds

Using the server VM, the difference is hardly noticeable. Still, I'd rather not catch NullPointerException unless it really is an exceptional case.

Parenthood answered 16/8, 2010 at 6:37 Comment(9)
@Marc: Beware - Java micro-benchmarks are very difficult to get right. Without seeing your code, it is impossible say whether your results are meaningful.Addington
@Stephen C - I've posted the code on github - github.com/marcdejonge/javabenchmarking . Maybe we can add more of these benchmarks in the future and build a complete framework for microbenchmarks.Parenthood
@Marc: I looked at these benchmarks, and they are dodgy, IMO. The JIT compiler may have figured out that both execute methods were effectively no-ops and optimized them away. Also. you are doing nothing to ensure that the methods are JIT compiled, or to allow for the CPU overhead of JIT compilation.Addington
@Stephen C - The JIT compiler cannot optimize the function away, because it won't know which side effects the function has. Furthermore, the test is run 10 times so that the first (and maybe the second) run can be ignored. When calculating the average times, the fastest and slowest times are ignored.Parenthood
BTW, I found out why the server version is so much faster: it doesn't create a stacktrace. Apparently the JIT compiler knows that the stacktrace is not used and it returns an empty (singleton) instance of the NullPointerException.Parenthood
That is what the client version of Java does and thus it is much slower. But the point is that the two pieces of code perform almost the same when you simply ignore the exception. This is making use of the smart JIT compiler, although it still is not a recommended way of programming.Parenthood
There you go. The JIT compiler has apparently optimized most of the stuff that you were trying to measure. I also think that a smart enough JIT compiler could figure out that all of the objects are java.lang.Object instances and that the toString() calls are therefore side-effect free. You really need to change the benchmarks so that the values of the expressions are used. For example, make the objects Integer instances and sum their intValue() values.Addington
Besides, without looking at the native code, it is not possible to be sure what is really happening here. We don't even know for sure that the JIT compiler is being used.Addington
I've tried the suggestion you made, replacing Object.toString() with Integer.intValue() (and summing the values). This makes no difference to the conclusions, but the intValue version is much faster.Parenthood

If n.foo() happens to throw an NPE internally, you are in for a long debugging session (or worse, your app fails in production...). Just don't do it.

How many nanoseconds do you plan to save, anyway?

Vivi answered 16/8, 2010 at 5:33 Comment(0)

I notice I'm not the only one reading the Java Specialist's Newsletter :)

Apart from the fact that there's a semantic difference (the NPE isn't necessarily caused by dereferencing n, it might have been thrown by some error in foo()), and a readability issue (the try/catch is more confusing to a reader than the if), they should be about equally fast in the case when n != null (with the if/else version having a slight advantage), but when n == null if/else is a lot faster. Why?

  1. When n == null, the VM must create a new exception object and fill in its stack trace. The stack trace info is really expensive to acquire, so here the try/catch version is far more expensive.
  2. Some believe that conditional statements are slower because they prevent instruction pipelining, and by avoiding the explicit if they think they get away cheap when n != null. The thing is, however, that the VM will do an implicit null check when dereferencing... that is, unless the JIT can determine that n must be non-null, which it can in the if/else version. This means that the if/else and try/catch versions should perform approximately the same. But...
  3. ... try/catch clauses can interfere with how the JIT can inline method calls, which means that it might not be able to optimize the try/catch version as well as the if/else.
Pleochroism answered 16/8, 2010 at 6:58 Comment(0)

Besides the good answers (use exceptions for exceptional cases), I see that you're basically trying to avoid the null checks everywhere. Java 7 was slated to have a "null-safe" operator that would return null when n?.foo() is called instead of throwing an NPE. That's borrowed from the Groovy language. There's also a trend to avoid using null altogether in one's code except when really needed (i.e. dealing with libraries). See this other answer for more discussion on this: Avoiding != null statements
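
For illustration, the proposed n?.foo() call (which, as a comment below notes, was eventually dropped from JDK 7) would behave like this plain-Java equivalent, assuming foo() returns some reference type Result (a hypothetical name for this example):

// Plain-Java equivalent of the proposed null-safe call n?.foo():
// null simply propagates instead of an NPE being thrown.
Result r = (n == null) ? null : n.foo();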

Urbanism answered 16/8, 2010 at 6:10 Comment(6)
I think the elvis operator ( b?.foo() ) has actually been excluded from JDK 7, and there are no plans to include it.Zoniazoning
"There's also a trend to avoid using null altogether" I don't think this is a trend. Returning/using null has been a bad practice for ever. I haven't find yet a situation under which a method should return null instead of throwing an exception.Coeval
@Ubersoldat: How about specifying an unspecified value? Do you throw an exception if the app does not have some data filled in (yet)?Guileless
@Guileless well, I think that is an exceptional condition for which the given process is not ready to handle, since it shouldn't be called if the data is not there for it to handle it.Coeval
@Ubersoldat: Are you really throwing an exception instead of returning null meaning e.g. "delivery date not yet known"?Guileless
See this interesting blog on implementing the Elvis operator in Scala: codecommit.com/blog/scala/…Northey

It is usually expensive to handle exceptions. The VM Spec might give you some insight into how much, but in the above case if (n != null) n.foo(); is faster.

Although I agree with Mitch Wheat that the real question is one of correctness.

@Mitch Wheat - In his defense this is a pretty contrived example. :)

Cidevant answered 16/8, 2010 at 5:29 Comment(1)
Exceptions are only expensive if they happen (and I think the cost is more on the throw than on the catch). If the exception (almost) never happens, not checking for null might be faster. To defend the OP, in Python, the recommended practice is to let the exception happen.Panoptic

The if construct is faster. The condition can be easily translated to machine code (processor instructions).

The alternative (try-catch) requires creating a NullPointerException object.

Loganloganberry answered 16/8, 2010 at 6:13 Comment(0)

Definitely the second form is much faster. In the try-catch scenario, throwing the exception does a new Exception() of some form. Then the catch block is entered and has to execute whatever code is in it. You get the idea.

Volgograd answered 16/8, 2010 at 5:32 Comment(0)

Firstly, the if..then..else is better, for the numerous reasons pointed out by the other posters.

However, it is not necessarily faster! It depends entirely on the ratio of null objects to non-null objects. It probably takes hundreds of thousands of times the resources to process an exception rather than test for null; however, if a null object occurs only once for every million objects, then the exception option will be slightly faster. But not so much faster that it's worth making your program less readable and harder to debug.

Mantling answered 16/8, 2010 at 6:23 Comment(1)
-1 for incorrect information. In the JIT compiler of the Oracle Hotspot VM "Null checks can be hoisted manually, and suppress implicit null checks in dominated blocks.", i.e. both the if and the try approach perform the same number of checks. Therefore, the if version is never slower on this VM once the JIT has run.Riana

This issue was discussed recently by Dr. Heinz:

http://javaspecialists.eu/webinars/recordings/if-else-npe-teaser.mov

Interspace answered 16/8, 2010 at 8:0 Comment(0)
