Which one is faster, Abstract class or Interface? [duplicate]

Possible Duplicate:
Why are interface method invocations slower than concrete invocations?

I recently appeared in an interview where the interviewer asked which is faster, an abstract class or an interface. The question confused me, but I answered "interface", mainly because I thought the late-binding concept would cause a performance penalty for the abstract class. Exploring the same question on the web afterwards, I found sources saying abstract methods are faster, though according to some blogs interface methods are faster. I was a little confused, so I am asking this question to get a correct understanding of which one is faster, and why, with a solid reason.

According to the following article, the abstract class is faster, but no justification is given for it: http://www.codeproject.com/Articles/11155/Abstract-Class-versus-Interface

Tanishatanitansy answered 20/9, 2012 at 13:51 Comment(0)

The answer depends on the programming language and possibly on the compiler you use. In environments like the Java VM, where run-time optimizations are applied, it probably cannot be answered at all. And honestly, in a typical Java project no one cares, because even if there were a difference, it would be so tiny that it wouldn't slow down your software noticeably. Unless you have strict real-time constraints, in which case you wouldn't be using Java (and possibly no polymorphism at all).

Basically, both interface methods and abstract methods use dynamic dispatch, so the difference is minimal if there is one at all. Without deep knowledge of the implementation details, I would assume that, in theory, abstract methods dispatch faster as long as the language doesn't implement multiple inheritance for classes: the position of the method pointer in the dispatch table is then fixed at compile time, while it is not for interface methods (a class can typically implement multiple interfaces).
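
To make the two dispatch mechanisms concrete, here is a minimal Java sketch (all class names are made up for illustration). On the JVM, the call through the abstract-class reference compiles to an invokevirtual instruction (fixed vtable slot), while the call through the interface reference compiles to invokeinterface, which may need extra lookup work precisely because a class can implement many interfaces:

    // Minimal sketch; all names are illustrative only.
    abstract class Shape {
        abstract double area();              // gets a fixed slot in Shape's vtable
    }

    interface Measurable {
        double area();                       // slot position varies per implementing class
    }

    class Circle extends Shape implements Measurable {
        private final double r;
        Circle(double r) { this.r = r; }
        @Override public double area() { return Math.PI * r * r; }
    }

    class Demo {
        public static void main(String[] args) {
            Shape s = new Circle(2.0);       // s.area() is an invokevirtual call
            Measurable m = new Circle(2.0);  // m.area() is an invokeinterface call
            System.out.println(s.area() + " " + m.area());
        }
    }

In practice HotSpot profiles these call sites and, when only one receiver type shows up, devirtualizes and inlines both forms, which is why the difference rarely survives into measurable numbers.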

But as I said, I don't know the details of what happens in the compiler, and there may be other factors I haven't thought about. If I had to answer this question in an interview, I'd quote Donald Knuth: "premature optimization is the root of all evil".

Hypophyge answered 20/9, 2012 at 14:24 Comment(0)

The best answer to that question is: "I would write a small test to find out, if I really, really needed to." Take a real example and run it under controlled conditions, with two implementations that differ only in interface vs. abstract class. Without a concrete implementation, "which is faster" questions make little sense. The interviewer was either trying to show off (and was probably ignorant of the actual answer), testing your critical thinking skills (does this question even make sense?), or being extremely nit-picky.
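
If I did need to measure it in Java, I would reach for JMH. Here is a minimal sketch, assuming the org.openjdk.jmh dependency is on the classpath; all class and method names are made up for illustration. One caveat: with only one implementation behind each reference, HotSpot will likely devirtualize and inline both calls, so a realistic test would also rotate several implementations through the same call site.

    import org.openjdk.jmh.annotations.*;

    public class DispatchBench {
        public interface Iface { int f(int x); }
        public static abstract class Base { public abstract int f(int x); }

        public static final class IfaceImpl implements Iface {
            public int f(int x) { return x + 1; }
        }
        public static final class BaseImpl extends Base {
            public int f(int x) { return x + 1; }
        }

        @State(Scope.Thread)
        public static class Refs {
            public Iface iface = new IfaceImpl();  // interface-typed receiver
            public Base base = new BaseImpl();     // abstract-class-typed receiver
            public int arg = 42;
        }

        @Benchmark
        public int interfaceCall(Refs r) { return r.iface.f(r.arg); }

        @Benchmark
        public int abstractCall(Refs r) { return r.base.f(r.arg); }
    }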

The first rule of optimization is: do not optimize... yet. The second rule is to profile your program to find the bottlenecks before any refactoring: a change of algorithm or data structure in the right place is often the only thing that is needed. And I am willing to bet that in, say, Java, the HotSpot compiler will make any difference between abstract and interface, if any, truly hard to find indeed.
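
If you want to see what HotSpot actually decides, it can print its inlining choices at run time. These are real HotSpot diagnostic flags; DispatchDemo is a placeholder for whatever hot test class you run:

    java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining DispatchDemo

Call sites that HotSpot managed to devirtualize show up as inlined in this output, which usually erases the very difference this question is about.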

Henhouse answered 21/9, 2012 at 0:0 Comment(8)
severely tempted to repeat first rule twice, as in all those "the first rule of Fight Club" quotes...Henhouse
A problem with that philosophy is that it won't reveal whether real-world factors may cause a generally superior approach to sometimes perform much worse than a generally inferior one. For example, given the choice between two pieces of code, one of which requires a couple of extra vtable accesses and one of which requires allocating an extra 32 bytes of garbage, it's possible that the one which generates more garbage would usually be faster, but could end up being much slower if there's a lot of gen2 garbage that gets touched between gen0 collection cycles.Golliner
my point is that hand-optimizing everywhere is a much poorer use of your time than optimizing only where it really counts. Profile first, find what's hot, and optimize that. Otherwise you are wasting time and effort.Henhouse
Hand-optimizing everywhere is not a good idea. On the other hand, I've encountered situations (VB6 Collection objects) where something which seemed to work well in benchmarking started exhibiting behavior that was worse than O(N³) (with N being the total number of Collection objects, not the size of any particular one!) once N got to be around a hundred thousand. It would not be implausible, especially if one gets into generics, that an AbstractClass<T> might behave rather differently from an Interface<T> if there are many different types T.Golliner
What might be most helpful would be an organized catalog of the types of things that generally perform well but may cause unexpected performance bottlenecks in some cases, so that people who are considering using them could determine whether their intended design might hit a performance bottleneck and, if so, choose an alternative. The answer to "will XX have any unexpected performance bottlenecks" will 90% of the time be "no", and the "no"s aren't terribly interesting or helpful, but the few "yes"es may be very interesting and helpful.Golliner
O(n³) performance with n ~= 10⁵? You should have run into it with much smaller n; also, consider filing a bug. Looks very corner-case, implementation-specific, and fragile to code around. Should something like this crop up in Java, it would soon be fixed -- either by Oracle or OpenJDK; and it would be better to treat it as a bug than as a feature to be fully documented for later generations.Henhouse
The Collection class was pretty well obsolescent by the time system memory sizes grew to the point that one could realistically have enough collections for it to get bogged down. While it may at first seem curious that O(n³) performance would only start to dominate around N=100K, it's easy to understand if one considers that VB's roots go back to a time when code was expected to use 16-bit integers absent a reason for using larger ones. It could have been reasonable to design the Collection with 16-bit "mostly-unique" IDs, and figure that collisions would be rare enough that...Golliner
...one could use a slower algorithm when they occurred, especially if the algorithm could do something like randomly reassigning IDs that collide (if the expected N was 500 collections, such an algorithm would work decently with N up to 100 times bigger). Of course, if N gets to be bigger than 65,536, collisions are going to start becoming the norm.Golliner
