Method overload resolution and Jon Skeet's Brain Teasers

Jon's Brain Teasers

Here Be Spoilers...

I'm looking at the answer to #1, and I must admit I never knew this was the case in overload resolution. But why is this the case? In my tiny mind, Derived.Foo(int) seems like the logical route to go down.

What is the logic behind this design decision?
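
For anyone reading without the teasers to hand, here is a minimal reproduction of the behaviour in question (reconstructed from memory, so the names may differ from Jon's original):

    using System;

    class Base
    {
        public virtual void Foo(int x) { Console.WriteLine("Base.Foo(int)"); }
    }

    class Derived : Base
    {
        public override void Foo(int x) { Console.WriteLine("Derived.Foo(int)"); }
        public void Foo(object o) { Console.WriteLine("Derived.Foo(object)"); }
    }

    class Program
    {
        static void Main()
        {
            int i = 10;
            Derived d = new Derived();
            // Prints "Derived.Foo(object)", not "Derived.Foo(int)",
            // even though Foo(int) is an exact match for the argument.
            d.Foo(i);
        }
    }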

BONUS TIME!

Is this behaviour a result of the C# specification, the CLR implementation, or the compiler?

Preferable answered 30/4, 2010 at 12:40 Comment(9)
That's definitely a weird behavior! They probably designed that on a Friday afternoon after too much beer at lunchtime.Wolsky
Or after hours of deliberation about what the best overall behaviour would be ;)Palstave
I guess you saw the same thing I did because we have just been debating the answer to the exact same question!Goat
Now does that count as your tag or mine? :)Preferable
need to post the other link in another post... Versioning, Virtual, and OverrideMoxie
Don't worry someone with 59.3k rep will come along and remove it at some point, they always do...Goat
Oh, yay, skeetisms. Another tag to add to my ignore list.Adolescence
@SLC: Not if I remove it first. Please do not create frivolous tags.Providential
See what I mean, lol. I half expected there to be a tag already.Goat

This behaviour is deliberate and carefully designed. The reason is that this choice mitigates the impact of one form of the Brittle Base Class Failure.

Read my article on the subject for more details.

https://learn.microsoft.com/en-us/archive/blogs/ericlippert/future-breaking-changes-part-three
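
To sketch the failure mode this rule guards against (a reconstruction, not the article's exact example): imagine Base originally had no Foo at all, and a later version of the base library adds one.

    using System;

    class Base
    {
        // Added in version 2 of the base library. In version 1, Base had
        // no Foo at all, and callers of Derived.Foo(42) were already
        // bound to Derived.Foo(object).
        public void Foo(int x) { Console.WriteLine("Base.Foo(int)"); }
    }

    class Derived : Base
    {
        public void Foo(object o) { Console.WriteLine("Derived.Foo(object)"); }
    }

    class Program
    {
        static void Main()
        {
            // Still prints "Derived.Foo(object)". Because candidates declared
            // in the most derived type win, recompiling against the upgraded
            // base class does not silently reroute this call to Base.Foo(int).
            new Derived().Foo(42);
        }
    }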

Mckown answered 30/4, 2010 at 15:41 Comment(2)
I knew there would be a solid reason out there somewhere! So (for Bonus Time!) is this a specification of C# or CLR?Preferable
@runrunraygun: The CLR doesn't have an overload resolution algorithm; overload resolution is a language concept. The CLR IL just has instructions that invoke whatever method reference is in a particular location. So this cannot possibly be a CLR specification. This behaviour is specified in the C# specification section 7.6.5.1, the point which begins "The set of candidate methods is reduced to contain only methods from the most derived types..."Mckown
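
To illustrate that division of labour (my own sketch, not from the original thread): handed a specific method reference, the runtime simply invokes it; choosing between the overloads is the compiler's job, done before the program ever runs.

    using System;
    using System.Reflection;

    class Base
    {
        public virtual void Foo(int x) { Console.WriteLine("Base.Foo(int)"); }
    }

    class Derived : Base
    {
        public override void Foo(int x) { Console.WriteLine("Derived.Foo(int)"); }
        public void Foo(object o) { Console.WriteLine("Derived.Foo(object)"); }
    }

    class Program
    {
        static void Main()
        {
            // The runtime invokes exactly the method it is given; it performs
            // no overload resolution of its own.
            MethodInfo intOverload = typeof(Derived).GetMethod("Foo", new[] { typeof(int) });
            MethodInfo objOverload = typeof(Derived).GetMethod("Foo", new[] { typeof(object) });

            Derived d = new Derived();
            intOverload.Invoke(d, new object[] { 10 }); // Derived.Foo(int)
            objOverload.Invoke(d, new object[] { 10 }); // Derived.Foo(object)
        }
    }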

Here is a possible explanation:

When the compiler binds the method call, the first place it looks is in the class that is lowest in the inheritance chain (in this case the Derived class). Its instance methods are checked and matched. The overridden method Foo is not an instance method of Derived; it is an instance method of the Base class.

The reason could be performance, as Jack30lena proposed, but it could also be how the compiler interprets the coder's intention: it's a safe assumption that the developer's intended behaviour lies in the code at the bottom of the inheritance chain.

Malvina answered 30/4, 2010 at 14:26 Comment(5)
By "lowest" you mean the thing that is farthest from the base class? Normally I'd describe the thing that was farthest from the base of a thing as being the "highest" thing. (Then again, the root of a tree is the highest node in the tree, and that makes no sense either...) That said, your analysis is correct; the developer of the derived class knows more than the developer of the base class, so their methods get priority.Mckown
I was thinking in tree terms. What's really interesting is why this type of hiding doesn't result in at least a warning. Essentially, any instance method with a more general parameter will effectively hide the more specific override. I know this code isn't completely unreachable (per Foxfire), but it's hidden nonetheless. Seems like it should produce a warning. Also, thanks for verifying my answer, your articles were helpful.Malvina
Suppose we produced a warning. How would you turn the warning off if the behaviour was desired? We try to reserve warnings for behaviours which are highly likely to be wrong, and if that's what you want, there's a way to write the code so that the warning goes away. This meets neither criterion; the behaviour is highly likely to be correct, and if it is, then there is no way to write the code to say "no, REALLY, I meant it, stop warning me". The result is that on many function calls you'd have pragmas around them to suppress the warning, which is ugly.Mckown
Good point. However, outright hiding of a method yields a warning (0114). The hiding in question compiles quietly and results in an invisibly hidden member - and likely a runtime bug. Why not at least warn to that level for this kind of hiding (and allow the new keyword to work the same)? I'm sure this kind of hiding is harder for the compiler to find, since it's not just signature matching, but also parameter base type discovery; however, hiding in this way on purpose seems a bit too clever to be good design, and I think hiding in this way is more likely to be by mistake than by design.Malvina
The reason we give a warning if the "new" isn't there is because that is indicative that the hiding was accidental. We don't want accidental hiding to be an error because that then is once more a brittle base class failure; you upgrade your base class and your derived class doesn't compile because the base class author added a member which you are now hiding. You turn off the warning by saying "yes, I meant it, I said 'new'".Mckown
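
For concreteness, a minimal sketch of the hiding warning being discussed (my own example, assuming the usual virtual-member setup):

    using System;

    class Base
    {
        public virtual void Bar() { Console.WriteLine("Base.Bar"); }
    }

    class Derived : Base
    {
        // Without 'new' (or 'override') this declaration triggers warning
        // CS0114 ("hides inherited member..."); writing 'new' records that
        // the hiding is intentional and silences the warning.
        public new void Bar() { Console.WriteLine("Derived.Bar"); }
    }

    class Program
    {
        static void Main()
        {
            Base b = new Derived();
            b.Bar(); // "Base.Bar": hiding, unlike overriding, is not polymorphic.
        }
    }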

It's a result of the compiler; we examined the IL code.

Goat answered 30/4, 2010 at 14:54 Comment(1)
Yes, but it's like that because of the specification.Ermey

The reason is that it is ambiguous. The compiler just has to decide on one, and somebody thought that the less indirect one would be better (performance might be a reason). If the developer just writes:

    ((Base)d).Foo(i);

it's clear and gives the expected result.
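
With the Base and Derived from the teaser (names as reconstructed earlier, so treat this as a sketch), the cast changes what is in the candidate set, while virtual dispatch still runs the override:

    using System;

    class Base
    {
        public virtual void Foo(int x) { Console.WriteLine("Base.Foo(int)"); }
    }

    class Derived : Base
    {
        public override void Foo(int x) { Console.WriteLine("Derived.Foo(int)"); }
        public void Foo(object o) { Console.WriteLine("Derived.Foo(object)"); }
    }

    class Program
    {
        static void Main()
        {
            int i = 10;
            Derived d = new Derived();
            d.Foo(i);         // "Derived.Foo(object)"
            ((Base)d).Foo(i); // "Derived.Foo(int)": the cast puts Base.Foo(int)
                              // back in the candidate set, and virtual dispatch
                              // then picks the override in Derived.
        }
    }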

Petro answered 30/4, 2010 at 14:40 Comment(0)

The reason is performance: calling a virtual method takes a bit more time, calling a delegate on a virtual method takes much more time, and so on.

see: The cost of method calls

Moxie answered 30/4, 2010 at 14:18 Comment(2)
Really? Whilst the info on call costs is very interesting and no doubt had influence on a number of decisions, I'm not sure I can see a direct link between the two problems. Yes, they opted for non-virtual as the default (where not declared otherwise) for method-call performance reasons, but does this really dictate overload resolution to such a great extent as to make them opt for what seems to me, and a number of other people, "unintuitive"? I remain unconvinced that this is THE defining reason, but I'm grateful for an interesting answer nonetheless.Preferable
This is absolutely not the reason.Mckown
