Why avoid the final keyword?
Asked Answered
S

10

12

In Java, is there ever a case for allowing a non-abstract class to be extended?

Class hierarchies always seem to indicate bad code. Do you agree, and why or why not?

Stickleback answered 6/2, 2009 at 11:10 Comment(0)
B
11

I agree with Jon and Kent but, like Scott Meyers (in Effective C++), I go much further. I believe that every class should be either abstract, or final. That is, only leaf classes in any hierarchy are really apt for direct instantiation. All other classes (i.e. inner nodes in the inheritance) are “unfinished” and should consequently be abstract.

It simply makes no sense for ordinary classes to be extended further. If an aspect of the class is worth extending and/or modifying, the cleaner way is to take that one class and separate it into one abstract base class and one concrete, interchangeable implementation.
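
As a minimal sketch of that split (all names here are made up), the class that would otherwise be extended becomes an abstract inner node plus final leaves:

// Hypothetical example: one abstract base plus final, interchangeable implementations.
abstract class PaymentProcessor {
    abstract void charge(long amountInCents);
}

final class CardPaymentProcessor extends PaymentProcessor {
    @Override
    void charge(long amountInCents) {
        System.out.println("charging card: " + amountInCents);
    }
}

final class InvoicePaymentProcessor extends PaymentProcessor {
    @Override
    void charge(long amountInCents) {
        System.out.println("issuing invoice for: " + amountInCents);
    }
}

Only the leaves are ever instantiated; anyone who wants a variant adds another leaf instead of subclassing an existing concrete class.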

Basketry answered 6/2, 2009 at 11:41 Comment(10)
What if there are 4 different (reasonable) virtual methods and I want to be just like implementation X in all but one of them? You could split that up into 4 classes, but it's not always the cleanest way to go. Your route sounds a little bit extreme, but good as a general principle (cont)Mensa
In other words - there are situations where it's probably worth violating, but only when you've thought carefully about the alternatives.Mensa
@Jon: sure. Although this sounds like something I would probably do with an altogether different pattern (composition, or a decorator).Basketry
Too strict. There are perfectly good cases where a class can be both inherited from and instantiated. How about JButton and JLabel?Friedcake
"Negative variation." Its not entirely unreasonable to require an concrete class with only constructors separate from a fully-concrete abstract class (it just sounds it). The trouble is, what if you've published the API before realising the negative variation. Ah, constructors are evil.Baboon
All absolute statements about software development are at least a little bit wrong. Including, of course, the one I just wrote.Subcelestial
Don't know if I fully agree with you. I've overridden the .NET's Button and Textbox controls to add my own functionality, both of which are non-final leaf classes in the BCL. What would have been my options if those classes were final?Ruisdael
I actually agree with @David and I wouldn’t make my statement all that absolute but the fact is, nobody here has yet provided a concrete example where the rule should not apply. In fact, @Juliet’s example has strengthened my point: the WinForms library uses the leaf-or-abstract paradigm but makes the API easy to use incorrectly by failing to mark leaf classes as such. @Friedcake has made a valid point by citing an API that was – in this particular regard – not well-designed (proof: WinForms did it better).Basketry
Commenting on a 13 year old answer, but a legit reason: Writing a custom framework that is designed to be extended. In this scenario, the consumer can override select default functionality easily while leaving remaining functionality as default. Sure, you could jam final in there and follow this (extreme IMO) viewpoint, but it would make things more complicated for the client consuming the framework.Liman
@ChristopherSchneider Extensible frameworks can and have been written following this guideline. There’s no issue. In fact, it’s best practice especially when designing extensible frameworks.Basketry
M
12

There are certainly times when it makes sense to have non-final concrete classes. However, I agree with Kent - I believe that classes should be final (sealed in C#) by default, and that Java methods should be final by default (as they are in C#).

As Kent says, inheritance requires careful design and documentation - it's very easy to think you can just override a single method, but not know the situations in which that method may be called from the base class as part of the rest of the implementation.
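
A sketch of that trap, in the spirit of Bloch's well-known instrumented-set example (the class name below is made up): HashSet inherits addAll from AbstractCollection, which inserts elements one at a time via add(), so an override that counts in both places double-counts.

import java.util.Collection;
import java.util.HashSet;

// Overriding without knowing the base class's self-use:
class CountingHashSet<E> extends HashSet<E> {
    private int addCount = 0;

    @Override
    public boolean add(E e) {
        addCount++;
        return super.add(e);
    }

    @Override
    public boolean addAll(Collection<? extends E> c) {
        addCount += c.size();       // looks reasonable...
        return super.addAll(c);     // ...but addAll calls add(), so every element is counted twice
    }

    public int getAddCount() { return addCount; }
}

Calling addAll(java.util.Arrays.asList("a", "b", "c")) on a fresh instance leaves the count at 6, not 3, which is exactly the kind of surprise that final-by-default avoids.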

See "How do you design a class for inheritance" for more discussion on this.

Mensa answered 6/2, 2009 at 11:22 Comment(1)
+1. This, along with not defaulting to package-private access, no checked exceptions and, well, a whole bunch of other things, is something C# got right (vs Java) IMHO. Second-adopter advantage, I guess.Mallee
P
5

This question is equally applicable to other platforms such as C# .NET. There are those (myself included) who believe types should be final/sealed by default and should need to be explicitly unsealed to allow inheritance.

Extension via inheritance is something that needs careful design and is not as simple as just leaving a type unsealed. Therefore, I think it should be an explicit decision to allow inheritance.
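
For what it's worth, more recent Java versions (17 and later) make that opt-in explicit too: a sealed type enumerates exactly which classes may extend it. A small sketch with made-up names:

// Inheritance becomes an explicit, enumerated decision rather than an open default.
public sealed class Shape permits Circle, Square {
    // shared behaviour would live here
}

final class Circle extends Shape { }
final class Square extends Shape { }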

Pylos answered 6/2, 2009 at 11:17 Comment(0)
T
5

There are good reasons to keep your code non-final. Many frameworks such as Hibernate, Spring and Guice sometimes depend on non-final classes that they extend dynamically at runtime.

For example, Hibernate uses proxies for lazy association fetching. Especially when it comes to AOP, you will want your classes to be non-final so that interceptors can attach to them. See also this related question on SO.
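
A hand-rolled sketch of the mechanism such frameworks rely on (real ones generate the subclass with bytecode tooling such as cglib or ByteBuddy; all names below are made up). If Customer were final, no such proxy subclass could exist:

// The lazy-loading proxy IS-A subclass of the entity, so the entity can't be final.
class Customer {                          // must stay non-final
    String name() { return "loaded from the database"; }
}

class LazyCustomerProxy extends Customer {
    private Customer target;              // the real instance, fetched on first use

    @Override
    String name() {
        if (target == null) {
            target = new Customer();      // stand-in for the real database hit
        }
        return target.name();
    }
}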

Thema answered 6/2, 2009 at 11:35 Comment(0)
F
4

Your best reference here is Item 15 of Joshua Bloch's excellent book "Effective Java", called "Design and document for inheritance or else prohibit it". However, the key to whether extension of a class should be allowed is not "is it abstract" but "was it designed with inheritance in mind". There is sometimes a correlation between the two, but it's the second that is important. To take a simple example, most of the AWT classes are designed to be extended, even those that are not abstract.

The summary of Bloch's chapter is that the interaction of inherited classes with their parents can be surprising and unpredictable if the ancestor wasn't designed to be inherited from. Classes should therefore come in two kinds: (a) classes designed to be extended, with enough documentation to describe how it should be done, and (b) classes marked final. Classes in (a) will often be abstract, but not always.
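
In code, "designed and documented for inheritance" tends to look something like the following sketch (hypothetical names): the self-use is spelled out, and everything that isn't an intended hook is final.

/**
 * Designed for extension: render() calls renderBody() exactly once,
 * after the header and before the footer, and subclasses may rely on that.
 */
class Report {
    public final String render() {        // the skeleton itself is not overridable
        return "== header ==\n" + renderBody() + "\n== footer ==";
    }

    /** The intended hook; the default body is empty. */
    protected String renderBody() {
        return "";
    }
}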

Friedcake answered 6/2, 2009 at 15:15 Comment(0)
P
3

I disagree. If hierarchies were bad, there'd be no reason for object oriented languages to exist. If you look at UI widget libraries from Microsoft and Sun, you're certain to find inheritance. Is that all "bad code" by definition? No, of course not.

Inheritance can be abused, but so can any language feature. The trick is to learn how to do things appropriately.

Pu answered 6/2, 2009 at 11:21 Comment(3)
Inheritance is not an essential part of OO. Composition and polymorphism rock.Baboon
Inheritance was modern 15 years ago. Today we know that inheritance often causes many more problems than composition.Socio
Thanks for weighing in almost four years after the fact, Alex. Maybe you can answer some questions and boost your reputation here instead of trolling with comments like this.Pu
S
3

In some cases you want to make sure there's no subclassing; in other cases you want to ensure subclassing (abstract). But there's always a large subset of classes where you, as the original author, don't care and shouldn't care. It's part of being open/closed. Deciding that something should be closed should also be done for a reason.

Staffer answered 6/2, 2009 at 11:21 Comment(0)
V
2

I couldn't disagree more. Class hierarchies make sense for concrete classes when the concrete classes know the possible return types of methods that they have not marked final. For instance, a concrete class may have a subclass hook:

protected SomeType doSomething() {
    // Subclass hook: the base class intentionally does nothing here.
    return null;
}

This doSomething is guaranteed to return either null or a SomeType instance. Say that you can process the SomeType instance but have no use case for it in the current class, yet you know this functionality would be really good to have in subclasses, and most everything is concrete. It makes no sense to make the current class abstract if it can be used directly, with the default of doing nothing with its null value. If you made it abstract, then you would have its children in this type of hierarchy:

  • Abstract base class
    • Default class (the class that could have been non-abstract, only implements the protected method and nothing else)
    • Other subclasses.

You thus have an abstract base class that can't be used directly, even though the default class may be the most common case. In the other hierarchy (sketched in code below), there is one less class, and the functionality can be used without creating an essentially useless default class just because abstraction had to be forced onto it.

  • Default class
    • Other subclasses.
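
Continuing the doSomething example, the flatter hierarchy might look like this (the names are made up):

class SomeType { }

// Default class: directly usable; doing nothing with the null hook is fine.
class DefaultHandler {
    public void handle() {
        SomeType result = doSomething();
        if (result != null) {
            // process the result; the default class simply has nothing to process
        }
    }

    protected SomeType doSomething() {
        return null;                      // the subclass hook from above
    }
}

// A subclass that opts in to producing a SomeType.
class ProducingHandler extends DefaultHandler {
    @Override
    protected SomeType doSomething() {
        return new SomeType();
    }
}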

Now, sure, hierarchies can be used and abused, and if things are not documented clearly or classes are not well designed, subclasses can run into problems. But these same problems exist with abstract classes as well; you don't get rid of the problem just because you add "abstract" to your class. For instance, if the contract of the doSomething() method above required SomeType to have populated x, y and z fields when they were accessed via getters and setters, your subclass would blow up regardless of whether you used the concrete class that returned null as your base class or an abstract class.

The general rule of thumb for designing a class hierarchy is pretty much a simple questionnaire:

  1. Do I need the behavior of my proposed superclass in my subclass? (Y/N) This is the first question you need to ask yourself. If you don't need the behavior, there's no argument for subclassing.

  2. Do I need the state of my proposed superclass in my subclass? (Y/N) This is the second question. If the state fits the model of what you need, this may be a candidate for subclassing.

  3. If the subclass was created from the proposed superclass, would it truly be an IS-A relation, or is it just a shortcut to inherit behavior and state? This is the final question. If it is just a shortcut and you cannot qualify your proposed subclass "as-a" superclass, then inheritance should be avoided. The state and logic can be copied and pasted into the new class with a different root, or delegation can be used.

Only if the subclass needs the behavior and the state, and can truly be considered an IS-A(n) instance of the superclass, should it inherit from that superclass. Otherwise, other options exist that are better suited to the purpose; although they may require a little more work up front, they are cleaner in the long run.
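
When question 3 fails and inheritance would only be a shortcut, delegation keeps the reuse without the false IS-A claim. A minimal sketch with made-up names:

import java.util.ArrayList;
import java.util.List;

// A stack is not IS-A list, so hold one instead of extending ArrayList.
final class Stack<E> {
    private final List<E> items = new ArrayList<>();   // reuse by composition

    public void push(E e) { items.add(e); }

    public E pop() { return items.remove(items.size() - 1); }

    public boolean isEmpty() { return items.isEmpty(); }
}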

Violetteviolin answered 6/2, 2009 at 16:38 Comment(0)
C
1

There are a few cases where we don't want to allow the behavior to be changed, for instance the String and Math classes.
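
The same reasoning applies to your own value types: a final class protects guarantees that callers rely on. A tiny sketch (the class is made up):

// Final so no subclass can break the immutability that callers
// and any code that caches or shares instances depend on.
public final class Money {
    private final long cents;

    public Money(long cents) { this.cents = cents; }

    public long cents() { return cents; }
}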

Cilium answered 6/2, 2009 at 11:16 Comment(2)
That is certainly not true. There is always room for improved implementations. There is nothing in String and Math that should be final.Alcibiades
In fact, Math should not have any static methods. You should be able to instantiate it. Override math with your own if you want to change the behavior. (Say for a fast but inaccurate sine function.)Karlotte
N
1

I don't like inheritance because there's always a better way to do the same thing, but when you're making maintenance changes in a huge system, sometimes the best way to fix the code with minimal changes is to extend a class a little. Yes, it usually leads to bad code, but to working code, and without months of rewriting first. So giving a maintainer as much flexibility as they can handle is a good way to go.

Nee answered 6/2, 2009 at 11:21 Comment(0)
