Good examples of OCP in open source libraries [closed]

There has been a lot of discussion on the subject of the “Open Closed Principle” on Stack Overflow. It seems, however, that a more relaxed interpretation of the principle generally prevails, so that, for example, Eclipse is considered open for modification through plug-ins.

According to strict OCP, you should modify the original code only to fix bugs, not to add new behaviour.

Are there any good examples of a strict interpretation of OCP in public or open-source libraries, where you can observe the evolution of a feature through OCP: there is a class Foo with a method bar(), and then in the next version of the library there is a FooDoingAlsoX with a foo2() method, where the original class has been extended and the original code was not modified?

EDIT: According to Robert C. Martin: “The binary executable version of the module, whether a linkable library, a DLL, or a Java .jar remain untouched”*. I have never seen libraries kept closed; in practice, new behaviour is added to a library and a new version is published. According to OCP, new behaviour belongs in a new binary module.

*Agile Software Development, Principles, Patterns, and Practices by Robert C. Martin

Qualifier answered 28/8, 2011 at 17:26 Comment(0)

The OCP says that a class shall be open for extension but closed for modification. The key to achieving this is abstraction. If you also read the DIP you'll find that abstractions should not depend upon details; details should depend upon abstractions. In your example you have details in your interface (two specific methods, bar() and foo2()). To fully implement OCP you should try to avoid such details (for example, by moving them behind the abstraction and instead having one general foo method with different implementations).
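A minimal Java sketch of that refactoring (the names are illustrative, not from any real library): instead of a Foo with bar() and a later FooDoingAlsoX with foo2(), one general abstraction is defined once, and each new behaviour arrives as a new implementation.

```java
// Hypothetical sketch: all behaviour hides behind one general abstraction.
interface Command {
    String execute();
}

// Existing behaviour: one implementation per operation.
class BarCommand implements Command {
    public String execute() { return "bar"; }
}

// New behaviour in a later version: a new class, no existing code touched.
class Foo2Command implements Command {
    public String execute() { return "foo2"; }
}

class Client {
    // The client depends only on the abstraction (DIP), so it is
    // closed for modification no matter how many commands are added.
    static String run(Command command) {
        return command.execute();
    }

    public static void main(String[] args) {
        System.out.println(run(new BarCommand()));  // prints "bar"
        System.out.println(run(new Foo2Command())); // prints "foo2"
    }
}
```

Adding a third command later means adding a third class; neither Command nor Client changes.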

For example, take a look at this interface in SolrNet: https://github.com/mausch/SolrNet/blob/master/SolrNet/ISolrCommand.cs This is a general command interface that only says a command can be executed; it gives no more detail than that.

The details instead lie in the implementations of the interface: https://github.com/mausch/SolrNet/tree/master/SolrNet/Commands

As you can see, you can add as many commands as you wish without changing the implementation of any other class. The specific implementations can thereby be considered closed for modification, while the interface allows us to extend the functionality with new commands and is thereby open for extension.

(SolrNet isn't extraordinary in any way; I just used examples from this project because I happened to have it in my browser when I read this post. Almost all well-written OO projects make use of the OCP in one way or another.)

EDIT: If you want examples of this at the binary level, take a look at nopCommerce (http://nopcommerce.codeplex.com/releases/view/69081), where you can, for example, add your own shipping providers, payment providers or exchange rate providers without even touching the original DLL, by implementing a set of interfaces. Again, there is nothing extraordinary about nopCommerce; it was just the first project that came to mind because I used it a couple of days ago ;)

OCP is not a principle that shall be used only at the binary level, though; good OOD uses OCP, not everywhere, but at all levels where it is suitable ;) "Strict" OCP at the binary level is not always suitable and would add an extra level of complexity if you used it in every single situation. It is mostly interesting when you want to change an implementation at runtime or when you want to let external developers extend your interfaces. You should always keep the OCP in mind when you design your interfaces, but you should not see it as a law, rather as a principle to be applied in the right situations.

I guess you refer to Agile Principles, Patterns and Practices when you quote Robert C. Martin; if so, also read the conclusion in the same chapter, where he says about the same thing as I did above. In his book Clean Code he gives a more nuanced explanation of the OCP, and I would say the quote above is a bit unfortunate, since it can lead people to think that you should always put new code in new DLLs, JARs or libs, when the truth is that you should always consider the context.

I think you should rather take a look at Martin's more up-to-date whitepaper about OCP, http://objectmentor.com/resources/articles/ocp.pdf (which he also refers to in his later book Clean Code). There he never refers to separate binaries; rather he refers to "classes, modules, functions". I think this shows that Martin does not mean just binary extension when he speaks about OCP, but also extension of classes and functions, so binary extension is no more "strict" than the class extension in my first example.

Pat answered 5/9, 2011 at 12:55 Comment(10)
I know I formulated poorly my question. According to Robert C. Martin: “The binary executable version of the module, whether a linkable library, a DLL, or a Java .jar remain untouched”. This generally does not happen.Qualifier
OCP is a principle and shall be treated as a principle. I do not know where you found that quote, so I do not know the full context, but one thing Robert C. Martin also says is that you shall always consider the context. In most situations it is not interesting to apply the OCP at the binary level between versions of the same project; that is only interesting if you need to change implementations at runtime. But I would not say it generally does not happen; I would say it happens quite often.Pat
For one example you can take a look at my own project, where I have an interface IUserRepository code.google.com/p/njupiter/source/browse/trunk/Development/… and then via an abstract factory load different implementations placed in different assemblies, one implementation for SQL and one for a Directory Service code.google.com/p/njupiter/source/browse/… code.google.com/p/njupiter/source/browse/…Pat
I just updated my post with one more example of OCP on the binary level. Hope that helps.Pat
Thanks for bearing with me and my last-minute changes. You guessed the source of my quote right :) Now, I disagree with your interpretation of the chapter; Martin insists on the binary aspect of closure on more than one occasion, and I am trying to see some concrete examples of how this can be achieved.Qualifier
In PPP page 105 Robert Martin writes that following OCP is expensive (it increases the complexity of the system) and should be done only against the most probable changes. He also writes that you should not preemptively put in the hooks to follow OCP, but you should put in the hooks only after a change happens that needs them.Inexpert
@Esko, he speaks about premature generalization, true. This is not in contradiction with binary closure. You can have IShape. IRotatingShape can extend IShape and be in another binary. You don't need to modify IShape and add rotate() to it.Qualifier
I actually checked this further, and the quote you give is the original principle as stated by Bertrand Meyer. In Clean Code Martin refers to his original whitepaper about his version of the principle, objectmentor.com/resources/articles/ocp.pdf; there he never refers to separate binaries, rather to "classes, modules, functions", as in my first example.Pat
And as I said before, you shall always consider the context and be pragmatic, not slavishly follow this principle; doing so would be a major mistake. Therefore you will probably never find any "good" example where every change in the application results in a new binary, because that is neither good nor pragmatic.Pat
And if you want to see a real-world example of how you can extend your functions at the binary level, I have also given you two examples ;) For example, in my own code you have a SQL implementation and a Directory Service implementation of the same UserRepository (both written on different occasions), and you could easily add, for example, an XML implementation without changing the original DLL.Pat

I am not aware of really good examples, but I think there might be a reason for the more "relaxed interpretation" (for example here on SO):

To fully realize the OCP in a real-world project, you need to do the coupling via lean interfaces (see ISP and DIP for this) and dependency injection (either property- or constructor-based)... otherwise you very quickly end up either stuck or having to resort to the "relaxed interpretation"...
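A minimal sketch of that combination in Java (illustrative names, assumed for this example only): the consumer depends on a lean interface and receives the concrete implementation through its constructor.

```java
// Hypothetical lean interface (ISP): only what the consumer needs.
interface UserRepository {
    String findName(int id);
}

// One concrete detail; others (XML, LDAP, ...) could be added later
// without touching UserService.
class SqlUserRepository implements UserRepository {
    public String findName(int id) { return "sql-user-" + id; }
}

class UserService {
    private final UserRepository repository;

    // Constructor-based dependency injection: UserService depends
    // on the abstraction, never on a concrete repository (DIP).
    UserService(UserRepository repository) {
        this.repository = repository;
    }

    String greet(int id) {
        return "Hello, " + repository.findName(id);
    }
}
```

Swapping in another repository implementation then extends the system without modifying UserService, which is the closure OCP asks for.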

some interesting links in this regard:

Sitology answered 31/8, 2011 at 1:57 Comment(0)

Background

On page 100 of PPP Robert Martin says

"Closed for modification"
Extending the behavior of a module does not result in changes to the source or binary code of the module. The binary executable version of the module, whether a linkable library, a DLL, or a Java .jar, remains untouched.

Also on page 103 he discusses an example, written in C, where a non-OCP design results in recompiling the existing classes:

So, not only must we change the source code of all switch/case statements or if/else chains, but we also must alter the binary files (via recompilation) of all the modules that use any of the Shape data structures. Changing the binary files means that any DLLs, shared libraries, or other kinds of binary components must be redeployed.

It's good to remember that this book was published in 2003 and many of the examples use C++, which is a language notorious for long compile times (unless header file dependencies are handled well - developers from Remedy mentioned in one presentation that Alan Wake's full build takes only about 2 minutes).

So when discussing binary compatibility at the small scale (i.e., within one project), one benefit of OCP (and DIP) is faster compile times, which is less of an issue with modern languages and machines. But at the large scale, when a library is used by many other projects, especially if their code is not under our control, the benefit of not having to release new versions of the software still applies.

Example

As an example of an open source library which follows OCP in binary compatibility, look at JUnit. There are tens of testing frameworks which rely on JUnit's @RunWith annotation and Runner interface, so that they can be run with the JUnit test runner - without having to change JUnit, Maven, IDEs etc.

Also, JUnit's recently added @Rule annotation allows test writers to plug custom behavior into standard JUnit tests, which previously would have required a custom test runner. Once more, an example of library-level OCP.
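The shape of that extension point can be modeled in plain Java. This is a simplified illustration of the @RunWith/Runner idea, not JUnit's actual types or signatures: the framework core reads an annotation, instantiates whatever runner it names, and never needs to know about concrete runners.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Simplified model of a runner-style extension point: the framework
// defines an abstract runner plus an annotation naming a runner class.
abstract class Runner {
    abstract String run(Class<?> testClass);
}

@Retention(RetentionPolicy.RUNTIME)
@interface RunWith {
    Class<? extends Runner> value();
}

// A third-party framework extends the runner without touching the core.
class CustomRunner extends Runner {
    String run(Class<?> testClass) {
        return "custom:" + testClass.getSimpleName();
    }
}

@RunWith(CustomRunner.class)
class SomeTest { }

class Core {
    // The core only consults the annotation and instantiates the
    // declared runner reflectively, so it stays closed for modification.
    static String execute(Class<?> testClass) throws Exception {
        RunWith runWith = testClass.getAnnotation(RunWith.class);
        Runner runner = runWith.value().getDeclaredConstructor().newInstance();
        return runner.run(testClass);
    }
}
```

Any number of new runners can be shipped in separate jars; the core module's binary never changes.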

To contrast, TestNG does not follow OCP, but contains JUnit-specific checks to execute TestNG and JUnit tests differently. A representative line can be found in the TestRunner.run() method:

  if(test.isJUnit()) {
    privateRunJUnit(test);
  }
  else {
    privateRun(test);
  }

As a result, even though the TestNG test runner has in some respects more features (for example, it supports running tests in parallel), other testing frameworks do not use it, because it is not extensible to support other testing frameworks without modifying TestNG. (TestNG has a way to plug in custom test runners using the -testrunfactory argument, but AFAIK it allows only one type of runner per suite, so it would not be possible to use many different testing frameworks in one project, unlike with JUnit.)
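For illustration, a hedged sketch of how a check like the one above could be made OCP-friendly (the names are invented for this example and are not TestNG's actual refactoring): the dispatcher asks each suite for its runner instead of branching on its kind.

```java
// Illustrative refactoring of an isJUnit()-style branch:
// dispatch through an abstraction rather than a type check.
interface SuiteRunner {
    String run();
}

class NativeSuiteRunner implements SuiteRunner {
    public String run() { return "native"; }
}

class JUnitSuiteRunner implements SuiteRunner {
    public String run() { return "junit"; }
}

class TestRunner {
    // No if/else on the test's kind: supporting a third framework
    // means adding a SuiteRunner implementation, not editing this method.
    static String run(SuiteRunner runner) {
        return runner.run();
    }
}
```

With this shape, the framework stays closed for modification while remaining open to new suite types.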

Conclusion

However, in most situations OCP is used within an application or library, in which case both the base module and its extensions are packaged inside the same binary. In that situation OCP is used to improve the maintainability of the source code, not to avoid redeploys and new releases. The possible benefit of not having to recompile an unchanged file is still there, but since compile times are so low with most modern languages, that's not very important.

The thing to always keep in mind is that following OCP is expensive, as it makes the system more complex. Robert Martin discusses this on PPP page 105 and in the conclusion of the chapter. OCP should be applied carefully, only against the most probable changes. You should not preemptively put in the hooks to follow OCP; you should put in the hooks only after a change happens that needs them. Thus it is unlikely you will find a project where all new features have been added without changing existing classes - unless somebody did it as an academic exercise (my intuition says it would be very hard, and the resulting code would not be clean).

Inexpert answered 5/9, 2011 at 22:20 Comment(1)
Recompilation is problematic not because of compile times, but because once you have produced another binary, there is no longer any guarantee that software depending on that library works correctly with the new version. This is somewhat mitigated by established version numbering conventions, but they are not always respected. And numerous problems with third-party version incompatibility could have been avoided had more thought been put into modularization.Qualifier