First, it helps to think about why we have these design principles. Why does following the SOLID principles make software better? Work to understand the goals of each principle, and not just the specific implementation details required to use them with a specific language.
- The Single Responsibility Principle improves modularity by increasing
cohesion; better modularity leads to improved testability,
usability, and reusability.
- The Open/Closed Principle enables independent, asynchronous deployment of
modules by decoupling implementations from each other.
- The Liskov Substitution Principle promotes modularity and reuse of modules by
ensuring the compatibility of their interfaces.
- The Interface Segregation Principle reduces coupling between
unrelated consumers of the interface, while increasing readability and
understandability.
- The Dependency Inversion Principle reduces coupling, and it strongly
enables testability.
Notice how each principle drives an improvement in a certain attribute of the system, whether it be higher cohesion, looser coupling, or modularity.
Remember, your goal is to produce high quality software. Quality is made up of many different attributes, including correctness, efficiency, maintainability, understandability, etc. When followed, the SOLID principles help you get there. So once you've got the "why" of the principles, the "how" of the implementation gets a lot easier.
EDIT:
I'll try to more directly answer your question.
For the Open/Closed Principle, the rule is that both the signature and the behavior of the old interface must remain the same before and after any changes. Don't disrupt any code that is calling it. That means it absolutely takes a new interface to implement the new stuff, because the old stuff already has a behavior. The new interface must have a different signature, because it offers the new and different functionality. You meet those requirements in C much the same way as in C++, except that C has no function overloading, so the new function also needs a new name.
Let's say you have a function int foo(int a, int b, int c)
and you want to add a version that's almost exactly the same, but takes a fourth parameter. In C the new version needs its own name, say: int foo2(int a, int b, int c, int d)
. It's a requirement that the new version be backward compatible with the old version, and that some default (such as zero) for the new parameter would make that happen. You'd move the implementation code from old foo into foo2, and in your old foo you'd do this: int foo(int a, int b, int c) { return foo2(a, b, c, 0); }
So even though we radically transformed the contents of int foo(int a, int b, int c)
, we preserved its functionality. It remained closed to change while the system stayed open to extension.
The Liskov Substitution Principle states that different subtypes must be usable interchangeably. In other words, things with common signatures that can be substituted for each other must honor the same behavioral contract.
In C, this can be accomplished with function pointers to functions that take identical sets of parameters. Let's say you have this code:
#include <stdio.h>

void fred(int x)
{
    printf("fred %d\n", x);
}

void barney(int x)
{
    printf("barney %d\n", x);
}

#define Wilma 0
#define Betty 1

int main(void)
{
    void (*flintstone)(int);
    int wife = Betty;

    switch (wife)
    {
    case Wilma:
        flintstone = &fred;
        break;  /* without this break, Wilma would fall through to Betty */
    case Betty:
        flintstone = &barney;
        break;
    }

    (*flintstone)(42);
    return 0;
}
fred() and barney() must have compatible parameter lists for this to work, of course, but that's no different than subclasses inheriting their vtable from their superclasses. Part of the behavior contract would be that both fred() and barney() should have no hidden dependencies, or if they do, they must also be compatible. In this simplistic example, both functions rely only on stdout, so it's not really a big deal. The idea is that you preserve correct behavior in both situations where either function could be used interchangeably.