OOP and Dynamic Typing (not Static vs Dynamic)

What OOP principles, if any, don't apply or apply differently in a dynamically typed environment as opposed to a statically-typed environment (for example Ruby vs C#)? This is not a call for a Static vs Dynamic debate, but rather I'd like to see whether there are accepted principles on either side of that divide that apply to one and not the other, or apply differently. Phrases like "prefer composition to inheritance" are well known in the statically-typed OOP literature. Are they just as applicable on the dynamic side?

For instance, in a dynamically typed environment, it would seem that the granularity of coupling goes no further than the level of the method. In other words, any given function call only couples the caller to that particular interface, which any class could possibly satisfy -- or to put it another way, anything that quacks like that particular duck.

In Java, on the other hand, the granularity of coupling can go as high as the package. Not only does a particular method call establish a contract with another class/interface, but it also couples the caller to that class's/interface's package/jar/assembly.

Do differences like this give rise to different principles and patterns? If so have these differences been articulated? There's a section in the Ruby Pickaxe book that goes in this direction a bit (Duck Typing/Classes Aren't Types), but I'm wondering if there's anything else. I'm aware of Design Patterns in Ruby but haven't read it.

EDIT -- It has been argued that Liskov doesn't apply the same way in a dynamic environment as it does in a static one, but I can't help thinking that it does. On the one hand there is no high-level contract with an entire class. But don't all calls to any given class constitute an implicit contract that needs to be satisfied by child classes, the way Liskov prescribes? Consider the following. The calls in do_some_bar_stuff create a contract that needs to be attended to by child classes. Isn't this a case of "treating a specialized object as if it were a base class"?

class Bartender
    def initialize(bar)
       @bar = bar
    end

    def do_some_bar_stuff
        @bar.open
        @bar.tend
        @bar.close
    end
end

class Bar
    def open
        # open the doors, turn on the lights
    end
    def tend
        # tend the bar
    end
    def close
        # clean the bathrooms
    end
end

class BoringSportsBar < Bar
    def open
        # turn on Golden Tee, fire up the plasma screen
    end

    def tend
        # serve lots of Bud Light
    end
end

class NotQuiteAsBoringSportsBar < BoringSportsBar
    def open
        # turn on vintage arcade games
    end
end

class SnootyBeerSnobBar < Bar
    def open
        # replace empty kegs of expensive Belgians
    end

    def tend
        # serve lots of obscure ales, porters and IPAs from 124 different taps
    end
end

# monday night
bartender = Bartender.new(BoringSportsBar.new)
bartender.do_some_bar_stuff

# wednesday night
bartender = Bartender.new(SnootyBeerSnobBar.new)
bartender.do_some_bar_stuff

# friday night
bartender = Bartender.new(NotQuiteAsBoringSportsBar.new)
bartender.do_some_bar_stuff
Canaletto answered 16/12, 2009 at 22:37 Comment(0)

The essential differences you are touching on, I think, are:

  • Group 1 languages: the actual methods invoked when, e.g., object.method1, object.method2, object.method3 are called can change during the object's lifetime.

  • Group 2 languages: the actual methods invoked when, e.g., object.method1, object.method2, object.method3 are called cannot change during the object's lifetime.

Languages in group 1 tend to have dynamic typing and not to support compile-time-checked interfaces; languages in group 2 tend to have static typing and to support compile-time-checked interfaces.
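
For example, here is a minimal Ruby sketch of the group 1 property (the class and method names are mine, purely for illustration):

class HappyHourBar
    def tend
        puts "serving the usual"
    end
end

bar = HappyHourBar.new
bar.tend    # => serving the usual

# Redefining tend on this one instance (a singleton method) changes which
# method bar.tend actually invokes during the object's lifetime
def bar.tend
    puts "serving half-price specials"
end

bar.tend    # => serving half-price specials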

I would say that all OO principles apply to both, but

  • some extra (explicit) coding may be required in group 1 to implement run-time (instead of compile-time) checks asserting that new objects are created with all the appropriate methods plumbed in to meet an interface contract, since there is no compile-time interface-agreement checking (if you want to make group 1 code more like group 2 code); see the sketch after this list

  • some extra coding may be required in group 2 to model changes in which method a call actually invokes, either by using extra state flags to dispatch to sub-methods, or by wrapping a method or set of methods in a reference to one of several objects attached to the main object, each with different method implementations (if you want to make group 2 code more like group 1 code)

  • the very restrictions on design in group 2 languages make them better for larger projects where ease of communication (as opposed to comprehension) becomes more important

  • the lack of restrictions on design in group 1 languages makes them better for smaller projects, where the programmer can more easily check at a glance whether the various design plumbing constraints are met, simply because the code is smaller

  • making code from one group of languages work like the other is interesting and well worth studying, but the point of the language differences is really to do with how well they help different sizes of teams (I believe! :) )

  • there are various other differences

  • more or less leg-work may be required to implement an OO design in one language or another depending on the exact principles involved.
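
To illustrate the first point, here is a minimal Ruby sketch of an explicit run-time check that an object meets a (hypothetical) required-methods list, standing in for compile-time interface agreement:

BAR_INTERFACE = [:open, :tend, :close]

def assert_quacks_like_bar(obj)
    # collect any required methods the object does not respond to
    missing = BAR_INTERFACE.reject { |m| obj.respond_to?(m) }
    unless missing.empty?
        raise ArgumentError, "#{obj.class} is missing: #{missing.join(', ')}"
    end
end

# given the question's classes:
# assert_quacks_like_bar(BoringSportsBar.new)    # passes
# assert_quacks_like_bar(Object.new)             # raises ArgumentError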


EDIT

So to answer your original question, I examined http://c2.com/cgi/wiki?PrinciplesOfObjectOrientedDesign and http://www.dofactory.com/patterns/Patterns.aspx.

In practice, the OO principles are not always followed in a system, for various good reasons (and of course some bad ones). Good reasons include cases where performance concerns outweigh pure design-quality concerns, where the cultural benefits of an alternative structure/naming outweigh pure design-quality concerns, and where the cost of the extra work of implementing a function in other than the standard way for a particular language outweighs the benefits of a pure design.

Coarser-grained patterns like Abstract Factory, Builder, Factory Method, Prototype, Adapter, Strategy, Chain of Responsibility, Bridge, Proxy, Observer, Visitor and even MVC/MVVM tend to get used less in small systems because the amount of communication about the code is less, so the benefit of creating such structures is not as great.

Finer-grained patterns like State, Command, Factory Method, Composite, Decorator, Facade, Flyweight, Memento, and Template Method are perhaps more common in group 1 code, but often several design patterns apply not to an object as such but to different parts of an object, whereas in group 2 code patterns tend to be present on a one-pattern-per-object basis.

IMHO it makes a lot of sense in most group 1 languages to think of all global data and functions as a kind of singleton "Application" object. I know we're blurring the line between Procedural and OO programming here, but this kind of code definitely quacks like an "Application" object in a lot of cases! :)
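
For instance, a minimal sketch using Ruby's stdlib Singleton module (the Application class and its contents are hypothetical):

require 'singleton'

class Application
    include Singleton    # Application.new becomes private; use .instance

    attr_accessor :settings

    def initialize
        @settings = {}
    end
end

Application.instance.settings[:volume] = 11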

Some very fine-grained design patterns like Iterator tend to be built into group 1 languages.
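
For example, in Ruby any class that defines each and mixes in Enumerable gets the Iterator machinery for free (hypothetical names):

class TapList
    include Enumerable

    def initialize(*beers)
        @beers = beers
    end

    # defining each is all Enumerable needs
    def each(&block)
        @beers.each(&block)
    end
end

taps = TapList.new("porter", "stout", "ipa")
taps.each { |beer| puts beer }      # internal iteration
puts taps.map(&:upcase).inspect     # map, select, etc. come for free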

Wickerwork answered 17/12, 2009 at 0:32 Comment(6)
Great comments. Quick question, re: group 1 -- would you say that the State Pattern might constitute an example of method behavior changing during an object's lifetime in a static environment? - Canaletto
It's a question of how you define object! In either a group 1 or group 2 context, what an object is is prescribed by the language tool, though the designer is free (although explaining to another person is harder!) to use any conceptual map to define objects and object boundaries. I suspect this debate is really about "what happens in practice - what is the list of principles and patterns that get used by necessity in group 1 language projects vs group 2 language projects". - Wickerwork
I think interface contracts are less important in group 1 BECAUSE the focus when writing in a group 1 language tends to be on object data (making it more likely that programmers are mentally checking parameter names, which are a more variable element of function signatures than method names), whereas when writing group 2 code the focus is on method names, so the compiler needs to provide more backup in checking function signatures. Just my $0.02. - Wickerwork
In answer to your question about the State Pattern: only if in one of these types of debates...! There are certain default reference points and definitions, and then there is flexibility and looking at things from a different angle. - Wickerwork
An object is defined by its interfaces; the trend to prefer hierarchies of interfaces over classical OO inheritance is gathering momentum. The only complication is that for performance reasons some "interfaces" need to allow other objects direct access to fields at fixed offsets; and the tools are far from helpful in automating the choice of which fields get this go-faster treatment - the easiest way to make field offsets consistent where required is to specify inheritance. - Wickerwork
RE: Liskov / example Bar/SportsBar/WhateverBar object and open/tend methods. If translating the code into a group 1 language, the open/tend methods would become functions. In both groups Liskov is down to the programmer! The only in-practice difference is that it is easier to mis-wire the object on creation to call the wrong code in group 1, because in practice group 1 method names are not typically scoped to the object they are written for, so the programmer has to focus more on the language than the task at that point, making group 2 better and easier to get right for big-code-base polymorphism. - Wickerwork

Let me start by saying that, personally, I think an OOP principle that does not work in both dynamically and statically typed languages isn't a principle.

That said, here is an example:

The Interface Segregation Principle (http://objectmentor.com/resources/articles/isp.pdf) states that clients should depend on the most specific interface that meets their needs. If client code needs to use two methods of class C, then C should implement an interface I containing only these two methods, and the client will use I rather than C. This principle is irrelevant in dynamically typed languages, where interfaces are not needed (since interfaces define types, and types are not needed in a language where variables are type-less).
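
A minimal Ruby sketch of that point (Client, C, and the method names are placeholders): the client below depends only on the two methods it actually calls; no interface I need be declared anywhere.

class Client
    def initialize(c)
        @c = c
    end

    def run
        # the implicit "interface" is exactly these two calls
        @c.open
        @c.close
    end
end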

[edit]

Second example - the Dependency Inversion Principle (http://objectmentor.com/resources/articles/dip.pdf). This principle argues for "the strategy of depending upon interfaces or abstract functions and classes, rather than upon concrete functions and classes". Again, in a dynamically typed language client code does not depend on anything concrete - it just specifies method signatures - thereby obviating this principle.

Third example - the Liskov Substitution Principle (http://objectmentor.com/resources/articles/lsp.pdf). The textbook example for this principle is a Square class that subclasses a Rectangle class. Client code that invokes a setWidth() method on a Rectangle variable is then surprised when the height also changes, since the actual object is a Square. Again, in a dynamically typed language the variables are type-less; the Rectangle class will not be mentioned in the client code, and hence no such surprise will arise.
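
For reference, a minimal Ruby rendering of that textbook example:

class Rectangle
    attr_reader :width, :height

    def initialize(width, height)
        @width, @height = width, height
    end

    def width=(w)
        @width = w
    end
end

class Square < Rectangle
    def width=(w)
        # preserving the square invariant also changes the height
        @width = @height = w
    end
end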

Sodomite answered 16/12, 2009 at 22:50 Comment(10)
Type-less? A dynamically typed language has types. In Common Lisp, for example, any data object is of a type (or, more likely, a hierarchy of types), and at any time in the execution of the program it's possible to determine that type precisely. Further, conversions between types follow strict rules, and therefore CL is a strongly typed language. You just can't look at a symbol in the source code and know what it represents. - Nobles
David: What you mean is that objects have types, with which I fully agree. I said that the variables are type-less: there is no type associated with a variable, and it can point to an object of any type. - Sodomite
@Itay RE: Dependency Inversion -- but even in a dynamic environment, is it not better to inject an instance via a constructor or setter rather than instantiate an instance? The former keeps dependency at the method level you describe; the latter creates a tight coupling to the entire class. Also, I use factories all the time in Ruby to hide dependence on concrete classes. - Canaletto
@Itay RE: Liskov -- have to disagree here. I believe Liskov applies pretty much the same way for dynamic and static. (see added example above) - Canaletto
I am not speaking about Dependency Injection (in which case I agree with you) but about the Dependency Inversion Principle as described in the link. - Sodomite
Regarding Liskov - the principle is sound. However, the textbook example "Rectangle-typed variable pointing at a Square" is meaningless in a dynamic typing setting. - Sodomite
I agree that there is no high-level contract at the caller level with an entire base class, and Liskov wouldn't seem to apply in the same way. But isn't it true that a group of calls to a single object at a high level creates the same sort of contract that needs to be satisfied by child classes? - Canaletto
You can separate the strictness of Liskov from what it is strict about. In C# there is more to be strict about because you have more formal interface definitions, whereas in ECMAScript there is less. I think Liskov applies equally as a pure principle, but what power the compiler/interpreter has to enforce or encourage Liskov differs between languages, and the desirability of following Liskov will depend on the width of your object hierarchy and LOC count. - Wickerwork
re: DIP -- I see Dependency Injection/IoC as a radical application of Dependency Inversion. I think if you've got Dependency Injection, you are by definition applying DIP. - Canaletto
Also, talking about LISP and macros: the type system of a language is sort of like a macro language for checking errors (you can see this in Haskell, where the type system can have loops, conditions, and calculation). Using LISP macros with assertions lets you have compile-time error checking, just like types. - Envenom

I have a "radical" view on all this: in my opinion, backed by mathematics, OOP doesn't work in a statically typed environment for any interesting problems. I define "interesting" as meaning that abstract relations are involved. This can be proven easily (see the "covariance problem").

The core of this problem is that OOP promises a way to model abstractions, and, combined with the contract programming delivered by static typing, relations cannot be implemented without breaking encapsulation. Just try any covariant binary operator to see this: try to implement "less than" or "add" in C++. You can code the base abstraction easily, but you can't implement it.

In dynamic systems there are no high-level formalised types and no encapsulation to bother with, so OO actually works; in particular, prototype-based systems like the original Smalltalk actually deliver working models which cannot be encoded at all under static typing constraints.

To answer the question another way: the fundamental assumption of the very question is intrinsically flawed. OO doesn't have any coherent principles because it isn't a consistent theory: there do not exist any models of it with sufficient power to handle anything but simple programming tasks. What differs is what you give up: in dynamic systems you give up encapsulation; in static systems you just switch to models that do work (functional programming, templates, etc.), since all statically typed systems support these things.

Speed answered 30/12, 2010 at 9:21 Comment(2)
I hear you, and I'm sympathetic, but when I hear sweeping, coarse-grained statements like "doesn't work for any interesting problems" it strikes me as similar to arguing that Set Theory doesn't work at all because of Gödel and Incompleteness. Technically true but mostly inconsequential. There's just too much production code dealing with really complex domains using static typing to take such a statement seriously. Radical views require radical evidence, in principle AND in practice. Most of the problems I see are the results of bad architecture, not intrinsic limitations of static typing. - Canaletto
Nevertheless +1 for boldness and waving the dynamic flag high. Since originally posting this question, I've increasingly become convinced, as you are, that static typing really provides only small benefits, during development, in refactoring and code browsing mainly, and in a kind of built-in unit testing with the compiler. But not much else. C# and Java are slooow to code in compared to Ruby, and in general they just feel less expressive and less powerful. - Canaletto

Interfaces can add some level of overhead, especially if you directly depend upon someone else's API. Simple solution - don't depend on someone else's API.

Have each object talk to the interfaces it wishes existed in an ideal world. If you do this, you'll end up with small interfaces that have a small scope. You'll also gain compile-time failures when the interfaces change.

The smaller and more specific your interfaces are, the less 'bookkeeping' you'll have to do when an interface changes.

One of the real benefits of static typing is not statically knowing what methods you can call, but guaranteeing that value objects are already validated... If you need a name, and a name has to be < 10 characters, create a Name class that encapsulates that validation (though not necessarily any I/O aspects - keep it a pure value type); the compiler can then help you catch errors at compile time, rather than you having to verify at runtime.
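
A minimal Ruby sketch of that value-object idea; in a dynamic language the validation necessarily runs at construction time rather than compile time:

class Name
    MAX_LENGTH = 10

    attr_reader :value

    def initialize(value)
        unless value.length < MAX_LENGTH
            raise ArgumentError, "name must be shorter than #{MAX_LENGTH} characters"
        end
        @value = value.freeze
    end

    def to_s
        value
    end
end

Name.new("Mia")             # fine
# Name.new("Bartholomew")   # raises ArgumentError at construction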

If you're going to use a static language, use it to your advantage.

Valverde answered 17/12, 2009 at 6:11 Comment(0)