Which algorithms are hard to implement in functional languages?
I'm dabbling in functional languages and I have found some algorithms (especially those that use dynamic programming) harder to write and sometimes less efficient in worst-case runtime. Is there a class of algorithms which are less efficient in functional languages with immutable variables and no side effects?

And is there a reference someone can point me to that will help with the more difficult-to-write algorithms (perhaps those normally optimized via shared state)?

Thanks

Game answered 12/6, 2012 at 6:32 Comment(1)
While there is relatively often a performance loss when using the same algorithm in a functional language vs an imperative one, that is much less often the case when you consider different algorithms that solve the same real-world problem. If you're an experienced imperative programmer dabbling in functional languages, then all your experience thinking about algorithms will be skewed towards those suitable in an imperative context.Viceregent
First off, as you may or may not be aware, some languages, including Haskell, implement sharing, which alleviates some of the problems you might be thinking of.
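To make that concrete, here is a minimal Haskell sketch of structural sharing (the type and function names are my own, not from any particular library): inserting into a persistent binary search tree copies only the nodes along the path to the new element, and every untouched subtree is shared between the old and new versions.

```haskell
-- A persistent binary search tree. "Updating" it never mutates anything:
-- insert builds a new path of nodes and shares all untouched subtrees.
data Tree a = Leaf | Node (Tree a) a (Tree a) deriving Show

insert :: Ord a => a -> Tree a -> Tree a
insert x Leaf = Node Leaf x Leaf
insert x t@(Node l v r)
  | x < v     = Node (insert x l) v r  -- rebuild the left path; r is shared
  | x > v     = Node l v (insert x r)  -- rebuild the right path; l is shared
  | otherwise = t                      -- already present: share the whole tree

main :: IO ()
main = do
  let t1 = foldr insert Leaf [5, 3, 8]
      t2 = insert 4 t1  -- only the nodes on the path to 4 are copied
  print (t1, t2)
```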

While Andrew's answer points at Turing completeness, it doesn't really answer the question of which algorithms are hard to implement in functional languages. In practice, people usually turn the question around and ask which data structures are hard to implement in functional languages.

The simple answer to this: things that involve pointers.

There's no functional equivalent to pointers. When you drill down to the machine level there is a mapping, of course, and certain data structures can be compiled safely down to arrays or other pointer-backed representations, but at a high level you just can't express pointer-based data structures as easily as you can in imperative languages.

To get around this, a number of things have been done:

  • Since pointers form the basis for a hash table, and since hash tables really just implement a map, efficient functional maps have been studied comprehensively. In fact, Chris Okasaki has a book ("Purely Functional Data Structures") that details many, many ways to implement functional maps, deques, and so on.
  • Since pointers can be used to represent a position inside the traversal of some larger data structure, there has also been work in this area. One product of that work is the zipper, an efficient structure that succinctly represents the functional equivalent of the "pointer inside of a deeper structure" technique (see the sketch just after this list).
  • Since pointers can be used to implement side-effecting computations, monads have been used to express this kind of computation cleanly. Because mutable state is difficult to juggle, one use for monads is to let you fence off an ugly, imperative-behaving part of your program and use the type system to make sure the program is chained together correctly (through monadic binds).
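As an illustration of the second bullet, here is a toy list zipper in Haskell (a minimal version of my own; real libraries offer richer variants, e.g. for trees): the focus can be moved left or right and edited in O(1), with no pointers in sight.

```haskell
-- A list zipper: a cursor into a list, stored as the elements to the
-- left of the focus (nearest first), the focused element, and the
-- elements to the right. Moving and editing the focus are O(1).
data Zipper a = Zipper [a] a [a] deriving Show

fromList :: [a] -> Maybe (Zipper a)
fromList []       = Nothing
fromList (x : xs) = Just (Zipper [] x xs)

left, right :: Zipper a -> Maybe (Zipper a)
left  (Zipper (l : ls) f rs) = Just (Zipper ls l (f : rs))
left  _                      = Nothing
right (Zipper ls f (r : rs)) = Just (Zipper (f : ls) r rs)
right _                      = Nothing

modify :: (a -> a) -> Zipper a -> Zipper a
modify g (Zipper ls f rs) = Zipper ls (g f) rs
```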

While I'd like to say that any algorithm can be translated from an imperative one to a functional one very easily, this is simply not the case. However, I'm fairly convinced that the problem isn't the algorithms per se, but the data structures they manipulate, which are based on an imperative notion of state. You can find a long list of functional data structures in this post.

The flip side to all of this is that if you start using a more purely functional style, much of the complexity in your program goes down, and many needs for heavily imperative data structures disappear (for example, a very common use of pointers in imperative languages is to implement nasty design patterns, which usually translate into clever uses of polymorphism and typeclasses at the functional level).
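For a flavor of that last point, here is a hypothetical sketch (the class and instances are invented for illustration, not from any library) of how an OO "strategy pattern", which imperatively would involve an interface plus heap-allocated strategy objects, collapses into a typeclass, or even just a plain function argument:

```haskell
import Data.List (group)

-- The "strategy" is a typeclass; each concrete strategy is a plain value.
class Encoder e where
  encode :: e -> String -> String

data RunLength = RunLength  -- run-length encoding strategy
data Plain     = Plain      -- identity strategy

instance Encoder RunLength where
  encode _ = concatMap (\g -> show (length g) ++ [head g]) . group

instance Encoder Plain where
  encode _ = id

main :: IO ()
main = putStrLn (encode RunLength "aaabbc")  -- prints "3a2b1c"
```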

EDIT: I believe the essence of this question deals with how to express computation in a functional manner. However, it should be noted that there are ways of defining stateful computation in a functional way, or rather, that it is possible to use functional techniques to reason about stateful computation. For example, the Ynot project does this using a parameterized monad in which facts about the heap (in the form of separation logic) are tracked by the monadic binds.
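Ynot itself is quite sophisticated; as a far simpler illustration of stateful computation expressed functionally, here is the ordinary State monad from the standard mtl/transformers packages. The "state" is just a value threaded through pure functions of type s -> (a, s) by the monadic binds:

```haskell
import Control.Monad.State

-- A stateful-looking counter: under the hood it is a pure function
-- Int -> (Int, Int), chained together by the monadic binds.
fresh :: State Int Int
fresh = do
  n <- get
  put (n + 1)
  return n

label3 :: State Int (Int, Int, Int)
label3 = do
  a <- fresh
  b <- fresh
  c <- fresh
  return (a, b, c)

main :: IO ()
main = print (runState label3 0)  -- prints ((0,1,2),3)
```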

By the way, even in ML I don't see why dynamic programming is that hard. Dynamic programming problems, which usually build up a collection of intermediate results to compute a final answer, can accumulate the constructed values via arguments to the function, perhaps with a continuation in some circumstances. Using tail recursion, there's no reason this can't be just as pretty and efficient as in imperative languages. Now sure, you may run into the argument that if these values are lists (for example), imperatively we can implement them as arrays; but for that, see the content of the post proper :-)
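A tiny sketch of that accumulating-argument style, in Haskell rather than ML (the idea carries over directly): Fibonacci as a degenerate dynamic program, where the last two entries of the would-be table are threaded through a tail-recursive helper instead of stored in a mutable array.

```haskell
-- Instead of a mutable DP table, carry the last two table entries as
-- accumulator arguments. The helper is tail-recursive, and seq keeps
-- the accumulators evaluated so no thunks pile up.
fib :: Integer -> Integer
fib n = go n 0 1
  where
    go 0 a _ = a
    go k a b = let ab = a + b
               in ab `seq` go (k - 1) b ab

main :: IO ()
main = print (map fib [0..10])  -- [0,1,1,2,3,5,8,13,21,34,55]
```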

Nymphalid answered 12/6, 2012 at 6:54 Comment(1)
I've done a lot of translating algorithms into functional languages. Probably the only one I've never managed to see done satisfactorily is Ukkonen's suffix-tree algorithm, but that's a very sophisticated piece of work in any event.Shirk
Please remember that most functional languages allow some notion of side effects; they may be frowned upon, restricted to local use, etc., but you can still use them. In OCaml, Haskell, F#, Scala or Clojure, you can use mutable arrays if you need to.

So if you find an algorithm for which you have a formulation using mutable arrays, and you need to reproduce it in one of these languages, just use mutable arrays!

There is no reason to force oneself to do everything using a single programming paradigm; there are some problem domains where imperative programming is (given our current knowledge) the best-suited tool for the job, just as there are domains where logic programming is excellent. If it saves you time and effort to make a local, encapsulated use of one of these paradigms, you should not hesitate to do so.

For example, the Sieve of Eratosthenes is trivial to implement with mutable arrays, and significantly harder to imitate (reasonably efficiently) in a purely functional way: see Melissa O'Neill's article "The Genuine Sieve of Eratosthenes" for details.
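As a sketch of the mutable-array approach in Haskell (one way to do it, using STUArray from the standard array package): all the mutation is encapsulated inside runSTUArray, so the result is still a pure value.

```haskell
import Control.Monad (forM_, when)
import Data.Array.IArray (assocs)
import Data.Array.ST (newArray, readArray, writeArray, runSTUArray)
import Data.Array.Unboxed (UArray)

-- Sieve of Eratosthenes with a genuinely mutable boolean array. The
-- mutation is confined to the ST computation; runSTUArray freezes the
-- array and returns it as a pure, immutable value.
sieve :: Int -> UArray Int Bool
sieve n = runSTUArray $ do
  isPrime <- newArray (2, n) True
  forM_ [2 .. floor (sqrt (fromIntegral n :: Double))] $ \p -> do
    prime <- readArray isPrime p
    when prime $
      forM_ [p * p, p * p + p .. n] $ \m ->
        writeArray isPrime m False
  return isPrime

primesUpTo :: Int -> [Int]
primesUpTo n = [p | (p, True) <- assocs (sieve n)]

main :: IO ()
main = print (primesUpTo 50)  -- [2,3,5,7,11,...,47]
```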

On the other hand, finding immutable solutions to a given problem can be an interesting and enlightening exercise. Chris Okasaki's book "Purely Functional Data Structures" is a good example of very nice reformulations of algorithms in a purely functional way. If you are interested in the algorithms themselves (rather than their application to your problem), this can be a very interesting activity.

(For examples of uses of sharing to optimize a purely functional algorithm, see Richard Bird and Ralf Hinze's 2003 Functional Pearl: Trouble Shared is Trouble Halved.)

Weakminded answered 14/6, 2012 at 13:57 Comment(1)
I completely agree with this answer, and in practice this is probably what happens most of the time (though Haskellers obviously seem much more purely functionally oriented).Nymphalid
One can implement imperative features with low asymptotic cost (simulating mutable memory with balanced trees, for instance, costs at most a logarithmic factor), so in an abstract sense there is no essential difficulty in translating imperative code to the purely functional universe. In practice, of course, there is. :-) Take a look at Pippenger's "Pure versus Impure Lisp" and the papers that cite it.

Odds answered 4/6, 2014 at 19:23 Comment(0)
