Haskell, Lisp, and verbosity [closed]

For those of you experienced in both Haskell and some flavor of Lisp, I'm curious how "pleasant" (to use a horrid term) it is to write code in Haskell vs. Lisp.

Some background: I'm learning Haskell now, having earlier worked with Scheme and CL (and a little foray into Clojure). Traditionally, you could consider me a fan of dynamic languages for the succinctness and rapidity they provide. I quickly fell in love with Lisp macros, as they gave me yet another way to avoid verbosity and boilerplate.

I'm finding Haskell incredibly interesting, as it's introducing me to ways of coding I didn't know existed. It definitely has some aspects that seem like they would aid in achieving agility, like the ease of partial application. However, I'm a bit concerned about losing Lisp macros (I assume I lose them; truth be told, I may just not have learned about them yet?) and about the static type system.

Would anyone who has done a decent amount of coding in both worlds mind commenting on how the experiences differ, which you prefer, and if said preference is situational?

Rowel answered 25/12, 2008 at 10:58 Comment(0)

Short answer:

  • almost anything you can do with macros you can do with a higher-order function (and I include monads, arrows, etc.), but it might require more thinking (only the first time, though; it's fun, and you'll be a better programmer for it) -- see the sketch after the note below, and
  • the static type system is sufficiently general that it never gets in your way, and somewhat surprisingly it actually "aids in achieving agility" (as you said): when your program compiles you can be almost certain that it is correct, and this certainty lets you try out things you might otherwise be afraid to try -- there is a "dynamic" feel to programming, although it's not the same as with Lisp.

[Note: There is "Template Haskell", which lets you write macros much as in Lisp, but strictly speaking you should never need it.]
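
To make the first bullet concrete, here is a minimal Haskell sketch (the names unless' and doTimes are purely illustrative, not from any library): control constructs that would need macros in a strict Lisp can be ordinary higher-order functions, because arguments are only evaluated when they are actually used.

-- "unless": run the action only when the condition is False.
unless' :: Bool -> IO () -> IO ()
unless' True  _      = return ()
unless' False action = action

-- "dotimes": run an action for each index from 0 to n-1.
doTimes :: Int -> (Int -> IO ()) -> IO ()
doTimes n body = mapM_ body [0 .. n - 1]

main :: IO ()
main = do
  unless' (2 + 2 == 5) (putStrLn "arithmetic still works")
  doTimes 3 (\i -> putStrLn ("iteration " ++ show i))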

Castorina answered 25/12, 2008 at 15:40 Comment(4)
From Conor McBride, quoted by Don Stewart: 'I like to think of types as warping our gravity, so that the direction we need to travel [to write correct programs] becomes "downhill".' The type system makes it surprisingly easy to write correct programs… see this post and its re-shares.Castorina
Higher-order functions cannot replace macros, and in fact, CL has both for a reason. The real power of macros in CL is that they allow the developer to introduce new language features that help better express the solution to a problem, without having to wait for a new version of the language as in Haskell or Java. For example, if Haskell had this power there would be no need for the Haskell authors to write GHC extensions, as they could be implemented by developers themselves as macros at any time.Hooked
@Hooked Do you have a concrete example? See the comments on Hibou57's answer below where an alleged example turned out to be dubious. I'd be interested to know the sort of thing you mean (e.g. Haskell code with and without macros).Castorina
Take currying out of Haskell. Could you implement it with what would be left in Haskell? Another example: suppose that Haskell did not support pattern matching; could you add it yourself without having to wait for GHC's developers to support it? In CL, you can use macros to extend the language at will. I suppose that's why CL the language hasn't changed since its standard back in the '90s, whereas Haskell seems to have a never-ending flux of extensions in GHC.Hooked

First of all, don't worry about losing particular features like dynamic typing. As you're familiar with Common Lisp, a remarkably well-designed language, I assume you're aware that a language can't be reduced to its feature set. It's all about a coherent whole, isn't it?

In this regard, Haskell shines just as brightly as Common Lisp does. Its features combine to provide you with a way of programming that makes code extremely short and elegant. The lack of macros is mitigated somewhat by more elaborate (but, likewise, harder to understand and use) concepts like monads and arrows. The static type system adds to your power rather than getting in your way as it does in most object-oriented languages.

On the other hand, programming in Haskell is much less interactive than Lisp, and the tremendous amount of reflection present in languages like Lisp just doesn't fit the static view of the world that Haskell presupposes. The tool sets available to you are therefore quite different between the two languages, but hard to compare to one another.

I personally prefer the Lisp way of programming in general, as I feel it fits the way I work better. However, this doesn't mean you're bound to do so as well.

Bailey answered 25/12, 2008 at 11:28 Comment(2)
Could you elaborate a bit more on "programming in Haskell is much less interactive". Doesn't GHCi really provide everything you need?Wintergreen
@JohannesGerer: I have not tried it, but as far as I have read, GHCi is not a shell into the running image, where you can redefine and extend arbitrary parts of the entire program while it is running. Also, Haskell syntax makes it much harder to copy program fragments between the repl and the editor programmatically.Luminescent

There's less need for metaprogramming in Haskell than in Common Lisp, because much can be structured around monads and do-notation makes embedded DSLs look less tree-like. But there's always Template Haskell, as mentioned by ShreevatsaR, and even Liskell (Haskell semantics + Lisp syntax) if you like the parentheses.
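
As a rough illustration of the "less tree-like" point (a sketch only; the configuration-lookup example is made up), do-notation over an ordinary monad such as Maybe already reads as a flat sequence of steps rather than nested conditionals:

import Text.Read (readMaybe)

-- a tiny "configuration lookup" embedded DSL over an association list
type Config = [(String, String)]

lookupInt :: String -> Config -> Maybe Int
lookupInt key cfg = do
  raw <- lookup key cfg    -- Nothing if the key is missing
  readMaybe raw            -- Nothing if the value is not an Int

windowArea :: Config -> Maybe Int
windowArea cfg = do
  w <- lookupInt "width"  cfg
  h <- lookupInt "height" cfg
  return (w * h)

main :: IO ()
main = print (windowArea [("width", "80"), ("height", "25")])  -- Just 2000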

Onfroi answered 7/1, 2009 at 23:1 Comment(1)
the Liskell link is dead, but these days there's Hackett.Mariellamarielle

Concerning macros, here is a page that talks about this: Hello Haskell, Goodbye Lisp. It presents a point of view in which macros are simply not needed in Haskell, and it comes with a short example for comparison.

An example case where a Lisp macro is required to avoid evaluating both arguments:

(defmacro doif (x y) `(if ,x ,y))

An example case where Haskell does not systematically evaluate both arguments, without needing anything like a macro definition:

doif x y = if x then (Just y) else Nothing
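
A small usage sketch (restating the definition above with a type signature so it runs standalone): thanks to laziness, the second argument is only ever evaluated when the first is True.

doif :: Bool -> a -> Maybe a
doif x y = if x then Just y else Nothing

main :: IO ()
main = do
  print (doif True 42)                             -- Just 42
  print (doif False (error "boom") :: Maybe Int)   -- Nothing; the error is never forced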

And voilà

Elemental answered 23/2, 2010 at 23:20 Comment(12)
That's a common misconception. Yes, in Haskell laziness means that you don't need macros when you want to avoid evaluating some parts of an expression, but those are only the most trivial subset of all macro uses. Google for "The Swine Before Perl" for a talk demonstrating a macro that cannot be done with laziness. Also, if you do want some bit to be strict, then you can't do that as a function -- mirroring the fact that Scheme's delay cannot be a function.Fremont
@Eli Barzilay: I don't find this example very convincing. Here's a complete, simple Haskell translation of slide 40: pastebin.com/8rFYwTrEEmerick
@Reid Barton: Huh? The main point of that paper is creating a macro which is in fact a small DSL for specifying automatons that are getting "compiled" to Scheme code. Your code, OTOH, is a kind of a simple translation of the code -- but (a) it uses the table lookup that Shriram talks about in the beginning, and much more importantly, (b) you're using plain Haskell, and the result is still not close to defining such a DSL. AFAICT, the only thing that this demonstrates is "it's easy to write such code, easier when you can use function values in a table". I.e, not much related to macros.Fremont
@Eli Barzilay: I don't understand your response at all. accept is the (E)DSL. The accept function is the analogue of the macro outlined on the previous pages, and the definition of v is exactly parallel to the definition of v in Scheme on slide 40. The Haskell and Scheme functions compute the same thing with the same evaluation strategy. At best, the macro allows you to expose more of the structure of your program to the optimizer. You can hardly claim this as an example where macros increase the expressive power of the language in a way not replicated by lazy evaluation.Emerick
I'm not following any of this. First of all, yes -- accept is the function that does the work, but it's not a DSL, it's a function like all other functions -- and things like using it in all sub-lists, or the required use of where with its own scope is exactly what the macro makes unnecessary. As for lazy evaluation -- you're not using it in any significant way so I don't see how is this whole argument relevant.Fremont
@Eli Barzilay: In a hypothetical lazy Scheme, you could write this: pastebin.com/TN3F8VVE My general claim is that this macro buys you very little: slightly different syntax and an easier time for the optimizer (but it wouldn't matter to a "sufficiently smart compiler"). In exchange, you have trapped yourself into an inexpressive language; how do you define an automaton that matches any letter without listing them all? Also, I don't know what you mean by "using it in all sub-lists" or "the required use of where with its own scope".Emerick
Reid: (a) a lazy scheme is not hypothetical -- one has been part of Racket for several years; (b) the fact that macros are still useful there is a good hint; (c) what you wrote is also showing why a macro is useful -- it doesn't use one and is therefore not the DSL that Shriram is talking about; (d) by "using it in sublists" etc I meant that you have certain requirements on your "DSL" that come from the implementation (eg, the use of accept) -- that's one reason why it's not a DSL;Fremont
(e) The illusion that laziness makes flow-control macros (ones that have no new bindings) redundant can be seen as bogus if you think about adding a strict operator to a lazy language -- using such an operator requires special forms (and macros) too; (f) another point: if macros are not needed in Haskell, how come it does have them? (And even before TH, there were uses of CPP.)Fremont
Finally, (g) sure you trap yourself in an "inexpressive language" -- the whole point is a DSL -- not a GPL. Obviously, it's possible to write a more sophisticated macro that will have Scheme expressions (like a predicate for a symbol instead of listing them all), but that goes beyond the DSL in this example.Fremont
OK, I give up. Apparently your definition of DSL is "the arguments to a macro" and so my lazy Scheme example is not a DSL, despite being syntactically isomorphic to the original (automaton becoming letrec, : becoming accept, -> becoming nothing in this version). Whatever.Emerick
The link in this post seems to be broken @Hibou57, can you please check and fix your post if necessary?Anthracene
@batbrat, fixed and thanks for notifying me of the issue (they changed their URLs layout on this site).Elemental

I'm a Common Lisp programmer.

Having tried Haskell some time ago my personal bottom line was to stick with CL.

Reasons:

  • dynamic typing (check out Dynamic vs. Static Typing — A Pattern-Based Analysis by Pascal Costanza)
  • optional and keyword arguments
  • uniform homoiconic list syntax with macros
  • prefix syntax (no need to remember precedence rules)
  • impure and thus more suited for quick prototyping
  • powerful object system with meta-object protocol
  • mature standard
  • wide range of compilers

Haskell does have its own merits of course and does some things in a fundamentally different way, but it just doesn't cut it in the long term for me.

Rapping answered 9/7, 2009 at 8:3 Comment(6)
Hey do you happen to have the title of that Costanza paper you linked to? Looks like that file was moved.Outdistance
It might have been this one: p-cos.net/documents/dynatype.pdfRapping
Note that Haskell too supports prefix syntax, but I'd say that the monadic >>= would be very, very ugly using it. Also, I disagree with impurity being a blessing :PLavernalaverne
I like this side-note: We have not yet gathered empirical data whether this issue causes serious problems in real-world programs.Megalith
None of the examples in that paper (Pascal Costanza, Dynamic vs. Static Typing — A Pattern-Based Analysis) apply to Haskell. They're all Java-specific (or more precisely, specific to "object-oriented programming") and I can't see any of those issues coming up in Haskell. Similarly, all your other arguments are debatable: one can as well say Haskell is "pure and thus more suited for quick prototyping", that prefix syntax isn't compulsory, that it doesn't have a wide range of compilers that do different things, etc.Castorina
That paper is indeed almost completely irrelevant to Haskell. "dilbert = dogbert.hire(dilbert);"?? I doubt many Haskell programmers can even read this without twitching a little.Slowmoving

In Haskell you can define an if function, which is impossible in Lisp. This is possible because of laziness, which allows for more modularity in programs. The classic paper "Why Functional Programming Matters" by John Hughes explains how laziness enhances composability.
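
A minimal sketch of what that looks like (if' is just an illustrative name): an if written as an ordinary function, where the untaken branch is never evaluated.

if' :: Bool -> a -> a -> a
if' True  t _ = t
if' False _ e = e

main :: IO ()
main = putStrLn (if' (length "abc" == 3) "yes" (error "never evaluated"))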

Tulley answered 12/1, 2009 at 21:10 Comment(3)
Scheme (one of the two major LISP dialects) actually does have lazy evaluation, though it's not default as in Haskell.Curet
(defmacro doif (x y) `(if ,x ,y))Villanovan
A macro is not the same as a function -- macros do not work well with higher-order functions like fold, for example, whereas non-strict functions do.Metaplasia

There are really cool things that you can achieve in Lisp with macros that are cumbersome (if possible) in Haskell. Take for example the `memoize' macro (see Chapter 9 of Peter Norvig's PAIP). With it, you can define a function, say foo, and then simply evaluate (memoize 'foo), which replaces foo's global definition with a memoized version. Can you achieve the same effect in Haskell with higher-order functions?

Narghile answered 9/9, 2009 at 14:31 Comment(3)
Not quite (AFAIK), but you can do something similar by modifying the function (assuming it's recursive) to take the function to call recursively as a parameter(!) rather than simply calling itself by name: haskell.org/haskellwiki/MemoizationSpaceless
You can add foo to a lazy data-structure, where the value will be stored once computed. This will be effectively the same.Removable
Everything in Haskell is memoized and probably inlined when needed by default by the Haskell compiler.Hilariohilarious
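
A minimal sketch of the technique the comments above describe: rather than swapping out a global definition at run time the way Norvig's memoize macro does, results are stored in a lazily evaluated structure, so each one is computed at most once (here, memoized Fibonacci via a lazy list indexed by the argument).

memoFib :: Int -> Integer
memoFib = (fibs !!)
  where
    fibs = map fib [0 ..]          -- lazy list caching every result
    fib 0 = 0
    fib 1 = 1
    fib n = memoFib (n - 1) + memoFib (n - 2)

main :: IO ()
main = print (memoFib 50)          -- fast, because earlier results are shared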

As I continue my Haskell-learning journey, it seems that one thing that helps "replace" macros is the ability to define your own infix operators and customize their precedence and associativity. Kinda complicated, but an interesting system!
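
A small sketch of that (the operator name is arbitrary; it behaves like & from Data.Function): a user-defined infix operator together with an explicit fixity declaration.

infixl 1 |>          -- left-associative, very low precedence

(|>) :: a -> (a -> b) -> b
x |> f = f x

main :: IO ()
main = print ([1 .. 10] |> map (* 2) |> sum)   -- 110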

Rowel answered 29/12, 2008 at 23:42 Comment(0)
