How to handle expressions in Haskell?

Let's say I have:

f :: Double -> Double
f x = 3*x^2 + 5*x + 9

I would like to compute the derivative of this function and write

derivate f

so that

derivate f == \x -> 6*x + 5

but how to define derivate?

derivate :: (a -> a) -> (a -> a)
derivate f = f' -- how to compute f'?

I'm aware there is no native way to do this, but is there a library that can?

Do we have to rely on "meta"-datatypes to achieve this?

data Expr = Add Expr Expr | Mult Expr Expr | Power Expr Expr -- etc.

Then, isn't it a pain to write a corresponding constructor for each function? Besides, datatypes shouldn't represent functions (except for parsers).

Is Pure a good alternative because of its term-rewriting feature? Doesn't it have its drawbacks as well?

Are lists a viable alternative?

f :: [Double]
f = [3, 5, 9]  -- coefficients of 3*x^2 + 5*x + 9, highest power first

derivate :: Num a => [a] -> [a]
derivate f = init $ zipWith (*) f (map fromIntegral (getNs f))

compute :: Num a => [a] -> a -> a
compute f x = sum $ zipWith (\c n -> c * x ^ n) f (getNs f)

getNs :: [a] -> [Int]
getNs f = reverse [0 .. length f - 1]

Haskell then starts to look like Lisp with a less appropriate syntax: functions and their arguments end up stored in data structures, waiting to be combined. Plus, it's not very natural.

Lists don't seem "flexible" enough to extend my derivate function beyond polynomials, e.g. to homographic (rational) functions.

Right now, for example, I would like to use derivatives for a game. The character runs on a floor made using a function, and I would like him to slide if the floor is steep enough.

I also need to solve equations for various purposes. Some examples:

I'm a spaceship and I want to take a nap. During my sleep, if I don't place myself carefully, I might crash into a planet because of gravity. I don't have enough gas to go far away from celestial objects, and I don't have a map either. So I must place myself between the objects in this area so that the sum of their gravitational influence on me is canceled. x and y are my coordinates. gravity is a function that takes two objects and returns the vector of the gravitational force between them. If there are two objects, say the Earth and the Moon, besides me, all I need to do to find where to go is to solve:

gravity earth spaceship + gravity moon spaceship == (0, 0)

It's much simpler and faster than creating a new function equigravityPoint :: Object -> Object -> Object -> Point from scratch.

If there are 3 objects besides me, it's still simple.

gravity earth spaceship + gravity moon spaceship + gravity sun spaceship == (0, 0)

The same goes for 4 objects, and for n. Handling a list of objects this way is much simpler than with equigravityPoint.
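
For the two-body balance, a numeric fallback is also possible. The sketch below is my own (made-up 1-D positions, textbook masses, not anything from the question): along the Earth–Moon axis the balance condition reduces to finding a root of one function of x, which simple bisection can locate.

```haskell
-- Hypothetical 1-D setup: Earth at position 0, Moon at distance d.
-- netPull x is the difference of the two gravitational accelerations on a
-- ship at position x; G and the ship's mass cancel when we set it to zero.
netPull :: Double -> Double
netPull x = mEarth / x^2 - mMoon / (d - x)^2
  where
    mEarth = 5.97e24  -- kg
    mMoon  = 7.35e22  -- kg
    d      = 3.84e8   -- m, Earth-Moon distance

-- Plain bisection root finder; assumes f lo and f hi have opposite signs.
bisect :: (Double -> Double) -> Double -> Double -> Double
bisect f lo hi
  | hi - lo < 1e-3                  = mid
  | signum (f mid) == signum (f lo) = bisect f mid hi
  | otherwise                       = bisect f lo mid
  where mid = (lo + hi) / 2

-- The equilibrium point, somewhere between the Earth and the Moon.
equigravity :: Double
equigravity = bisect netPull 1e6 3.8e8
```

This finds the point where the two pulls cancel (about 90% of the way to the Moon for these masses), without any symbolic machinery.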

Another example. I want to code an enemy bot that shoots at me. If he just shoots at my current position, he will hit me if I run towards him, but he'll miss me if I jump and fall on him. A smarter bot thinks like this: "Well, he jumped from a wall. If I shoot at where he is now, the bullet won't hit him, because he will have moved by then. So I'm going to anticipate where he'll be in a few seconds and shoot there, so that the bullet and he reach that point at the same time." Basically, I need the ability to compute trajectories. For this case, I need the solution to trajectoryBullet == trajectoryCharacter, which gives the point where the line and the parabola meet.
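Equating a straight bullet trajectory with a ballistic character trajectory boils down to a quadratic in time, so a small quadratic solver covers this case. A sketch of my own (not from the question; assumes the leading coefficient is nonzero):

```haskell
-- Real roots of a*t^2 + b*t + c = 0, smallest first.
-- Assumes a /= 0 (a true line-vs-line intercept needs a separate case).
solveQuadratic :: Double -> Double -> Double -> [Double]
solveQuadratic a b c
  | disc < 0  = []
  | otherwise = [(-b - sq) / (2 * a), (-b + sq) / (2 * a)]
  where
    disc = b * b - 4 * a * c
    sq   = sqrt disc

-- solveQuadratic 1 (-5) 6 gives [2.0, 3.0]
```

The smaller positive root is the first time the bullet and the character coincide.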

A similar and simpler example, not involving speed. I'm a fireman bot and a building is on fire. Another team of firemen is fighting the fire with their water guns. I have a trampoline, and there are people jumping from the building. While my friends are shooting water, I hold the trampoline: I need to get to where the people will fall before they do. So again I need trajectories and equation solving.

Gunmaker answered 27/2, 2013 at 1:26 Comment(7)
You can probably accomplish this with template haskell, but it's going to be a lot of work!Firth
It sounds like you want a symbolic algebra system. E.g. muPad, Axiom, mathematica, maple, etc. But if all you want to do is to calculate the steepness of a floor, then a numerical approach is probably sufficient.Supple
Well, you should come to NYC tomorrow and we can all learn together about an alleged (I can't find any info on the interwebs) new symbolic algebra library in haskell called "Wheeler" by Greg Wright.Philippians
I believe what you're looking for is "Automatic differentiation". Conal Elliott's blog is one decent place to start: conal.net/blog/posts/…Adara
@Philippians I couldn't come to NYC but I'm looking forward to testing Wheeler!Gunmaker
@JohnL Thanks for the read, but I was looking for something broader, not only about derivatives, which were just one example of my "needs".Gunmaker
Related question: #548422Carmelocarmen

One way of doing this is to do automatic differentiation instead of symbolic differentiation; this is an approach where you simultaneously compute both f(x) and f′(x) in one computation. There's a really cool way of doing this using dual numbers that I learned about from Dan "sigfpe" Piponi's excellent blog post on automatic differentiation. You should probably just go read that, but here's the basic idea. Instead of working with the real numbers (or Double, our favorite (?) facsimile of them), you define a new set, which I'm going to call D, by adjoining a new element ε to ℝ such that ε² = 0. This is much like the way we define the complex numbers ℂ by adjoining a new element i to ℝ such that i² = −1. (If you like algebra, this is the same as saying D = ℝ[x]/⟨x²⟩.) Thus, every element of D is of the form a + bε, where a and b are real. Arithmetic over the dual numbers works like you expect:

  • (a + bε) ± (c + dε) = (a ± c) + (b ± d)ε; and
  • (a + bε)(c + dε) = ac + bcε + adε + bdε² = ac + (bc + ad)ε.

(Since ε² = 0, division is more complicated, although the multiply-by-the-conjugate trick you use with the complex numbers still works; see Wikipedia's explanation for more.)

Now, why are these useful? Intuitively, the ε acts like an infinitesimal, allowing you to compute derivatives with it. Indeed, if we rewrite the rule for multiplication using different names, it becomes

  • (f + f′ε)(g + g′ε) = fg + (f′g + fg′)ε

And the coefficient of ε there looks a lot like the product rule for differentiating products of functions!

So, then, let's work out what happens for one large class of functions. Since we've ignored division above, suppose we have some function f : ℝ → ℝ defined by a power series (possibly finite, so any polynomial is OK, as are things like sin(x), cos(x), and eˣ). Then we can define a new function fD : D → D in the obvious way: instead of adding real numbers, we add dual numbers, etc., etc. Then I claim that fD(x + ε) = f(x) + f′(x)ε. First, we can show by induction that for any natural number i, it's the case that (x + ε)ⁱ = xⁱ + ixⁱ⁻¹ε; this will establish our derivative result for the case where f(x) = xⁱ. In the base case, this equality clearly holds when i = 0. Then supposing it holds for i, we have

  • (x + ε)ⁱ⁺¹ = (x + ε)(x + ε)ⁱ by factoring out one copy of (x + ε)
  • = (x + ε)(xⁱ + ixⁱ⁻¹ε) by the inductive hypothesis
  • = xⁱ⁺¹ + (xⁱ + x·ixⁱ⁻¹)ε by the definition of dual-number multiplication
  • = xⁱ⁺¹ + (i+1)xⁱε by simple algebra.

And indeed, this is what we wanted. Now, considering our power series f, we know that

  • f(x) = a₀ + a₁x + a₂x² + … + aᵢxⁱ + …

Then we have

  • fD(x + ε) = a₀ + a₁(x + ε) + a₂(x + ε)² + … + aᵢ(x + ε)ⁱ + …
  • = a₀ + (a₁x + a₁ε) + (a₂x² + 2a₂xε) + … + (aᵢxⁱ + iaᵢxⁱ⁻¹ε) + … by the above lemma
  • = (a₀ + a₁x + a₂x² + … + aᵢxⁱ + …) + (a₁ε + 2a₂xε + … + iaᵢxⁱ⁻¹ε + …) by commutativity
  • = (a₀ + a₁x + a₂x² + … + aᵢxⁱ + …) + (a₁ + 2a₂x + … + iaᵢxⁱ⁻¹ + …)ε by factoring out the ε
  • = f(x) + f′(x)ε by definition.

Great! So dual numbers (at least for this case, but the result is generally true) can do differentiation for us. All we have to do is apply our original function to, not the real number x, but the dual number x + ε, and then extract the resulting coefficient of ε. And I bet you can see how one could implement this in Haskell:

data Dual a = !a :+? !a deriving (Eq, Read, Show)
infix 6 :+?

instance Num a => Num (Dual a) where
  (a :+? b) + (c :+? d) = (a+c) :+? (b+d)
  (a :+? b) - (c :+? d) = (a-c) :+? (b-d)
  (a :+? b) * (c :+? d) = (a*c) :+? (b*c + a*d)
  negate (a :+? b)      = (-a) :+? (-b)
  fromInteger n         = fromInteger n :+? 0
  -- abs and signum might actually exist, but I'm not sure what they are.
  abs    _              = error "No abs for dual numbers."
  signum _              = error "No signum for dual numbers."

-- Instances for Fractional, Floating, etc., are all possible too.

differentiate :: Num a => (Dual a -> Dual a) -> (a -> a)
differentiate f x = case f (x :+? 1) of _ :+? f'x -> f'x

-- Your original f, but with a more general type signature.  This polymorphism is
-- essential!  Otherwise, we can't pass f to differentiate.
f :: Num a => a -> a
f x = 3*x^2 + 5*x + 9

f' :: Num a => a -> a
f' = differentiate f

And then, lo and behold:

*Main> f 42
5511
*Main> f' 42
257

Which, as Wolfram Alpha can confirm, is exactly the right answer.
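
The "Instances for Fractional, Floating, etc." comment above can be made concrete. Here's a sketch of my own of the Fractional instance (repeating the Dual type so the snippet stands alone); the derivative part of division follows the quotient rule, (a + bε)/(c + dε) = a/c + ((bc − ad)/c²)ε:

```haskell
-- The dual-number type from the answer above, minus strictness annotations.
data Dual a = a :+? a deriving (Eq, Show)
infix 6 :+?

instance Num a => Num (Dual a) where
  (a :+? b) + (c :+? d) = (a + c) :+? (b + d)
  (a :+? b) - (c :+? d) = (a - c) :+? (b - d)
  (a :+? b) * (c :+? d) = (a * c) :+? (b * c + a * d)
  negate (a :+? b)      = negate a :+? negate b
  fromInteger n         = fromInteger n :+? 0
  abs    _              = error "No abs for dual numbers."
  signum _              = error "No signum for dual numbers."

-- Division by the quotient rule; the ε-coefficient is (bc - ad)/c².
instance Fractional a => Fractional (Dual a) where
  (a :+? b) / (c :+? d) = (a / c) :+? ((b * c - a * d) / (c * c))
  fromRational r        = fromRational r :+? 0

differentiate :: Num a => (Dual a -> Dual a) -> (a -> a)
differentiate f x = case f (x :+? 1) of _ :+? f'x -> f'x

-- d/dx (1 / (x + 1)) = -1/(x + 1)^2; at x = 1 this gives -0.25.
recipDeriv :: Double
recipDeriv = differentiate (\x -> 1 / (x + 1)) 1
```

A Floating instance works the same way: for each primitive g, define g(a + bε) = g(a) + g′(a)·bε, which is just the chain rule.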

More information about this stuff is definitely available. I'm not any kind of expert on this; I just think the idea is really cool, so I'm taking this chance to parrot what I've read and work out a simple proof or two. Dan Piponi has written more about dual numbers/automatic differentiation, including a post where, among other things, he shows a more general construction which allows for partial derivatives. Conal Elliott has a post where he shows how to compute derivative towers (f(x), f′(x), f″(x), …) in an analogous way. The Wikipedia article on automatic differentiation linked above goes into some more detail, including some other approaches. (This is apparently a form of "forward mode automatic differentiation", but "reverse mode" also exists, and can apparently be faster.)

Finally, there's a Haskell wiki page on automatic differentiation, which links to some articles—and, importantly, some Hackage packages! I've never used these, but it appears that the ad package, by Edward Kmett, is the most complete, handling multiple different ways of doing automatic differentiation—and it turns out that he uploaded that package after writing a package to properly answer another Stack Overflow question.


I do want to add one other thing. You say "However, datatypes should not represent functions (except for parsers)." I'd have to disagree there—reifying your functions into data types is great for all sorts of things in this vein. (And what makes parsers special, anyway?) Any time you have a function you want to introspect, reifying it as a data type can be a great option. For instance, here's an encoding of symbolic differentiation, much like the encoding of automatic differentiation above:

data Symbolic a = Const a
                | Var String
                | Symbolic a :+: Symbolic a
                | Symbolic a :-: Symbolic a
                | Symbolic a :*: Symbolic a
                deriving (Eq, Read, Show)
infixl 6 :+:
infixl 6 :-:
infixl 7 :*:

eval :: Num a => (String -> a) -> Symbolic a -> a
eval env = go
  where go (Const a) = a
        go (Var   x) = env x
        go (e :+: f) = go e + go f
        go (e :-: f) = go e - go f
        go (e :*: f) = go e * go f

instance Num a => Num (Symbolic a) where
  (+)         = (:+:)
  (-)         = (:-:)
  (*)         = (:*:)
  negate      = (0 -)
  fromInteger = Const . fromInteger
  -- Ignoring abs and signum again
  abs         = error "No abs for symbolic numbers."
  signum      = error "No signum for symbolic numbers."

-- Instances for Fractional, Floating, etc., are all possible too.

differentiate :: Num a => Symbolic a -> String -> Symbolic a
differentiate f x = go f
  where go (Const a)             = 0
        go (Var   y) | x == y    = 1
                     | otherwise = 0
        go (e :+: f)             = go e + go f
        go (e :-: f)             = go e - go f
        go (e :*: f)             = go e * f + e * go f

f :: Num a => a -> a
f x = 3*x^2 + 5*x + 9

f' :: Num a => a -> a
f' x = eval (const x) $ differentiate (f $ Var "x") "x"

And once again:

*Main> f 42
5511
*Main> f' 42
257

The beauty of both of these solutions (or one piece of it, anyway) is that as long as your original f is polymorphic (of type Num a => a -> a or similar), you never have to modify f! The only place you need to put derivative-related code is in the definition of your new data type and in your differentiation function; you get the derivatives of your existing functions for free.

Misbelief answered 27/2, 2013 at 5:16 Comment(4)
That's incredibly cool and the fact that you actually included a working (and really short too!) implementation makes this a really amazing answer. Thank you.Rai
Great answer! I'd like to point out: for only calculating values of the derivative, automatic differentiation is much better – doing this symbolically can lead so some efficiency problems, especially when let statements are involved. That's not to say that I'd object to more symbolic approaches, but it's often better to use specialised data types for what you want to calculate. As you said, thanks to polymorphism you can without problems evaluate the same expression in multiple different ways.Suffumigate
Thanks a lot for automatic differentiation. I'll probably use it in combination with symbolic calculus for the other purposes, such as equation solving (I updated the question to give examples of such uses).Gunmaker
I don't like how symbolic and "regular" computation are separate, because it leads to boilerplate that could be avoided IMO. That's why I'll take a look at Maxima (Mathematica is not open-source). I wonder if Haskell could be made into one of these CAS languages... And what makes parsers special is that you can't avoid using datatypes to represent the syntax of the language you parse.Gunmaker

A numerical derivative can be computed easily:

derive f x = (f (x + dx) - f (x - dx)) / (2 * dx) where dx = 0.00001
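
As a quick sanity check of this central-difference formula (type signature added; the f is the one from the question):

```haskell
-- Central-difference approximation of f'(x) with a fixed step size.
derive :: (Double -> Double) -> Double -> Double
derive f x = (f (x + dx) - f (x - dx)) / (2 * dx)
  where dx = 1e-5

f :: Double -> Double
f x = 3*x^2 + 5*x + 9

-- derive f 42 is approximately 257 (the exact derivative, 6*42 + 5),
-- up to floating-point rounding.
```

For a polynomial of degree 2 the central difference is exact in theory; in practice you only lose a little precision to rounding.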

However, for symbolic derivatives, you need to create an AST, then implement the derivation rules through matching and rewriting the AST.

Magdala answered 27/2, 2013 at 1:32 Comment(1)
If only you could throw a limit as dx approaches zero in there!Firth

I don't understand your problem with using a custom data type

data Expr = Plus Expr Expr
          | Times Expr Expr
          | Negate Expr
          | Exp Expr Expr
          | Abs Expr
          | Signum Expr
          | FromInteger Integer
          | Var

instance Num Expr where
   fromInteger = FromInteger
   (+) = Plus
   (*) = Times
   negate = Negate
   abs = Abs
   signum = Signum

toNumF :: Num a => Expr -> a -> a
toNumF e x = go e where
  go Var             = x
  go (FromInteger i) = fromInteger i
  go (Plus a b)      = go a + go b
  go (Times a b)     = go a * go b
  go (Negate a)      = negate (go a)
  go (Abs a)         = abs (go a)
  go (Signum a)      = signum (go a)
  -- the Exp case needs more than Num (e.g. an Integral or Floating exponent)

you can then use this just like you would Int or Double and all will just work! You can define a function

deriveExpr :: Expr -> Expr

which would then let you define the following (RankN) function

{-# LANGUAGE RankNTypes #-}

derivate :: Num b => (forall a. Num a => a -> a) -> b -> b
derivate f = toNumF $ deriveExpr (f Var)

you can extend this to work with other parts of the numerical hierarchy.
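
deriveExpr itself can be filled in by structural recursion. Here's a self-contained sketch of my own (trimming Expr to the constructors the rules below cover, and skipping Exp, whose general power rule needs more than Num):

```haskell
{-# LANGUAGE RankNTypes #-}

data Expr = Plus Expr Expr
          | Times Expr Expr
          | Negate Expr
          | FromInteger Integer
          | Var
  deriving Show

instance Num Expr where
  fromInteger = FromInteger
  (+)         = Plus
  (*)         = Times
  negate      = Negate
  abs         = error "No abs for Expr."
  signum      = error "No signum for Expr."

-- Structural differentiation rules.
deriveExpr :: Expr -> Expr
deriveExpr Var             = FromInteger 1
deriveExpr (FromInteger _) = FromInteger 0
deriveExpr (Plus a b)      = Plus (deriveExpr a) (deriveExpr b)
deriveExpr (Negate a)      = Negate (deriveExpr a)
deriveExpr (Times a b)     = Plus (Times (deriveExpr a) b)
                                  (Times a (deriveExpr b))  -- product rule

-- Interpret an Expr back into any Num type.
toNumF :: Num a => Expr -> a -> a
toNumF e x = go e
  where go Var             = x
        go (FromInteger i) = fromInteger i
        go (Plus a b)      = go a + go b
        go (Times a b)     = go a * go b
        go (Negate a)      = negate (go a)

derivate :: Num b => (forall a. Num a => a -> a) -> b -> b
derivate f = toNumF (deriveExpr (f Var))
```

With this, derivate (\x -> 3*x^2 + 5*x + 9) 42 evaluates to 257, matching the other answers.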

Munos answered 27/2, 2013 at 4:35 Comment(2)
It works fine, but I don't like how, for instance, Plus acts as a duplicate of (+). Maybe it's a Haskell or library issue, but I think you should be able to use the expression 1 + 3x however you like. Haskell computes it and doesn't let you manipulate its terms, because of referential transparency. You need a special construct which redefines many existing functions as "values"! Maybe it would be better to have two modes for functions: the normal one, and an algebraic one, where you can manipulate the original expression and not just its result.Gunmaker
But you only need to write this code once. Once you do, you can write derivate (\x -> 2*x*x) and get an answer using whatever kind of number you want.Munos
