Advantages of stateless programming?

I've recently been learning about functional programming (specifically Haskell, but I've gone through tutorials on Lisp and Erlang as well). While I found the concepts very enlightening, I still don't see the practical side of the "no side effects" concept. What are the practical advantages of it? I'm trying to think in the functional mindset, but there are some situations that just seem overly complex without the ability to save state in an easy way (I don't consider Haskell's monads 'easy').

Is it worth continuing to learn Haskell (or another purely functional language) in-depth? Is functional or stateless programming actually more productive than procedural? Is it likely that I will continue to use Haskell or another functional language later, or should I learn it only for the understanding?

I care less about performance than productivity. So I'm mainly asking if I will be more productive in a functional language than a procedural/object-oriented/whatever.

Politicize answered 10/5, 2009 at 2:9 Comment(1)
By productivity do you mean generating a code-based solution faster?Counterreply

Read Functional Programming in a Nutshell.

There are lots of advantages to stateless programming, not least of which is dramatically simpler multithreaded and concurrent code. To put it bluntly, mutable state is the enemy of multithreaded code. If values are immutable by default, programmers don't need to worry about one thread mutating state that is shared with another thread, so a whole class of multithreading bugs related to race conditions is eliminated. Since there are no race conditions, there's no reason to use locks either, so immutability eliminates another whole class of bugs related to deadlocks as well.

That's the big reason why functional programming matters, and probably the best one for jumping on the functional programming train. There are also lots of other benefits, including simplified debugging (i.e. functions are pure and do not mutate state in other parts of an application), more terse and expressive code, less boilerplate code compared to languages which are heavily dependent on design patterns, and the compiler can more aggressively optimize your code.
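
To make the concurrency point concrete, here is a minimal Haskell sketch (it assumes the parallel package from Hackage and a -threaded build; fib is just a stand-in workload): because fib is pure, the runtime can evaluate the calls on separate cores with no locks, since nothing is ever mutated.

import Control.Parallel.Strategies (parMap, rdeepseq)

-- A deliberately naive pure function: no shared mutable state anywhere.
fib :: Integer -> Integer
fib 0 = 0
fib 1 = 1
fib n = fib (n - 1) + fib (n - 2)

main :: IO ()
main = print (parMap rdeepseq fib [25 .. 30])  -- safe to evaluate in parallel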

Coverall answered 10/5, 2009 at 2:20 Comment(12)
I second this! I believe functional programming will be used much more widely in the future because of its suitability to parallel programming.Winnifredwinning
@Ray: I would also add distributed programming!Rimma
Most of that is true, except for debugging. It's generally more difficult in Haskell, because you don't have a real call stack, just a pattern-match stack. And it's a lot harder to predict what your code ends up as.Striking
@Striking That is compensated for by the possibility of writing less buggy code in Haskell. Actually, my real concern is performance.Feminine
@Feminine No, it isn't. You cannot compensate for bad debugging. And you can write buggy Haskell code rather easily, including state bugs.Striking
@Striking Maybe the crux is using functional programming to go 'stateless', as with Erlang. The thing is, as of now, functional programming is ideal for processing data in a stateless way and is more conducive to correctness than imperative programming.Feminine
@Feminine I disagree with that too. It's not about functional vs imperative, it's about "reasoning about code". That can also be achieved in an imperative language which a) has an effect system and requires all effects to appear at the type level, b) uses immutability by default, and c) is memory-safe.Striking
Also, functional programming isn't really about "stateless". Recursion is already implicit (local) state and is the main thing we do in Haskell. That becomes clear once you implement a few non-trivial algorithms in idiomatic Haskell (e.g. computational geometry stuff) and have fun debugging those.Striking
@hasfull, I didn't mean that functional programming is about "stateless". I said that programming "stateless" in Erlang is a "goal from the middle camp"; also, in Erlang there is no recursion, because there is no return concept in functions, so there is no implicit local state or stack data. Of course, that "stateless" approach can't be used for all solutions; that's why I think it fits big-data computations without persistence. One example application is computing in big networks.Feminine
Dislike equating stateless with FP. Many FP programs are filled with state, it simply exists in a closure rather than an object.Hifi
@mikemaccana, I think we get quickly into the weeds with this. Perhaps it's like saying that "many OO programs are filled with procedural programming". They are still OO programs. Things have always been a tad muddy when sticking paradigm labels on things.Lappet
It's important that people don't confuse FP with stateless however - separating state from functions is a different concern from avoiding state at all.Hifi

The more pieces of your program are stateless, the more ways there are to put pieces together without having anything break. The power of the stateless paradigm lies not in statelessness (or purity) per se, but the ability it gives you to write powerful, reusable functions and combine them.

You can find a good tutorial with lots of examples in John Hughes's paper Why Functional Programming Matters (PDF).

You will be gobs more productive, especially if you pick a functional language that also has algebraic data types and pattern matching (Caml, SML, Haskell).
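
As a small illustration of the algebraic data types and pattern matching mentioned above, here is a minimal Haskell sketch (the Shape type is hypothetical, purely for illustration):

-- An algebraic data type: a Shape is exactly one of these alternatives.
data Shape
  = Circle Double             -- radius
  | Rectangle Double Double   -- width and height

-- Pattern matching takes each alternative apart; the compiler can warn if a case is missed.
area :: Shape -> Double
area (Circle r)      = pi * r * r
area (Rectangle w h) = w * h

main :: IO ()
main = mapM_ (print . area) [Circle 1.0, Rectangle 2.0 3.0]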

Quintus answered 10/5, 2009 at 2:39 Comment(1)
Wouldn't mixins also provide reusable code in a similar way with OOP? Not advocating OOP just trying to understand things myself.Hifi

Many of the other answers have focused on the performance (parallelism) side of functional programming, which I believe is very important. However, you did specifically ask about productivity, as in, can you program the same thing faster in a functional paradigm than in an imperative paradigm.

I actually find (from personal experience) that programming in F# matches the way I think better, and so it's easier. I think that's the biggest difference. I've programmed in both F# and C#, and there's a lot less "fighting the language" in F#, which I love. You don't have to think about the details in F#. Here are a few examples of what I've found I really enjoy.

For example, even though F# is statically typed (all types are resolved at compile time), the type inference figures out what types you have, so you don't have to say it. And if it can't figure it out, it automatically makes your function/class/whatever generic. So you never have to write any generics yourself; it's all automatic. I find that means I'm spending more time thinking about the problem and less about how to implement it. In fact, whenever I come back to C#, I find I really miss this type inference; you never realise how distracting it is until you no longer need to do it.

Also in F#, instead of writing loops, you call functions. It's a subtle change, but significant, because you don't have to think about the loop construct anymore. For example, here's a piece of code which would go through and match something (I can't remember what; it's from a Project Euler puzzle):

let matchingFactors =
    factors
    |> Seq.filter (fun x -> largestPalindrome % x = 0)
    |> Seq.map (fun x -> (x, largestPalindrome / x))

I realise that doing a filter and then a map (that is, a conversion of each element) in C# would be quite simple, but you have to think at a lower level. In particular, you'd have to write the loop itself, have your own explicit if statement, and those kinds of things. Since learning F#, I've realised it's easier to code in the functional way, where if you want to filter, you write "filter", and if you want to map, you write "map", instead of implementing each of the details.

I also love the |> operator, which I think separates F# from OCaml and possibly other functional languages. It's the pipe operator; it lets you "pipe" the output of one expression into the input of another expression. It makes the code follow the way I think more closely. Like in the code snippet above, that's saying, "take the factors sequence, filter it, then map it." It's a very high level of thinking, which you don't get in an imperative programming language because you're so busy writing the loop and if statements. It's the one thing I miss the most whenever I go into another language.
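
For comparison, roughly the same pipeline can be sketched in Haskell, where the & operator from Data.Function plays a role similar to F#'s |> (the names are reused from the F# snippet above; the main is just a made-up usage example):

import Data.Function ((&))

matchingFactors :: Integer -> [Integer] -> [(Integer, Integer)]
matchingFactors largestPalindrome factors =
  factors
    & filter (\x -> largestPalindrome `mod` x == 0)  -- keep only the divisors
    & map (\x -> (x, largestPalindrome `div` x))     -- pair each with its cofactor

main :: IO ()
main = print (matchingFactors 36 [1 .. 12])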

So just in general, even though I can program in both C# and F#, I find it easier to use F# because you can think at a higher level. I would argue that because the smaller details are removed from functional programming (in F# at least), I am more productive.

Edit: I saw in one of the comments that you asked for an example of "state" in a functional programming language. F# can be written imperatively, so here's a direct example of how you can have mutable state in F#:

let mutable x = 5
for i in 1..10 do
    x <- x + i
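
For comparison, the same mutable loop can be sketched in Haskell using an IORef from the standard library; the mutation is confined to IO, so it is visible in the type:

import Data.IORef (newIORef, modifyIORef', readIORef)
import Control.Monad (forM_)

main :: IO ()
main = do
  x <- newIORef (5 :: Int)                      -- a mutable cell, initial value 5
  forM_ [1 .. 10] $ \i -> modifyIORef' x (+ i)  -- x <- x + i, ten times
  readIORef x >>= print                         -- prints 60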
Winnifredwinning answered 10/5, 2009 at 4:14 Comment(3)
I agree with your post generally, but |> has nothing to do with functional programming per se. Actually, a |> b (p1, p2) is just syntactic sugar for b (a, p1, p2). Couple this with right-associativity and you've got it.Rimma
True, I should acknowledge that probably a lot of my positive experience with F# has more to do with F# than it does with functional programming. But still, there is a strong correlation between the two, and even though things like type inference and |> aren't functional programming per se, certainly I would claim they "go with the territory." At least in general.Winnifredwinning
|> is just another higher-order infix function, in this case a function-application operator. Defining your own higher-order, infix operators is definitely a part of functional programming (unless you're a Schemer). Haskell has its $ which is the same except information in the pipeline flows right to left.Quintus

Consider all the difficult bugs you've spent a long time debugging.

Now, how many of those bugs were due to "unintended interactions" between two separate components of a program? (Nearly all threading bugs have this form: races involving writing shared data, deadlocks, ... Additionally, it is common to find libraries that have some unexpected effect on global state, or read/write the registry/environment, etc.) I would posit that at least 1 in 3 'hard bugs' fall into this category.

Now if you switch to stateless/immutable/pure programming, all those bugs go away. You are presented with some new challenges instead (e.g. when you do want different modules to interact with the environment), but in a language like Haskell, those interactions get explicitly reified into the type system, which means you can just look at the type of a function and reason about the type of interactions it can have with the rest of the program.
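
Here is a minimal sketch of what "reified into the type system" looks like in Haskell (both functions are hypothetical, purely for illustration): the type alone tells you whether a function can interact with the outside world.

-- No IO in the type: this function cannot touch anything outside its arguments.
pureSum :: [Int] -> Int
pureSum = sum

-- IO in the type: the signature itself advertises the interaction.
loggedSum :: [Int] -> IO Int
loggedSum xs = do
  putStrLn ("summing " ++ show (length xs) ++ " numbers")
  return (sum xs)

main :: IO ()
main = loggedSum [1, 2, 3] >>= print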

That's the big win from 'immutability' IMO. In an ideal world, we'd all design terrific APIs and even when things were mutable, effects would be local and well-documented and 'unexpected' interactions would be kept to a minimum. In the real world, there are lots of APIs that interact with global state in myriad ways, and these are the source of the most pernicious bugs. Aspiring to statelessness is aspiring to be rid of unintended/implicit/behind-the-scenes interactions among components.

Denitadenitrate answered 10/5, 2009 at 5:11 Comment(1)
Someone once said that overwriting a mutable value means that you are explicitly garbage collecting/freeing the previous value. In some cases other parts of the program weren't done using that value. When values cannot be mutated, this class of bugs also goes away.Kip

One advantage of stateless functions is that they permit precalculation or caching of the function's return values. Even some C compilers allow you to explicitly mark functions as stateless to improve their optimisability. As many others have noted, stateless functions are much easier to parallelise.
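
A sketch of the caching point in Haskell (memoFib is a made-up example): because the function has no side effects, its results can be stored in a lazily built table and reused, and no caller can observe the difference.

-- Results are shared through 'table'; laziness fills it in on demand.
memoFib :: Int -> Integer
memoFib = (table !!)
  where
    table = map fib [0 ..]
    fib 0 = 0
    fib 1 = 1
    fib n = memoFib (n - 1) + memoFib (n - 2)

main :: IO ()
main = print (memoFib 50)  -- fast, despite the naive-looking recursion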

But efficiency is not the only concern. A pure function is easier to test and debug since anything that affects it is explicitly stated. And when programming in a functional language, one gets in the habit of making as few functions "dirty" (with I/O, etc.) as possible. Separating out the stateful stuff this way is a good way to design programs, even in not-so-functional languages.

Functional languages can take a while to "get", and it's difficult to explain to someone who hasn't gone through that process. But most people who persist long enough finally realise that the fuss is worth it, even if they don't end up using functional languages much.

Kiyokokiyoshi answered 10/5, 2009 at 2:31 Comment(2)
That first part is a really interesting point, I'd never thought about that before. Thanks!Politicize
Suppose you have sin(PI/3) in your code, where PI is a constant; the compiler could evaluate this function at compile time and embed the result in the generated code.Kiyokokiyoshi

Without state, it is very easy to automatically parallelize your code (and as CPUs are made with more and more cores, this is very important).

Botha answered 10/5, 2009 at 2:12 Comment(5)
Yes, I've definitely looked into that. Erlang's concurrency model in particular is very intriguing. However, at this point I don't really care about concurrency as much as productivity. Is there a productivity bonus from programming without state?Politicize
@musicfreak, no, there isn't a productivity bonus. But as a note, modern FP languages still let you use state if you really need it.Little
Really? Can you give an example of state in a functional language, just so I can see how it's done?Politicize
Check out the State Monad in Haskell - book.realworldhaskell.org/read/monads.html#x_NZAngstrom
@Unknown: I disagree. Programming without state reduces the occurrence of bugs that are due to unforeseen/unintended interactions of different components. It also encourages better design (more reusability, separation of mechanism and policy, and that sort of stuff). It's not always appropriate for the task at hand, but in some cases it really shines.Kiyokokiyoshi

Stateless web applications are essential when you start having higher traffic.

There could be plenty of user data that you don't want to store on the client side, for security reasons for example. In that case you need to store it server-side. You could use the web application's default session, but if you have more than one instance of the application, you will need to make sure that each user is always directed to the same instance.

Load balancers often have the ability to use 'sticky sessions', where the load balancer somehow knows which server to send the user's request to. This is not ideal, though; for example, it means that every time you restart your web application, all connected users lose their sessions.

A better approach is to store the session behind the web servers in some sort of data store; these days there are plenty of great NoSQL products available for this (Redis, Mongo, Elasticsearch, Memcached). This way the web servers are stateless, but you still have state server-side, and the availability of this state can be managed by choosing the right data store setup. These data stores usually have great redundancy, so it should almost always be possible to make changes to your web application, and even to the data store, without impacting the users.
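
As a sketch of that architecture (the SessionStore interface and the in-memory backend below are hypothetical, only to make the idea concrete and self-contained): the web tier stays stateless by pushing session state behind a small storage interface, and a shared backend such as Redis or Memcached could sit behind the same interface in a real deployment.

import qualified Data.Map.Strict as Map
import Data.IORef (newIORef, modifyIORef', readIORef)

type SessionId = String

-- Request handlers see only this interface, never the state itself.
data SessionStore = SessionStore
  { saveSession :: SessionId -> String -> IO ()
  , loadSession :: SessionId -> IO (Maybe String)
  }

-- An in-process stand-in backend so the sketch runs on its own.
inMemoryStore :: IO SessionStore
inMemoryStore = do
  ref <- newIORef Map.empty
  return SessionStore
    { saveSession = \sid v -> modifyIORef' ref (Map.insert sid v)
    , loadSession = \sid -> Map.lookup sid <$> readIORef ref
    }

main :: IO ()
main = do
  store <- inMemoryStore
  saveSession store "user-42" "cart=3"
  loadSession store "user-42" >>= print  -- Just "cart=3"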

Salpinx answered 22/8, 2013 at 12:47 Comment(0)

My understanding is that FP also has a huge impact on testing. Not having mutable state will often force you to supply more data to a function than you would have to for a class. There are tradeoffs, but think about how easy it would be to test a function like "incrementNumberByN" rather than a "Counter" class.

Object

describe("counter", () => {
    it("should increment the count by one when 'increment' invoked without 
    argument", () => {
       const counter = new Counter(0)
       counter.increment()
       expect(counter.count).toBe(1)
    })
   it("should increment the count by n when 'increment' invoked with 
    argument", () => {
       const counter = new Counter(0)
       counter.increment(2)
       expect(counter.count).toBe(2)
    })
})

functional

 describe("incrementNumberBy(startingNumber, increment)", () => {

   it("should increment by 1 if n not supplied"){
      expect(incrementNumberBy(0)).toBe(1)
   }

   it("should increment by 1 if n = 1 supplied"){
      expect(countBy(0, 1)).toBe(1)
   }

 })

Since the function has no state and the data going in is more explicit, there are fewer things to focus on when you are trying to figure out why a test might be failing. In the tests for the counter, we had to write:

       const counter = new Counter(0)
       counter.increment()
       expect(counter.count).toBe(1)

Both of the first two lines contribute to the value of counter.count. In a simple example like this, one versus two lines of potentially problematic code isn't a big deal, but when you deal with a more complex object, you might be adding a ton of complexity to your testing as well.

In contrast, when you write a project in a functional language, it nudges you towards keeping fancy algorithms dependent on the data flowing in and out of a particular function, rather than being dependent on the state of your system.

Another way of looking at it would be illustrating the mindset for testing a system in each paradigm.

For Functional Programming: Make sure function A works with given inputs, make sure function B works with given inputs, and make sure function C works with given inputs.

For OOP: Make sure Object A's method works given an input argument of X after doing Y and Z to the state of the object. Make sure Object B's method works given an input argument of X after doing W and Y to the state of the object.

Insociable answered 29/6, 2021 at 18:25 Comment(0)

The advantages of stateless programming coincide with those of goto-free programming, only more so.

Though many descriptions of functional programming emphasize the lack of mutation, the lack of mutation also goes hand in hand with the lack of unconditional control transfers, such as loops. In functional programming languages, recursion, in particular tail recursion, replaces looping. Recursion eliminates both the unconditional control construct and the mutation of variables in the same stroke. The recursive call binds argument values to parameters, rather than assigning values.
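
A minimal Haskell sketch of that replacement (sumTo is a made-up example): the accumulating parameter of the tail call plays the role the mutable loop counter would play in an imperative version.

sumTo :: Integer -> Integer
sumTo n = go 0 1
  where
    go acc i
      | i > n     = acc                    -- loop "exit": return the accumulator
      | otherwise = go (acc + i) (i + 1)   -- rebind the arguments; nothing is assigned

main :: IO ()
main = print (sumTo 10)  -- 55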

To understand why this is advantageous, rather than turning to functional programming literature, we can consult the 1968 paper by Dijkstra, "Go To Statement Considered Harmful":

"The unbridled use of the go to statement has an immediate consequence that it becomes terribly hard to find a meaningful set of coordinates in which to describe the process progress."

Dijkstra's observations, however, still apply to structured programs which avoid go to, because statements like while, if and whatnot are just window dressing on go to! Without using go to, we can still find it impossible to find the coordinates in which to describe the process's progress. Dijkstra neglected to observe that the bridled go to still has all the same issues.

What this means is that at any given point in the execution of the program, it is not clear how we got there. When we run into a bug, we have to use backwards reasoning: how did we end up in this state? How did we branch into this point of the code? Often it is hard to follow: the trail goes back a few steps and then runs cold due to a vastness of possibilities.

Functional programming gives us the absolute coordinates. We can rely on analytical tools like mathematical induction to understand how the program arrived into a certain situation.

For example, to convince ourselves that a recursive function is correct, we can just verify its base cases, and then understand and check its inductive hypothesis.
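
For instance, here is that reasoning sketched on a tiny list function (myLength is a made-up example): verify the base case, then check the recursive case under the assumption that the claim already holds for the tail.

myLength :: [a] -> Int
myLength []       = 0                 -- base case: the empty list has length 0
myLength (_ : xs) = 1 + myLength xs   -- inductive step: assume myLength xs is correct

main :: IO ()
main = print (myLength "stateless")   -- 9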

If the logic is written as a loop with mutating variables, we need a more complicated set of tools: breaking the logic down into steps with pre- and post-conditions, which we rewrite in terms of mathematics that refers to the prior and current values of variables and such. Yes, if the program uses only certain control structures, avoiding go to, then the analysis is somewhat easier. The tools are tailored to the structures: we have a recipe for how we analyze the correctness of an if, a while, and other structures.

However, by contrast, in a functional program there is no prior value of any variable to reason about; that whole class of problem has gone away.

Scalf answered 21/12, 2022 at 21:49 Comment(0)

Haskell and Prolog are good examples of languages which could be implemented as stateless programming languages, but unfortunately they are not so far; both Prolog and Haskell currently have imperative implementations. Some SMT solvers seem closer to stateless coding.

This is why you are having a hard time seeing any benefits from these programming languages: because of their imperative implementations, we get neither the performance nor the stability benefits. The lack of a stateless language infrastructure is the main reason you cannot yet feel the advantages of a stateless programming language; it is simply absent.

These are some of the benefits of pure statelessness:

  • Task description is the program (compact code)
  • Stability due to the absence of state-dependent bugs (the majority of bugs)
  • Cacheable results (a given set of inputs always produces the same set of outputs)
  • Distributable computations
  • Portable to quantum computation
  • Thin code for multiple overlapping clauses
  • Allows differentiable programming optimizations
  • Code changes apply consistently (adding logic breaks nothing already written)
  • Optimized combinatorics (no need to brute-force enumerations)

Stateless coding is about concentrating on the relations between data, which are then used for computation by deduction. Basically, this is the next level of programming abstraction. It is much closer to natural language than any imperative programming language, because it allows describing relations instead of sequences of state changes.

Infrared answered 14/10, 2022 at 12:39 Comment(0)
