Is continuation passing style any different to pipes?
I've been learning about continuation-passing style, particularly the asynchronous version as implemented in JavaScript, where a function takes another function as a final argument and makes an asynchronous call to it, passing the return value to this second function.

However, I can't quite see how continuation-passing does anything more than recreate pipes (as in unix commandline pipes) or streams:

replace('somestring','somepattern', filter(str, console.log));

vs

echo 'somestring' | replace 'somepattern' | filter | console.log

Except that the piping is much, much cleaner. With piping, it seems obvious that the data is passed on, and simultaneously execution is passed to the receiving program. In fact with piping, I expect the stream of data to be able to continue to pass down the pipe, whereas in CPS I expect a serial process.

It is imaginable, perhaps, that CPS could be extended to continuous piping if a comms object and update method were passed along with the data, rather than a complete handover and return.

Am I missing something? Is CPS different (better?) in some important way?

To be clear, I mean continuation-passing, where one function passes execution to another, not just plain callbacks. CPS appears to imply passing the return value of a function to another function, and then quitting.

Taenia answered 16/4, 2012 at 12:12 Comment(1)
"I can't quite see how continuation-passing does anything more than recreate pipes": well, you don't have pipes in JavaScript in the first place, so I would say it creates pipe-like usage in JavaScript. It's some readability improvement that is unfortunately not available as syntactic sugar like pipes (at least yet) in the language. It's not nothing at all IMHO. – Wimer

UNIX pipes vs async javascript

There is a big fundamental difference between the way unix pipes behave vs the async CPS code you link to.

The main difference is that the pipe blocks execution until the entire chain has completed, whereas your async CPS example returns right after the first async call is made and only executes your callback when that call completes (when the timeout expires, in your example).
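To make that non-blocking behaviour concrete, here is a minimal sketch (the function name and the `order` array are my own additions, using `setTimeout` to stand in for any async call) showing that a CPS call returns before its continuation runs:

```javascript
const order = [];

// A CPS function: it schedules the continuation asynchronously
// and returns immediately, without waiting for it.
function laterCPS(value, next) {
    setTimeout(() => next(value), 0);
}

laterCPS(42, v => order.push(`callback got ${v}`));
order.push('after call');

// At this point only 'after call' has been recorded; the continuation
// runs on a later tick of the event loop.
```

A shell pipe, by contrast, would not return to the prompt until the whole chain had finished.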

Take a look at this example. I will use the Fetch API and Promises to demonstrate async behavior instead of setTimeout to make it more realistic. Imagine that the first function f1() is responsible for calling some webservice and parsing the result as JSON. This is "piped" into f2(), which processes the result.

CPS style:

function f2(json){
    //do some parsing
}

function f1(param, next) {
   return fetch(param).then(response => response.json()).then(json => next(json));
}

// you call it like this:
f1("https://service.url", f2);

You can write something that syntactically looks like a pipe if you move the call to f2 out of f1, but that does exactly the same as above:

function f1(param) {
   return fetch(param).then(response => response.json());
}

// you call it like this:
f1("https://service.url").then(f2);

But this still will not block. You cannot do this task with blocking mechanisms in JavaScript; there is simply no way to block on a Promise. (Well, in this case you could use a synchronous XMLHttpRequest, but that's not the point here.)
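What you can do is make the code read sequentially with async/await. A minimal sketch (the function bodies are stand-ins of my own, not real network calls): the `await` suspends only the surrounding async function, not the engine.

```javascript
// Stand-in for a fetch + .json() pipeline; resolves asynchronously.
function f1(param) {
    return Promise.resolve({ url: param, ok: true });
}

function f2(json) {
    return json.ok ? 'processed' : 'failed';
}

async function run() {
    // Reads like a blocking pipe, but only this async function is suspended
    // while the promise settles; everything else keeps running.
    const json = await f1("https://service.url");
    return f2(json);
}
```

You would consume it as `run().then(console.log)` — still a promise at the outermost level.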

CPS vs piping

The difference between the above two methods is who decides whether to call the next step, and with exactly what parameters: the caller (the latter example) or the called function (CPS).

A good example where CPS comes very handy is middleware. Think about a caching middleware for example in a processing pipeline. Simplified example:

function cachingMiddleware(request, next){
     if(someCache.containsKey(request.url)){
         return someCache[request.url];
     }
     return next(request);
}

The middleware executes some logic, checks if the cache is still valid:

  • If it is not, then next is called, which then will proceed on with the processing pipeline.

  • If it is valid then the cached value is returned, skipping the next execution.
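A self-contained sketch of this control flow (the cache object and the handler are my own simplification, not a real framework): the called middleware, not the caller, decides whether the rest of the chain runs.

```javascript
const someCache = {};

function cachingMiddleware(request, next) {
    if (request.url in someCache) {
        return someCache[request.url];   // short-circuit: next is never called
    }
    return next(request);                // hand control to the rest of the chain
}

// The "rest of the pipeline": does the expensive work and fills the cache.
function fetchHandler(request) {
    const result = `fetched ${request.url}`;
    someCache[request.url] = result;
    return result;
}

const first = cachingMiddleware({ url: '/a' }, fetchHandler);  // cache miss → handler runs
const second = cachingMiddleware({ url: '/a' },                // cache hit → handler skipped
    () => { throw new Error('should not run'); });
```

On the second call the continuation is never invoked, which a caller-controlled pipe cannot express.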

Caplin answered 7/11, 2018 at 12:33 Comment(0)

Continuation Passing Style at application level

Instead of comparing at the expression/function level, applying Continuation Passing Style at the application level shows the flow-control advantages of the "continuation" function (a.k.a. the callback function). Let's take Express.js as an example:

Each express middleware takes a rather similar CPS function signature:

 const middleware = (req, res, next) => {
     /* middleware's logic */
     next();
 }

 const customErrorHandler = (error, req, res, next) => {
     /* custom error handling logic*/
 };

next is express's native callback function.

Correction: The next() function is not a part of the Node.js or Express API, but is the third argument that is passed to the middleware function. The next() function could be named anything, but by convention it is always named “next”

req and res are naming conventions for HTTP request and HTTP response respectively.

A route handler in Express.js is made up of one or more middleware functions. Express.js passes each of them the req and res objects (carrying any changes made by the preceding middleware) along with the next callback.

app.get('/get', middleware1, middleware2, /*...*/ , middlewareN, customErrorHandler)

The next callback function serves:

  1. As a middleware's continuation:

    • Calling next() passes the execution flow to the next middleware function. In this case it fulfils its role as a continuation.
  2. Also as a route interceptor:

    • Calling next('Custom error message') bypasses all subsequent middlewares and passes the execution control to customErrorHandler for error handling. This makes 'cancellation' possible in the middle of the route!
    • Calling next('route') bypasses subsequent middlewares and passes control to the next matching route, e.g. /get/part.
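This dispatch behaviour can be modelled in plain JavaScript. Below is a toy dispatcher of my own, not Express's real implementation (and the res argument is replaced by a log array for demonstration): calling next() with an argument skips the remaining middlewares and jumps to the error handler.

```javascript
// Simplified model of Express-style middleware dispatch.
function runRoute(middlewares, errorHandler, req) {
    let i = 0;
    const log = [];
    function next(err) {
        if (err) {                        // next('message') → skip to error handler
            errorHandler(err, req, log);
            return;
        }
        const mw = middlewares[i++];      // next() → advance to the next middleware
        if (mw) mw(req, log, next);
    }
    next();
    return log;
}

const trace = runRoute(
    [
        (req, log, next) => { log.push('mw1'); next(); },
        (req, log, next) => { log.push('mw2'); next('boom'); }, // signal an "error"
        (req, log, next) => { log.push('mw3'); next(); },       // never reached
    ],
    (err, req, log) => log.push(`error: ${err}`),
    { url: '/get' }
);
// trace is ['mw1', 'mw2', 'error: boom']
```

The third middleware never runs: the "cancellation" happens inside the called function, which is exactly the CPS advantage described above.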

Imitating Pipe in JS

There is a TC39 proposal for a pipe operator, but until it is accepted we'll have to imitate pipe's behaviour manually. Nesting CPS functions can potentially lead to callback hell, so here is my attempt at cleaner code:

Assuming that we want to compute the sentence 'The fox jumps over the moon' by replacing parts of a starter string (e.g. props):

const props = "     The [ANIMAL] [ACTION] over the [OBJECT] "

The functions that replace the different parts of the string are sequenced in an array:

const insertFox = s => s.replace(/\[ANIMAL\]/g, 'fox')
const insertJump = s => s.replace(/\[ACTION\]/g, 'jumps')
const insertMoon = s => s.replace(/\[OBJECT\]/g, 'moon')
const trim = s => s.trim()
const modifiers = [insertFox, insertJump, insertMoon, trim]

We can achieve a synchronous, non-streaming, pipe behaviour with reduce.

const pipeJS = (chain, callBack) => seed => 
    callBack(chain.reduce((acc, next) => next(acc), seed))
const callback = o => console.log(o)

pipeJS(modifiers, callback)(props) //-> 'The fox jumps over the moon'

And here is an asynchronous version of pipeJS, chaining through Promise.resolve so that the modifiers may themselves return promises:

const pipeJSAsync = chain => seed =>
    chain.reduce((acc, next) => acc.then(next), Promise.resolve(seed))
const callbackAsync = o => console.log(o)

pipeJSAsync(modifiers)(props).then(callbackAsync) //-> 'The fox jumps over the moon'

Hope this helps!

Okeechobee answered 13/11, 2018 at 18:35 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.