Long polling with Nancy Async Beta
Normally I would use SignalR to handle any data pushing, but in this case I'm using a self-hosted instance of Nancy Async Beta on the server end. I've heard it's still possible to use SignalR in this scenario, but I'd prefer to handle the long polling myself here. Here is the code from a simple experiment app I'm writing that produces the issue described below:

        Get["/receivechat", true] = async (x, ct) =>
            {
                string result = await _massPublisher.WaitForResult("test");
                return result;
            };

This handles the actual long-polling requests. The requests seem to enter this lambda in chunks of 4 or 5. For example, if I put a breakpoint on the very first line of the lambda, I won't see it get hit until I send in 4 or 5 more requests, and then suddenly all of the requests enter the lambda at once. Obviously I need them all to enter as they're requested so they can each await my WaitForResult method, which simply awaits a common TaskCompletionSource. I can post that code and the client-side code if necessary. As far as I can tell, it seems to be a problem with how I'm using Nancy Async Beta, since the requests are supposed to be handled in parallel, yet they won't even enter the lambda until a few other requests are made.
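For context, the publisher follows the usual shared-TaskCompletionSource pattern. This is a simplified sketch of the idea rather than my exact code (the MassPublisher class and Publish method names here are placeholders):

    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    public class MassPublisher
    {
        // One shared completion source per topic.
        private readonly ConcurrentDictionary<string, TaskCompletionSource<string>> _topics =
            new ConcurrentDictionary<string, TaskCompletionSource<string>>();

        // Every pending long-poll request for a topic awaits the same Task.
        public Task<string> WaitForResult(string topic)
        {
            return _topics.GetOrAdd(topic, _ => new TaskCompletionSource<string>()).Task;
        }

        // Remove the topic's TCS (so the next WaitForResult gets a fresh one),
        // then complete it, releasing every request that was awaiting it.
        public void Publish(string topic, string message)
        {
            TaskCompletionSource<string> tcs;
            if (_topics.TryRemove(topic, out tcs))
                tcs.TrySetResult(message);
        }
    }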

It's probably noteworthy that the application remains responsive to all other requests during this time.

I've read what documentation I can find on Nancy Async Beta, and it looks like this example should work... but it doesn't, for me anyway. If anyone can offer some insight into why, it would be much appreciated. Like I said, I can post more code from this experiment, but for now it seemed like it would just clutter the question.

Update: Since I'm relatively new to the TPL and Nancy, I've stripped the code down further to isolate the issue and troubleshoot a little more. Here is the updated code. It simply awaits a Task.Delay of 5 seconds and then sends the current time to the client.

        Get["/receivechat", true] = async (x, ct) =>
            {
                //string result = await _massPublisher.WaitForResult("test");
                //return result;
                await Task.Delay(5000);
                return DateTime.Now.ToString();
            };

My understanding is that each request is processed in parallel, independent of the others. With that understanding, I would expect each client to see a reply 5 seconds after its request, regardless of how many other clients are polling. However, here are the results:

[Image: async illustration showing responses arriving one client at a time]

So in other words, a response is sent every 5 seconds, but only to one client at a time. With 3 clients, each client waits 15 seconds for its response; with 2 clients, 10 seconds; and so on.
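To rule out anything browser-specific, a small console harness like the one below could drive the requests directly. This is just a sketch, not part of the original experiment; the self-host URL and port are assumptions:

    using System;
    using System.Diagnostics;
    using System.Linq;
    using System.Net;
    using System.Net.Http;
    using System.Threading.Tasks;

    class PollTest
    {
        static void Main()
        {
            // Raise the default per-host connection limit (only 2 for
            // non-ASP.NET apps) so it doesn't serialize the requests itself.
            ServicePointManager.DefaultConnectionLimit = 10;

            var sw = Stopwatch.StartNew();
            using (var client = new HttpClient())
            {
                var tasks = Enumerable.Range(1, 3).Select(async i =>
                {
                    var body = await client.GetStringAsync("http://localhost:8080/receivechat");
                    Console.WriteLine("client {0}: '{1}' after {2:N1}s",
                        i, body, sw.Elapsed.TotalSeconds);
                });

                Task.WhenAll(tasks).Wait();
            }
            // If the requests really are handled in parallel, all three
            // lines should print at roughly the 5-second mark.
        }
    }

If a harness like this shows all responses arriving together, the problem is on the client side rather than in Nancy.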

So far I can't see what I'm doing wrong, which is why I'm here. I love finding out I'm wrong! :) I get to learn something new. So if you know (or might know) where I'm going wrong, please let me know. I've probably overlooked something small and dumb despite looking for it many times, and hopefully it's a mistake other people will find useful.

Renatorenaud answered 15/4, 2013 at 23:56 Comment(11)
There's nothing in the async code that can cause this behaviour, and we have it in production in several places now, so I'd suggest the way you're testing it, or something somewhere else, may be at fault?Forrestforrester
Hi Steven, I'm pretty sure I'm doing something wrong. I just have no idea what, since 1) the async lambda is simply awaiting a common TaskCompletionSource, and 2) right now I'm manually testing using two breakpoints (one at the beginning of the lambda and one after the await) so I can see what enters/exits and when.Renatorenaud
I think I'll post a concise play-by-play test. Another thing I've noticed is this example works fine with one client but not multiples.Renatorenaud
Here is a play-by-play test (below). I was testing with 4 or 5 clients before; this time with 3. (Sorry, I had these broken down one step per line, but Stack Overflow apparently doesn't let you do that in comments.)Renatorenaud
- Client one loads the page; client one enters the lambda.
- Client two loads the page; client two never enters the lambda.
- Client three loads the page; client three never enters the lambda.Renatorenaud
- Client two sends a chat (to release the TaskCompletionSource); client one is the only one to receive it.
- Another request enters the lambda, but only one.
- Another chat is sent from client two; this time client two receives it but no others (client two must have been the one that entered the lambda last time).
- I reload client three, and two requests enter the lambda this time.
- Another chat is sent from client two, and both clients one and two now receive it.
- One more chat from client two, and this time client three receives it.Renatorenaud
From the description it sounds more like you have thread-safety issues in your own code - as far as Nancy is concerned every single request is separate; there's no sharing of a TCS or anything like that. Given our deferred execution on the body, though, I'm not sure building a chat server like this is really a good fit for Nancy - personally I'd use OWIN to run Nancy and SignalR (like JabbR does)Forrestforrester
Thanks for the info Steven! I've updated my code to try to isolate the issue. A couple of questions on your comment though: 1) by deferred execution on the body, do you mean the lambda in my code posted here? 2) If so, when is the execution deferred to? 3) Since this version of Nancy is async, should it be possible to run a long-polling request loop? I realize SignalR is better than manual long polling, as it's automated, has better transports available, etc. However, for the specific app I'm planning, I'd like the option of doing the long polling myself (but only if possible, obviously).Renatorenaud
No, not the lambda - the Nancy response class is a series of properties for headers etc, and then a delegate to write the response body to a stream - it's that last part that's the deferred execution. You should be able to do long polling with the async stuff as all that's doing is taking a long time to return a response.Forrestforrester
Alright, thanks! I don't know if you noticed, but I've updated the example and description of my issue. In the new example I'm having a hard time understanding why requests from other clients are getting in the way of each other, since it's all independent and parallel according to my understanding.Renatorenaud
Something to do with the number of requests the browser will send to the same server, perhaps? We've hammered it with stress-testing tools sending several thousand simultaneous requests and not seen this behaviour.Forrestforrester

Figured out the problem, with much-appreciated help from Steven Robbins. It was the browser not sending more than one identical pending request at a time. I had seen long polling work before in all browsers, including Chrome, which is what I was using to test. The browser will make multiple concurrent connections to a server, BUT those requests apparently need to be unique: if Chrome sees a pending request that exactly matches one it's about to send, it waits until the pending request finishes. That produces exactly the output illustrated in the question, since the other "clients" were just more windows of the same browser (see the image and description in the question).

So by changing the commented-out line of JavaScript below to the line above it (mainly to fix a caching issue), the long-polling issue was fixed as well, and suddenly both of my examples from the question work great.

            $.get("/receivechat?_=" + new Date().getTime(), null, function (data)
            //$.get("/receivechat", null, function (data)
            {
                self.viewModel.chatLines.push(data);
                self.update();
            }).error(function ()
            {
                setTimeout(self.update, 2000);
            });
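As an aside, jQuery can append this cache-busting parameter for you: passing cache: false in the $.ajax settings adds the same _={timestamp} value to the query string of GET requests, so it's an equivalent way to keep each poll unique.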
Renatorenaud answered 18/4, 2013 at 15:38 Comment(0)
