I fear I'm "doing the wrong thing" here; if so, delete me and I apologize. In particular, I fail to see how to create the neat little annotations that some folks have created. However, I have many concerns/observations to make on this thread.
1) The commented element in the pseudo-code in one of the popular answers
    result = query( "select smurfs from some_mushroom" );
    // twiddle fingers
    go_do_something_with_result( result );
is essentially bogus. If the thread is computing, then it's not twiddling its thumbs; it's doing necessary work. If, on the other hand, it's simply waiting for IO to complete, then it's not using CPU time at all: the whole point of the kernel's thread-control infrastructure is that the CPU will find something else useful to do. The only way to "twiddle your thumbs" as suggested here would be to write a polling loop, and nobody who has coded a real webserver is inept enough to do that.
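To make that distinction concrete, here is a minimal Java sketch of my own (not taken from any of the answers; queryAsync is a hypothetical stand-in for a slow database call) contrasting a blocking wait, where the kernel parks the thread and burns no CPU, with the busy-wait polling loop that would actually count as twiddling thumbs:

    import java.util.concurrent.*;

    public class BlockingVsPolling {
        // Hypothetical stand-in for a slow IO-bound call (e.g. a database query).
        static Future<String> queryAsync(ExecutorService db, String sql) {
            return db.submit(() -> {
                Thread.sleep(200);               // pretend the query takes 200 ms
                return "rows for: " + sql;
            });
        }

        public static void main(String[] args) throws Exception {
            ExecutorService db = Executors.newSingleThreadExecutor();

            // Blocking wait: get() parks this thread until the result arrives.
            // While parked it consumes no CPU; the scheduler runs other threads.
            Future<String> blocking = queryAsync(db, "select smurfs from some_mushroom");
            System.out.println(blocking.get());

            // Busy-wait polling: the only way to literally "twiddle thumbs".
            // This spins a whole core doing nothing useful; nobody writes this.
            Future<String> polled = queryAsync(db, "select smurfs from some_mushroom");
            while (!polled.isDone()) {
                // spin
            }
            System.out.println(polled.get());

            db.shutdown();
        }
    }

The point of the sketch is simply that the first form costs wall-clock time but no CPU time, which is exactly the case the "twiddle fingers" comment mischaracterizes.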
2) "Threads are hard", only makes sense in the context of data sharing. If you have essentially independent threads such as is the case when handling independent web requests, then threading is trivially simple, you just code up the linear flow of how to handle one job, and sit pretty knowing that it will handle multiple requests, and each will be effectively independent. Personally, I would venture that for most programmers, learning the closure/callback mechanism is more complex than simply coding the top-to-bottom thread version. (But yes, if you have to communicate between the threads, life gets really hard really fast, but then I'm unconvinced that the closure/callback mechanism really changes that, it just restricts your options, because this approach is still achievable with threads. Anyway, that's a whole other discussion that's really not relevant here).
3) So far, nobody has presented any real evidence as to why one particular type of context switch would be more or less time-consuming than any other type. My experience in creating multi-tasking kernels (on a small scale for embedded controllers, nothing so fancy as a "real" OS) suggests that this would not be the case.
4) All the illustrations I have seen to date that purport to show how much faster Node is than other webservers are horribly flawed; however, they're flawed in a way that does indirectly illustrate one advantage I would definitely accept for Node (and it's by no means insignificant): Node doesn't look like it needs (nor even permits, actually) tuning. If you have a threaded model, you need to create enough threads to handle the expected load, and doing this badly leaves you with poor performance. Too few threads and the CPU sits idle yet unable to accept more requests; too many and you waste kernel memory, and in the case of a Java environment you also waste main heap memory. Now, for Java, wasting heap is the first, best way to screw up the system's performance, because efficient garbage collection (currently; this might change with G1, but the jury still seems to be out on that point as of early 2013 at least) depends on having lots of spare heap. So there's the issue: tune it with too few threads and you get idle CPUs and poor throughput; tune it with too many and it bogs down in other ways.
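To show the knob I'm complaining about, here is a minimal, hypothetical Java sketch of that tuning decision; the pool size of 200 and queue depth of 1000 are arbitrary illustrative guesses, not recommendations:

    import java.util.concurrent.*;

    public class TunedPool {
        public static void main(String[] args) {
            // The number you have to guess: too few and cores sit idle while
            // requests queue up; too many and per-thread stacks (plus, on the
            // JVM, extra heap pressure) eat memory. 200 is purely illustrative.
            int workerThreads = 200;

            ThreadPoolExecutor pool = new ThreadPoolExecutor(
                    workerThreads, workerThreads,              // fixed-size pool
                    60, TimeUnit.SECONDS,
                    new ArrayBlockingQueue<>(1000),            // bounded backlog: another guess
                    new ThreadPoolExecutor.CallerRunsPolicy()); // back-pressure when full

            pool.submit(() -> System.out.println("handled on " + Thread.currentThread().getName()));
            pool.shutdown();
        }
    }

Every number in that constructor is a bet about the workload, and the penalty for betting wrong is exactly the idle-CPU-or-wasted-memory trade-off described above.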
5) There is another way in which I accept the logic of the claim that Node's approach "is faster by design", and it is this. Most thread models use a time-sliced context-switch model, layered on top of the more appropriate (value-judgement alert :) and more efficient (not a value judgement) preemptive model. This happens for two reasons: first, most programmers don't seem to understand priority preemption, and second, if you learn threading in a Windows environment, the timeslicing is there whether you like it or not (which of course reinforces the first point; notably, the first versions of Java used priority preemption on Solaris implementations and timeslicing on Windows, and because most programmers didn't understand this and complained that "threading doesn't work in Solaris", the model was changed to timeslice everywhere). Anyway, the bottom line is that timeslicing creates additional (and potentially unnecessary) context switches. Every context switch takes CPU time, and that time is effectively removed from the work that could be done on the real job at hand. However, the time spent context switching because of timeslicing should be no more than a very small percentage of the overall time, unless something pretty outlandish is happening, and I see no reason to expect that in a simple webserver. So, yes, the excess context switches involved in timeslicing are inefficient (and these don't happen in kernel threads as a rule, btw), but the difference will be a few percent of throughput, not the kind of whole-number factors implied in the performance claims often made for Node.
Anyway, apologies for all of that being long and rambly, but I really feel that, so far, the discussion hasn't proved anything, and I would be pleased to hear from anyone who can offer either of the following:
a) a real explanation of why Node should be better, beyond the two scenarios I've outlined above; the first of these (poor tuning) I believe is the real explanation for all the tests I've seen so far. ([edit] Actually, the more I think about it, the more I wonder whether the memory used by vast numbers of stacks might be significant here. The default stack sizes for modern threads tend to be pretty huge, whereas the memory allocated by a closure-based event system would be only what's actually needed.)
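As a back-of-the-envelope illustration of that stack-memory point (the 1 MiB per-thread figure is a commonly quoted JVM default controlled by -Xss, and the 512-byte closure figure is a pure guess on my part):

    public class StackMemoryEstimate {
        public static void main(String[] args) {
            // Assumed default stack size per thread (tunable with -Xss);
            // treat this as an assumption, not a measurement.
            long stackBytesPerThread = 1L << 20;   // ~1 MiB
            int concurrentRequests = 10_000;

            long threadedCost = stackBytesPerThread * concurrentRequests;
            System.out.printf("thread-per-request stacks: ~%d MiB%n", threadedCost >> 20);

            // A closure/event model keeps only the captured state per pending
            // request; a few hundred bytes each is an illustrative guess.
            long closureBytesPerRequest = 512;
            long eventedCost = closureBytesPerRequest * concurrentRequests;
            System.out.printf("event-loop closures:       ~%d KiB%n", eventedCost >> 10);

            // The per-thread stack size can also be requested explicitly:
            Thread t = new Thread(null, () -> {}, "small-stack", 64 * 1024);
            t.start();
        }
    }

If numbers anything like these hold, the memory gap alone could plausibly account for some of the benchmark differences, which is why I flag it as a candidate explanation rather than a conclusion.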
b) a real benchmark that actually gives the threaded server of choice a fair chance. At least that way I'd have to stop believing that the claims are essentially false ;> ([edit] That's probably rather stronger than I intended, but I do feel that the explanations given for the performance benefits are incomplete at best, and the benchmarks shown are unreasonable.)
Cheers,
Toby