I am trying to figure out which is more performant: edge-triggered or level-triggered epoll.
By "performant" I mainly mean:
1. Ability to handle multiple connections without degradation.
2. Ability to keep the utmost speed per inbound message.
I am actually more concerned about #2, but #1 is also important.
I've been running tests with a single-threaded consumer (accepting and reading multiple socket connections using epoll_wait) and multiple producers.
So far I've seen no difference, even up to 1000 file descriptors.
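For reference, here is a minimal sketch of the kind of consumer loop I'm describing (error handling pared down; epfd is an assumed, already-created epoll instance, and add_fd shows the one line that differs between the two test runs):

```c
#include <sys/epoll.h>
#include <unistd.h>
#include <errno.h>

#define MAX_EVENTS 64

static int add_fd(int epfd, int fd, int edge_triggered)
{
    struct epoll_event ev = {0};
    /* The only difference between the level- and edge-triggered runs. */
    ev.events = EPOLLIN | (edge_triggered ? EPOLLET : 0);
    ev.data.fd = fd;
    return epoll_ctl(epfd, EPOLL_CTL_ADD, fd, &ev);
}

static void consume(int epfd)
{
    struct epoll_event events[MAX_EVENTS];
    char buf[4096];

    for (;;) {
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
        if (n < 0) {
            if (errno == EINTR)
                continue;
            break;
        }
        for (int i = 0; i < n; i++) {
            /* Level-triggered: one read per wakeup suffices, since the
             * fd is reported again while data remains. (Edge-triggered
             * needs the drain loop sketched below.) */
            ssize_t r = read(events[i].data.fd, buf, sizeof buf);
            if (r == 0)
                close(events[i].data.fd); /* peer closed */
        }
    }
}
```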
I've been laboring under the idea (delusion?) that edge-triggered should be more performant because fewer interrupts will be received. Is this a correct assumption?
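As I understand the epoll(7) man page, edge-triggered with non-blocking sockets requires draining each socket until read() returns EAGAIN (otherwise a later wakeup for that fd may never arrive), so part of the saved wakeups is paid back in extra read() calls. A sketch of what I mean by draining:

```c
#include <unistd.h>
#include <errno.h>

static void drain(int fd)
{
    char buf[4096];
    ssize_t r;

    /* Keep reading until the socket is empty; the final read() that
     * returns EAGAIN is an extra syscall relative to level-triggered. */
    while ((r = read(fd, buf, sizeof buf)) > 0) {
        /* ... process r bytes of message data here ... */
    }
    if (r == 0)
        close(fd); /* peer closed the connection */
    else if (errno != EAGAIN && errno != EWOULDBLOCK)
        close(fd); /* real error */
    /* else EAGAIN: socket drained, wait for the next edge */
}
```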
One issue with my test that might be masking performance differences is that I don't dispatch messages to threads once they are received, so the reduced number of interrupts doesn't really matter. I've been loath to run that test because I've been using __asm__ rdtsc to get my "timestamps," and I don't want to have to reconcile which core my original timestamp came from.
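For reference, the timestamping is roughly the canonical GCC inline-assembly form of rdtsc; the counter is read per core, which is exactly why cross-core reconciliation worries me:

```c
#include <stdint.h>

/* RDTSC loads the low 32 bits of the time-stamp counter into EAX and
 * the high 32 bits into EDX; the counter is per core, hence my
 * reluctance to compare timestamps taken on different cores. */
static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__ ("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}
```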
What makes me even more suspicious is that level-triggered epoll performs better in some benchmarks I've seen.
Which is better? Under what circumstances? Is there no difference? Any insights would be appreciated.
My sockets are non-blocking.
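For completeness, by non-blocking I mean the standard O_NONBLOCK flag, set roughly like this on each socket before registering it with epoll:

```c
#include <fcntl.h>

/* Standard O_NONBLOCK toggle via fcntl. */
static int set_nonblocking(int fd)
{
    int flags = fcntl(fd, F_GETFL, 0);
    if (flags < 0)
        return -1;
    return fcntl(fd, F_SETFL, flags | O_NONBLOCK);
}
```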