GLib's GAsyncQueue vs. POSIX message_queue
Does anyone have any idea of the relative performance of GLib's GAsyncQueue vs. a POSIX message queue for inter-thread communication? I will have many small messages (both one-way and request-response types), implemented in C on Linux (for now; it may be ported to Windows later), and I am trying to decide which one to use.

What I have found out so far is that GLib is better for portability, while POSIX message queues have the advantage that you can select() or poll() on them.

However, I have not found any information on which of the two performs better.

Hug answered 10/2, 2012 at 12:6 Comment(0)

Since there were no responses to my question, I decided to run some performance tests myself. The main idea was taken from http://cybertiggyr.com/throughput/throughput.html. The test idea was:

  • Create two threads (pthreads/gthreads).
  • One thread produced data and wrote it to the IPC mechanism in chunks, until 1024 MB of data had been sent.
  • The other thread consumed data from the IPC mechanism.

I tested with chunk sizes of 4, 64, 256, 512 and 1024 bytes, and with three mechanisms: GAsyncQueue (with gthreads), POSIX message queues and UNIX domain sockets (both with pthreads).

Here is the result obtained:

[Graph: throughput of GAsyncQueue, POSIX message queue and UNIX domain socket across the tested chunk sizes]

To summarize: perf(GAsyncQueue) > perf(mq) > perf(UNIX socket), though the performance of GAsyncQueue and the POSIX message queue is comparable in most cases; the difference appears only at small message sizes.

I was wondering how GAsyncQueue is implemented to give comparable or even better performance than Linux's native message queue implementation. It is a pity that it cannot be used for inter-process communication, as the other two can.

Hug answered 16/2, 2012 at 5:8 Comment(3)
Very interesting. I've upvoted your answer and question, perhaps it will now let you post the graphs.Rabah
I ran some more experiments: I added signalling between threads, using the Linux eventfd mechanism, to let the consumer know that data had been produced. As soon as I did so, I saw the performance of GAsyncQueue degrade to be similar to the others.Hug
Does this explain the results? All Linux IPC mechanisms go through the kernel and therefore have similar performance, while GAsyncQueue has a user-space implementation, so the extra user-space/kernel-space copy is avoided, which results in better performance. And as soon as the eventfd mechanism is added, the kernel comes into the picture again. Is that understanding correct?Hug
