In general, on ucLinux, is ioctl faster than writing to /sys filesystem?
I have an embedded system I'm working with, and it currently uses the sysfs to control certain features.

However, there is a function that we would like to speed up, if possible.

I discovered that this subsystem also supports an ioctl interface, but before rewriting the code, I decided to search for which interface is generally faster (on ucLinux): sysfs or ioctl.

Does anybody understand both implementations well enough to give me a rough idea of the difference in overhead for each? I'm looking for generic info, such as "ioctl is faster because you've removed the file layer from the function calls". Or "they are roughly the same because sysfs has a very simple interface".

Update 10/24/2013:

The specific case I'm currently doing is as follows:

int fd = open("/sys/power/state", O_WRONLY);
if (fd >= 0) {
    write(fd, "standby", 7);
    close(fd);
}

In kernel/power/main.c, the code that handles this write looks like:

static ssize_t state_store(struct kobject *kobj, struct kobj_attribute *attr,
               const char *buf, size_t n)
{
#ifdef CONFIG_SUSPEND
    suspend_state_t state = PM_SUSPEND_STANDBY;
    const char * const *s;
#endif
    char *p;
    int len;
    int error = -EINVAL;

    p = memchr(buf, '\n', n);
    len = p ? p - buf : n;

    /* First, check if we are requested to hibernate */
    if (len == 7 && !strncmp(buf, "standby", len)) {
        error = enter_standby();
        goto Exit;
    ((( snip )))

Can this be sped up by moving to a custom ioctl() where the code to handle the ioctl call looks something like:

case SNAPSHOT_STANDBY:
    if (!data->frozen) {
        error = -EPERM;
        break;
    }
    error = enter_standby();
    break;

(so the ioctl() calls the same low-level function that the sysfs function did).

Indigent answered 23/10, 2013 at 16:51

If by sysfs you mean the sysfs() library call, notice this in man 2 sysfs:

NOTES

This System-V derived system call is obsolete; don't use it. On systems with /proc, the same information can be obtained via /proc/filesystems; use that interface instead.

I can't recall noticing anything that had both an ioctl() and a sysfs interface, but they probably exist. I'd use the proc or sys handle anyway, since that tends to be less cryptic and more flexible.

If by sysfs you mean accessing files in /sys, that's the preferred method.

I'm looking for generic info, such as "ioctl is faster because you've removed the file layer from the function calls".

Accessing procfs or sysfs files does not entail an I/O bottleneck because they are not real files -- they are kernel interfaces. So no, accessing this stuff through "the file layer" does not affect performance. This is a fairly common misconception in Linux systems programming, I think. Programmers can be squeamish about system calls that aren't, well, system calls, and paranoid that opening a file will somehow be slower. Of course, file I/O in the ABI is just system calls anyway. What makes a normal (disk) file read slow is not the calls to open, read, and write; it's the hardware bottleneck.

I always use the low-level descriptor-based functions (open(), read()) instead of high-level streams when doing this, because at some point some experience led me to believe they were more reliable specifically for reading from /proc. I can't say whether that's definitively true.

Booted answered 23/10, 2013 at 17:23

The question was interesting, so I built a couple of modules: one exposing an ioctl that implements only a 4-byte copy_from_user and nothing more, and one exposing a sysfs attribute with an empty write handler.

Then I ran a couple of userspace tests, each doing 1 million iterations. Here are the results:

time ./sysfs /sys/kernel/kobject_example/bar 

real    0m0.427s
user    0m0.056s
sys     0m0.368s

time ./ioctl /run/temp 

real    0m0.236s
user    0m0.060s
sys     0m0.172s

edit

I agree with @goldilocks' answer: the hardware is the real bottleneck, and in a Linux environment with a well-written driver, choosing between ioctl and sysfs doesn't make a big difference. But if you are using uClinux, then on your hardware even a few CPU cycles can make a difference.

The test I ran is for Linux, not uClinux, and it was never meant to be an absolute reference for profiling the two interfaces. My point is that you can write a book about which one is faster, but only testing will tell you -- and it took me only a few minutes to set this up.

Colettacolette answered 23/10, 2013 at 21:44
Wow, I didn't expect this. Almost 50% faster? That's fantastic, Alex. – Indigent
Here's the tgz with the sources used. The download link will be valid for a few days. – Colettacolette
This seems to be a slightly deceptive example -- by which I don't mean intentionally deceptive, but deceptive nonetheless. The sysfs example returns real (albeit empty), previously created and stored data, whereas the ioctl one is just an echo. The other issue is that what you are benchmarking is maybe analogous to benchmarking the amount of time it takes someone to get into a car and start the ignition for a race: real, sure, but doing that "50% faster" is nowhere near finishing the race 50% faster. So potentially "premature and irrelevant optimization". – Booted
That said, I'm very open to eating my words and giving a +1 if you can explain how this amounts to a significant (real) advantage. My position is that one is not significantly more advantageous than the other in terms of performance, but the sysfs one is a bit more flexible implementation-wise and more user-friendly. – Booted
In the test there's no return data or echo; the timings just measure reaching the interfaces. I added the copy_from_user because sysfs already does that, or something similar. – Colettacolette
I've added some clarifications, @Alex. LMK if this changes any of your responses. – Indigent
@MikeCrowe standby? Keep the sysfs. The question already had a good answer, and that was fine for me. I was interested in the generic question "sysfs vs ioctl performance": in my mind ioctl had always obviously been faster, but I had never really checked, so I took the opportunity to confirm it and measure by how much. – Colettacolette
@Alex, I used the stock standby code as an example for illustration. We're actually using a custom power-savings method, and milliseconds are important. However, I don't have an existing ioctl interface in the application code, so I was trying to justify developing one, plus a new custom ioctl method in the kernel for the new mode. – Indigent
So I think the choice of ioctl over sysfs is proportional to how many times per second you access the interface (fewer: sysfs; more: ioctl). But at some point, if you're accessing it too often, it may be better to rethink the design and keep the whole thing in kernel space -- I guess even on uClinux a context switch has a price. – Colettacolette
