File Locking vs. Semaphores

Just out of curiosity, what is the preferred way to achieve interprocess synchronization on Linux? The sem*(2) family of system calls seems to have a very clunky and dated interface, while there are three ways to lock files: fcntl(), flock() and lockf().

What are the internal differences (if any) and how would you justify the usage of each?

Miskolc answered 25/8, 2010 at 17:52 Comment(0)

Neither. Current versions of the pthread_* primitives (e.g. pthread_mutex_t) all allow you to place the variables in shared memory segments created via shm_open. You just have to pass an extra attribute to the init calls.
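
A minimal sketch of that pattern (illustrative only: the segment name "/demo_shm" is invented, error handling is abbreviated, and in real code only the creating process should perform the init):

   /* Process-shared mutex placed in POSIX shared memory.
      Link with -lpthread (and -lrt on older glibc). */
   #include <fcntl.h>
   #include <pthread.h>
   #include <stdio.h>
   #include <sys/mman.h>
   #include <unistd.h>

   int main(void)
   {
       int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0600);
       if (fd < 0) { perror("shm_open"); return 1; }
       if (ftruncate(fd, sizeof(pthread_mutex_t)) < 0) { perror("ftruncate"); return 1; }

       pthread_mutex_t *m = mmap(NULL, sizeof *m, PROT_READ | PROT_WRITE,
                                 MAP_SHARED, fd, 0);
       if (m == MAP_FAILED) { perror("mmap"); return 1; }

       /* The extra attribute: mark the mutex as process-shared. */
       pthread_mutexattr_t attr;
       pthread_mutexattr_init(&attr);
       pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
       pthread_mutex_init(m, &attr);
       pthread_mutexattr_destroy(&attr);

       pthread_mutex_lock(m);
       /* ... critical section, visible across processes ... */
       pthread_mutex_unlock(m);
       return 0;
   }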

Don't use semaphores (sem_t) if you don't have to: they are too low level, and their calls can be interrupted by signals, I/O, etc.
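
For example, sem_wait can return early with errno set to EINTR when a signal arrives, so robust code has to retry. A small fragment (assuming a sem_t named sem and <errno.h> included):

   while (sem_wait(&sem) == -1 && errno == EINTR)
       continue;   /* interrupted by a signal handler: try again */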

Don't abuse file locking for interprocess control. It is not made for that. In particular, you have no way to flush file metadata such as locks, so you never know when a lock or unlock will become visible to a second process.

Hildehildebrand answered 25/8, 2010 at 18:38 Comment(4)
I remember that pthread_mutex_t in shared memory sometimes caused trouble, if I recall correctly on kernel 2.4.x. Do you know anything about that? – Minoan
@DarkDust: kernel versions before 2.6-something (many years ago) had a different pthread implementation that was in fact not suited for interprocess control. This is history. Anyhow, the corresponding init call will tell you on return if the pshared attribute is not supported by the implementation. – Hildehildebrand
Do you sacrifice portability with this technique? – Thornburg
First, the question was about Linux. Second, the pshared parameter of the sem_init call is specified in POSIX, so attempting to use it is portable; a POSIX-conforming implementation may then return ENOSYS if it does not support pshared. For pthread_mutex_t this can be checked beforehand by testing whether pthread_mutexattr_setpshared exists; it seems that on OS X, for example, it doesn't. – Hildehildebrand

You are suffering from a wealth of choices thanks to a rich history, as DarkDust noted. For what it's worth, my decision tree goes something like this:

Use mutexes when only one process/thread can have access at a time.

Use semaphores when two or more (but nevertheless finite) processes/threads can use a resource.

Use POSIX semaphores unless you really need something only SysV semaphores have, e.g. UNDO, the PID of the last operation, etc. (see the sketch after this list).

Use file locking when what you are protecting actually is a file, or if none of the above fit your requirements.
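
As a sketch of the counting case (names invented; a POSIX named semaphore that admits up to 4 processes at once; link with -lpthread):

   #include <fcntl.h>
   #include <semaphore.h>
   #include <stdio.h>

   int main(void)
   {
       /* Initial count 4: the fifth waiter blocks until someone posts. */
       sem_t *s = sem_open("/demo_sem", O_CREAT, 0600, 4);
       if (s == SEM_FAILED) { perror("sem_open"); return 1; }

       if (sem_wait(s) == 0) {      /* take one of the 4 slots */
           /* ... use the shared resource ... */
           sem_post(s);             /* give the slot back */
       }
       sem_close(s);
       return 0;
   }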

Surface answered 25/8, 2010 at 19:2 Comment(0)

The different locking/semaphore implementations all came to life on different systems. On System V Unix you had semget/semop, POSIX defined a different implementation with sem_init, sem_wait and sem_post. And flock originated in 4.2BSD, as far as I could find out.

Since they all gained a certain significance, Linux now supports them all to make porting easy. Also, flock is a mutex (either locked or unlocked), but the sem* functions (both SysV and POSIX) are semaphores: they allow an application to grant several concurrent processes access, e.g. you could allow 4 processes simultaneous access to a resource. You can implement a mutex with semaphores, but not the other way round (see the sketch below). I remember that in the excellent "Advanced UNIX Programming", Marc J. Rochkind demonstrated how to transmit data between processes via semaphores (very inefficiently; he did it just to prove it can be done). But I couldn't find anything reliable about efficiency.
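
To illustrate the mutex-from-semaphore direction: a semaphore whose count is initialized to 1 behaves as a lock (sketch only; for cross-process use the sem_t must itself live in shared memory):

   #include <semaphore.h>

   sem_t binary;

   void demo(void)
   {
       sem_init(&binary, 1, 1);  /* pshared = 1, initial count 1 => mutex */
       sem_wait(&binary);        /* lock: count 1 -> 0; next waiter blocks */
       /* ... critical section ... */
       sem_post(&binary);        /* unlock: count 0 -> 1 */
   }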

I guess it's more like "Use what you want".

Minoan answered 25/8, 2010 at 18:41 Comment(6)
Why can't a semaphore be implemented using mutexes? – Peachy
@Anurag Uniyal: Because a mutex has only two states: locked or unlocked. A semaphore is a counter and thus has more than two states. – Minoan
This doesn't seem right. I see no reason why a semaphore can't be done using a mutex. – Plenty
@Darkdust, so are you saying a bit cannot represent a byte, because a bit has only two states and a byte has many? – Peachy
@Minoan (I meant bits), anyway, so wouldn't semaphore(3).acquire() == mutex1.acquire(block=False) or mutex2.acquire(block=False) or mutex3.acquire(block=False) or mutex_random.acquire() do? – Peachy
Sorry, I didn't understand that. Anyway, have a look at the Wikipedia page on semaphores; maybe that clears things up a little. – Minoan
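
For what it's worth, a counting semaphore can be built from a mutex once you pair it with a condition variable; a minimal thread-level sketch with invented names:

   #include <pthread.h>

   typedef struct {
       pthread_mutex_t lock;
       pthread_cond_t  nonzero;
       unsigned        count;
   } csem_t;

   void csem_init(csem_t *s, unsigned initial)
   {
       pthread_mutex_init(&s->lock, NULL);
       pthread_cond_init(&s->nonzero, NULL);
       s->count = initial;
   }

   void csem_wait(csem_t *s)               /* like sem_wait */
   {
       pthread_mutex_lock(&s->lock);
       while (s->count == 0)               /* sleep until a post arrives */
           pthread_cond_wait(&s->nonzero, &s->lock);
       s->count--;
       pthread_mutex_unlock(&s->lock);
   }

   void csem_post(csem_t *s)               /* like sem_post */
   {
       pthread_mutex_lock(&s->lock);
       s->count++;
       pthread_cond_signal(&s->nonzero);
       pthread_mutex_unlock(&s->lock);
   }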

A potentially significant difference might be the fairness of the resource distribution. I don't know the details of the implementation of the semget/semop family, but I suspect that it is typically implemented as a "traditional" semaphore as far as scheduling goes. Generally, I believe the released threads are handled on a FIFO basis (first one waiting for the semaphore is released first). I don't think this would happen with file locking since I suspect (again just guessing) that the handling is not performed at the kernel level.

I had existing code sitting around to test semaphores for IPC purposes, so I compared the two situations (one using semop and one using lockf). I did a poor man's test and just ran two instances of the application. The shared semaphore was used to sync the start. When running the semop test, both processes finished 3 million loops almost in sync. The lockf loop, on the other hand, was not nearly as fair. One process would typically finish while the other one had only completed half the loops.

The loop for the semop test looked like the following. The semwait and semsignal functions are just wrappers for the semop calls.

   ct = myclock();                        /* start timestamp in ms */
   for ( i = 0; i < loops; i++ )
      {
      ret = semwait( "test", semid, 0 );  /* acquire (P operation) */
      if ( ret < 0 ) { perror( "semwait" ); break; }

      if (( i & 0x7f ) == 0x7f )          /* print progress every 128 iterations */
         printf( "\r%d%%", (int)(i * 100.0 / loops ));

      ret = semsignal( semid, 0 );        /* release (V operation) */
      if ( ret < 0 ) { perror( "semsignal" ); break; }
      }
   printf( "\nsemop time: %d ms\n", myclock() - ct );

The total run time for both methods was about the same, although the lockf version was sometimes actually faster overall because of the unfair scheduling: once the first process finished, the other had uncontested access for its remaining roughly 1.5 million iterations and ran extremely fast.

When running uncontested (single process obtaining and releasing the locks), the semop version was faster. It took about 2 seconds for 1 million iterations while the lockf version took about 3 seconds.

This was run on the following version:

[]$ uname -r
2.6.11-1.1369_FC4smp
Lonnylonslesaunier answered 25/8, 2010 at 21:42 Comment(2)
AFAIK system semaphores are entirely dependent on the OS scheduler and have no smarts of their own. Although the overall thread scheduling policy can be set, the thread release sequence is going to be indeterminate in any practical sense. – Surface
@Duck, that is certainly true; it would be wrong to expect threads to be released in a certain order. But your point about semaphores being dependent on the OS is the critical one. Because the OS is making the decision, it can apply whatever rules of "fairness" it wants (whether some approximation of FIFO or even a random coin flip). When a file lock is released, I don't think the same kind of decision making occurs in the kernel. – Lonnylonslesaunier