Looking for a lock-free RT-safe single-reader single-writer structure

I'm looking for a lock-free design conforming to these requisites:

  • a single writer writes into a structure and a single reader reads from this structure (this structure exists already and is safe for simultaneous read/write)
  • but at some point, the writer needs to change the structure: it then initialises, switches to and writes into a new structure (of the same type but with new content)
  • the next time the reader reads, it switches to this new structure (if the writer has switched to a new structure several times in between, the reader discards the intermediate structures, ignoring their data)
  • the structures must be reused, i.e. for RT purposes no heap memory allocation/free is allowed during the write/read/switch operations.

I have currently implemented a ring buffer containing multiple instances of these structures, but this implementation suffers from the fact that once the writer has used every structure in the ring buffer, there is no free slot left to switch to: the rest of the ring buffer holds data that the reader no longer needs to read but that the writer cannot reuse. As a consequence, a plain ring buffer does not fit this purpose.

Any idea (a name or a pseudo-implementation) for a lock-free design? Thanks for considering this problem.

Camphorate answered 25/2, 2010 at 15:7 Comment(7)
Too much emphasis makes no emphasis.Blackberry
@KennyTM: You are right. Edited.Camphorate
Good question. This is a common problem in real-time systems. I'm curious if there's an off-the-shelf way to do this.Maturate
You need atomic data read/write. Like transactions, am I right? Then you must use some synchronization primitive to guarantee atomic data access. You need two instances of your structure and a function like TryLock. If you want, I can describe this in more detail.Lilialiliaceous
TryLock isn't allowed: in the writer, the new data must never be dropped in case of failure.Camphorate
"But the rest of the ringbuffer contains some data which don't have to be read by the reader but can't be re-used by the writer." Why can't they be reused???Stuff
@moala: It's a TryLock :) So the writer can try to lock one of the two buffers. If locking the first buffer fails, it locks the second.Lilialiliaceous

You're on the right track.

Lock-free communication of fixed-size messages between threads/processes/processors

Fixed-size ring buffers can be used for lock-free communication between threads, processes or processors if there is one producer and one consumer. Some checks to perform:

  • the head variable is written only by the producer (as an atomic action after writing)
  • the tail variable is written only by the consumer (as an atomic action after reading)

Pitfall: introduction of a size variable or a buffer full/empty flag; these are typically written by both producer and consumer and hence will give you an issue.

I generally use ring buffers for this purpose. The most important lesson I've learned is that a ring buffer of N elements can never contain more than N - 1 elements; this way only the head and tail variables are needed, written by the producer and the consumer respectively.
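
For illustration, a minimal C++11 sketch of such a single-producer/single-consumer ring buffer (the class and member names here are just illustrative, not a fixed API):

#include <atomic>
#include <cstddef>

// Minimal single-producer/single-consumer ring buffer sketch.
// Capacity is N - 1: head == tail unambiguously means "empty",
// so no shared size variable or full/empty flag is needed.
template <typename T, std::size_t N>
class SpscRing
{
public:
    // Called only by the producer thread.
    bool push(const T& item)
    {
        const std::size_t head = head_.load(std::memory_order_relaxed);
        const std::size_t next = (head + 1) % N;
        if (next == tail_.load(std::memory_order_acquire))
            return false;                                   // full: would catch up with the tail
        buf_[head] = item;                                  // write the slot first...
        head_.store(next, std::memory_order_release);       // ...then publish it atomically
        return true;
    }

    // Called only by the consumer thread.
    bool pop(T& item)
    {
        const std::size_t tail = tail_.load(std::memory_order_relaxed);
        if (tail == head_.load(std::memory_order_acquire))
            return false;                                   // empty
        item = buf_[tail];                                  // read the slot first...
        tail_.store((tail + 1) % N, std::memory_order_release);  // ...then release it atomically
        return true;
    }

private:
    T buf_[N];
    std::atomic<std::size_t> head_{0};  // written only by the producer
    std::atomic<std::size_t> tail_{0};  // written only by the consumer
};

The producer only ever stores to head_ and the consumer only ever stores to tail_, which matches the checks listed above.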

Extension for large/variable-size blocks

To use buffers in a real-time environment, you can either use memory pools (often available in optimized form in real-time operating systems) or decouple allocation from usage. The latter fits the question, I believe.

Extended queue

If you need to exchange large blocks, I suggest using a pool of buffer blocks and communicating pointers to those buffers through a queue. So use an additional queue that carries buffer pointers. This way allocation can be done in the application (background) and your real-time portion has access to a variable amount of memory.

Application (background):

while (blockQueue.full != true)
{
    buf = allocate block of memory from heap or buffer pool
    msg = { .... , buf };
    blockQueue.Put(msg)          // hand a free buffer to the producer
}

Producer:
   pBuf = blockQueue.Get()       // take a free buffer
   // fill the buffer with data
   pQueue.Put(pBuf)              // publish it to the consumer

Consumer:
   if (pQueue.Empty == false)
   {
      msg = pQueue.Get()
      // use info in msg, with buf pointer
      // optionally indicate that buf is no longer used
   }
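
As a rough sketch only (reusing the SpscRing type from the sketch above; Block, freeQueue and fullQueue are illustrative names, not a fixed API), the whole scheme could look like this:

#include <cstddef>

struct Block { char payload[1024]; };        // fixed-size buffer block

const std::size_t kBlocks = 8;

Block pool[kBlocks];                         // allocated once, up front
SpscRing<Block*, kBlocks + 1> freeQueue;     // application/consumer -> producer
SpscRing<Block*, kBlocks + 1> fullQueue;     // producer -> consumer

// Application/background code; run before the producer and consumer threads start.
void setup()
{
    for (std::size_t i = 0; i < kBlocks; ++i)
        freeQueue.push(&pool[i]);            // hand every block to the producer
}

// Real-time producer step: no heap allocation, only pointer exchange.
void producerStep()
{
    Block* b = nullptr;
    if (freeQueue.pop(b))
    {
        // ... fill b->payload with new data ...
        fullQueue.push(b);                   // publish the block to the consumer
    }
}

// Real-time consumer step.
void consumerStep()
{
    Block* b = nullptr;
    if (fullQueue.pop(b))
    {
        // ... use b->payload ...
        freeQueue.push(b);                   // recycle the block
    }
}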
Silsbye answered 25/2, 2010 at 16:0 Comment(4)
I'm not following the explanation here. Needs some proofreading.Maturate
Thanks for having spent so much time on this. Your design suffers from the fact that if the consumer does not run, the producer and feeding threads will suffer from starvation, as for a ringbuffer, but consuming all of the heap memory.Camphorate
I've just explained the real time part; real time design is complex matter I'm working with daily. Up to you how you handle your exception handling (client not running etc). I can not decide for you (either discard data, perform exception handling or block). Since these are ring buffers, you can control the number of elements to be sufficient for the expected latency of the background task.Silsbye
Normally you'd have the writer read the read-index for full-detection, and the reader read the write-index for empty-detection. That still causes contention, but you don't need a size that's modified by both. Another option is to have a sequence counter in each bucket, which can avoid contention between reader and writer when they're not close to each other, as in the MPMC queue from liblfds (discussed in Lock-free Progress Guarantees). For single-producer single-consumer, you could maybe use that with the RMW increments being separate load and store.Lashonda

Here's one. The key points are that there are three buffers and the reader marks the buffer it is reading from as in use; the writer writes to one of the other two buffers. The risk of collision is minimal. Plus, this scales: just make the member arrays one element longer than the number of readers plus the number of writers.

class RingBuffer
{
public:
  RingBuffer() : lastFullWriteIndex(0)
  {
    // Initialize the elements of dataBeingRead to false
    for(unsigned int i=0; i<DATA_COUNT; i++)
    {
      dataBeingRead[i] = false;
    }
  }

  Data read()
  {
    // You may want to check that write has been called at least once here,
    // to prevent read from grabbing junk data. Else, initialize the elements
    // of dataArray to something valid.
    unsigned int indexToRead = lastFullWriteIndex;
    Data dataCopy;
    dataBeingRead[indexToRead] = true;    // reserve this slot while copying
    dataCopy = dataArray[indexToRead];
    dataBeingRead[indexToRead] = false;   // release the slot
    return dataCopy;
  }

  void write( const Data& dataArg )
  {
    unsigned int writeIndex(0);

    // Search for a slot that is not currently being read.
    // It's O(n), but plenty fast enough for small arrays.
    while( writeIndex < DATA_COUNT && true == dataBeingRead[writeIndex] )
    {
      writeIndex++;
    }

    dataArray[writeIndex] = dataArg;

    lastFullWriteIndex = writeIndex;      // publish the index of the freshest data
  }

private:
  static const unsigned int DATA_COUNT = 3;  // 1 reader + 1 writer + 1 spare
  unsigned int lastFullWriteIndex;           // index of the most recently written slot
  Data dataArray[DATA_COUNT];
  bool dataBeingRead[DATA_COUNT];
};

Note: The way it's written here, there are two copies to read your data. If you pass your data out of the read function through a reference argument, you can cut that down to one copy.
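
For example, a sketch of such a reference-argument read, using the same members as above:

  void read( Data& dataOut )
  {
    unsigned int indexToRead = lastFullWriteIndex;
    dataBeingRead[indexToRead] = true;    // reserve the slot
    dataOut = dataArray[indexToRead];     // single copy, straight into the caller's object
    dataBeingRead[indexToRead] = false;   // release the slot
  }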

Maturate answered 25/2, 2010 at 18:32 Comment(1)
What if lastFullWriteIndex in read() is changed by write() when write() is called in the middle of a read? Then you'll have two instances of dataBeingRead set to true.Lilialiliaceous
