What will be the critical section code for a shared queue accessed by two threads?
Suppose we have a shared queue (implemented using an array) which two threads can access, one for reading data from it and the other for writing data to it. Now I have a synchronization problem. I'm implementing this using the Win32 APIs (EnterCriticalSection etc.).

But my curiosity is what will be the critical section code in enqueue and dequeue operations of the queue?

Is it just because two threads are using a shared resource? Here is why I'm not able to see any problem: separate front and rear indices are maintained, so when ReaderThread reads it reads from the front, and when WriterThread writes it writes to the rear.

What potential problems can occur?

Thomasenathomasin answered 27/7, 2011 at 4:27 Comment(2)
What if you only have 1 entry therefore tail = head?Aluminize
It should be fine. If you have the section locked, the reader will get the data without the writer adding data on top of it, and the reader won't remove data and leave the next empty-slot offset incorrect for the writer. Actually, as long as the writer never changes the front, you don't need a critical section at all, except for the part where the reader removes what it has already read while the writer is simultaneously trying to write: there the reader could change the next offset out from under the writer because the two weren't synchronized.Baize

For a single-producer/single-consumer circular queue implementation, locks are actually not required. Simply enforce two conditions: the producer cannot write into the queue when it is full, and the consumer cannot read from it when it is empty. Also, the producer always writes through a tail pointer pointing to the first empty slot in the queue, and the consumer always reads through a head pointer pointing to the first unread slot.

Your code can look like the following example. (Note: I'm assuming that in an initialized queue tail == head, and that both pointers are declared volatile so that an optimizing compiler does not re-order the sequence of operations within a given thread. On x86 no memory barriers are required, thanks to the architecture's strong memory consistency model, but on architectures with weaker consistency models memory barriers would be needed.)

queue_type::pointer queue_type::next_slot(queue_type::pointer ptr);

bool queue_type::enqueue(const my_type& obj)
{
    if (next_slot(tail) == head)   // queue is full: one slot is always kept empty
        return false;

    *tail = obj;                   // write the data first...
    tail = next_slot(tail);        // ...then publish it by advancing tail

    return true;
}

bool queue_type::dequeue(my_type& obj)
{
    if (head == tail)              // queue is empty
        return false;

    obj = *head;                   // read the data first...
    head = next_slot(head);        // ...then release the slot by advancing head

    return true;
}

The function next_slot simply advances the head or tail pointer to the next slot in the array, wrapping around at the end of the array.
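
A minimal sketch of what next_slot might look like for a fixed-size array-backed queue (the `ring` struct, its `buffer` member, and `Capacity` are my assumptions for illustration, not part of the original answer):

```cpp
#include <cstddef>

// Hypothetical sketch: a circular buffer of Capacity slots backing the queue.
// next_slot advances a pointer by one slot and wraps to the start of the array.
template <typename T, std::size_t Capacity>
struct ring
{
    T buffer[Capacity];

    T* next_slot(T* ptr)
    {
        ++ptr;
        if (ptr == buffer + Capacity)   // walked past the last slot: wrap around
            ptr = buffer;
        return ptr;
    }
};
```

Note that one slot is deliberately left empty so that `next_slot(tail) == head` can distinguish "full" from "empty"; the queue therefore holds at most Capacity - 1 elements.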

Finally, synchronization is guaranteed in the single-producer/single-consumer model because we do not increment the tail pointer until the data has been written into the slot it points to, and we do not increment the head pointer until the data has been read from the slot it points to. Therefore a call to dequeue will not return true until at least one call to enqueue has been made, and the tail pointer can never overwrite the head pointer because of the check in enqueue. Additionally, only one thread increments the tail pointer and only one thread increments the head pointer, so there are no shared writes to the same pointer that would create synchronization problems requiring a lock or some kind of atomic operation.

Shulamith answered 27/7, 2011 at 4:48 Comment(6)
Too bad, I could give +1 only! Good answer.Thomasenathomasin
This is cool if I do not want to wait while the queue is "busy". But to make sure obj is actually pushed into the queue I have to write while (!queue.enqueue(obj));, and that is not so cool.Squinteyed
@sad_man: you can block the producer/consumer on enqueue/dequeue in case the queue is full/empty and use events to wake up the respective thread.Allness
@ChrisWue: I think using events adds almost the same (maybe even more) complexity to the problem. Also, using a critical section is a more performant solution.Squinteyed
If you are running in a multi-CPU environment, spin-locks will by far be the fastest solution. A spin-lock is nothing more than a form of busy-waiting on an atomic variable. This avoids slower locks that must be arbitrated by the kernel, and busy-waiting is only required when the queue is empty or full. For instance, if this queue were used as a message queue in a fast data pipeline where you need the fastest enqueue and dequeue operations to reduce latency, you would hardly ever hit a prolonged busy-wait ... you definitely would never want to touch the kernel.Shulamith
C++ has a bool type. Use it.Bridewell
