Multiprocessor programming: lock-free stacks
In preparation for my upcoming Concurrent Systems exam, I am trying to complete some questions from the textbook "The Art of Multiprocessor Programming". One question is bugging me:

Exercise 129: Does it make sense to use the same shared BackOff object for both pushes and pops in our LockFreeStack object? How else could we structure the backoff in space and time in the EliminationBackOffStack?

This question bugs me because the first thing that comes to my mind is that sharing should be fine: all a backoff object does is make a thread wait, so why not share it? The second part of the question totally eludes me, and any help is most welcome.

The code for the LockFreeStack:

import java.util.concurrent.atomic.AtomicReference;

public class LockFreeStack<T> {

    // Top of the stack; null means the stack is empty.
    AtomicReference<Node> top = new AtomicReference<Node>(null);

    static final int MIN_DELAY = ...;
    static final int MAX_DELAY = ...;
    Backoff backoff = new Backoff(MIN_DELAY, MAX_DELAY);

    // Single CAS attempt: succeeds only if top has not changed since we read it.
    protected boolean tryPush(Node node) {
        Node oldTop = top.get();
        node.next = oldTop;
        return top.compareAndSet(oldTop, node);
    }

    public void push(T value) {
        Node node = new Node(value);
        while (true) {
            if (tryPush(node)) {
                return;
            } else {
                backoff.backoff();   // CAS failed: back off before retrying
            }
        }
    }

    // pop()/tryPop() and the Node class are analogous and omitted here.
}
Abomasum answered 29/10, 2010 at 13:22 Comment(3)
What does the Backoff.backoff() method actually do? What is the "EliminationBackOffStack" mentioned? Please supply some more information on those areas.Perri
Could you provide some information--or even code--for the Backoff class as well? Is it doing some kind of exponential backoff?Textbook
books.google.com/…Demand
Not sure if any of my ramblings help, but I'll press the "Post" button anyway.

The answer depends on the implementation of backoff(). Since the goal here is to avoid synchronization, a backoff object won't hold much state of its own -- maybe some per-thread state in a ThreadLocal. If the backoff algorithm uses a randomizer, that randomizer has to be safe to call from many threads as well. So most likely you are able to share one object between pop and push -- the real question is whether you want to. Since push and pop are both trying to alter the top reference, it would be good if the backoff gave successive threads very different delays. Is there more contention around push or around pop? Do we need to back off more aggressively for one method than for the other? If this is a generic stack, you won't know.
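
For reference, the Backoff from the book is (as best I remember) a plain bounded exponential backoff along the lines of the sketch below. Treat it as an illustration rather than the exact class, since the question doesn't show the real code; I've also swallowed the InterruptedException so it matches the push() posted above, which doesn't declare it.

import java.util.Random;

public class Backoff {
    final int minDelay, maxDelay;   // both assumed to be >= 1
    int limit;                      // current ceiling; per-instance mutable state
    final Random random;            // also per-instance, and shared if the object is shared

    public Backoff(int min, int max) {
        minDelay = min;
        maxDelay = max;
        limit = minDelay;
        random = new Random();
    }

    public void backoff() {
        int delay = random.nextInt(limit);      // random wait below the current ceiling
        limit = Math.min(maxDelay, 2 * limit);  // every call (every failed CAS) doubles the ceiling
        try {
            Thread.sleep(delay);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the interrupt flag and return
        }
    }
}

The detail that matters for the exercise is that limit and random are per-instance state: one shared object means pushes and pops feed the same ceiling and the same Random, while separate objects keep their contention histories apart.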

In terms of how the backoff could be restructured, I'm also not sure. Could you use a successful push or pop as an opportunity to throttle the backoff times back down? How about random backoff waits versus a sequence of prime-numbered delays handed out per thread via a ThreadLocal?
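
To make the "throttle back on success" idea concrete, here is a minimal sketch of an adaptive variant; the names (AdaptiveBackoff, onSuccess) are mine, not from the book. The ceiling doubles while CASes keep failing and is halved again after a success, so delays shrink once contention passes.

import java.util.concurrent.ThreadLocalRandom;

public class AdaptiveBackoff {
    private final int minDelay, maxDelay;   // both assumed to be >= 1
    private volatile int limit;             // races on this hint are tolerated on purpose

    public AdaptiveBackoff(int min, int max) {
        minDelay = min;
        maxDelay = max;
        limit = min;
    }

    public void backoff() {
        int delay = ThreadLocalRandom.current().nextInt(limit);
        limit = Math.min(maxDelay, 2 * limit);    // failed CAS: widen the window
        try {
            Thread.sleep(delay);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public void onSuccess() {
        limit = Math.max(minDelay, limit / 2);    // successful CAS: throttle back
    }
}

push() would then call backoff.onSuccess() just before returning from a successful tryPush(), and pop() would do the same.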

Selfreliance answered 3/11, 2010 at 22:5 Comment(0)
Approaching the first question from a synchronization standpoint, I believe it would make sense to use the same BackOff object for both pushes and pops, regardless of how that class is implemented. The reason is that both operations modify the stack, so both must maintain a consistent state across the removal and addition of items. If, however, we were only doing a peek (looking at the top element of the stack), then we could have more than one BackOff object, since reads do not need to take any lock on the data structure in question. The second question is going to require the code for that class to be posted.

Stereotypy answered 3/11, 2010 at 22:14 Comment(0)
Using a backoff is overkill in this situation.

Any real-world application should be spending more time processing the nodes than pushing them onto and popping them off the stack; by comparison the stack operation itself is very short. It follows that two threads are highly unlikely to be accessing the stack at the same time. It will still happen sometimes, though, so you need code which is correct.

public void push(T value) {
    Node node = new Node(value);
    while (!tryPush(node))
        backoff.backoff();
}

IMHO: Can be shortened to

public void push(T value) {
    Node node = new Node(value);
    while (!tryPush(node))
        ;   // spin: retry the CAS until it succeeds
}
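
If the bare spin bothers you (see the busy-wait comment below), a cheap middle ground is to yield between attempts; this is my own suggestion, not something from the book:

public void push(T value) {
    Node node = new Node(value);
    while (!tryPush(node)) {
        // Give up the time slice instead of burning CPU in a tight loop;
        // Thread.onSpinWait() (Java 9+) is an even lighter-weight hint.
        Thread.yield();
    }
}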
Marler answered 4/11, 2010 at 19:10 Comment(2)
Doesn't that just result in a busy wait, causing the thread to waste CPU time spinning while trying to perform the stack op? You're right that stack ops shouldn't take long, but if it's a bounded stack, for example, and the consumer takes a long time to process the last thing it popped off, it will be a long while before the stack can receive another element.Waller
Never mind, I see now from the example that it's unlikely to be bounded. It's merely a question of mutual exclusion for the stack operation.Waller
