Do synchronized Java methods queue calls?

I've read the Oracle documentation about synchronized methods and how they introduce a lock into a multithreaded program, but one thing is unclear to me: are subsequent calls to an already locked method queued?

Let's say we have a class:

class Astore {
    ...
    public synchronized void a() {
        doSomethingTimeConsuming();
    }
    ...
}

and three threads that call astore.a():

final Astore astore = new Astore();

Thread t1 = new Thread(new Runnable() {
    public void run() { 
        astore.a();
        doSomethingElse();
        astore.a();
    }
});
t1.start();

Thread t2 = new Thread(new Runnable() {
    public void run() {
        astore.a();
    }
});
t2.start();

Thread t3 = new Thread(new Runnable() {
    public void run() {
        astore.a();
    }
});
t3.start();

I'm not sure whether I've constructed the example correctly, but the point is that three threads call the same synchronized method on the same object at almost the same time.

Will the order of operations be stored in a queue, so that the order of invocation is:

  1. t1 (as it was called first)
  2. t2 (called after t1)
  3. t3
  4. t1 again (it was already busy doing something with astore while the other threads requested the method)

Can I safely assume that this will be the behavior, or is there no guarantee of this order (or, even worse, might t2 and t3 get called in random order)?

What is the best practice when multiple threads may need to share data (for instance a socket server with one thread for each active connection - I don't want 6 clients to time out while waiting for the first one to finish a huge upload to a shared data structure)?

Eulogium answered 20/5, 2015 at 10:22 Comment(11)
"What is the best practice when multiple threads may need to share data (for instance a socket server with one thread for each active connection)" - that is the billion-dollar question. Entire libraries can be filled with books on just that one subject. A more specific question is more likely to be answered.Relique
In the specific case you asked about, you need to reduce your chunk size and/or separate the upload and import processes.Embattle
...or if the data structure is huge, you may want to store it in a database anyway, at which point we're talking about transaction handling.Relique
"I don't want 6 clients to time out while waiting for the first one to finish a huge upload to a shared datastructure" - I assume you want to prevent calamities. If 6 clients have to timeout in order to achieve that goal, then so be it. Now if those timeouts CAN happen, you have architectural design issues not to be blamed on the fact that you need synchronization.Aideaidedecamp
the doSomethingTimeConsuming() is a mistake. One of the most important guidelines you can follow is to keep your synchronized blocks as small as possible. The real art of multi-threaded programming is to design your program so that its threads do not waste time waiting for one another when there is work that they could be doing. If your program does any I/O inside a synchronized block, or if a synchronized block updates more variables than you can count on your fingers, then you might want to re-consider your design.Counterforce
Data is stored in a database and accessed through a dedicated server program. The issue is that I wanted to be sure that when multiple clients start a transfer, the data arrives in the order it was sent (so it can be cross-checked against a timestamp). The structure is small, a few characters at a time, like "P1 move up 1", "P1 move right 1", "P1 shoot". I basically want to make sure the synchronization won't suddenly invert the order of execution (kind of hard to explain within the character limit).Eulogium
It's a mistake to assume that the t1 thread will call astore.a() before either of the other two threads calls it. Each of the three start() calls creates a new thread and makes the new thread runnable. But, "runnable" is not the same thing as "running". It is entirely possible for all three of the start() calls to complete before any of the new threads actually begins to run, and it is entirely up to the operating system to choose the order in which they will get to run.Counterforce
@jameslarge - I didn't know the thread would keep the lock. I was hoping it would release the lock as soon as it exits the a() method, so other threads could use a() without waiting for thread t1 to finish doSomethingTimeConsuming(), and only reacquire the lock when the large method is complete. To put it simply, I was wondering if it queues calls on its own so that I wouldn't have to worry about it. Kind of like garbage collection in Java.Eulogium
When multiple clients are independently sending data, there is no "order in which it was sent". The only way that data from multiple clients can have a meaningful order is if the order is defined by some protocol, and the clients coordinate with each other to obey the protocol.Counterforce
Sounds like what you want is atomic transactions. If your database provides atomic transactions, then write your client threads to use them, and re-try when a transaction fails. If the database does not provide atomic transactions, then I would define an AtomicRequest object that client threads can place into a queue, and I would have a separate thread that consumes AtomicRequests from the queue and updates the database.Counterforce
The clients only send standard messages via a socket connection. It's the server that has a thread for each client. I can queue database requests myself; the problem was that when multiple clients write to their sockets, the server side would have to wait until the queue is theirs. I guess I will have to write some extra class that enqueues messages to be saved in the database.Eulogium
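
A minimal sketch of such a queueing class, for illustration only: the names used here (MessageWriter, enqueue, saveToDatabase) are made up and not from the question. The idea, as suggested in the comments above, is that per-connection threads hand messages to a BlockingQueue and return immediately, while a single writer thread drains the queue and does the slow database work in arrival order.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class MessageWriter {
    // Thread-safe queue: connection threads put messages, one writer thread takes them
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    void start() {
        Thread writer = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        String message = queue.take(); // blocks until a message is available
                        saveToDatabase(message);       // only this thread touches the database
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt(); // allow shutdown
                }
            }
        });
        writer.setDaemon(true);
        writer.start();
    }

    // Called from the per-connection threads; returns quickly, so clients do not block each other
    void enqueue(String message) throws InterruptedException {
        queue.put(message);
    }

    private void saveToDatabase(String message) {
        // placeholder for the actual database write
    }
}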

No, it will not queue calls to the method.

If the call is made from a thread that already holds the lock (a recursive call, for example), then it will just proceed as normal.

Other threads that attempt to acquire the lock in order to make the call will block there and wait until the lock is released.

The order is not guaranteed; use a fair ReentrantLock if that is important.
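
For illustration, a minimal sketch of a fair-lock version of the Astore class from the question; the constructor argument true is what requests fairness, so waiting threads acquire the lock roughly in arrival order, at some throughput cost:

import java.util.concurrent.locks.ReentrantLock;

class Astore {
    // true = fair mode: the longest-waiting thread tends to acquire the lock next
    private final ReentrantLock lock = new ReentrantLock(true);

    public void a() {
        lock.lock();
        try {
            doSomethingTimeConsuming();
        } finally {
            lock.unlock(); // always release, even if an exception is thrown
        }
    }

    private void doSomethingTimeConsuming() {
        // placeholder for the slow work from the question
    }
}

Note that even a fair lock only orders threads that are already waiting; it cannot control which of t1, t2 and t3 reaches the lock first.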

Edify answered 20/5, 2015 at 10:25 Comment(7)
So the other threads would wait... like one would wait in a queue?Exonerate
No, there is no ordered queue. I edited my answer to clarify.Edify
@Exonerate They would wait but not in an orderly queue, just loitering around, and the scheduler is free to pick any of them when the monitor becomes available.Relique
@Edify Great, that's the clarification I was looking for (for the answer that is). You could also mention that some java.util.concurrent classes provide fair queuing functionality to make this a great answer.Exonerate
You mean, apart from ReentrantLock?Edify
So in this example, t1 would call the method twice; afterwards I have no way of knowing which of the threads will get called next. Thank you for the clarification. I will check ReentrantLock - I guess making a custom thread to queue and schedule things myself would not be good practice?Eulogium
You do not even have a guarantee that t1 will enter a() first. You have NO guarantee regarding the order of entry into that synchronized block. If the order of the calls is important, use a ReentrantLock as mentioned above, or add them to a queue, or let the same thread handle all calls (either manually, or with a single-threaded Executor). Regarding the huge blocking upload: if you actually need to synchronize around the shared object, then you will just have to live with the blocking behaviour. Alternatives are non-blocking IO, or locking a sub-resource of the shared state.Edify
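
A rough sketch of the single-threaded Executor idea from the comment above; AstoreGateway and submitCall are hypothetical names used only for illustration. A single-threaded executor runs submitted tasks one at a time, in submission order, so calls to astore.a() are serialized without relying on lock ordering:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class AstoreGateway {
    private final Astore astore = new Astore();

    // One worker thread executes submitted tasks sequentially, in submission order
    private final ExecutorService executor = Executors.newSingleThreadExecutor();

    void submitCall() {
        executor.submit(new Runnable() {
            public void run() {
                astore.a(); // only the executor's single worker thread ever calls this
            }
        });
    }

    void shutdown() {
        executor.shutdown(); // finish queued tasks, then stop the worker thread
    }
}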

If you use a ReentrantLock instead of a synchronized block, there is a fairness parameter you can set so that the thread that has been waiting longest gets the lock when it is released by another thread. You can read more here.

Fullblooded answered 20/5, 2015 at 10:30 Comment(0)

folkol is correct.

Actually, it depends on the machine's design (CPU) and its scheduler.

The callers have to wait until the resource is released. The scheduler then picks one of the waiting callers arbitrarily and lets it acquire the lock (because of synchronized), holding it until that caller finishes its job.

Arraignment answered 20/5, 2015 at 10:37 Comment(0)
