What could be the cause of RejectedExecutionException
I am getting this exception on my Tomcat server (running Liferay):

java.util.concurrent.RejectedExecutionException

My class looks like this:

public class SingleExecutor extends ThreadPoolExecutor {
  public SingleExecutor(){
    super(1, 1,0L, TimeUnit.MILLISECONDS,new LinkedBlockingQueue<Runnable>());
  }

  @Override
  public void execute(Runnable command) {
    if(command instanceof AccessLogInsert){
        AccessLogInsert ali = (AccessLogInsert)command;
        ali.setConn(conn);
        ali.setPs(ps);
    }
    super.execute(command);
  }
}

I get this exception on the line super.execute(command);. This error can occur when the queue is full, but the default LinkedBlockingQueue capacity is Integer.MAX_VALUE (2^31 - 1), and I am sure there are nowhere near that many commands waiting.

Everything is stable at first, but after I redeploy a war it starts occurring. This class is not part of the war; it is in a jar in tomcat/lib.

Do you have any idea why this happens and how to fix it?

Apeman answered 18/11, 2011 at 13:33 Comment(0)

From the ThreadPoolExecutor JavaDoc (emphasis mine):

New tasks submitted in method execute(java.lang.Runnable) will be rejected when the Executor has been shut down, and also when the Executor uses finite bounds for both maximum threads and work queue capacity, and is saturated. In either case, the execute method invokes the RejectedExecutionHandler.rejectedExecution(java.lang.Runnable, java.util.concurrent.ThreadPoolExecutor) method of its RejectedExecutionHandler. Four predefined handler policies are provided:

  1. In the default ThreadPoolExecutor.AbortPolicy, the handler throws a runtime RejectedExecutionException upon rejection.
  2. In ThreadPoolExecutor.CallerRunsPolicy, the thread that invokes execute itself runs the task. This provides a simple feedback control mechanism that will slow down the rate that new tasks are submitted.
  3. In ThreadPoolExecutor.DiscardPolicy, a task that cannot be executed is simply dropped.
  4. In ThreadPoolExecutor.DiscardOldestPolicy, if the executor is not shut down, the task at the head of the work queue is dropped, and then execution is retried (which can fail again, causing this to be repeated.)

It is possible to define and use other kinds of RejectedExecutionHandler classes. Doing so requires some care especially when policies are designed to work only under particular capacity or queuing policies.
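The handler is selected via the last ThreadPoolExecutor constructor argument (or setRejectedExecutionHandler). A minimal sketch, with a deliberately tiny hypothetical pool, swapping the default AbortPolicy for CallerRunsPolicy so that a saturated pool runs overflow tasks on the submitting thread instead of throwing:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class HandlerDemo {
    public static void main(String[] args) throws InterruptedException {
        // Deliberately tiny pool (1 thread, queue of 1) so it saturates quickly.
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS,
                new ArrayBlockingQueue<>(1),
                new ThreadPoolExecutor.CallerRunsPolicy());
        AtomicInteger ran = new AtomicInteger();
        for (int i = 0; i < 20; i++) {
            // When the queue is full, CallerRunsPolicy runs the task
            // right here on the submitting thread instead of rejecting it.
            pool.execute(ran::incrementAndGet);
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(ran.get()); // 20: every task ran, none was rejected
    }
}
```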

Presumably therefore, reloading the war triggers a shutdown of the Executor. Try putting the relevant libraries in the war, so that Tomcat's ClassLoader has a better chance of correctly reloading your app.
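The shutdown case is easy to reproduce in isolation; with the default AbortPolicy, any execute() after shutdown() is rejected:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.RejectedExecutionException;

public class ShutdownDemo {
    public static void main(String[] args) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        pool.shutdown(); // e.g. what a webapp reload can trigger behind your back
        try {
            pool.execute(() -> {});
        } catch (RejectedExecutionException e) {
            // Default AbortPolicy: tasks submitted after shutdown are rejected.
            System.out.println("rejected after shutdown");
        }
    }
}
```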

Williamswilliamsburg answered 18/11, 2011 at 13:51 Comment(4)
Last part of the answer is nice one.Sirkin
"New tasks submitted in method execute(java.lang.Runnable) will be rejected when the Executor has been shut down." This gave rise to a bug in my code, which I resolved by sleeping the thread for 500ms after shutdown (which may not be necessary), then setting the scheduler to null, so that next time a task needs to be run, the method in question checks to see if the scheduler is null. If it is, a new one is created. Thus the rejection for the reason of a shutdown is eliminated.Francenefrances
@AgiHammerthief sleeping is not necessary, but proper concurrency control is. Further, while your “solution” may stop crashes, it’s just hiding what sounds like a big resource ownership problem.Williamswilliamsburg
@Williamswilliamsburg very nicely explained. But suppose it never happens to my app after proper testing. Happened just once in production. Can it be caused by some other external factor like DB server down or Kafka server issue. Is increasing max-pool and queue capacity a solution to this?Minatory

Just to add to OrangeDog's excellent answer, the contract of an Executor is indeed such that its execute method will throw RejectedExecutionException when the executor is saturated (i.e. there is no space in the queue).

However, it would have been useful if it blocked instead, automatically waiting until there is space in the queue for the new task.

With the following custom BlockingQueue it's possible to achieve that:

public final class ThreadPoolQueue extends ArrayBlockingQueue<Runnable> {

    public ThreadPoolQueue(int capacity) {
        super(capacity);
    }

    @Override
    public boolean offer(Runnable e) {
        // ThreadPoolExecutor enqueues tasks via offer(), which normally
        // fails immediately when the queue is full. Delegating to the
        // blocking put() makes the producer wait for space instead.
        try {
            put(e);
        } catch (InterruptedException e1) {
            Thread.currentThread().interrupt();
            return false; // task is rejected; the pool applies its handler
        }
        return true;
    }

}

This essentially implements backpressure, slowing the producer down whenever the executor saturates.

Use it as:

int n = Runtime.getRuntime().availableProcessors();
// Core size must equal max size here: since offer() always blocks and
// returns true, the pool would otherwise never grow past its core threads.
ThreadPoolExecutor executor = new ThreadPoolExecutor(n, n, 1, TimeUnit.MINUTES, new ThreadPoolQueue(n));
executor.allowCoreThreadTimeOut(true);
for (Runnable task : tasks) {
    executor.execute(task); // will never throw, nor will queue more than n tasks
}
executor.shutdown();
executor.awaitTermination(1, TimeUnit.HOURS);
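As a quick sanity check (a self-contained sketch that repeats the ThreadPoolQueue class from above, with hypothetical sizes), submitting far more tasks than the queue can hold blocks the producer rather than throwing, and every task still runs:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BackpressureDemo {
    // Same idea as the answer's queue: offer() delegates to the blocking put().
    static final class ThreadPoolQueue extends ArrayBlockingQueue<Runnable> {
        ThreadPoolQueue(int capacity) { super(capacity); }

        @Override
        public boolean offer(Runnable e) {
            try {
                put(e);
            } catch (InterruptedException ex) {
                Thread.currentThread().interrupt();
                return false;
            }
            return true;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        int n = 2;
        ThreadPoolExecutor executor =
                new ThreadPoolExecutor(n, n, 1, TimeUnit.MINUTES, new ThreadPoolQueue(n));
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < 100; i++) {
            executor.execute(done::incrementAndGet); // blocks while saturated
        }
        executor.shutdown();
        executor.awaitTermination(1, TimeUnit.MINUTES);
        System.out.println(done.get()); // 100: nothing was rejected
    }
}
```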
Cozart answered 28/8, 2018 at 13:46 Comment(3)
CallerRunsPolicy already exists in order to achieve this, and gives better throughput than simply blocking.Williamswilliamsburg
No it doesn't because blocking the producer by letting it run a task is a pessimization; it will starve the pool while the producer is busy with that single task.Cozart
This conversation is the best part of this page. rustyx is right that it's a "pessimization". Considering the problem at hand, the solution should be one of the two above: CallerRunsPolicy exists and is cheap to use, but if it really leads to pool starvation, one might be better off implementing blocking.Brunhilda
