CDI | Application / Dependent Scope | Memory Leak - javax.enterprise.inject.Instance<T> Not Garbage Collected
I am using Instance<T> as a lazy/dynamic injector in a TomEE Java application, and I have noticed a memory leak. This is a first for me, so it was surprising to see the leak explicitly warned about in the Java EE API itself:

package javax.enterprise.inject;

public interface Instance<T> extends Iterable<T>, Provider<T>
{
    /**
     * Destroy the given Contextual Instance.
     * This is especially intended for {@link javax.enterprise.context.Dependent} scoped beans
     * which might otherwise create mem leaks.
     * @param instance
     */
    public void destroy(T instance);
}

Now this is most likely caused by a clash between @ApplicationScoped and Instance<T>. I've provided an example of how my classes are layered. Notice the nested Instance<T>, which provides dynamic injection of tasks.

Outer Class

@ApplicationScoped
public class MessageListenerImpl implements MessageListener {

    @Resource(name="example.mes")
    private ManagedExecutorService mes;

    @Inject @Any
    private Instance<Worker<ExampleObject>> workerInstance;

    // ...

    @Override
    public void onMessage(Message message) {
        ExampleObject eo = new ExampleObject();
        Worker<ExampleObject> taskWorker = workerInstance.get();
        taskWorker.setObject(eo);
        mes.submit(taskWorker);
    }

    // ...
}

Inner Class

public class Worker<T> implements Runnable {

    @Inject @Any
    private Instance<Task> taskInstance;

    @Setter
    private T object;

    // ...

    @Override
    public void run() {
        Task t = taskInstance.get();
        t.setObject(object);
        t.doTask();
        // Instance destruction, manual cleanup tried here.
    }

    // ...

}

Interface

public interface Task<T> {
    void doTask();
    void setObject(T obj);
}

The classes that leak without calling destroy(T instance) are ExampleObject, Worker<T>, and the implementation of Task<T>. To keep the async design, I tried passing the Instance handle of Worker<T> into the worker itself (probably a bad idea, but I tried anyway), calling destroy(T instance) there and setting ExampleObject to null. This cleaned up the Task<T> implementation and ExampleObject, but not Worker<T>.
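
Roughly, that self-destruction attempt looked like the following (a simplified sketch; the selfInstance field and how it gets populated are my shorthand for "passing the instance within its instance", not the exact code):

public class Worker<T> implements Runnable {

    @Inject @Any
    private Instance<Task> taskInstance;

    // Sketch: the worker's own Instance handle, set by the caller,
    // so run() can attempt to destroy the worker itself.
    private Instance<Worker<T>> selfInstance;

    @Setter
    private T object;

    @Override
    public void run() {
        Task t = taskInstance.get();
        try {
            t.setObject(object);
            t.doTask();
        } finally {
            taskInstance.destroy(t);    // this reclaimed the Task implementation
            object = null;              // this reclaimed ExampleObject
            selfInstance.destroy(this); // this did NOT reclaim the Worker
        }
    }
}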

Another test I tried was a synchronous design within MessageListenerImpl (i.e. removing Worker<T> and using Task<T> directly) as a fallback, calling destroy(T instance) to clean up. This STILL left the leak, which leads me to believe it has to be the clash between @ApplicationScoped and Instance<T>.

If there is a way to keep the async design while achieving no memory leaks, please let me know. Really appreciate feedback. Thanks!

Testator answered 20/3, 2018 at 15:49

Indeed, this is a weakness of Instance: it may leak. This article has a good explanation. (As underlined in the comment from Tattle below, this is not an intrinsic bug of Instance, but wrong usage/design.)

Your Worker declares no scope, thus it is @Dependent scoped. This means it is created anew for each injection. Instance.get() is essentially an injection, so a new dependent-scoped object is created with each invocation of get().

The specification says that dependent-scoped objects are destroyed when their "parent" (meaning the object they are injected into) gets destroyed; but application-scoped beans live as long as the application, keeping all dependent-scoped beans they created alive. This is the memory leak.
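
To make the mechanics concrete, here is a minimal sketch (the class and method names are hypothetical, not from the question):

@ApplicationScoped
public class LeakDemo {

    @Inject @Any
    private Instance<Worker<ExampleObject>> workerInstance;

    public void demonstrate() {
        Worker<ExampleObject> a = workerInstance.get(); // new @Dependent instance
        Worker<ExampleObject> b = workerInstance.get(); // another new instance, a != b
        // The container keeps a reference to both so it can run their
        // destruction callbacks when their parent (this bean) is destroyed.
        // For an @ApplicationScoped parent that means application shutdown:
        // every get() without a matching destroy() accumulates until then.
    }
}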

To mitigate, do as described in the linked article:

  1. Call workerInstance.destroy(taskWorker) as soon as you do not need the taskWorker anymore, preferably within a finally block:

    @Override
    public void onMessage(Message message) {
        ExampleObject eo = new ExampleObject();
        Worker<ExampleObject> taskWorker = workerInstance.get();
        try {
            taskWorker.setObject(eo);
            mes.submit(taskWorker);
        }
        finally {
            workerInstance.destroy(taskWorker);
        }
    }
    

    EDIT: Some extra thoughts on this option: What happens if, in the course of time, the implementation of the injected bean changes from @Dependent to e.g. @ApplicationScoped? If the destroy() call is not explicitly removed, which is not something an unsuspecting developer will do in a normal refactoring, you will end up destroying a "global" resource. CDI will take care of recreating it, so no functional harm will come to the application. Still, a resource intended to be instantiated only once will be constantly destroyed and recreated, which might have non-functional (performance) implications. So, from my point of view, this solution leads to unnecessary coupling between the client and the implementation, and I would rather not go for it.

  2. If you are only using the Instance for lazy loading, and there is only one instance, you may want to cache it (see the cleanup sketch after this list):

    ...
    private Worker<ExampleObject> worker;
    
    private Worker<ExampleObject> getWorker() {
        if( worker == null ) {
            // guard against multi-threaded access if environment is relevant - not shown here
            worker = workerInstance.get();
        }
        return worker;
    }
    
    ...
    
        Worker<ExampleObject> taskWorker = getWorker();
    
    ...
    
  3. Give your Worker a scope, so that the relevant scope, rather than its parent, is responsible for its lifecycle.
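
Regarding option 2: the single cached @Dependent instance is still only destroyed together with its parent. If you want that cleanup to be explicit, a minimal sketch (assuming the worker and workerInstance fields from the snippet above):

@PreDestroy
private void cleanup() {
    // Explicitly release the cached worker when the owning
    // @ApplicationScoped bean goes away (i.e. at application shutdown).
    if (worker != null) {
        workerInstance.destroy(worker);
    }
}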

Vanderpool answered 20/3, 2018 at 16:49 Comment(1)
Very well written answer. I just want to underline that the behaviour (leaking) is not a bug but wrong usage/design. When using Instance you have to make sure your instances are destroyed, especially when it comes to @Dependent scope. – Tattle

So, I found a great implementation (source) that satisfied my use case. Using BeanManager allowed me to control the lifecycle of the task bean. I avoided Worker<T> and went with CompletableFuture<T> instead (with minor changes to the Task<T> interface to allow the task to return a value). This let me clean up the task bean and handle any exceptions from the task asynchronously. A rough example is shown below. Thanks for the replies, and I hope this helps anyone else struggling with this issue!

Outer Class

@ApplicationScoped
public class MessageListenerImpl implements MessageListener {

    @Resource(name="example.mes")
    private ManagedExecutorService mes;

    @Inject
    private BeanManager bm;

    // ...

    @Override
    public void onMessage(Message message) {
        CreationalContext<MyTask> ctx = bm.createCreationalContext(null);
        Bean<?> bean = bm.resolve(bm.getBeans(MyTask.class));
        MyTask task = (MyTask) bm.getReference(bean, MyTask.class, ctx);
        task.setObject("Hello, Task!");
        Utilities.doTask(mes, ctx, task);
    }

    // ...
}

Implemented Task

public class MyTask implements Task<String, Boolean> {

    private String obj;

    // ...

    @Override
    public Boolean doTask() {
        System.out.println(obj);
        return Boolean.TRUE;
    }

    @Override
    public void setObject(String obj) {
        this.obj = obj;
    }

    // ...
}

CompletableFuture Utility Method

public final class Utilities {

    private Utilities() {
    }

    public static void doTask(ManagedExecutorService mes, CreationalContext<?> ctx, Task<?, Boolean> task) {
        CompletableFuture.supplyAsync(task::doTask, mes)
            .exceptionally((e) -> {
                System.out.println("doTask : FAILURE : " + e.getMessage());
                return Boolean.FALSE;
            })
            .thenApplyAsync((b) -> {
                System.out.println("Releasing Context");
                ctx.release();
                return b;
            }, mes); // keep the continuation on the managed executor, not the common pool
    }
}
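
A note on the flow above: exceptionally() turns a failed doTask() into a Boolean.FALSE result, so the subsequent thenApplyAsync() stage runs on both the success and the failure path. That means ctx.release() is always invoked and the task bean is cleaned up either way.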
Testator answered 26/3, 2018 at 15:32
