Timer is evil!
Using a Timer, an executor, or any other mechanism that creates a thread or runnable object per request is a very bad idea. Please think twice before doing it: in anything close to a real environment you will quickly run into memory pressure. Imagine 1000 req/min; that means 1000 short-lived threads or workers per minute for the garbage collector to clean up. The solution I propose requires only one watchdog thread and will save you resources, time, and nerves.
Basically you do three steps:
- Put the request in a cache.
- Remove the request from the cache when it completes.
- Abort any request that does not complete within your limit.
Your cache, together with the watchdog thread, may look like this:
import org.apache.http.client.methods.*;
import java.util.*;
import java.util.concurrent.*;
import java.util.stream.*;

public class RequestCache {

    private static final long EXPIRE_IN_MILLIS = 300_000; // 5 minutes
    private static final Map<HttpUriRequest, Long> cache = new ConcurrentHashMap<>();
    private static final ScheduledExecutorService exe = Executors.newScheduledThreadPool(1);

    static {
        // run cleanup every minute; plain schedule() would run it only once
        exe.scheduleAtFixedRate(RequestCache::cleanup, 1, 1, TimeUnit.MINUTES);
    }

    public static void put(HttpUriRequest request) {
        cache.put(request, System.currentTimeMillis() + EXPIRE_IN_MILLIS);
    }

    public static void remove(HttpUriRequest request) {
        cache.remove(request);
    }

    private static void cleanup() {
        long now = System.currentTimeMillis();
        // find requests whose deadline has already passed
        List<HttpUriRequest> expired = cache.entrySet().stream()
                .filter(e -> e.getValue() <= now)
                .map(Map.Entry::getKey)
                .collect(Collectors.toList());
        // abort them and drop them from the cache
        expired.forEach(r -> {
            if (!r.isAborted()) {
                r.abort();
            }
            cache.remove(r);
        });
    }
}
And the following pseudo-code shows how to use the cache:
import org.apache.http.client.methods.*;

public class RequestSample {

    public void processRequest() {
        // create the request before the try block: if creation fails,
        // we never register it, and finally cannot see a null key
        // (ConcurrentHashMap.remove(null) throws NullPointerException)
        HttpUriRequest req = createRequest();
        try {
            RequestCache.put(req);
            execute(req);
        } finally {
            RequestCache.remove(req);
        }
    }
}
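The expiry logic above can be sanity-checked without pulling in HttpClient at all. Here is a minimal self-contained sketch of the same pattern; `Abortable`, `Task`, and `ExpiryDemo` are hypothetical stand-ins I introduce for the request type, not part of any library:

```java
import java.util.*;
import java.util.concurrent.*;
import java.util.concurrent.atomic.*;

public class ExpiryDemo {

    // hypothetical stand-in for an abortable HTTP request
    public interface Abortable {
        void abort();
        boolean isAborted();
    }

    public static class Task implements Abortable {
        private final AtomicBoolean aborted = new AtomicBoolean(false);
        public void abort() { aborted.set(true); }
        public boolean isAborted() { return aborted.get(); }
    }

    // task -> absolute deadline in epoch millis
    public static final Map<Abortable, Long> cache = new ConcurrentHashMap<>();

    // same cleanup rule as RequestCache: abort and evict past-deadline entries
    public static void cleanup() {
        long now = System.currentTimeMillis();
        cache.entrySet().removeIf(e -> {
            if (e.getValue() <= now) {
                if (!e.getKey().isAborted()) {
                    e.getKey().abort();
                }
                return true; // evict expired entry
            }
            return false;    // keep live entry
        });
    }

    public static void main(String[] args) {
        Task fresh = new Task();
        Task stale = new Task();
        cache.put(fresh, System.currentTimeMillis() + 60_000); // due in 1 minute
        cache.put(stale, System.currentTimeMillis() - 1);      // already overdue
        cleanup();
        System.out.println("stale aborted: " + stale.isAborted()); // true
        System.out.println("fresh aborted: " + fresh.isAborted()); // false
    }
}
```

In a real watchdog, `cleanup()` would be driven by the scheduled executor exactly as in `RequestCache`; the one-shot call here just makes the behavior observable in a test.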