Java 21 built-in HTTP client pins the carrier thread

I'm using Java Corretto 21.0.0.35.1 build 21+35-LTS, and the built-in Java HTTP client to retrieve a response as an InputStream. I'm making parallel requests using virtual threads, and for the most part, it's working well. However, occasionally, my testing encounters a "Pinning" event, as seen in the stack trace below.

I believed that the JDK had been updated to fully support virtual threads, and in my understanding, the HTTP client shouldn't be pinning a carrier thread at all. However, it appears that this pinning event occurs sometimes when reading and (automatically) closing an InputStream.

Is this behavior expected, or could it still be a bug in the JDK?

The code:

HttpResponse<InputStream> response = httpClient.send(request, HttpResponse.BodyHandlers.ofInputStream());
try (InputStream responseBody = response.body()) {
  return parser.parse(responseBody); // LINE 52 in the trace below
}

The trace:

* Pinning event captured:
  java.lang.VirtualThread.parkOnCarrierThread(java.lang.VirtualThread.java:687)
  java.lang.VirtualThread.park(java.lang.VirtualThread.java:603)
  java.lang.System$2.parkVirtualThread(java.lang.System$2.java:2639)
  jdk.internal.misc.VirtualThreads.park(jdk.internal.misc.VirtualThreads.java:54)
  java.util.concurrent.locks.LockSupport.park(java.util.concurrent.locks.LockSupport.java:219)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(java.util.concurrent.locks.AbstractQueuedSynchronizer.java:754)
  java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(java.util.concurrent.locks.AbstractQueuedSynchronizer.java:990)
  java.util.concurrent.locks.ReentrantLock$Sync.lock(java.util.concurrent.locks.ReentrantLock$Sync.java:153)
  java.util.concurrent.locks.ReentrantLock.lock(java.util.concurrent.locks.ReentrantLock.java:322)
  sun.nio.ch.SocketChannelImpl.implCloseNonBlockingMode(sun.nio.ch.SocketChannelImpl.java:1091)
  sun.nio.ch.SocketChannelImpl.implCloseSelectableChannel(sun.nio.ch.SocketChannelImpl.java:1124)
  java.nio.channels.spi.AbstractSelectableChannel.implCloseChannel(java.nio.channels.spi.AbstractSelectableChannel.java:258)
  java.nio.channels.spi.AbstractInterruptibleChannel.close(java.nio.channels.spi.AbstractInterruptibleChannel.java:113)
  jdk.internal.net.http.PlainHttpConnection.close(jdk.internal.net.http.PlainHttpConnection.java:427)
  jdk.internal.net.http.PlainHttpConnection.close(jdk.internal.net.http.PlainHttpConnection.java:406)
  jdk.internal.net.http.Http1Response.lambda$readBody$1(jdk.internal.net.http.Http1Response.java:355)
  jdk.internal.net.http.Http1Response$$Lambda+0x00007f4cb5e6c438.749276779.accept(jdk.internal.net.http.Http1Response$$Lambda+0x00007f4cb5e6c438.749276779.java:-1)
  jdk.internal.net.http.ResponseContent$ChunkedBodyParser.onError(jdk.internal.net.http.ResponseContent$ChunkedBodyParser.java:185)
  jdk.internal.net.http.Http1Response$BodyReader.onReadError(jdk.internal.net.http.Http1Response$BodyReader.java:677)
  jdk.internal.net.http.Http1AsyncReceiver.checkForErrors(jdk.internal.net.http.Http1AsyncReceiver.java:302)
  jdk.internal.net.http.Http1AsyncReceiver.flush(jdk.internal.net.http.Http1AsyncReceiver.java:268)
  jdk.internal.net.http.Http1AsyncReceiver$$Lambda+0x00007f4cb5e31228.555093431.run(jdk.internal.net.http.Http1AsyncReceiver$$Lambda+0x00007f4cb5e31228.555093431.java:-1)
  jdk.internal.net.http.common.SequentialScheduler$LockingRestartableTask.run(jdk.internal.net.http.common.SequentialScheduler$LockingRestartableTask.java:182)
  jdk.internal.net.http.common.SequentialScheduler$CompleteRestartableTask.run(jdk.internal.net.http.common.SequentialScheduler$CompleteRestartableTask.java:149)
  jdk.internal.net.http.common.SequentialScheduler$SchedulableTask.run(jdk.internal.net.http.common.SequentialScheduler$SchedulableTask.java:207)
  jdk.internal.net.http.HttpClientImpl$DelegatingExecutor.execute(jdk.internal.net.http.HttpClientImpl$DelegatingExecutor.java:177)
  jdk.internal.net.http.common.SequentialScheduler.runOrSchedule(jdk.internal.net.http.common.SequentialScheduler.java:282)
  jdk.internal.net.http.common.SequentialScheduler.runOrSchedule(jdk.internal.net.http.common.SequentialScheduler.java:251)
  jdk.internal.net.http.Http1AsyncReceiver.onReadError(jdk.internal.net.http.Http1AsyncReceiver.java:516)
  jdk.internal.net.http.Http1AsyncReceiver.lambda$handlePendingDelegate$3(jdk.internal.net.http.Http1AsyncReceiver.java:380)
  jdk.internal.net.http.Http1AsyncReceiver$$Lambda+0x00007f4cb5e33ca0.84679411.run(jdk.internal.net.http.Http1AsyncReceiver$$Lambda+0x00007f4cb5e33ca0.84679411.java:-1)
  jdk.internal.net.http.Http1AsyncReceiver$Http1AsyncDelegateSubscription.cancel(jdk.internal.net.http.Http1AsyncReceiver$Http1AsyncDelegateSubscription.java:163)
  jdk.internal.net.http.common.HttpBodySubscriberWrapper$SubscriptionWrapper.cancel(jdk.internal.net.http.common.HttpBodySubscriberWrapper$SubscriptionWrapper.java:92)
  jdk.internal.net.http.ResponseSubscribers$HttpResponseInputStream.close(jdk.internal.net.http.ResponseSubscribers$HttpResponseInputStream.java:653)

  com.acme.service.server.StatusClient.getResponse(com.acme.service.server.StatusClient.java:52)
  com.acme.service.server.StatusClient_ClientProxy.getResponse(com.acme.service.server.StatusClient_ClientProxy.java:-1)
  com.acme.client.Request.execute(com.acme.client.Request.java:96)
  com.acme.service.server.serviceStatusProvider.getStatusHistorys(com.acme.service.server.serviceStatusProvider.java:237)
  com.acme.service.api.RemoteStatusCheck.getStatusHistory(com.acme.service.api.RemoteStatusCheck.java:163)
  com.acme.service.api.RemoteStatusCheck.lambda$doChecks$0(com.acme.service.api.RemoteStatusCheck.java:132)
  com.acme.service.api.RemoteStatusCheck$$Lambda+0x00007f4cb9f0d8d0.979953307.call(com.acme.service.api.RemoteStatusCheck$$Lambda+0x00007f4cb9f0d8d0.979953307.java:-1)
  java.util.concurrent.FutureTask.run(java.util.concurrent.FutureTask.java:317)
  java.lang.VirtualThread.runWith(java.lang.VirtualThread.java:341)
  java.lang.VirtualThread.run(java.lang.VirtualThread.java:311)
  java.lang.VirtualThread$VThreadContinuation$1.run(java.lang.VirtualThread$VThreadContinuation$1.java:192)
  jdk.internal.vm.Continuation.enter0(jdk.internal.vm.Continuation.java:320)
  jdk.internal.vm.Continuation.enter(jdk.internal.vm.Continuation.java:312)
  jdk.internal.vm.Continuation.enterSpecial(jdk.internal.vm.Continuation.java:-1)
Paratroops asked 18/10, 2023 at 11:28 Comment(2)
#77262927 might be related to this question. (Junna)
How were you able to capture the pinned stack trace? (Talc)

The method java.nio.channels.spi.AbstractInterruptibleChannel.close() (lines 108-115 in Temurin-21+35 (build 21+35-LTS), and probably the same in all OpenJDK derivatives) is implemented as:

public final void close() throws IOException {
    synchronized (closeLock) {
        if (closed)
            return;
        closed = true;
        implCloseChannel();
    }
}

Line 113 in your stack trace corresponds to the implCloseChannel() call (which is also the frame directly above it in the trace), and that call happens in the middle of the synchronized block. Virtual threads are pinned when they park or block inside a synchronized block, so that is why yours is pinned.

In other words, given the code as it is, pinning is the expected and correct behaviour, and thus not a bug.
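To see the effect in isolation, here is a minimal, self-contained demo (my own illustration, not JDK code): a virtual thread that parks while holding a monitor is pinned on JDK 21, and running it with -Djdk.tracePinnedThreads=full prints a trace much like the one in the question.

// Minimal pinning demo (illustration only). Run with:
//   java -Djdk.tracePinnedThreads=full PinningDemo.java
public class PinningDemo {
    private static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread vt = Thread.ofVirtual().start(() -> {
            synchronized (lock) {        // monitor held by the virtual thread
                try {
                    Thread.sleep(100);   // parking inside synchronized => pinned to the carrier
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        vt.join();
    }
}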

Whether the use of synchronized here is an oversight in the effort to get rid of synchronized blocks in the JDK, or whether there is a specific reason this still uses synchronized, I don't know. Given that closeLock is a private lock object (i.e. it is not part of the "API" of a channel), I would guess it could be replaced with a ReentrantLock or similar (a sketch of that idea follows below), but maybe there are other implementation reasons to keep it for now.
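Purely as an illustration of that idea (this is not the actual JDK implementation, and it ignores whatever other invariants the real class has to maintain), such a change might look like:

// Hypothetical variant of close() using a java.util.concurrent.locks.ReentrantLock
// instead of the private closeLock monitor: a virtual thread blocking on a
// j.u.c. lock can unmount from its carrier, so it would not be pinned here.
private final ReentrantLock closeLock = new ReentrantLock();
private volatile boolean closed;

public final void close() throws IOException {
    closeLock.lock();
    try {
        if (closed)
            return;
        closed = true;
        implCloseChannel();
    } finally {
        closeLock.unlock();
    }
}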

I have asked about this on the nio-dev list in the thread "Should AbstractInterruptibleChannel.close() still use a synchronized block?"

Alan Bateman responded there with:

We decided it wasn't worth doing because it's rare to set SO_LINGER. Temporary pinning due to contention on the readLock or writeLock when closing is okay.

In the mean-time, we are working to remove restriction on synchronized blocks. We hope to have something in the loom repo soon.

Kossuth answered 18/10, 2023 at 12:23 Comment(10)
Thanks for the response and for posting on the dev list! I hope it will be fixed soon :) Pinning for a short period of time, like here, shouldn't be a problem, right? (Paratroops)
@Paratroops It shouldn't, though - as I understand it - under load it might spin up additional platform threads to compensate for the pinned thread (which might be a pro or a con). (Kossuth)
@Paratroops My previous comment seems to be wrong, looking at JEP 444: "The scheduler does not compensate for pinning by expanding its parallelism". But close operations should usually be short-lived. (Kossuth)
I really hope they can improve it. Having these errors from time to time is not really nice, as you always have to decide whether it is something to ignore or not, and it makes automated tests unstable... (Paratroops)
@Paratroops Sure, but pinning is not an error, and you shouldn't consider it as such. (Kossuth)
Right, but rather a potential problem. (Paratroops)
@Paratroops Only a performance problem. And, as with all potential performance problems, you only have a performance problem if you experience bad performance. So, if your application exhibits actual performance problems, turn on logging of pinning events (and other potential problems), but not before. (Fishback)
@Fishback True. The idea was to catch pinning events with an integration test. And indeed, when using the Apache HTTP client, the test revealed hundreds of pinning events in just one request. So we replaced the Apache client with the built-in one, and now there is only occasionally a pinning event. In that case it might not be a problem, but the code might change, and we want to be aware of pinning early and decide whether it is a real problem or not. (Paratroops)
@Fishback There are often advantages to worrying about future performance problems before they happen. If you anticipate there might be a performance problem in the future, then checking which things cause it now, and avoiding them, makes sense. Still, that's only if there is going to be a performance problem in the future - if you really need your program to work as fast as possible all the time. If the occasional little stutter is okay, especially if it's going to be fixed in the next JDK, then just don't worry. (Vola)
(And if you really needed maximum performance, you'd be using C, not Java. Java is a tradeoff between performance and ease of use.) (Vola)
