I have a Java 8 / Spring 4-based web application that reports the progress of a long-running process using Server-Sent Events (SSEs) to a browser-based client, which runs some JavaScript and updates a progress bar. In my development environment and on our development server, the SSEs arrive at the client in near-real-time. I can see them arriving (along with their timestamps) using Chrome dev tools, and the progress bar updates smoothly.
However, when I deploy to our production environment, I observe different behaviour. The events do not arrive at the browser until the long-running process completes; then they all arrive in a burst (according to dev tools, their timestamps are all within a few hundred milliseconds of each other). The progress bar sits at 0% for the duration and then jumps straight to 100%. Meanwhile, my server logs tell me the events were generated and sent at regular intervals.
Here's the relevant server side code:
public class LongRunningProcess extends Thread {

    private SseEmitter emitter;

    public LongRunningProcess(SseEmitter emitter) {
        this.emitter = emitter;
    }

    public void run() {
        ...
        // Sample event, representing 10% progress
        SseEventBuilder event = SseEmitter.event();
        event.name("progress");
        event.data("{ \"progress\": 10 }"); // Hand-coded JSON
        emitter.send(event);
        ...
    }
}
@RestController
public class UploadController {

    @GetMapping("/start")
    public SseEmitter start() {
        SseEmitter emitter = new SseEmitter();
        LongRunningProcess process = new LongRunningProcess(emitter);
        process.start();
        return emitter;
    }
}
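For reference, here is a minimal, self-contained version of the same pattern, in case it helps anyone reproduce the behaviour. The controller name, the endpoint path, the timeout value, and the one-second sleep (standing in for the real work) are placeholders of mine; the SseEmitter calls are the standard Spring API:

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.servlet.mvc.method.annotation.SseEmitter;

@RestController
public class ProgressDemoController {

    // Emits a "progress" event every second, 10% at a time, then completes.
    @GetMapping(value = "/demo", produces = "text/event-stream")
    public SseEmitter demo() {
        SseEmitter emitter = new SseEmitter(60_000L); // 60-second timeout
        new Thread(() -> {
            try {
                for (int pct = 10; pct <= 100; pct += 10) {
                    Thread.sleep(1_000); // stands in for the real work
                    emitter.send(SseEmitter.event()
                            .name("progress")
                            .data("{ \"progress\": " + pct + " }"));
                }
                emitter.complete();
            } catch (Exception e) {
                emitter.completeWithError(e);
            }
        }).start();
        return emitter;
    }
}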
Here's the relevant client-side JavaScript:
var src = new EventSource("https://www.example.com/app/start");
src.addEventListener('progress', function(event) {
    // Process event.data and update progress bar accordingly
});
I believe my code is fairly typical and it works just fine in DEV. However, if anyone can see an issue, please let me know.
The issue could be related to the configuration of our production servers. DEV and PROD run the same version of Tomcat. However, some of the servers are accessed via a load balancer (an F5 in our case), and almost all of them sit behind a CDN (Akamai in our case). Could some part of this setup be buffering (or queuing or caching) the SSEs in a way that would produce what I'm seeing?
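If the buffering does turn out to happen at the F5 or in Akamai, one thing I'm considering is having the endpoint send explicit hints to intermediaries not to buffer or transform the response. This is only a sketch of a variant of the controller above: X-Accel-Buffering is an nginx convention and Cache-Control: no-transform is a general hint, and whether the F5 or Akamai actually honour either of them is precisely what I don't know yet.

import javax.servlet.http.HttpServletResponse;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.servlet.mvc.method.annotation.SseEmitter;

@RestController
public class UploadController {

    @GetMapping("/start")
    public SseEmitter start(HttpServletResponse response) {
        // Ask intermediaries not to buffer or transform the stream.
        // These are hints only; proxies and CDNs are free to ignore them.
        response.setHeader("X-Accel-Buffering", "no");
        response.setHeader("Cache-Control", "no-cache, no-transform");

        SseEmitter emitter = new SseEmitter();
        new LongRunningProcess(emitter).start();
        return emitter;
    }
}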
Following up on the infrastructure configuration idea, I've observed the following in the response headers. In the development environment, my browser receives:
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Connection: Keep-Alive
Content-Type: text/event-stream;charset=UTF-8
Keep-Alive: timeout=15, max=99
Pragma: no-cache
Server: Apache
Transfer-Encoding: chunked
Via: 1.1 example.com
This is what I'd expect for an event stream: a chunked response of unknown content length. In the production environment, my browser receives something different:
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Connection: keep-alive
Content-Type: text/event-stream;charset=UTF-8
Content-Encoding: gzip
Content-Length: 318
Pragma: no-cache
Vary: Accept-Encoding
Here the returned content has a known length and is compressed. I don't think this should happen for an event stream; it would appear that something is converting my event stream into a single file. Any thoughts on how I can figure out what's doing this?
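One experiment I'm considering to narrow down where the gzipping happens: a servlet filter (hypothetical and untested; the class name is mine) that hides the Accept-Encoding header from everything downstream for the SSE endpoint. If the PROD response still comes back gzipped with this in place, the compression must be happening beyond the webapp and Tomcat, i.e. at the F5 or in Akamai.

import java.io.IOException;
import java.util.Collections;
import java.util.Enumeration;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletRequestWrapper;

// Hides Accept-Encoding from the rest of the stack for the SSE endpoint,
// so nothing inside the webapp or Tomcat should gzip the event stream.
// Purely diagnostic.
@WebFilter("/start")
public class NoCompressionForSseFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest wrapped = new HttpServletRequestWrapper((HttpServletRequest) request) {
            @Override
            public String getHeader(String name) {
                return "Accept-Encoding".equalsIgnoreCase(name) ? null : super.getHeader(name);
            }

            @Override
            public Enumeration<String> getHeaders(String name) {
                return "Accept-Encoding".equalsIgnoreCase(name)
                        ? Collections.<String>emptyEnumeration()
                        : super.getHeaders(name);
            }
        };
        chain.doFilter(wrapped, response);
    }

    @Override
    public void init(FilterConfig filterConfig) { }

    @Override
    public void destroy() { }
}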
Comments:
"I have not yet found a solution; however, I am following up a possible lead at the moment (a CompressingFilter). I'll add some more info if this leads to a solution." – Chemist
"Have a look at compressionMinSize and compressibleMimeType to bypass Tomcat compression (if, in fact, it's present) for SSE traffic." – Hurwit