The trick is to ignore the static facade around Spark implemented in spark.Spark and work directly with the internal spark.webserver.SparkServer. There are some obstacles in the code that require workarounds, e.g. spark.webserver.JettyHandler is not public, so you can't instantiate it from your own code, but you can extend it with your own class placed in the same package and make it public.
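For reference, the package-local subclass used below (PublicJettyHandler) can be little more than a public constructor. This is a sketch; the exact constructor signature of JettyHandler (assumed here to take a javax.servlet.Filter) should be checked against the Spark version you build with:

package spark.webserver;

import javax.servlet.Filter;

// Placed in spark.webserver so it can see the package-private JettyHandler;
// all it does is re-expose the constructor as public.
public class PublicJettyHandler extends JettyHandler {
    public PublicJettyHandler(Filter filter) {
        super(filter);
    }
}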
So the solution is along these lines:
SimpleRouteMatcher routeMatcher1 = new SimpleRouteMatcher();
// Register routes directly on the matcher instead of going through the static spark.Spark facade.
routeMatcher1.parseValidateAddRoute("get '/foo'", "*/*", wrap("/foo", "*/*", (req, res) -> "Hello World 1"));

MatcherFilter matcherFilter1 = new MatcherFilter(routeMatcher1, false, false);
matcherFilter1.init(null);

// PublicJettyHandler is the package-local subclass shown above.
PublicJettyHandler handler1 = new PublicJettyHandler(matcherFilter1);
SparkServer server1 = new SparkServer(handler1);

// Start the server on its own thread, mirroring what spark.Spark does internally.
new Thread(() -> {
    server1.ignite("0.0.0.0", 4567, null, null, null, null, "/META-INF/resources/", null, new CountDownLatch(1),
            -1, -1, -1);
}).start();
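A second server then follows exactly the same pattern; only the routes and the port differ (port 4568 below is just an arbitrary example):

SimpleRouteMatcher routeMatcher2 = new SimpleRouteMatcher();
routeMatcher2.parseValidateAddRoute("get '/bar'", "*/*", wrap("/bar", "*/*", (req, res) -> "Hello World 2"));

MatcherFilter matcherFilter2 = new MatcherFilter(routeMatcher2, false, false);
matcherFilter2.init(null);

PublicJettyHandler handler2 = new PublicJettyHandler(matcherFilter2);
SparkServer server2 = new SparkServer(handler2);

new Thread(() -> {
    server2.ignite("0.0.0.0", 4568, null, null, null, null, "/META-INF/resources/", null, new CountDownLatch(1),
            -1, -1, -1);
}).start();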
You also need to duplicate the wrap method in your codebase:
// Copied from spark.Spark: adapts a Route lambda into the RouteImpl the matcher expects.
protected RouteImpl wrap(final String path, String acceptType, final Route route) {
    if (acceptType == null) {
        acceptType = "*/*";
    }
    RouteImpl impl = new RouteImpl(path, acceptType) {
        @Override
        public Object handle(Request request, Response response) throws Exception {
            return route.handle(request, response);
        }
    };
    return impl;
}
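Depending on the Spark 2.x version you build against, the internal types used above should resolve to roughly the following imports; this is an assumption based on the 2.x source layout, so verify it against your actual dependency:

import java.util.concurrent.CountDownLatch;

import spark.Request;
import spark.Response;
import spark.Route;
import spark.RouteImpl;
import spark.route.SimpleRouteMatcher;
import spark.webserver.MatcherFilter;
import spark.webserver.SparkServer;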
This seems to be a viable workaround if you need multiple Spark servers in your app.