We are currently in the process of adding a server-side scripting capability to one of our products. As part of this I am evaluating JSR 223 script engines. Because we may be running large numbers of scripts on the server, I am particularly concerned about the memory usage of these script engines. Comparing Rhino (Apple JDK 1.6.0_65-b14-462-11M4609, Mac OS X 10.9.2) with Nashorn (Oracle JDK 1.8.0-b132), there seems to be a dramatic difference in memory usage per ScriptEngine instance.
To test this, I use a simple program that fires up 10 blank ScriptEngine instances and then blocks reading from stdin. I then use jmap to take a heap dump (jmap -dump:format=b,file=heap.bin &lt;pid&gt;) and search for the relevant script engine instances in the dump:
import javax.script.*;

public class test {
    public static void main(String... args) throws Exception {
        ScriptContext context = new SimpleScriptContext();
        context.setWriter(null);
        context.setErrorWriter(null);
        context.setReader(null);
        ScriptEngine[] js = new ScriptEngine[10];
        for (int i = 0; i < 10; ++i) {
            js[i] = new ScriptEngineManager().getEngineByName("javascript");
            js[i].setContext(context);
            System.out.println(js[i].getClass().toString());
        }
        System.in.read();
    }
}
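As a rough in-process cross-check of the jmap numbers, the per-instance footprint can also be approximated by sampling heap usage around engine creation. This is only a sketch (GC-dependent and much noisier than a MAT retained-size analysis); the engine name "javascript" and the instance count of 10 are taken from the test program above:

```java
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;

public class MemCheck {
    // Coarse heap sample: request a GC, then read currently-used bytes.
    static long usedBytes() {
        Runtime rt = Runtime.getRuntime();
        System.gc();
        return rt.totalMemory() - rt.freeMemory();
    }

    public static void main(String[] args) {
        final int count = 10;
        long before = usedBytes();
        ScriptEngine[] engines = new ScriptEngine[count];
        for (int i = 0; i < count; ++i) {
            engines[i] = new ScriptEngineManager().getEngineByName("javascript");
        }
        if (engines[0] == null) {
            // No JavaScript engine registered on this JDK
            // (Nashorn itself was removed in JDK 15).
            System.out.println("no javascript engine available");
            return;
        }
        long after = usedBytes();
        System.out.println("approx bytes/instance: " + (after - before) / count);
    }
}
```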
The reason for nulling out the various reader/writer fields in the context is that we do not use them, and earlier heap dumps for Rhino suggested that they account for a significant fraction of the per-instance overhead (and do not appear to be shared between instances).
Analysing these heap dumps in Eclipse MAT, I get the following per-instance retained heap sizes:
- Rhino: 13,472 bytes/instance (goes up to 73,832 bytes/instance if I do not null the reader/writer fields)
- Nashorn: 324,408 bytes/instance
Is this roughly 24x increase in per-instance size for Nashorn to be expected? Execution speed is not a major concern for the scripts we will be executing (they will mostly be I/O bound), so I am considering shipping our own copy of Rhino for use on Java 8+.
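One possible mitigation, if the per-engine overhead turns out to be the blocker, is to share a single ScriptEngine and give each script its own ENGINE_SCOPE Bindings via createBindings(). With the JDK's JavaScript engines this isolates each script's globals without paying the full per-engine cost; whether that isolation is sufficient depends on what the scripts do, so treat this as a sketch rather than a drop-in replacement for separate engines:

```java
import javax.script.ScriptContext;
import javax.script.ScriptEngine;
import javax.script.ScriptEngineManager;
import javax.script.SimpleScriptContext;

public class SharedEngine {
    public static void main(String[] args) throws Exception {
        ScriptEngine engine = new ScriptEngineManager().getEngineByName("javascript");
        if (engine == null) {
            // No JavaScript engine on this JDK (Nashorn was removed in JDK 15).
            System.out.println("no javascript engine available");
            return;
        }
        // One engine, two isolated "worlds": each context carries its own
        // ENGINE_SCOPE bindings, so globals set by one script are not
        // visible to the other.
        for (int i = 0; i < 2; ++i) {
            ScriptContext ctx = new SimpleScriptContext();
            ctx.setBindings(engine.createBindings(), ScriptContext.ENGINE_SCOPE);
            engine.eval("var x = " + i + ";", ctx);
            System.out.println(engine.eval("x", ctx)); // with an engine present: 0, then 1
        }
    }
}
```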