Yes, there is an optimization to reduce Reflection costs, though it is implemented mostly in the class library rather than in the JVM itself.
Before Java 1.4, Method.invoke worked through a JNI call into the VM runtime. Each invocation required at least two transitions from Java to native code and back. The VM runtime parsed the method signature, verified that the types of the passed arguments were correct, performed boxing/unboxing and constructed a new Java frame for the called method. All of that was rather slow.
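For reference, here is a minimal sketch of a reflective call; the Adder class and its add method are made up for illustration. The caller-side cost is visible in the API itself: primitive arguments are boxed and packed into an Object[], and a primitive result comes back boxed.

```java
import java.lang.reflect.Method;

public class Adder {
    public int add(int a, int b) { return a + b; }

    public static void main(String[] args) throws Exception {
        Method add = Adder.class.getMethod("add", int.class, int.class);
        // Both int arguments are boxed to Integer and packed into an Object[];
        // the int result comes back boxed as an Integer.
        Object result = add.invoke(new Adder(), 1, 2);
        System.out.println(result);   // prints 3
    }
}
```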
Since Java 1.4, Method.invoke uses dynamic bytecode generation once a method has been called more than 15 times (configurable via the sun.reflect.inflationThreshold system property). A special Java class responsible for calling that particular method is generated at run time. This class implements sun.reflect.MethodAccessor, to which java.lang.reflect.Method delegates its calls.
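You can observe the switch yourself. The sketch below is illustrative and JDK-dependent (the internal accessor class names and the default threshold of 15 are implementation details, and newer JDKs reimplement core reflection on top of method handles): the reflectively called method prints the class of its immediate caller, which changes from the native accessor to a generated one after enough invocations.

```java
import java.lang.reflect.Method;

public class InflationDemo {
    // Called via Reflection; prints the class of its immediate caller,
    // i.e. the MethodAccessor implementation currently in use.
    public static void target() {
        System.out.println(new Throwable().getStackTrace()[1].getClassName());
    }

    public static void main(String[] args) throws Exception {
        Method m = InflationDemo.class.getMethod("target");
        for (int i = 0; i < 20; i++) {
            // The first calls typically go through a native accessor
            // (e.g. NativeMethodAccessorImpl); once the inflation threshold
            // is crossed, a GeneratedMethodAccessor takes over.
            m.invoke(null);
        }
    }
}
```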
The approach with dynamic bytecode generation is much faster, since it
- does not suffer from JNI overhead;
- does not need to parse the method signature on each call, because every method invoked via Reflection gets its own unique MethodAccessor;
- can be optimized further: these MethodAccessors benefit from all the regular JIT optimizations such as inlining, constant propagation, autoboxing elimination etc. (see the sketch below).
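To make the last two points concrete: a generated accessor is essentially plain Java that casts the receiver and arguments and calls the target method directly, so the JIT sees an ordinary call it can inline. The following self-contained sketch mimics the shape of such a generated class; the Foo class, the greet method and the simplified accessor interface are invented for illustration (the real generated classes live in the JDK-internal sun.reflect / jdk.internal.reflect package and extend its MethodAccessorImpl).

```java
import java.lang.reflect.InvocationTargetException;

// Simplified stand-in for the JDK-internal sun.reflect.MethodAccessor interface.
interface SimpleMethodAccessor {
    Object invoke(Object target, Object[] args) throws InvocationTargetException;
}

class Foo {
    public String greet(int n) { return "hello x" + n; }
}

// Roughly the shape of a bytecode-generated accessor for Foo.greet(int),
// rendered as Java source: cast, unbox, call the method directly.
class GeneratedAccessorSketch implements SimpleMethodAccessor {
    public Object invoke(Object target, Object[] args) throws InvocationTargetException {
        // Real generated accessors also check the argument count and types.
        try {
            return ((Foo) target).greet(((Integer) args[0]).intValue());
        } catch (Throwable t) {
            throw new InvocationTargetException(t);
        }
    }
}

public class AccessorDemo {
    public static void main(String[] args) throws Exception {
        SimpleMethodAccessor acc = new GeneratedAccessorSketch();
        System.out.println(acc.invoke(new Foo(), new Object[] { 3 }));  // prints "hello x3"
    }
}
```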
Note that this optimization is implemented mostly in Java code, with almost no JVM assistance. The only thing HotSpot VM does to make it possible is skipping bytecode verification for the generated MethodAccessors. Otherwise the verifier would not allow them, for example, to call private methods.
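As a quick illustration of the verification point: after setAccessible(true), the inflated accessor ends up invoking a private method directly, which is exactly the kind of call the verifier would normally reject. A minimal sketch (the Secret class is made up, and the call count assumes the default threshold of 15):

```java
import java.lang.reflect.Method;

class Secret {
    private String whisper() { return "hush"; }
}

public class PrivateInvokeDemo {
    public static void main(String[] args) throws Exception {
        Method m = Secret.class.getDeclaredMethod("whisper");
        m.setAccessible(true);            // suppress the language-level access check
        Secret s = new Secret();
        for (int i = 0; i < 20; i++) {    // enough calls to cross the default inflation threshold
            m.invoke(s);                  // after inflation, a generated accessor calls whisper() directly
        }
        System.out.println(m.invoke(s));  // prints "hush"
    }
}
```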