While at first it might seem obvious that one would want to support the `synchronized` modifier on default methods, it turns out that doing so would be dangerous, and so it was prohibited.

Synchronized methods are shorthand for a method that behaves as if the entire body were enclosed in a `synchronized` block whose lock object is the receiver. It might seem sensible to extend these semantics to default methods as well; after all, they are instance methods with a receiver too. (Note that `synchronized` methods are entirely a syntactic optimization; they're not needed, they're just more compact than the corresponding `synchronized` block. There's a reasonable argument to be made that this was a premature syntactic optimization in the first place, and that `synchronized` methods cause more problems than they solve, but that ship sailed a long time ago.)
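As a sketch of that equivalence (class and method names here are illustrative), the two forms below acquire the same lock and behave identically:

```java
// Illustrative sketch: a synchronized instance method behaves as if its
// body were wrapped in a synchronized block locking the receiver.
class Counter {
    private int count;

    // Shorthand form: acquires the intrinsic lock on `this`.
    public synchronized void increment() {
        count++;
    }

    // Equivalent longhand form: same lock, same behavior.
    public void incrementLonghand() {
        synchronized (this) {
            count++;
        }
    }

    public synchronized int get() {
        return count;
    }
}
```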
So, why are they dangerous? Synchronization is about locking. Locking is about coordinating shared access to mutable state. Each object should have a synchronization policy that determines which locks guard which state variables. (See Java Concurrency in Practice, section 2.4.)
Many objects use as their synchronization policy the Java Monitor Pattern (JCiP 4.1), in which an object's state is guarded by its intrinsic lock. There is nothing magic or special about this pattern, but it is convenient, and the use of the `synchronized` keyword on methods implicitly assumes this pattern.
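A minimal sketch of that pattern (names are illustrative): every method that touches the mutable state is synchronized, so the state is consistently guarded by the object's own intrinsic lock.

```java
// Java Monitor Pattern sketch: all mutable state is guarded by the
// object's intrinsic lock, which is exactly what `synchronized`
// methods implicitly assume.
class MonitorPoint {
    private int x; // guarded by this
    private int y; // guarded by this

    public synchronized void moveTo(int newX, int newY) {
        x = newX;
        y = newY;
    }

    // Reads under the same lock, so callers never see a torn update.
    public synchronized int[] coordinates() {
        return new int[] { x, y };
    }
}
```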
It is the class that owns the state that gets to determine that object's synchronization policy. But interfaces do not own the state of the objects into which they are mixed. So using a synchronized method in an interface assumes a particular synchronization policy, one you have no reasonable basis for assuming, so it might well be the case that the use of synchronization provides no additional thread safety whatsoever (you might be synchronizing on the wrong lock). This would give you a false sense of confidence that you have done something about thread safety, and no error message tells you that you're assuming the wrong synchronization policy.
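To make the wrong-lock hazard concrete, here is a hypothetical sketch (since `default synchronized` does not compile, the default method uses a `synchronized` block on the receiver, which is legal and has the same effect): the implementing class guards its state with a private lock object, so the interface's lock on `this` guards nothing.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the wrong-lock hazard. The interface assumes
// the receiver's intrinsic lock guards the state; the implementor
// actually guards its state with a private lock object.
interface EventSink {
    void record(String event);

    // What a `default synchronized` method would amount to: locking the
    // receiver. That is NOT the lock guarding PrivateLockSink's state,
    // so it provides no additional thread safety there.
    default void recordTwice(String event) {
        synchronized (this) { // wrong lock for PrivateLockSink
            record(event);
            record(event);
        }
    }
}

class PrivateLockSink implements EventSink {
    private final Object lock = new Object(); // the real guard
    private final List<String> events = new ArrayList<>();

    @Override
    public void record(String event) {
        synchronized (lock) { // state guarded by `lock`, not `this`
            events.add(event);
        }
    }

    public int size() {
        synchronized (lock) {
            return events.size();
        }
    }
}
```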
It is already hard enough to consistently maintain a synchronization policy for a single source file; it is even harder to ensure that a subclass correctly adheres to the synchronization policy defined by its superclass. Trying to do so between such loosely coupled classes (an interface and the possibly many classes that implement it) would be nearly impossible and highly error-prone.
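Even within a single inheritance chain, "adhering to the superclass's policy" takes discipline; a sketch (names are illustrative):

```java
// Sketch: the superclass's policy is "count is guarded by this". Any
// subclass method touching `count` must use that same lock.
class BaseCounter {
    protected int count; // policy: guarded by this

    public synchronized void increment() { count++; }
    public synchronized int get() { return count; }
}

class ResettableCounter extends BaseCounter {
    // Correct: synchronizes on the same (inherited) intrinsic lock.
    public synchronized void reset() { count = 0; }

    // A reset() without `synchronized` would compile fine but would
    // silently violate the superclass's synchronization policy.
}
```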
Given all those arguments against, what would be the argument for? It seems they're mostly about making interfaces behave more like traits. While this is an understandable desire, the design center for default methods is interface evolution, not "Traits--". Where the two could be consistently achieved, we strove to do so, but where one is in conflict with the other, we had to choose in favor of the primary design goal.
Comments:

"…`default synchronized`, yet not necessarily for `static synchronized`, although I would accept that the latter might've been omitted for consistency reasons." – Viridissa

"`synchronized` modifier may be overridden in subclasses, hence it would only matter if there was something as `final` default methods. (Your other question)" – Knighthood

"…`synchronized` in super classes, effectively removing synchronization. I wouldn't be surprised that not supporting `synchronized` and not supporting `final` is related, though, maybe because of multiple inheritance (e.g. inheriting `void x()` and `synchronized void x()`, etc.). But that's speculation. I'm curious about an authoritative reason, if there is one." – Viridissa

"…`super` which requires a full re-implementation and possible access to private members. Btw, there is a reason those methods are called 'defenders' - they are present to allow easier adding new methods." – Lexicostatistics
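On the comments' point that `synchronized` may be "overridden away": the modifier is an implementation detail, not part of the method's contract, so an override may drop it without any warning, silently removing synchronization. A sketch:

```java
// Sketch: `synchronized` is not inherited and not part of the method
// contract, so an override can drop it without any diagnostic.
class SafeBox {
    protected int value;
    public synchronized void set(int v) { value = v; }
    public synchronized int get() { return value; }
}

class UnsafeBox extends SafeBox {
    @Override
    public void set(int v) { // legal override: no synchronized, no warning
        value = v;
    }
}
```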