Modern JavaScript engines use inline caching and adaptive recompilation to minimize the impact of dynamic dispatch on the generated code.
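As an illustration (the functions and property names here are made up for the example), consider a hot property load: after a few executions the inline cache at the p.x site remembers the hidden class it has seen, and the optimizing compiler can later specialize the whole function for that shape.

    // All objects produced by makePoint share one hidden class, so the inline
    // cache at p.x stays monomorphic and the optimized code can reduce the
    // load to "check hidden class, read value at a fixed offset".
    function makePoint(x, y) {
      return { x: x, y: y };   // same literal shape => same hidden class
    }

    function getX(p) {
      return p.x;              // IC here caches the expected hidden class
    }

    var points = [];
    for (var i = 0; i < 100000; i++) {
      points.push(makePoint(i, i + 1));
    }

    var sum = 0;
    for (var j = 0; j < points.length; j++) {
      sum += getX(points[j]);  // hot loop => getX gets recompiled for this shape
    }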
If we are speaking about V8, then whether an object is observed or not is encoded in its hidden class. Both inline cache stubs and optimized code already check the hidden class against an expected value to determine whether an object has the expected shape, and the very same check also reveals whether the object is observed. So nothing changes on the code paths that work with non-observed objects. Starting to observe an object is treated the same way as changing its shape: the object's hidden class is switched to a different one with an observed bit set; you can read Runtime_SetIsObserved to see this.
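A conceptual model of that check (not V8's actual code; PlainMap, ObservedMap and __map are invented for the sketch) might look like this: the fast path compares a single hidden-class pointer, and because observing an object gives it a different hidden class, the same comparison automatically routes observed objects onto the slow path.

    // Conceptual model only. Each "map" plays the role of a hidden class;
    // observing an object swaps its map for one with the observed bit set,
    // so the single identity check below doubles as the "is it observed?" check.
    var PlainMap    = { observed: false };
    var ObservedMap = { observed: true };

    function cachedLoad(obj, expectedMap, fastOffsetLoad, slowRuntimeLoad) {
      if (obj.__map === expectedMap) {
        // Expected shape AND (implicitly) not observed: take the fast path.
        return fastOffsetLoad(obj);
      }
      // Different map: either a different shape or an observed object.
      return slowRuntimeLoad(obj);
    }

    function observe(obj) {
      // Analogue of Runtime_SetIsObserved: switch to a map with the bit set.
      obj.__map = ObservedMap;
    }

    var obj = { __map: PlainMap, payload: 7 };
    cachedLoad(obj, PlainMap,
               function (o) { return o.payload; },    // fast path
               function (o) { return o.payload; });   // slow path stand-in
    observe(obj);
    // Now obj.__map !== PlainMap, so the very same check sends it to the slow path.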
Similar reasoning applies to the parts of the system that omit guards in optimized code and instead deoptimize code dependent on "shape" assumptions: once an object becomes observed, all optimized code that depends on the assumption that the object was not observed is deoptimized. So, again, no price is paid for unobserved objects.
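In the same conceptual vein (again invented names, not V8 internals): code compiled without the guard registers itself as dependent on the "not observed" assumption, and turning observation on walks that dependency list and throws the optimized code away.

    // Conceptual model only. Optimized code that relies on "object is not
    // observed" omits the runtime check and instead records a dependency;
    // making the object observed invalidates every dependent code object.
    function AssumptionCell() {
      this.valid = true;
      this.dependentCode = [];
    }

    AssumptionCell.prototype.addDependent = function (code) {
      this.dependentCode.push(code);
    };

    AssumptionCell.prototype.invalidate = function () {
      this.valid = false;
      this.dependentCode.forEach(function (code) {
        code.deoptimized = true;   // stand-in for discarding the machine code
      });
      this.dependentCode = [];
    };

    var notObserved = new AssumptionCell();
    var optimizedCode = { deoptimized: false };
    notObserved.addDependent(optimizedCode);   // compiled without the guard

    notObserved.invalidate();                  // the object became observed
    // optimizedCode.deoptimized is now true; until this point the code ran
    // at full speed and unobserved objects never paid for the feature.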
That said, the current implementation of Object.observe in V8 makes observed objects pay a high price, because it normalizes them (turns them into a dictionary representation) and requires round trips through the runtime system to record observations. But there are no inherent technical difficulties in significantly reducing this cost later.
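For example (note that Object.observe was later withdrawn and removed from V8, so this only runs on an engine that still ships it): the moment the call below executes, the object is normalized and every subsequent mutation has to go through the runtime to queue a change record.

    // Requires an engine that still implements Object.observe.
    var obj = { x: 1, y: 2 };       // starts out with a fast, hidden-class-based shape

    Object.observe(obj, function (changes) {
      changes.forEach(function (change) {
        console.log(change.type, change.name, change.oldValue);
      });
    });

    // From this point on obj is in dictionary mode: property writes take the
    // slow path through the runtime so that change records can be recorded.
    obj.x = 42;
    delete obj.y;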