Why can the allocation phase become slower if we override the finalize method?

I have heard that Joshua Bloch's book says that allocation and memory reclamation can become up to 430 times slower if we override the finalize method.

It is clear to me that memory reclamation can be slower, because the GC needs an additional pass to free such objects.

But why can the allocation phase become slower as well?
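
For reference, I mean a trivial class like this hypothetical one, where the only difference is the overridden finalize():

    class WithFinalizer {
        @Override
        protected void finalize() throws Throwable {
            System.out.println("finalizing " + this); // makes the finalizer non-trivial
            super.finalize();
        }
    }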

Norri answered 29/12, 2016 at 11:15 Comment(6)
I would first do some research on that topic, and check if you find anything about this ... written after the year 2000 or so. Keep in mind that the most famous things Bloch worked on are like 15 years in the past. A lot has happened on the Java platform since then.Opine
I don't see why the allocation would be any more expensive. It is the clean-up that creates the objects needed to add the object to the finalization queue.Bluepoint
Allocation could take longer because it would be more likely to have to wait for GC cycles induced by the non-trivial finalize() overrides.Couteau
@Opine It's "Bloch", not "Block".Couteau
Also, Effective Java is, "like", eight years old, not that that matters because the advice is valid! Why do you consider old as bad, let alone exaggerate the age?Couteau
@LewBloch "Block" was my smart phone doing auto correction without me noticing. It is not always that easy to use the same "keyboard assistant" when regularly posting in two languages. Then: the first edition of Effective Java was published in 2001. That is 15 years. I remember because I bought that book in 2002. Thing is: I have seen other advice from that book that can be regarded as outdated today. All I am saying is: do not blindly believe in rules, and especially numbers, that are 15 or maybe 8 years old. A lot of things were 100 times slower with Java back then, compared to today.Opine

I have searched for the original statement:

On my machine, the time to create and destroy a simple object is about 5.6 ns. Adding a finalizer increases the time to 2,400 ns. In other words, it is about 430 times slower to create and destroy objects with finalizers.

So this is not a general statement, but a report of one measurement; it suggests that there is a pattern behind it, not that the exact number is reproducible. The factor is likely to change when using not-so-trivial objects, or simply a lot more of them.
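
If you want to get a feeling for the magnitude on your own JVM, a crude sketch like the following could be used. Note that this is not Bloch's benchmark; the class names, iteration count, and the whole setup are made up for illustration, and a JIT may optimize the plain allocations away entirely:

    // Crude, hypothetical micro-benchmark sketch; not the measurement from the book.
    class Plain {}

    class Finalized {
        static int count;
        @Override
        protected void finalize() throws Throwable {
            count++; // non-empty body, so the JVM treats the finalizer as non-trivial
        }
    }

    public class FinalizerCost {
        public static void main(String[] args) {
            int n = 1_000_000; // arbitrary iteration count
            long t0 = System.nanoTime();
            for (int i = 0; i < n; i++) new Plain();
            long t1 = System.nanoTime();
            for (int i = 0; i < n; i++) new Finalized();
            long t2 = System.nanoTime();
            System.out.printf("plain: %.1f ns/obj, finalized: %.1f ns/obj%n",
                    (t1 - t0) / (double) n, (t2 - t1) / (double) n);
        }
    }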

Of course, these costs depend on how finalization is actually implemented. In HotSpot, an instance of Finalizer will be created by calling the Finalizer.register method every time an object with a non-trivial finalize() method is created.

This may imply much higher costs than just allocating two objects instead of one. These Finalizer instances are strongly linked, which is necessary to prevent the collection of the Finalizer instances themselves, and they hold a reference to the constructed object. In other words, regardless of how local the object allocation initially was, the new object escapes, hindering many subsequent optimizations.
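
To illustrate what that registration adds to each allocation, here is a greatly simplified, hypothetical model of such a mechanism (the real java.lang.ref.Finalizer is more elaborate, but the structural costs are of this kind):

    // Greatly simplified, hypothetical model; not HotSpot's actual implementation.
    final class FakeFinalizer {
        private static FakeFinalizer head;     // strong links keep every FakeFinalizer reachable
        private final Object referent;         // the finalizable object escapes into this field
        private FakeFinalizer next;

        private FakeFinalizer(Object referent) {
            this.referent = referent;
            synchronized (FakeFinalizer.class) { // list insertion needs synchronization
                this.next = head;
                head = this;
            }
        }

        // conceptually invoked for every construction of an object with a non-trivial finalize()
        static void register(Object finalizee) {
            new FakeFinalizer(finalizee);      // a second allocation per finalizable object
        }
    }

So every allocation of a finalizable object pays for a second allocation, a synchronized list insertion, and the loss of escape analysis for the object itself.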

When it comes to “destruction”, reclaiming an ordinary object is a no-op. No action will be taken and, in fact, it is impossible to do anything with the unreachable object, as it is unreachable. Special reachability states can only be encountered by having a reachable Reference object, like the Finalizer object mentioned above, which holds a reference to the particular object (while the object wasn't encountered through any other, ordinary reference). Then, the Reference object can be enqueued, after which (one of) the finalizer thread(s) can take the appropriate action.

Of course, comparing “no action” with any other action can lead to arbitrary factors. The absolute number was 2,400ns, which is reasonable for an action that involves enqueuing an object and notifying another thread to poll the queue.
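
The public Reference API shows the same enqueue-and-poll pattern directly. Here is a minimal sketch using a PhantomReference; finalization internally uses its own FinalReference type, but the queue mechanics are similar:

    import java.lang.ref.PhantomReference;
    import java.lang.ref.Reference;
    import java.lang.ref.ReferenceQueue;

    public class EnqueueDemo {
        public static void main(String[] args) throws InterruptedException {
            ReferenceQueue<Object> queue = new ReferenceQueue<>();
            Object payload = new Object();
            PhantomReference<Object> ref = new PhantomReference<>(payload, queue);

            // a "finalizer-like" thread blocking on the queue
            Thread worker = new Thread(() -> {
                try {
                    Reference<?> r = queue.remove(); // blocks until the GC enqueues the reference
                    System.out.println("reference enqueued: " + r);
                } catch (InterruptedException ignored) {}
            });
            worker.start();

            payload = null;     // drop the only ordinary reference
            System.gc();        // request a collection (only a hint)
            worker.join(5_000); // the enqueue may or may not happen within this time
            System.out.println("done, ref = " + ref); // keeps the Reference object itself reachable
        }
    }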

Dagney answered 11/1, 2017 at 16:42 Comment(0)
