Avoiding the creation of new objects in a tight loop, where it is easily avoidable, has an obvious benefit, as the performance benchmarks show.
However, it also has a more subtle benefit that no one has mentioned.
This secondary benefit relates to an application freeze I saw in a large app that parsed CSV files with millions of lines/records, each record having about 140 fields, and then processed the resulting persistent objects.
Creating a new object here and there doesn't normally affect the garbage collector's workload.
Creating two new objects in a tight loop that iterates over each of the 140 fields of each of the millions of records in the aforementioned app, however, incurs more than wasted CPU cycles: it places a massive burden on the GC.
For a CSV file with 10 million lines, that meant the JVM was being asked to allocate, and the GC to clean up, 2 x 140 x 10,000,000 = 2.8 billion objects!
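
A minimal, hypothetical sketch of that kind of pattern (the quote-stripping cleanup is made up purely to give the loop some work to do; it is not the real app's code):

    import java.util.List;

    public class NaiveCsvCleanup {

        // A brand new StringBuilder (plus its internal buffer) is created for
        // every one of the ~140 fields of every record, used once, and
        // immediately becomes garbage for the GC to collect.
        static void cleanInPlace(List<String[]> records) {
            for (String[] record : records) {
                for (int i = 0; i < record.length; i++) {
                    StringBuilder sb = new StringBuilder();   // fresh builder, every field
                    String raw = record[i];
                    for (int j = 0; j < raw.length(); j++) {
                        char c = raw.charAt(j);
                        if (c != '"') {                       // e.g. strip stray quotes
                            sb.append(c);
                        }
                    }
                    record[i] = sb.toString();                // plus a new String, every field
                }
            }
        }
    }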
If at any stage free memory gets scarce, e.g. the app has been asked to process multiple large files simultaneously, then you run the risk of the app doing far more GC'ing than real work. When the GC effort takes up more than 98% of the CPU time, BANG! You get the dreaded:
GC Overhead Limit Exceeded
https://www.baeldung.com/java-gc-overhead-limit-exceeded
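
For the record, it surfaces as an OutOfMemoryError carrying that message, along the lines of:

    java.lang.OutOfMemoryError: GC overhead limit exceeded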
In that case, rewriting the code to reuse objects such as the StringBuilder, instead of instantiating a new one at each iteration, avoids a huge amount of GC activity (by not creating an extra 2.8 billion objects unnecessarily), reduces the chance of hitting the "GC Overhead Limit Exceeded" error, and noticeably improves the app's general performance even while memory is not yet tight.
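
A rough sketch of that rewrite, using the same hypothetical cleanup as above: the StringBuilder is hoisted out of the loop and reset with setLength(0) instead of being re-created:

    import java.util.List;

    public class ReusingCsvCleanup {

        // Same hypothetical cleanup, but one StringBuilder is created up front
        // and reset between fields, so the per-field builder allocations (and
        // the GC work to reclaim them) disappear.
        static void cleanInPlace(List<String[]> records) {
            StringBuilder sb = new StringBuilder(64);         // created once, reused
            for (String[] record : records) {
                for (int i = 0; i < record.length; i++) {
                    sb.setLength(0);                          // reset instead of new
                    String raw = record[i];
                    for (int j = 0; j < raw.length(); j++) {
                        char c = raw.charAt(j);
                        if (c != '"') {
                            sb.append(c);
                        }
                    }
                    record[i] = sb.toString();                // the result String is still needed
                }
            }
        }
    }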
Clearly, "leaving to the JVM to optimize" can not be a "rule of thumb" applicable to all scenarios.
With the sort of metrics associated with known large input files, nobody who writes code to avoid the unnecessary creation of 2.8 billion objects should ever be accused by the "puritanicals" of "pre-optimizing" ;)
Any dev with half a brain and the slightest amount of foresight could see that, given the expected input file sizes, this type of optimization was warranted from day one.