The configured limit of 1.000 object references was reached while attempting to calculate the size of the object graph
I have a JHipster project to which I added some entities.
My services are very slow because of this warning message:

The configured limit of 1.000 object references was reached while attempting to calculate the size of the object graph. Severe performance degradation could occur if the sizing operation continues. This can be avoided by setting the CacheManager or Cache <sizeOfPolicy> elements maxDepthExceededBehavior to "abort" or adding stop points with @IgnoreSizeOf annotations. If performance degradation is NOT an issue at the configured limit, raise the limit value using the CacheManager or Cache <sizeOfPolicy> elements maxDepth attribute. For more information, see the Ehcache configuration documentation.

What can I change to increase this limit, or to disable caching in my project?

Curricle answered 29/1, 2016 at 0:19 Comment(2)
Disabling the cache is not what one usually does to speed things up :) – Joinville
You need to indicate where you are using caching and on which kinds of objects, so that you can understand why you are caching such a large graph at once. – Dannadannel
Here is what the official Ehcache documentation says about sizing of cached entries:

Sizing of cached entries

Elements put in a memory-limited cache will have their memory sizes measured. The entire Element instance added to the cache is measured, including key and value, as well as the memory footprint of adding that instance to internal data structures. Key and value are measured as object graphs – each reference is followed and the object reference also measured. This goes on recursively.

Shared references will be measured by each class that references it. This will result in an overstatement. Shared references should therefore be ignored.

Configuration for Limiting the Traversed Object Graph

Sizing caches involves traversing object graphs, a process that can be limited with annotations. This process can also be controlled at both the CacheManager and cache levels.
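As an illustration of the annotation-based stop points the warning mentions, here is a minimal sketch using Ehcache 2's @IgnoreSizeOf annotation (the entity and field names are hypothetical, not from the question):

```java
import java.util.Map;

import net.sf.ehcache.pool.sizeof.annotations.IgnoreSizeOf;

// Hypothetical cached entity: the size-of engine measures id and name,
// but stops at the annotated field instead of walking the shared graph.
public class CachedEntity {
    private Long id;
    private String name;

    // Shared reference: excluded from size-of traversal so it is neither
    // counted repeatedly nor walked to a great depth.
    @IgnoreSizeOf
    private Map<String, Object> sharedLookup;
}
```

This keeps the traversal shallow without changing the maxDepth limit itself.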

Control how deep the size-of engine can go when sizing on-heap elements by adding the following element at the CacheManager level in resources/ehcache.xml

<sizeOfPolicy maxDepth="100" maxDepthExceededBehavior="abort" />  

This element has the following attributes:

  • maxDepth which controls how many linked objects can be visited before the size-of engine takes any action. This attribute is required.

  • maxDepthExceededBehavior which specifies what happens when the maximum depth is exceeded while sizing an object graph. Possible values for this attribute are:

  • continue which forces the size-of engine to log a warning and continue the sizing operation. If this attribute is not specified, continue is the default behavior.

  • abort which forces the size-of engine to abort the sizing, log a warning, and mark the cache as not correctly tracking memory usage. With this setting, Ehcache.hasAbortedSizeOf() returns true

The SizeOf policy can be configured at the cache manager level (directly under <ehcache>) and at the cache level (under <cache> or <defaultCache>). The cache policy always overrides the cache manager one if both are set. This element has no effect on distributed caches.
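Putting the two levels together, a sketch of an ehcache.xml with a manager-wide policy and a per-cache override (the cache name and heap size are illustrative, not from the question):

```xml
<ehcache>
  <!-- Manager-level default: abort sizing after 100 references -->
  <sizeOfPolicy maxDepth="100" maxDepthExceededBehavior="abort"/>

  <!-- Hypothetical cache holding large object graphs: its own policy
       overrides the manager-level one for this cache only -->
  <cache name="bigGraphCache" maxBytesLocalHeap="64M">
    <sizeOfPolicy maxDepth="10000" maxDepthExceededBehavior="continue"/>
  </cache>
</ehcache>
```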

Electromyography answered 10/10, 2017 at 9:47 Comment(0)
You can add the following element to your resources/ehcache.xml. Setting maxDepthExceededBehavior="abort" stops the sizing operation before it can slow down your services. You can also raise maxDepth to increase the limit.

<sizeOfPolicy maxDepth="1000" maxDepthExceededBehavior="abort" />
Projectile answered 12/9, 2016 at 23:8 Comment(1)
I am using Ehcache 3 and creating the cache manager programmatically. How can this setting be applied programmatically in Ehcache 3 with Spring Boot? – Ayeshaayin
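Regarding the Ehcache 3 question in the comment above: in Ehcache 3 there is no <sizeOfPolicy> element; the graph-traversal limit moves to the builder API. A sketch, assuming the Ehcache 3 dependency is on the classpath (the cache name, key/value types, and sizes are illustrative):

```java
import org.ehcache.Cache;
import org.ehcache.CacheManager;
import org.ehcache.config.builders.CacheConfigurationBuilder;
import org.ehcache.config.builders.CacheManagerBuilder;
import org.ehcache.config.builders.ResourcePoolsBuilder;
import org.ehcache.config.units.MemoryUnit;

public class SizeOfExample {
    public static void main(String[] args) {
        CacheManager manager = CacheManagerBuilder.newCacheManagerBuilder()
            .withCache("entityCache", // illustrative cache name
                CacheConfigurationBuilder.newCacheConfigurationBuilder(
                        Long.class, String.class,
                        // a byte-sized heap pool is what triggers size-of measurement
                        ResourcePoolsBuilder.newResourcePoolsBuilder()
                            .heap(16, MemoryUnit.MB))
                    // raise the object-graph traversal limit (default is 1000)
                    .withSizeOfMaxObjectGraph(5000)
                    // optional: cap the measured size of any single mapping
                    .withSizeOfMaxObjectSize(1, MemoryUnit.MB))
            .build(true);

        Cache<Long, String> cache =
            manager.getCache("entityCache", Long.class, String.class);
        cache.put(1L, "value");
        manager.close();
    }
}
```

In a Spring Boot setup you would typically expose such a CacheManager as a bean instead of building it in main().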
