Oracle SEQUENCE - 'increment by' and 'cache'
I use JPA with Hibernate in my Spring Boot application. I am facing insert performance issues when doing bulk inserts. So far the fixes I have found are:

  1. Alter the Oracle sequence to set INCREMENT BY greater than 1 (I am using 50)
  2. In the Java entity, set allocationSize to the same value as the sequence's INCREMENT BY

This way JPA avoids a round trip to the database for every sequence value. My sequence is defined as:

CREATE SEQUENCE MYSCM.BOOKING_SCHED_SEQ INCREMENT BY 1 MAXVALUE 9999999999999999999999999999 MINVALUE 1 CACHE 20
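For reference, an entity mapping that pairs the allocationSize with the sequence's INCREMENT BY can be sketched as follows. This is a sketch, not the asker's actual entity; the class and field names are assumptions, while the sequence name is taken from the DDL above.

```java
// Sketch: entity whose allocationSize matches the sequence's INCREMENT BY.
// Class and field names are illustrative, not from the original post.
@Entity
public class BookingSchedule {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE,
                    generator = "booking_sched_gen")
    @SequenceGenerator(name = "booking_sched_gen",
                       sequenceName = "MYSCM.BOOKING_SCHED_SEQ",
                       allocationSize = 50)  // must match INCREMENT BY 50
    private Long id;
}
```

If allocationSize and INCREMENT BY disagree, Hibernate will either generate duplicate-looking id blocks or skip large ranges, so the two values should always be changed together.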

When I increase the INCREMENT BY to 50, should the CACHE be increased to 50 or reduced?

Morphia answered 8/4, 2019 at 14:32 Comment(0)

When I increase the INCREMENT BY to 50, should the CACHE be increased to 50 or reduced?

Neither. There is no relationship between INCREMENT BY and CACHE.

INCREMENT BY controls the step between successive values of the sequence. With INCREMENT BY 50 the series goes 1, 51, 101, 151, and so on.

CACHE controls how many sequence numbers are pre-allocated in memory to service NEXTVAL requests. The smaller the CACHE value, the more often the database has to write to its internal tables to grab the next allocation range. On a moderately busy system we want to minimize that contention (the latches acquired for each refresh), so we set CACHE to a high-ish number, say 1000.

People obsess over the CACHE value because they think that if it is set too high they might "lose" some values and end up with gaps in their series. This is extremely unlikely to happen, and even if it does we shouldn't care: sequences are a source of guaranteed unique values and have no further meaning.


Although, having re-read your question, I don't think any of this will have an impact on the performance of your bulk inserts. Why did you choose to focus on sequence allocation? Have you run a trace to discover where the bottleneck actually is? Have you spoken to your DBA?
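If the real bottleneck is row-by-row inserts, the Hibernate settings that govern JDBC insert batching in a Spring Boot application look like this. This is a sketch of the commonly used properties; the batch size of 50 is an assumption and should be tuned for the actual workload.

```properties
# Let Hibernate group inserts into JDBC batches (one round trip per batch)
spring.jpa.properties.hibernate.jdbc.batch_size=50
# Order statements by entity so batches are not broken up mid-flush
spring.jpa.properties.hibernate.order_inserts=true
spring.jpa.properties.hibernate.order_updates=true
```

Note that JDBC batching works with SEQUENCE-based id generation but is silently disabled for IDENTITY-generated ids, which is one more reason the sequence/allocationSize approach is the right starting point here.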

Setula answered 8/4, 2019 at 14:43 Comment(8)
Yes. The bulk insert is causing a timeout because a sequence.nextval query is issued for each record. I enabled batch inserts, which improved performance somewhat, but this is still a problem. – Morphia
Which version of Oracle are you using? – Setula
I am using 12c Release 2. – Morphia
Post a snippet of your insert code. Most probably what you call a bulk insert is no bulk at all. I suspect you are issuing one select and one insert statement for each row. The solution is not just to limit it to one select per 50 rows: you also have to activate JDBC batch inserts under the cover so that a bunch of rows is inserted in one round trip. – Detailed
I agree with @marmitebomber. This is not an Oracle problem, and definitely not an Oracle sequence problem. Something is awry in your JPA / Hibernate / Spring Boot config which is causing your application to do bulk inserts in a sub-optimal fashion. – Setula
@MarmiteBomber yes, you are correct. It's Hibernate doing bad stuff. I mitigated this to a degree by increasing the sequence INCREMENT BY to 50 and the JPA allocationSize to 50. Hibernate will use the allocated block of 50 ids and only call the sequence again once the 50 are exhausted. – Morphia
@Setula the cache for sequence.nextval is in the database server itself, right? Not on the Hibernate side? – Nickolas
@AlexandarPetrov – yes, the database caches the sequence. – Setula
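The allocation behaviour described in the comments above (Hibernate handing out a block of 50 ids per sequence call) can be sketched as follows. This is an illustrative simulation, not Hibernate's actual code, and it assumes a pooled-lo style optimizer where INCREMENT BY equals allocationSize.

```java
// Sketch (NOT Hibernate internals): a pooled-lo style id allocator.
// One simulated sequence fetch yields a block of `allocationSize` ids;
// the "database" is only hit again once the block is exhausted.
class PooledLoIdAllocator {
    private final long allocationSize;
    private long sequenceValue;   // stand-in for the Oracle sequence
    private long next = 0, hi = 0;
    long fetches = 0;             // counts simulated database round trips

    PooledLoIdAllocator(long allocationSize) {
        this.allocationSize = allocationSize;
        this.sequenceValue = 1 - allocationSize; // so the first NEXTVAL is 1
    }

    long nextId() {
        if (next >= hi) {                     // current block exhausted
            sequenceValue += allocationSize;  // simulated NEXTVAL round trip
            fetches++;
            next = sequenceValue;
            hi = next + allocationSize;
        }
        return next++;
    }
}
```

With allocationSize 50, ids 1 through 50 cost a single round trip, and id 51 triggers the second, which is exactly the mitigation the asker describes.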
