JPA inserts slow with an object graph

I'm trying to do a cascading save on a large object graph using JPA. For example (my object graph is a little bigger but close enough):

@Entity
@Table(name="a")
public class A {
  @Id
  @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "a_gen")
  @SequenceGenerator(name = "a_gen", sequenceName = "a_seq", allocationSize = 1)
  private long id;

  @OneToMany(cascade = CascadeType.ALL, mappedBy = "a")
  private Collection<B> bs;
}

@Entity
@Table(name="b")
public class B {
  @Id
  @GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "b_gen")
  @SequenceGenerator(name = "b_gen", sequenceName = "b_seq", allocationSize = 1)
  private long id;

  @ManyToOne
  private A a;
}

So I'm trying to persist A which has a collection of 100+ B's. Code is just

em.persist(a);

Problem is, it's SLOW. My save is taking approximately 1300ms. I looked at the SQL being generated and it's horribly inefficient. Something like this:

select a_seq.nextval from dual;
select b_seq.nextval from dual;
select b_seq.nextval from dual;
select b_seq.nextval from dual;
...
insert into a (id) values (1);
insert into b (id, fk) values (1, 1);
insert into b (id, fk) values (2, 1);
insert into b (id, fk) values (3, 1);
...

I'm currently using TopLink as the persistence provider, but I've also tried EclipseLink and Hibernate. The backend is Oracle 11g. The problem is really how the SQL is put together: each of these operations is done discretely rather than in bulk, so if there is a network latency of even 5ms between my app server and DB server, doing 200 discrete operations adds 1 second. I've tried increasing the allocationSize of my sequences, but that only helps a bit. I've also tried direct JDBC as a batch statement:

statement = connection.prepareStatement(sql); // prepare once, outside the loop
for...{
  // bind this row's values, then queue the row
  statement.addBatch();
}
statement.executeBatch(); // all rows go to the database in one round trip

For my data model, the same inserts take about 33ms as a direct JDBC batch. Oracle itself spends 5ms on the 100+ inserts.

Is there any way of making JPA (I'm stuck with 1.0 right now...) go faster without delving into vendor-specific things like Hibernate bulk insert?

Thanks!

Sorcim answered 23/6, 2010 at 22:56

The solution would be to enable JDBC batching and to flush and clear the EntityManager at regular intervals (the same as the batch size), but I'm not aware of a vendor-neutral way to do this (see the sketch after this list):

  • With Hibernate, you'd have to set the hibernate.jdbc.batch_size configuration option. See Chapter 13. Batch processing

  • With EclipseLink, it looks like there is a batch writing mode. See Jeff Sutherland's post in this thread (it should also be possible to specify the size).

  • According to the comments of this blog post, batch writing is not available in TopLink Essentials :(
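As a sketch of that flush/clear pattern, adapted to the entities in the question (the batch size, loop shape, and getBs() accessor are illustrative, not from the original answer):

em.getTransaction().begin();
em.persist(a);              // parent first so each B has a valid FK target
int batchSize = 50;         // keep in sync with hibernate.jdbc.batch_size
int i = 0;
for (B b : a.getBs()) {     // getBs() is an assumed accessor
  em.persist(b);
  if (++i % batchSize == 0) {
    em.flush();             // push the queued inserts as one JDBC batch
    em.clear();             // detach flushed entities to keep the context small
  }
}
em.getTransaction().commit();

Note that clear() also detaches the parent, so a real version may need to re-attach or re-read A before continuing; the point is only that flushes happen in batch-sized chunks.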

Nepotism answered 24/6, 2010 at 2:41

Curious why you find increasing the INCREMENT BY dirty? It is an optimization that reduces the number of calls to the database to retrieve the next sequence value, and it is a common pattern in database clients where the id value is assigned in the client prior to INSERT. I don't see this as a JPA or ORM issue, and the cost should be the same in your JDBC comparison, since it must also retrieve a new sequence number for each new row prior to INSERT. If you take a different approach in your JDBC case then we should be able to get EclipseLink JPA to follow the same approach.
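As an illustration of the pre-allocation being described here, assuming a sequence generator on B's id (the generator name and size are made up):

// With allocationSize = 50, the provider does one b_seq.nextval round trip per
// 50 new rows and hands out the intervening ids in memory. The sequence itself
// must be created or altered with a matching INCREMENT BY 50 on the Oracle side.
@Id
@GeneratedValue(strategy = GenerationType.SEQUENCE, generator = "b_gen")
@SequenceGenerator(name = "b_gen", sequenceName = "b_seq", allocationSize = 50)
private long id;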

The cost of JPA is probably most obvious in the isolated INSERT scenario because you are not gaining any benefit from repeated reads through a transactional or shared cache, and, depending on your cache configuration, you are paying a price to put these new entities into the cache within the flush/commit.

Please note that there is also a cost to creating the first EntityManager, where all of the metadata processing, class-loading, possibly weaving, and metamodel initialization occurs. Make sure you keep this time out of your comparison. In your real application this occurs once, and all subsequent EntityManagers benefit from the shared metadata.
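For example, a benchmark might warm the factory up before timing anything (a sketch; the persistence-unit name "my-pu" is made up):

EntityManagerFactory emf = Persistence.createEntityManagerFactory("my-pu");
emf.createEntityManager().close(); // first EntityManager pays the one-time metadata cost

long start = System.currentTimeMillis();
EntityManager em = emf.createEntityManager();
em.getTransaction().begin();
em.persist(a);
em.getTransaction().commit();
em.close();
System.out.println("persist took " + (System.currentTimeMillis() - start) + "ms");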

If you have other scenarios that need to read these entities, then the cost of putting them in the cache can reduce the cost of retrieving them. In my experience I can make an application much faster overall than a typical hand-written JDBC solution, but it's a balance across the entire set of concurrent users, not an isolated test case.

I hope this helps. Happy to provide more guidance on EclipseLink JPA and its performance and scalability options.

Doug

Boer answered 5/7, 2010 at 14:54
Thanks for the response. For sequence fetching with databases like Oracle, I'm not sure why you can't just put that right in the insert statement (my_seq.nextval). It is the network latency of doing this many times that causes the slowdown. The time it takes Oracle itself to grab the next sequence value is statistically insignificant. – Sorcim
Assigning the value within the INSERT statement is definitely faster on most databases. The challenge is that you generally also need the new value within the application for caching, to maintain identity, or to populate cascaded primary keys. If your database supports using nextval within the INSERT, it must also return the value from the INSERT for the JPA provider to use. – Boer
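In plain JDBC, assigning the id inside the INSERT and reading it back might look like this sketch, using the standard getGeneratedKeys() mechanism (the parentId variable is assumed, and this is not something the JPA providers above do out of the box):

// Oracle assigns the id from the sequence inside the INSERT itself; asking the
// driver for the "id" column lets the application still see the new value.
PreparedStatement ps = connection.prepareStatement(
    "insert into b (id, fk) values (b_seq.nextval, ?)", new String[] { "id" });
ps.setLong(1, parentId);
ps.executeUpdate();
ResultSet keys = ps.getGeneratedKeys();
if (keys.next()) {
  long newId = keys.getLong(1); // the value b_seq.nextval just produced
}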

Thanks Pascal for the response. I've done some tests and I was able to significantly increase the performance.

With no optimizations, I had an insert taking approximately 1100ms. Using EclipseLink, I added the following to persistence.xml:

   <property name="eclipselink.jdbc.batch-writing" value="JDBC"/>
   <property name="eclipselink.jdbc.batch-writing.size" value="1000"/>

I tried the other properties (Oracle-JDBC, etc.), but JDBC appeared to give the best performance increase. That brought the insert down to approximately 900ms, a fairly modest improvement of 200ms. The big savings came from increasing the sequence allocationSize. I'm not a huge fan of doing this; I find it dirty to increase the INCREMENT BY of my sequences just to accommodate JPA. Increasing it brought the time down to approximately 600ms for each insert, so in total those changes shaved off about 500ms.

All this is fine and dandy, but it's still significantly slower than JDBC batch. JPA is a pretty high price to pay for ease of coding.

Sorcim answered 25/6, 2010 at 16:22
Thanks for the feedback. I should have noticed the allocationSize. +1 – Nepotism
