Hibernate - Batch update returned unexpected row count from update: 0 actual row count: 0 expected: 1
S

46

224

I get the following Hibernate error. I am able to identify the function which causes the issue. Unfortunately there are several DB calls in the function, and I am unable to find the line that causes the issue, since Hibernate flushes the session at the end of the transaction. The error below looks like a generic one; it doesn't even mention which bean causes the issue. Is anyone familiar with this Hibernate error?

org.hibernate.StaleStateException: Batch update returned unexpected row count from update: 0 actual row count: 0 expected: 1
        at org.hibernate.jdbc.BatchingBatcher.checkRowCount(BatchingBatcher.java:93)
        at org.hibernate.jdbc.BatchingBatcher.checkRowCounts(BatchingBatcher.java:79)
        at org.hibernate.jdbc.BatchingBatcher.doExecuteBatch(BatchingBatcher.java:58)
        at org.hibernate.jdbc.AbstractBatcher.executeBatch(AbstractBatcher.java:195)
        at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:235)
        at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:142)
        at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:297)
        at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:27)
        at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:985)
        at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:333)
        at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:106)
        at org.springframework.orm.hibernate3.HibernateTransactionManager.doCommit(HibernateTransactionManager.java:584)
        at org.springframework.transaction.support.AbstractPlatformTransactionManager.processCommit(AbstractPlatformTransacti
onManager.java:500)
        at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManag
er.java:473)
        at org.springframework.transaction.interceptor.TransactionAspectSupport.doCommitTransactionAfterReturning(Transaction
AspectSupport.java:267)
        at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:106)
        at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:170)
        at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:176)
Steerage answered 30/4, 2010 at 8:11 Comment(2)
I have the same problem. It is not a big issue since it happens very rarely, and using show_sql is not practical because reproducing this behavior requires millions of transactions. However, because this happens repeatedly during a system test I run (which has gazillions of transactions), I suspect there is a specific reason.Jackhammer
I encountered this problem when I tried to update rows which had NO changes. Do not update something if there is no difference.Centipede
D
83

Without code and mappings for your transactions, it'll be next to impossible to investigate the problem.

However, to get a better handle as to what causes the problem, try the following:

  • In your hibernate configuration, set hibernate.show_sql to true. This should show you the SQL that is executed and causes the problem.
  • Set the log levels for Spring and Hibernate to DEBUG, again this will give you a better idea as to which line causes the problem.
  • Create a unit test which replicates the problem without configuring a transaction manager in Spring. This should give you a better idea of the offending line of code.
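As a concrete starting point, the first two suggestions might look like this (property names follow standard Hibernate and log4j conventions; adjust to whichever logging framework you use):

```properties
# hibernate.cfg.xml / hibernate.properties: echo all SQL to stdout
hibernate.show_sql=true
hibernate.format_sql=true

# log4j.properties: the logger-based alternative, no Hibernate config change needed
log4j.logger.org.hibernate.SQL=DEBUG
# also logs bound parameter values (very verbose)
log4j.logger.org.hibernate.type=TRACE
log4j.logger.org.springframework.transaction=DEBUG
```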
Dialysis answered 30/4, 2010 at 9:55 Comment(2)
hibernate.show_sql > I would rather advise setting the log category org.hibernate.SQL to DEBUG. This way you don't need to modify the Hibernate configuration just for logging.Dome
The use of "show_sql" can be very verbose and impracticable in production. For this, I modified the catch clause at line 74 of the BatchingBatcher class to print the statement with ps.toString(), so as to get only the statement that has the problem.Nievesniflheim
O
105

I got the same exception while deleting a record by an Id that does not exist at all. So check that the record you are updating/deleting actually exists in the DB.

Overtop answered 9/11, 2012 at 13:49 Comment(5)
I had this problem when I removed a child from a parent-child relationship, saved the parent (which deletes the child) and then tried to also delete the child manually.Rojo
This solved my problem. The record didn't exist and my service was calling updateAll() method while it actually needed to call createOrUpdateAll() method. Thanks.Dorking
So how do I solve the issue? If I fetch a record and then delete it, but the system has already deleted it before I do, my application will throw the exception.Drucilladrucy
@davethieben, this was exactly the information I needed. I didn't realize the parent-child relationship persisted on deletes.Fermi
The solution is to use select for update when fetching the record to lock the row, so nothing else can delete it before you do.Deflower
D
65

Solution: in the Hibernate mapping file, if you use a generator class for the id property, you should not set its value explicitly via a setter method.

If you set the value of the Id property explicitly, it leads to the error above, so check for this to avoid the error. Alternatively, the error shows up when the mapping file declares the id field with generator="native" or "incremental" while the mapped table in your database is not auto-incremented. Solution: go to your database and alter the table so the id column is auto-incremented.
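If the table really is missing the auto-increment, the fix on the database side might look like this (MySQL syntax; the table and column names are placeholders):

```sql
ALTER TABLE your_table MODIFY COLUMN id BIGINT NOT NULL AUTO_INCREMENT;
```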

Dustindustman answered 7/5, 2014 at 16:18 Comment(3)
Absolutely right. This must be the right answer. Thanks @Rēda BiramanēTasty
However, you CAN set the ID value to '0', and then Hibernate will feel free to overwrite itSeel
Thanks! Not sure why this isn't the highest ranked answer.Africah
T
31

In my case, I came to this exception in two similar cases:

  • In a method annotated with @Transactional, I had a call to another service (with long response times). The method updates some properties of the entity (after the method, the entity still exists in the database). If the user invokes the method twice (thinking it didn't work the first time), then when exiting the transactional method the second time, Hibernate tries to update an entity whose state has already changed since the beginning of the transaction. Because Hibernate looks for the entity in one state but finds it already changed by the first request, it can't update the entity and throws the exception. It's like a conflict in Git.
  • I had automatic requests (for monitoring the platform) that update an entity (with a manual rollback a few seconds later). But the platform was already in use by a test team. When a tester ran a test on the same entity as the automatic requests (within the same hundredth of a millisecond), I got the exception. As in the previous case, when exiting the second transaction, the previously fetched entity had already changed.

Conclusion: in my case, it wasn't a problem that could be found in the code. This exception is thrown when Hibernate finds that the entity first fetched from the database changed during the current transaction, so it can't flush it to the database: Hibernate doesn't know which version of the entity is correct, the one the current transaction fetched at the beginning, or the one already stored in the database.

Solution: to solve the problem, you will have to play with the Hibernate LockMode to find the one that best fits your requirements.
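To make the conflict concrete, here is a plain-Java sketch (no Hibernate involved; the class and method names are invented for illustration) of the versioned UPDATE that optimistic locking issues, `update ... where id = ? and version = ?`, and why the second writer sees a row count of 0:

```java
import java.util.HashMap;
import java.util.Map;

// Simulates a table with an id, a title, and a @Version column.
class VersionedStore {
    private final Map<Long, Object[]> rows = new HashMap<>(); // id -> {title, version}

    void insert(long id, String title) {
        rows.put(id, new Object[] { title, 0 });
    }

    // Mirrors: UPDATE post SET title = ?, version = version + 1
    //          WHERE id = ? AND version = ?   -> returns the affected row count
    int update(long id, String newTitle, int expectedVersion) {
        Object[] row = rows.get(id);
        if (row == null || (int) row[1] != expectedVersion) {
            return 0; // stale version or missing row: no rows affected
        }
        rows.put(id, new Object[] { newTitle, expectedVersion + 1 });
        return 1;
    }
}
```

A transaction that read version 0 but flushes after another transaction has already bumped the version gets a row count of 0, which is exactly what Hibernate turns into the StaleStateException.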

Translator answered 28/10, 2017 at 12:20 Comment(2)
Hi, thanks for your helpful answer. Could you please share which LockMode you eventually went for to get rid of the error? I am facing a similar issue when an API gets hit twice within a few milliseconds because the user inadvertently clicks twice. Pessimistic locking hurts performance, so I'd be interested to know what you eventually chose.Private
It was a long time ago and I don't remember the exact solution I put in place, but it was either LockMode.READ or LockMode.OPTIMISTIC.Translator
H
20

This happened to me once by accident when I was assigning specific IDs to some objects (for testing) and then trying to save them to the database. The problem was that the database had a specific policy for setting up object IDs. Just do not assign an ID manually if you have an ID-generation policy at the Hibernate level.

Headgear answered 17/4, 2014 at 23:58 Comment(0)
K
17

I just encountered this problem and found out I was deleting a record and trying to update it afterwards in a Hibernate transaction.

Kodiak answered 8/2, 2013 at 12:19 Comment(0)
D
17

Hibernate 5.4.1 and HHH-12878 issue

Prior to Hibernate 5.4.1, the optimistic locking failure exceptions (e.g., StaleStateException or OptimisticLockException) didn't include the failing statement.

The HHH-12878 issue was created to improve Hibernate so that when throwing an optimistic locking exception, the JDBC PreparedStatement implementation is logged as well:

if ( expectedRowCount > rowCount ) {
    throw new StaleStateException(
            "Batch update returned unexpected row count from update ["
                    + batchPosition + "]; actual row count: " + rowCount
                    + "; expected: " + expectedRowCount + "; statement executed: "
                    + statement
    );
}

Testing Time

I created the BatchingOptimisticLockingTest in my High-Performance Java Persistence GitHub repository to demonstrate how the new behavior works.

First, we will define a Post entity that defines a @Version property, therefore enabling the implicit optimistic locking mechanism:

@Entity(name = "Post")
@Table(name = "post")
public class Post {

    @Id
    @GeneratedValue(strategy = GenerationType.SEQUENCE)
    private Long id;

    private String title;

    @Version
    private short version;

    public Long getId() {
        return id;
    }

    public Post setId(Long id) {
        this.id = id;
        return this;
    }

    public String getTitle() {
        return title;
    }

    public Post setTitle(String title) {
        this.title = title;
        return this;
    }

    public short getVersion() {
        return version;
    }
}

We will enable the JDBC batching using the following 3 configuration properties:

properties.put("hibernate.jdbc.batch_size", "5");
properties.put("hibernate.order_inserts", "true");
properties.put("hibernate.order_updates", "true");

We are going to create 3 Post entities:

doInJPA(entityManager -> {
    for (int i = 1; i <= 3; i++) {
        entityManager.persist(
            new Post()
                .setTitle(String.format("Post no. %d", i))
        );
    }
});

And Hibernate will execute a JDBC batch insert:

SELECT nextval ('hibernate_sequence')
SELECT nextval ('hibernate_sequence')
SELECT nextval ('hibernate_sequence')

Query: [
    INSERT INTO post (title, version, id) 
    VALUES (?, ?, ?)
], 
Params:[
    (Post no. 1, 0, 1), 
    (Post no. 2, 0, 2), 
    (Post no. 3, 0, 3)
]

So, we know that JDBC batching works just fine.

Now, let's replicate the optimistic locking issue:

doInJPA(entityManager -> {
    List<Post> posts = entityManager.createQuery("""
        select p 
        from Post p
        """, Post.class)
    .getResultList();

    posts.forEach(
        post -> post.setTitle(
            post.getTitle() + " - 2nd edition"
        )
    );

    executeSync(
        () -> doInJPA(_entityManager -> {
            Post post = _entityManager.createQuery("""
                select p 
                from Post p
                order by p.id
                """, Post.class)
            .setMaxResults(1)
            .getSingleResult();

            post.setTitle(post.getTitle() + " - corrected");
        })
    );
});

The first transaction selects all Post entities and modifies the title properties.

However, before the first EntityManager is flushed, we are going to execute a second transaction using the executeSync method.

The second transaction modifies the first Post, so its version is going to be incremented:

Query:[
    UPDATE 
        post 
    SET 
        title = ?, 
        version = ? 
    WHERE 
        id = ? AND 
        version = ?
], 
Params:[
    ('Post no. 1 - corrected', 1, 1, 0)
]

Now, when the first transaction tries to flush the EntityManager, we will get the OptimisticLockException:

Query:[
    UPDATE 
        post 
    SET 
        title = ?, 
        version = ? 
    WHERE 
        id = ? AND 
        version = ?
], 
Params:[
    ('Post no. 1 - 2nd edition', 1, 1, 0), 
    ('Post no. 2 - 2nd edition', 1, 2, 0), 
    ('Post no. 3 - 2nd edition', 1, 3, 0)
]

o.h.e.j.b.i.AbstractBatchImpl - HHH000010: On release of batch it still contained JDBC statements

o.h.e.j.b.i.BatchingBatch - HHH000315: Exception executing batch [
    org.hibernate.StaleStateException: 
    Batch update returned unexpected row count from update [0]; 
    actual row count: 0; 
    expected: 1; 
    statement executed: 
        PgPreparedStatement [
            update post set title='Post no. 3 - 2nd edition', version=1 where id=3 and version=0
        ]
], 
SQL: update post set title=?, version=? where id=? and version=?

So, you need to upgrade to Hibernate 5.4.1 or newer to benefit from this improvement.

Disject answered 25/6, 2020 at 7:32 Comment(2)
Sadly, probably because of this bug here: hibernate.atlassian.net/browse/HHH-13741 I only see some hash of a Hikari prepared statement. Using Spring Boot with the Hikari connection pool.Cordovan
To my previous comment: my problem is that the Hikari prepared statement wraps an SQLServerPreparedStatement, content of which won't be shown in the log. This would probably be the solution: hibernate.atlassian.net/browse/HHH-13888Cordovan
K
13

This can happen when trigger(s) execute additional DML (data modification) queries which affect the row counts. My solution was to add the following at the top of my trigger:

SET NOCOUNT ON;
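For SQL Server, the statement goes at the very top of the trigger body, before any DML; a sketch (the trigger, table, and column names are placeholders):

```sql
CREATE TRIGGER trg_audit_my_table ON my_table
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON; -- keep the trigger's own DML out of the reported row count
    INSERT INTO my_table_audit (id, changed_at)
    SELECT id, GETDATE() FROM inserted;
END
```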
Katsuyama answered 11/2, 2014 at 15:44 Comment(5)
This answer sent me in the right direction. I was working with a legacy database that had a trigger on it - fortunately I was replacing what the trigger did with code, so I could just delete it.Health
+1 That was my problem too. I was using postgresql, so needed to use the @SQLInsert annotation to switch off row count checking: technology-ebay.de/the-teams/mobile-de/blog/…External
SET NOCOUNT OFF*Spongy
Where did you put this annotation? In the repository?Lapful
@RomainDereux add it as the first line of the trigger which causes the problem.Katsuyama
F
10

I was facing the same issue. The code worked in the testing environment, but not in staging.

org.hibernate.jdbc.BatchedTooManyRowsAffectedException: Batch update returned unexpected row count from update [0]; actual row count: 3; expected: 1

The problem was that in the testing DB the table had a single entry for each primary key, but in the staging DB there were multiple entries for the same primary key. (The staging table had no primary key constraint, so duplicates existed.)

So every update operation failed: it tries to update a single record and expects an update count of 1, but since there were 3 records in the table for the same primary key, the actual update count was 3. Since the expected and actual update counts didn't match, Hibernate threw the exception and rolled back.

After I removed all the records with duplicate primary keys and added a primary key constraint, it worked fine.

Hibernate - Batch update returned unexpected row count from update: 0 actual row count: 0 expected: 1

actual row count: 0 // means no record found to update
update: 0 // means no record found so nothing update
expected: 1 // means expected at least 1 record with key in db table.

Here the problem is that the query tries to update a record for some key, but Hibernate didn't find any record with that key.
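The check itself is simple. Here is a simplified stand-alone sketch of the expected-vs-actual comparison Hibernate performs per batched statement (Hibernate distinguishes too-few rows, StaleStateException, from too-many rows, BatchedTooManyRowsAffectedException; this sketch throws IllegalStateException for either, so it runs without Hibernate on the classpath):

```java
class RowCountCheck {
    // Mirrors the row-count comparison done after executing a JDBC batch:
    // any mismatch between the expected and actual count is an error.
    static void checkRowCount(int expected, int actual, int batchPosition) {
        if (actual != expected) {
            throw new IllegalStateException(
                "Batch update returned unexpected row count from update ["
                    + batchPosition + "]; actual row count: " + actual
                    + "; expected: " + expected);
        }
    }
}
```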

Formulism answered 25/8, 2015 at 10:3 Comment(6)
Hi @ParagFlume, I got the same error "Batch update returned unexpected row count from update: 0 actual row count: 0 expected: 1" when I try to select one object from my database. The problem exists only in production. Do you have any idea how to resolve this? I'm in a critical situation. Regards!Osmosis
I think the query didn't find any record in the DB; it expected to find the record it was trying to update. You can check manually in a query browser whether the record actually exists in the DB.Formulism
Yes, the record exists, but my problem is why Hibernate tries to update at all; I'm just using a select service to get the object from the database!Osmosis
Maybe you are changing the state of the objects while the session is still open.Formulism
How can I check this behavior? I'm not the person who developed the service...Osmosis
Just check whether any state of the object gets changed, especially the id (the id/primary key of the DTO).Formulism
W
10

My two cents.

Problem: with Spring Boot 2.7.1, the H2 database version changed to v2.1.214, which may result in an OptimisticLockException being thrown when using generated UUIDs for id columns; see https://hibernate.atlassian.net/browse/HHH-15373.

Solution: Add columnDefinition="UUID" to the @Column annotation

E.g., with a primary key definition for an entity like this:

@Id
@GeneratedValue(generator = "UUID")
@GenericGenerator(name = "UUID", strategy = "org.hibernate.id.UUIDGenerator")
@Column(name = COLUMN_UUID, updatable = false, nullable = false)
UUID uUID;

Change the column annotation to:

@Column(name = COLUMN_UUID, updatable = false, nullable = false, columnDefinition="UUID")
Wellesley answered 7/9, 2022 at 12:17 Comment(1)
You said it "may" result into such an exception. Do you know by any chance what exactly is causing it or under which conditions it may occur?Coincide
C
8

It also can happen when you try to UPDATE a PRIMARY KEY.

Cyna answered 31/8, 2017 at 17:9 Comment(0)
T
5

As Julius says, this happens when an update occurs on an object whose children are being deleted. (Probably because the whole parent object needed an update, and sometimes we prefer to delete the children and re-insert them on the parent, new or old doesn't matter, along with any other updates the parent may have on its other plain fields.) So, for this to work, delete the children (within a transaction) by calling childrenList.clear() (don't loop through the children and delete each one with childDAO.delete(childrenList.get(i))), and set @OneToMany(cascade = CascadeType.XXX, orphanRemoval = true) on the side of the parent object. Then update the parent (fatherDAO.update(father)), and repeat for every parent object. The result is that the children have their link to their parent stripped off and are then removed as orphans by the framework.

Torsibility answered 29/8, 2013 at 18:50 Comment(0)
Q
5

I encountered this problem where we had a one-to-many relationship.

In the Hibernate hbm mapping file for the master, on the set-typed collection, I added cascade="save-update" and it worked fine.

Without this, by default Hibernate tries to update a non-existent record and, by doing so, inserts instead.
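In hbm.xml terms, the attribute goes on the collection element of the master mapping; a sketch (the class, table, and column names here are placeholders):

```xml
<set name="details" table="detail" cascade="save-update" inverse="true">
    <key column="MASTER_ID" not-null="true"/>
    <one-to-many class="com.example.Detail"/>
</set>
```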

Quadruplet answered 1/5, 2014 at 15:37 Comment(0)
G
5

Another way to get this error is if you have a null item in a collection.

Guatemala answered 19/10, 2018 at 19:14 Comment(0)
B
4

It happens when you try to delete an object and then update the same object. Use this after the delete:

session.clear();
Belga answered 17/10, 2013 at 8:47 Comment(0)
I
3

I got the same problem, and I verified that it can occur because of an auto-increment primary key. To solve this problem, do not insert the auto-increment value with the data set; insert the data without the primary key.

Invariant answered 30/4, 2014 at 9:36 Comment(0)
M
2

This happened to me too, because my id was a Long and I was receiving the value 0 from the view; when I tried to save to the database, I got this error. I fixed it by setting the id to null.

Metalware answered 16/2, 2015 at 23:37 Comment(0)
W
2

This problem mainly occurs when we try to save or update an object that has already been fetched into memory by a running session. If you've fetched the object from the session and you're trying to update it in the database, this exception may be thrown.

I used session.evict() to remove the object from Hibernate's cache first; or, if you don't want to risk losing data, use another object to store the data temporarily.

    try {
        if (!session.isOpen()) {
            session = EmployeyDao.getSessionFactory().openSession();
        }
        tx = session.beginTransaction();

        session.evict(e);
        session.saveOrUpdate(e);
        tx.commit();
        EmployeyDao.shutDown(session);
    } catch (HibernateException exc) {
        exc.printStackTrace();
        tx.rollback();
    }
Walkerwalkietalkie answered 17/10, 2019 at 15:5 Comment(0)
E
1

I ran into this issue when I was manually beginning and committing transactions inside a method annotated @Transactional. I fixed the problem by detecting whether an active transaction already existed.

// Detect underlying transaction
if (session.getTransaction() != null && session.getTransaction().isActive()) {
    tx = session.getTransaction();
    preExistingTransaction = true;
} else {
    tx = session.beginTransaction();
}

Then I allowed Spring to handle committing the transaction.

private void finishTransaction() {
    if (!preExistingTransaction) {
        try {
            tx.commit();
        } catch (HibernateException he) {
            if (tx != null) {
                tx.rollback();
            }
            log.error(he);
        } finally {
            if (newSessionOpened) {
                SessionFactoryUtils.closeSession(session);
                newSessionOpened = false;
                maxResults = 0;
            }
        }
    }
}
Edythedythe answered 6/10, 2014 at 15:42 Comment(0)
J
1

This happens when you declare the JSF managed bean as

@RequestScoped

when you should declare it as

@SessionScoped

Regards

Jalap answered 30/7, 2015 at 22:5 Comment(0)
D
1

I got this error when I tried to update an object with an id that did not exist in the database. The reason for my mistake was that I had manually assigned a property named 'id' to the client-side JSON representation of the object, and when deserializing the object on the server side, this 'id' property would overwrite the instance variable (also called 'id') that Hibernate was supposed to generate. So be careful with naming collisions if you are using Hibernate to generate identifiers.

Dodge answered 11/5, 2016 at 21:12 Comment(0)
T
1

I also came across the same challenge. In my case, I was updating an object which didn't even exist, using hibernateTemplate.

Actually, in my application I fetched a DB object to update, and while updating its values I also updated its ID by mistake, then went ahead with the update and ran into the said error.

I am using hibernateTemplate for CRUD operations.

Tasty answered 17/7, 2016 at 18:10 Comment(0)
F
1

After reading all the answers, I didn't find anyone talking about the inverse attribute of Hibernate.

In my opinion, you should also verify in your relationship mappings whether the inverse keyword is set appropriately. The inverse keyword defines which side is the owner responsible for maintaining the relationship; the procedure for updating and inserting varies according to this attribute.

Let's suppose we have two tables:

principal_table, middle_table

with a one-to-many relationship. The Hibernate mapping classes are Principal and Middle, respectively.

So the Principal class has a Set of Middle objects. The XML mapping file should look like the following:

<hibernate-mapping>
    <class name="path.to.class.Principal" table="principal_table" ...>
    ...
    <set name="middleObjects" table="middle_table" inverse="true" fetch="select">
        <key>
            <column name="PRINCIPAL_ID" not-null="true" />
        </key>
        <one-to-many class="path.to.class.Middle" />
    </set>
    ...

As inverse is set to "true", the Middle class is the relationship owner, so the Principal class will NOT update the relationship.

So the procedure for updating could be implemented like this:

session.beginTransaction();

Principal principal = new Principal();
principal.setSomething("1");
principal.setSomethingElse("2");


Middle middleObject = new Middle();
middleObject.setSomething("1");

middleObject.setPrincipal(principal);
principal.getMiddleObjects().add(middleObject);

session.saveOrUpdate(principal);
session.saveOrUpdate(middleObject); // NOTICE: you will need to save it manually

session.getTransaction().commit();

This worked for me, but you can suggest edits to improve the solution. That way we will all be learning.

Ferretti answered 21/8, 2016 at 14:33 Comment(0)
E
1

In our case we finally found out the root cause of StaleStateException.

In fact, we were deleting the row twice in a single Hibernate session. Earlier we were using the ojdbc6 lib, and this was OK in that version.

But when we upgraded to ojdbc7 or ojdbc8, deleting a record twice threw the exception. There was a bug in our code where we deleted twice, but it was not evident with ojdbc6.

We were able to reproduce with this piece of code:

Detail detail = getDetail(Long.valueOf(1396451));
session.delete(detail);
session.flush();
session.delete(detail);
session.flush();

On the first flush, Hibernate makes the changes in the database. During the second flush, Hibernate compares the session's object with the actual table's record, cannot find one, and hence throws the exception.

Epizoon answered 22/8, 2019 at 6:54 Comment(0)
B
1

I solved it. I found that there was no primary key on my Id column in the table. Once I created it, the problem was solved. There was also a duplicate id in the table, which I deleted, and that resolved it too.

Burrow answered 28/11, 2021 at 8:15 Comment(0)
A
1

This thread is a bit old; however, I thought I should drop my fix here in case it may help someone with the same root cause.

I was migrating a Java Spring Hibernate app from Oracle to Postgres. Along the migration process, I converted a trigger from Oracle to Postgres; the trigger ran on BEFORE INSERT on a table and set one of the column values (of course, the desired column was marked update=false insert=false in the Hibernate mapping to allow the trigger to set its value). When inserting data from the application, I got this error: Hibernate - Batch update returned unexpected row count from update: 0 actual row count: 0 expected: 1

My mistake was that I had "RETURN NULL" at the end of the trigger function, so when the trigger set the column value and control went back to Hibernate for saving, the record was lost because I was returning null.

My fix was to change "RETURN NULL" to "RETURN NEW" in the trigger, which keeps the record available after being altered by the trigger. Simply put, this is what was meant by "unexpected row count from update: 0; expected: 1".

Autolysin answered 13/12, 2022 at 4:51 Comment(0)
K
0

This happens if you change something in the data set using a native SQL query while a persisted object for the same data set is present in the session cache. Use session.evict(yourObject);

Kathiekathleen answered 22/6, 2016 at 0:32 Comment(0)
B
0

Hibernate caches objects in the session. If an object is accessed and modified by more than one user, an org.hibernate.StaleStateException may be thrown. It may be solved by merging/refreshing the entity before saving, or by using a lock. More info: http://java-fp.blogspot.lt/2011/09/orghibernatestalestateexception-batch.html

Biflagellate answered 11/8, 2016 at 13:43 Comment(0)
A
0

One such case:

SessionFactory sf=new Configuration().configure().buildSessionFactory();
Session session=sf.openSession();

UserDetails user=new UserDetails();

session.beginTransaction();
user.setUserName("update user agian");
user.setUserId(12);
session.saveOrUpdate(user);
session.getTransaction().commit();
System.out.println("user::"+user.getUserName());

sf.close();
Ahumada answered 12/9, 2016 at 6:15 Comment(0)
M
0

I got this error because I mistakenly mapped the ID column using Id(x => x.Id, "id").GeneratedBy.Assigned();

Issue resolved by using Id(x => x.Id, "id").GeneratedBy.Identity();

Miscalculate answered 28/11, 2016 at 11:34 Comment(0)
B
0

I was facing this exception even though Hibernate was working well. When I tried to insert one record manually using pgAdmin, the issue became clear: the SQL insert query returned 0 inserts. There was a trigger function causing the issue, because it returned null; I only had to set it to return NEW, and the problem was solved.

Hope that helps anybody.

Beachhead answered 3/3, 2017 at 15:38 Comment(0)
G
0

This happened to me because I was missing the ID declaration in the bean class.

Gherkin answered 16/1, 2018 at 12:50 Comment(0)
D
0

In my case, there was an issue with the database: one of the stored procs was consuming all the CPU, causing high DB response times. Once it was killed, the issue got resolved.

Defensive answered 2/2, 2018 at 7:17 Comment(0)
V
0

Actually, it happened to me when I didn't store the object in a reference variable, like this: ses.get(InsurancePolicy.class, 101); After I stored the object in the entity's reference variable, the problem was solved: policy = (InsurancePolicy) ses.get(InsurancePolicy.class, 101); After that, I updated the object and it worked fine.

Vassily answered 2/3, 2018 at 17:51 Comment(0)
B
0

I got the same message. After looking for a code-related cause, I realized that running the application on a local machine interferes with the dev stage, because they share the same DB. So sometimes one server had already deleted an entry while the other was just about to do the same.

Bacchae answered 16/7, 2018 at 8:34 Comment(0)
J
0

A few ways I debugged this error:

  1. As suggested in the accepted answer, turn on show_sql.
  2. I found there was an issue with how the id was set up in the Hibernate SQL.
  3. I found I was missing @GeneratedValue(strategy = GenerationType.IDENTITY)
Jempty answered 16/1, 2019 at 9:45 Comment(0)
O
0

In my case, it was because a trigger fired before the insert (it actually splits a big table into several tables by timestamp) and then returned null. So I ran into this problem when I used the Spring Boot JPA save() function.

In addition to changing the trigger to SET NOCOUNT ON; as Mr. TA mentioned above, the solution can also be to use a native query:

insert into table values(nextval('table_id_seq'), value1)
Operable answered 15/7, 2020 at 3:16 Comment(0)
C
0

Removing cascade=CascadeType.ALL solved it for me.

Cannae answered 20/10, 2021 at 20:49 Comment(0)
S
0

In my case, I had two similar primary keys which I mistakenly created through MySQL Workbench.

In case you see this kind of scenario, try deleting one of them in MySQL Workbench.


Salvatore answered 14/6, 2022 at 12:6 Comment(0)
U
0

I encountered this problem when using Hibernate with MySQL and upon changing the Id data type from UUID to String, the problem was solved. I don't know the reason though.

User answered 6/7, 2022 at 10:15 Comment(1)
Y
0

I encountered this problem when I executed an insert operation, that is, saving my parent entity and its child entity as well. The solution was to use bidirectional mapping in the entity relationships.

Yasmineyasu answered 7/3, 2023 at 10:0 Comment(0)
M
0

I was facing this exception, and the issue was that we were trying to delete the parent object first and then the child object. Reversing the order of deletion in the code resolved the issue.

Menam answered 15/3, 2023 at 12:36 Comment(0)
S
0

The problem we met is that we use PostgreSQL's jsonb with hibernate-types (https://github.com/vladmihalcea/hypersistence-utils), and the jsonb is a List<MyType>, where MyType does not implement equals and hashCode. That makes Hibernate's dirty checking trigger unexpected UPDATE statements after every query.

You can check these best practices, and this issue, for more details

Stoss answered 16/3, 2023 at 1:59 Comment(0)
P
0

Consider this in a multithreaded context.

When using Spring Data JPA query methods as following:

void removeExpiredRecordsByValidfromBefore(Instant removeBefore);

Hibernate first retrieves all records and deletes them one by one. Calling such a method can result in this exception if the retrieved records have already been deleted by another thread.

To prevent this, either isolate the delete requests using an appropriate lock or try a bulk delete, which deletes the records all at once.

@Modifying
@Query("delete from entityName e where e.validfrom < :removeBefore")
void removeBulkExpiredRecordsByValidfromBefore(@Param ("removeBefore")Instant removeBefore);

The difference here is that there will not be any retrieval. The first approach is beneficial when we have callbacks to be executed for each deletion.

Pilcher answered 31/1 at 14:59 Comment(0)
Y
0

If you are using ClickHouse, the error may be related to materialized views: ClickHouse counts affected rows including those in the tables related through the MV.

Yellowwood answered 22/2 at 14:10 Comment(0)
N
-1

Almost always, the cause of this error is that you are sending one or more erroneous primary keys. Verify that your variables are clean before loading them with the new values to perform the update.

Norry answered 14/5, 2021 at 15:2 Comment(1)
Welcome to Stack Overflow. Please write your answer in English, as Stack Overflow is an English only site.Outfoot

© 2022 - 2024 — McMap. All rights reserved.