Concerning the Answer:
"So it seems there is no simple, elegant and fail-proof solution for
that. In our case it was decided to rely on simple redelivery
mechanism (throwing exception and letting JMS message to be
redelivered after certain amount of time)."
This is only fail-proof if the second transaction, which starts after Transaction 1 logically ends, has a way of detecting that the Transaction 1 changes are not yet visible, and can blow itself up with a technical exception.
When Transaction 2 runs in a different process than Transaction 1, such a check is usually possible. Most likely the output of Transaction 1 is necessary for Transaction 2 to go forward. You can only make french fries if you have potatoes... If you have no potatoes, you can blow up and try again next time, as sketched below.
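A minimal sketch of that "check for potatoes" idea, assuming a Spring-style `@Transactional` JMS listener and a Spring Data repository (all names here, including `Potato`, `PotatoRepository` and the `potatoId` message property, are hypothetical):

```java
import jakarta.jms.JMSException;
import jakarta.jms.Message;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical listener for Transaction 2: it refuses to run until the
// output of Transaction 1 is actually visible in the DB.
public class FrenchFryListener {

    private final PotatoRepository potatoRepository;

    public FrenchFryListener(PotatoRepository potatoRepository) {
        this.potatoRepository = potatoRepository;
    }

    @Transactional
    public void onMessage(Message message) throws JMSException {
        long potatoId = message.getLongProperty("potatoId");
        // If Transaction 1's output is not visible yet, throw: the transaction
        // rolls back and JMS redelivers the message after the configured delay.
        Potato potato = potatoRepository.findById(potatoId)
                .orElseThrow(() -> new IllegalStateException(
                        "Potato " + potatoId + " not visible yet, forcing redelivery"));
        makeFrenchFries(potato);
    }

    private void makeFrenchFries(Potato potato) {
        // business logic for Transaction 2
    }
}
```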
However, if the process that breaks because the DB appears stale is the exact same process that ran Transaction 1, such a check may be out of your hands. You are just adding potatoes into a bowl (e.g. a DB table), failing to detect that the bowl is overflowing, and running further transactions that keep pumping it up.
Something of the sort happens to be my case.
A theoretical solution for this might very well be to induce an update conflict on the DB by creating an artificial entity equivalent to the @Version field of JPA, forcing each process that needs to run serially to hammer an update onto a common entity. If both Transaction 1 and Transaction 2 update a common field on a common entity, the process has to break: either the second transaction gets a JPA optimistic lock exception, or the DB itself rejects the conflicting update.
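Untested, but a minimal sketch of that artificial version-entity idea might look like this (entity and field names are made up; assumes plain JPA with optimistic locking):

```java
import jakarta.persistence.Entity;
import jakarta.persistence.EntityManager;
import jakarta.persistence.Id;
import jakarta.persistence.Version;

// A row of this table acts as a serialization token: every transaction that
// must run serially updates the same row, so overlapping transactions collide
// on the @Version column at commit time.
@Entity
public class SerializationToken {

    @Id
    private String processName;

    @Version
    private long version;

    // The value itself is irrelevant; writing it is what forces a version bump.
    private long touchCount;

    public void touch() {
        touchCount++;
    }

    // Call inside every transaction that must not overlap with another one.
    public static void hammer(EntityManager em, String processName) {
        SerializationToken token = em.find(SerializationToken.class, processName);
        token.touch(); // the flush at commit fails if a concurrent tx got there first
    }
}
```

If two overlapping transactions both call `hammer()` on the same row, the second one to commit should fail with an `OptimisticLockException` (or the equivalent conflict error from the DB), which then feeds back into the redelivery mechanism from the quoted answer.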
I have not tested this approach yet, but it is likely going to be the needed workaround, sadly enough.