What is the 'best' way to do distributed transactions across multiple databases using Spring and Hibernate

I have an application - more like a utility - that sits in a corner and updates two different databases periodically.

It is a little standalone app that has been built with a Spring Application Context. The context has two Hibernate Session Factories configured in it, in turn using Commons DBCP data sources configured in Spring.

Currently there is no transaction management, but I would like to add some. The update to one database depends on a successful update to the other.

The app does not sit in a Java EE container - it is bootstrapped by a static launcher class called from a shell script. The launcher class instantiates the Application Context and then invokes a method on one of its beans.

What is the 'best' way to put transactionality around the database updates?

I will leave the definition of 'best' to you, but I think it should be some function of 'easy to set up', 'easy to configure', 'inexpensive', and 'easy to package and redistribute'. Naturally FOSS would be good.

Bryner answered 24/9, 2008 at 17:1 Comment(0)

The best way to distribute transactions over more than one database is: Don't.

Some people will point you to XA, but XA (or two-phase commit) is a lie (or marketese).

Imagine: after the first phase, every participant has told the XA manager that it can send the final commit, and then the network connection to one of the databases fails. Now what? Timeout? That would leave the other database corrupt. Rollback? Two problems: you can't roll back a commit, and how do you know what happened to the second database? Maybe the network connection failed after it had successfully committed the data, and only the "success" message was lost?

The best way is to copy the data into a single place. Use a scheme which allows you to abort the copy and continue it at any time (for example, ignore data which you already have, or order the select by ID and request only records whose ID is greater than the MAX(ID) of your copy). Protect this with a transaction. This is not a problem, since you're only reading data from the source: when the transaction fails for any reason, you can ignore the source database and simply start over. It is therefore a plain old single-source transaction.

After you have copied the data, process it locally.
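The copy-and-resume scheme described above can be sketched as follows. This is an in-memory illustration only: the class, method, and field names are invented, and in real code fetchAfter would be a JDBC query along the lines of SELECT id, data FROM source WHERE id > ? ORDER BY id.

```java
import java.util.ArrayList;
import java.util.List;

public class ResumableCopy {
    // Stand-in for a row in the source table; in practice this would
    // come from a JDBC ResultSet.
    static class Row {
        final long id;
        final String data;
        Row(long id, String data) { this.id = id; this.data = data; }
    }

    // Stand-in for: SELECT id, data FROM source WHERE id > ? ORDER BY id
    static List<Row> fetchAfter(List<Row> source, long maxId) {
        List<Row> batch = new ArrayList<>();
        for (Row r : source) {
            if (r.id > maxId) batch.add(r);
        }
        return batch;
    }

    // The copy can be aborted and re-run at any time: it only ever asks
    // the source for rows newer than the newest row already copied.
    static void copy(List<Row> source, List<Row> target) {
        long maxId = target.isEmpty() ? 0 : target.get(target.size() - 1).id;
        target.addAll(fetchAfter(source, maxId));
    }

    public static void main(String[] args) {
        List<Row> source = new ArrayList<>();
        source.add(new Row(1, "a"));
        source.add(new Row(2, "b"));
        List<Row> target = new ArrayList<>();
        copy(source, target);              // initial run copies both rows
        source.add(new Row(3, "c"));
        copy(source, target);              // resumed run fetches only id > 2
        System.out.println(target.size()); // prints 3
    }
}
```

Because each run only requests rows beyond the copy's current MAX(ID), the job is safely re-runnable after a crash without duplicating data.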

Peak answered 20/5, 2009 at 11:38 Comment(14)
Distributed transactions must satisfy all 4 ACID properties. What's your problem? The scenario you described cannot happen, as the managers communicate with each other and only commit when all participating nodes have exchanged a "GO".Campbellite
@Falcon: So what happens if the network fails between PREPARE and COMMIT? Or one of the server dies? "cannot happen" can't happen in reality.Peak
No, they are not instructed to roll back because in this scenario, some of the nodes have already committed. What happens is when the crashed node becomes available, the transaction coordinator tells it to commit again. Because the node responded positively in the "prepare" phase, it is required to be able to "commit", even when it comes back from a crash.Friday
@Nicholas ... unless you can't start the node anymore because of a hardware failure.Peak
I think a RAID drive would address that, or even backups. Restore the last good backup point and all the lost commits will be re-applied.Friday
@Nicholas: No, they won't because the backup is several hours old and the backups of different servers won't match up. I understand that you want XA to work because it feels like such a "good" solution but it really isn't. There are literally thousands of problems lurking under the surface and one of them is going to ruin your day eventually.Peak
I understand you want it not to work, but I expect there are plenty of places in the world that are working with it quite successfully and who have had hardware problems.Friday
@Nicholas I've been working in those places and everyone eventually abandoned XA because they couldn't make it work reliably, even after spending millions of dollars.Peak
I find support for the standard too widespread to be swayed by the anecdotal evidence of one person. Thank you for your comments.Friday
@Nicholas I find support widespread only by companies who make money by fixing the problems this standard creates. The "consumers" (= people who have to suffer such solutions) usually try this once and then look for better solutions. That said, my answer is logically sound. My approach is much simpler than XA and I can prove that it will always work. XA is more of a promise than a fact.Peak
I never said anything about your solution. I've used it myself. It slows everything down a little bit, but can be done when XA is cost prohibitive. Distributed databases architectures work fine, if implemented by a competent database team.Friday
Please back me up here bro #39254024Halftruth
I think that you are needed here: #43119278 :)Ratliff
The database without the transaction will retry until it succeeds. There is not some "gotcha" to XA.Intercalate

Set up a transaction manager in your context. The Spring docs have examples, and it is very simple. Then when you want to execute a transaction:

TransactionTemplate tt = new TransactionTemplate(txManager);
try {
    tt.execute(new TransactionCallbackWithoutResult() {
        @Override
        protected void doInTransactionWithoutResult(TransactionStatus status) {
            // Both updates run in one transaction; an exception rolls both back
            updateDb1();
            updateDb2();
        }
    });
} catch (TransactionException ex) {
    // handle / log the failure
}
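The txManager used above can be declared in the application context along these lines. This is a sketch only: it assumes Hibernate 3 and an existing bean named sessionFactory, so adjust the class name and references for your Spring/Hibernate versions.

```xml
<bean id="txManager"
      class="org.springframework.orm.hibernate3.HibernateTransactionManager">
    <property name="sessionFactory" ref="sessionFactory"/>
</bean>
```

Note that with two session factories you would need two such transaction managers, and a single HibernateTransactionManager can only coordinate the transaction of its own session factory.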

For more examples and information, perhaps look at this: XA transactions using Spring

Ferd answered 24/9, 2008 at 17:20 Comment(1)
This example doesn't really answer the question, or even answers it wrongly: the OP mentioned that he has two Hibernate session factories configured, which would require two separate transaction managers. The example in this answer uses only one transaction manager, which is not specified any further. Consequently, using a single Hibernate transaction manager would never roll back one of the two DBs on errors. Using, for example, a ChainedTransactionManager (as noted by @Pani Dhakshnamurthy) might help, but that is not mentioned in this answer.Tifanie

When you say "two different databases", do you mean different database servers, or two different schemas within the same DB server?

If the former, and you want full transactionality, then you need the XA transaction API, which provides full two-phase commit. But more importantly, you also need a transaction coordinator/monitor which manages transaction propagation between the different database systems. This is part of the Java EE spec, and a pretty rarefied part of it at that. The TX coordinator itself is a complex piece of software. Your application software talks to the coordinator (via Spring, if you so wish).

If, however, you just mean two databases (schemas) within the same DB server, then vanilla JDBC transactions should work just fine: perform your operations against both databases within a single transaction.
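To make the commit-or-rollback behaviour of such a single local transaction concrete, here is a self-contained in-memory sketch. All class, method, and key names below are invented for illustration; real JDBC code would instead use Connection.setAutoCommit(false), commit(), and rollback() on one connection that can see both schemas.

```java
import java.util.HashMap;
import java.util.Map;

public class LocalTxDemo {
    // Two maps stand in for the two schemas living on the same server.
    static Map<String, Integer> schemaOne = new HashMap<>();
    static Map<String, Integer> schemaTwo = new HashMap<>();

    // Commit-or-rollback: snapshot both schemas first, restore on failure,
    // mirroring what a single database transaction does for you.
    static void updateBoth(int delta, boolean failSecondUpdate) {
        Map<String, Integer> undoOne = new HashMap<>(schemaOne);
        Map<String, Integer> undoTwo = new HashMap<>(schemaTwo);
        try {
            schemaOne.merge("balance", delta, Integer::sum);
            if (failSecondUpdate) throw new RuntimeException("second update failed");
            schemaTwo.merge("audit", 1, Integer::sum);
        } catch (RuntimeException e) {
            schemaOne = undoOne;   // rollback: both schemas revert together
            schemaTwo = undoTwo;
        }
    }

    public static void main(String[] args) {
        schemaOne.put("balance", 100);
        schemaTwo.put("audit", 0);
        updateBoth(10, false);  // commits: both schemas change
        updateBoth(10, true);   // fails: neither schema changes
        System.out.println(schemaOne.get("balance") + " " + schemaTwo.get("audit"));
        // prints "110 1"
    }
}
```

The point of the sketch: because both updates live inside one transaction boundary, a failure in the second update also undoes the first, which is exactly the guarantee the OP needs and which two separate per-database transactions cannot give.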

Precision answered 28/9, 2008 at 12:11 Comment(0)

In this case you would need a transaction monitor (a server supporting the XA protocol) and to make sure your databases support XA as well. Most (all?) J2EE servers come with a transaction monitor built in. If your code is not running in a J2EE server, there are a bunch of standalone alternatives - Atomikos, Bitronix, etc.

Nor answered 28/12, 2008 at 17:15 Comment(0)

You could try Spring's ChainedTransactionManager - http://docs.spring.io/spring-data/commons/docs/1.6.2.RELEASE/api/org/springframework/data/transaction/ChainedTransactionManager.html - which chains several transaction managers so that multiple databases can take part in one logical transaction. This could be a better alternative to XA.

Palawan answered 12/6, 2014 at 0:8 Comment(0)

For those suggesting that concerns with two-phase commit can be waved away because it's widely used in practice, I suggest looking at this: https://en.wikipedia.org/wiki/Two-phase_commit_protocol. There's a link at the bottom of the 2PC article to an article on three-phase commit(!)

Some excerpts from the article on 3PC:

In computer networking and databases, the three-phase commit protocol (3PC)[1] is a distributed algorithm which lets all nodes in a distributed system agree to commit a transaction. It is a more failure-resilient refinement of the two-phase commit protocol (2PC).

Three-phase commit assumes a network with bounded delay and nodes with bounded response times; in most practical systems with unbounded network delay and process pauses, it cannot guarantee atomicity.

To summarize:

  • 3PC is more failure-resistant than 2PC
  • Not even 3PC guarantees atomicity
  • Draw your own conclusions about 2PC
Krystynakshatriya answered 13/7, 2023 at 23:45 Comment(0)