Two-phase commit is described as an "atomic commitment protocol". I would expect this to mean that all clients see the state of the world either from before a transaction commits or from after it commits -- with no in-between state. It seems, though, that it can enter a state where a transaction is partially committed and clients see inconsistent data, breaking atomicity.
Consider the case with two databases, A and B. If a network partition occurs during the commit phase, after A has committed but before B has, the transaction is left partially committed. A client querying both A and B will not see consistent data -- the transaction has committed on A, but B still has the data from before the commit.
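To make the scenario concrete, here is a toy sketch of what I have in mind (all names here are made up for illustration -- `Participant`, `two_phase_commit`, etc. are not any real database API):

```python
class Participant:
    def __init__(self, name):
        self.name = name
        self.committed = False

    def prepare(self):
        return True  # in this scenario both participants vote "yes"

    def commit(self):
        self.committed = True


def two_phase_commit(coordinator_log, participants, partition_after=None):
    # Phase 1: collect votes from all participants.
    if not all(p.prepare() for p in participants):
        return "aborted"
    coordinator_log.append("commit decided")  # the decision itself is durable
    # Phase 2: tell each participant to commit.
    for p in participants:
        p.commit()
        if p.name == partition_after:
            raise ConnectionError("partition: commit never reaches the rest")
    return "committed"


a, b = Participant("A"), Participant("B")
log = []
try:
    two_phase_commit(log, [a, b], partition_after="A")
except ConnectionError:
    pass

print(a.committed, b.committed)  # True False -- the window I'm asking about
```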
The "Consistent" part of ACID also seems to be broken -- a client querying A and B could see data that violates business rules.
I guess the idea is that the system will eventually recover from this, once the partition heals and the transaction manager re-sends the commit to B. In the meantime, though, the system is in an inconsistent "partially committed" state. Isn't the whole point of atomicity to prevent this? By the time consistency is restored, the damage could already be done.
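Continuing the same made-up sketch above, my understanding of that recovery step is roughly:

```python
def recover(coordinator_log, participants):
    # The commit decision was already logged, so the coordinator just keeps
    # re-sending COMMIT until every participant has applied it.
    if "commit decided" in coordinator_log:
        for p in participants:
            if not p.committed:
                p.commit()


recover(log, [a, b])
print(a.committed, b.committed)  # True True -- consistent again, but only now
```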
What property is referred to when two-phase commit is said to be atomic?