Distributed/XA transactions, in turn, come with their own issues and complexities. Some of the problems with distributed transactions include:
- The 2-PC protocol is very chatty and does a lot of logging so that it can recover from any failure scenario.
- It imposes overhead on 99.9% of the cases just to handle less than 0.1% of the failure/exception cases.
- It increases the amount of time locks are held on the databases, which increases the chances of deadlocks and lowers the overall performance of the system.
- Distributed transactions are the bane of scalability; they grind the entire system to a halt by adding overhead to each and every transaction.
- Availability of the system goes down. The completion of a distributed transaction depends on the availability of two different systems at once, so the total availability is the product of the two: two resources that are each 99.9% available yield at best roughly 99.8% combined.
- XA/distributed transaction configuration is complicated and is difficult to test to make sure it is set up correctly. (Many Java developers tend to believe that using a JTA transaction manager takes care of XA transactions and forget to configure the resources as XA resources; many also end up using JTA even when dealing with a single resource.) People often enlist XA resources when they don't have to, and in many cases the application would perform better if it weren't unnecessarily using XA (a minimal sketch of the single-resource case follows this list).
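To make that last point concrete, here is a minimal sketch of a DAO that touches a single database (the class name and the orders table are made up for illustration). A plain JDBC local transaction on an ordinary pooled DataSource is all it needs; routing it through a JTA transaction manager and an XADataSource would only add the 2-PC overhead described above without buying any extra safety.

```java
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.PreparedStatement;

// Hypothetical DAO: a single resource only needs a local JDBC transaction,
// no JTA/XA transaction manager or XADataSource required.
public class OrderDao {

    private final DataSource dataSource; // a plain (non-XA) pooled DataSource

    public OrderDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public void updateStatus(long orderId, String status) throws Exception {
        try (Connection con = dataSource.getConnection()) {
            con.setAutoCommit(false);              // begin a local transaction
            try (PreparedStatement ps =
                     con.prepareStatement("UPDATE orders SET status = ? WHERE id = ?")) {
                ps.setString(1, status);
                ps.setLong(2, orderId);
                ps.executeUpdate();
                con.commit();                      // single-phase commit, no 2-PC logging
            } catch (Exception e) {
                con.rollback();                    // simple local rollback
                throw e;
            }
        }
    }
}
```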
Based on the issues above, it is strongly recommended to avoid XA transactions as much as possible. So how do we maintain the ACID properties of a transaction that spans multiple resources? In most cases, intelligent recovery scenarios supported by the system's management interface will eliminate the need for XA. The possible ways to circumvent distributed transactions could be a separate post in itself... stay tuned :).
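As a small teaser of what such "intelligent recovery" can look like, here is one hedged sketch, not a definitive recipe: commit to each resource in its own local transaction and, if the second step fails, run a compensating action instead of holding XA locks across both systems. All class and method names below are invented for illustration.

```java
// Illustrative only: two resources updated via separate local transactions,
// with a compensating action on failure instead of one spanning XA transaction.
public class PaymentWorkflow {

    private final OrderService orderService;     // hypothetical service backed by resource A
    private final PaymentGateway paymentGateway; // hypothetical service backed by resource B

    public PaymentWorkflow(OrderService orderService, PaymentGateway paymentGateway) {
        this.orderService = orderService;
        this.paymentGateway = paymentGateway;
    }

    public void placeOrder(long orderId) {
        orderService.markOrderPlaced(orderId);   // local transaction on resource A
        try {
            paymentGateway.charge(orderId);      // local transaction on resource B
        } catch (Exception e) {
            // "Intelligent recovery": undo the first step rather than relying on 2-PC.
            orderService.cancelOrder(orderId);
            throw e;
        }
    }
}

// Hypothetical collaborators, declared only to keep the sketch self-contained.
interface OrderService {
    void markOrderPlaced(long orderId);
    void cancelOrder(long orderId);
}

interface PaymentGateway {
    void charge(long orderId);
}
```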