We are trying to convert our monolithic application to a microservices-based architecture. We use PostgreSQL as one of our databases in the monolithic application, with BoneCP for connection pooling.
When this monolith is split into a number of independent microservices, each running in a separate JVM, I can think of two options for connection pooling:
- BoneCP or another decent connection pool for each microservice - My initial research suggests this is the usual choice. It allows fine-grained control over the connection requirements of each service (see the first sketch after this list). The downside is that as the number of services grows, the number of connection pools grows with it, and eventually there will be too many idle connections, assuming the minimum pool size of each pool is greater than 0.
- Rely on database-specific external poolers like PgBouncer - This approach has the advantage that the connection pool is managed centrally rather than per microservice, so the number of idle connections can be brought down. It is also language/technology agnostic. The downside is that these poolers are database specific, and some JDBC functionality may not work. For example, server-side prepared statements do not work with PgBouncer in transaction pooling mode (see the second sketch below).
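For context, here is a minimal sketch of what option 1 would look like inside each service. The JDBC URL, credentials, and pool sizes are placeholders, not our real settings; the point is that every service carries its own copy of this configuration:

```java
import java.sql.Connection;
import java.sql.SQLException;

import com.jolbox.bonecp.BoneCP;
import com.jolbox.bonecp.BoneCPConfig;

public class ServicePool {
    private final BoneCP pool;

    public ServicePool() throws SQLException {
        BoneCPConfig config = new BoneCPConfig();
        // Placeholder URL/credentials; each service would point at its own database
        // on the shared PostgreSQL server.
        config.setJdbcUrl("jdbc:postgresql://db-host:5432/orders");
        config.setUsername("orders_service");
        config.setPassword("secret");
        // Per-service sizing: even a small minimum, multiplied by ~50 services,
        // adds up to a lot of idle connections on the shared server.
        config.setPartitionCount(1);
        config.setMinConnectionsPerPartition(2);
        config.setMaxConnectionsPerPartition(10);
        this.pool = new BoneCP(config);
    }

    public Connection getConnection() throws SQLException {
        // Hands out a pooled connection; callers must close() it to return it.
        return pool.getConnection();
    }
}
```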
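And for option 2, this is roughly how a service would connect through PgBouncer instead. The host name and database are assumptions (6432 is PgBouncer's default port); the relevant part is `prepareThreshold=0`, which tells the PostgreSQL JDBC driver not to use server-side prepared statements, since those break under transaction pooling:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class ViaPgBouncer {
    public static void main(String[] args) throws SQLException {
        // Connect to PgBouncer (default port 6432) rather than Postgres directly.
        // prepareThreshold=0 disables server-side prepared statements in pgjdbc,
        // which is needed for PgBouncer's transaction pooling mode.
        String url = "jdbc:postgresql://pgbouncer-host:6432/orders?prepareThreshold=0";
        try (Connection conn = DriverManager.getConnection(url, "orders_service", "secret");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT id FROM orders WHERE status = ?")) {
            ps.setString(1, "OPEN");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getLong("id"));
                }
            }
        }
    }
}
```

PreparedStatement still works at the JDBC level here; pgjdbc just stops promoting statements to named server-side ones, so each execution uses the unnamed statement and survives PgBouncer handing the next transaction to a different server connection.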
In our case, most of the microservices (at least 50) will be connecting to the same PostgreSQL server, even though the databases may differ. So if we go with option 1, there is a high chance of creating too many idle connections: with a minimum pool size of just 2 per service, 50 services would hold 100 connections open even when idle, which already equals PostgreSQL's default max_connections of 100. The traffic to most of our services is quite moderate, and the rationale behind moving to microservices is easier deployment, scaling, and so on.
Has anyone faced a similar problem while adopting a microservices architecture? Is there a better way of solving this problem in the microservices world?