There is an emerging trend of ripping global state out of traditional "static" config management tools like Chef/Puppet/Ansible and instead storing configuration in a centralized/distributed tool, of which the main players appear to be:
- ZooKeeper (Apache)
- Consul (Hashicorp)
- Eureka (Netflix)
Each of these tools works differently, but the principle is the same:
- Store your env vars and other dynamic configurations (that is, stuff that is subject to change) in these tools as key/value pairs
- Connect to these tools/services via clients at startup and pull down your config KV pairs. This typically requires the client to supply a service name ("MY_APP") and an environment ("DEV", "PROD", etc.); a minimal sketch of this step follows this list.
There is an excellent Consul Java client which explains all of this beautifully and provides ample code examples.
My understanding of these tools is that they are built on top of consensus and replication protocols, such as Zab (ZooKeeper), Raft (Consul), and peer-to-peer/gossip-style replication (Eureka), that allow config updates to spread almost virally, with eventual consistency, throughout your nodes. So the idea there is that if you have a myapp app with 20 nodes, say myapp01 through myapp20, and you make a config change on one of them, that change will naturally "spread" across all 20 nodes over a period of seconds/minutes.
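For concreteness, the "config change" itself is just a single KV write against any Consul agent; here is a sketch under the same assumed key layout (Consul's KV PUT endpoint does return `true` on success). How the 20 myapp nodes then learn about the new value is exactly the question below:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ConfigUpdate {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // One write, sent to a single agent; the Consul servers replicate
        // it via Raft so every node reads the same value afterwards.
        HttpRequest put = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8500/v1/kv/config/MY_APP/DEV/pool.size"))
                .PUT(HttpRequest.BodyPublishers.ofString("50"))
                .build();

        HttpResponse<String> response =
                client.send(put, HttpResponse.BodyHandlers.ofString());

        System.out.println("Updated: " + response.body()); // prints "true" on success
    }
}
```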
My problem is: how do these updates actually get deployed to each node? In none of the client APIs (the Consul client I linked to above, the ZooKeeper API, or the Eureka API) do I see any callback functionality that can be registered to notify the client when the centralized service (e.g. the Consul cluster) has config updates it wants to push out for a reload.
So I ask: how is this supposed to work (dynamic config deployment and reload on clients)? I'm interested in a viable answer for any of these three tools, though Consul's API seems to be the most advanced, IMHO.