How should an application using an active/passive redundancy model be containerized with Kubernetes?

I have a distributed application running on virtual machines, among which one service runs in active/passive mode. The active VM provides service via a public IP. Should the active VM fail, the public IP is moved to the passive VM, which then becomes active and starts providing service.

How does this pattern fit into a containerized application managed by Kubernetes?

If I use a replication controller with replicas=1, then in case of a node/minion failure the replication controller will reschedule the pod (the equivalent of a VM in my current application) on another minion, but this would likely cause higher downtime than my current solution, where only the IP resource is moved.
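For illustration, the replicas=1 option I have in mind would look roughly like this; the name, labels, and image are placeholders rather than my real configuration:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: lb                        # placeholder name
spec:
  replicas: 1                     # a single pod; rescheduled on another minion if the node fails
  selector:
    app: lb
  template:
    metadata:
      labels:
        app: lb
    spec:
      containers:
      - name: lb
        image: example.com/lb:1.0 # placeholder image for my load-balancer service
        ports:
        - containerPort: 80
```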

If I use a replication controller with replicas=2, then I would need two pods with different configurations (one with the public IP, the other without), which is an anti-pattern, isn't it? Furthermore, there is no built-in way in Kubernetes to support a virtual IP that moves between pods, is there?

Or should I use replicas=2 and implement something myself to manage the IP (or maybe make use of Pacemaker)? That would introduce another problem: there would be two cluster managers in my application, Kubernetes and Pacemaker/Corosync.

So, how should this be done?

Kenny answered 24/3, 2015 at 7:3 Comment(4)
Is there a reason that you can't load balance between two active replicas at all times? What does your failover procedure look like? Do you have a database, and if so, do you lose commits if the primary fails? – Phototype
The active/passive VM itself is the load balancer in my application. It is the only component that provides external connectivity. Actually, I have several active/passive pairs, depending on the required capacity. Each pair has one public IP. – Kenny
The failover procedure is currently implemented with a proprietary recovery system; basically, it supervises the VM and performs the failover should the active VM fail. I think Pacemaker as a resource manager supports this case with its IP resource agent. However, using Pacemaker together with Kubernetes seems somewhat conflicting, as they both provide cluster management. – Kenny
Have you looked at using a Kubernetes service with an external load balancer? It gives you an external IP and does load balancing between pods. – Phototype

It sounds like your application is using its own master election scheme between the two VMs acting as the load balancer, and that you know internally which one is currently the master.

This can be achieved today in Kubernetes using a service that spans both pods (master and standby) and a readiness probe that only returns success for the currently active master. Failure of a readiness probe removes the pod from the endpoints list, so no traffic will be directed to the pod that isn't the master. When you need to fail over, the standby would report healthy to the readiness probe (and the master would report unhealthy or be unreachable), at which point traffic to the service would only land on the standby (now acting as the master).
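As a rough sketch, both replicas could carry a readiness probe like the one below; the probe path, port, image, and labels are assumptions about your application, not something Kubernetes dictates:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: lb
spec:
  replicas: 2                       # one master, one standby
  selector:
    app: lb
  template:
    metadata:
      labels:
        app: lb                     # both pods carry the same label
    spec:
      containers:
      - name: lb
        image: example.com/lb:1.0   # placeholder image
        ports:
        - containerPort: 80
        readinessProbe:             # only the current master should return success here
          httpGet:
            path: /am-i-master      # hypothetical endpoint exposed by your app
            port: 80
          periodSeconds: 2
```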

You can create the service that spans the two pods with an external IP such that it is reachable from outside of your cluster.
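For example, a service along these lines would select both pods and expose them externally; the name, port, and label are placeholders, and on a cloud provider type: LoadBalancer provisions the external IP, while on bare metal you could list an address under externalIPs instead:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: lb-public
spec:
  type: LoadBalancer    # provisions an external IP on supported cloud providers
  selector:
    app: lb             # matches both the master and standby pods
  ports:
  - port: 80            # externally reachable port
    targetPort: 80      # forwarded to whichever pod is currently ready (the master)
```

Because only the ready pod appears in the service's endpoints, the external IP always points at the current master.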

Phototype answered 25/3, 2015 at 22:41 Comment(2)
Thank you for the help. I will check the readiness probe. Do I understand correctly that I have to configure this external IP on one minion of the cluster, and that if this minion fails I lose access to the service, i.e. there is a single point of failure? About the external load balancer: I would like my app to be infrastructure-independent, so this is not an option, right? – Kenny
Re: external IP, it depends on your deployment configuration. If you are running in GCE, for instance, your external IP would be load balanced across your set of healthy nodes. If you are running on premises and you wanted to share the IP across two nodes, you could use a load balancer (like NetScaler) to spread packets across multiple hosts for increased reliability. – Phototype
