Set replicas on different nodes
I am developing an application for managing Kubernetes runtime microservices. I have already done some cool things, like moving a microservice from one node to another. The problem is that all the replicas move together.

So, imagine that a microservice has two replicas and it is running in a namespace on a cluster with two nodes.

I want to place one replica on each node. Is that possible, even via a YAML file? I am trying to write my own scheduler to do that, but I have had no success so far.

Thank you all

Gasman answered 1/3, 2018 at 14:7 Comment(1)
Hi, you are looking for inter-pod anti-affinityScherle

I think what you are looking for is inter-pod anti-affinity for your ReplicaSet. From the documentation:

Inter-pod affinity and anti-affinity allow you to constrain which nodes your pod is eligible to be scheduled based on labels on pods that are already running on the node rather than based on labels on nodes.

Here is the documentation: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#inter-pod-affinity-and-anti-affinity-beta-feature
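To make this concrete, the anti-affinity from that documentation can also be expressed as a hard rule: with requiredDuringSchedulingIgnoredDuringExecution the scheduler refuses to co-locate two matching pods on the same node, so with two replicas and two nodes you get exactly one per node (a third replica would stay Pending). A minimal sketch, assuming the pods carry an app: my-app label:

```yaml
# Hard inter-pod anti-affinity: no two pods with label app=my-app
# may be scheduled on the same node (keyed on the node's hostname label).
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - topologyKey: kubernetes.io/hostname
        labelSelector:
          matchLabels:
            app: my-app
```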

Sextuplicate answered 1/3, 2018 at 17:27 Comment(3)
Thank you for that, but I am already using nodeSelector in my code. In fact, what I cannot do is specify which replica of a microservice goes to one particular node and which replicas go to another. Got it?Gasman
That makes sense. All the replicas in your ReplicaSet are exactly the same, right? If so, it shouldn't matter which replica goes on a specific node, as long as all the replicas land on different nodes. If the replicas are different, you should probably use a separate ReplicaSet for each.Sextuplicate
Well, all my replicas are the same. I just want to spread them across different nodes.Gasman

I can't find where it's documented, but I recently read somewhere that replicas will be distributed across nodes when you create the Kubernetes Service BEFORE the Deployment / ReplicaSet.

Tew answered 7/3, 2018 at 17:28 Comment(0)

You could use pod anti-affinity (since Kubernetes 1.6) to make the scheduler prefer placing pods on different nodes:

spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100 # Strong preference
        podAffinityTerm:
          topologyKey: kubernetes.io/hostname
          labelSelector:
            matchLabels:
              app: my-app

or the more modern pod topology spread constraints (since Kubernetes 1.19) to spread pods evenly across nodes:

spec:
  topologySpreadConstraints:
  - maxSkew: 1
    topologyKey: kubernetes.io/hostname
    whenUnsatisfiable: ScheduleAnyway
    labelSelector:
      matchLabels:
        app: my-app
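For context, that fragment belongs under the pod template's spec. A full Deployment might look like the sketch below (the name, label, and image are illustrative); note it uses whenUnsatisfiable: DoNotSchedule instead of ScheduleAnyway, which turns the spread from a preference into a hard requirement:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      # Spread pods with label app=my-app evenly across nodes;
      # DoNotSchedule leaves a pod Pending rather than violating the skew.
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: DoNotSchedule
        labelSelector:
          matchLabels:
            app: my-app
      containers:
      - name: my-app
        image: my-app:latest
```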
Garbe answered 7/9, 2024 at 7:45 Comment(0)
