Is there a way to add arbitrary records to kube-dns?
I will explain my problem in a very specific way, because I think it is better to be specific than to explain it abstractly...

Say there is a MongoDB replica set outside of a Kubernetes cluster but inside the same network. The IP addresses of all members of the replica set are resolved via /etc/hosts on the app servers and DB servers.

In an experiment/transition phase, I need to access those MongoDB servers from Kubernetes pods. However, Kubernetes doesn't seem to allow adding custom entries to /etc/hosts in pods/containers.

The MongoDB replica set is already working with a large data set, so creating a new replica set in the cluster is not an option.

Because I use GKE, I suppose I should avoid changing any resources in the kube-dns namespace. Configuring or replacing kube-dns to suit my needs is the last thing I want to try.

Is there a way to resolve the IP addresses of custom hostnames in a Kubernetes cluster?

It is just an idea, but it would be great if kube2sky could read some entries from a ConfigMap and use them as DNS records, e.g. repl1.mongo.local: 192.168.10.100.

EDIT: I referenced this question from https://github.com/kubernetes/kubernetes/issues/12337

Indulgence answered 11/5, 2016 at 15:14 Comment(0)

UPDATE 2017-07-03: Kubernetes 1.7 now supports adding entries to a Pod's /etc/hosts with HostAliases.


The solution is not about kube-dns, but /etc/hosts. Anyway, the following trick seems to work so far...

EDIT: Changing /etc/hosts may have a race condition with the Kubernetes system, so let the script retry.

1) Create a ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: db-hosts
data:
  hosts: |
    10.0.0.1  db1
    10.0.0.2  db2

2) Add a script named ensure_hosts.sh.

#!/bin/sh
# Re-append the custom entries whenever Kubernetes regenerates /etc/hosts.
while true
do
    grep -q db1 /etc/hosts || cat /mnt/hosts.append/hosts >> /etc/hosts
    sleep 5
done

Don't forget chmod a+x ensure_hosts.sh.

3) Add a wrapper script start.sh to your image:

#!/bin/sh
$(dirname "$(realpath "$0")")/ensure_hosts.sh &
exec your-app args...

Don't forget chmod a+x start.sh.
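For completeness, a minimal Dockerfile sketch showing one way to wire the two scripts into the image (the base image, the /app path, and the WORKDIR are assumptions; adjust to your setup):

FROM your-base-image
COPY ensure_hosts.sh start.sh /app/
RUN chmod a+x /app/ensure_hosts.sh /app/start.sh
WORKDIR /app
CMD ["./start.sh"]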

4) Use the ConfigMap as a volume and run start.sh:

apiVersion: extensions/v1beta1
kind: Deployment
...
spec:
  template:
    ...
    spec:
      volumes:
      - name: hosts-volume
        configMap:
          name: db-hosts
      ...
      containers:
      - command:
        - ./start.sh
        ...
        volumeMounts:
        - name: hosts-volume
          mountPath: /mnt/hosts.append
        ...
Indulgence answered 15/5, 2016 at 3:5 Comment(1)
It's a satisfactory hack for a temporary deployment which allows for quick and dirty updates. The other answers are better, especially @0xMH's answerKesley

There are 2 possible solutions for this problem now:

  1. Pod-wise (adding the changes to every pod that needs to resolve these domains)
  2. Cluster-wise (adding the changes to a central place that all pods have access to, which in our case is the DNS)

Let's begin with the pod-wise solution:

As of Kubernetes 1.7, it's now possible to add entries to a Pod's /etc/hosts directly using .spec.hostAliases.

For example: to resolve foo.local, bar.local to 127.0.0.1 and foo.remote, bar.remote to 10.1.2.3, you can configure HostAliases for a Pod under .spec.hostAliases:

apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  - ip: "10.1.2.3"
    hostnames:
    - "foo.remote"
    - "bar.remote"
  containers:
  - name: cat-hosts
    image: busybox
    command:
    - cat
    args:
    - "/etc/hosts"

The Cluster-wise solution:

As of Kubernetes v1.12, CoreDNS is the recommended DNS Server, replacing kube-dns. If your cluster originally used kube-dns, you may still have kube-dns deployed rather than CoreDNS. I'm going to assume that you're using CoreDNS as your K8S DNS.

In CoreDNS it's possible to add arbitrary entries inside the cluster domain, and that way all pods will resolve these entries directly from the DNS without the need to change each and every /etc/hosts file in every pod.

First:

Let's edit the coredns ConfigMap and add the required changes:

kubectl edit cm coredns -n kube-system 

apiVersion: v1
kind: ConfigMap
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        hosts /etc/coredns/customdomains.db example.org {
          fallthrough
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . "/etc/resolv.conf"
        cache 30
        loop
        reload
        loadbalance
    }
  customdomains.db: |
    10.10.1.1 mongo-en-1.example.org
    10.10.1.2 mongo-en-2.example.org
    10.10.1.3 mongo-en-3.example.org
    10.10.1.4 mongo-en-4.example.org

Basically we added two things:

  1. We added the hosts plugin before the kubernetes plugin and used its fallthrough option to satisfy our case.

    To shed some more light on the fallthrough option: any given backend is usually the final word for its zone - it either returns a result, or it returns NXDOMAIN for the query. However, occasionally this is not the desired behavior, so some plugins support a fallthrough option. When fallthrough is enabled, instead of returning NXDOMAIN when a record is not found, the plugin passes the request down the chain. A backend further down the chain then has the opportunity to handle the request, and that backend in our case is kubernetes.

  2. We added a new file to the ConfigMap (customdomains.db) and added our custom domains (mongo-en-*.example.org) in there.

The last thing is to remember to add the customdomains.db file to the config-volume in the CoreDNS pod template:

kubectl edit -n kube-system deployment coredns
volumes:
- name: config-volume
  configMap:
    name: coredns
    items:
    - key: Corefile
      path: Corefile
    - key: customdomains.db
      path: customdomains.db

and finally, to make Kubernetes reload CoreDNS in each running pod:

$ kubectl rollout restart -n kube-system deployment/coredns
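To check that the new records actually resolve from inside the cluster, one option is a throwaway pod (busybox:1.28 is used here because its nslookup is known to behave well for DNS debugging; the hostname comes from the example above):

$ kubectl run -it --rm dnstest --image=busybox:1.28 --restart=Never -- nslookup mongo-en-1.example.org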
Collagen answered 17/12, 2020 at 10:13 Comment(4)
I just posted an answer based on yours but inlining the hosts rather than defining them in a different file. I wondered if you had considered this method, and if there was any reason you preferred to define the hosts in a separate file.Shockproof
What about kube-dns which is mentioned in the title of this question?Kierkegaard
CoreDNS is not a practical option on GKE... I'm here to try and figure out a solution on GKE's kube-dns, similar to what I did on CoreDNS, and this answers a different question than the one in the title... HostAliases does not scale...Wheelhorse
Too late for edit: cloud.google.com/knowledge/kb/… "Note: This is not a Google Cloud supported solution, but just a possible workaround for providing your Google Kubernetes Engine cluster CoreDNS resolution. "Wheelhorse

@0xMH's answer is fantastic, and can be simplified for brevity. CoreDNS allows you to specify hosts directly in the hosts plugin (https://coredns.io/plugins/hosts/#examples).

The ConfigMap can therefore be edited like so:

$ kubectl edit cm coredns -n kube-system 


apiVersion: v1
kind: ConfigMap
data:
  Corefile: |
    .:53 {
        errors
        health {
          lameduck 5s
        }
        hosts {
          10.10.1.1 mongo-en-1.example.org
          10.10.1.2 mongo-en-2.example.org
          10.10.1.3 mongo-en-3.example.org
          10.10.1.4 mongo-en-4.example.org
          fallthrough
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . "/etc/resolv.conf"
        cache 30
        loop
        reload
        loadbalance
    }

You will still need to restart coredns so it rereads the config:

$ kubectl rollout restart -n kube-system deployment/coredns
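If the edited Corefile contains a syntax error, CoreDNS will log the problem rather than serve the new config, so it's worth a quick look at the logs after the restart:

$ kubectl -n kube-system logs deployment/coredns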

Inlining the contents of the hostsfile removes the need to map the hostsfile from the ConfigMap. Both approaches achieve the same outcome; it is up to personal preference where you want to define the hosts.

Shockproof answered 21/12, 2021 at 5:49 Comment(2)
This is the one that worked for me. The other approach with a new file via customdomains.db did not workSalutatory
This does not use kube-dns as in the question; it is not an option on providers that force the use of kube-dns, like GKE (well, they also allow Google DNS, but that requires the cluster domain to be changed from cluster.local if more than one cluster is deployed).Wheelhorse

A Service of type ExternalName is required to access hosts or IPs outside of Kubernetes.

The following worked for me.

{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {
        "name": "tiny-server-5",
        "namespace": "default"
    },
    "spec": {
        "type": "ExternalName",
        "externalName": "192.168.1.15",
        "ports": [{ "port": 80 }]
    }
}
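As the comment below points out, though, externalName is documented to hold a DNS name rather than an IP address. A minimal sketch of that documented usage (the hostname is a placeholder):

{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {
        "name": "mongo-alias",
        "namespace": "default"
    },
    "spec": {
        "type": "ExternalName",
        "externalName": "repl1.mongo.example.com"
    }
}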
Massarelli answered 12/3, 2017 at 16:25 Comment(1)
Do not use this! externalName should only point to an external name, such as a DNS record. See note in K8S docs: "ExternalName accepts an IPv4 address string, but as a DNS name comprised of digits, not as an IP address. ExternalNames that resemble IPv4 addresses are not resolved by CoreDNS or ingress-nginx because ExternalName is intended to specify a canonical DNS name. To hardcode an IP address, consider headless services." (kubernetes.io/docs/concepts/services-networking/service/…) morallo's answer is correct.Wineglass

For the record, an alternate solution for those not checking the referenced github issue.

You can define an "external" Service in Kubernetes by not specifying any selector or ClusterIP. You also have to define a corresponding Endpoints object pointing to your external IP.

From the Kubernetes documentation:

{
    "kind": "Service",
    "apiVersion": "v1",
    "metadata": {
        "name": "my-service"
    },
    "spec": {
        "ports": [
            {
                "protocol": "TCP",
                "port": 80,
                "targetPort": 9376
            }
        ]
    }
}
{
    "kind": "Endpoints",
    "apiVersion": "v1",
    "metadata": {
        "name": "my-service"
    },
    "subsets": [
        {
            "addresses": [
                { "ip": "1.2.3.4" }
            ],
            "ports": [
                { "port": 9376 }
            ]
        }
    ]
}

With this, you can point your app inside the containers to my-service:9376 and the traffic should be forwarded to 1.2.3.4:9376.
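A quick sanity check that the Endpoints object was picked up by the Service (names from the example above; the output is abbreviated and may vary by kubectl version):

$ kubectl get endpoints my-service
NAME         ENDPOINTS      AGE
my-service   1.2.3.4:9376   1m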

Limitations:

  • The DNS name used needs to contain only letters, numbers, or dashes. You can't use multi-level names (something.like.this). This means you probably have to modify your app to point just to your-service, and not yourservice.domain.tld.
  • You can only point to a specific IP, not a DNS name. For that, you can define a kind of DNS alias with an ExternalName type Service.
Spruill answered 13/2, 2017 at 13:6 Comment(2)
Thanks. This info will help if hostname -> IP mapping is needed.Indulgence
This breaks TLS certificate validation - A cert issued by an external CA likely contains the external hostname, so if you point this to an internal IP, the certificate validation will fail.Wheelhorse

Using a ConfigMap seems a better way to set DNS, but it's a little heavy when adding just a few records (in my opinion). So I add records to /etc/hosts with a shell script executed by the Docker CMD.

for example:

Dockerfile

...(ignore)
COPY run.sh /tmp/run.sh
CMD bash /tmp/run.sh

run.sh

#!/bin/bash
# /etc/hosts expects the IP address first, then the hostname
echo "192.168.10.100 repl1.mongo.local" >> /etc/hosts
# some other commands...
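If the script can run more than once (container restarts, multiple processes), a hedged variant that only appends when the entry is missing avoids duplicate lines:

#!/bin/bash
grep -q "repl1.mongo.local" /etc/hosts || echo "192.168.10.100 repl1.mongo.local" >> /etc/hosts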

Notice: if you run MORE THAN ONE container in a pod, you have to add the script to each container, because Kubernetes starts containers in no guaranteed order and /etc/hosts may be overwritten by another container (one that starts later).

Tarrel answered 2/11, 2016 at 15:42 Comment(2)
Ah, multiple containers in a pod can overwrite /etc/hosts... it seems... Thanks for your insight.Indulgence
Mounting the hosts file from a shared volume might make more sense with multiple container pods (and hostAliases is the more modern way to do it). As is, it also requires the pod to be running as root, which is a security risk.Wheelhorse

I was running a k3s cluster and found a simple solution.

$ kubectl edit cm coredns -n kube-system

which opens this file:

apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          fallthrough in-addr.arpa ip6.arpa
        }
        hosts /etc/coredns/NodeHosts {
          ttl 60
          reload 15s
          fallthrough
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
        import /etc/coredns/custom/*.override
    }
    import /etc/coredns/custom/*.server
  NodeHosts: |
    10.0.0.248 k8s-master-node
    10.0.0.198 my-custom-hostname   # <- add my dns here
kind: ConfigMap

Save this file and run:

$ kubectl rollout restart -n kube-system deployment/coredns

and now I can resolve my-custom-hostname to 10.0.0.198 from any pod.
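One caveat: k3s manages this ConfigMap, so a direct edit may be reverted when the server restarts. The import /etc/coredns/custom/*.override lines in the default Corefile are there so customizations can live in a separate ConfigMap; on recent k3s releases that ConfigMap is named coredns-custom (the name and key below follow that convention; verify against your k3s version):

apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  customhosts.override: |
    hosts {
      10.0.0.198 my-custom-hostname
      fallthrough
    }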

Capable answered 29/8, 2023 at 17:59 Comment(1)
CoreDNS is the better alternative to kube-dns. Some providers (like GKE) do not provide CoreDNS as an option, which is why this question specifically asks about kube-dns.Wheelhorse
