Unable to connect to the server: dial tcp: lookup <Server Location>: no such host
17

24

I'm beginning to build out a Kubernetes cluster for our applications. We are using Azure for cloud services, so my K8s cluster is built with AKS. The AKS cluster was created through the Azure portal. It has one node, and I am attempting to create a pod with a single container to deploy to that node. Where I am currently stuck is connecting to the AKS cluster from PowerShell. The steps I have taken are:

az login (followed by logging in)
az account set --subscription <subscription id>
az aks get-credentials --name <cluster name> --resource-group <resource group name>
kubectl get nodes

After entering the last line, I am left with the error: Unable to connect to the server: dial tcp: lookup : no such host

I've also gone down a few other rabbit holes found on SO and other forums, but quite honestly, I'm looking for a straightforward way to access my cluster before complicating things further.

Edit: In the end, I deleted the resource I was working with and spun up a new AKS cluster, and am now having no trouble connecting. Thanks for the suggestions though!

Coin answered 27/8, 2020 at 18:41 Comment(2)
Can you fire "az aks show" and post the output (remember to mask out any sensitive info)? learn.microsoft.com/en-us/cli/azure/…Thruway
Also make sure you are not connected to any VPN (like your company's) or proxy.Thruway
12

As of now, the AKS Run Command feature adds a fourth option for connecting to private clusters, extending the three options @Darius posted earlier:

  1. Use the AKS Run Command feature.

Below are some copy/paste-ready examples: a simple command, and one that requires a file. It is possible to chain multiple commands with &&.

az aks command invoke \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --command "kubectl get pods -n kube-system"

az aks command invoke \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --command "kubectl apply -f deployment.yaml -n default" \
  --file deployment.yaml

In case you get a (ResourceGroupNotFound) error, try adding the subscription too:

az aks command invoke \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --subscription <subscription> \
  --command "kubectl get pods -n kube-system"

You can also configure the default subscription:

az account set -s <subscription>
Mouthful answered 16/9, 2021 at 12:33 Comment(0)
8

Unable to connect to the server: dial tcp: lookup : no such host

The error occurs because this is a private cluster: the Private Cluster option was enabled when the AKS cluster was created. You need to disable this option.

kubectl is the Kubernetes command-line client. It connects to the cluster from outside, and a private cluster's API server cannot be reached externally.

Disable the private cluster option and the connection will succeed.

Note: this option cannot be changed after cluster creation; you need to delete the cluster and recreate it.

Mistiemistime answered 25/9, 2020 at 5:47 Comment(2)
Totally agree about the error's origin, but I can't agree with the solution. If you don't want the API accessible from the public internet, then you should not make the cluster public. If you don't care about API access, then yes, you can make the cluster public, but in other cases check the other answers.Arturo
I agree with @Arturo. Although this solution seems to work, for security reasons it shouldn't be done; it is not recommended for production environments. Please consider the other solutions.Cherycherye
4

Posting this as Community Wiki for better visibility.

Solution provided by OP:

Delete the resource and spin up a new AKS cluster.

For details, you can check the docs: Create a resource group, Create an AKS cluster, and resource create.

Another option worth trying:

kubectl config use-context <cluster-name>

as proposed in a similar GitHub issue.
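Before switching contexts, it helps to check what az aks get-credentials actually wrote into the kubeconfig. A minimal sketch using standard kubectl commands (the context name is a placeholder and usually matches the cluster name):

```shell
# List the contexts merged into ~/.kube/config; the current one is starred
kubectl config get-contexts

# Switch to the AKS cluster's context
kubectl config use-context <cluster-name>

# Confirm which API server address kubectl will actually dial
kubectl config view --minify --output 'jsonpath={..server}'
```

If the server address printed at the end is empty or malformed, the "no such host" error points at a broken kubeconfig entry rather than a network problem.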

Embrasure answered 27/8, 2020 at 18:41 Comment(2)
I tried use-context as one of the "rabbit holes" I went down. Had no luck.Coin
Had no luck with this solution unfortunately, and ensured I didn't have a VPN active when attempting to change the context.Clarkson
3

Gaurav's answer pretty much sums it up. In fact, you can refer to the documentation, which states that

The API server endpoint has no public IP address. To manage the API server, you'll need to use a VM that has access to the AKS cluster's Azure Virtual Network (VNet). There are several options for establishing network connectivity to the private cluster.

To connect to a private cluster, there are only three methods:

  • Create a VM in the same Azure Virtual Network (VNet) as the AKS cluster.
  • Use a VM in a separate network and set up Virtual network peering. See the section below for more information on this option.
  • Use an Express Route or VPN connection.
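For the peering option, a rough sketch with the az CLI. All resource group, VNet, and subscription names here are placeholders; peering must be created in both directions, and the cluster's private DNS zone still needs a virtual network link to the jumpbox VNet for name resolution (see the answer below about DNS zone links):

```shell
# Peer the jumpbox VNet to the AKS VNet
az network vnet peering create \
  --name jumpbox-to-aks \
  --resource-group myResourceGroup \
  --vnet-name jumpbox-vnet \
  --remote-vnet /subscriptions/<sub>/resourceGroups/<aks-rg>/providers/Microsoft.Network/virtualNetworks/<aks-vnet> \
  --allow-vnet-access

# And the reverse direction, from the AKS VNet back to the jumpbox VNet
az network vnet peering create \
  --name aks-to-jumpbox \
  --resource-group <aks-rg> \
  --vnet-name <aks-vnet> \
  --remote-vnet /subscriptions/<sub>/resourceGroups/myResourceGroup/providers/Microsoft.Network/virtualNetworks/jumpbox-vnet \
  --allow-vnet-access
```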
Rigdon answered 2/2, 2021 at 1:6 Comment(1)
Can't we use a private endpoint (PE) from another VNet instead of peering?Spartan
2

It is more convenient to use the Az module from desktop PowerShell than the Azure portal for management operations. Microsoft has added a lot of new cmdlets for managing AKS and Service Fabric clusters.

Please take a look at Az.Aks.

In your case:

Connect-AzAccount

Get-AzAksNodePool
Advent answered 28/8, 2020 at 12:13 Comment(2)
I'm using az for a lot of what I'm running, but the portal has a lot of configuration placed in a convenient GUI, and while I'm all for using command line when it's the stronger option, I also believe that using tools that speed up the process is a good practice.Coin
I'm all for this. Thanks Oleh!Clarkson
2

I was also facing this issue. I'm using a private cluster and have a machine (bastion) in a different VNet with peering enabled, but I still was not able to connect to the cluster (though I was able to SSH and telnet to the machine).

Then I added a virtual network link in the private DNS zone for the VNet where the bastion host resides. That worked for me; I'm now able to access the cluster.
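The virtual network link described above can be sketched with the az CLI roughly as follows. The zone name, resource group, and VNet ID are placeholders; for a private AKS cluster the zone typically lives in the node resource group and follows the privatelink.<region>.azmk8s.io pattern:

```shell
# Link the bastion's VNet to the cluster's private DNS zone so the
# API server's private FQDN resolves from that VNet
az network private-dns link vnet create \
  --resource-group <node-resource-group> \
  --zone-name privatelink.<region>.azmk8s.io \
  --name bastion-vnet-link \
  --virtual-network <bastion-vnet-resource-id> \
  --registration-enabled false
```

Registration is disabled because the bastion VNet only needs to resolve the zone, not register its own records in it.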

Wolframite answered 8/9, 2021 at 11:48 Comment(1)
Had the same error; your answer resolved my issue.Chaldea
1

When using a private cluster, the Kubernetes API endpoint is only accessible from the cluster's VNet. Connecting via VPN unfortunately does not work painlessly, since Azure private DNS is not (yet) available to VPN clients.

However, it is possible to point kubectl directly at the IP address of the API endpoint, but that requires ignoring certificate errors, since the certificate does not cover the raw IP.

Edit your .kube/config and change the server address to the IP, then call kubectl with something like this:

kubectl get all --all-namespaces --insecure-skip-tls-verify
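Instead of hand-editing .kube/config, the server address can also be switched with kubectl itself. A sketch, where the cluster entry name and API server IP are placeholders:

```shell
# See what the cluster entry is called in your kubeconfig
kubectl config get-clusters

# Point that entry straight at the API server's IP address
kubectl config set-cluster <cluster-entry-name> --server=https://<api-server-ip>:443

# TLS verification must then be skipped, since the cert is issued for the FQDN
kubectl get all --all-namespaces --insecure-skip-tls-verify
```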
Nich answered 26/10, 2021 at 20:34 Comment(0)
1

You can simply append --admin to the command, as seen below.

az aks get-credentials --name <cluster name> --resource-group <resource group name> --admin
Eidson answered 4/5, 2022 at 10:59 Comment(0)
0

Usually, those steps are all that is required to connect. Check that a firewall is not blocking any traffic. Also, verify the subscription ID and other identifiers again and make sure you are using the correct values. If the issue still persists, I recommend asking Azure support to help you out.

Tayyebeb answered 27/8, 2020 at 19:45 Comment(0)
0

I had the same issue when running kubectl from Jenkins. For me it was a permissions issue on ~/.kube/config; giving the jenkins user access to it as well solved the problem.
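A sketch of that fix, assuming Jenkins runs as a jenkins user with its home at /var/lib/jenkins (both the path and the user name are assumptions; adjust for your setup). Copying the kubeconfig is safer than opening up permissions on the original file:

```shell
# Give the jenkins user its own copy of the kubeconfig
sudo mkdir -p /var/lib/jenkins/.kube
sudo cp ~/.kube/config /var/lib/jenkins/.kube/config
sudo chown -R jenkins:jenkins /var/lib/jenkins/.kube
# Kubeconfig holds credentials, so keep it readable by the owner only
sudo chmod 600 /var/lib/jenkins/.kube/config
```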

Countryfied answered 6/11, 2020 at 5:25 Comment(0)
0

For me, this issue occurred when I was trying to connect a new Linux user to my Elastic Kubernetes Service (EKS) cluster in AWS.

I set up a new user called jenkins-user, then tried to run the command below to get pods:

kubectl get pods

I then ran into the error below:

Unable to connect to the server: dial tcp: lookup 23343445ADFEHGROGMFDFMG.sk1.eu-east-2.eks.amazonaws.com on 198.74.83.506:53: no such host

Here's how I solved it:

The issue was that I had not set the context for the Kubernetes cluster in the kube config file of the new Linux user (jenkins-user).

The fix was to install the aws-cli for this new user (into the new user's home directory) and then run aws configure to set up the necessary credentials. Since I already had the aws-cli set up for other users on the Linux system, I simply copied the ~/.aws directory from an existing user to the jenkins-user home directory using the command:

sudo cp -R /home/existing-user/.aws/ /home/jenkins-user/

Next, I had to create a context for the Kubernetes configuration, which creates a new ~/.kube/config file for the jenkins-user, using the command below:

aws eks --region my-aws-region update-kubeconfig --name my-cluster-name

Next, I checked the kube config file to confirm that my context had been added, using the command:

sudo nano ~/.kube/config

This time when I ran the command below, it was successful:

kubectl get pods

Resources: Create a kubeconfig for Amazon EKS

Lennyleno answered 7/12, 2021 at 18:24 Comment(0)
0

You can run kubectl commands on a private AKS cluster using az aks command invoke. Refer to this for more info.

As for why you might want to run private AKS clusters, read this

Voluptuary answered 7/12, 2021 at 20:46 Comment(1)
Hello. Thanks for your answer. You can improve your answers in future by quoting "the most relevant part of an important link, in case the external resource is unreachable or goes permanently offline" (you can read more here stackoverflow.com/help/how-to-answer).Outpour
0

I also hit this after restarting my Kubernetes cluster, but it turned out I was just not waiting long enough; after about 10 minutes the kubectl commands started working again.

Applecart answered 6/5, 2022 at 22:2 Comment(0)
0

If you are using AWS with kops, then this might help you:

mkdir autoscaler
cd autoscaler/
git clone https://github.com/kubernetes/autoscaler.git

Create a file called ig-policy.json with the following contents:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "autoscaling:DescribeAutoScalingGroups",
                "autoscaling:DescribeAutoScalingInstances",
                "autoscaling:DescribeLaunchConfigurations",
                "autoscaling:SetDesiredCapacity",
                "autoscaling:TerminateInstanceInAutoScalingGroup"
            ],
            "Resource": "*"
        }
    ]
}

Then you need to create the IAM policy:

aws iam create-policy --policy-name ig-policy --policy-document file://ig-policy.json

And attach the newly created IAM policy to the cluster's node role:

aws iam attach-role-policy --policy-arn arn:aws:iam::005935423478650:policy/ig-policy --role-name nodes.testing.k8s.local

Then update the cluster

kops update cluster testing.k8s.local --yes

Then run

kops rolling-update cluster
Calvary answered 9/5, 2022 at 9:28 Comment(0)
0

Creating a private cluster is not an easy journey, but it has beautiful views, so I encourage anyone to get there. I did it all in Terraform, so some names may differ slightly from the portal/Azure CLI.

And this is how I did it:

  1. A private DNS zone, named e.g. privatelink.westeurope.azmk8s.io
  2. The VNet where AKS will be placed
  3. The virtual network from which you want to access AKS (let's call it vnet-access)
  4. A private AKS cluster (private_dns_zone_id set to the DNS zone from point 1)
  5. A virtual network link in the private DNS zone, pointing to the VNet from point 3
  6. Peering between the networks from points 2 and 3.

This should allow any machine in vnet-access to first resolve the DNS name, and then to access the cluster.
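The cluster-creation step (point 4) can be sketched with the az CLI as well; the Terraform attribute private_dns_zone_id corresponds to --private-dns-zone. Resource names and IDs below are placeholders:

```shell
# Create a private AKS cluster wired to a pre-created private DNS zone
# and placed into an existing subnet
az aks create \
  --resource-group myResourceGroup \
  --name myAKSCluster \
  --enable-private-cluster \
  --private-dns-zone <private-dns-zone-resource-id> \
  --vnet-subnet-id <aks-subnet-resource-id> \
  --generate-ssh-keys
```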

Yet, if you want to get there from your local machine, that is another setup. Fortunately, Microsoft has a tutorial for it here.


If you find that something is still not working - put the error in comment and I'll try to adapt my answer to cover this.

Arturo answered 3/6, 2022 at 16:10 Comment(1)
Do you have success for private endpoint instead of vnet peering ?Spartan
0

I faced the same issue and resolved it by deleting the .kube folder under C:\Users\<your_username> and then restarting the Kubernetes cluster.

Nerti answered 6/11, 2022 at 5:18 Comment(0)
-1

After spending many hours, I realized that this may be an internal bug.

I ran the command below and it worked:

az aks get-credentials --resource-group resource-group-name --name aks-cluster-name
Astonish answered 9/4, 2023 at 18:40 Comment(0)

© 2022 - 2024 — McMap. All rights reserved.