Why can't my Kubernetes service find an endpoint? [closed]
I am running a kubernetes cluster on coreos.

I have a kubernetes replication controller that works fine. It looks like this:

id: "redis-controller"
kind: "ReplicationController"
apiVersion: "v1beta3"
metadata:
  name: "rediscontroller"
  lables:
    name: "rediscontroller"
spec:
  replicas: 1
  selector:
    name: "rediscontroller"
  template:
    metadata:
      labels:
        name: "rediscontroller"
    spec:
      containers:
        - name: "rediscontroller"
          image: "redis:3.0.2"
          ports:
            - name: "redisport"
              hostPort: 6379
              containerPort:  6379
              protocol: "TCP"

But I have a service for said replication controller's pods that looks like this:

id: "redis-service"
kind: "Service"
apiVersion: "v1beta3"
metadata:
  name: "redisservice"
spec:
  ports:
    - protocol: "TCP"
      port: 6379
      targetPort: 6379
  selector:
    name: "redissrv"
  createExternalLoadBalancer: true
  sessionAffinity: "ClientIP"

the journal for kube-proxy has this to say about the service:

Jul 06 21:18:31 core-01 kube-proxy[6896]: E0706 21:18:31.477535    6896 proxysocket.go:126] Failed to connect to balancer: failed to connect to an endpoint.
Jul 06 21:18:41 core-01 kube-proxy[6896]: E0706 21:18:41.353425    6896 proxysocket.go:81] Couldn't find an endpoint for default/redisservice:: missing service entry

From what I understand, I have the service pointing at the right pod and the right ports, but am I wrong?

UPDATE 1

I noticed another possible issue. After fixing the things Alex mentioned, I saw that other services, ones that use websockets, also can't find an endpoint. Does the service need an HTTP endpoint to poll?

Beaty answered 6/7, 2015 at 21:27 Comment(1)
You have a typo: lables instead of labels.Linoleum
6

A few things look funny to me, with the first two being most important:

  1. It looks like the service doesn't exist. Are you sure it was created properly? Does it show up when you run kubectl get svc?
  2. The selector on your service doesn't look right. The selector should be key-value label pairs that match those in the replication controller's template. The label in your rc template is name: "rediscontroller", so you should use that as your service selector as well.
  3. What's the id field at the start of each object? It doesn't look like that's a valid field in v1beta3.
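Putting points 2 and 3 together, a corrected Service for the question's manifests might look like the sketch below. It keeps the question's v1beta3 schema, drops the `id` field, and changes the selector to match the rc template's `name: "rediscontroller"` label:

```yaml
kind: "Service"
apiVersion: "v1beta3"
metadata:
  name: "redisservice"
spec:
  ports:
    - protocol: "TCP"
      port: 6379
      targetPort: 6379
  selector:
    name: "rediscontroller"   # must match the pod template's labels, not the service's own name
  sessionAffinity: "ClientIP"
```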
Kingbolt answered 6/7, 2015 at 22:11 Comment(4)
1. It does show up on kubectl get svc 2. Good catch 3. Another good catchBeaty
The error persists even after changing the selector to match the replication controller.Beaty
That's strange, since the error message in the kube-proxy logs indicates that it doesn't know about the service. Do you see any endpoints if you run kubectl get endpoints redisservice?Kingbolt
I fixed part of it: the no-endpoint issue was that the pod would die, and if another wasn't being brought up, the service had no endpoint, which makes sense. The service-not-existing thing is weird; I'm still getting it after the proxy says the services are stale.Beaty
31

One extra thing to check for:

Endpoints are only created if your deployment is considered healthy. If you have defined your readinessProbe incorrectly (mea culpa) or the deployment does not react to it correctly, an endpoint will not be created.
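As an illustration, a readiness probe like the sketch below is what gates endpoint creation: until it passes, the pod is excluded from the service's endpoints. This is a hypothetical redis example using a TCP check; the names and timings are illustrative, not from the question:

```yaml
# Pod template fragment (modern schema shown; the question predates this)
containers:
  - name: redis
    image: redis:3.0.2
    ports:
      - containerPort: 6379
    readinessProbe:
      tcpSocket:
        port: 6379          # pod joins the endpoints only once this check passes
      initialDelaySeconds: 5
      periodSeconds: 10
```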

Flatways answered 31/8, 2017 at 1:22 Comment(0)
23

You can try inspecting the endpoints with kubectl get ep and kubectl describe ep. If you see pod IPs next to NotReadyAddresses in the endpoints description, there's a problem with the pod that's causing it not to be ready, and it will not be registered in the endpoints.

If the pod isn't ready it can be because of a failing health/liveness probe.

The 'selector' on your service (kubectl get services, kubectl describe service myServiceName) should match a label on the pods (kubectl get pods, kubectl describe po myPodName). E.g. selector app=myAppName, pod label app=myAppName. That's how the service determines which endpoints it should be trying to connect to.
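The matching rule here is a simple subset test: a pod backs a service when every key/value pair in the service's selector appears among the pod's labels. A small illustrative sketch of that rule (not Kubernetes code; names are made up):

```python
def selector_matches(selector: dict, pod_labels: dict) -> bool:
    """A service selects a pod when every selector pair appears in the pod's labels."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

selector = {"name": "rediscontroller"}

# Extra pod labels are fine; only the selector's pairs must be present.
print(selector_matches(selector, {"name": "rediscontroller", "tier": "cache"}))  # True

# The question's original mismatch: selector name "redissrv" vs pod label "rediscontroller".
print(selector_matches({"name": "redissrv"}, {"name": "rediscontroller"}))  # False
```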

Comitative answered 13/10, 2020 at 2:48 Comment(3)
Very helpful explaining! My issue was related to a label mismatch between deployment and svcTrinitrocresol
Very helpful I was using the name as label selector. The docs can be more specific about the "labels"Baer
@LeapHawk: everyone struggles when trying to make things work in k8s (especially ingress-service-pod links), which sadly proves that the docs suck.Apollyon
4

For your particular case, make sure the service's targetPort matches a containerPort you declared in your Pod spec. See details: http://kubernetes.io/docs/user-guide/debugging-services/#my-service-is-missing-endpoints
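For example, that check amounts to making sure these two values line up; a sketch based on the question's manifests (in current API versions, targetPort may also reference the container port by its name):

```yaml
# Pod template fragment: the container listens here
ports:
  - name: "redisport"
    containerPort: 6379
---
# Service fragment: targetPort must equal the containerPort (6379),
# or reference it by name ("redisport")
ports:
  - port: 6379
    targetPort: "redisport"
```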

Otherwise, please step through the official K8s service debugging guide:

http://kubernetes.io/docs/user-guide/debugging-services/

It has a step-by-step checklist of things to check, from the Service to DNS to networking to kube-proxy.

Palaeontology answered 15/7, 2016 at 15:23 Comment(3)
Thanks, but I already solved the issue a long time ago, and this was pre 1.0Beaty
Cool! What was the root cause? Was it just because the pre 1.0 kubernetes stability issue? Care to elaborate more?Palaeontology
Well mismatch selector as Alex Robinson pointed out and pre-1.0 stability.Beaty
0

To add to the top-voted answer: Kubernetes has this interesting behavior (bug?).

Scenario 1: If you have multiple containers in a pod and one of them terminates with "reason: completed" (which should be perfectly fine), the pod is no longer considered "Ready", and services will no longer be able to reach it as an endpoint.

Scenario 2: If you have multiple running containers in a pod, the pod will be "Ready", but services will not reach all of the containers.

Bottom line: don't put multiple containers in a pod if you want to expose those containers as endpoints.

Harebrained answered 29/1, 2023 at 20:23 Comment(0)
