upstream connect error or disconnect/reset before headers. reset reason: connection failure. Spring Boot and Java 11

I'm having a problem migrating my pure Kubernetes app to an Istio-managed one. I'm using Google Cloud Platform (GCP), Istio 1.4, Google Kubernetes Engine (GKE), Spring Boot and Java 11.

I had the containers running in a pure GKE environment without a problem. Now I have started migrating my Kubernetes cluster to Istio, and since then I get the following message when I try to access the exposed service.

upstream connect error or disconnect/reset before headers. reset reason: connection failure

This error message is really generic. I found a lot of different problems with the same error message, but none of them were related to my problem.

Below is my Istio version:

client version: 1.4.10
control plane version: 1.4.10-gke.5
data plane version: 1.4.10-gke.5 (2 proxies)

Below are my yaml files:

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    account: tree-guest
  name: tree-guest-service-account
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: tree-guest
    service: tree-guest
  name: tree-guest
spec:
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  selector:
    app: tree-guest
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: tree-guest
    version: v1
  name: tree-guest-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tree-guest
      version: v1
  template:
    metadata:
      labels:
        app: tree-guestaz
        version: v1
    spec:
      containers:
      - image: registry.hub.docker.com/victorsens/tree-quest:circle_ci_build_00923285-3c44-4955-8de1-ed578e23c5cf
        imagePullPolicy: IfNotPresent
        name: tree-guest
        ports:
        - containerPort: 8080
      serviceAccount: tree-guest-service-account
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: tree-guest-gateway
spec:
  selector:
    istio: ingressgateway # use istio default controller
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: tree-guest-virtual-service
spec:
  hosts:
    - "*"
  gateways:
    - tree-guest-gateway
  http:
    - match:
        - uri:
            prefix: /v1
      route:
        - destination:
            host: tree-guest
            port:
              number: 8080

To apply the yaml files I used the following command:

kubectl apply -f <(istioctl kube-inject -f ./tree-guest.yaml)

Below is the output of istioctl proxy-status after deploying the application:

NAME                                                 CDS      LDS      EDS      RDS      PILOT                         VERSION
istio-ingressgateway-6674cc989b-vwzqg.istio-system   SYNCED   SYNCED   SYNCED   SYNCED   istio-pilot-ff4489db8-2hx5f   1.4.10-gke.5
tree-guest-v1-774bf84ddd-jkhsh.default               SYNCED   SYNCED   SYNCED   SYNCED   istio-pilot-ff4489db8-2hx5f   1.4.10-gke.5

If someone has a tip about what is going wrong, please let me know. I've been stuck on this problem for a couple of days.

Thanks.

Prothorax answered 14/8, 2020 at 7:45 Comment(6)
Can you describe your Gateway and VirtualService objects and check whether all the config went through as it is in the yaml? I would say the indentation is wrong, so the right config is not going through, but sometimes the indentation is right and it still fails, so I'm not sure. Another idea would be closing /v1 with /v1/. – Enabling
Can you check if there are any issues with the Istio proxy? Use istioctl proxy-status. – Eichler
@suren Thanks for your answer... The Gateway and VirtualService made it into the Istio-generated yaml file, and I tried changing /v1 to /v1/ but I'm still getting the same error. – Prothorax
@PiotrMalec I updated the question with the output of the proxy-status command. Is it correct? Shouldn't it have just one line? – Prothorax
Hi @Prothorax, do you still need help with this? The problem you have is a 503, which is a very common bug in Istio. I have written an answer with a few things to check when this problem occurs, could you have a look? About istioctl proxy-status: your application should be listed there, and it's not. Could you add the output of kubectl get pods? – Stein
I solved it. In my case the yaml file was wrong. I reviewed it and the problem is now solved. Thank you guys. – Prothorax

As @Victor mentioned, the problem here was the wrong yaml file.

I solved it. In my case the yaml file was wrong. I reviewed it and the problem is now solved. Thank you guys. – Victor
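One mismatch that is visible in the posted yaml, and a plausible cause of the connection failure: the Service selects pods with app: tree-guest, while the Deployment's pod template is labelled app: tree-guestaz, so the Service ends up with no endpoints for the gateway to reach. A corrected pod template, shown purely as a sketch of that kind of fix, would look like this:

  template:
    metadata:
      labels:
        app: tree-guest   # must match the Service selector (the posted yaml has tree-guestaz)
        version: v1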

If you're looking for yaml samples, I would suggest taking a look at the Istio GitHub samples.


Since the 503 upstream connect error or disconnect/reset before headers. reset reason: connection failure occurs very often, I have put together a little troubleshooting answer: other questions with the 503 error that I have encountered over several months, along with their answers, useful information from the Istio documentation, and the things I would check first.

Examples with the 503 error:

Common causes of 503 errors from the Istio documentation:

Few things I would check first:

  • Check the Service port names; Istio can route the traffic correctly if it knows the protocol, and the name should follow the <protocol>[-<suffix>] convention mentioned in the Istio documentation (see the sketch after this list).
  • Check mTLS; if there are any problems caused by mTLS, they usually result in a 503 error.
  • Check that Istio itself works; I would recommend deploying the Bookinfo sample application and checking that it works as expected.
  • Check whether your namespace is injected, with kubectl get namespace -L istio-injection.
  • If a VirtualService using subsets arrives before the DestinationRule where those subsets are defined, the Envoy configuration generated by Pilot refers to non-existent upstream pools. This results in HTTP 503 errors until all configuration objects are available to Pilot (a matching DestinationRule is included in the sketch after this list).
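For the first and last points, here is a minimal sketch reusing the tree-guest names from the question; the http port name matches the question's Service, while the DestinationRule and its v1 subset are purely illustrative and not part of the asker's actual fix:

apiVersion: v1
kind: Service
metadata:
  name: tree-guest
spec:
  selector:
    app: tree-guest
  ports:
  - name: http            # <protocol>[-<suffix>] naming, so Istio treats this as plain HTTP
    port: 8080
    targetPort: 8080
---
# The DestinationRule defining a subset must reach Pilot before a
# VirtualService routes to that subset, otherwise Envoy is left with a
# non-existent upstream pool and returns HTTP 503.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: tree-guest
spec:
  host: tree-guest
  subsets:
  - name: v1
    labels:
      version: v1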
Stein answered 28/9, 2020 at 5:57 Comment(0)

I landed exactly here with very similar symptoms.

In my case, however, I had to switch the pod listen address from 172.0.0.1 to 0.0.0.0, which solved my issue.
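If the application is a Spring Boot service like the one in the question, the listen address can be set in application.yml. A minimal sketch, assuming the standard server.address and server.port properties and the 8080 port from the question's manifests:

server:
  address: 0.0.0.0   # listen on all interfaces instead of a single address
  port: 8080

Binding to a single specific address can leave the Envoy sidecar unable to reach the application, which surfaces as the same upstream connect error.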

Silden answered 17/1, 2023 at 8:36 Comment(0)

I am posting here because this is the top search result for the error I was receiving:

upstream connect error or disconnect/reset before headers. retried and the latest reset reason: protocol error

The reason for the error was the following.

I had deployed:

  • BackendConfig: to set up a health check
  • Ingress: to define which service to expose and how
  • Service: to attach the needed annotations to an already existing Service of type NodePort

What was happening was that the Service did not include the annotation that coerces the protocol to HTTPS:

resource "kubernetes_annotations" "redpanda_http_annotations" {
  api_version = "v1"
  kind        = "Service"
  metadata {
    name      = "redpanda-external-nodeport-service"
    namespace = "redpanda"
  }  
  annotations = {
    "cloud.google.com/backend-config" = <<-EOF
{"ports": {"8083":"redpanda-http-backendcfg"}}
EOF
# I was missing this annotation
    "cloud.google.com/app-protocols" = <<-EOF
{"redpanda-external-nodeport-service":"HTTPS"}
EOF
  }
}

Reference:

Allpowerful answered 10/4, 2024 at 14:54 Comment(0)
