Upstream sent too big header while reading response header from upstream in Keycloak
I am trying to perform OIDC authentication/authorization against a Keycloak server from an Android app I'm building.

I am getting the following error in the NGINX log, which results in a 502 in my application:

2019/08/15 00:29:04 [error] 31921#31921: *64410338 upstream sent too big header while reading response header from upstream, client: 192.168.4.61, server: stage.example.com, request: "GET /auth/realms/master/protocol/openid-connect/auth?client_id=example-mobile-android&redirect_uri=http%3A%2F%2Flocalhost%3A53978%2F%23%2Flogin&state=a627edff-c1a2-43d3-8c6e-e5635bcc2252&response_mode=fragment&response_type=id_token%20token&scope=openid&nonce=69967773-36ba-49b2-8dd8-a31fd36f412b&prompt=none HTTP/1.1", upstream: "http://192.168.4.147:8080/auth/realms/master/protocol/openid-connect/auth?client_id=example-mobile-android&redirect_uri=http%3A%2F%2Flocalhost%3A53978%2F%23%2Flogin&state=a627edff-c1a2-43d3-8c6e-e5635bcc2252&response_mode=fragment&response_type=id_token%20token&scope=openid&nonce=69967773-36ba-49b2-8dd8-a31fd36f412b&prompt=none", host: "www.example.com", referrer: "http://localhost:53978/"

I have tried setting this:

proxy_buffer_size          128k;
proxy_buffers              4 256k;
proxy_busy_buffers_size    256k;

as well as disabling proxy buffering entirely (proxy_buffering off;).

What could be going on? Should I expand my buffers further? Is there some other error I am not catching?

Revelationist answered 15/8, 2019 at 0:32

For this error, the one to blame is proxy_buffer_size.

I have a detailed writeup on it here. Essentially, if the buffer NGINX allocates for reading the upstream response header is too small, it fails with exactly this error.

If you can reproduce the request in its entirety, you can measure the response header size and calculate the required value for this parameter, e.g.:

curl -s -w '%{size_header}' -o /dev/null https://example.com

Either way, you will be raising it from the default value (one memory page, i.e. 4k or 8k), and you should couple this with increasing proxy_busy_buffers_size and proxy_buffers as well.

If you can't determine the size of response headers/body, then yes - keep increasing things gradually until it fixes the issue.

Do not just set buffers to arbitrarily high values, because those buffers are per-connection and will make for higher RAM use.

For that same reason, it's also best to create a separate location in NGINX with adjusted buffer values, so that larger buffers are used only there, without affecting the overall RAM usage by NGINX.
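For illustration, a minimal sketch of that approach; the upstream address and /auth/ prefix are taken from the question's log, and the sizes are examples, not recommendations:

# larger buffers only where Keycloak's big response headers arrive
location /auth/ {
    proxy_pass http://192.168.4.147:8080;

    # the header buffer must fit the largest response header the upstream sends
    proxy_buffer_size       32k;
    proxy_buffers           8 32k;
    proxy_busy_buffers_size 64k;
}

# everything else keeps the small defaults
location / {
    proxy_pass http://192.168.4.147:8080;
}

This way only connections hitting the Keycloak paths pay the per-connection memory cost of the larger buffers.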

P.S. disabling proxy buffering won't help, because NGINX always buffers response headers :)

Detradetract answered 17/8, 2019 at 23:20

For me, @danila-vershinin's answer is the proper one.

However, I would like to add my two cents for those trying to configure this on Kubernetes. I got it working on Keycloak 20.0.3 like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keycloak
  namespace: keycloak
  annotations:
    nginx.ingress.kubernetes.io/proxy-buffers-number: "4"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "16k"
spec:
  tls:
    - hosts:
      - my-keycloak.example.com
  rules:
  - host: my-keycloak.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: keycloak
            port: 
              number: 80

Mind the annotations section.

Also, as far as I understood, proxy_busy_buffers_size is now calculated from the size of proxy_buffers, so it does not need to be set separately.

I hope it helps.

Synonymy answered 27/1, 2023 at 21:53

I updated Keycloak from v20.0.1 to v20.0.2 and got this HTTP 502 error.

I adjusted NGINX with these parameters:

proxy_buffer_size       128k;
proxy_buffers           4 256k;
proxy_busy_buffers_size 256k;

location / {
    proxy_pass http://localhost:8080;
    proxy_read_timeout 90;

    # proxy headers
    proxy_set_header X-Real-IP         $remote_addr;
    proxy_set_header X-Forwarded-For   $proxy_protocol_addr;
    proxy_set_header X-Scheme          $scheme;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Host              $host;

    # websockets
    proxy_http_version 1.1;
}

and it works now.

Lax answered 28/12, 2022 at 10:21

It depends on what was added to the access token. I have seen a token of ~1 MB; that was an extreme case, where the token contained a lot of user groups/roles for authorization. Try configuring a bigger buffer size.
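As a quick sanity check, something along these lines shows how big things actually are; the $TOKEN variable is a placeholder for a captured access token, and the URL is taken from the question's log:

# size of the raw access token, in bytes
printf '%s' "$TOKEN" | wc -c

# total size of the response headers, as in the curl example above
curl -s -w '%{size_header}' -o /dev/null \
  'https://stage.example.com/auth/realms/master/protocol/openid-connect/auth'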

Pikeman answered 15/8, 2019 at 5:18
Hrm, but the iOS app doesn't have this problem, and it is theoretically doing a similar action. What might cause this for the Android app? What buffer sizes do you recommend? 1M? 2M? Can the proxy buffer be any arbitrary value? – Revelationist

I added a 0 to all of my items above. Still the 502 error when authenticating, but it is not showing in the nginx logs. I am totally confused. – Revelationist

You can remove the nginx component if you don't need it; Keycloak can serve requests directly. Also check your iOS app: maybe it requests only the id_token, not the access token, so it doesn't reach the infrastructure's header size limit. – Pikeman

How would I change my request in the Android app to only need the id_token? Would that involve switching the auth flow? – Revelationist

Use the parameter response_type=id_token rather than response_type=id_token%20token (see the note after this thread). – Pikeman

Do I change that in Keycloak or in my consumer app? – Revelationist

That is an app setting. The app can choose what kind of response it wants to receive from Keycloak. – Pikeman

Beautiful, thank you. I'll go look up the Keycloak JavaScript API and see what I need to change. – Revelationist
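For reference, the change discussed in this thread is just the response_type query parameter of the authorization request; a before/after based on the URL in the question's log:

# requests both an ID token and an access token (larger response)
response_type=id_token%20token

# requests only an ID token (smaller response)
response_type=id_token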

In Kubernetes this was enough for me:

  annotations:
    # ..
    nginx.ingress.kubernetes.io/proxy-buffer-size: "8k"

nginx.ingress.kubernetes.io/proxy-buffers-number is by default already set to 4.

Xylia answered 20/8 at 12:51
