TimeoutException: Timeout expired while fetching topic metadata Kafka
I have been trying to deploy Kafka with the Schema Registry locally using Kubernetes. However, the logs of the Schema Registry pod show this error message:

ERROR Server died unexpectedly:  (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain:51)
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata

What could be the reason for this behavior? In order to run Kubernetes locally, I use Minikube version v0.32.0 with Kubernetes version v1.13.0.

My Kafka configuration:

apiVersion: v1
kind: Service
metadata:
  name: kafka-1
spec:
  ports:
    - name: client
      port: 9092
  selector:
    app: kafka
    server-id: "1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-1
spec:
  selector:
    matchLabels:
      app: kafka
      server-id: "1"
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka
        server-id: "1"
    spec:
      volumes:
        - name: kafka-data
          emptyDir: {}
      containers:
        - name: server
          image: confluent/kafka:0.10.0.0-cp1
          env:
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zookeeper-1:2181
            - name: KAFKA_ADVERTISED_HOST_NAME
              value: kafka-1
            - name: KAFKA_BROKER_ID
              value: "1"
          ports:
            - containerPort: 9092
          volumeMounts:
            - mountPath: /var/lib/kafka
              name: kafka-data
---
apiVersion: v1
kind: Service
metadata:
  name: schema
spec:
  ports:
    - name: client
      port: 8081
  selector:
    app: kafka-schema-registry
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-schema-registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-schema-registry
  template:
    metadata:
      labels:
        app: kafka-schema-registry
    spec:
      containers:
        - name: kafka-schema-registry
          image: confluent/schema-registry:3.0.0
          env:
            - name: SR_KAFKASTORE_CONNECTION_URL
              value: zookeeper-1:2181
            - name: SR_KAFKASTORE_TOPIC
              value: "_schema_registry"
            - name: SR_LISTENERS
              value: "http://0.0.0.0:8081"
          ports:
            - containerPort: 8081

ZooKeeper configuration:

apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  ports:
    - name: client
      port: 2181
  selector:
    app: zookeeper
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-1
spec:
  ports:
    - name: client
      port: 2181
    - name: followers
      port: 2888
    - name: election
      port: 3888
  selector:
    app: zookeeper
    server-id: "1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper-1
spec:
  selector:
    matchLabels:
      app: zookeeper
      server-id: "1"
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "1"
    spec:
      volumes:
        - name: data
          emptyDir: {}
        - name: wal
          emptyDir:
            medium: Memory
      containers:
        - name: server
          image: elevy/zookeeper:v3.4.7
          env:
            - name: MYID
              value: "1"
            - name: SERVERS
              value: "zookeeper-1"
            - name: JVMFLAGS
              value: "-Xmx2G"
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
          volumeMounts:
            - mountPath: /zookeeper/data
              name: data
            - mountPath: /zookeeper/wal
              name: wal
Spermatophore answered 18/1, 2019 at 13:9 Comment(3)
By the way, the confluent/ Docker images are deprecated, and confluentinc/ are preferred. As mentioned previously, are you having issues using Helm charts? docs.confluent.io/current/installation/installing_cp/…Concernment
I don't have issues with Helm charts. I need to deploy custom Kafka solutions without Helm; that is why I am trying to do so.Spermatophore
I'm not seeing anything that looks very custom, though. Kafka is really only installed in one way, and maybe the config values are changed a bit, but any custom apps built around Kafka + Schema Registry can be defined in separate YAML files.Concernment
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata

can happen when the client is trying to connect to a broker that expects SSL connections but the client config does not specify:

security.protocol=SSL 
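For illustration, a minimal client configuration sketch for an SSL-only broker; the truststore path and password are placeholders, not part of the original answer:

# client.properties — connecting to a broker that only accepts SSL
security.protocol=SSL
ssl.truststore.location=/etc/kafka/secrets/client.truststore.jks
ssl.truststore.password=changeit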
Basilicata answered 21/3, 2019 at 10:5 Comment(4)
@AdrianMitev I was using Spring Boot with Kafka, so I just ended up using the default Spring Boot application.properties to set up the connection. My error came from trying to create a @Configuration class to create the connection, and that gave me the timeout error.Kronstadt
This solved it for me. The broker (which I don't control) was configured to still use port 9092, but with SSL enabled. I had assumed it was plaintext. Thanks, Anders!Handspring
Thank you! In my case, I had misconfigured a service in Kubernetes that expected an optional environment variable containing an API key if connecting via SSL, and my secret was mis-named. Because I then pointed it at an SSL endpoint, I ended up with this timeout. Though I'm not familiar with the Kafka protocol, I expect the client was waiting for the server to say hello, and the server was expecting the client to perform an SSL handshake.Newsmonger
For Kafka running in Docker Compose, add the environment property: KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: SSL:SSL.Accusatorial

I once fixed this issue by restarting my machine, but it happened again and I didn't want to restart, so I fixed it with this property in the server.properties file:

advertised.listeners=PLAINTEXT://localhost:9092
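advertised.listeners is the address the broker hands back to clients in the metadata response, so it must be resolvable from the client's side. A minimal single-broker sketch of the relevant server.properties entries; the 0.0.0.0 bind address is an assumption, not part of the original answer:

# server.properties — bind address vs. the address returned to clients
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://localhost:9092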
Fishbolt answered 2/10, 2019 at 21:27 Comment(0)

Fetching Kafka topic metadata can fail for two reasons:

Reason 1: The bootstrap server is not accepting your connections. This can be caused by a proxy issue, such as a VPN, or by server-level security groups.

Reason 2: A mismatch in the security protocol, where the broker may expect SASL_SSL while the client uses SSL, or the reverse, or one side may use PLAINTEXT.
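As a sketch of what a matching client configuration for a SASL_SSL broker could look like; the mechanism and the JAAS credentials are placeholders, not from the answer:

# client.properties for a broker expecting SASL_SSL
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="alice" password="secret";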

Celestecelestia answered 14/7, 2020 at 7:35 Comment(1)
Thank you, this is why I love Stack Overflow; it helped me identify a misconfigured broker in my config.Massotherapy

I faced the same issue even though all the SSL config was in place and the topics were created. After long research, I enabled the Spring debug logs. The internal error was org.springframework.jdbc.CannotGetJdbcConnectionException. Another thread mentioned that a Spring Boot and Kafka dependency mismatch can cause the timeout exception, so I upgraded Spring Boot from 2.1.3 to 2.2.4. Now there is no error and the Kafka connection is successful. Might be useful to someone.

Hbomb answered 6/7, 2021 at 7:26 Comment(0)

For others who might face this issue: it may happen because the topics have not been created on the Kafka broker. So make sure to create on the server the appropriate topics mentioned in your codebase.
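One way to sidestep this during local development, as a sketch: let the broker auto-create topics on first use. Note that auto-creation is often disabled deliberately in shared clusters:

# server.properties — broker-side auto-creation of topics (default is true)
auto.create.topics.enable=true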

Moonrise answered 2/3, 2021 at 12:31 Comment(0)

org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata

In my case, the value of Kafka.consumer.stream.host in the application.properties file was not correct; this value should be in the right format for the environment.
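Since Kafka.consumer.stream.host is an application-specific property, only its shape can be illustrated; assuming it expects a hostname reachable from the app, the entry would look something like this (the value is a placeholder):

# application.properties — the host value must resolve in your environment
Kafka.consumer.stream.host=kafka-1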

Lu answered 23/2, 2022 at 12:9 Comment(0)

ZooKeeper session timeouts occur due to long garbage-collection pauses. I was facing the same issue locally, so check the server.properties file in your config folder and increase the value below:

zookeeper.connection.timeout.ms=18000
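For context, a sketch of where this sits among the broker's ZooKeeper settings in server.properties; the zookeeper.connect value is borrowed from the question's service name and the session-timeout line is an assumption based on the answer's mention of session timeouts:

# server.properties — ZooKeeper connection settings
zookeeper.connect=zookeeper-1:2181
zookeeper.connection.timeout.ms=18000
zookeeper.session.timeout.ms=18000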

Peltry answered 20/10, 2022 at 8:14 Comment(0)
