Unable to start Confluent Schema Registry
I have installed Confluent Platform on an Ubuntu 16.04 machine. After configuring ZooKeeper, Kafka, and KSQL, I started the platform and saw the following output:

 root@DESKTOP-DIB3097:/opt/kafkafull/confluent-5.1.0/bin# ./confluent start
 This CLI is intended for development only, not for production
 https://docs.confluent.io/current/cli/index.html
 Using CONFLUENT_CURRENT: /tmp/confluent.HUlCltYT
 Starting zookeeper
 zookeeper is [UP]
 Starting kafka
 kafka is [UP]
 Starting schema-registry
 schema-registry is [UP]
 Starting kafka-rest
 kafka-rest is [UP]
 Starting connect
 connect is [UP]
 Starting ksql-server
 ksql-server is [UP]
 Starting control-center
 control-center is [UP]

Everything appears to be up, but when I check the status of the platform, I see that Schema Registry, Connect, and Control Center are down.

I checked the Schema Registry logs and found the following:


ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: Error initializing kafka store while initializing schema registry
        at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:210)
        at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.initSchemaRegistry(SchemaRegistryRestApplication.java:61)
        at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:72)
        at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:39)
        at io.confluent.rest.Application.createServer(Application.java:201)
        at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:41)
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreInitializationException: io.confluent.kafka.schemaregistry.storage.exceptions.StoreException: Failed to write Noop record to kafka store.
        at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:137)
        at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:208)
        ... 5 more
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreException: Failed to write Noop record to kafka store.
        at io.confluent.kafka.schemaregistry.storage.KafkaStore.getLatestOffset(KafkaStore.java:422)
        at io.confluent.kafka.schemaregistry.storage.KafkaStore.waitUntilKafkaReaderReachesLastOffset(KafkaStore.java:275)
        at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:135)
        ... 6 more
Caused by: java.util.concurrent.TimeoutException: Timeout after waiting for 60000 ms.
        at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:78)
        at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:30)
        at io.confluent.kafka.schemaregistry.storage.KafkaStore.getLatestOffset(KafkaStore.java:417)
        ... 8 more

Monolatry answered 30/1, 2019 at 12:56 Comment(4)
Did you try restarting the individual service after ZooKeeper and Kafka were up? – Remanent
Thanks for the quick response. I tried restarting the services individually. They start up at first, but when I check the status they have stopped again, throwing the errors above. One more question: to start the Connect service, is Schema Registry mandatory? – Monolatry
If you use confluent start, then yes, Schema Registry is required; but if Connect is started manually without the Avro converters, such as from an Apache Kafka download, then it is not. – Chlor
Thanks for the response. Our goal is to use the Kafka Connect JDBC source connector, and to use Connect I believe we need Schema Registry, so I am working on that. – Monolatry
In $CONFLUENT_HOME/etc/kafka, you'll see server.properties.

Uncomment the following lines and update them as shown:

  1. listeners=PLAINTEXT://0.0.0.0:9092

  2. advertised.listeners=PLAINTEXT://localhost:9092

  3. listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

In $CONFLUENT_HOME/etc/schema-registry, you'll see schema-registry.properties; open it and update the listener (note that Schema Registry's default port is 8081, not Kafka's 9092, which the broker already occupies):

  1. listeners=http://0.0.0.0:8081
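Assuming the stock Confluent Platform layout, the edited files would end up containing fragments like the following (the paths and the 8081 Schema Registry port are the shipped defaults; adjust if yours differ):

```properties
# $CONFLUENT_HOME/etc/kafka/server.properties
# Bind on all interfaces, but advertise an address clients can actually reach.
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://localhost:9092
listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# $CONFLUENT_HOME/etc/schema-registry/schema-registry.properties
# Schema Registry listens on its own port, 8081 by default.
listeners=http://0.0.0.0:8081
```

The distinction matters because `listeners` controls which interface the broker binds to, while `advertised.listeners` is the address it hands back to clients; a client that can reach the bound socket but not the advertised address will still time out, which matches the producer timeout in the stack trace above.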
In answered 20/3, 2022 at 20:27 Comment(0)
I think I've found the answer.

In the Kafka configuration, add the property host.name=host_ip_address, which will act as the Kafka broker host. Then, in every configuration file that contains a Kafka bootstrap property, change it to the corresponding host name or IP address, as shown below:

bootstrap.servers=192.168.0.193:9092

Example: in the Schema Registry configuration, I changed the property below from localhost to the respective IP address:

kafkastore.bootstrap.servers=PLAINTEXT://192.168.0.193:9092

In the other files, check that bootstrap.servers=192.168.0.193:9092 points at the correct address, and verify that the Schema Registry configuration file does as well.

(You can inspect and compare the configuration files actually in use under the CONFLUENT_CURRENT directory in /tmp.)

After changing all the configuration files, the services came up and stayed running.
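A quick way to confirm that every file agrees on the broker address is to grep the bootstrap-related properties across the config directory. A minimal sketch, using two throwaway example files (in practice you would grep $CONFLUENT_HOME/etc instead; the file names below are only illustrative):

```shell
# Create two sample config files to illustrate the check.
mkdir -p /tmp/cfg-check
cat > /tmp/cfg-check/schema-registry.properties <<'EOF'
kafkastore.bootstrap.servers=PLAINTEXT://192.168.0.193:9092
EOF
cat > /tmp/cfg-check/connect-distributed.properties <<'EOF'
bootstrap.servers=192.168.0.193:9092
EOF

# Any file still pointing at localhost (or an unreachable host) shows up here.
grep -rn "bootstrap" /tmp/cfg-check/
```

Running the same grep against the CONFLUENT_CURRENT directory in /tmp surfaces the generated copies the CLI actually started the services with, which is what the comment above about comparing files refers to.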

Monolatry answered 31/1, 2019 at 7:51 Comment(0)