We have a Kafka cluster with 4 brokers and some topics with a replication factor of 1 and 10 partitions.
At some point, 2 of our 4 Kafka brokers failed, so now only 2 brokers are left serving the same topics.
When I run the command
./kafka-topics.sh --zookeeper localhost:2181 --describe
I get this:
Topic:outcoming-notification-error-topic PartitionCount:10 ReplicationFactor:1 Configs:
Topic: outcoming-error-topic Partition: 0 Leader: 2 Replicas: 2 Isr: 2
Topic: outcoming-error-topic Partition: 1 Leader: 3 Replicas: 3 Isr: 3
Topic: outcoming-error-topic Partition: 2 Leader: 4 Replicas: 4 Isr: 4
Topic: outcoming-error-topic Partition: 3 Leader: 1 Replicas: 1 Isr: 1
Topic: outcoming-error-topic Partition: 4 Leader: 2 Replicas: 2 Isr: 2
Topic: outcoming-error-topic Partition: 5 Leader: 3 Replicas: 3 Isr: 3
Topic: outcoming-error-topic Partition: 6 Leader: 4 Replicas: 4 Isr: 4
Topic: outcoming-error-topic Partition: 7 Leader: 1 Replicas: 1 Isr: 1
Topic: outcoming-error-topic Partition: 8 Leader: 2 Replicas: 2 Isr: 2
Topic: outcoming-error-topic Partition: 9 Leader: 3 Replicas: 3 Isr: 3
How can I delete Leaders 2...4? Or do I need to delete the partitions assigned to those leaders, and if so, how?
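For reference, kafka-reassign-partitions.sh also accepts a hand-written plan via --reassignment-json-file. The sketch below would move every partition of outcoming-error-topic onto the surviving brokers; the file name reassign.json and the assumption that brokers 1 and 2 are the survivors are mine. Note that with a replication factor of 1, data that lived only on the dead brokers is gone, and reassigning those partitions may stall because the new replica has no live leader to copy from.

cat > reassign.json <<'EOF'
{
  "version": 1,
  "partitions": [
    {"topic": "outcoming-error-topic", "partition": 0, "replicas": [1]},
    {"topic": "outcoming-error-topic", "partition": 1, "replicas": [2]},
    {"topic": "outcoming-error-topic", "partition": 2, "replicas": [1]},
    {"topic": "outcoming-error-topic", "partition": 3, "replicas": [2]},
    {"topic": "outcoming-error-topic", "partition": 4, "replicas": [1]},
    {"topic": "outcoming-error-topic", "partition": 5, "replicas": [2]},
    {"topic": "outcoming-error-topic", "partition": 6, "replicas": [1]},
    {"topic": "outcoming-error-topic", "partition": 7, "replicas": [2]},
    {"topic": "outcoming-error-topic", "partition": 8, "replicas": [1]},
    {"topic": "outcoming-error-topic", "partition": 9, "replicas": [2]}
  ]
}
EOF
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file reassign.json --execute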
UPD: We also use kafka_exporter to monitor Kafka with Prometheus. After the 2 brokers went down, the kafka_exporter log shows this error:
level=error msg="Cannot get oldest offset of topic outcoming-error-topic partition 10: kafka server: In the middle of a leadership election, there is currently no leader for this partition and hence it is unavailable for writes." source="kafka_exporter.go:296"
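To confirm which partitions currently have no leader, kafka-topics.sh can filter the describe output. A minimal check, using the --unavailable-partitions option (it only prints partitions whose leader is not available):

./kafka-topics.sh --zookeeper localhost:2181 --describe --unavailable-partitions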
I create this file:

cat topicmove.json
{
  "topics": [
    {"topic": " outcoming-notification-error-topic"}
  ],
  "version": 1
}

Then I run this command:
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --topics-to-move-json-file topicmove.json --broker-list "1,2" --generate

and get only this:

Current partition replica assignment
{"version":1,"partitions":[]}
Proposed partition reassignment configuration
{"version":1,"partitions":[]}
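The empty "Current"/"Proposed" output means the tool matched no partitions. That is consistent with the topic name in topicmove.json: it contains a leading space, and the partition rows in the describe output above call the topic outcoming-error-topic rather than outcoming-notification-error-topic. A corrected file, assuming the name from the describe rows is the real one, would look like this:

cat topicmove.json
{
  "topics": [
    {"topic": "outcoming-error-topic"}
  ],
  "version": 1
}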