AccessDeniedException when deleting a topic in Kafka on Windows
I just installed Kafka (from the Confluent Platform) on my Windows machine. I started ZooKeeper and Kafka, and creating topics, producing to them, and consuming from them all work. However, as soon as I delete a topic, Kafka crashes like this:

PS C:\confluent-4.1.1> .\bin\windows\kafka-topics.bat -zookeeper 127.0.0.1:2181 --topic foo --create --partitions 1 --replication-factor 1
Created topic "foo".
PS C:\confluent-4.1.1> .\bin\windows\kafka-topics.bat -zookeeper 127.0.0.1:2181 --topic foo --delete
Topic foo is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.

This is the crash output:

[2018-06-08 09:44:54,185] ERROR Error while renaming dir for foo-0 in log dir C:\confluent-4.1.1\data\kafka (kafka.server.LogDirFailureChannel)
java.nio.file.AccessDeniedException: C:\confluent-4.1.1\data\kafka\foo-0 -> C:\confluent-4.1.1\data\kafka\foo-0.cf697a92ed5246c0977bf9a279f15de8-delete
        at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
        at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
        at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:387)
        at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
        at java.nio.file.Files.move(Files.java:1395)
        at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:697)
        at kafka.log.Log$$anonfun$renameDir$1.apply$mcV$sp(Log.scala:579)
        at kafka.log.Log$$anonfun$renameDir$1.apply(Log.scala:577)
        at kafka.log.Log$$anonfun$renameDir$1.apply(Log.scala:577)
        at kafka.log.Log.maybeHandleIOException(Log.scala:1678)
        at kafka.log.Log.renameDir(Log.scala:577)
        at kafka.log.LogManager.asyncDelete(LogManager.scala:828)
        at kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:240)
        at kafka.cluster.Partition$$anonfun$delete$1.apply(Partition.scala:235)
        at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:250)
        at kafka.utils.CoreUtils$.inWriteLock(CoreUtils.scala:258)
        at kafka.cluster.Partition.delete(Partition.scala:235)
        at kafka.server.ReplicaManager.stopReplica(ReplicaManager.scala:347)
        at kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:377)
        at kafka.server.ReplicaManager$$anonfun$stopReplicas$2.apply(ReplicaManager.scala:375)
        at scala.collection.Iterator$class.foreach(Iterator.scala:891)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
        at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
        at kafka.server.ReplicaManager.stopReplicas(ReplicaManager.scala:375)
        at kafka.server.KafkaApis.handleStopReplicaRequest(KafkaApis.scala:205)
        at kafka.server.KafkaApis.handle(KafkaApis.scala:116)
        at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:69)
        at java.lang.Thread.run(Thread.java:748)
        Suppressed: java.nio.file.AccessDeniedException: C:\confluent-4.1.1\data\kafka\foo-0 -> C:\confluent-4.1.1\data\kafka\foo-0.cf697a92ed5246c0977bf9a279f15de8-delete
                at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:83)
                at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
                at sun.nio.fs.WindowsFileCopy.move(WindowsFileCopy.java:301)
                at sun.nio.fs.WindowsFileSystemProvider.move(WindowsFileSystemProvider.java:287)
                at java.nio.file.Files.move(Files.java:1395)
                at org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:694)
                ... 23 more
[2018-06-08 09:44:54,187] INFO [ReplicaManager broker=0] Stopping serving replicas in dir C:\confluent-4.1.1\data\kafka (kafka.server.ReplicaManager)
[2018-06-08 09:44:54,192] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions  (kafka.server.ReplicaFetcherManager)
[2018-06-08 09:44:54,193] INFO [ReplicaAlterLogDirsManager on broker 0] Removed fetcher for partitions  (kafka.server.ReplicaAlterLogDirsManager)
[2018-06-08 09:44:54,195] INFO [ReplicaManager broker=0] Broker 0 stopped fetcher for partitions  and stopped moving logs for partitions  because they are in the failed log directory C:\confluent-4.1.1\data\kafka. (kafka.server.ReplicaManager)
[2018-06-08 09:44:54,195] INFO Stopping serving logs in dir C:\confluent-4.1.1\data\kafka (kafka.log.LogManager)
[2018-06-08 09:44:54,197] ERROR Shutdown broker because all log dirs in C:\confluent-4.1.1\data\kafka have failed (kafka.log.LogManager)
[2018-06-08 09:44:54,198] INFO [ReplicaFetcherManager on broker 0] Removed fetcher for partitions  (kafka.server.ReplicaFetcherManager)

The user running Zookeeper and Kafka has full access rights to C:\confluent-4.1.1\data\kafka.

What am I missing?

Pentad answered 8/6, 2018 at 7:46 Comment(2)
Possible duplicate of #48114540. Delete all the logs from the ZooKeeper and kafka-logs folders under C:/tmp if Kafka is hosted on Windows. (Unzip)
You can try the fix in this pull request. (Rattan)
10

I know I'm late to the party, but keep in mind that even if you delete your topic manually or via some Kafka UI, and you delete all the Kafka logs, Kafka may still not start because of the state it syncs with ZooKeeper.

So make sure you also clean up the ZooKeeper state by deleting ZooKeeper's data directory.

Be aware that these actions are irreversible. Also, run your shell as Administrator.
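
A minimal PowerShell sketch of that cleanup, assuming the Confluent Platform layout from the question: C:\confluent-4.1.1\data\kafka for the broker's log dir and C:\confluent-4.1.1\data\zookeeper for ZooKeeper's dataDir (check your server.properties and zookeeper.properties for the real paths):

    # Run from the Confluent/Kafka home directory, as Administrator
    .\bin\windows\kafka-server-stop.bat
    .\bin\windows\zookeeper-server-stop.bat

    # Delete the broker's log dir and ZooKeeper's data dir (irreversible!)
    Remove-Item -Recurse -Force "C:\confluent-4.1.1\data\kafka"
    Remove-Item -Recurse -Force "C:\confluent-4.1.1\data\zookeeper"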

Jacelynjacenta answered 24/5, 2019 at 23:4 Comment(2)
Thanks a lot. This was the only one that worked for me. I deleted the Kafka logs first, and only after also deleting the ZK logs did it work. (Spectacled)
Could you add the ZooKeeper logs location? (Gallard)
5

I had a similar problem, and it happens only under Windows; see KAFKA-1194, which still applies to Kafka 1.1.0.

The only workaround available is to disable the log cleaner: log.cleaner.enable=false

For local development under Windows you can ignore this issue, since it does not occur on other operating systems.
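
A minimal PowerShell sketch of applying that workaround; the config path is a guess based on the question's Confluent Platform install, so point it at wherever your server.properties actually lives:

    # Hypothetical config path: substitute your actual server.properties
    $props = "C:\confluent-4.1.1\etc\kafka\server.properties"

    # Disable the log cleaner if the key exists, otherwise append it
    (Get-Content $props) -replace '^log\.cleaner\.enable=.*', 'log.cleaner.enable=false' | Set-Content $props
    if (-not (Select-String -Path $props -Pattern '^log\.cleaner\.enable=' -Quiet)) {
        Add-Content -Path $props -Value 'log.cleaner.enable=false'
    }

As the comments below note, this setting did not make the exception go away for the asker, and with the cleaner disabled your disks will eventually fill up.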

Dace answered 8/6, 2018 at 8:56 Comment(3)
I tried this, but it still crashes with the same exception. (Pentad)
If you disable the log cleaner, your disks will eventually fill up. (Estus)
Disabling the cleaner does not fix the exception. (Taddeo)
2

I had a similar problem after deleting a topic. I had to go to the topic's location and delete it manually, and that worked: /tmp/kafka-logs/[yourTopicName]

I am not sure if the same will work for you, as I am also new to Kafka.
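
For reference, a minimal PowerShell sketch of that manual delete, assuming the log dir from this answer (which on Windows typically resolves to something like C:\tmp\kafka-logs) and the topic "foo" from the question; stop the broker before deleting:

    # Hypothetical paths: substitute your log.dirs value and topic name
    Remove-Item -Recurse -Force "C:\tmp\kafka-logs\foo-*"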

Aquarium answered 2/11, 2018 at 16:11 Comment(1)
In my case I had to delete the ZK logs as well; then it worked. (Spectacled)
1
1. Stop the ZooKeeper and Kafka servers.
2. Go to the 'kafka-logs' folder; there you will see the Kafka topic folders. Delete the folder with the topic's name.
3. Go to the 'zookeeper-data' folder and delete the data inside it.
4. Start the ZooKeeper and Kafka servers again (see the sketch below).

Note: if you get a "The Cluster ID xxxxxxxxxx doesn't match stored clusterId" error, you have to delete all the files in Kafka's log dir.
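
A minimal PowerShell sketch of those steps, assuming a plain Apache Kafka install run from its home directory with the 'kafka-logs' and 'zookeeper-data' folders named as in this answer (your log.dirs and dataDir settings may point elsewhere):

    # 1. Stop both servers
    .\bin\windows\kafka-server-stop.bat
    .\bin\windows\zookeeper-server-stop.bat

    # 2 + 3. Delete the topic folders and the ZooKeeper data (topic "foo" as an example)
    Remove-Item -Recurse -Force ".\kafka-logs\foo-*"
    Remove-Item -Recurse -Force ".\zookeeper-data\*"

    # 4. Start them again, each in its own console window, ZooKeeper first
    .\bin\windows\zookeeper-server-start.bat .\config\zookeeper.properties
    .\bin\windows\kafka-server-start.bat .\config\server.properties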

Oppilate answered 21/6, 2021 at 11:25 Comment(0)
0

Problem: I had a similar problem after deleting a topic. ZooKeeper started successfully, but while running Kafka I got the issue mentioned above.

Analysis: In my case, I had redirected the Kafka logs to a new folder location, C:\Tools\kafka_2.13-2.6.0\kafka-test-logs, but forgot to create the kafka-test-logs folder. In that situation Kafka creates a folder from the configured path with the backslashes stripped (they are treated as escapes in the .properties file), e.g. Toolskafka_2.13-2.6.0kafka-test-logs, so even deleting that logs folder did not help in my case.

Solution: First I stopped ZooKeeper, created the kafka-test-logs folder I had forgotten, deleted the auto-created default Kafka logs folder, and then restarted ZooKeeper and the Kafka server. That was all it took for me.
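
A minimal PowerShell sketch of that fix, using the path from this answer; the forward-slash form of log.dirs sidesteps the backslash-escaping problem in the properties file (edit the existing log.dirs line rather than adding a duplicate):

    # Create the log directory before pointing Kafka at it
    New-Item -ItemType Directory -Force -Path "C:\Tools\kafka_2.13-2.6.0\kafka-test-logs" | Out-Null

    # Then, in server.properties, use forward slashes:
    #   log.dirs=C:/Tools/kafka_2.13-2.6.0/kafka-test-logs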

Thank you!! Cheers and Happy Coding.

Telegony answered 30/11, 2020 at 8:53 Comment(1)
This is basically like creating a new cluster. It doesn't avoid downtime or give you true persistence. (Estus)
0

I was also facing the same issue and resolved it by downloading Kafka version 2.8.1 from this link.

1. Then change the zookeeper.properties file in the config folder to

    dataDir=C:/kafka/zookeeper

2. and the server.properties file in the config folder to

    log.dirs=C:/kafka/kafka-logs


Make sure your Kafka folder is extracted to the C:/ drive, or else amend the paths accordingly in the config files.

Dumpling answered 28/9, 2021 at 7:30 Comment(1)
You may still hit an AccessDeniedException when the Kafka logs rotate due to retention. See confluent.io/blog/set-up-and-run-kafka-on-windows-linux-wsl-2 (Estus)
0

I changed the ZooKeeper and Kafka log folders to locations outside the Kafka home folder, as below:

Kafka home: \kafka
ZooKeeper logs: \kafkalogs\zookeeper
Kafka logs: \kafkalogs

It's working for me.
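
A minimal PowerShell sketch of that layout; the drive letter and exact folder names are assumptions based on this answer, and the property edits are shown as comments since those lines live in your config files:

    # Create the log locations outside the Kafka home folder
    New-Item -ItemType Directory -Force -Path "C:\kafkalogs\zookeeper", "C:\kafkalogs" | Out-Null

    # Then point the configs at them:
    #   zookeeper.properties:  dataDir=C:/kafkalogs/zookeeper
    #   server.properties:     log.dirs=C:/kafkalogs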

Drying answered 23/7, 2023 at 15:41 Comment(0)
