Kafka: unable to start Kafka - process cannot access file 00000000000000000000.timeindex

Kafka enthusiast here, I need a little help. I am unable to start Kafka because the file 00000000000000000000.timeindex is being used by another process. Below are the logs:

[2017-08-09 22:49:22,811] FATAL [Kafka Server 0], Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
java.nio.file.FileSystemException: \installation\kafka_2.11-0.11.0.0\log\test-0\00000000000000000000.timeindex: The process cannot access the file because it is being used by another process.

        at sun.nio.fs.WindowsException.translateToIOException(WindowsException.java:86)
        at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:97)
        at sun.nio.fs.WindowsException.rethrowAsIOException(WindowsException.java:102)
        at sun.nio.fs.WindowsFileSystemProvider.implDelete(WindowsFileSystemProvider.java:269)
        at sun.nio.fs.AbstractFileSystemProvider.deleteIfExists(AbstractFileSystemProvider.java:108)
        at java.nio.file.Files.deleteIfExists(Files.java:1165)
        at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:311)
        at kafka.log.Log$$anonfun$loadSegmentFiles$3.apply(Log.scala:272)
        at scala.collection.TraversableLike$WithFilter$$anonfun$foreach$1.apply(TraversableLike.scala:733)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
        at scala.collection.TraversableLike$WithFilter.foreach(TraversableLike.scala:732)
        at kafka.log.Log.loadSegmentFiles(Log.scala:272)
        at kafka.log.Log.loadSegments(Log.scala:376)
        at kafka.log.Log.<init>(Log.scala:179)
        at kafka.log.Log$.apply(Log.scala:1580)
        at kafka.log.LogManager$$anonfun$loadLogs$2$$anonfun$5$$anonfun$apply$12$$anonfun$apply$1.apply$mcV$sp(LogManager.scala:172)
        at kafka.utils.CoreUtils$$anon$1.run(CoreUtils.scala:57)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)
[2017-08-09 22:49:22,826] INFO [Kafka Server 0], shutting down (kafka.server.KafkaServer)
Lamellar answered 9/8, 2017 at 19:59 Comment(2)
Refer here for an alternate solution: https://mcmap.net/q/372205/-accessdeniedexception-when-deleting-a-topic-on-windows-kafka Threonine
You can try to use a fix in this pull request.Tardiff

I had the same issue. The only way I could fix it was to delete the C:\tmp\kafka-logs directory. After that I was able to start the Kafka server.

You will lose your data and the offset will start from 0.
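
For reference, a minimal PowerShell sketch of that cleanup, assuming the default C:\tmp\kafka-logs location and that you run it from the Kafka installation directory:

# stop the broker first so it releases its file handles
.\bin\windows\kafka-server-stop.bat
# then remove the directory that log.dirs points to (all topic data is lost)
Remove-Item -Recurse -Force C:\tmp\kafka-logs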

Earle answered 15/8, 2017 at 19:49 Comment(12)
Is there any other solution, one without losing the data?Gleeful
I've tried deleting the logs like 3 times now since I got this error, but I'm still facing the same issue :(Groth
I got the same error. Why do we want to delete the logs data every time? Is there a permanent solution?Lavelle
This problem constantly happens for me. Having to delete everything and start over fresh every day or so is really annoying. Any solution that doesn't involve truncating everything?Mertz
Like Melwyn, I did this many times and am still getting the same error.Yearround
Deleting the log folder worked for me on Windows 10.Winkelman
This works; it won't delete all your data, the topics will remain, but your messages are gone forever.Neomineomycin
Upvote. I can't believe this still remains the best answer & solution to date.Reitman
This is a temporary solution, but I am looking for a long-term one. I face the same issue every 38 hours: the Kafka broker goes down, and that requires clearing the logs and restarting the Kafka server/ZooKeeper again. My Kafka is on Windows 2012 R2. Any pointers would be appreciated.Embayment
@Earle thanks buddy, it really worked. Once I deleted that folder my server started.Ghostwrite
Deleting logs has a chance of losing data; the correct solution should be: "Don't run Kafka Server on Windows"Ecumenicist
I changed the logs folder to one outside the Kafka home folder and it worked for me.Centonze

This seems to be a known issue that gets triggered on Windows after 168 hours have elapsed since you last published a message. Apparently this issue is being tracked and worked on here: KAFKA-8145

There are 2 workarounds for this:

  1. As suggested by others here, you can clean up the directory containing your log files (or take a backup and have log.dirs point to another directory). However, this way you will lose your data.
  2. Go to your server.properties file and make the following changes to it. Note: this is a temporary solution, to allow your consumers to come up and consume any remaining data so that there is no data loss. After you have got all the data you need, you should revert to step 1 to clean up your data folder once and for all.

Update the property below to the prescribed value:

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=-1

Add this property at the end of your properties file.

log.cleaner.enable=false

Essentially what you are doing is telling the Kafka broker not to bother deleting old messages, and that the age of all messages is now infinite, i.e. they will never be deleted. As you can see, this is obviously not a desirable state, and hence you should only do it in order to consume whatever you need, and then clean up your files/directory (step 1). The JIRA issue mentioned above is being worked on, and as per this comment it looks like it may soon be resolved.
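
A hedged example of draining what is left once the broker is up with those settings, using the console consumer bundled with Kafka (the topic name test here is a placeholder):

bin\windows\kafka-console-consumer.bat --bootstrap-server localhost:9092 --topic test --from-beginning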

Tapetum answered 13/4, 2019 at 11:56 Comment(1)
This is the best answer in my opinion as it addresses the root cause. The workarounds are just delaying the inevitable. The issue still isn't fixed and looks like it won't be any time soon as officially, Windows isn't a supported OS. There are some patches out there including a Microsoft fork but I haven't tried them so cannot comment. The alternative could be to try a Linux container on Windows perhaps.Scrip

All the answers give you the same solution of removing the data, not how to prevent the problem.

Actually, you just need to stop Kafka and ZooKeeper properly.

You just have to run these two commands, in order:

kafka-server-stop.sh

zookeeper-server-stop.sh

Then the next time you start, you will see no problems.
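
Since the question's error occurred on Windows, note that the equivalent stop scripts ship under bin\windows of the Kafka installation:

bin\windows\kafka-server-stop.bat
bin\windows\zookeeper-server-stop.bat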

Misconduct answered 6/12, 2018 at 14:23 Comment(5)
I did kafka-server-stop.sh and zkServer.sh stop and then restarted it using zkServer.sh start and kafka-server-start.sh and it worked for me.Dichotomize
Did not work. In fact, the first command has no effect since Kafka server has already crashed.Insurgence
@swdon this is to prevent the crash before it happens, not to solve it afterwards.Misconduct
why the attitude? for me, it's a bug with running kafka on windows. has nothing to do with not having properly shut down kafka.Ringlet
@TonySchwartz it is a problem for everything when you kill the process instead of shutting it down properly. Try unplugging your computer instead of shutting it down properly, and good luck with that.Misconduct
java.nio.file.FileSystemException: \installation\kafka_2.11-0.11.0.0\log\test-0\00000000000000000000.timeindex: The process cannot access the file because it is being used by another process.

00000000000000000000.timeindex is being used by another process. So you can kill that process using the following commands:

$ ps aux | grep zookeeper
$ sudo kill -9 <PID> 

Here <PID> is ZooKeeper's process ID.


The problem is not fixed yet. It is described here: https://issues.apache.org/jira/browse/KAFKA-1194

There are 2 temporary workarounds, given by ephemeral972:

  1. [Recommended] You need to clean up the broker ids in the ZooKeeper path /brokers/ids/[]. Use the zk-cli tool's delete command to clean up the paths (see the sketch after this list). Start your brokers and verify they register with the coordinator.
  2. The other way of resolving this is to change your broker id in the Kafka server config and restart the broker. However, this could corrupt your partitions and data, and is not recommended.
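
A minimal sketch of the first workaround using the zookeeper-shell tool that ships with Kafka, assuming ZooKeeper runs on localhost:2181 and the stale registration is for broker id 0 (adjust to your setup):

bin/zookeeper-shell.sh localhost:2181
ls /brokers/ids
delete /brokers/ids/0
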
Fernando answered 15/8, 2017 at 20:7 Comment(0)

I got this error too while running Kafka on Windows. You can avoid it by changing the default config in the server.properties file.

Please follow these steps:

  1. Go to the config folder of the Kafka installation.
  2. Open the server.properties file.
  3. You will see the config for the comma separated list of directories under which to store log files:

log.dirs=/tmp/logs/kafka

  4. Change the value of log.dirs to some other value, for example:

log.dirs=/tmp/logs/kafka1

  5. Now start your kafka-server again.

This should solve the issue.

Ensphere answered 9/2, 2018 at 6:27 Comment(2)
And after stopping the kafka-server once more and trying to start it again? Change the log.dirs to /tmp/log/kafka2? ;-)Teazel
I got the same error on my Windows 10. Data is lost if I change the logs directory every time. Is there a permanent solution?Lavelle

I faced the same issue, and restarting Kafka, ZooKeeper and then Windows didn't work for me. What worked for me is below (don't reproduce this in production mode; I'm not sure it will work fine, but it could be acceptable on a DEVELOPMENT Kafka server).

On a dev Kafka server: go to the affected directory (for instance \installation\kafka_2.11-0.11.0.0\log\test-0) and delete all files other than:

00000000000000000000.index
00000000000000000000.log
00000000000000000000.timeindex
leader-epoch-checkpoint

Then restart Kafka. It was OK for me; after restarting (ZooKeeper, then Kafka), Kafka added a .snapshot file and everything was fine.
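
If you want to script that cleanup, here is a hedged PowerShell sketch, run from inside the partition directory; the keep-list matches the files above:

# keep only the base segment files and the leader-epoch checkpoint
$keep = '00000000000000000000.index', '00000000000000000000.log', '00000000000000000000.timeindex', 'leader-epoch-checkpoint'
# remove everything else in the current directory
Get-ChildItem | Where-Object { $_.Name -notin $keep } | Remove-Item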

Inorganic answered 3/4, 2019 at 16:1 Comment(1)
This approach works for me, but I'm not sure if we lose any data with it?Samaveda

Solution: in Windows, delete the logs manually, then restart the kafka-server or broker.

To find the log storage location, go to server.properties and look under the "Log Basics" section for the comma separated list of directories under which to store log files:

log.dirs=/This Location/
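
To locate that setting quickly from a Windows command prompt, a small sketch (assuming you are in the Kafka installation directory):

findstr "log.dirs" config\server.properties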

Ramification answered 4/8, 2020 at 22:15 Comment(0)

I followed the approach suggested by @SkyWalker.

Follow the below steps:

  1. Open zkCli and get everything inside broker. See the screenshot below.

    (screenshot: list items inside broker)

  2. Go inside topics and press tab twice. You will get all the topics listed here.

    (screenshot: list all the topics)

  3. Delete each topic then (see the sketch below).

    (screenshot: delete each topic)
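
A hedged sketch of those zkCli steps; the exact commands vary by ZooKeeper version (rmr was replaced by deleteall in newer releases), and test is a placeholder topic name:

ls /brokers/topics
rmr /brokers/topics/test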

Anachronous answered 7/11, 2017 at 9:21 Comment(0)

I faced the same problem and this is how I resolved it.

Change the log.dirs path in server.properties: log.dirs=C:\kafka\logs

Another solution which worked: delete all files from the configured log directory, e.g. kafkalogs\test-0

Cedell answered 3/6, 2018 at 17:55 Comment(1)
Use formatting tools to make your post more readable. A code block should look like a code block. Use bold/italics if needed, and an image should be added as an image, not as a link.Actor

I had a similar issue on Windows, partly because I had deleted a couple of topics (since I found no other way to flush just the messages from those topics). This is what worked for me, as sketched below.

Change log.dirs in config/server.properties to a new location
Change dataDir in config/zookeeper.properties to a new location
Restart ZooKeeper and Kafka
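
A hedged sketch of those two edits; the new paths are placeholders, not values from the original answer:

# config/server.properties
log.dirs=C:/kafka/new-kafka-logs

# config/zookeeper.properties
dataDir=C:/kafka/new-zookeeper-data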

The above will obviously work when you have no topics other than the ones you deleted for ZooKeeper/Kafka to cater for. If there are other topics you still want to retain configuration for, I believe the solution proposed by @Sumit Das might work. I had issues starting zkCli on my Windows machine, and I had only the topics I deleted on my brokers, so I could safely do the above steps and get away with it.

Odor answered 24/7, 2018 at 6:19 Comment(1)
I only had to stop Kafka and ZooKeeper, go to the temporary folder, and force-delete both the kafka-logs and zookeeper folders. It means you lose everything, but you will be able to start Kafka (after ZooKeeper) again!Scutum

I had configured the tmp path as below (in the file ./config/server.properties):

log.dirs=d:\tmp\kafka-logs

Then I changed the backslashes '\' to forward slashes '/':

log.dirs=d:/tmp/kafka-logs

and created the folder, which solved the problem.
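
This works because the backslash is an escape character in Java .properties files, so unescaped backslashes get mangled when the file is parsed (the \t in d:\tmp even turns into a tab). If you prefer Windows-style paths, escaping each backslash should also work, for example:

log.dirs=d:\\tmp\\kafka-logs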

Nymphomania answered 22/5, 2019 at 7:21 Comment(0)

For me it worked after renaming the data folder in log.dirs=D:/kafka_2.13-2.4.0/data/kafka from kafka to kafka1.

I also set log.retention.hours=1 to avoid the issue recurring.

Conk answered 9/3, 2020 at 1:50 Comment(0)

This is for Windows:

Kill the processes running on ports 9092 (Kafka) and 2181 (ZooKeeper) using the commands below in PowerShell.

netstat -aon | findstr "yourPortNumberHere"

taskkill /PID <pid here> /F

Run the above commands for both ports.
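
For example, assuming netstat reported a (hypothetical) PID of 4567 listening on port 9092:

netstat -aon | findstr "9092"
taskkill /PID 4567 /F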

Block answered 13/9, 2020 at 7:53 Comment(0)

If this happens during startup of a new installation, it is a (known) problem for Kafka on Windows. The issue has not been resolved yet, but there has been feedback on GitHub. There is a PR; I compiled the patched file, added it to my Kafka installation, and that made it work.

Albaalbacete answered 24/2, 2023 at 20:12 Comment(0)

I got the same issue on a Mac, but adding the @DirtiesContext annotation to the test class solved it. Basically, this annotation makes sure that the context is cleaned and reset between the different tests.

  import org.springframework.boot.test.context.SpringBootTest;
  import org.springframework.kafka.test.context.EmbeddedKafka;
  import org.springframework.test.annotation.DirtiesContext;

  @SpringBootTest
  @DirtiesContext
  @EmbeddedKafka(partitions = 1, brokerProperties = { "listeners=PLAINTEXT://localhost:9092", "port=9092" })
  class MyTestConsumerApplicationTests {
      // Your test here
  }
Hob answered 4/4 at 1:7 Comment(0)
