Java FileLock for Reading and Writing

I have a process that will be called rather frequently from cron to read a file that has certain move-related commands in it. My process needs to read and write to this data file, and keep it locked to prevent other processes from touching it during this time. A completely separate process can be executed by a user to (potentially) write/append to this same data file. I want these two processes to play nicely and only access the file one at a time.

The nio FileLock seemed to be what I needed (short of writing my own semaphore-type files), but I'm having trouble locking it for reading. I can lock and write just fine, but when attempting to create a lock for reading I get a NonWritableChannelException. Is it even possible to lock a file for reading? It seems like a RandomAccessFile is closer to what I need, but I don't see how to implement that.

Here is the code that fails:

FileInputStream fin = new FileInputStream(f);
FileLock fl = fin.getChannel().tryLock();
if(fl != null) 
{
  System.out.println("Locked File");
  BufferedReader in = new BufferedReader(new InputStreamReader(fin));
  System.out.println(in.readLine());
          ...

The exception is thrown on the FileLock line.

java.nio.channels.NonWritableChannelException
 at sun.nio.ch.FileChannelImpl.tryLock(Unknown Source)
 at java.nio.channels.FileChannel.tryLock(Unknown Source)
 at Mover.run(Mover.java:74)
 at java.lang.Thread.run(Unknown Source)

Looking at the JavaDocs, it says

Unchecked exception thrown when an attempt is made to write to a channel that was not originally opened for writing.

But I don't necessarily need to write to it. When I try creating a FileOutputStream, etc. for writing purposes it is happy until I try to open a FileInputStream on the same file.

Brodie answered 15/2, 2010 at 21:20 Comment(2)
Have you tried using the three-parameter method, FileLock lock(long position, long size, boolean shared)? I have never used FileLock before, so I won't post this as an answer, but I think that method call may help, since it sounds like you need a shared lock rather than an exclusive lock if you want to write to the file while it has a lock on it.Skatole
I think the intent of that is to lock only portions of a file; however, I would like to lock the entire file to prevent any possible corruption.Brodie
  1. Are you aware that locking the file won't keep other processes from touching it unless they also use locks?
  2. You have to lock via a writable channel. Get the lock via a RandomAccessFile in "rw" mode and then open your FileInputStream. Make sure to close both!
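A minimal sketch of that approach (the class and file names are placeholders, not from the question). Rather than opening a separate FileInputStream, it reads from the RandomAccessFile directly, as the comments below also mention:

import java.io.File;
import java.io.RandomAccessFile;
import java.nio.channels.FileLock;

public class LockThenRead
{
    public static void main(String[] args) throws Exception
    {
        File f = new File("moves.dat"); // placeholder name for the shared data file

        // Open in "rw" mode so the channel is writable and an exclusive lock is allowed.
        try (RandomAccessFile raf = new RandomAccessFile(f, "rw"))
        {
            FileLock lock = raf.getChannel().tryLock();
            if (lock != null)
            {
                try
                {
                    // RandomAccessFile implements DataInput, so it can be read directly.
                    String line;
                    while ((line = raf.readLine()) != null)
                    {
                        System.out.println(line);
                    }
                }
                finally
                {
                    lock.release();
                }
            }
            else
            {
                System.out.println("File is locked by another process");
            }
        }
    }
}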
Aneroidograph answered 15/2, 2010 at 21:46 Comment(4)
a) Yes, I will be writing both processes and plan to implement similar locking procedures in both. b) I didn't realize you could get the lock on the RandomAccessFile directly. To use the File[Input|Output]Stream I needed to do new FileInputStream(raf.getFD()). But either way, using the input stream of the RandomAccessFile object directly, I can still read from the file. ThanksBrodie
Eh? (a) you can't get a lock directly from RandomAccessFile, you have to get its FileChannel first; (b) RandomAccessFile doesn't have input streams, but it does implement DataInput so you can read from it directly.Aneroidograph
I notice that on Linux (Oracle Linux Server, similar to Red Hat Linux) file locking in Java does not work across OS instances. That is, it will lock within one operating system but not from another box.Shana
@DonSmith That's stated in the Javadoc. Network file locks are always problematic and to be avoided, as indeed are network files.Aneroidograph

It would be better if you created the lock using tryLock(0L, Long.MAX_VALUE, true).

This creates a shared lock which is the right thing to do for reading.

tryLock() is a shorthand for tryLock(0L, Long.MAX_VALUE, false), i.e. it requests an exclusive write-lock.
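
A minimal sketch of a shared read-lock (the file name is a placeholder). A shared lock only requires a channel opened for reading, so a FileInputStream's channel, as in the question, works here:

import java.io.BufferedReader;
import java.io.FileInputStream;
import java.io.InputStreamReader;
import java.nio.channels.FileLock;

public class SharedReadLock
{
    public static void main(String[] args) throws Exception
    {
        try (FileInputStream fin = new FileInputStream("moves.dat")) // placeholder file name
        {
            // shared = true: other readers may hold the lock at the same time,
            // but an exclusive (write) lock is blocked while it is held.
            FileLock fl = fin.getChannel().tryLock(0L, Long.MAX_VALUE, true);
            if (fl != null)
            {
                try (BufferedReader in = new BufferedReader(new InputStreamReader(fin)))
                {
                    System.out.println(in.readLine());
                }
                // Closing the stream closes the channel, which also releases the lock.
            }
        }
    }
}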

Kingcup answered 29/5, 2010 at 12:38 Comment(3)
Great response. This program is already live, but this could certainly be updated in the next phase. We are seeing so much use of it now that exclusive locks are becoming cumbersome in certain scenarios. I'll definitely keep this in mind for the next update.Brodie
Why do you say that a shared lock is the right thing to do for reading? Surely that depends on your use case. If I want to ensure that only one of a number of processes reads a file (e.g. agents monitoring an upload directory to process new files), then I don't think a shared lock is what I need.Noonan
I wrote it like that because reading isn't mutually exclusive. It is actually possible for several threads to read from one file without interfering while writing is always exclusive. Keep in mind that this was an answer to the original question and is not meant as an absolute truth for every situation. It's true in 99% of the use cases, though.Kingcup

I wrote a test program and bash commands to confirm the effectiveness of the file lock:

import java.io.File;
import java.io.RandomAccessFile;
import java.nio.channels.FileLock;

public class FileWriterTest
{
    public static void main(String[] args) throws Exception
    {
        if (args.length != 4)
        {
            System.out.println("Usage: FileWriterTest <filename> <string> <sleep ms> <enable lock>");
            System.exit(1);
        }

        String filename = args[0];
        String data = args[1];
        int sleep = Integer.parseInt(args[2]);
        boolean enableLock = Boolean.parseBoolean(args[3]);

        try (RandomAccessFile raFile = new RandomAccessFile(new File(filename), "rw"))
        {
            FileLock lock = null;
            if (enableLock)
            {
                // Blocks until an exclusive lock on the whole file is acquired.
                lock = raFile.getChannel().lock();
            }

            // Hold the file for a while so concurrent writers queue up behind the lock.
            Thread.sleep(sleep);

            // Append: move the file pointer to the end before writing.
            raFile.seek(raFile.length());
            System.out.println("writing " + data + " in a new line; current pointer = " + raFile.getFilePointer());
            raFile.write((data+"\n").getBytes());

            if (lock != null)
            {
                lock.release();
            }
        }
    }
}

Run with this bash command to check it works:

for i in {1..1000}
do
java FileWriterTest test.txt $i 10 true &
done

You should see a write happening only once every 10 ms (judging from the output), and in the end all 1000 numbers should be present in the file.

Output:

$ wc -l test.txt
1000 test.txt

The same test without the lock shows data being lost:

for i in {1..1000}
do
java FileWriterTest test.txt $i 10 false &
done

Output:

$ wc -l test.txt
764 test.txt

It should be easy to modify it to test the tryLock instead.
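
For example, a non-blocking variant (a sketch, not part of the original test) would replace the lock() call with something like this:

            FileLock lock = null;
            if (enableLock)
            {
                // tryLock() returns null immediately if another process holds the lock.
                lock = raFile.getChannel().tryLock();
                if (lock == null)
                {
                    System.out.println("could not acquire lock; skipping write");
                    return;
                }
            }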

Unsling answered 15/11, 2019 at 6:30 Comment(0)
