Memory-mapped files and atomic writes of single blocks

If I read and write a single file using normal IO APIs, writes are guaranteed to be atomic on a per-block basis. That is, if my write only modifies a single block, the operating system guarantees that either the whole block is written, or nothing at all.

How do I achieve the same effect on a memory mapped file?

Memory-mapped files are simply byte arrays, so if I modify the byte array, the operating system has no way of knowing when I consider a write "done". It might therefore (however unlikely) write the page back to disk right in the middle of my block-writing operation, and in effect I write half a block.

I'd need some sort of "enter/leave critical section" mechanism, or some method of "pinning" the page of a file into memory while I'm writing to it. Does something like that exist? If so, is it portable across common POSIX systems and Windows?
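
(For illustration, the kind of "pinning" I have in mind would look roughly like the sketch below using POSIX mlock(2) -- Windows has VirtualLock -- though as far as I can tell locking only keeps the page resident in RAM and does not stop the kernel from writing a dirty MAP_SHARED page back mid-update, so it is not obviously enough by itself.)

    /* Rough sketch of the "pinning" idea (POSIX, error handling trimmed).
     * Note: mlock() only guarantees residency in RAM; it does not appear
     * to delay writeback of dirty MAP_SHARED pages. */
    #include <fcntl.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    #define BLOCK_SIZE 4096   /* assumed page-sized blocks */

    int update_block(const char *path, off_t block_no, const void *data)
    {
        int fd = open(path, O_RDWR);
        if (fd < 0) return -1;

        uint8_t *map = mmap(NULL, BLOCK_SIZE, PROT_READ | PROT_WRITE,
                            MAP_SHARED, fd, block_no * BLOCK_SIZE);
        if (map == MAP_FAILED) { close(fd); return -1; }

        mlock(map, BLOCK_SIZE);             /* "pin" the page in RAM        */
        memcpy(map, data, BLOCK_SIZE);      /* writeback could still happen
                                               in the middle of this copy   */
        msync(map, BLOCK_SIZE, MS_SYNC);    /* force the whole block out    */
        munlock(map, BLOCK_SIZE);

        munmap(map, BLOCK_SIZE);
        close(fd);
        return 0;
    }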

Combination answered 21/9, 2010 at 10:17 Comment(2)
How many applications are interacting with your mapped file? – Space
Only one process, i.e., the database server. – Combination

The technique of keeping a journal seems to be the only way. I don't know how this works with multiple apps writing to the same file. The Cassandra project has a good article on how to get performance with a journal. The key thing to make sure of is that the journal only records positive actions (my first approach was to write the pre-image of each write to the journal, allowing you to roll back, but it got overly complicated).

So basically your memory-mapped file has a transactionId in its header; if the header fits into one block, you know it won't get corrupted, though many people seem to write it twice with a checksum: [header[cksum]] [header[cksum]]. If the first checksum fails, use the second.
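
A rough sketch of that doubly-written header (field names and the crc32() helper are placeholders, not taken from any particular project):

    /* Two copies of the header, each protected by its own checksum.
     * crc32() is an assumed helper; layout and names are illustrative. */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    struct header {
        uint64_t txn_id;   /* last transaction applied to the data file */
        uint32_t cksum;    /* checksum over the fields above            */
    };

    extern uint32_t crc32(const void *buf, size_t len);

    /* base points at the start of the mapped file; both copies fit in
     * the first block. */
    void write_header(uint8_t *base, uint64_t txn_id)
    {
        struct header h = { .txn_id = txn_id };
        h.cksum = crc32(&h, offsetof(struct header, cksum));
        memcpy(base, &h, sizeof h);               /* first copy  */
        memcpy(base + sizeof h, &h, sizeof h);    /* second copy */
    }

    int read_header(const uint8_t *base, struct header *out)
    {
        struct header h;
        for (int i = 0; i < 2; i++) {
            memcpy(&h, base + i * sizeof h, sizeof h);
            if (crc32(&h, offsetof(struct header, cksum)) == h.cksum) {
                *out = h;          /* use the first copy that checks out */
                return 0;
            }
        }
        return -1;                 /* both copies corrupt */
    }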

The journal looks something like this:

[beginTxn[txnid]] [offset, length, data...] [commitTxn[txnid]]

You just keep appending journal records until the journal gets too big, then roll it over at some point. When you start up your program, you check whether the file's transaction id matches the last transaction id of the journal -- if not, you play back all the transactions in the journal to sync up.
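
A sketch of what the record layout and the startup replay could look like (names are illustrative, and commit-marker/checksum validation is elided):

    /* Illustrative journal record plus the startup replay loop. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct journal_record {
        uint64_t txn_id;   /* transaction this write belongs to   */
        uint64_t offset;   /* where in the data file the bytes go */
        uint32_t length;   /* number of payload bytes that follow */
        /* payload bytes follow, then a commit marker / checksum  */
    };

    /* If the data file's header txn_id is behind the journal, re-apply
     * every complete record that is newer than it. */
    void recover(uint8_t *data_file, uint64_t file_txn_id, FILE *journal)
    {
        struct journal_record rec;
        uint8_t buf[64 * 1024];

        while (fread(&rec, sizeof rec, 1, journal) == 1) {
            if (rec.length > sizeof buf ||
                fread(buf, 1, rec.length, journal) != rec.length)
                break;                         /* truncated tail: stop */
            if (rec.txn_id > file_txn_id)
                memcpy(data_file + rec.offset, buf, rec.length);
        }
        /* then bump the header txn_id (doubly, as above), msync(), and
         * only after that roll the journal over */
    }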

Space answered 25/10, 2010 at 16:41 Comment(4)
Yes, journaling is the way to go; I'm aware of those algorithms. But the problem is that even when using a journal, you have to guarantee that individual pages of the data file(s) are written in one complete go, otherwise you risk a "half-written" page and you cannot detect whether it is corrupted or not. That's why I'm looking for a way to do atomic writes of pages in mapped files. – Combination
Why doesn't this work: partialWrite = (file.transaction-id < journal.transaction-id)? After all, you only update the file's transaction-id at the end (once the page has been updated). It's also difficult to achieve perfect durability (see h2database.com/html/advanced.html#durability_problems). – Space
@MartinProbst: You can't do atomic writes of pages at all. It's a fundamentally asynchronous operation to the Windows kernel, I believe. You'll probably want to check out the FlushFileBuffers Win API function. – Calida
Thanks very much for your informative answer! The double-write approach is brilliant. Could you point me to code that uses this method? As to "if your header fits into one block you know it won't get corrupted", I don't see why that's the case, because the process can crash in the middle of copying data to mmap()ed memory. Can you please elaborate on that? Thank you again! – Handiness

"If I read and write a single file using normal IO APIs, writes are guaranteed to be atomic on a per-block basis. That is, if my write only modifies a single block, the operating system guarantees that either the whole block is written, or nothing at all."

In the general case, the OS does not guarantee that "writes of a block" done with "normal IO APIs" are atomic:

  • Blocks are more of a filesystem concept - a filesystem's block size may actually map to multiple disk sectors (see the sketch after this list)...
  • Assuming you meant sector: how do you know your write mapped to exactly one sector? There's nothing saying the I/O stayed aligned to a sector boundary once it has gone through the indirection of a filesystem.
  • There's nothing saying your disk HAS to implement sector atomicity. A "real disk" usually does, but it's not mandatory or a guaranteed property. Sadly, your program can't "check" for this property unless it's an NVMe disk and you have access to the raw device, or you're sending raw commands that have atomicity guarantees to a raw device.
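
As a rough illustration of that block/sector mismatch, on Linux you can compare the filesystem's block size with the device's logical sector size (Linux-specific sketch; the mount point and device path are placeholders):

    /* Linux-only sketch: filesystem block size vs. device sector size.
     * "/mnt/data" and "/dev/sda" are placeholders. */
    #include <fcntl.h>
    #include <linux/fs.h>       /* BLKSSZGET */
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <sys/statvfs.h>
    #include <unistd.h>

    int main(void)
    {
        struct statvfs vfs;
        if (statvfs("/mnt/data", &vfs) == 0)
            printf("filesystem block size: %lu\n",
                   (unsigned long)vfs.f_bsize);

        int fd = open("/dev/sda", O_RDONLY);
        if (fd >= 0) {
            int sector = 0;
            if (ioctl(fd, BLKSSZGET, &sector) == 0)
                printf("device logical sector size: %d\n", sector);
            close(fd);
        }
        return 0;
    }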

Further, you're usually concerned with durability over multiple sectors (e.g. if power loss happens, was the data I sent before this sector definitely on stable storage?). If there's any buffering going on, your write may still have been only in RAM / the disk cache, unless you used another command to check first, or opened the file/device with flags requesting cache bypass and those flags were actually honoured.
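
For completeness, a minimal sketch of asking for durability explicitly instead of relying on the write alone (plain POSIX; the path is a placeholder):

    /* A write() on its own may only reach the page cache; fsync() (or
     * opening with O_SYNC) asks for it to be pushed to stable storage. */
    #include <fcntl.h>
    #include <sys/types.h>
    #include <unistd.h>

    int durable_write(const char *path, const void *buf, size_t len)
    {
        int fd = open(path, O_WRONLY | O_CREAT, 0644);  /* or add O_SYNC */
        if (fd < 0) return -1;

        if (write(fd, buf, len) != (ssize_t)len || fsync(fd) != 0) {
            close(fd);
            return -1;
        }
        return close(fd);   /* 0 on success */
    }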

Taler answered 9/6, 2020 at 3:56 Comment(0)
