What are the advantages of memory-mapped files?
I've been researching memory-mapped files for a project and would appreciate any thoughts from people who have either used them before or decided against using them, and why.

In particular, I am concerned about the following, in order of importance:

  • concurrency
  • random access
  • performance
  • ease of use
  • portability
Diamonddiamondback answered 10/10, 2008 at 18:19 Comment(0)
61

I think the advantage is really that you reduce the amount of data copying required over traditional methods of reading a file.

If your application can use the data "in place" in a memory-mapped file, it can be used without being copied; if you use a system call (e.g. Linux's pread()) then that typically involves the kernel copying the data from its own buffers into user space. This extra copying not only takes time, but also decreases the effectiveness of the CPU's caches by accessing this extra copy of the data.

If the data actually have to be read from the disc (as in physical I/O), then the OS still has to read them in, and a page fault probably isn't any better performance-wise than a system call; but if they don't (i.e. they are already in the OS cache), performance should in theory be much better.

On the downside, there's no asynchronous interface to memory-mapped files - if you attempt to access a page which isn't mapped in, it generates a page fault and the thread then waits for the I/O.


The obvious disadvantage to memory mapped files is on a 32-bit OS - you can easily run out of address space.

Anima answered 10/10, 2008 at 20:1 Comment(4)
On Windows at least you can map multiple 32-bit views of a larger mmap'd file, which can be more efficient than trying to deal with very large files using regular CRT functions. - Alysaalyse
@Anima You wrote "This extra copying not only takes time, but decreases the effectiveness of the CPU's caches by accessing this extra copy of the data." (emphasis mine). Can you please explain how the extra buffer copy in the kernel hinders the effectiveness of the CPU's caches? - Coulee
@Coulee Accessing twice as much memory = twice as much cache wasted (very approximately). - Mattias
A major drawback of memory-mapped files is error handling. The logistics of recovering to a well-defined state on a segment violation really hurt ease of use. With discipline about where you access memory, suitable unwinding points, and barriers to prevent reordering, this is doable, yet it defeats the convenience of passing native pointers around. It is effectively the difference between the convenience of C++ exceptions and the pitfalls of (sig)longjmp. I would advise avoiding memory-mapped files in applications where crashing is unacceptable if the user yanks a USB drive or a network share goes down. - Campania
57

I have used a memory mapped file to implement an 'auto complete' feature while the user is typing. I have well over 1 million product part numbers stored in a single index file. The file has some typical header information but the bulk of the file is a giant array of fixed size records sorted on the key field.

At runtime the file is memory mapped, cast to a C-style struct array, and we do a binary search to find matching part numbers as the user types. Only a few memory pages of the file are actually read from disk -- whichever pages are hit during the binary search.

  • Concurrency - I had an implementation problem where it would sometimes memory map the file multiple times in the same process space. This was a problem as I recall because sometimes the system couldn't find a large enough free block of virtual memory to map the file to. The solution was to only map the file once and thunk all calls to it. In retrospect, using a full-blown Windows service would have been cool.
  • Random Access - The binary search is certainly random access and lightning fast
  • Performance - The lookup is extremely fast. As users type a popup window displays a list of matching product part numbers, the list shrinks as they continue to type. There is no noticeable lag while typing.
Discography answered 10/10, 2008 at 19:4 Comment(6)
Wouldn't the binary search be slow as the pages are read in for each attempt? Or is the operating system smart enough to deal with this in an efficient way? - Township
I suppose using memory mapped I/O is kind of wasteful for the binary search, as the search will only access a few single keys in relatively distant memory locations, but the OS will load in 4k pages for each such request. But then again, the file with parts doesn't change much, so the cache helps to cover this up. But strictly speaking, I believe that traditional seeking/reading would be better here. Finally, 1 million is not much these days. Why not just keep it all in RAM? - Giddens
@the swine and PsychoDad my original answer was from 2008 and the actual implementation of this memory-mapped auto-complete feature was around 2004-2005 or so. Consuming 800-1000MB of physical memory to load the entire file was not a good solution for our user base. The memory-mapped solution was very fast and efficient. It kicked ass and I remember it fondly from my early junior-developer days. :) - Discography
@BrianEnsink: OK, that makes sense. I didn't expect each entry to be as much as 1 kB; then of course the paged approach turns out more efficient. Nice :) - Giddens
"Consuming physical memory to load the entire file was not a good solution." I'm not sure why you're concerned with physical memory (does loading a file into a byteBuffer take more physical memory? That's an OS detail). mmap will take up all that space in virtual memory, and reading a file by the specific bytes you need won't. the swine mentioned "But strictly speaking, I believe that traditional seeking/reading would be better here." That's an interesting thought, because then you can literally just read the bytes you need when you need them. - Jauch
I think the performance (increased speed, reduced latency) advantage for Brian was caused by using mmap, but this also means he did in fact consume 800-1000MB of not physical, but virtual memory. It's a memory vs. latency trade-off. The alternative solution (reading files the normal way) is more memory-efficient than mmapping, but it's slower. I don't fully understand file systems 100%, but I think there was confusion back in 2013 here; unix.stackexchange.com/questions/367982/… could help. - Jauch
22

Memory mapped files can be used to either replace read/write access, or to support concurrent sharing. When you use them for one mechanism, you get the other as well.

Rather than lseeking and writing and reading around in a file, you map it into memory and simply access the bits where you expect them to be.

This can be very handy, and depending on the virtual memory interface can improve performance. The performance improvement can occur because the operating system now gets to manage this former "file I/O" along with all your other programmatic memory access, and can (in theory) leverage the paging algorithms and so forth that it is already using to support virtual memory for the rest of your program. It does, however, depend on the quality of your underlying virtual memory system. Anecdotes I have heard say that the Solaris and *BSD virtual memory systems may show better performance improvements than the VM system of Linux--but I have no empirical data to back this up. YMMV.

Concurrency comes into the picture when you consider the possibility of multiple processes using the same "file" through mapped memory. In the read/write model, if two processes wrote to the same area of the file, you could be pretty much assured that one process's data would arrive in the file, overwriting the other's. You'd get one, or the other - but not some weird intermingling. I have to admit I am not sure whether this is behavior mandated by any standard, but it is something you could pretty much rely on. (It's actually a good follow-up question!)

In the mapped world, in contrast, imagine two processes both "writing". They do so by doing "memory stores", which result in the O/S paging the data out to disk--eventually. But in the meantime, overlapping writes can be expected to occur.

Here's an example. Say I have two processes both writing 8 bytes at offset 1024. Process 1 is writing '11111111' and process 2 is writing '22222222'. If they use file I/O, then you can imagine, deep down in the O/S, there is a buffer full of 1s, and a buffer full of 2s, both headed to the same place on disk. One of them is going to get there first, and the other one second. In this case, the second one wins. However, if I am using the memory-mapped file approach, process 1 is going to do a memory store of 4 bytes, followed by another memory store of 4 bytes (let's assume that's the maximum memory store size). Process 2 will be doing the same thing. Based on when the processes run, you can expect to see any of the following:

11111111
22222222
11112222
22221111

The solution to this is to use explicit mutual exclusion--which is probably a good idea in any event. You were sort of relying on the O/S to do "the right thing" in the read/write file I/O case, anyway.

The classic mutual exclusion primitive is the mutex. For memory-mapped files, I'd suggest a process-shared mutex placed in the mapped region, set up with (e.g.) pthread_mutex_init().

Edit with one gotcha: When you are using mapped files, there is a temptation to embed pointers to the data in the file, in the file itself (think linked list stored in the mapped file). You don't want to do that, as the file may be mapped at different absolute addresses at different times, or in different processes. Instead, use offsets within the mapped file.

Yhvh answered 10/10, 2008 at 19:3 Comment(0)
2

  • Concurrency - would be an issue.
  • Random access - easier.
  • Performance - good to great.
  • Ease of use - not as good.
  • Portability - not so hot.

I've used them on a Sun system a long time ago, and those are my thoughts.

Birnbaum answered 10/10, 2008 at 18:21 Comment(0)