I have an application that sequentially reads data from a file. Some of it is read directly through a pointer into the mmapped file, and other parts are memcpy'd from the file into another buffer. I noticed poor performance when doing a large memcpy of all the memory I needed (1MB blocks) and better performance when doing a lot of smaller memcpy calls (in my tests I used 4KB, the page size, which took 1/3 of the time to run). I believe the issue is a very large number of major page faults when using the large memcpy.
I've tried various tuning parameters (MAP_POPULATE, MADV_WILLNEED, MADV_SEQUENTIAL) without any noticeable improvement.
I'm not sure why many small memcpy calls should be faster; it seems counter-intuitive. Is there any way to improve this?
Results and test code follow.
Running on CentOS 7 (Linux 3.10.0), default compiler (gcc 4.8.5), reading a 29GB file from a RAID array of regular disks.
Running with /usr/bin/time -v:
4KB memcpy:
User time (seconds): 5.43
System time (seconds): 10.18
Percent of CPU this job got: 75%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:20.59
Major (requiring I/O) page faults: 4607
Minor (reclaiming a frame) page faults: 7603470
Voluntary context switches: 61840
Involuntary context switches: 59
1MB memcpy:
User time (seconds): 6.75
System time (seconds): 8.39
Percent of CPU this job got: 23%
Elapsed (wall clock) time (h:mm:ss or m:ss): 1:03.71
Major (requiring I/O) page faults: 302965
Minor (reclaiming a frame) page faults: 7305366
Voluntary context switches: 302975
Involuntary context switches: 96
MADV_WILLNEED did not seem to have much impact on the 1MB copy result.
MADV_SEQUENTIAL slowed the 1MB copy down so much that I didn't wait for it to finish (at least 7 minutes).
MAP_POPULATE slowed the 1MB copy result by about 15 seconds.
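(The fault counts above come from /usr/bin/time -v. If it helps, the same counters can also be sampled from inside the process with getrusage; this is just a minimal measurement sketch and is not part of the test program below.)

#include <sys/resource.h>
#include <cstdio>

// Print the process's cumulative major/minor page fault counts so far.
static void print_fault_counts(const char *label)
{
    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) == 0) {
        std::printf("%s: major faults = %ld, minor faults = %ld\n",
                    label, ru.ru_majflt, ru.ru_minflt);
    }
}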
Simplified code used for the test:
#include <algorithm>
#include <iostream>
#include <stdexcept>

#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int
main(int argc, char *argv[])
{
    try {
        char *filename = argv[1];

        int fd = open(filename, O_RDONLY);
        if (fd == -1) {
            throw std::runtime_error("Failed open()");
        }

        off_t file_length = lseek(fd, 0, SEEK_END);
        if (file_length == (off_t)-1) {
            throw std::runtime_error("Failed lseek()");
        }

        int mmap_flags = MAP_PRIVATE;
#ifdef WITH_MAP_POPULATE
        mmap_flags |= MAP_POPULATE;  // Small performance degradation if enabled
#endif
        void *map = mmap(NULL, file_length, PROT_READ, mmap_flags, fd, 0);
        if (map == MAP_FAILED) {
            throw std::runtime_error("Failed mmap()");
        }

#ifdef WITH_MADV_WILLNEED
        madvise(map, file_length, MADV_WILLNEED);   // No difference in performance if enabled
#endif
#ifdef WITH_MADV_SEQUENTIAL
        madvise(map, file_length, MADV_SEQUENTIAL); // Massive performance degradation if enabled
#endif

        const uint8_t *file_map_i = static_cast<const uint8_t *>(map);
        const uint8_t *file_map_end = file_map_i + file_length;

        // MEMCPY_SIZE is defined at compile time (4KB vs 1MB in the tests above)
        size_t memcpy_size = MEMCPY_SIZE;
        uint8_t *buffer = new uint8_t[memcpy_size];

        while (file_map_i != file_map_end) {
            size_t this_memcpy_size = std::min(memcpy_size, static_cast<std::size_t>(file_map_end - file_map_i));
            memcpy(buffer, file_map_i, this_memcpy_size);
            file_map_i += this_memcpy_size;
        }
    }
    catch (const std::exception &e) {
        std::cerr << "Caught exception: " << e.what() << std::endl;
    }

    return 0;
}
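For reference, MEMCPY_SIZE and the WITH_* options are selected at compile time; the build and run lines looked roughly like this (file names are just placeholders):

g++ -O2 -DMEMCPY_SIZE=4096 test.cpp -o test_4k
g++ -O2 -DMEMCPY_SIZE=1048576 test.cpp -o test_1m
/usr/bin/time -v ./test_4k /path/to/29GB/file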
Comments:
- …perf stat? – Craner
- perf stat showed page-faults to be the same on the two different memcpy sizes (it seems to be the same as "minor page faults" in /usr/bin/time -v), but shows context switches to be ~4 times as many with the 1MB copy. – Annika
- …read instead. – Racket
- …MAP_NORESERVE to mmap_flags? – Whoso