In my software I have 4x 500 GB files which I write to sequentially, in a circular fashion, using Boost's memory-mapped file APIs.
I allocate regions in 32 MB blocks. When a block straddles the end of the file, I create two memory-mapped regions: the first covers the tail of the file, and the second covers the start of the file and is mapped at the address right after the end of the first region, so the block appears contiguous.
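To make that layout concrete, here is a stripped-down sketch (not my production code) of how the wrap-around mapping can be set up with boost::interprocess; `map_block`, `kBlockSize` and `kFileSize` are illustrative names, and error handling for the case where the OS refuses the address hint is omitted:

```cpp
#include <boost/interprocess/file_mapping.hpp>
#include <boost/interprocess/mapped_region.hpp>
#include <algorithm>
#include <cstddef>
#include <utility>

namespace bip = boost::interprocess;

constexpr std::size_t kBlockSize = 32ull * 1024 * 1024;          // 32 MB allocation block
constexpr std::size_t kFileSize  = 500ull * 1024 * 1024 * 1024;  // 500 GB data file

// Map the 32 MB block starting at 'offset'. If it straddles the end of the
// file, map the tail of the file first and then map the head of the file
// directly behind it, using an address hint so the two views are contiguous.
std::pair<bip::mapped_region, bip::mapped_region>
map_block(bip::file_mapping& file, std::size_t offset)
{
    const std::size_t first_len = std::min(kBlockSize, kFileSize - offset);

    bip::mapped_region tail(file, bip::read_write, offset, first_len);

    if (first_len == kBlockSize)                        // no wrap needed
        return {std::move(tail), bip::mapped_region()};

    // Second view: start of the file, placed right after the first view.
    // The address hint may be rejected by the OS, in which case this throws.
    void* hint = static_cast<char*>(tail.get_address()) + tail.get_size();
    bip::mapped_region head(file, bip::read_write, 0,
                            kBlockSize - first_len, hint);

    return {std::move(tail), std::move(head)};
}
```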
This works just fine with smaller files. However, with the big files, disk performance drops through the floor once the writer reaches the wrap-around region at the end, and I'm not sure how to avoid it.
My guess is that the disk is being asked to write to both ends of the file, so the spindle has to seek back and forth. That is a rather silly thing to do for what are essentially sequential writes, and I would have hoped the OS would be a bit smarter about it.
Does anyone have any ideas on how to avoid this issue?
I was thinking of upgrading to Windows 10 in the hope that it handles this better, but that is a rather risky change I would like to avoid right now.
I should also note that the files live on a software RAID 1 of 2x 3 TB Seagate Constellation Enterprise drives. These drives have a minimum sequential write speed of 60 MB/s and an average of 120 MB/s, and my total write rate across all files is 30 MB/s.
The code can be found here.
EDIT:
It turns out that after the entire file has been written once and writing starts over from the beginning, the OS starts reading back what is already on disk, even though that data is never needed. I believe this read-back is what is causing the problem.
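If that read-back is ordinary demand paging (the memory manager has to fault the existing page in from the file before the first write to it can complete, since it cannot know the whole page will be overwritten), one workaround I am considering is to prefetch the next block ahead of the writer, so the read happens as one large sequential transfer instead of scattered page-sized faults. This is just an idea, not something I have verified on this setup; the sketch below uses PrefetchVirtualMemory, which only exists on Windows 8 / Server 2012 and later, and `prefetch_block` is a name I made up.

```cpp
#include <windows.h>   // PrefetchVirtualMemory (memoryapi.h), Windows 8+ only
#include <cstddef>

// Ask the memory manager to page the given range in with one large sequential
// read, instead of letting each first write fault in a single page at a time.
// 'addr' and 'len' would describe the next 32 MB block of the mapped view.
bool prefetch_block(void* addr, std::size_t len)
{
    WIN32_MEMORY_RANGE_ENTRY range;
    range.VirtualAddress = addr;
    range.NumberOfBytes  = len;
    return PrefetchVirtualMemory(GetCurrentProcess(), 1, &range, 0) != FALSE;
}
```

Calling this for block N+1 while block N is still being filled would, in theory, keep the read-back ahead of the writes; whether that actually helps on this RAID 1 is something I would still have to measure.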