My question is an extension of this one: How to manipulate page cache in Linux?
I'm working on a small project that aims to limit the page cache size on a per-file basis. The approach I used is as follows:
- Maintain a kfifo queue of page pointers as they are added to the page cache.
- Add a hook in add_to_page_cache_lru(); if the size of the radix tree (the file's address_space) exceeds a predetermined limit, choose a victim from the FIFO queue and delete that page from the page cache.
- I used the functions delete_from_page_cache() and try_to_unmap() to evict the page from the page cache, followed by put_page() to release the page.
I expect this code to free the pages and release the memory, but that doesn't seem to happen. For example, if I read a 25MB file and restrict its page cache to 512 pages (2MB), I expect free memory (free -m) to drop by only 2MB. Instead, the full 25MB is consumed and shows up in the free output.
What more should I do to ensure that my requirements are fulfilled? I've not thought about dirty pages yet as I couldn't even make it work for reads (cat the file). Any pointers would be helpful.
P.S. - I'm using Linux 4.0 for this project.
Take a look at how drop_caches works within the kernel, i.e. how it accomplishes something similar: https://mcmap.net/q/540096/-how-can-i-shrink-the-linux-page-cache-from-within-kernel-space – Halm