hugetlbfs: dirty pages as they are added to pagecache
author     Mike Kravetz <mike.kravetz@oracle.com>
           Thu, 18 Oct 2018 21:31:47 +0000 (14:31 -0700)
committer  Brian Maly <brian.maly@oracle.com>
           Sat, 20 Oct 2018 04:53:20 +0000 (00:53 -0400)
Some test systems were experiencing negative huge page reserve
counts and incorrect file block counts.  This was traced to
/proc/sys/vm/drop_caches removing clean pages from hugetlbfs
file pagecaches.  When those pages are removed by generic code rather
than by hugetlbfs itself, the hugetlbfs-specific accounting is not
performed.

This can be recreated as follows:
 fallocate -l 2M /dev/hugepages/foo
 echo 1 > /proc/sys/vm/drop_caches
 fallocate -l 2M /dev/hugepages/foo
 grep -i huge /proc/meminfo
   AnonHugePages:         0 kB
   ShmemHugePages:        0 kB
   HugePages_Total:    2048
   HugePages_Free:     2047
   HugePages_Rsvd:    18446744073709551615
   HugePages_Surp:        0
   Hugepagesize:       2048 kB
   Hugetlb:         4194304 kB
 ls -lsh /dev/hugepages/foo
   4.0M -rw-r--r--. 1 root root 2.0M Oct 17 20:05 /dev/hugepages/foo
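
The huge HugePages_Rsvd value above is a reserve count that has wrapped
around: the kernel keeps it in an unsigned long, so decrementing it past
zero on a 64-bit system yields 2^64 - 1.  A minimal userspace
illustration of that wraparound (plain C, not kernel code):

 #include <stdio.h>

 int main(void)
 {
         /* stand-in for a reserve counter kept as an unsigned type */
         unsigned long resv = 0;

         /* an unbalanced decrement, like the one caused by the missed
          * accounting above, drops the counter below zero and wraps */
         resv -= 1;

         /* prints 18446744073709551615 on 64-bit, as in meminfo above */
         printf("HugePages_Rsvd: %lu\n", resv);
         return 0;
 }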

To address this issue, dirty pages as they are added to the pagecache.
The problem is easiest to hit with fallocate, as shown above, because
fallocated pages are added to the pagecache without ever being written
and therefore stay clean.  Read-faulted pages will eventually be marked
dirty, but there is a window where they are clean and can be affected
by code such as drop_caches.  So, just dirty them all as they are added
to the pagecache.
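
For context, drop_caches only throws away clean, unmapped pagecache
pages; the invalidation path bails out as soon as it sees a dirty or
writeback page.  As a simplified paraphrase (not an exact quote) of
invalidate_inode_page() in mm/truncate.c from kernels of this era, the
check looks roughly like this, which is why setting the dirty bit is
enough to keep hugetlbfs pages out of that path:

 /*
  * Simplified paraphrase, for illustration only: pages that are
  * dirty, under writeback or mapped are skipped.  With the change
  * below, hugetlbfs pages fall out at the PageDirty() test.
  */
 int invalidate_inode_page(struct page *page)
 {
         struct address_space *mapping = page_mapping(page);

         if (!mapping)
                 return 0;
         if (PageDirty(page) || PageWriteback(page))
                 return 0;
         if (page_mapped(page))
                 return 0;
         return invalidate_complete_page(mapping, page);
 }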

Orabug: 28813968

Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Larry Bassel <larry.bassel@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c7ebb5f9aa6ba57e88ec4bf531fbc831053de40b..d55d43ed643e61d21af98585953b2bea603ed86b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3548,6 +3548,12 @@ int huge_add_to_page_cache(struct page *page, struct address_space *mapping,
                return err;
        ClearPagePrivate(page);
 
+       /*
+        * set page dirty so that it will not be removed from cache/file
+        * by non-hugetlbfs specific code paths.
+        */
+       set_page_dirty(page);
+
        spin_lock(&inode->i_lock);
        inode->i_blocks += blocks_per_huge_page(h);
        spin_unlock(&inode->i_lock);