The bit pattern of _PAGE_DIRTY set and _PAGE_RW clear is used to
mark shadow stacks.  This combination is currently checked for in
mk_pte() but not in pfn_pte().  Adding the same check to pfn_pte()
catches vfree() calling set_direct_map_invalid_noflush(), which calls
__change_page_attr(), which loads the old protection bits from the
PTE, clears the specified bits and uses pfn_pte() to construct the
new PTE.
We should, therefore, clear the _PAGE_DIRTY bit whenever we clear
_PAGE_RW. I opted to do it in the callers in case we want to use
__change_page_attr() to create shadow stacks inside the kernel at some
point in the future. Arguably, we might also want to clear _PAGE_ACCESSED
here.
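For context, the PTE construction in __change_page_attr() is roughly
the following (a simplified sketch, not the verbatim code), which is
why adding _PAGE_DIRTY to mask_clr in the callers below is enough to
avoid the stray Write=0,Dirty=1 pattern:

	/*
	 * Simplified: derive the new protection from the old PTE, drop
	 * the bits in cpa->mask_clr, add the bits in cpa->mask_set and
	 * rebuild the PTE with pfn_pte().  If _PAGE_DIRTY is not in
	 * mask_clr, a dirty bit from the old PTE survives into the new
	 * one.
	 */
	pgprot_t new_prot = pte_pgprot(old_pte);

	pgprot_val(new_prot) &= ~pgprot_val(cpa->mask_clr);
	pgprot_val(new_prot) |= pgprot_val(cpa->mask_set);
	new_pte = pfn_pte(pfn, new_prot);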
Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202502241646.719f4651-lkp@intel.com
.pgd = NULL,
.numpages = numpages,
.mask_set = __pgprot(0),
- .mask_clr = __pgprot(_PAGE_PRESENT | _PAGE_RW),
+ .mask_clr = __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY),
.flags = CPA_NO_CHECK_ALIAS };
/*
.pgd = pgd,
.numpages = numpages,
.mask_set = __pgprot(0),
- .mask_clr = __pgprot(~page_flags & (_PAGE_NX|_PAGE_RW)),
+ .mask_clr = __pgprot(~page_flags & (_PAGE_NX|_PAGE_RW|_PAGE_DIRTY)),
.flags = CPA_NO_CHECK_ALIAS,
};
.pgd = pgd,
.numpages = numpages,
.mask_set = __pgprot(0),
- .mask_clr = __pgprot(_PAGE_PRESENT | _PAGE_RW),
+ .mask_clr = __pgprot(_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY),
.flags = CPA_NO_CHECK_ALIAS,
};