GUP now supports reliable R/O long-term pinning in COW mappings, such
that we break COW early. MAP_SHARED VMAs only use the shared zeropage so
far in one corner case (DAXFS file with holes), which can be ignored
because GUP does not support long-term pinning in fsdax (see
check_vma_flags()).
Consequently, FOLL_FORCE | FOLL_WRITE | FOLL_LONGTERM is no longer required
for reliable R/O long-term pinning: FOLL_LONGTERM is sufficient. So stop
using FOLL_FORCE, which is really only for ptrace access.
Link: https://lkml.kernel.org/r/20221116102659.70287-13-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Jason Gunthorpe <jgg@nvidia.com>
Cc: Bernard Metzler <bmt@zurich.ibm.com>
Cc: Leon Romanovsky <leon@kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
        struct mm_struct *mm_s;
        u64 first_page_va;
        unsigned long mlock_limit;
-       unsigned int foll_flags = FOLL_WRITE;
+       unsigned int foll_flags = FOLL_LONGTERM;
        int num_pages, num_chunks, i, rv = 0;
 
        if (!can_do_mlock())
                return ERR_PTR(-EPERM);
 
        mmgrab(mm_s);
 
-       if (!writable)
-               foll_flags |= FOLL_FORCE;
+       if (writable)
+               foll_flags |= FOLL_WRITE;
 
        mmap_read_lock(mm_s);
 
                while (nents) {
                        struct page **plist = &umem->page_chunk[i].plist[got];
 
-                       rv = pin_user_pages(first_page_va, nents,
-                                           foll_flags | FOLL_LONGTERM,
+                       rv = pin_user_pages(first_page_va, nents, foll_flags,
                                            plist, NULL);
                        if (rv < 0)
                                goto out_sem_up;