Switch from atomic_long_add_return() to its relaxed version.  We do not
need a full memory barrier or any memory ordering when increasing the
"vmap_lazy_nr" variable; the update only needs to be atomic, which is
exactly what atomic_long_add_return_relaxed() guarantees.
AARCH64:
<snip>
Default:
    40ec:       d34cfe94        lsr     x20, x20, #12
    40f0:       14000044        b       4200 <free_vmap_area_noflush+0x19c>
    40f4:       94000000        bl      0 <__sanitizer_cov_trace_pc>
    40f8:       90000000        adrp    x0, 0 <__traceiter_alloc_vmap_area>
    40fc:       91000000        add     x0, x0, #0x0
    4100:       f8f40016        ldaddal x20, x22, [x0]
    4104:       8b160296        add     x22, x20, x22
Relaxed:
    40ec:       d34cfe94        lsr     x20, x20, #12
    40f0:       14000044        b       4200 <free_vmap_area_noflush+0x19c>
    40f4:       94000000        bl      0 <__sanitizer_cov_trace_pc>
    40f8:       90000000        adrp    x0, 0 <__traceiter_alloc_vmap_area>
    40fc:       91000000        add     x0, x0, #0x0
    4100:       f8340016        ldadd   x20, x22, [x0]
    4104:       8b160296        add     x22, x20, x22
<snip>
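For illustration only, a minimal sketch of the pattern this patch relies
on (not code from the patch itself; the helper name and the standalone
counter are hypothetical, only the atomic_long_*() API is the real
kernel interface):

<snip>
#include <linux/atomic.h>

static atomic_long_t lazy_nr = ATOMIC_LONG_INIT(0);

static unsigned long lazy_pages_add(unsigned long nr_pages)
{
	/*
	 * Only the atomicity of the read-modify-write is required,
	 * no ordering against surrounding memory accesses, so the
	 * _relaxed form is sufficient.  On arm64 with LSE atomics
	 * this is what turns "ldaddal" into plain "ldadd" above.
	 */
	return atomic_long_add_return_relaxed(nr_pages, &lazy_nr);
}
<snip>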
Link: https://lkml.kernel.org/r/20250415112646.113091-1-urezki@gmail.com
Signed-off-by: Uladzislau Rezki (Sony) <urezki@gmail.com>
Reviewed-by: Baoquan He <bhe@redhat.com>
Cc: Christoph Hellwig <hch@infradead.org>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
        if (WARN_ON_ONCE(!list_empty(&va->list)))
                return;
 
-       nr_lazy = atomic_long_add_return(va_size(va) >> PAGE_SHIFT,
+       nr_lazy = atomic_long_add_return_relaxed(va_size(va) >> PAGE_SHIFT,
                                         &vmap_lazy_nr);
 
        /*