s390/bitops: Disable arch_test_bit() optimization for PROFILE_ALL_BRANCHES
author    Heiko Carstens <hca@linux.ibm.com>
          Mon, 10 Feb 2025 08:56:53 +0000 (09:56 +0100)
committer Vasily Gorbik <gor@linux.ibm.com>
          Tue, 11 Feb 2025 18:35:08 +0000 (19:35 +0100)
With PROFILE_ALL_BRANCHES enabled, gcc sometimes fails to handle
__builtin_constant_p() correctly:

   In function 'arch_test_bit',
       inlined from 'node_state' at include/linux/nodemask.h:423:9,
       inlined from 'warn_if_node_offline' at include/linux/gfp.h:252:2,
       inlined from '__alloc_pages_node_noprof' at include/linux/gfp.h:267:2,
       inlined from 'alloc_pages_node_noprof' at include/linux/gfp.h:296:9,
       inlined from 'vm_area_alloc_pages.constprop' at mm/vmalloc.c:3591:11:
>> arch/s390/include/asm/bitops.h:60:17: warning: 'asm' operand 2 probably does not match constraints
      60 |                 asm volatile(
         |                 ^~~
>> arch/s390/include/asm/bitops.h:60:17: error: impossible constraint in 'asm'

Therefore disable the optimization for this case. This is similar to
commit 63678eecec57 ("s390/preempt: disable __preempt_count_add()
optimization for PROFILE_ALL_BRANCHES").

Fixes: b2bc1b1a77c0 ("s390/bitops: Provide optimized arch_test_bit()")
Reported-by: kernel test robot <lkp@intel.com>
Closes: https://lore.kernel.org/oe-kbuild-all/202502091912.xL2xTCGw-lkp@intel.com/
Acked-by: Vasily Gorbik <gor@linux.ibm.com>
Signed-off-by: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Vasily Gorbik <gor@linux.ibm.com>
arch/s390/include/asm/bitops.h

index d5125296ade25b24f66610f0cca293e3e056539a..a5ca0a9476916c9f5327972e55ebb1faff52a056 100644 (file)
@@ -53,7 +53,11 @@ static __always_inline bool arch_test_bit(unsigned long nr, const volatile unsig
        unsigned long mask;
        int cc;
 
-       if (__builtin_constant_p(nr)) {
+       /*
+        * With CONFIG_PROFILE_ALL_BRANCHES enabled gcc fails to
+        * handle __builtin_constant_p() in some cases.
+        */
+       if (!IS_ENABLED(CONFIG_PROFILE_ALL_BRANCHES) && __builtin_constant_p(nr)) {
                addr = (const volatile unsigned char *)ptr;
                addr += (nr ^ (BITS_PER_LONG - BITS_PER_BYTE)) / BITS_PER_BYTE;
                mask = 1UL << (nr & (BITS_PER_BYTE - 1));