x86/bpf: Fix BPF percpu accesses
author     Brian Gerst <brgerst@gmail.com>
           Thu, 27 Feb 2025 19:53:02 +0000 (14:53 -0500)
committer  Ingo Molnar <mingo@kernel.org>
           Thu, 27 Feb 2025 20:10:03 +0000 (21:10 +0100)
Due to this recent commit in the x86 tree:

  9d7de2aa8b41 ("Use relative percpu offsets")

percpu addresses went from positive offsets from GSBASE to negative
kernel virtual addresses.  The BPF verifier has an optimization for
x86-64 that loads the address of cpu_number into a register, but it
emitted only a 32-bit immediate move, which truncates negative
addresses.

Change it to a 64-bit immediate move so that the address is properly
sign-extended.
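
To see why the width of the immediate matters, here is a minimal
userspace sketch of the two move semantics; the address value below is
hypothetical, standing in for a negative kernel virtual address:

  #include <stdio.h>
  #include <stdint.h>

  int main(void)
  {
          /* Hypothetical percpu symbol address: a kernel virtual
           * address in the sign-extendable upper half, i.e. negative
           * when viewed as a signed 64-bit value. */
          uint64_t addr = 0xffffffff8404ac10ULL;
          uint32_t imm  = (uint32_t)addr; /* low 32 bits stored in the insn */

          /* BPF_MOV32_IMM zero-extends: the upper bits are lost. */
          uint64_t mov32 = (uint64_t)imm;

          /* BPF_MOV64_IMM sign-extends: the address is recovered. */
          uint64_t mov64 = (uint64_t)(int64_t)(int32_t)imm;

          printf("mov32: 0x%016llx (truncated)\n",
                 (unsigned long long)mov32);
          printf("mov64: 0x%016llx (sign-extended)\n",
                 (unsigned long long)mov64);
          return 0;
  }

The first print shows 0x000000008404ac10, the second
0xffffffff8404ac10; only the latter is a canonical kernel address,
which is why the one-opcode change below is sufficient.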

Fixes: 9d7de2aa8b41 ("x86/percpu/64: Use relative percpu offsets")
Signed-off-by: Brian Gerst <brgerst@gmail.com>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Cc: Alexei Starovoitov <ast@kernel.org>
Cc: Daniel Borkmann <daniel@iogearbox.net>
Cc: Uros Bizjak <ubizjak@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Link: https://lore.kernel.org/r/20250227195302.1667654-1-brgerst@gmail.com
kernel/bpf/verifier.c

diff --git a/kernel/bpf/verifier.c b/kernel/bpf/verifier.c
index 9971c03adfd5d69645e057dc9f8f55ecdd45f908..f74263b206e434354a0ad032ad1edac9e8c67f3b 100644
--- a/kernel/bpf/verifier.c
+++ b/kernel/bpf/verifier.c
@@ -21692,7 +21692,7 @@ patch_map_ops_generic:
                         * way, it's fine to back out this inlining logic
                         */
 #ifdef CONFIG_SMP
-                       insn_buf[0] = BPF_MOV32_IMM(BPF_REG_0, (u32)(unsigned long)&pcpu_hot.cpu_number);
+                       insn_buf[0] = BPF_MOV64_IMM(BPF_REG_0, (u32)(unsigned long)&pcpu_hot.cpu_number);
                        insn_buf[1] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
                        insn_buf[2] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0);
                        cnt = 3;
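
For context, the patched three-instruction sequence resolves the
current CPU number roughly as follows (annotations are editorial, not
part of the kernel source):

  insn_buf[0] = BPF_MOV64_IMM(BPF_REG_0, (u32)(unsigned long)&pcpu_hot.cpu_number);
          /* r0 = sign-extended address of pcpu_hot.cpu_number */
  insn_buf[1] = BPF_MOV64_PERCPU_REG(BPF_REG_0, BPF_REG_0);
          /* r0 += this CPU's percpu offset (on x86-64 the JIT adds
           * this_cpu_off via the GS segment) */
  insn_buf[2] = BPF_LDX_MEM(BPF_W, BPF_REG_0, BPF_REG_0, 0);
          /* r0 = *(u32 *)r0, i.e. the current CPU number */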