www.infradead.org Git - users/hch/misc.git/log

2 months ago  Merge branches 'fixes', 'misc', 'mmu', 'pir', 'selftests', 'svm' and 'vmx'
Sean Christopherson [Thu, 8 May 2025 22:59:54 +0000 (22:59 +0000)]
Merge branches 'fixes', 'misc', 'mmu', 'pir', 'selftests', 'svm' and 'vmx'

* fixes:
  KVM: SVM: Set/clear SRSO's BP_SPEC_REDUCE on 0 <=> 1 VM count transitions
  KVM: x86/mmu: Prevent installing hugepages when mem attributes are changing
  KVM: SVM: Update dump_ghcb() to use the GHCB snapshot fields
  KVM: x86/mmu: Check and free obsolete roots in kvm_mmu_reload()
  KVM: x86: Check that the high 32bits are clear in kvm_arch_vcpu_ioctl_run()
  KVM: SVM: Forcibly leave SMM mode on SHUTDOWN interception

* misc:
  KVM: Remove obsolete comment about locking for kvm_io_bus_read/write
  KVM: x86: Unify cross-vCPU IBPB
  KVM: SVM: Clear current_vmcb during vCPU free for all *possible* CPUs
  KVM: x86: Add module param to control and enumerate device posted IRQs
  KVM: VMX: Don't send UNBLOCK when starting device assignment without APICv
  KVM: x86: Rescan I/O APIC routes after EOI interception for old routing
  KVM: x86: Add a helper to deduplicate I/O APIC EOI interception logic
  KVM: x86: Isolate edge vs. level check in userspace I/O APIC route scanning
  KVM: x86: Advertise support for AMD's PREFETCHI
  KVM: x86: Sort CPUID_8000_0021_EAX leaf bits properly
  KVM: x86: clean up a return
  KVM: x86: Advertise support for WRMSRNS
  x86/msr: Rename the WRMSRNS opcode macro to ASM_WRMSRNS (for KVM)
  KVM: x86: Generalize IBRS virtualization on emulated VM-exit
  KVM: x86: Propagate AMD's IbrsSameMode to the guest
  x86/cpufeatures: Define X86_FEATURE_AMD_IBRS_SAME_MODE

* mmu:
  KVM: x86/mmu: Warn if PFN changes on shadow-present SPTE in shadow MMU
  KVM: x86/tdp_mmu: WARN if PFN changes for spurious faults
  KVM: x86/tdp_mmu: Merge prefetch and access checks for spurious faults
  KVM: x86/mmu: Further check old SPTE is leaf for spurious prefetch fault

* pir:
  x86/irq: KVM: Add helper for harvesting PIR to deduplicate KVM and posted MSIs
  KVM: VMX: Use arch_xchg() when processing PIR to avoid instrumentation
  KVM: VMX: Isolate pure loads from atomic XCHG when processing PIR
  KVM: VMX: Process PIR using 64-bit accesses on 64-bit kernels
  x86/irq: KVM: Track PIR bitmap as an "unsigned long" array
  KVM: VMX: Ensure vIRR isn't reloaded at odd times when sync'ing PIR
  x86/irq: Track if IRQ was found in PIR during initial loop (to load PIR vals)
  x86/irq: Ensure initial PIR loads are performed exactly once

* selftests:
  KVM: selftests: Add a test for x86's fastops emulation
  KVM: selftests: Add a basic SEV-SNP smoke test
  KVM: selftests: Decouple SEV policy from VM type
  KVM: selftests: Force GUEST_MEMFD flag for SNP VM type
  KVM: selftests: Add library support for interacting with SNP
  KVM: selftests: Introduce SEV VM type check
  KVM: selftests: Replace assert() with TEST_ASSERT_EQ()
  KVM: selftests: Add SMT control state helper
  KVM: selftests: Add vmgexit helper
  KVM: selftests: SEV-SNP test for KVM_SEV_INIT2

* svm:
  x86/sev: Remove unnecessary GFP_KERNEL_ACCOUNT for temporary variables
  KVM: SVM: Treat DEBUGCTL[5:2] as reserved
  KVM: SVM: avoid frequent indirect calls
  KVM: SEV: Configure "ALLOWED_SEV_FEATURES" VMCB Field
  x86/cpufeatures: Add "Allowed SEV Features" Feature
  KVM: SVM: Add a mutex to dump_vmcb() to prevent concurrent output
  KVM: SVM: Include the vCPU ID when dumping a VMCB
  KVM: SVM: Add the type of VM for which the VMCB/VMSA is being dumped
  KVM: SVM: Dump guest register state in dump_vmcb()
  KVM: SVM: Decrypt SEV VMSA in dump_vmcb() if debugging is enabled
  KVM: SVM: Fix SNP AP destroy race with VMRUN

* vmx:
  KVM: VMX: Clean up and macrofy x86_ops
  KVM: VMX: Define a VMX glue macro for kvm_complete_insn_gp()
  KVM: VMX: Move vt_apicv_pre_state_restore() to posted_intr.c and tweak name
  KVM: x86: Revert kvm_x86_ops.mem_enc_ioctl() back to an OPTIONAL hook
  KVM: VMX: Flush shadow VMCS on emergency reboot
  KVM: VMX: Use LEAVE in vmx_do_interrupt_irqoff()
  KVM: nVMX: Check MSR load/store list counts during VM-Enter consistency checks

2 months ago  KVM: SVM: Set/clear SRSO's BP_SPEC_REDUCE on 0 <=> 1 VM count transitions
Sean Christopherson [Mon, 5 May 2025 18:03:00 +0000 (11:03 -0700)]
KVM: SVM: Set/clear SRSO's BP_SPEC_REDUCE on 0 <=> 1 VM count transitions

Set the magic BP_SPEC_REDUCE bit to mitigate SRSO when running VMs if and
only if KVM has at least one active VM.  Leaving the bit set at all times
unfortunately degrades performance by a wee bit more than expected.

Use a dedicated spinlock and counter instead of hooking virtualization
enablement, as changing the behavior of kvm.enable_virt_at_load based on
SRSO_BP_SPEC_REDUCE is painful, and has its own drawbacks, e.g. could
result in performance issues for flows that are sensitive to VM creation
latency.

Defer setting BP_SPEC_REDUCE until VMRUN is imminent to avoid impacting
performance on CPUs that aren't running VMs, e.g. if a setup is using
housekeeping CPUs.  Setting BP_SPEC_REDUCE in task context, i.e. without
blasting IPIs to all CPUs, also helps avoid serializing 1<=>N transitions
without incurring a gross amount of complexity (see the Link for details
on how ugly coordinating via IPIs gets).
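
As a rough user-space sketch of the 0 <=> 1 counting scheme (the
function names are illustrative, not KVM's, and the deferral of the
MSR write until VMRUN is imminent is omitted):

  #include <pthread.h>
  #include <stdio.h>

  static pthread_mutex_t vm_count_lock = PTHREAD_MUTEX_INITIALIZER;
  static unsigned int nr_vms;

  static void set_bp_spec_reduce(void)   { puts("set BP_SPEC_REDUCE"); }
  static void clear_bp_spec_reduce(void) { puts("clear BP_SPEC_REDUCE"); }

  void on_vm_create(void)
  {
          pthread_mutex_lock(&vm_count_lock);
          if (nr_vms++ == 0)                  /* 0 => 1 transition */
                  set_bp_spec_reduce();
          pthread_mutex_unlock(&vm_count_lock);
  }

  void on_vm_destroy(void)
  {
          pthread_mutex_lock(&vm_count_lock);
          if (--nr_vms == 0)                  /* 1 => 0 transition */
                  clear_bp_spec_reduce();
          pthread_mutex_unlock(&vm_count_lock);
  }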

Link: https://lore.kernel.org/all/aBOnzNCngyS_pQIW@google.com
Fixes: 8442df2b49ed ("x86/bugs: KVM: Add support for SRSO_MSR_FIX")
Reported-by: Michael Larabel <Michael@michaellarabel.com>
Closes: https://www.phoronix.com/review/linux-615-amd-regression
Cc: Borislav Petkov <bp@alien8.de>
Tested-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20250505180300.973137-1-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

2 months ago  KVM: selftests: Add a test for x86's fastops emulation
Sean Christopherson [Tue, 6 May 2025 01:12:50 +0000 (18:12 -0700)]
KVM: selftests: Add a test for x86's fastops emulation

Add a test to verify KVM's fastops emulation via forced emulation.  KVM's
so called "fastop" infrastructure executes the to-be-emulated instruction
directly on hardware instead of manually emulating the instruction in
software, using various shenanigans to glue together the emulator context
and CPU state, e.g. to get RFLAGS fed into the instruction and back out
for the emulator.

Add testcases for all instructions that are low hanging fruit.  While the
primary goal of the selftest is to validate the glue code, a secondary
goal is to ensure "emulation" matches hardware exactly, including for
arithmetic flags that are architecturally undefined.  While arithmetic
flags may be *architecturally* undefined, their behavior is deterministic
for a given CPU (likely a given uarch, and possibly even an entire family
or class of CPUs).  I.e. KVM has effectively been emulating underlying
hardware behavior for years.
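
A minimal user-space illustration of the idea (GNU C, x86-64 only;
fastop_add() is a made-up name, and pushing/popping RFLAGS from inline
asm is a simplification of KVM's dedicated fastop asm):

  #include <stdint.h>
  #include <stdio.h>

  static uint64_t fastop_add(uint64_t *rflags, uint64_t dst, uint64_t src)
  {
          uint64_t out;

          asm volatile("push %2; popf\n\t"    /* feed RFLAGS in */
                       "add %3, %0\n\t"       /* run the insn natively */
                       "pushf; pop %1"        /* pull RFLAGS back out */
                       : "+r"(dst), "=r"(out)
                       : "r"(*rflags), "r"(src)
                       : "cc", "memory");
          *rflags = out;
          return dst;
  }

  int main(void)
  {
          uint64_t rflags = 0x2;      /* RFLAGS bit 1 is always set */
          uint64_t res = fastop_add(&rflags, ~0ULL, 1);

          /* wraps to 0 and sets CF, exactly as bare metal would */
          printf("res=%#llx CF=%llu\n", (unsigned long long)res,
                 (unsigned long long)(rflags & 1));
          return 0;
  }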

Link: https://lore.kernel.org/r/20250506011250.1089254-1-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

2 months ago  KVM: Remove obsolete comment about locking for kvm_io_bus_read/write
Li RongQing [Tue, 6 May 2025 01:22:51 +0000 (09:22 +0800)]
KVM: Remove obsolete comment about locking for kvm_io_bus_read/write

Nobody actually calls these functions with slots_lock held, and the
srcu_dereference() in kvm_io_bus_read/write() precisely communicates
both what is being protected and what provides the protection, so the
comments are no longer needed.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Li RongQing <lirongqing@baidu.com>
Link: https://lore.kernel.org/r/20250506012251.2613-1-lirongqing@baidu.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

2 months ago  KVM: x86/mmu: Prevent installing hugepages when mem attributes are changing
Sean Christopherson [Wed, 30 Apr 2025 22:09:54 +0000 (15:09 -0700)]
KVM: x86/mmu: Prevent installing hugepages when mem attributes are changing

When changing memory attributes on a subset of a potential hugepage, add
the hugepage to the invalidation range tracking to prevent installing a
hugepage until the attributes are fully updated.  Like the actual hugepage
tracking updates in kvm_arch_post_set_memory_attributes(), process only
the head and tail pages, as any potential hugepages that are entirely
covered by the range will already be tracked.

Note, only hugepage chunks whose current attributes are NOT mixed need to
be added to the invalidation set, as mixed attributes already prevent
installing a hugepage, and it's perfectly safe to install a smaller
mapping for a gfn whose attributes aren't changing.
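
A small sketch of the head/tail selection (GFN arithmetic only; the
function name is illustrative, and the skip for chunks whose attributes
are already mixed is omitted):

  #include <stdint.h>
  #include <stdio.h>

  #define HPAGE_GFNS 512ULL   /* GFNs per 2MiB hugepage, with 4KiB pages */

  static void report_partial_hugepages(uint64_t start, uint64_t end)
  {
          uint64_t head = start / HPAGE_GFNS;
          uint64_t tail = (end - 1) / HPAGE_GFNS;
          int track_head = start % HPAGE_GFNS != 0;
          int track_tail = end % HPAGE_GFNS != 0;

          /* interior hugepages are fully covered by [start, end) itself */
          if (track_head)
                  printf("track hugepage %#llx (partial head)\n",
                         (unsigned long long)head);
          if (track_tail && (tail != head || !track_head))
                  printf("track hugepage %#llx (partial tail)\n",
                         (unsigned long long)tail);
  }

  int main(void)
  {
          report_partial_hugepages(0x201, 0x5ff);  /* both ends partial */
          return 0;
  }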

Fixes: 8dd2eee9d526 ("KVM: x86/mmu: Handle page fault for private memory")
Cc: stable@vger.kernel.org
Reported-by: Michael Roth <michael.roth@amd.com>
Tested-by: Michael Roth <michael.roth@amd.com>
Link: https://lore.kernel.org/r/20250430220954.522672-1-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

2 months ago  KVM: SVM: Update dump_ghcb() to use the GHCB snapshot fields
Tom Lendacky [Mon, 28 Apr 2025 18:55:31 +0000 (13:55 -0500)]
KVM: SVM: Update dump_ghcb() to use the GHCB snapshot fields

Commit 4e15a0ddc3ff ("KVM: SEV: snapshot the GHCB before accessing it")
updated the SEV code to take a snapshot of the GHCB before using it. But
the dump_ghcb() function wasn't updated to use the snapshot locations.
This results in incorrect output from dump_ghcb() for the "is_valid" and
"valid_bitmap" fields.

Update dump_ghcb() to use the proper locations.

Fixes: 4e15a0ddc3ff ("KVM: SEV: snapshot the GHCB before accessing it")
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Liam Merwick <liam.merwick@oracle.com>
Link: https://lore.kernel.org/r/8f03878443681496008b1b37b7c4bf77a342b459.1745866531.git.thomas.lendacky@amd.com
[sean: add comment and snapshot qualifier]
Signed-off-by: Sean Christopherson <seanjc@google.com>

2 months ago  KVM: VMX: Clean up and macrofy x86_ops
Vishal Verma [Tue, 18 Mar 2025 06:35:09 +0000 (00:35 -0600)]
KVM: VMX: Clean up and macrofy x86_ops

Eliminate a lot of stub definitions by using macros to define the TDX vs
non-TDX versions of various x86_ops. Moving the x86_ops wrappers under
CONFIG_KVM_INTEL_TDX also allows nearly all of vmx/main.c to go under a
single #ifdef, eliminating trampolines in the generated code, and almost
all of the stubs.

For example, with CONFIG_KVM_INTEL_TDX=n, before this cleanup,
vt_refresh_apicv_exec_ctrl() would produce:

0000000000036490 <vt_refresh_apicv_exec_ctrl>:
   36490:       f3 0f 1e fa             endbr64
   36494:       e8 00 00 00 00          call   36499 <vt_refresh_apicv_exec_ctrl+0x9>
                        36495: R_X86_64_PLT32   __fentry__-0x4
   36499:       e9 00 00 00 00          jmp    3649e <vt_refresh_apicv_exec_ctrl+0xe>
                        3649a: R_X86_64_PLT32   vmx_refresh_apicv_exec_ctrl-0x4
   3649e:       66 90                   xchg   %ax,%ax

After this patch, this is completely eliminated.

Based on a patch by Sean Christopherson <seanjc@google.com>

Link: https://lore.kernel.org/kvm/Z6v9yjWLNTU6X90d@google.com/
Cc: Sean Christopherson <seanjc@google.com>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
Link: https://lore.kernel.org/r/20250318-vverma7-cleanup_x86_ops-v2-4-701e82d6b779@intel.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

2 months ago  KVM: VMX: Define a VMX glue macro for kvm_complete_insn_gp()
Vishal Verma [Tue, 18 Mar 2025 06:35:08 +0000 (00:35 -0600)]
KVM: VMX: Define a VMX glue macro for kvm_complete_insn_gp()

Define kvm_complete_insn_gp() as vmx_complete_emulated_msr() and use the
glue wrapper in vt_complete_emulated_msr() so that VT's
.complete_emulated_msr() implementation follows the soon-to-be-standard
pattern of:

    vt_abc:
        if (is_td())
            return tdx_abc();
        return vmx_abc();

This will allow generating such wrappers via a macro, which in turn will
make it trivially easy to skip the wrappers entirely when KVM_INTEL_TDX=n.
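
A hypothetical sketch of such a macro (the real KVM macro and names
differ; is_td() dispatches on the VM type):

  #ifdef CONFIG_KVM_INTEL_TDX
  /* TDX built in: generate a wrapper that routes TD vs. VMX at runtime */
  #define VT_BUILD_VCPU_OP(name)                          \
  static void vt_##name(struct kvm_vcpu *vcpu)            \
  {                                                       \
          if (is_td(vcpu->kvm))                           \
                  tdx_##name(vcpu);                       \
          else                                            \
                  vmx_##name(vcpu);                       \
  }
  VT_BUILD_VCPU_OP(refresh_apicv_exec_ctrl)
  #else
  /* TDX compiled out: alias straight to the VMX function, no trampoline */
  #define vt_refresh_apicv_exec_ctrl vmx_refresh_apicv_exec_ctrl
  #endif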

Suggested-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/kvm/Z6v9yjWLNTU6X90d@google.com/
Cc: Sean Christopherson <seanjc@google.com>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
Link: https://lore.kernel.org/r/20250318-vverma7-cleanup_x86_ops-v2-3-701e82d6b779@intel.com
[sean: massage shortlog+changelog]
Signed-off-by: Sean Christopherson <seanjc@google.com>

2 months ago  KVM: VMX: Move vt_apicv_pre_state_restore() to posted_intr.c and tweak name
Vishal Verma [Tue, 18 Mar 2025 06:35:07 +0000 (00:35 -0600)]
KVM: VMX: Move vt_apicv_pre_state_restore() to posted_intr.c and tweak name

In preparation for a cleanup of the kvm_x86_ops struct for TDX, all vt_*
functions are expected to act as glue functions that route to either tdx_*
or vmx_* based on the VM type. Specifically, the pattern is:

vt_abc:
    if (is_td())
        return tdx_abc();
    return vmx_abc();

But vt_apicv_pre_state_restore() does not follow this pattern. To
facilitate that cleanup, rename and move vt_apicv_pre_state_restore() into
posted_intr.c.

Opportunistically turn vcpu_to_pi_desc() back into a static function, as
the only reason it was exposed outside of posted_intr.c was for
vt_apicv_pre_state_restore().

No functional change intended.

Suggested-by: Sean Christopherson <seanjc@google.com>
Link: https://lore.kernel.org/kvm/Z6v9yjWLNTU6X90d@google.com/
Cc: Sean Christopherson <seanjc@google.com>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Reviewed-by: Binbin Wu <binbin.wu@linux.intel.com>
Signed-off-by: Vishal Verma <vishal.l.verma@intel.com>
Link: https://lore.kernel.org/r/20250318-vverma7-cleanup_x86_ops-v2-2-701e82d6b779@intel.com
[sean: apply Chao's suggestions, massage shortlog]
Signed-off-by: Sean Christopherson <seanjc@google.com>

2 months ago  KVM: x86: Revert kvm_x86_ops.mem_enc_ioctl() back to an OPTIONAL hook
Sean Christopherson [Fri, 2 May 2025 20:34:21 +0000 (13:34 -0700)]
KVM: x86: Revert kvm_x86_ops.mem_enc_ioctl() back to an OPTIONAL hook

Restore KVM's handling of a NULL kvm_x86_ops.mem_enc_ioctl, as the hook is
NULL on SVM when CONFIG_KVM_AMD_SEV=n, and TDX will soon follow suit.

  ------------[ cut here ]------------
  WARNING: CPU: 0 PID: 1 at arch/x86/include/asm/kvm-x86-ops.h:130 kvm_x86_vendor_init+0x178b/0x18e0
  Modules linked in:
  CPU: 0 UID: 0 PID: 1 Comm: swapper/0 Not tainted 6.15.0-rc2-dc1aead1a985-sink-vm #2 NONE
  Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
  RIP: 0010:kvm_x86_vendor_init+0x178b/0x18e0
  Call Trace:
   <TASK>
   svm_init+0x2e/0x60
   do_one_initcall+0x56/0x290
   kernel_init_freeable+0x192/0x1e0
   kernel_init+0x16/0x130
   ret_from_fork+0x30/0x50
   ret_from_fork_asm+0x1a/0x30
   </TASK>
  ---[ end trace 0000000000000000 ]---

Opportunistically drop the superfluous curly braces.

Link: https://lore.kernel.org/all/20250318-vverma7-cleanup_x86_ops-v2-4-701e82d6b779@intel.com
Fixes: b2aaf38ced69 ("KVM: TDX: Add place holder for TDX VM specific mem_enc_op ioctl")
Link: https://lore.kernel.org/r/20250502203421.865686-1-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

2 months ago  KVM: selftests: Add a basic SEV-SNP smoke test
Pratik R. Sampat [Wed, 5 Mar 2025 23:00:00 +0000 (17:00 -0600)]
KVM: selftests: Add a basic SEV-SNP smoke test

Extend sev_smoke_test to also run a minimal SEV-SNP smoke test that
initializes and sets up private memory regions required to run a simple
SEV-SNP guest.

Similar to its SEV-ES smoke test counterpart, this also does not
support GHCB and ucall yet and uses the GHCB MSR protocol to trigger an
exit of the type KVM_EXIT_SYSTEM_EVENT.

Signed-off-by: Pratik R. Sampat <prsampat@amd.com>
Link: https://lore.kernel.org/r/20250305230000.231025-11-prsampat@amd.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

2 months ago  KVM: selftests: Decouple SEV policy from VM type
Pratik R. Sampat [Wed, 5 Mar 2025 22:59:59 +0000 (16:59 -0600)]
KVM: selftests: Decouple SEV policy from VM type

In preparation for SNP, cleanup the smoke test to decouple deriving type
from policy.  This will allow reusing the existing interfaces for SNP.

No functional change intended.

Signed-off-by: Pratik R. Sampat <prsampat@amd.com>
Link: https://lore.kernel.org/r/20250305230000.231025-10-prsampat@amd.com
[sean: massage shortlog+changelog]
Signed-off-by: Sean Christopherson <seanjc@google.com>

2 months ago  KVM: selftests: Force GUEST_MEMFD flag for SNP VM type
Pratik R. Sampat [Wed, 5 Mar 2025 22:59:58 +0000 (16:59 -0600)]
KVM: selftests: Force GUEST_MEMFD flag for SNP VM type

Force the SEV-SNP VM type to set the KVM_MEM_GUEST_MEMFD flag for the
creation of private memslots.

Signed-off-by: Pratik R. Sampat <prsampat@amd.com>
Link: https://lore.kernel.org/r/20250305230000.231025-9-prsampat@amd.com
[sean: add a comment, don't break non-x86]
Signed-off-by: Sean Christopherson <seanjc@google.com>

2 months ago  KVM: selftests: Add library support for interacting with SNP
Pratik R. Sampat [Wed, 5 Mar 2025 22:59:57 +0000 (16:59 -0600)]
KVM: selftests: Add library support for interacting with SNP

Extend the SEV library to include support for SNP ioctl() wrappers,
which aid in launching and interacting with a SEV-SNP guest.

Signed-off-by: Pratik R. Sampat <prsampat@amd.com>
Link: https://lore.kernel.org/r/20250305230000.231025-8-prsampat@amd.com
[sean: use BIT()]
Signed-off-by: Sean Christopherson <seanjc@google.com>

2 months ago  KVM: selftests: Introduce SEV VM type check
Pratik R. Sampat [Wed, 5 Mar 2025 22:59:56 +0000 (16:59 -0600)]
KVM: selftests: Introduce SEV VM type check

In preparation for SNP, declutter the vm type check by introducing a
SEV-SNP VM type check as well as a transitive set of helper functions.

The SNP VM type is a subset of SEV-ES. Similarly, the SEV-ES and SNP
types are subsets of the SEV VM type.
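
A sketch of what such transitive helpers can look like (illustrative
type names; the selftest constants differ):

  #include <stdbool.h>

  enum vm_type { VM_DEFAULT, VM_SEV, VM_SEV_ES, VM_SEV_SNP };

  static inline bool is_sev_snp_vm(enum vm_type t)
  {
          return t == VM_SEV_SNP;
  }

  static inline bool is_sev_es_vm(enum vm_type t)
  {
          return t == VM_SEV_ES || is_sev_snp_vm(t);
  }

  static inline bool is_sev_vm(enum vm_type t)
  {
          return t == VM_SEV || is_sev_es_vm(t);
  }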

Signed-off-by: Pratik R. Sampat <prsampat@amd.com>
Link: https://lore.kernel.org/r/20250305230000.231025-7-prsampat@amd.com
[sean: make the helpers static inlines]
Signed-off-by: Sean Christopherson <seanjc@google.com>

2 months ago  KVM: selftests: Replace assert() with TEST_ASSERT_EQ()
Pratik R. Sampat [Wed, 5 Mar 2025 22:59:55 +0000 (16:59 -0600)]
KVM: selftests: Replace assert() with TEST_ASSERT_EQ()

For SEV tests, assert() failures on VM type or fd do not provide
sufficient error reporting. Replace assert() with TEST_ASSERT_EQ() to
obtain more detailed information on the assertion condition failure,
including the call stack.

Signed-off-by: Pratik R. Sampat <prsampat@amd.com>
Link: https://lore.kernel.org/r/20250305230000.231025-6-prsampat@amd.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

2 months ago  KVM: selftests: Add SMT control state helper
Pratik R. Sampat [Wed, 5 Mar 2025 22:59:54 +0000 (16:59 -0600)]
KVM: selftests: Add SMT control state helper

Move the SMT control check out of the hyperv_cpuid selftest so that it
is generally accessible to all KVM selftests. Split the functionality
into a helper that populates a buffer with the SMT control value, which
other helpers can use to ascertain whether SMT is available and active.
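
A user-space sketch of the split (the sysfs file and its values are
real; the helper names are illustrative):

  #include <stdbool.h>
  #include <stdio.h>
  #include <string.h>

  /* Fill 'buf' with the SMT control value ("on", "off", "forceoff",
   * "notsupported", ...); returns false if the file is unavailable. */
  static bool get_smt_control(char *buf, size_t len)
  {
          FILE *f = fopen("/sys/devices/system/cpu/smt/control", "r");

          if (!f || !fgets(buf, (int)len, f)) {
                  if (f)
                          fclose(f);
                  return false;
          }
          fclose(f);
          buf[strcspn(buf, "\n")] = '\0';
          return true;
  }

  static bool is_smt_on(void)
  {
          char buf[32];

          return get_smt_control(buf, sizeof(buf)) && !strcmp(buf, "on");
  }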

Signed-off-by: Pratik R. Sampat <prsampat@amd.com>
Link: https://lore.kernel.org/r/20250305230000.231025-5-prsampat@amd.com
[sean: prepend is_ to the helpers]
Signed-off-by: Sean Christopherson <seanjc@google.com>

2 months ago  KVM: selftests: Add vmgexit helper
Pratik R. Sampat [Wed, 5 Mar 2025 22:59:53 +0000 (16:59 -0600)]
KVM: selftests: Add vmgexit helper

Abstract the "rep vmmcall" code into a vmgexit() helper for the SEV
library.

No functional change intended.
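
The helper itself is tiny; VMGEXIT is encoded as "rep; vmmcall", so a
sketch of the guest-side helper is simply:

  static inline void vmgexit(void)
  {
          __asm__ __volatile__("rep; vmmcall");
  }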

Signed-off-by: Pratik R. Sampat <prsampat@amd.com>
Reviewed-by: Pankaj Gupta <pankaj.gupta@amd.com>
Link: https://lore.kernel.org/r/20250305230000.231025-4-prsampat@amd.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

2 months ago  KVM: selftests: SEV-SNP test for KVM_SEV_INIT2
Pratik R. Sampat [Wed, 5 Mar 2025 22:59:52 +0000 (16:59 -0600)]
KVM: selftests: SEV-SNP test for KVM_SEV_INIT2

Add the X86_FEATURE_SEV_SNP CPU feature to the architectural definition
for the SEV-SNP VM type to exercise the KVM_SEV_INIT2 call. Ensure that
the SNP test is skipped in scenarios where CPUID supports it but KVM
does not, preventing reporting of failure in such cases.

Reviewed-by: Nikunj A Dadhania <nikunj@amd.com>
Signed-off-by: Pratik R. Sampat <prsampat@amd.com>
Link: https://lore.kernel.org/r/20250305230000.231025-3-prsampat@amd.com
[sean: use the same pattern as SEV and SEV-ES]
Signed-off-by: Sean Christopherson <seanjc@google.com>

2 months ago  KVM: x86: Unify cross-vCPU IBPB
Sean Christopherson [Tue, 29 Apr 2025 15:37:19 +0000 (08:37 -0700)]
KVM: x86: Unify cross-vCPU IBPB

Both SVM and VMX have similar implementation for executing an IBPB
between running different vCPUs on the same CPU to create separate
prediction domains for different vCPUs.

For VMX, when the currently loaded VMCS is changed in
vmx_vcpu_load_vmcs(), an IBPB is executed if there is no 'buddy', which
is the case on vCPU load. The intention is to execute an IBPB when
switching vCPUs, but not when switching the VMCS within the same vCPU.
Executing an IBPB on nested transitions within the same vCPU is handled
separately and conditionally in nested_vmx_vmexit().

For SVM, the current VMCB is tracked on vCPU load and an IBPB is
executed when it is changed. The intention is also to execute an IBPB
when switching vCPUs, although it is possible that in some cases an IBPB
is executed when switching VMCBs for the same vCPU. Executing an IBPB on
nested transitions should be handled separately, and is proposed at [1].

Unify the logic by tracking the last loaded vCPU and executing the IBPB
on vCPU change in kvm_arch_vcpu_load() instead. When a vCPU is
destroyed, make sure all references to it are removed from any CPU. This
is similar to how SVM clears the current_vmcb tracking on vCPU
destruction. Remove the current VMCB tracking in SVM as it is no longer
required, as well as the 'buddy' parameter to vmx_vcpu_load_vmcs().
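
A kernel-style sketch of the unified scheme (the per-CPU variable name
is approximate):

  static DEFINE_PER_CPU(struct kvm_vcpu *, last_vcpu_run);

  void kvm_arch_vcpu_load(struct kvm_vcpu *vcpu, int cpu)
  {
          /* new prediction domain only when a different vCPU runs here */
          if (__this_cpu_read(last_vcpu_run) != vcpu) {
                  indirect_branch_prediction_barrier();
                  __this_cpu_write(last_vcpu_run, vcpu);
          }
          /* ... rest of vCPU load ... */
  }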

[1] https://lore.kernel.org/lkml/20250221163352.3818347-4-yosry.ahmed@linux.dev

Link: https://lore.kernel.org/all/20250320013759.3965869-1-yosry.ahmed@linux.dev
Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
[sean: tweak comment to stay at/under 80 columns]
Signed-off-by: Sean Christopherson <seanjc@google.com>

2 months ago  KVM: SVM: Clear current_vmcb during vCPU free for all *possible* CPUs
Yosry Ahmed [Tue, 29 Apr 2025 15:32:15 +0000 (08:32 -0700)]
KVM: SVM: Clear current_vmcb during vCPU free for all *possible* CPUs

When freeing a vCPU and thus its VMCB, clear current_vmcb for all possible
CPUs, not just online CPUs, as it's theoretically possible a CPU could go
offline and come back online in conjunction with KVM reusing the page for
a new VMCB.

Link: https://lore.kernel.org/all/20250320013759.3965869-1-yosry.ahmed@linux.dev
Fixes: fd65d3142f73 ("kvm: svm: Ensure an IBPB on all affected CPUs when freeing a vmcb")
Cc: stable@vger.kernel.org
Cc: Jim Mattson <jmattson@google.com>
Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
[sean: split to separate patch, write changelog]
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  x86/sev: Remove unnecessary GFP_KERNEL_ACCOUNT for temporary variables
Peng Hao [Mon, 28 Apr 2025 06:30:13 +0000 (14:30 +0800)]
x86/sev: Remove unnecessary GFP_KERNEL_ACCOUNT for temporary variables

Some variables allocated in sev_send_update_data are released when
the function exits, so there is no need to set GFP_KERNEL_ACCOUNT.

Signed-off-by: Peng Hao <flyingpeng@tencent.com>
Link: https://lore.kernel.org/r/20250428063013.62311-1-flyingpeng@tencent.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  KVM: x86/mmu: Check and free obsolete roots in kvm_mmu_reload()
Yan Zhao [Tue, 18 Mar 2025 01:33:33 +0000 (09:33 +0800)]
KVM: x86/mmu: Check and free obsolete roots in kvm_mmu_reload()

Check request KVM_REQ_MMU_FREE_OBSOLETE_ROOTS to free obsolete roots in
kvm_mmu_reload() to prevent kvm_mmu_reload() from seeing a stale obsolete
root.

Since kvm_mmu_reload() can be called outside the
vcpu_enter_guest() path (e.g., kvm_arch_vcpu_pre_fault_memory()), it may be
invoked after a root has been marked obsolete and before vcpu_enter_guest()
is invoked to process KVM_REQ_MMU_FREE_OBSOLETE_ROOTS and set root.hpa to
invalid. This causes kvm_mmu_reload() to fail to load a new root, which
can lead to kvm_arch_vcpu_pre_fault_memory() being stuck in the while
loop in kvm_tdp_map_page() since RET_PF_RETRY is always returned due to
is_page_fault_stale().

Keep the existing check of KVM_REQ_MMU_FREE_OBSOLETE_ROOTS in
vcpu_enter_guest() since the cost of kvm_check_request() is negligible,
especially a check that's guarded by kvm_request_pending().

Export symbol of kvm_mmu_free_obsolete_roots() as kvm_mmu_reload() is
inline and may be called outside of kvm.ko.

Fixes: 6e01b7601dfe ("KVM: x86: Implement kvm_arch_vcpu_pre_fault_memory()")
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
Link: https://lore.kernel.org/r/20250318013333.5817-1-yan.y.zhao@intel.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  KVM: x86/mmu: Warn if PFN changes on shadow-present SPTE in shadow MMU
Yan Zhao [Tue, 18 Mar 2025 01:33:10 +0000 (09:33 +0800)]
KVM: x86/mmu: Warn if PFN changes on shadow-present SPTE in shadow MMU

Warn if PFN changes on shadow-present SPTE in mmu_set_spte().

KVM should _never_ change the PFN of a shadow-present SPTE. There is a
WARN_ON_ONCE() in mmu_spte_update() to detect PFN changes on
shadow-present SPTEs, but that WARN_ON_ONCE() is not hittable from
mmu_set_spte(), which invokes drop_spte() before mmu_spte_update() and
thereby clears the SPTE to a !shadow-present state. So, add a
WARN_ON_ONCE() in mmu_set_spte(), before it invokes drop_spte(), to
warn on PFN changes of a shadow-present SPTE.

For the spurious prefetch fault, only return RET_PF_SPURIOUS directly when
PFN is not changed. When PFN changes, fall through to follow the sequence
of drop_spte(), warn of PFN change, make_spte(), flush tlb, rmap_add().

Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
Link: https://lore.kernel.org/r/20250318013310.5781-1-yan.y.zhao@intel.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  KVM: x86/tdp_mmu: WARN if PFN changes for spurious faults
Yan Zhao [Tue, 18 Mar 2025 01:32:38 +0000 (09:32 +0800)]
KVM: x86/tdp_mmu: WARN if PFN changes for spurious faults

Add a WARN() to assert that KVM does _not_ change the PFN of a
shadow-present SPTE during spurious fault handling.

KVM should _never_ change the PFN of a shadow-present SPTE and TDP MMU
already BUG()s on this. However, spurious faults just return early before
the existing BUG() could be hit.

Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
Link: https://lore.kernel.org/r/20250318013238.5732-1-yan.y.zhao@intel.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  KVM: x86/tdp_mmu: Merge prefetch and access checks for spurious faults
Yan Zhao [Tue, 18 Mar 2025 01:32:10 +0000 (09:32 +0800)]
KVM: x86/tdp_mmu: Merge prefetch and access checks for spurious faults

Combine prefetch and is_access_allowed() checks into a unified path to
detect spurious faults, since both cases now share identical logic.

No functional changes.

Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
Link: https://lore.kernel.org/r/20250318013210.5701-1-yan.y.zhao@intel.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  KVM: x86/mmu: Further check old SPTE is leaf for spurious prefetch fault
Yan Zhao [Tue, 18 Mar 2025 01:31:11 +0000 (09:31 +0800)]
KVM: x86/mmu: Further check old SPTE is leaf for spurious prefetch fault

Instead of simply treating a prefetch fault as spurious when there's a
shadow-present old SPTE, further check if the old SPTE is leaf to determine
if a prefetch fault is spurious.

It's not reasonable to treat a prefetch fault as spurious when there's a
shadow-present non-leaf SPTE without a corresponding shadow-present leaf
SPTE. e.g., in the following sequence, a prefetch fault should not be
considered spurious:
1. add a memslot with size 4K
2. prefault GPA A in the memslot
3. delete the memslot (zap all disabled)
4. re-add the memslot with size 2M
5. prefault GPA A again.
In step 5, the prefetch fault attempts to install a 2M huge entry.
Since step 3 zaps the leaf SPTE for GPA A while keeping the non-leaf SPTE,
the leaf entry will remain empty after step 5 if the prefetch fault is
regarded as spurious due to a shadow-present non-leaf SPTE.

Signed-off-by: Yan Zhao <yan.y.zhao@intel.com>
Link: https://lore.kernel.org/r/20250318013111.5648-1-yan.y.zhao@intel.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  KVM: VMX: Flush shadow VMCS on emergency reboot
Chao Gao [Mon, 24 Mar 2025 14:08:48 +0000 (22:08 +0800)]
KVM: VMX: Flush shadow VMCS on emergency reboot

Ensure the shadow VMCS cache is evicted during an emergency reboot to
prevent potential memory corruption if the cache is evicted after reboot.

This issue was identified through code inspection, as __loaded_vmcs_clear()
flushes both the normal VMCS and the shadow VMCS.

Avoid checking the "launched" state during an emergency reboot, unlike the
behavior in __loaded_vmcs_clear(). This is important because reboot NMIs
can interfere with operations like copy_shadow_to_vmcs12(), where shadow
VMCSes are loaded directly using VMPTRLD. In such cases, if NMIs occur
right after the VMCS load, the shadow VMCSes will be active but the
"launched" state may not be set.

Fixes: 16f5b9034b69 ("KVM: nVMX: Copy processor-specific shadow-vmcs to VMCS12")
Cc: stable@vger.kernel.org
Signed-off-by: Chao Gao <chao.gao@intel.com>
Reviewed-by: Kai Huang <kai.huang@intel.com>
Link: https://lore.kernel.org/r/20250324140849.2099723-1-chao.gao@intel.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  KVM: SVM: Treat DEBUGCTL[5:2] as reserved
Sean Christopherson [Thu, 27 Feb 2025 22:24:11 +0000 (14:24 -0800)]
KVM: SVM: Treat DEBUGCTL[5:2] as reserved

Stop ignoring DEBUGCTL[5:2] on AMD CPUs and instead treat them as reserved.
KVM has never properly virtualized AMD's legacy PBi bits, but did allow
the guest (and host userspace) to set the bits.  To avoid breaking guests
when running on CPUs with BusLockTrap, which redefined bit 2 to BLCKDB and
made bits 5:3 reserved, a previous KVM change ignored bits 5:3, e.g. so
that legacy guest software wouldn't inadvertently enable BusLockTrap or
hit a VMRUN failure due to setting reserved bits.

To allow for virtualizing BusLockTrap and whatever future features may use
bits 5:3, treat bits 5:2 as reserved (and hope that doing so doesn't break
any existing guests).
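
The enforcement amounts to a reserved-bit check on writes to DEBUGCTL,
along the lines of (mask per the text above; names illustrative):

  #include <stdbool.h>
  #include <stdint.h>

  #define DEBUGCTL_RESERVED_5_2 0x3cULL   /* bits 5:2 */

  static bool debugctl_write_is_valid(uint64_t data)
  {
          return !(data & DEBUGCTL_RESERVED_5_2);
  }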

Reviewed-and-tested-by: Ravi Bangoria <ravi.bangoria@amd.com>
Link: https://lore.kernel.org/r/20250227222411.3490595-7-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  KVM: SVM: avoid frequent indirect calls
Peng Hao [Thu, 6 Mar 2025 07:54:25 +0000 (15:54 +0800)]
KVM: SVM: avoid frequent indirect calls

When retpoline is enabled, indirect function calls introduce additional
performance overhead. Avoid frequent indirect calls to VMGEXIT when SEV
is enabled.

Signed-off-by: Peng Hao <flyingpeng@tencent.com>
Link: https://lore.kernel.org/r/20250306075425.66693-1-flyingpeng@tencent.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  KVM: SEV: Configure "ALLOWED_SEV_FEATURES" VMCB Field
Kim Phillips [Mon, 10 Mar 2025 20:16:03 +0000 (15:16 -0500)]
KVM: SEV: Configure "ALLOWED_SEV_FEATURES" VMCB Field

AMD EPYC 5th generation processors have introduced a feature that allows
the hypervisor to control the SEV_FEATURES that are set for, or by, a
guest [1].  ALLOWED_SEV_FEATURES can be used by the hypervisor to enforce
that SEV-ES and SEV-SNP guests cannot enable features that the
hypervisor does not want to be enabled.

Always enable ALLOWED_SEV_FEATURES.  A VMRUN will fail if any
non-reserved bits are 1 in SEV_FEATURES but are 0 in
ALLOWED_SEV_FEATURES.

Some SEV_FEATURES - currently PmcVirtualization and SecureAvic
(see Appendix B, Table B-4) - require an opt-in via ALLOWED_SEV_FEATURES,
i.e. are off-by-default, whereas all other features are effectively
on-by-default, but still honor ALLOWED_SEV_FEATURES.
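
The architectural rule reduces to a simple mask check at VMRUN, roughly
(reserved-bit handling omitted; names illustrative):

  #include <stdbool.h>
  #include <stdint.h>

  static bool sev_features_allowed(uint64_t sev_features,
                                   uint64_t allowed_sev_features)
  {
          /* fail if any feature is set that isn't explicitly allowed */
          return !(sev_features & ~allowed_sev_features);
  }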

[1] Section 15.36.20 "Allowed SEV Features", AMD64 Architecture
    Programmer's Manual, Pub. 24593 Rev. 3.42 - March 2024:
    https://bugzilla.kernel.org/attachment.cgi?id=306250

Co-developed-by: Kishon Vijay Abraham I <kvijayab@amd.com>
Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com>
Reviewed-by: Pankaj Gupta <pankaj.gupta@amd.com>
Signed-off-by: Kim Phillips <kim.phillips@amd.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Link: https://lore.kernel.org/r/20250310201603.1217954-3-kim.phillips@amd.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  x86/cpufeatures: Add "Allowed SEV Features" Feature
Kishon Vijay Abraham I [Mon, 10 Mar 2025 20:16:02 +0000 (15:16 -0500)]
x86/cpufeatures: Add "Allowed SEV Features" Feature

Add CPU feature detection for "Allowed SEV Features" to allow the
Hypervisor to enforce that SEV-ES and SEV-SNP guest VMs cannot
enable features (via SEV_FEATURES) that the Hypervisor does not
support or wish to be enabled.

Signed-off-by: Kishon Vijay Abraham I <kvijayab@amd.com>
Reviewed-by: Tom Lendacky <thomas.lendacky@amd.com>
Reviewed-by: Pankaj Gupta <pankaj.gupta@amd.com>
Acked-by: Borislav Petkov (AMD) <bp@alien8.de>
Signed-off-by: Kim Phillips <kim.phillips@amd.com>
Link: https://lore.kernel.org/r/20250310201603.1217954-2-kim.phillips@amd.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  KVM: SVM: Add a mutex to dump_vmcb() to prevent concurrent output
Tom Lendacky [Thu, 20 Mar 2025 13:26:53 +0000 (08:26 -0500)]
KVM: SVM: Add a mutex to dump_vmcb() to prevent concurrent output

If multiple VMRUN instructions fail, resulting in calls to dump_vmcb(),
the output can become interleaved and it is impossible to identify which
line of output belongs to which VMCB. Add a mutex to dump_vmcb() so that
the output is serialized.
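
A kernel-style sketch of the serialization (the mutex name is
hypothetical; the dump body is elided):

  static DEFINE_MUTEX(vmcb_dump_mutex);

  static void dump_vmcb(struct kvm_vcpu *vcpu)
  {
          mutex_lock(&vmcb_dump_mutex);
          /* ... many pr_err() lines describing the VMCB ... */
          mutex_unlock(&vmcb_dump_mutex);
  }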

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Borislav Petkov (AMD) <bp@alien8.de>
Tested-by: Kim Phillips <kim.phillips@amd.com>
Link: https://lore.kernel.org/r/a880678afd9488e1dd6017445802712f7c02cc6d.1742477213.git.thomas.lendacky@amd.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  KVM: SVM: Include the vCPU ID when dumping a VMCB
Tom Lendacky [Thu, 20 Mar 2025 13:26:52 +0000 (08:26 -0500)]
KVM: SVM: Include the vCPU ID when dumping a VMCB

Provide the vCPU ID of the VMCB in dump_vmcb().

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Borislav Petkov (AMD) <bp@alien8.de>
Tested-by: Kim Phillips <kim.phillips@amd.com>
Link: https://lore.kernel.org/r/ee0af5a6c1a49aebb4a8291071c3f68cacf107b2.1742477213.git.thomas.lendacky@amd.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  KVM: SVM: Add the type of VM for which the VMCB/VMSA is being dumped
Tom Lendacky [Thu, 20 Mar 2025 13:26:51 +0000 (08:26 -0500)]
KVM: SVM: Add the type of VM for which the VMCB/VMSA is being dumped

Add the type of VM (SVM, SEV, SEV-ES, or SEV-SNP) being dumped to the
dump_vmcb() function.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Borislav Petkov (AMD) <bp@alien8.de>
Tested-by: Kim Phillips <kim.phillips@amd.com>
Link: https://lore.kernel.org/r/7a183a8beedf4ee26c42001160e073a884fe466e.1742477213.git.thomas.lendacky@amd.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  KVM: SVM: Dump guest register state in dump_vmcb()
Tom Lendacky [Thu, 20 Mar 2025 13:26:50 +0000 (08:26 -0500)]
KVM: SVM: Dump guest register state in dump_vmcb()

Guest register state can be useful when debugging; include it as part
of dump_vmcb().

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Borislav Petkov (AMD) <bp@alien8.de>
Tested-by: Kim Phillips <kim.phillips@amd.com>
Link: https://lore.kernel.org/r/a4131a10c082a93610cac12b35dca90292e50f50.1742477213.git.thomas.lendacky@amd.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  KVM: SVM: Decrypt SEV VMSA in dump_vmcb() if debugging is enabled
Tom Lendacky [Thu, 20 Mar 2025 13:26:49 +0000 (08:26 -0500)]
KVM: SVM: Decrypt SEV VMSA in dump_vmcb() if debugging is enabled

An SEV-ES/SEV-SNP VM save area (VMSA) can be decrypted if the guest
policy allows debugging. Update the dump_vmcb() routine to output
some of the SEV VMSA contents if possible. This can be useful for
debug purposes.

Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Acked-by: Borislav Petkov (AMD) <bp@alien8.de>
Tested-by: Kim Phillips <kim.phillips@amd.com>
Link: https://lore.kernel.org/r/ea3b852c295b6f4b200925ed6b6e2c90d9475e71.1742477213.git.thomas.lendacky@amd.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  KVM: VMX: Use LEAVE in vmx_do_interrupt_irqoff()
Uros Bizjak [Mon, 14 Apr 2025 08:10:51 +0000 (10:10 +0200)]
KVM: VMX: Use LEAVE in vmx_do_interrupt_irqoff()

Micro-optimize vmx_do_interrupt_irqoff() by substituting the
MOV %RBP,%RSP; POP %RBP instruction sequence with the equivalent
LEAVE instruction. GCC does this by default for generic tuning
and for all modern processors:

DEF_TUNE (X86_TUNE_USE_LEAVE, "use_leave",
  m_386 | m_CORE_ALL | m_K6_GEODE | m_AMD_MULTIPLE | m_ZHAOXIN
  | m_TREMONT | m_CORE_HYBRID | m_CORE_ATOM | m_GENERIC)

The new code also saves a couple of bytes, from:

  27: 48 89 ec              mov    %rbp,%rsp
  2a: 5d                    pop    %rbp

to:

  27: c9                    leave

No functional change intended.

Signed-off-by: Uros Bizjak <ubizjak@gmail.com>
Cc: Sean Christopherson <seanjc@google.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Link: https://lore.kernel.org/r/20250414081131.97374-2-ubizjak@gmail.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  KVM: nVMX: Check MSR load/store list counts during VM-Enter consistency checks
Sean Christopherson [Sat, 15 Mar 2025 02:44:02 +0000 (19:44 -0700)]
KVM: nVMX: Check MSR load/store list counts during VM-Enter consistency checks

Explicitly verify the MSR load/store list counts are below the advertised
limit as part of the initial consistency checks on the lists, so that code
that consumes the count doesn't need to worry about extreme edge cases.
Enforcing the limit during the initial checks fixes a flaw on 32-bit KVM
where a sufficiently high @count could lead to overflow:

arch/x86/kvm/vmx/nested.c:834 nested_vmx_check_msr_switch()
warn: potential user controlled sizeof overflow 'addr + count * 16' '0-u64max + 16-68719476720'

arch/x86/kvm/vmx/nested.c
    827 static int nested_vmx_check_msr_switch(struct kvm_vcpu *vcpu,
    828                                        u32 count, u64 addr)
    829 {
    830         if (count == 0)
    831                 return 0;
    832
    833         if (!kvm_vcpu_is_legal_aligned_gpa(vcpu, addr, 16) ||
--> 834             !kvm_vcpu_is_legal_gpa(vcpu, (addr + count * sizeof(struct vmx_msr_entry) - 1)))
                                                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

While the SDM doesn't explicitly state an illegal count results in VM-Fail,
the SDM states that exceeding the limit may result in undefined behavior.
I.e. the SDM gives hardware, and thus KVM, carte blanche to do literally
anything in response to a count that exceeds the "recommended" limit.

  If the limit is exceeded, undefined processor behavior may result
  (including a machine check during the VMX transition).

KVM already enforces the limit when processing the MSRs, i.e. already
signals a late VM-Exit Consistency Check for VM-Enter, and generates a
VMX Abort for VM-Exit.  I.e. explicitly checking the limits simply means
KVM will signal VM-Fail instead of VM-Exit or VMX Abort.
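
A sketch of the check (entry size is architectural; the advertised
limit shown is illustrative):

  #include <stdbool.h>
  #include <stdint.h>

  #define VMX_MSR_ENTRY_SIZE 16u    /* sizeof(struct vmx_msr_entry) */
  #define MSR_LIST_MAX_COUNT 512u   /* illustrative advertised limit */

  static bool msr_list_valid(uint64_t addr, uint32_t count, uint64_t max_gpa)
  {
          if (count == 0)
                  return true;
          if (count > MSR_LIST_MAX_COUNT)
                  return false;     /* VM-Fail instead of risking UB */
          if ((addr & 15) || addr > max_gpa)
                  return false;     /* lists must be 16-byte aligned */
          /* count is now bounded, so the size math cannot overflow */
          return addr + (uint64_t)count * VMX_MSR_ENTRY_SIZE - 1 <= max_gpa;
  }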

Reported-by: Dan Carpenter <dan.carpenter@linaro.org>
Closes: https://lore.kernel.org/all/44961459-2759-4164-b604-f6bd43da8ce9@stanley.mountain
Link: https://lore.kernel.org/r/20250315024402.2363098-1-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  KVM: SVM: Fix SNP AP destroy race with VMRUN
Tom Lendacky [Thu, 27 Mar 2025 17:39:56 +0000 (12:39 -0500)]
KVM: SVM: Fix SNP AP destroy race with VMRUN

An AP destroy request for a target vCPU is typically followed by an
RMPADJUST to remove the VMSA attribute from the page currently being
used as the VMSA for the target vCPU. This can result in a vCPU that
is about to VMRUN to exit with #VMEXIT_INVALID.

This usually does not happen as APs are typically sitting in HLT when
being destroyed and therefore the vCPU thread is not running at the time.
However, if HLT is allowed inside the VM, then the vCPU could be about to
VMRUN when the VMSA attribute is removed from the VMSA page, resulting in
a #VMEXIT_INVALID when the vCPU actually issues the VMRUN and causing the
guest to crash. An RMPADJUST against an in-use (already running) VMSA
results in a #NPF for the vCPU issuing the RMPADJUST, so the VMSA
attribute cannot be changed until the VMRUN for target vCPU exits. The
Qemu command line option '-overcommit cpu-pm=on' is an example of allowing
HLT inside the guest.

Update the KVM_REQ_UPDATE_PROTECTED_GUEST_STATE event to include the
KVM_REQUEST_WAIT flag. The kvm_vcpu_kick() function will not wait for
requests to be honored, so create kvm_make_request_and_kick() that will
add a new event request and honor the KVM_REQUEST_WAIT flag. This will
ensure that the target vCPU sees the AP destroy request before returning
to the initiating vCPU should the target vCPU be in guest mode.

Fixes: e366f92ea99e ("KVM: SEV: Support SEV-SNP AP Creation NAE event")
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Link: https://lore.kernel.org/r/fe2c885bf35643dd224e91294edb6777d5df23a4.1743097196.git.thomas.lendacky@amd.com
[sean: add a comment explaining the use of smp_send_reschedule()]
Co-developed-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  x86/irq: KVM: Add helper for harvesting PIR to deduplicate KVM and posted MSIs
Sean Christopherson [Tue, 1 Apr 2025 16:34:47 +0000 (09:34 -0700)]
x86/irq: KVM: Add helper for harvesting PIR to deduplicate KVM and posted MSIs

Now that posted MSI and KVM harvesting of PIR is identical, extract the
code (and posted MSI's wonderful comment) to a common helper.

No functional change intended.

Link: https://lore.kernel.org/r/20250401163447.846608-9-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  KVM: VMX: Use arch_xchg() when processing PIR to avoid instrumentation
Sean Christopherson [Tue, 1 Apr 2025 16:34:46 +0000 (09:34 -0700)]
KVM: VMX: Use arch_xchg() when processing PIR to avoid instrumentation

Use arch_xchg() when moving IRQs from the PIR to the vIRR, purely to avoid
instrumentation so that KVM is compatible with the needs of posted MSI.
This will allow extracting the core PIR logic to common code and sharing
it between KVM and posted MSI handling.

Link: https://lore.kernel.org/r/20250401163447.846608-8-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  KVM: VMX: Isolate pure loads from atomic XCHG when processing PIR
Sean Christopherson [Tue, 1 Apr 2025 16:34:45 +0000 (09:34 -0700)]
KVM: VMX: Isolate pure loads from atomic XCHG when processing PIR

Rework KVM's processing of the PIR to use the same algorithm as posted
MSIs, i.e. to do READ(x4) => XCHG(x4) instead of (READ+XCHG)(x4).  Given
KVM's long-standing, sub-optimal use of 32-bit accesses to the PIR, it's
safe to say far more thought and investigation was put into handling the
PIR for posted MSIs, i.e. there's no reason to assume KVM's existing
logic is meaningful, let alone superior.

Matching the processing done by posted MSIs will also allow deduplicating
the code between KVM and posted MSIs.

See the comment for handle_pending_pir() added by commit 1b03d82ba15e
("x86/irq: Install posted MSI notification handler") for details on
why isolating loads from XCHG is desirable.
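
A user-space analogue of the resulting READ(x4) => XCHG(x4) flow, which
also previews the 64-bit chunking and "found IRQ" tracking of the
neighboring patches (names are illustrative):

  #include <stdatomic.h>
  #include <stdint.h>
  #include <stdio.h>

  #define PIR_WORDS 4   /* 256 vectors / 64 bits per word */

  static uint64_t harvest_pir(_Atomic uint64_t pir[PIR_WORDS],
                              uint64_t pir_copy[PIR_WORDS])
  {
          uint64_t found = 0;

          /* pass 1: cheap loads of every word */
          for (int i = 0; i < PIR_WORDS; i++) {
                  pir_copy[i] = atomic_load_explicit(&pir[i],
                                                     memory_order_relaxed);
                  found |= pir_copy[i];
          }
          if (!found)
                  return 0;

          /* pass 2: expensive atomic xchg, only for non-empty words */
          for (int i = 0; i < PIR_WORDS; i++) {
                  if (!pir_copy[i])
                          continue;
                  pir_copy[i] = atomic_exchange(&pir[i], 0);
          }
          return found;
  }

  int main(void)
  {
          _Atomic uint64_t pir[PIR_WORDS] = {0, 1ULL << 3, 0, 0};
          uint64_t copy[PIR_WORDS];

          printf("pending=%d\n", harvest_pir(pir, copy) != 0);
          return 0;
  }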

Suggested-by: Jim Mattson <jmattson@google.com>
Link: https://lore.kernel.org/r/20250401163447.846608-7-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  KVM: VMX: Process PIR using 64-bit accesses on 64-bit kernels
Sean Christopherson [Tue, 1 Apr 2025 16:34:44 +0000 (09:34 -0700)]
KVM: VMX: Process PIR using 64-bit accesses on 64-bit kernels

Process the PIR at the natural kernel width, i.e. in 64-bit chunks on
64-bit kernels, so that the worst case of having a posted IRQ in each
chunk of the vIRR only requires 4 loads and xchgs from/to the PIR, not 8.

Deliberately use a "continue" to skip empty entries so that the code is a
carbon copy of handle_pending_pir(), in anticipation of deduplicating KVM
and posted MSI logic.

Suggested-by: Jim Mattson <jmattson@google.com>
Link: https://lore.kernel.org/r/20250401163447.846608-6-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  x86/irq: KVM: Track PIR bitmap as an "unsigned long" array
Sean Christopherson [Tue, 1 Apr 2025 16:34:43 +0000 (09:34 -0700)]
x86/irq: KVM: Track PIR bitmap as an "unsigned long" array

Track the PIR bitmap in posted interrupt descriptor structures as an array
of unsigned longs instead of using unionized arrays for KVM (u32s) versus
IRQ management (u64s).  In practice, because the non-KVM usage is (sanely)
restricted to 64-bit kernels, all existing usage of the u64 variant is
already working with unsigned longs.

Using "unsigned long" for the array will allow reworking KVM's processing
of the bitmap to read/write in 64-bit chunks on 64-bit kernels, i.e. will
allow optimizing KVM by reducing the number of atomic accesses to PIR.

Opportunistically replace the open coded literals in the posted MSIs code
with the appropriate macro.  Deliberately don't use ARRAY_SIZE() in the
for-loops, even though it would be cleaner from a certain perspective, in
anticipation of decoupling the processing from the array declaration.

No functional change intended.

Link: https://lore.kernel.org/r/20250401163447.846608-5-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  KVM: VMX: Ensure vIRR isn't reloaded at odd times when sync'ing PIR
Sean Christopherson [Tue, 1 Apr 2025 16:34:42 +0000 (09:34 -0700)]
KVM: VMX: Ensure vIRR isn't reloaded at odd times when sync'ing PIR

Read each vIRR exactly once when shuffling IRQs from the PIR to the vAPIC
to ensure getting the highest priority IRQ from the chunk doesn't reload
from the vIRR.  In practice, a reload is functionally benign as vcpu->mutex
is held and so IRQs can be consumed, i.e. new IRQs can appear, but existing
IRQs can't disappear.

Link: https://lore.kernel.org/r/20250401163447.846608-4-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  x86/irq: Track if IRQ was found in PIR during initial loop (to load PIR vals)
Sean Christopherson [Tue, 1 Apr 2025 16:34:41 +0000 (09:34 -0700)]
x86/irq: Track if IRQ was found in PIR during initial loop (to load PIR vals)

Track whether or not at least one IRQ was found in PIR during the initial
loop to load PIR chunks from memory.  Doing so generates slightly better
code (arguably) for processing the for-loop of XCHGs, especially for the
case where there are no pending IRQs.

Note, while PIR can be modified between the initial load and the XCHG, it
can only _gain_ new IRQs, i.e. there is no danger of a false positive due
to the final version of pir_copy[] being empty.

Opportunistically convert the boolean to an "unsigned long" and compute
the effective boolean result via bitwise-OR.  Some compilers, e.g.
clang-14, need the extra "hint" to elide conditional branches.

Opportunistically rename the variable in anticipation of moving the PIR
accesses to a common helper that can be shared by posted MSIs and KVM.

Old:
   <+74>: test   %rdx,%rdx
   <+77>: je     0xffffffff812bbeb0 <handle_pending_pir+144>
   <pir[0]>
   <+88>: mov    $0x1,%dl
   <+90>: test   %rsi,%rsi
   <+93>: je     0xffffffff812bbe8c <handle_pending_pir+108>
   <pir[1]>
   <+106>: mov    $0x1,%dl
   <+108>: test   %rcx,%rcx
   <+111>: je     0xffffffff812bbe9e <handle_pending_pir+126>
   <pir[2]>
   <+124>: mov    $0x1,%dl
   <+126>: test   %rax,%rax
   <+129>: je     0xffffffff812bbeb9 <handle_pending_pir+153>
   <pir[3]>
   <+142>: jmp    0xffffffff812bbec1 <handle_pending_pir+161>
   <+144>: xor    %edx,%edx
   <+146>: test   %rsi,%rsi
   <+149>: jne    0xffffffff812bbe7f <handle_pending_pir+95>
   <+151>: jmp    0xffffffff812bbe8c <handle_pending_pir+108>
   <+153>: test   %dl,%dl
   <+155>: je     0xffffffff812bbf8e <handle_pending_pir+366>

New:
   <+74>: mov    %rax,%r8
   <+77>: or     %rcx,%r8
   <+80>: or     %rdx,%r8
   <+83>: or     %rsi,%r8
   <+86>: setne  %bl
   <+89>: je     0xffffffff812bbf88 <handle_pending_pir+360>
   <+95>: test   %rsi,%rsi
   <+98>: je     0xffffffff812bbe8d <handle_pending_pir+109>
   <pir[0]>
   <+109>: test   %rdx,%rdx
   <+112>: je     0xffffffff812bbe9d <handle_pending_pir+125>
   <pir[1]>
   <+125>: test   %rcx,%rcx
   <+128>: je     0xffffffff812bbead <handle_pending_pir+141>
   <pir[2]>
   <+141>: test   %rax,%rax
   <+144>: je     0xffffffff812bbebd <handle_pending_pir+157>
   <pir[3]>

Link: https://lore.kernel.org/r/20250401163447.846608-3-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  x86/irq: Ensure initial PIR loads are performed exactly once
Sean Christopherson [Tue, 1 Apr 2025 16:34:40 +0000 (09:34 -0700)]
x86/irq: Ensure initial PIR loads are performed exactly once

Ensure the PIR is read exactly once at the start of handle_pending_pir(),
to guarantee that checking for an outstanding posted interrupt in a given
chunk doesn't reload the chunk from the "real" PIR.  Functionally, a reload
is benign, but it would defeat the purpose of pre-loading into a copy.
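
In practice "exactly once" means a READ_ONCE()-style volatile access,
e.g. (GNU C sketch; the kernel helper differs in detail):

  #include <stdint.h>

  #define READ_ONCE_U64(x) (*(const volatile uint64_t *)&(x))

  static void snapshot_pir(const uint64_t pir[4], uint64_t pir_copy[4])
  {
          for (int i = 0; i < 4; i++)
                  pir_copy[i] = READ_ONCE_U64(pir[i]);  /* one load each */
  }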

Fixes: 1b03d82ba15e ("x86/irq: Install posted MSI notification handler")
Reviewed-by: Thomas Gleixner <tglx@linutronix.de>
Link: https://lore.kernel.org/r/20250401163447.846608-2-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  KVM: x86: Add module param to control and enumerate device posted IRQs
Sean Christopherson [Tue, 1 Apr 2025 16:18:04 +0000 (09:18 -0700)]
KVM: x86: Add module param to control and enumerate device posted IRQs

Add a module param to each KVM vendor module to allow disabling device
posted interrupts without having to sacrifice all of APICv/AVIC, and to
also effectively enumerate to userspace whether or not KVM may be
utilizing device posted IRQs.  Disabling device posted interrupts is
very desirable for testing, and can even be desirable for production
environments, e.g. if the host kernel wants to interpose on device
interrupts.

Put the module param in kvm-{amd,intel}.ko instead of kvm.ko to match
the overall APICv/AVIC controls, and to avoid complications with said
controls.  E.g. if the param is in kvm.ko, KVM would need to snapshot the
original user-defined value to play nice with a vendor module being
reloaded with different enable_apicv settings.
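
A kernel-style sketch of such a param (the name and permissions here
are illustrative; 0444 makes the effective setting visible under
/sys/module/kvm_intel/parameters/):

  static bool enable_device_posted_irqs = true;
  module_param(enable_device_posted_irqs, bool, 0444);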

Link: https://lore.kernel.org/r/20250401161804.842968-4-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  KVM: VMX: Don't send UNBLOCK when starting device assignment without APICv
Sean Christopherson [Tue, 1 Apr 2025 16:18:02 +0000 (09:18 -0700)]
KVM: VMX: Don't send UNBLOCK when starting device assignment without APICv

When starting device assignment, i.e. potential IRQ bypass, don't blast
KVM_REQ_UNBLOCK if APICv is disabled/unsupported.  There is no need to
wake vCPUs if they can never use VT-d posted IRQs (sending UNBLOCK guards
against races between vCPUs blocking and devices starting IRQ bypass).

Opportunistically use kvm_arch_has_irq_bypass() for all relevant checks in
the VMX Posted Interrupt code so that all checks in KVM x86 incorporate
the same information (once AMD/AVIC is given similar treatment).

Cc: Yosry Ahmed <yosry.ahmed@linux.dev>
Link: https://lore.kernel.org/r/20250401161804.842968-2-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  KVM: x86: Rescan I/O APIC routes after EOI interception for old routing
weizijie [Tue, 4 Mar 2025 01:33:35 +0000 (17:33 -0800)]
KVM: x86: Rescan I/O APIC routes after EOI interception for old routing

Rescan I/O APIC routes for a vCPU after handling an intercepted I/O APIC
EOI for an IRQ that is not targeting said vCPU, i.e. after handling what's
effectively a stale EOI VM-Exit.  If a level-triggered IRQ is in-flight
when IRQ routing changes, e.g. because the guest changes routing from its
IRQ handler, then KVM intercepts EOIs on both the new and old target vCPUs,
so that the in-flight IRQ can be de-asserted when it's EOI'd.

However, only the EOI for the in-flight IRQ needs to be intercepted, as
IRQs on the same vector with the new routing are coincidental, i.e. occur
only if the guest is reusing the vector for multiple interrupt sources.
If the I/O APIC routes aren't rescanned, KVM will unnecessarily intercept
EOIs for the vector and negatively impact the vCPU's interrupt performance.

Note, both commit db2bdcbbbd32 ("KVM: x86: fix edge EOI and IOAPIC reconfig
race") and commit 0fc5a36dd6b3 ("KVM: x86: ioapic: Fix level-triggered EOI
and IOAPIC reconfigure race") mentioned this issue, but it was considered
a "rare" occurrence thus was not addressed.  However in real environments,
this issue can happen even in a well-behaved guest.

Cc: Kai Huang <kai.huang@intel.com>
Co-developed-by: xuyun <xuyun_xy.xy@linux.alibaba.com>
Signed-off-by: xuyun <xuyun_xy.xy@linux.alibaba.com>
Signed-off-by: weizijie <zijie.wei@linux.alibaba.com>
[sean: massage changelog and comments, use int/-1, reset at scan]
Reviewed-by: Kai Huang <kai.huang@intel.com>
Link: https://lore.kernel.org/r/20250304013335.4155703-4-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  KVM: x86: Add a helper to deduplicate I/O APIC EOI interception logic
Sean Christopherson [Tue, 4 Mar 2025 01:33:34 +0000 (17:33 -0800)]
KVM: x86: Add a helper to deduplicate I/O APIC EOI interception logic

Extract the vCPU specific EOI interception logic for I/O APIC emulation
into a common helper for userspace and in-kernel emulation in anticipation
of optimizing the "pending EOI" case.

No functional change intended.

Reviewed-by: Kai Huang <kai.huang@intel.com>
Link: https://lore.kernel.org/r/20250304013335.4155703-3-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  KVM: x86: Isolate edge vs. level check in userspace I/O APIC route scanning
Sean Christopherson [Tue, 4 Mar 2025 01:33:33 +0000 (17:33 -0800)]
KVM: x86: Isolate edge vs. level check in userspace I/O APIC route scanning

Extract and isolate the trigger mode check in kvm_scan_ioapic_routes() in
anticipation of moving destination matching logic to a common helper (for
userspace vs. in-kernel I/O APIC emulation).

No functional change intended.

Reviewed-by: Kai Huang <kai.huang@intel.com>
Link: https://lore.kernel.org/r/20250304013335.4155703-2-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>

3 months ago  KVM: x86: Advertise support for AMD's PREFETCHI
Babu Moger [Tue, 8 Apr 2025 22:57:09 +0000 (17:57 -0500)]
KVM: x86: Advertise support for AMD's PREFETCHI

The latest AMD platform has introduced a new instruction called PREFETCHI.
This instruction loads a cache line from a specified memory address into
the indicated data or instruction cache level, based on locality reference
hints.

Feature bit definition:
CPUID_Fn80000021_EAX [bit 20] - Indicates support for IC prefetch.

This feature is analogous to Intel's PREFETCHITI (CPUID.(EAX=7,ECX=1):EDX),
though the CPUID bit definitions differ between AMD and Intel.

Advertise support to userspace, as no additional enabling is necessary
(PREFETCHI can't be intercepted as there's no instruction-specific behavior
that needs to be virtualized).

The feature is documented in Processor Programming Reference (PPR)
for AMD Family 1Ah Model 02h, Revision C1 (Link below).

Link: https://bugzilla.kernel.org/show_bug.cgi?id=206537
Signed-off-by: Babu Moger <babu.moger@amd.com>
Acked-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/ee1c08fc400bb574a2b8f2c6a0bd9def10a29d35.1744130533.git.babu.moger@amd.com
[sean: rewrite shortlog to highlight the KVM functionality]
Signed-off-by: Sean Christopherson <seanjc@google.com>
3 months agoKVM: x86: Sort CPUID_8000_0021_EAX leaf bits properly
Borislav Petkov [Mon, 24 Mar 2025 16:06:17 +0000 (17:06 +0100)]
KVM: x86: Sort CPUID_8000_0021_EAX leaf bits properly

WRMSR_XX_BASE_NS is bit 1 so put it there, add some new bits as
comments only.

Signed-off-by: Borislav Petkov (AMD) <bp@alien8.de>
Link: https://lore.kernel.org/r/20250324160617.15379-1-bp@kernel.org
[sean: skip the FSRS/FSRC placeholders to avoid confusion]
Signed-off-by: Sean Christopherson <seanjc@google.com>
3 months agoKVM: x86: clean up a return
Dan Carpenter [Mon, 24 Mar 2025 10:53:39 +0000 (13:53 +0300)]
KVM: x86: clean up a return

Returning a literal X86EMUL_CONTINUE is slightly clearer than returning
rc.

Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
Link: https://lore.kernel.org/r/7604cbbf-15e6-45a8-afec-cf5be46c2924@stanley.mountain
Signed-off-by: Sean Christopherson <seanjc@google.com>
3 months agoKVM: x86: Advertise support for WRMSRNS
Sean Christopherson [Thu, 27 Feb 2025 01:01:11 +0000 (17:01 -0800)]
KVM: x86: Advertise support for WRMSRNS

Advertise support for WRMSRNS (WRMSR non-serializing) to userspace if the
instruction is supported by the underlying CPU.  From a virtualization
perspective, the only difference between WRMSRNS and WRMSR is that VM-Exits
due to WRMSRNS set EXIT_QUALIFICATION to '1'.  WRMSRNS doesn't require a
new enabling control, shares the same basic exit reason, and behaves the
same as WRMSR with respect to MSR interception.

  WRMSR and WRMSRNS use the same basic exit reason (see Appendix C). For
  WRMSR, the exit qualification is 0, while for WRMSRNS it is 1.

Don't do anything different when emulating WRMSRNS vs. WRMSR, as KVM can't
do anything less, i.e. can't make emulation non-serializing.  The
motivation for the guest to use WRMSRNS instead of WRMSR is to avoid
immediately serializing the CPU when the necessary serialization is
guaranteed by some other mechanism, i.e. WRMSRNS being fully serializing
isn't guest-visible, just less performant.
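
A hedged sketch of how an exit handler could tell the two apart (the
constant and helper names here are invented; only the 0-vs-1 exit
qualification comes from the text above):

  /* Hypothetical sketch: WRMSR and WRMSRNS share one exit reason;
   * only EXIT_QUALIFICATION differs (0 = WRMSR, 1 = WRMSRNS). */
  #define WRMSR_EXIT_QUAL_WRMSRNS 1

  static bool exit_was_wrmsrns(unsigned long exit_qualification)
  {
          return exit_qualification == WRMSR_EXIT_QUAL_WRMSRNS;
  }

  /* KVM emulates both identically, since emulation can't be made
   * non-serializing; being "extra serializing" isn't guest-visible. */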

Suggested-by: Xin Li (Intel) <xin@zytor.com>
Link: https://lore.kernel.org/r/20250227010111.3222742-3-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
3 months agox86/msr: Rename the WRMSRNS opcode macro to ASM_WRMSRNS (for KVM)
Sean Christopherson [Thu, 27 Feb 2025 01:01:10 +0000 (17:01 -0800)]
x86/msr: Rename the WRMSRNS opcode macro to ASM_WRMSRNS (for KVM)

Rename the WRMSRNS instruction opcode macro so that it doesn't collide
with X86_FEATURE_WRMSRNS when using token pasting to generate references
to X86_FEATURE_WRMSRNS.  KVM heavily uses token pasting to generate KVM's
set of supported feature bits, and adding WRMSRNS support in KVM will
run afoul of the opcode macro.

  arch/x86/kvm/cpuid.c:719:37: error: pasting "X86_FEATURE_" and "" "" does not
                                      give a valid preprocessing token
  719 |         u32 __leaf = __feature_leaf(X86_FEATURE_##name);                \
      |                                     ^~~~~~~~~~~~
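
A self-contained illustration of the collision (macro names shortened;
the principle is that an argument passed through an intermediate macro
is expanded before the pasting layer sees it):

  #define WRMSRNS "" ""                    /* stand-in for the opcode macro */
  #define PASTE(name)   X86_FEATURE_##name /* layer that pastes             */
  #define FEATURE(name) PASTE(name)        /* extra layer expands the arg   */

  /* FEATURE(WRMSRNS) first expands WRMSRNS to "" "", then attempts to
   * paste X86_FEATURE_ with a string literal -- not a valid token. */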

KVM has worked around one such collision in the past by #undef'ing the
problematic macro in order to avoid blocking a KVM rework, but such games
are generally undesirable, e.g. requires bleeding macro details into KVM,
risks weird behavior if what KVM is #undef'ing changes, etc.

Reviewed-by: Xin Li (Intel) <xin@zytor.com>
Link: https://lore.kernel.org/r/20250227010111.3222742-2-seanjc@google.com
Signed-off-by: Sean Christopherson <seanjc@google.com>
3 months agoKVM: x86: Generalize IBRS virtualization on emulated VM-exit
Yosry Ahmed [Fri, 21 Feb 2025 16:33:52 +0000 (16:33 +0000)]
KVM: x86: Generalize IBRS virtualization on emulated VM-exit

Commit 2e7eab81425a ("KVM: VMX: Execute IBPB on emulated VM-exit when
guest has IBRS") added an IBPB in the emulated VM-exit path on Intel to
properly virtualize IBRS by providing separate predictor modes for L1
and L2.

AMD requires similar handling, except when IbrsSameMode is enumerated by
the host CPU (which is the case on most/all AMD CPUs). With
IbrsSameMode, hardware IBRS is sufficient and no extra handling is
needed from KVM.

Generalize the handling in nested_vmx_vmexit() by moving it into a
generic function, add the AMD handling, and use it in
nested_svm_vmexit() too. The main reason for using a generic function is
to have a single place to park the huge comment about virtualizing IBRS.
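
A hedged sketch of the helper's shape (the helper name comes from the
note below; the two predicates are invented placeholders, and
indirect_branch_prediction_barrier() is the kernel's IBPB helper):

  static void kvm_nested_vmexit_handle_spec_ctrl(struct kvm_vcpu *vcpu)
  {
          /* When the guest has IBRS, L1 and L2 must get separate
           * predictor modes across emulated VM-exit.  On AMD with
           * IbrsSameMode, hardware IBRS already guarantees this. */
          if (!guest_has_spec_ctrl_ibrs(vcpu))        /* placeholder */
                  return;
          if (cpu_has_ibrs_same_mode())               /* placeholder */
                  return;
          indirect_branch_prediction_barrier();       /* IBPB */
  }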

Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
Reviewed-by: Jim Mattson <jmattson@google.com>
Link: https://lore.kernel.org/r/20250221163352.3818347-4-yosry.ahmed@linux.dev
[sean: use kvm_nested_vmexit_handle_spec_ctrl() for the helper]
Signed-off-by: Sean Christopherson <seanjc@google.com>
3 months agoKVM: x86: Propagate AMD's IbrsSameMode to the guest
Yosry Ahmed [Fri, 21 Feb 2025 16:33:51 +0000 (16:33 +0000)]
KVM: x86: Propagate AMD's IbrsSameMode to the guest

If IBRS provides same mode (kernel/user or host/guest) protection on the
host, then by definition it also provides same mode protection in the
guest. In fact, all different modes from the guest's perspective are the
same mode from the host's perspective anyway.

Propagate IbrsSameMode to the guests.

Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
Reviewed-by: Jim Mattson <jmattson@google.com>
Link: https://lore.kernel.org/r/20250221163352.3818347-3-yosry.ahmed@linux.dev
Signed-off-by: Sean Christopherson <seanjc@google.com>
3 months agox86/cpufeatures: Define X86_FEATURE_AMD_IBRS_SAME_MODE
Yosry Ahmed [Fri, 21 Feb 2025 16:33:50 +0000 (16:33 +0000)]
x86/cpufeatures: Define X86_FEATURE_AMD_IBRS_SAME_MODE

Per the APM [1]:

  Some processors, identified by CPUID Fn8000_0008_EBX[IbrsSameMode]
  (bit 19) = 1, provide additional speculation limits. For these
  processors, when IBRS is set, indirect branch predictions are not
  influenced by any prior indirect branches, regardless of mode (CPL
  and guest/host) and regardless of whether the prior indirect branches
  occurred before or after the setting of IBRS. This is referred to as
  Same Mode IBRS.

Define this feature bit, which will be used by KVM to determine if an
IBPB is required on nested VM-exits in SVM.

[1] AMD64 Architecture Programmer's Manual Pub. 40332, Rev 4.08 - April
    2024, Volume 2, 3.2.9 Speculation Control MSRs
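
Hedged sketch of the definition's likely shape in
arch/x86/include/asm/cpufeatures.h (word 13 holds CPUID Fn8000_0008 EBX
in current kernels; treat the exact placement as an assumption):

  #define X86_FEATURE_AMD_IBRS_SAME_MODE (13*32+19) /* Fn8000_0008_EBX[19] */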

Signed-off-by: Yosry Ahmed <yosry.ahmed@linux.dev>
Reviewed-by: Jim Mattson <jmattson@google.com>
Link: https://lore.kernel.org/r/20250221163352.3818347-2-yosry.ahmed@linux.dev
Signed-off-by: Sean Christopherson <seanjc@google.com>
3 months agoKVM: x86: Check that the high 32bits are clear in kvm_arch_vcpu_ioctl_run()
Dan Carpenter [Mon, 24 Mar 2025 10:51:28 +0000 (13:51 +0300)]
KVM: x86: Check that the high 32bits are clear in kvm_arch_vcpu_ioctl_run()

The "kvm_run->kvm_valid_regs" and "kvm_run->kvm_dirty_regs" variables are
u64 type.  We are only using the lowest 3 bits but we want to ensure that
the users are not passing invalid bits so that we can use the remaining
bits in the future.

However "sync_valid_fields" and kvm_sync_valid_fields() are u32 type so
the check only ensures that the lower 32 bits are clear.  Fix this by
changing the types to u64.
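
A standalone demonstration of why the u32 type defeats the check (values
illustrative):

  #include <stdint.h>
  #include <stdio.h>

  int main(void)
  {
          uint64_t regs = 1ull << 40;   /* userspace sets an invalid high bit */
          uint32_t allowed32 = 0x7;     /* buggy: mask held in a u32          */
          uint64_t allowed64 = 0x7;     /* fixed: mask widened to u64         */

          /* ~allowed32 is computed in 32 bits; zero-extension leaves the
           * high half of the complement as 0, so bit 40 slips through.  */
          printf("u32 check catches bit 40: %s\n",
                 (regs & ~allowed32) ? "yes" : "no");   /* prints "no"  */
          printf("u64 check catches bit 40: %s\n",
                 (regs & ~allowed64) ? "yes" : "no");   /* prints "yes" */
          return 0;
  }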

Fixes: 74c1807f6c4f ("KVM: x86: block KVM_CAP_SYNC_REGS if guest state is protected")
Signed-off-by: Dan Carpenter <dan.carpenter@linaro.org>
Link: https://lore.kernel.org/r/ec25aad1-113e-4c6e-8941-43d432251398@stanley.mountain
Signed-off-by: Sean Christopherson <seanjc@google.com>
3 months agoKVM: SVM: Forcibly leave SMM mode on SHUTDOWN interception
Mikhail Lobanov [Mon, 14 Apr 2025 17:12:06 +0000 (20:12 +0300)]
KVM: SVM: Forcibly leave SMM mode on SHUTDOWN interception

Previously, commit ed129ec9057f ("KVM: x86: forcibly leave nested mode
on vCPU reset") addressed an issue where a triple fault occurring in
nested mode could lead to use-after-free scenarios. However, the commit
did not handle the analogous situation for System Management Mode (SMM).

This omission results in triggering a WARN when KVM forces a vCPU INIT
after SHUTDOWN interception while the vCPU is in SMM. This situation was
reproduced using Syzkaller by:

  1) Creating a KVM VM and vCPU
  2) Sending a KVM_SMI ioctl to explicitly enter SMM
  3) Executing invalid instructions causing consecutive exceptions and
     eventually a triple fault

The issue manifests as follows:

  WARNING: CPU: 0 PID: 25506 at arch/x86/kvm/x86.c:12112
  kvm_vcpu_reset+0x1d2/0x1530 arch/x86/kvm/x86.c:12112
  Modules linked in:
  CPU: 0 PID: 25506 Comm: syz-executor.0 Not tainted
  6.1.130-syzkaller-00157-g164fe5dde9b6 #0
  Hardware name: QEMU Standard PC (i440FX + PIIX, 1996),
  BIOS 1.12.0-1 04/01/2014
  RIP: 0010:kvm_vcpu_reset+0x1d2/0x1530 arch/x86/kvm/x86.c:12112
  Call Trace:
   <TASK>
   shutdown_interception+0x66/0xb0 arch/x86/kvm/svm/svm.c:2136
   svm_invoke_exit_handler+0x110/0x530 arch/x86/kvm/svm/svm.c:3395
   svm_handle_exit+0x424/0x920 arch/x86/kvm/svm/svm.c:3457
   vcpu_enter_guest arch/x86/kvm/x86.c:10959 [inline]
   vcpu_run+0x2c43/0x5a90 arch/x86/kvm/x86.c:11062
   kvm_arch_vcpu_ioctl_run+0x50f/0x1cf0 arch/x86/kvm/x86.c:11283
   kvm_vcpu_ioctl+0x570/0xf00 arch/x86/kvm/../../../virt/kvm/kvm_main.c:4122
   vfs_ioctl fs/ioctl.c:51 [inline]
   __do_sys_ioctl fs/ioctl.c:870 [inline]
   __se_sys_ioctl fs/ioctl.c:856 [inline]
   __x64_sys_ioctl+0x19a/0x210 fs/ioctl.c:856
   do_syscall_x64 arch/x86/entry/common.c:51 [inline]
   do_syscall_64+0x35/0x80 arch/x86/entry/common.c:81
   entry_SYSCALL_64_after_hwframe+0x6e/0xd8

Architecturally, INIT is blocked when the CPU is in SMM, hence KVM's WARN()
in kvm_vcpu_reset() to guard against KVM bugs, e.g. to detect improper
emulation of INIT.  SHUTDOWN on SVM is a weird edge case where KVM needs to
do _something_ sane with the VMCB, since it's technically undefined, and
INIT is the least awful choice given KVM's ABI.

So, double down on stuffing INIT on SHUTDOWN, and force the vCPU out of
SMM to avoid any weirdness (and the WARN).
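
One plausible shape of the fix, as a hedged sketch (is_smm(),
kvm_smm_changed() and kvm_vcpu_reset() are real KVM functions, but the
exact placement and body here are illustrative):

  static int shutdown_interception(struct kvm_vcpu *vcpu)
  {
          if (is_smm(vcpu))
                  kvm_smm_changed(vcpu, false);   /* forcibly leave SMM */
          kvm_vcpu_reset(vcpu, true);             /* stuff INIT */
          vcpu->run->exit_reason = KVM_EXIT_SHUTDOWN;
          return 0;
  }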

Found by Linux Verification Center (linuxtesting.org) with Syzkaller.

Fixes: ed129ec9057f ("KVM: x86: forcibly leave nested mode on vCPU reset")
Cc: stable@vger.kernel.org
Suggested-by: Sean Christopherson <seanjc@google.com>
Signed-off-by: Mikhail Lobanov <m.lobanov@rosa.ru>
Link: https://lore.kernel.org/r/20250414171207.155121-1-m.lobanov@rosa.ru
[sean: massage changelog, make it clear this isn't architectural behavior]
Signed-off-by: Sean Christopherson <seanjc@google.com>
3 months agoMerge branch 'kvm-fixes-6.15-rc4' into HEAD
Paolo Bonzini [Fri, 18 Apr 2025 13:00:37 +0000 (09:00 -0400)]
Merge branch 'kvm-fixes-6.15-rc4' into HEAD

* Single fix for broken usage of 'multi-MIDR' infrastructure in PI
  code, adding an open-coded erratum check for Cavium ThunderX

* Bugfixes from a planned posted interrupt rework

* Do not use kvm_rip_read() unconditionally to cater for guests
  with inaccessible register state.

3 months agoMerge tag 'kvmarm-fixes-6.15-2' of https://git.kernel.org/pub/scm/linux/kernel/git...
Paolo Bonzini [Thu, 24 Apr 2025 17:28:53 +0000 (13:28 -0400)]
Merge tag 'kvmarm-fixes-6.15-2' of https://git.kernel.org/pub/scm/linux/kernel/git/kvmarm/kvmarm into HEAD

KVM/arm64 fixes for 6.15, round #2

 - Single fix for broken usage of 'multi-MIDR' infrastructure in PI
   code, adding an open-coded erratum check for everyone's favorite pile
   of sand: Cavium ThunderX

3 months agoKVM: x86: Do not use kvm_rip_read() unconditionally for KVM_PROFILING
Adrian Hunter [Tue, 15 Apr 2025 10:48:21 +0000 (13:48 +0300)]
KVM: x86: Do not use kvm_rip_read() unconditionally for KVM_PROFILING

Not all VMs allow access to RIP.  Check guest_state_protected before
calling kvm_rip_read().

This avoids, for example, hitting WARN_ON_ONCE in vt_cache_reg() for
TDX VMs.

Fixes: 81bf912b2c15 ("KVM: TDX: Implement TDX vcpu enter/exit path")
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Message-ID: <20250415104821.247234-3-adrian.hunter@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 months agoKVM: x86: Do not use kvm_rip_read() unconditionally in KVM tracepoints
Adrian Hunter [Tue, 15 Apr 2025 10:48:20 +0000 (13:48 +0300)]
KVM: x86: Do not use kvm_rip_read() unconditionally in KVM tracepoints

Not all VMs allow access to RIP.  Check guest_state_protected before
calling kvm_rip_read().

This avoids, for example, hitting WARN_ON_ONCE in vt_cache_reg() for
TDX VMs.

Fixes: 81bf912b2c15 ("KVM: TDX: Implement TDX vcpu enter/exit path")
Signed-off-by: Adrian Hunter <adrian.hunter@intel.com>
Message-ID: <20250415104821.247234-2-adrian.hunter@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 months agoKVM: SVM: WARN if an invalid posted interrupt IRTE entry is added
Sean Christopherson [Fri, 4 Apr 2025 19:38:22 +0000 (12:38 -0700)]
KVM: SVM: WARN if an invalid posted interrupt IRTE entry is added

Now that the AMD IOMMU doesn't signal success incorrectly, WARN if KVM
attempts to track an AMD IRTE entry without metadata.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-ID: <20250404193923.1413163-8-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 months agoiommu/amd: WARN if KVM attempts to set vCPU affinity without posted interrupts
Sean Christopherson [Fri, 4 Apr 2025 19:38:21 +0000 (12:38 -0700)]
iommu/amd: WARN if KVM attempts to set vCPU affinity without posted interrupts

WARN if KVM attempts to set vCPU affinity when posted interrupts aren't
enabled, as KVM shouldn't try to enable posting when they're unsupported,
and the IOMMU driver darn well should only advertise posting support when
AMD_IOMMU_GUEST_IR_VAPIC() is true.

Note, KVM consumes is_guest_mode only on success.

Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-ID: <20250404193923.1413163-7-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 months agoiommu/amd: Return an error if vCPU affinity is set for non-vCPU IRTE
Sean Christopherson [Fri, 4 Apr 2025 19:38:20 +0000 (12:38 -0700)]
iommu/amd: Return an error if vCPU affinity is set for non-vCPU IRTE

Return -EINVAL instead of success if amd_ir_set_vcpu_affinity() is
invoked without use_vapic; lying to KVM about whether or not the IRTE was
configured to post IRQs is all kinds of bad.

Fixes: d98de49a53e4 ("iommu/amd: Enable vAPIC interrupt remapping mode by default")
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-ID: <20250404193923.1413163-6-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 months agoKVM: x86: Take irqfds.lock when adding/deleting IRQ bypass producer
Sean Christopherson [Fri, 4 Apr 2025 19:38:19 +0000 (12:38 -0700)]
KVM: x86: Take irqfds.lock when adding/deleting IRQ bypass producer

Take irqfds.lock when adding/deleting an IRQ bypass producer to ensure
irqfd->producer isn't modified while kvm_irq_routing_update() is running.
The only lock held when a producer is added/removed is irqbypass's mutex.
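
Hedged sketch of the locking pattern (field names from the text;
surrounding code elided):

  /* Hypothetical sketch: hold irqfds.lock so the producer pointer
   * can't change underneath kvm_irq_routing_update(). */
  spin_lock_irq(&kvm->irqfds.lock);
  irqfd->producer = prod;        /* or NULL when deleting the producer */
  spin_unlock_irq(&kvm->irqfds.lock);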

Fixes: 872768800652 ("KVM: x86: select IRQ_BYPASS_MANAGER")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-ID: <20250404193923.1413163-5-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 months agoKVM: x86: Explicitly treat routing entry type changes as changes
Sean Christopherson [Fri, 4 Apr 2025 19:38:18 +0000 (12:38 -0700)]
KVM: x86: Explicitly treat routing entry type changes as changes

Explicitly treat type differences as GSI routing changes, as comparing MSI
data between two entries could get a false negative, e.g. if userspace
changed the type but left the type-specific data as-is.
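
Hedged sketch of the comparison (the struct and routing-type constant
exist in KVM; the helper itself is illustrative):

  static bool routing_entry_changed(const struct kvm_kernel_irq_routing_entry *old,
                                    const struct kvm_kernel_irq_routing_entry *new)
  {
          /* A type difference is a change, full stop; only compare MSI
           * payloads when both entries are actually MSI. */
          if (old->type != new->type)
                  return true;
          if (new->type == KVM_IRQ_ROUTING_MSI)
                  return memcmp(&old->msi, &new->msi, sizeof(new->msi)) != 0;
          return true;
  }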

Fixes: 515a0c79e796 ("kvm: irqfd: avoid update unmodified entries of the routing")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-ID: <20250404193923.1413163-4-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 months agoKVM: x86: Reset IRTE to host control if *new* route isn't postable
Sean Christopherson [Fri, 4 Apr 2025 19:38:17 +0000 (12:38 -0700)]
KVM: x86: Reset IRTE to host control if *new* route isn't postable

Restore an IRTE back to host control (remapped or posted MSI mode) if the
*new* GSI route prevents posting the IRQ directly to a vCPU, regardless of
the GSI routing type.  Updating the IRTE if and only if the new GSI is an
MSI results in KVM leaving an IRTE posting to a vCPU.

The dangling IRTE can result in interrupts being incorrectly delivered to
the guest, and in the worst case scenario can result in use-after-free,
e.g. if the VM is torn down, but the underlying host IRQ isn't freed.

Fixes: efc644048ecd ("KVM: x86: Update IRTE for posted-interrupts")
Fixes: 411b44ba80ab ("svm: Implements update_pi_irte hook to setup posted interrupt")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-ID: <20250404193923.1413163-3-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 months agoKVM: SVM: Allocate IR data using atomic allocation
Sean Christopherson [Fri, 4 Apr 2025 19:38:16 +0000 (12:38 -0700)]
KVM: SVM: Allocate IR data using atomic allocation

Allocate SVM's interrupt remapping metadata using GFP_ATOMIC as
svm_ir_list_add() is called with IRQs disabled and irqfds.lock held
when kvm_irq_routing_update() reacts to GSI routing changes.
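
The pattern, as a hedged sketch (variable name invented):

  /* Caller holds irqfds.lock with IRQs disabled: must not sleep. */
  ir = kzalloc(sizeof(*ir), GFP_ATOMIC);
  if (!ir)
          return -ENOMEM;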

Fixes: 411b44ba80ab ("svm: Implements update_pi_irte hook to setup posted interrupt")
Cc: stable@vger.kernel.org
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-ID: <20250404193923.1413163-2-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 months agoKVM: SVM: Don't update IRTEs if APICv/AVIC is disabled
Sean Christopherson [Tue, 1 Apr 2025 16:18:03 +0000 (09:18 -0700)]
KVM: SVM: Don't update IRTEs if APICv/AVIC is disabled

Skip IRTE updates if AVIC is disabled/unsupported, as forcing the IRTE
into remapped mode (kvm_vcpu_apicv_active() will never be true) is
unnecessary and wasteful.  The IOMMU driver is responsible for putting
IRTEs into remapped mode when an IRQ is allocated by a device, long before
that device is assigned to a VM.  I.e. the kernel as a whole has major
issues if the IRTE isn't already in remapped mode.

Opportunistically use kvm_arch_has_irq_bypass() to query for APICv/AVIC,
so that all checks in KVM x86 incorporate the same information.

Cc: Yosry Ahmed <yosry.ahmed@linux.dev>
Cc: Jim Mattson <jmattson@google.com>
Signed-off-by: Sean Christopherson <seanjc@google.com>
Message-ID: <20250401161804.842968-3-seanjc@google.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 months agoKVM: arm64, x86: make kvm_arch_has_irq_bypass() inline
Paolo Bonzini [Fri, 18 Apr 2025 17:03:08 +0000 (13:03 -0400)]
KVM: arm64, x86: make kvm_arch_has_irq_bypass() inline

kvm_arch_has_irq_bypass() is a small function and even though it does
not appear in any *really* hot paths, it's also not entirely rare.
Make it inline---it also works out nicely in preparation for using it in
kvm-intel.ko and kvm-amd.ko, since the function is not currently exported.

Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
3 months agoarm64: Rework checks for broken Cavium HW in the PI code
Marc Zyngier [Fri, 18 Apr 2025 09:31:29 +0000 (10:31 +0100)]
arm64: Rework checks for broken Cavium HW in the PI code

Calling into the MIDR checking framework from the PI code has recently
become much harder, due to the new fancy "multi-MIDR" support that
relies on tables being populated at boot time, but not early enough for
them to be available to the PI code. There are additional issues with
this framework, as the code really isn't position independent *at all*.

This leads to some ugly breakages, as reported by Ada.

It so appears that the only reason for the PI code to call into the
MIDR checking code is to cope with The Most Broken ARM64 System Ever,
aka Cavium ThunderX, which cannot deal with the nG attributes that result
from the combination of KASLR and KPTI as a consequence of Erratum 27456.

Duplicate the check for the erratum in the PI code, removing the
dependency on the bulk of the MIDR checking framework. This allows
dropping that same check from kaslr_requires_kpti(), as the KPTI code
already relies on the ARM64_WORKAROUND_CAVIUM_27456 cap.

Fixes: c8c2647e69bed ("arm64: Make _midr_in_range_list() an exported function")
Reported-by: Ada Couprie Diaz <ada.coupriediaz@arm.com>
Signed-off-by: Marc Zyngier <maz@kernel.org>
Link: https://lore.kernel.org/r/3d97e45a-23cf-419b-9b6f-140b4d88de7b@arm.com
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
Cc: Oliver Upton <oliver.upton@linux.dev>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20250418093129.1755739-1-maz@kernel.org
Signed-off-by: Oliver Upton <oliver.upton@linux.dev>
3 months agoLinux 6.15-rc2
Linus Torvalds [Sun, 13 Apr 2025 18:54:49 +0000 (11:54 -0700)]
Linux 6.15-rc2

3 months agoMerge tag 'erofs-for-6.15-rc2-fixes' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sun, 13 Apr 2025 17:52:04 +0000 (10:52 -0700)]
Merge tag 'erofs-for-6.15-rc2-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs

Pull erofs fixes from Gao Xiang:

 - Properly handle errors when file-backed I/O fails

 - Fix compilation issues on ARM platform (arm-linux-gnueabi)

 - Fix parsing of encoded extents

 - Minor cleanup

* tag 'erofs-for-6.15-rc2-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs:
  erofs: remove duplicate code
  erofs: fix encoded extents handling
  erofs: add __packed annotation to union(__le16..)
  erofs: set error to bio if file-backed IO fails

3 months agoMerge tag 'ext4_for_linus-6.15-rc2' of git://git.kernel.org/pub/scm/linux/kernel...
Linus Torvalds [Sun, 13 Apr 2025 14:15:50 +0000 (07:15 -0700)]
Merge tag 'ext4_for_linus-6.15-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4

Pull ext4 fixes from Ted Ts'o:
 "A few more miscellaneous ext4 bug fixes and cleanups including some
  syzbot failures and fixing a stale file handing refeencing an inode
  previously used as a regular file, but which has been deleted and
  reused as an ea_inode would result in ext4 erroneously considering
  this a case of fs corruption"

* tag 'ext4_for_linus-6.15-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4:
  ext4: fix off-by-one error in do_split
  ext4: make block validity check resistent to sb bh corruption
  ext4: avoid -Wflex-array-member-not-at-end warning
  Documentation: ext4: Add fields to ext4_super_block documentation
  ext4: don't treat fhandle lookup of ea_inode as FS corruption

3 months agoMerge tag 'fixes-2025-04-13' of git://git.kernel.org/pub/scm/linux/kernel/git/rppt...
Linus Torvalds [Sun, 13 Apr 2025 14:11:33 +0000 (07:11 -0700)]
Merge tag 'fixes-2025-04-13' of git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock

Pull memblock fix from Mike Rapoport:
 "Fix build of memblock test.

  Add missing stubs for mutex and free_reserved_area() to memblock
  tests"

* tag 'fixes-2025-04-13' of git://git.kernel.org/pub/scm/linux/kernel/git/rppt/memblock:
  memblock tests: Fix mutex related build error

3 months agoext4: fix off-by-one error in do_split
Artem Sadovnikov [Fri, 4 Apr 2025 08:28:05 +0000 (08:28 +0000)]
ext4: fix off-by-one error in do_split

Syzkaller detected a use-after-free issue in ext4_insert_dentry that was
caused by out-of-bounds access due to incorrect splitting in do_split.

BUG: KASAN: use-after-free in ext4_insert_dentry+0x36a/0x6d0 fs/ext4/namei.c:2109
Write of size 251 at addr ffff888074572f14 by task syz-executor335/5847

CPU: 0 UID: 0 PID: 5847 Comm: syz-executor335 Not tainted 6.12.0-rc6-syzkaller-00318-ga9cda7c0ffed #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/30/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 print_address_description mm/kasan/report.c:377 [inline]
 print_report+0x169/0x550 mm/kasan/report.c:488
 kasan_report+0x143/0x180 mm/kasan/report.c:601
 kasan_check_range+0x282/0x290 mm/kasan/generic.c:189
 __asan_memcpy+0x40/0x70 mm/kasan/shadow.c:106
 ext4_insert_dentry+0x36a/0x6d0 fs/ext4/namei.c:2109
 add_dirent_to_buf+0x3d9/0x750 fs/ext4/namei.c:2154
 make_indexed_dir+0xf98/0x1600 fs/ext4/namei.c:2351
 ext4_add_entry+0x222a/0x25d0 fs/ext4/namei.c:2455
 ext4_add_nondir+0x8d/0x290 fs/ext4/namei.c:2796
 ext4_symlink+0x920/0xb50 fs/ext4/namei.c:3431
 vfs_symlink+0x137/0x2e0 fs/namei.c:4615
 do_symlinkat+0x222/0x3a0 fs/namei.c:4641
 __do_sys_symlink fs/namei.c:4662 [inline]
 __se_sys_symlink fs/namei.c:4660 [inline]
 __x64_sys_symlink+0x7a/0x90 fs/namei.c:4660
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
 </TASK>

The following loop is located right above the 'if' statement.

  for (i = count - 1; i >= 0; i--) {
          /* is more than half of this entry in 2nd half of the block? */
          if (size + map[i].size/2 > blocksize/2)
                  break;
          size += map[i].size;
          move++;
  }

'i' in this case could go down to -1, in which case the sum of the active
entries wouldn't exceed half the block size, but the previous behaviour
would still split in half if the sum exceeded it only at the very last
entry; with too many long-name files in a single block, this could lead
to out-of-bounds access and a subsequent use-after-free.
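
A standalone toy run of the loop above showing 'i' walking off the end
(entry sizes invented):

  #include <stdio.h>

  struct dx_map { int size; };

  int main(void)
  {
          /* Hypothetical entries small enough that the break never fires,
           * so 'i' walks past index 0 down to -1. */
          struct dx_map map[3] = { {8}, {8}, {8} };
          int blocksize = 1024, count = 3, size = 0, move = 0, i;

          for (i = count - 1; i >= 0; i--) {
                  if (size + map[i].size / 2 > blocksize / 2)
                          break;
                  size += map[i].size;
                  move++;
          }
          printf("i = %d, move = %d\n", i, move);   /* i == -1 */
          return 0;
  }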

Found by Linux Verification Center (linuxtesting.org) with Syzkaller.

Cc: stable@vger.kernel.org
Fixes: 5872331b3d91 ("ext4: fix potential negative array index in do_split()")
Signed-off-by: Artem Sadovnikov <a.sadovnikov@ispras.ru>
Reviewed-by: Jan Kara <jack@suse.cz>
Link: https://patch.msgid.link/20250404082804.2567-3-a.sadovnikov@ispras.ru
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
3 months agoext4: make block validity check resistant to sb bh corruption
Ojaswin Mujoo [Fri, 28 Mar 2025 06:24:52 +0000 (11:54 +0530)]
ext4: make block validity check resistant to sb bh corruption

Block validity checks need to be skipped in case they are called
for journal blocks since they are part of system's protected
zone.

Currently, this is done by checking inode->i_ino against
sbi->s_es->s_journal_inum, which is a direct read from the ext4 sb
buffer head. If someone modifies this underneath us then the
s_journal_inum field might get corrupted. To prevent against this,
change the check to directly compare the inode with journal->j_inode.

**Slight change in behavior**: During journal init path,
check_block_validity etc might be called for journal inode when
sbi->s_journal is not set yet. In this case we now proceed with
ext4_inode_block_valid() instead of returning early. Since system zones
have not been set yet, it is okay to proceed so we can perform basic
checks on the blocks.
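
Hedged sketch of the direction of the change (the fields are real
ext4/jbd2 fields; the surrounding logic is elided):

  /* Before: reads the journal inode number out of the sb buffer head,
   * which can be corrupted underneath us. */
  if (inode->i_ino == le32_to_cpu(sbi->s_es->s_journal_inum))
          return 0;

  /* After: compare against the in-memory journal inode instead. */
  if (sbi->s_journal && inode == sbi->s_journal->j_inode)
          return 0;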

Suggested-by: Baokun Li <libaokun1@huawei.com>
Reviewed-by: Baokun Li <libaokun1@huawei.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Reviewed-by: Zhang Yi <yi.zhang@huawei.com>
Signed-off-by: Ojaswin Mujoo <ojaswin@linux.ibm.com>
Link: https://patch.msgid.link/0c06bc9ebfcd6ccfed84a36e79147bf45ff5adc1.1743142920.git.ojaswin@linux.ibm.com
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
3 months agoext4: avoid -Wflex-array-member-not-at-end warning
Gustavo A. R. Silva [Wed, 26 Mar 2025 22:55:51 +0000 (16:55 -0600)]
ext4: avoid -Wflex-array-member-not-at-end warning

-Wflex-array-member-not-at-end was introduced in GCC-14, and we are
getting ready to enable it, globally.

Use the `DEFINE_RAW_FLEX()` helper for an on-stack definition of
a flexible structure where the size of the flexible-array member
is known at compile-time, and refactor the rest of the code,
accordingly.
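
Hedged sketch of the helper's usage (struct and count invented; the
macro lives in include/linux/overflow.h):

  struct item {
          int nr;
          int vals[];                     /* flexible array member */
  };

  /* Declares on-stack storage sized for 4 trailing elements and an
   * 'it' pointer of type struct item * aimed at it. */
  DEFINE_RAW_FLEX(struct item, it, vals, 4);
  it->nr = 4;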

So, with these changes, fix the following warning:

fs/ext4/mballoc.c:3041:40: warning: structure containing a flexible array member is not at the end of another structure [-Wflex-array-member-not-at-end]

Signed-off-by: Gustavo A. R. Silva <gustavoars@kernel.org>
Reviewed-by: Kees Cook <kees@kernel.org>
Link: https://patch.msgid.link/Z-SF97N3AxcIMlSi@kspp
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
3 months agoDocumentation: ext4: Add fields to ext4_super_block documentation
Tom Vierjahn [Mon, 24 Mar 2025 22:09:30 +0000 (23:09 +0100)]
Documentation: ext4: Add fields to ext4_super_block documentation

Documentation and implementation of the ext4 super block have
slightly diverged: Padding has been removed in order to make room for
new fields that are still missing in the documentation.

Add the new fields s_encryption_level, s_first_error_errorcode,
s_last_error_errorcode to the documentation of the ext4 super block.

Fixes: f542fbe8d5e8 ("ext4 crypto: reserve codepoints used by the ext4 encryption feature")
Fixes: 878520ac45f9 ("ext4: save the error code which triggered an ext4_error() in the superblock")
Signed-off-by: Tom Vierjahn <tom.vierjahn@acm.org>
Reviewed-by: Ojaswin Mujoo <ojaswin@linux.ibm.com>
Link: https://patch.msgid.link/20250324221004.5268-1-tom.vierjahn@acm.org
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
3 months agoMerge tag 'trace-v6.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/trace...
Linus Torvalds [Sat, 12 Apr 2025 22:37:40 +0000 (15:37 -0700)]
Merge tag 'trace-v6.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace

Pull tracing fixes from Steven Rostedt:

 - Hide get_vm_area() from MMUless builds

   The function get_vm_area() is not defined when CONFIG_MMU is not
   defined. Hide that function within #ifdef CONFIG_MMU.

 - Fix output of synthetic events when they have dynamic strings

   The print fmt of the synthetic event's format file used to have "%.*s"
   for dynamic size strings even though the user space exported
   arguments had only __get_str() macro that provided just a nul
   terminated string. This was fixed so that user space could parse this
   properly.

   But the reason it had "%.*s" was that internally it provided
   the maximum size of the string as one of the arguments. The fix that
   replaced "%.*s" with "%s" caused the trace output (when the kernel
   reads the event) to write "(efault)" as it would now read the length
   of the string as "%s".

   As the string provided is always nul terminated, there's no reason
   for the internal code to use "%.*s" anyway. Just remove the length
   argument to match the "%s" that is now in the format.

 - Fix the ftrace subops hash logic of the manager ops hash

   The function_graph uses the ftrace subops code. The subops code is a
   way to have a single ftrace_ops registered with ftrace to determine
   what functions will call the ftrace_ops callback. More than one user
   of function graph can register a ftrace_ops with it. The function
   graph infrastructure will then add this ftrace_ops as a subops with
   the main ftrace_ops it registers with ftrace. This is because the
   functions will always call the function graph callback which in turn
   calls the subops ftrace_ops callbacks.

   The main ftrace_ops must add a callback to all the functions that the
   subops want a callback from. When a subops is registered, it will
   update the main ftrace_ops hash to include the functions it wants.
   This is the logic that was broken.

   The ftrace_ops hash has a "filter_hash" and a "notrace_hash" where
   all the functions in the filter_hash but not in the notrace_hash are
   attached by ftrace. The original logic would have the main ftrace_ops
   filter_hash be a union of all the subops filter_hashes and the main
   notrace_hash be an intersection of all the subops notrace_hashes.
   But this was incorrect because the notrace hash depends on the
   filter_hash it is associated to and not the union of all
   filter_hashes.

   Instead, when a subops is added, just include all the functions of
   the subops hash that are in its filter_hash but not in its
   notrace_hash. The main ops hash should not use its notrace hash,
   unless all of its subops hashes have an empty filter_hash (which
   means to attach to all functions), and then, and only then, the main
   ftrace_ops notrace hash can be the intersect of all the subops
   hashes.

   This not only fixes the bug, but also simplifies the code.

 - Add a selftest to better test the subops filtering

   Add a selftest that would catch the bug fixed by the above change.

 - Fix extra newline printed in function tracing with retval

   The function parameter code changed the output logic slightly and
   called print_graph_retval() and also printed a newline. The
   print_graph_retval() also prints a newline which caused blank lines
   to be printed in the function graph tracer when retval was added.
   This caused one of the selftests to fail if retvals were enabled.
   Instead remove the new line output from print_graph_retval() and have
   the callers always print the new line so that it doesn't have to do
   special logic if it calls print_graph_retval() or not.

 - Fix out-of-bound memory access in the runtime verifier

   When rv_is_container_monitor() is called on the last entry on the
   link list it references the next entry, which is the list head and
   causes an out-of-bound memory access.

* tag 'trace-v6.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
  rv: Fix out-of-bound memory access in rv_is_container_monitor()
  ftrace: Do not have print_graph_retval() add a newline
  tracing/selftest: Add test to better test subops filtering of function graph
  ftrace: Fix accounting of subop hashes
  ftrace: Properly merge notrace hashes
  tracing: Do not add length to print format in synthetic events
  tracing: Hide get_vm_area() from MMUless builds

3 months agoMerge tag 'bpf-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf
Linus Torvalds [Sat, 12 Apr 2025 19:48:10 +0000 (12:48 -0700)]
Merge tag 'bpf-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf

Pull bpf fixes from Alexei Starovoitov:

 - Followup fixes for resilient spinlock (Kumar Kartikeya Dwivedi):
     - Make res_spin_lock test less verbose, since it was spamming BPF
       CI on failure, and make the check for AA deadlock stronger
     - Fix rebasing mistake and use architecture provided
       res_smp_cond_load_acquire
     - Convert BPF maps (queue_stack and ringbuf) to resilient spinlock
       to address long standing syzbot reports

 - Make sure that classic BPF load instruction from SKF_[NET|LL]_OFF
   offsets works when skb is fragmented (Willem de Bruijn)

* tag 'bpf-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/bpf/bpf:
  bpf: Convert ringbuf map to rqspinlock
  bpf: Convert queue_stack map to rqspinlock
  bpf: Use architecture provided res_smp_cond_load_acquire
  selftests/bpf: Make res_spin_lock AA test condition stronger
  selftests/net: test sk_filter support for SKF_NET_OFF on frags
  bpf: support SKF_NET_OFF and SKF_LL_OFF on skb frags
  selftests/bpf: Make res_spin_lock test less verbose

3 months agorv: Fix out-of-bound memory access in rv_is_container_monitor()
Nam Cao [Fri, 11 Apr 2025 07:37:17 +0000 (09:37 +0200)]
rv: Fix out-of-bound memory access in rv_is_container_monitor()

When rv_is_container_monitor() is called on the last monitor in
rv_monitors_list, KASAN yells:

  BUG: KASAN: global-out-of-bounds in rv_is_container_monitor+0x101/0x110
  Read of size 8 at addr ffffffff97c7c798 by task setup/221

  The buggy address belongs to the variable:
   rv_monitors_list+0x18/0x40

This is due to list_next_entry() is called on the last entry in the list.
It wraps around to the first list_head, and the first list_head is not
embedded in struct rv_monitor_def.

Fix it by checking if the monitor is last in the list.
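
Hedged sketch of the guard (list_is_last() and list_next_entry() are the
stock list helpers; the member name 'list' is assumed):

  /* The node after the last monitor is the list head itself, which is
   * not embedded in a struct rv_monitor_def, so bail before that. */
  if (list_is_last(&mdef->list, &rv_monitors_list))
          return false;
  next = list_next_entry(mdef, list);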

Cc: stable@vger.kernel.org
Cc: Gabriele Monaco <gmonaco@redhat.com>
Fixes: cb85c660fcd4 ("rv: Add option for nested monitors and include sched")
Link: https://lore.kernel.org/e85b5eeb7228bfc23b8d7d4ab5411472c54ae91b.1744355018.git.namcao@linutronix.de
Signed-off-by: Nam Cao <namcao@linutronix.de>
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
3 months agoftrace: Do not have print_graph_retval() add a newline
Steven Rostedt [Fri, 11 Apr 2025 17:30:15 +0000 (13:30 -0400)]
ftrace: Do not have print_graph_retval() add a newline

The retval and retaddr options for function_graph tracer will add a
comment at the end of a function for both leaf and non leaf functions that
looks like:

               __wake_up_common(); /* ret=0x1 */

               } /* pick_next_task_fair ret=0x0 */

The function print_graph_retval() adds a newline after the "*/". But if
that's not called, the caller function needs to make sure there's a
newline added.

This is confusing: when the function parameters code was added, it
added a newline even when calling print_graph_retval(), as the fact that
the print_graph_retval() function prints a newline isn't obvious.

This caused an extra newline to be printed and that made it fail the
selftests when the retval option was set, as the selftests were not
expecting blank lines being injected into the trace.

Instead of having print_graph_retval() print a newline, just have the
caller always print the newline regardless if it calls print_graph_retval()
or not. This not only fixes this bug, but it also simplifies the code.

Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Link: https://lore.kernel.org/20250411133015.015ca393@gandalf.local.home
Reported-by: Mark Brown <broonie@kernel.org>
Tested-by: Mark Brown <broonie@kernel.org>
Closes: https://lore.kernel.org/all/ccc40f2b-4b9e-4abd-8daf-d22fce2a86f0@sirena.org.uk/
Fixes: ff5c9c576e754 ("ftrace: Add support for function argument to graph tracer")
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
3 months agoMerge tag 'pwm/for-6.15-rc2-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git...
Linus Torvalds [Sat, 12 Apr 2025 15:11:19 +0000 (08:11 -0700)]
Merge tag 'pwm/for-6.15-rc2-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/ukleinek/linux

Pull pwm fixes from Uwe Kleine-König:
 "A set of fixes for pwm core and various drivers

  The first three patches handle clk_get_rate() returning 0 (which might
  happen for example if the CCF is disabled). The first of these was
  found because this triggered a warning with clang, the two others by
  looking for similar issues in other drivers.

  The remaining three fixes address issues in the new waveform pwm API.
  Now that I worked on this a bit more, the finer details and corner
  cases are better understood and the code is fixed accordingly"

* tag 'pwm/for-6.15-rc2-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/ukleinek/linux:
  pwm: axi-pwmgen: Let .round_waveform_tohw() signal when request was rounded up
  pwm: stm32: Search an appropriate duty_cycle if period cannot be modified
  pwm: Let pwm_set_waveform() succeed even if lowlevel driver rounded up
  pwm: fsl-ftm: Handle clk_get_rate() returning 0
  pwm: rcar: Improve register calculation
  pwm: mediatek: Prevent divide-by-zero in pwm_mediatek_config()

3 months agoMerge tag 'v6.15-rc1-smb3-client-fixes' of git://git.samba.org/sfrench/cifs-2.6
Linus Torvalds [Fri, 11 Apr 2025 23:41:14 +0000 (16:41 -0700)]
Merge tag 'v6.15-rc1-smb3-client-fixes' of git://git.samba.org/sfrench/cifs-2.6

Pull smb client fixes from Steve French:

 - Fix multichannel decryption UAF

 - Fix regression mounting to onedrive shares

 - Fix missing mount option check for posix vs. noposix

 - Fix version field in WSL symlinks

 - Three minor cleanup to reparse point handling

 - SMB1 fix for WSL special files

 - SMB1 Kerberos fix

 - Add SMB3 defines for two new FS attributes

* tag 'v6.15-rc1-smb3-client-fixes' of git://git.samba.org/sfrench/cifs-2.6:
  smb3: Add defines for two new FileSystemAttributes
  cifs: Fix querying of WSL CHR and BLK reparse points over SMB1
  cifs: Split parse_reparse_point callback to functions: get buffer and parse buffer
  cifs: Improve handling of name surrogate reparse points in reparse.c
  cifs: Remove explicit handling of IO_REPARSE_TAG_MOUNT_POINT in inode.c
  cifs: Fix encoding of SMB1 Session Setup Kerberos Request in non-UNICODE mode
  smb: client: fix UAF in decryption with multichannel
  cifs: Fix support for WSL-style symlinks
  smb311 client: fix missing tcon check when mounting with linux/posix extensions
  cifs: Ensure that all non-client-specific reparse points are processed by the server

3 months agoMerge tag 'pci-v6.15-fixes-1' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci
Linus Torvalds [Fri, 11 Apr 2025 23:29:52 +0000 (16:29 -0700)]
Merge tag 'pci-v6.15-fixes-1' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci

Pull pci fix from Bjorn Helgaas:

 - Run quirk_huawei_pcie_sva() before arm_smmu_probe_device(), which
   depends on the quirk, to avoid IOMMU initialization failures
   (Zhangfei Gao)

* tag 'pci-v6.15-fixes-1' of git://git.kernel.org/pub/scm/linux/kernel/git/pci/pci:
  PCI: Run quirk_huawei_pcie_sva() before arm_smmu_probe_device()

3 months agotracing/selftest: Add test to better test subops filtering of function graph
Steven Rostedt [Wed, 9 Apr 2025 15:15:51 +0000 (11:15 -0400)]
tracing/selftest: Add test to better test subops filtering of function graph

A bug was discovered that showed the accounting of the subops of the
ftrace_ops filtering was incorrect. Add a new test to better test the
filtering.

This test creates two instances, where it will add various filters to both
the set_ftrace_filter and the set_ftrace_notrace files and enable
function_graph. Then it looks into the enabled_functions file to make sure
that the filters are behaving correctly.

Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Shuah Khan <skhan@linuxfoundation.org>
Cc: Andy Chiu <andybnac@gmail.com>
Link: https://lore.kernel.org/20250409152720.380778379@goodmis.org
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
3 months agoftrace: Fix accounting of subop hashes
Steven Rostedt [Wed, 9 Apr 2025 15:15:50 +0000 (11:15 -0400)]
ftrace: Fix accounting of subop hashes

The function graph infrastructure uses ftrace to hook to functions. It has
a single ftrace_ops to manage all the users of function graph. Each
individual user (tracing, bpf, fprobes, etc) has its own ftrace_ops to
track the functions it will have its callback called from. These
ftrace_ops are "subops" to the main ftrace_ops of the function graph
infrastructure.

Each ftrace_ops has a filter_hash and a notrace_hash that is defined as:

  Only trace functions that are in the filter_hash but not in the
  notrace_hash.

If the filter_hash is empty, it means to trace all functions.
If the notrace_hash is empty, it means do not disable any function.

The function graph main ftrace_ops needs to be a superset containing all
the functions to be traced by all the subops it has. The algorithm to
perform this merge was incorrect.

When the first subops was added to the main ops, it simply made the main
ops a copy of the subops (same filter_hash and notrace_hash).

When a second ops was added, it joined the new subops filter_hash with the
main ops filter_hash as a union of the two sets. The intersect between the
new subops notrace_hash and the main ops notrace_hash was created as the
new notrace_hash of the main ops.

The issue here is that it would then start tracing functions that no
subops were tracing. For example if you had two subops that had:

subops 1:

  filter_hash = '*sched*' # trace all functions with "sched" in it
  notrace_hash = '*time*' # except do not trace functions with "time"

subops 2:

  filter_hash = '*lock*' # trace all functions with "lock" in it
  notrace_hash = '*clock*' # except do not trace functions with "clock"

The intersect of '*time*' functions with '*clock*' functions could be the
empty set. That means the main ops will be tracing all functions with
'*time*' or '*clock*' in them!

Instead, modify the algorithm to be a bit simpler and correct.

First, when adding a new subops, even if it's the first one, do not add
the notrace_hash if the filter_hash is not empty. Instead, just add the
functions that are in the filter_hash of the subops but not in the
notrace_hash of the subops into the main ops filter_hash. There's no
reason to add anything to the main ops notrace_hash.

The notrace_hash of the main ops should be non-empty iff all subops
filter_hashes are empty (meaning to trace all functions) and all subops
notrace_hashes include the same functions.

That is, the main ops notrace_hash is empty if any subops filter_hash is
non empty.

The main ops notrace_hash only has content in it if all subops
filter_hashes are empty, and the content are only functions that intersect
all the subops notrace_hashes. If any subops notrace_hash is empty, then
so is the main ops notrace_hash.
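
The rule can be seen in a toy model where bitmasks stand in for hashes
(entirely illustrative):

  #include <stdint.h>
  #include <stdio.h>

  #define ALL UINT32_MAX   /* empty filter_hash means "trace everything" */

  static uint32_t subops_traced(uint32_t filter, uint32_t notrace)
  {
          /* Functions in the filter_hash but not in the notrace_hash. */
          return (filter ? filter : ALL) & ~notrace;
  }

  int main(void)
  {
          /* subops 1: '*sched*' (bits 0-1) minus '*time*'  (bit 1) */
          /* subops 2: '*lock*'  (bits 2-3) minus '*clock*' (bit 3) */
          uint32_t main_traced = subops_traced(0x3, 0x2) |
                                 subops_traced(0xc, 0x8);

          printf("main ops traces: 0x%x\n", main_traced);   /* 0x5 */
          return 0;
  }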

Cc: Masami Hiramatsu <mhiramat@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Shuah Khan <skhan@linuxfoundation.org>
Cc: Andy Chiu <andybnac@gmail.com>
Link: https://lore.kernel.org/20250409152720.216356767@goodmis.org
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
3 months agoftrace: Properly merge notrace hashes
Andy Chiu [Tue, 8 Apr 2025 16:02:57 +0000 (00:02 +0800)]
ftrace: Properly merge notrace hashes

The global notrace hash should be jointly decided by the intersection of
each subops's notrace hash, but not the filter hash.

Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/20250408160258.48563-1-andybnac@gmail.com
Fixes: 5fccc7552ccb ("ftrace: Add subops logic to allow one ops to manage many")
Signed-off-by: Andy Chiu <andybnac@gmail.com>
[ fixed removing of freeing of filter_hash ]
Signed-off-by: Steven Rostedt (Google) <rostedt@goodmis.org>
3 months agoPCI: Run quirk_huawei_pcie_sva() before arm_smmu_probe_device()
Zhangfei Gao [Mon, 17 Mar 2025 01:13:52 +0000 (01:13 +0000)]
PCI: Run quirk_huawei_pcie_sva() before arm_smmu_probe_device()

quirk_huawei_pcie_sva() sets properties needed by arm_smmu_probe_device(),
but bcb81ac6ae3c ("iommu: Get DT/ACPI parsing into the proper probe path")
changed the iommu_probe_device() flow so arm_smmu_probe_device() is now
invoked before the quirk, leading to failures like this:

  reg-dummy reg-dummy: late IOMMU probe at driver bind, something fishy here!
  WARNING: CPU: 0 PID: 1 at drivers/iommu/iommu.c:449 __iommu_probe_device+0x140/0x570
  RIP: 0010:__iommu_probe_device+0x140/0x570

The SR-IOV enumeration ordering changes like this:

  pci_iov_add_virtfn
    pci_device_add
      pci_fixup_device(pci_fixup_header)      <--
      device_add
        bus_notify
          iommu_bus_notifier
  +         iommu_probe_device
  +           arm_smmu_probe_device
    pci_bus_add_device
      pci_fixup_device(pci_fixup_final)       <--
      device_attach
        driver_probe_device
          really_probe
            pci_dma_configure
              acpi_dma_configure_id
  -             iommu_probe_device
  -               arm_smmu_probe_device

The non-SR-IOV case is similar in that pci_device_add() is called from
pci_scan_single_device() in the generic enumeration path and
pci_bus_add_device() is called later, after all host bridges have been
enumerated.

Declare quirk_huawei_pcie_sva() as a header fixup to ensure that it happens
before arm_smmu_probe_device().
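
Hedged sketch of the declaration change (the fixup macros are the stock
PCI quirk macros; the device ID here is a placeholder):

  /* Was DECLARE_PCI_FIXUP_FINAL(...), which runs too late; header
   * fixups run from pci_device_add(), before arm_smmu_probe_device(). */
  DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_HUAWEI, PCI_ANY_ID,
                           quirk_huawei_pcie_sva);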

Fixes: bcb81ac6ae3c ("iommu: Get DT/ACPI parsing into the proper probe path")
Reported-by: Chaitanya Kumar Borah <chaitanya.kumar.borah@intel.com>
Closes: https://lore.kernel.org/all/SJ1PR11MB61295DE21A1184AEE0786E25B9D22@SJ1PR11MB6129.namprd11.prod.outlook.com/
Signed-off-by: Zhangfei Gao <zhangfei.gao@linaro.org>
[bhelgaas: commit log, add failure info and reporter]
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Link: https://patch.msgid.link/20250317011352.5806-1-zhangfei.gao@linaro.org
3 months agobpf: Convert ringbuf map to rqspinlock
Kumar Kartikeya Dwivedi [Fri, 11 Apr 2025 10:17:59 +0000 (03:17 -0700)]
bpf: Convert ringbuf map to rqspinlock

Convert the raw spinlock used by BPF ringbuf to rqspinlock. Currently,
we have an open syzbot report of a potential deadlock. In addition, the
ringbuf can fail to reserve spuriously under contention from NMI
context.

It is potentially attractive to enable unconstrained usage (incl. NMIs)
while ensuring no deadlocks manifest at runtime, so perform the conversion
to rqspinlock to achieve this.
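
Hedged sketch of the conversion's key property (API names from the
rqspinlock series; the error-handling shape is illustrative):

  /* Unlike raw_spin_lock, a resilient lock acquisition can fail
   * (deadlock/timeout detection), so reservation bails out instead
   * of hanging, e.g. when re-entered from NMI context. */
  if (raw_res_spin_lock_irqsave(&rb->spinlock, flags))
          return NULL;
  /* ... perform the reservation ... */
  raw_res_spin_unlock_irqrestore(&rb->spinlock, flags);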

This change was benchmarked for BPF ringbuf's multi-producer contention
case on an Intel Sapphire Rapids server, with hyperthreading disabled
and performance governor turned on. 5 warm up runs were done for each
case before obtaining the results.

Before (raw_spinlock_t):

Ringbuf, multi-producer contention
==================================
rb-libbpf nr_prod 1  11.440 ± 0.019M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 2  2.706 ± 0.010M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 3  3.130 ± 0.004M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 4  2.472 ± 0.003M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 8  2.352 ± 0.001M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 12 2.813 ± 0.001M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 16 1.988 ± 0.001M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 20 2.245 ± 0.001M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 24 2.148 ± 0.001M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 28 2.190 ± 0.001M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 32 2.490 ± 0.001M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 36 2.180 ± 0.001M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 40 2.201 ± 0.001M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 44 2.226 ± 0.001M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 48 2.164 ± 0.001M/s (drops 0.000 ± 0.000M/s)
rb-libbpf nr_prod 52 1.874 ± 0.001M/s (drops 0.000 ± 0.000M/s)

After (rqspinlock_t):

Ringbuf, multi-producer contention
==================================
rb-libbpf nr_prod 1  11.078 ± 0.019M/s (drops 0.000 ± 0.000M/s) (-3.16%)
rb-libbpf nr_prod 2  2.801 ± 0.014M/s (drops 0.000 ± 0.000M/s) (3.51%)
rb-libbpf nr_prod 3  3.454 ± 0.005M/s (drops 0.000 ± 0.000M/s) (10.35%)
rb-libbpf nr_prod 4  2.567 ± 0.002M/s (drops 0.000 ± 0.000M/s) (3.84%)
rb-libbpf nr_prod 8  2.468 ± 0.001M/s (drops 0.000 ± 0.000M/s) (4.93%)
rb-libbpf nr_prod 12 2.510 ± 0.001M/s (drops 0.000 ± 0.000M/s) (-10.77%)
rb-libbpf nr_prod 16 2.075 ± 0.001M/s (drops 0.000 ± 0.000M/s) (4.38%)
rb-libbpf nr_prod 20 2.640 ± 0.001M/s (drops 0.000 ± 0.000M/s) (17.59%)
rb-libbpf nr_prod 24 2.092 ± 0.001M/s (drops 0.000 ± 0.000M/s) (-2.61%)
rb-libbpf nr_prod 28 2.426 ± 0.005M/s (drops 0.000 ± 0.000M/s) (10.78%)
rb-libbpf nr_prod 32 2.331 ± 0.004M/s (drops 0.000 ± 0.000M/s) (-6.39%)
rb-libbpf nr_prod 36 2.306 ± 0.003M/s (drops 0.000 ± 0.000M/s) (5.78%)
rb-libbpf nr_prod 40 2.178 ± 0.002M/s (drops 0.000 ± 0.000M/s) (-1.04%)
rb-libbpf nr_prod 44 2.293 ± 0.001M/s (drops 0.000 ± 0.000M/s) (3.01%)
rb-libbpf nr_prod 48 2.022 ± 0.001M/s (drops 0.000 ± 0.000M/s) (-6.56%)
rb-libbpf nr_prod 52 1.809 ± 0.001M/s (drops 0.000 ± 0.000M/s) (-3.47%)

There's a fair amount of noise in the benchmark, with numbers on reruns
going up and down by 10%, so all changes are in the range of this
disturbance, and we see no major regressions.

Reported-by: syzbot+850aaf14624dc0c6d366@syzkaller.appspotmail.com
Closes: https://lore.kernel.org/all/0000000000004aa700061379547e@google.com
Signed-off-by: Kumar Kartikeya Dwivedi <memxor@gmail.com>
Link: https://lore.kernel.org/r/20250411101759.4061366-1-memxor@gmail.com
Signed-off-by: Alexei Starovoitov <ast@kernel.org>
3 months agoMerge tag 'spi-fix-v6.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/brooni...
Linus Torvalds [Fri, 11 Apr 2025 15:36:18 +0000 (08:36 -0700)]
Merge tag 'spi-fix-v6.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi

Pull spi fixes from Mark Brown:
 "A couple of cleanups for the error handling in the Freescale drivers"

* tag 'spi-fix-v6.15-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi:
  spi: fsl-spi: Remove redundant probe error message
  spi: fsl-qspi: Fix double cleanup in probe error path

3 months agoMerge tag 'ata-6.15-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/libata...
Linus Torvalds [Fri, 11 Apr 2025 15:33:33 +0000 (08:33 -0700)]
Merge tag 'ata-6.15-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/libata/linux

Pull ata fixes from Damien Le Moal:

 - Fix missing error checks during controller probe in the sata_sx4
   driver (Wentao)

 - Fix missing error checks during controller probe in the pata_pxa
   driver (Henry)

* tag 'ata-6.15-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/libata/linux:
  ata: sata_sx4: Add error handling in pdc20621_i2c_read()
  ata: pata_pxa: Fix potential NULL pointer dereference in pxa_ata_probe()

3 months agoMerge tag 'block-6.15-20250411' of git://git.kernel.dk/linux
Linus Torvalds [Fri, 11 Apr 2025 15:29:35 +0000 (08:29 -0700)]
Merge tag 'block-6.15-20250411' of git://git.kernel.dk/linux

Pull more block fixes from Jens Axboe:
 "Apparently my internal clock was off, or perhaps it was just wishful
  thinking, but I sent out block fixes yesterday as my brain assumed it
  was Friday. Subsequently, that missed the NVMe fixes that should go
  into this weeks release as well. Hence, here's a followup with those,
  and another simple fix.

   - NVMe pull request via Christoph:
       - nvmet fc/fcloop refcounting fixes (Daniel Wagner)
       - fix missed namespace/ANA scans (Hannes Reinecke)
       - fix a use after free in the new TCP netns support (Kuniyuki
         Iwashima)
       - fix a NULL instead of false return in multipath (Uday Shankar)

   - Use strscpy() for null_blk disk name copy"

* tag 'block-6.15-20250411' of git://git.kernel.dk/linux:
  null_blk: Use strscpy() instead of strscpy_pad() in null_add_dev()
  nvmet-fc: put ref when assoc->del_work is already scheduled
  nvmet-fc: take tgtport reference only once
  nvmet-fc: update tgtport ref per assoc
  nvmet-fc: inline nvmet_fc_free_hostport
  nvmet-fc: inline nvmet_fc_delete_assoc
  nvmet-fcloop: add ref counting to lport
  nvmet-fcloop: replace kref with refcount
  nvmet-fcloop: swap list_add_tail arguments
  nvme-tcp: fix use-after-free of netns by kernel TCP socket.
  nvme: multipath: fix return value of nvme_available_path
  nvme: re-read ANA log page after ns scan completes
  nvme: requeue namespace scan on missed AENs