x86/mtrr: Check if fixed MTRRs exist before saving them
author     Andi Kleen <ak@linux.intel.com>
           Thu, 8 Aug 2024 00:02:44 +0000 (17:02 -0700)
committer  Thomas Gleixner <tglx@linutronix.de>
           Thu, 8 Aug 2024 15:03:12 +0000 (17:03 +0200)
MTRRs have an obsolete fixed variant for fine-grained caching control
of the 640K-1MB region that uses separate MSRs. This fixed variant has
a separate capability bit in the MTRR capability MSR.
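
For illustration, a minimal sketch of how that capability bit can be
probed, assuming the kernel's rdmsr() helper and the MSR_MTRRcap
constant (the FIX bit is bit 8); an illustrative sketch, not the
verbatim kernel code:

	/* Sketch: read the MTRR capability MSR and record whether the
	 * fixed-range MTRRs are implemented (FIX bit, bit 8). */
	unsigned int lo, hi;

	rdmsr(MSR_MTRRcap, lo, hi);
	mtrr_state.have_fixed = !!(lo & BIT(8));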

So far all x86 CPUs which support MTRR have this separate bit set, so it
went unnoticed that mtrr_save_state() does not check the capability bit
before accessing the fixed MTRR MSRs.
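
Concretely, saving the fixed ranges boils down to unconditional RDMSRs
of the fixed-range MSRs, roughly along these lines (a hedged sketch
using the real MSR_MTRRfix* constants, not the exact kernel code path):

	/* Sketch: the fixed-range save path reads the fixed MSRs directly.
	 * On a CPU without the fixed-range capability each RDMSR faults. */
	unsigned int lo, hi;

	rdmsr(MSR_MTRRfix64K_00000, lo, hi);
	rdmsr(MSR_MTRRfix16K_80000, lo, hi);
	rdmsr(MSR_MTRRfix16K_A0000, lo, hi);
	/* ... MSR_MTRRfix4K_C0000 through MSR_MTRRfix4K_F8000 ... */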

However, on a CPU that does not support the fixed MTRR capability this
results in a #GP. The #GP itself is harmless because the RDMSR fault is
handled gracefully, but it triggers a WARN_ON().

Add the missing capability check to prevent this.

Fixes: 2b1f6278d77c ("[PATCH] x86: Save the MTRRs of the BSP before booting an AP")
Signed-off-by: Andi Kleen <ak@linux.intel.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Link: https://lore.kernel.org/all/20240808000244.946864-1-ak@linux.intel.com
arch/x86/kernel/cpu/mtrr/mtrr.c

index 767bf1c71aadda1a63e9d8c43244b219294ac778..2a2fc14955cd3b2b0302486d82b542a280bd36c6 100644
--- a/arch/x86/kernel/cpu/mtrr/mtrr.c
+++ b/arch/x86/kernel/cpu/mtrr/mtrr.c
@@ -609,7 +609,7 @@ void mtrr_save_state(void)
 {
        int first_cpu;
 
-       if (!mtrr_enabled())
+       if (!mtrr_enabled() || !mtrr_state.have_fixed)
                return;
 
        first_cpu = cpumask_first(cpu_online_mask);