From 58f8ffa917669a0c8027e24d5349f0b488f8181 Mon Sep 17 00:00:00 2001
From: Andy Lutomirski
Date: Wed, 2 Apr 2025 11:45:38 +0200
Subject: [PATCH] x86/mm: Allow temporary MMs when IRQs are on

EFI runtime services should use temporary MMs, but EFI runtime services
want IRQs on. Preemption must still be disabled in a temporary MM context.

At some point, the entire temporary MM mechanism should be moved out of
arch code.

Signed-off-by: Andy Lutomirski
Signed-off-by: Peter Zijlstra (Intel)
Signed-off-by: Ingo Molnar
Cc: Rik van Riel
Cc: "H. Peter Anvin"
Cc: Linus Torvalds
Cc: Andrew Morton
Cc: Ard Biesheuvel
Link: https://lore.kernel.org/r/20250402094540.3586683-6-mingo@kernel.org
---
 arch/x86/mm/tlb.c | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/arch/x86/mm/tlb.c b/arch/x86/mm/tlb.c
index 38fdcf875d5fa..c9b87e5f569a8 100644
--- a/arch/x86/mm/tlb.c
+++ b/arch/x86/mm/tlb.c
@@ -977,18 +977,23 @@ void enter_lazy_tlb(struct mm_struct *mm, struct task_struct *tsk)
  * that override the kernel memory protections (e.g., W^X), without exposing the
  * temporary page-table mappings that are required for these write operations to
  * other CPUs. Using a temporary mm also allows to avoid TLB shootdowns when the
- * mapping is torn down.
+ * mapping is torn down. Temporary mms can also be used for EFI runtime service
+ * calls or similar functionality.
  *
- * Context: The temporary mm needs to be used exclusively by a single core. To
- *          harden security IRQs must be disabled while the temporary mm is
- *          loaded, thereby preventing interrupt handler bugs from overriding
- *          the kernel memory protection.
+ * It is illegal to schedule while using a temporary mm -- the context switch
+ * code is unaware of the temporary mm and does not know how to context switch.
+ * Use a real (non-temporary) mm in a kernel thread if you need to sleep.
+ *
+ * Note: For sensitive memory writes, the temporary mm needs to be used
+ *       exclusively by a single core, and IRQs should be disabled while the
+ *       temporary mm is loaded, thereby preventing interrupt handler bugs from
+ *       overriding the kernel memory protection.
  */
 struct mm_struct *use_temporary_mm(struct mm_struct *temp_mm)
 {
 	struct mm_struct *prev_mm;
 
-	lockdep_assert_irqs_disabled();
+	lockdep_assert_preemption_disabled();
 
 	/*
 	 * Make sure not to be in TLB lazy mode, as otherwise we'll end up
@@ -1020,7 +1025,7 @@ struct mm_struct *use_temporary_mm(struct mm_struct *temp_mm)
 
 void unuse_temporary_mm(struct mm_struct *prev_mm)
 {
-	lockdep_assert_irqs_disabled();
+	lockdep_assert_preemption_disabled();
 
 	/* Clear the cpumask, to indicate no TLB flushing is needed anywhere */
 	cpumask_clear_cpu(smp_processor_id(), mm_cpumask(this_cpu_read(cpu_tlbstate.loaded_mm)));
-- 
2.50.1
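
Usage sketch (not part of the patch): a minimal, hypothetical caller
illustrating the semantics this change permits -- a temporary mm used with
IRQs left enabled, as long as preemption is disabled across the switch.
The function name example_use_with_irqs_on and its temp_mm argument are
illustrative only; the preempt_disable()/preempt_enable() pairing is what
satisfies the new lockdep_assert_preemption_disabled() assertions.

	#include <linux/preempt.h>
	#include <linux/mm_types.h>

	/* As modified by this patch (declared in x86 arch headers). */
	struct mm_struct *use_temporary_mm(struct mm_struct *temp_mm);
	void unuse_temporary_mm(struct mm_struct *prev_mm);

	/*
	 * Hypothetical caller: runs work under a temporary mm with IRQs
	 * enabled. Scheduling is still forbidden in a temporary mm, so
	 * preemption is disabled for the duration.
	 */
	static void example_use_with_irqs_on(struct mm_struct *temp_mm)
	{
		struct mm_struct *prev_mm;

		preempt_disable();		/* no scheduling in a temporary mm */
		prev_mm = use_temporary_mm(temp_mm);

		/* ... work needing temp_mm's mappings; IRQs may stay on ... */

		unuse_temporary_mm(prev_mm);
		preempt_enable();
	}

Per the updated comment, callers doing sensitive memory writes (e.g. text
poking) should still disable IRQs as well; only users such as EFI runtime
service calls benefit from the relaxed, preemption-only requirement.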