vmstat: disable vmstat_work on vmstat_cpu_down_prep()
author    Koichiro Den <koichiro.den@canonical.com>
          Sat, 21 Dec 2024 03:33:20 +0000 (12:33 +0900)
committer Andrew Morton <akpm@linux-foundation.org>
          Tue, 31 Dec 2024 01:59:10 +0000 (17:59 -0800)
Even after the mm/vmstat:online teardown, the shepherd may still queue work
for the dying CPU until that CPU is removed from the online mask.  While
it's quite rare, this means that after unbind_workers() unbinds a per-cpu
kworker, it potentially runs vmstat_update for the dying CPU on an
irrelevant CPU before the dying CPU reaches the atomic AP states.  When
CONFIG_DEBUG_PREEMPT=y, this results in the following error and backtrace.

  BUG: using smp_processor_id() in preemptible [00000000] code: \
                                               kworker/7:3/1702
  caller is refresh_cpu_vm_stats+0x235/0x5f0
  CPU: 0 UID: 0 PID: 1702 Comm: kworker/7:3 Tainted: G
  Tainted: [N]=TEST
  Workqueue: mm_percpu_wq vmstat_update
  Call Trace:
   <TASK>
   dump_stack_lvl+0x8d/0xb0
   check_preemption_disabled+0xce/0xe0
   refresh_cpu_vm_stats+0x235/0x5f0
   vmstat_update+0x17/0xa0
   process_one_work+0x869/0x1aa0
   worker_thread+0x5e5/0x1100
   kthread+0x29e/0x380
   ret_from_fork+0x2d/0x70
   ret_from_fork_asm+0x1a/0x30
   </TASK>
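
For context, an abridged sketch of vmstat_shepherd() in mm/vmstat.c (the
isolated-CPU check is left out here).  The shepherd walks cpu_online_mask,
so it can still hand vmstat_work to a CPU whose mm/vmstat:online teardown
has already run, right up until the CPU is cleared from the mask:

  static void vmstat_shepherd(struct work_struct *w)
  {
          int cpu;

          cpus_read_lock();
          /* queue per-cpu work for every CPU still present in the online mask */
          for_each_online_cpu(cpu) {
                  struct delayed_work *dw = &per_cpu(vmstat_work, cpu);

                  if (!delayed_work_pending(dw) && need_update(cpu))
                          queue_delayed_work_on(cpu, mm_percpu_wq, dw, 0);

                  cond_resched();
          }
          cpus_read_unlock();

          /* re-arm the shepherd itself */
          schedule_delayed_work(&shepherd,
                  round_jiffies_relative(sysctl_stat_interval));
  }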

So, for mm/vmstat:online, disable vmstat_work reliably on teardown and
symmetrically enable it on startup.

Link: https://lkml.kernel.org/r/20241221033321.4154409-1-koichiro.den@canonical.com
Signed-off-by: Koichiro Den <koichiro.den@canonical.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm/vmstat.c

index 4d016314a56c95e6247f8970511afb364e686278..0889b75cef1491d53ee7ee8a6c6ae97eb175c347 100644
@@ -2148,13 +2148,14 @@ static int vmstat_cpu_online(unsigned int cpu)
        if (!node_state(cpu_to_node(cpu), N_CPU)) {
                node_set_state(cpu_to_node(cpu), N_CPU);
        }
+       enable_delayed_work(&per_cpu(vmstat_work, cpu));
 
        return 0;
 }
 
 static int vmstat_cpu_down_prep(unsigned int cpu)
 {
-       cancel_delayed_work_sync(&per_cpu(vmstat_work, cpu));
+       disable_delayed_work_sync(&per_cpu(vmstat_work, cpu));
        return 0;
 }
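
Note on the fix: cancel_delayed_work_sync() only removes work that is
already queued, so the shepherd can immediately re-queue vmstat_work for
the still-online dying CPU.  disable_delayed_work_sync() additionally
leaves the work item marked disabled, which, per the delayed-work
disable/enable semantics, makes subsequent queue_delayed_work_on() calls
from the shepherd no-ops until the matching enable_delayed_work() runs in
vmstat_cpu_online() when the CPU comes back.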