sched/task.h: fix the wrong comment on task_lock() nesting with tasklist_lock
author Oleg Nesterov <oleg@redhat.com>
Sun, 14 Sep 2025 11:09:08 +0000 (13:09 +0200)
committer Andrew Morton <akpm@linux-foundation.org>
Tue, 23 Sep 2025 03:10:59 +0000 (20:10 -0700)
The ancient comment above task_lock() states that it can be nested outside
of read_lock(&tasklist_lock), but this is no longer true:

  CPU_0                  CPU_1                  CPU_2

  task_lock()            read_lock(tasklist)
                                                write_lock_irq(tasklist)
  read_lock(tasklist)    task_lock()

Unless CPU_0 calls read_lock() in IRQ context, queued_read_lock_slowpath()
won't get the lock immediately; it will spin waiting for the pending
writer on CPU_2, resulting in a deadlock.
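
A hedged sketch of the three-CPU scenario above, written as three made-up
helper functions (cpu0_path(), cpu1_path() and cpu2_path() are illustrative
names, not code from this patch):

  #include <linux/sched/task.h>
  #include <linux/sched/signal.h>         /* tasklist_lock */

  static void cpu0_path(struct task_struct *p)
  {
          task_lock(p);                   /* takes p->alloc_lock */
          /*
           * Queued rwlocks are fair: this reader queues behind the
           * pending writer on CPU_2 instead of sharing the lock with
           * the reader on CPU_1 (unless called from IRQ context).
           */
          read_lock(&tasklist_lock);
          read_unlock(&tasklist_lock);
          task_unlock(p);
  }

  static void cpu1_path(struct task_struct *p)
  {
          read_lock(&tasklist_lock);
          task_lock(p);                   /* waits for CPU_0 -> deadlock */
          task_unlock(p);
          read_unlock(&tasklist_lock);
  }

  static void cpu2_path(void)
  {
          write_lock_irq(&tasklist_lock); /* queues behind CPU_1's reader */
          write_unlock_irq(&tasklist_lock);
  }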

Link: https://lkml.kernel.org/r/20250914110908.GA18769@redhat.com
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Cc: Christian Brauner <brauner@kernel.org>
Cc: Jiri Slaby <jirislaby@kernel.org>
Cc: Mateusz Guzik <mjguzik@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
include/linux/sched/task.h

index ea41795a352bcad044f9d74ab2d331ca60a37ff5..8ff98b18b24b98a2d8cac03d97ea1c7e4bd4f6fa 100644
@@ -210,9 +210,8 @@ static inline struct vm_struct *task_stack_vm_area(const struct task_struct *t)
  * pins the final release of task.io_context.  Also protects ->cpuset and
  * ->cgroup.subsys[]. And ->vfork_done. And ->sysvshm.shm_clist.
  *
- * Nests both inside and outside of read_lock(&tasklist_lock).
- * It must not be nested with write_lock_irq(&tasklist_lock),
- * neither inside nor outside.
+ * Nests inside of read_lock(&tasklist_lock). It must not be nested with
+ * write_lock_irq(&tasklist_lock), neither inside nor outside.
  */
 static inline void task_lock(struct task_struct *p)
 {
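
As a usage note on the updated comment, a minimal hypothetical caller showing
the ordering it still permits, task_lock() nested inside
read_lock(&tasklist_lock) (walk_children_locked() is an invented name, and the
same includes as the sketch above are assumed):

  static void walk_children_locked(struct task_struct *parent)
  {
          struct task_struct *child;

          read_lock(&tasklist_lock);
          list_for_each_entry(child, &parent->children, sibling) {
                  task_lock(child);       /* allowed: nested inside read_lock() */
                  /* inspect fields protected by task_lock(), e.g. ->fs, ->sysvshm */
                  task_unlock(child);
          }
          read_unlock(&tasklist_lock);
  }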