From 5d808c78d97251af1d3a3e4f253e7d6c39fd871e Mon Sep 17 00:00:00 2001
From: Tianchen Ding <dtcccc@linux.alibaba.com>
Date: Tue, 31 Dec 2024 13:50:20 +0800
Subject: [PATCH] sched: Fix race between yield_to() and try_to_wake_up()

We met a SCHED_WARN in set_next_buddy():

  __warn_printk
  set_next_buddy
  yield_to_task_fair
  yield_to
  kvm_vcpu_yield_to [kvm]
  ...

After a short dig, we found that the rq lock held by yield_to() may not
be the lock of the rq that the target task actually belongs to. There
is a race window against try_to_wake_up():

        CPU0                                    target_task

                                                blocking on CPU1
        lock rq0 & rq1
        double check task_rq == p_rq, ok
                                                woken to CPU2 (lock task_pi & rq2)
                                                task_rq = rq2
        yield_to_task_fair (w/o lock rq2)

In this race window, yield_to() operates on the task without holding
the correct lock. Fix this by taking the task's pi_lock first.

Fixes: d95f41220065 ("sched: Add yield_to(task, preempt) functionality")
Signed-off-by: Tianchen Ding <dtcccc@linux.alibaba.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20241231055020.6521-1-dtcccc@linux.alibaba.com
---
 kernel/sched/syscalls.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sched/syscalls.c b/kernel/sched/syscalls.c
index ff0e5ab4e37c..943406c4ee86 100644
--- a/kernel/sched/syscalls.c
+++ b/kernel/sched/syscalls.c
@@ -1433,7 +1433,7 @@ int __sched yield_to(struct task_struct *p, bool preempt)
 	struct rq *rq, *p_rq;
 	int yielded = 0;
 
-	scoped_guard (irqsave) {
+	scoped_guard (raw_spinlock_irqsave, &p->pi_lock) {
 		rq = this_rq();
 
 again:
--
2.50.1
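
For readers less familiar with the scheduler locking rules, below is a
minimal sketch of the locking order the patch establishes. It is not the
verbatim kernel source: the scoped_guard()/guard() cleanup helpers, the
p->on_rq and preempt handling, and the error paths are reduced to their
essentials, and yield_to_sketch() is a made-up name. The real helpers it
leans on (raw_spin_lock_irqsave(), this_rq(), task_rq(), double_rq_lock(),
sched_class->yield_to_task()) are existing kernel/sched internals. The key
point is that try_to_wake_up() must take p->pi_lock before it can select a
new CPU for p, so holding p->pi_lock across the rq double-lock and the
task_rq() re-check closes the window shown in the diagram above.

/*
 * Simplified sketch of yield_to() after the fix. The real code in
 * kernel/sched/syscalls.c expresses the pi_lock section with
 * scoped_guard(raw_spinlock_irqsave, &p->pi_lock).
 */
static int yield_to_sketch(struct task_struct *p)
{
	struct rq *rq, *p_rq;
	unsigned long flags;
	int yielded = 0;

	/*
	 * Step 1 (the fix): pin p->pi_lock so try_to_wake_up() cannot
	 * migrate p to another CPU while we work on its runqueue.
	 */
	raw_spin_lock_irqsave(&p->pi_lock, flags);

	rq = this_rq();
again:
	p_rq = task_rq(p);

	/* Step 2: lock both runqueues in the canonical order. */
	double_rq_lock(rq, p_rq);

	/*
	 * Step 3: re-check the target rq. Before the fix this check could
	 * pass and p could still be woken onto a third rq afterwards,
	 * because nothing stopped ttwu() from changing task_rq(p); with
	 * pi_lock held the result of the re-check stays stable.
	 */
	if (task_rq(p) != p_rq) {
		double_rq_unlock(rq, p_rq);
		goto again;
	}

	/* Step 4: now it is safe to poke p's rq, e.g. set_next_buddy(). */
	if (rq->curr->sched_class == p->sched_class &&
	    rq->curr->sched_class->yield_to_task)
		yielded = rq->curr->sched_class->yield_to_task(p_rq, p);

	double_rq_unlock(rq, p_rq);
	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
	return yielded;
}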