dl_add_task_root_domain() is called during sched domain rebuild:
  rebuild_sched_domains_locked()
    partition_and_rebuild_sched_domains()
      rebuild_root_domains()
         for all top_cpuset descendants:
           update_tasks_root_domain()
             for all tasks of cpuset:
               dl_add_task_root_domain()
Change it so that only the task's pi lock is taken to check whether the
task has the SCHED_DEADLINE (DL) policy. Only if p is a DL task, take
the rq lock as well so the root domain's DL bandwidth structure can be
dereferenced safely.
Most tasks have another policy (typically SCHED_NORMAL) and can now
bail out without taking the rq lock.
One thing to note here: even if there are no DL user tasks, a system
with slow frequency switching and the schedutil cpufreq governor runs
one DL task (sugov) per frequency domain, and these tasks participate
in DL bandwidth management.
Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Quentin Perret <qperret@google.com>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Reviewed-by: Daniel Bristot de Oliveira <bristot@redhat.com>
Acked-by: Juri Lelli <juri.lelli@redhat.com>
Link: https://lkml.kernel.org/r/20210119083542.19856-1-dietmar.eggemann@arm.com
        struct rq *rq;
        struct dl_bw *dl_b;
 
-       rq = task_rq_lock(p, &rf);
-       if (!dl_task(p))
-               goto unlock;
+       raw_spin_lock_irqsave(&p->pi_lock, rf.flags);
+       if (!dl_task(p)) {
+               raw_spin_unlock_irqrestore(&p->pi_lock, rf.flags);
+               return;
+       }
+
+       rq = __task_rq_lock(p, &rf);
 
        dl_b = &rq->rd->dl_bw;
        raw_spin_lock(&dl_b->lock);
 
        __dl_add(dl_b, p->dl.dl_bw, cpumask_weight(rq->rd->span));
 
        raw_spin_unlock(&dl_b->lock);
 
-unlock:
        task_rq_unlock(rq, p, &rf);
 }