sched/fair: Implement update_blocked_averages() for CONFIG_FAIR_GROUP_SCHED=n
author Vincent Guittot <vincent.guittot@linaro.org>
Wed, 15 Jul 2015 00:04:38 +0000 (08:04 +0800)
committer Allen Pais <allen.pais@oracle.com>
Tue, 16 May 2017 04:31:08 +0000 (10:01 +0530)
The load and the utilization of idle CPUs must be updated periodically in
order to decay the blocked part.

If CONFIG_FAIR_GROUP_SCHED is not set, the load and utilization of idle
CPUs are not decayed and stay at the values they had before the CPUs
became idle.

Orabug: 25544560

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Yuyang Du <yuyang.du@intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: arjan@linux.intel.com
Cc: bsegall@google.com
Cc: dietmar.eggemann@arm.com
Cc: fengguang.wu@intel.com
Cc: len.brown@intel.com
Cc: morten.rasmussen@arm.com
Cc: pjt@google.com
Cc: rafael.j.wysocki@intel.com
Cc: umgwanakikbuti@gmail.com
Link: http://lkml.kernel.org/r/1436918682-4971-4-git-send-email-yuyang.du@intel.com
[ Fixed up the SOB chain. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 6c1d47c0827304949e0eb9479f4d587f226fac8b)

Signed-off-by: Atish Patra <atish.patra@oracle.com>
Signed-off-by: Allen Pais <allen.pais@oracle.com>
kernel/sched/fair.c

index 9fe988a5d323d038710dd44097f5d4917b00ff68..3ee761ee5fa76313d980291acc10ef7d7b06f1ee 100644 (file)
@@ -5861,6 +5861,14 @@ static unsigned long task_h_load(struct task_struct *p)
 #else
 static inline void update_blocked_averages(int cpu)
 {
+       struct rq *rq = cpu_rq(cpu);
+       struct cfs_rq *cfs_rq = &rq->cfs;
+       unsigned long flags;
+
+       raw_spin_lock_irqsave(&rq->lock, flags);
+       update_rq_clock(rq);
+       update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq);
+       raw_spin_unlock_irqrestore(&rq->lock, flags);
 }
 
 static unsigned long task_h_load(struct task_struct *p)