author    Peter Zijlstra <peterz@infradead.org>
          Wed, 22 Jun 2016 13:14:26 +0000 (15:14 +0200)
committer Jack Vogel <jack.vogel@oracle.com>
          Thu, 5 Apr 2018 20:56:13 +0000 (13:56 -0700)
commit    f2b8303c04f14cd682bc415ea7dc041b864a53ff
tree      7c0ca28799a26d3b2476146b9b6bcd131e73ffbf
parent    ca201100fe014769deb61ab54d51a3dc5a3183ff

sched/fair: Initialize and rework throttle_count for new task-groups

This patch is a combination of the following three patches from mainline:

094f469172e0 sched/fair: Initialize throttle_count for new task-groups lazily

A cgroup created inside a throttled group must inherit the current
throttle_count. A broken throttle_count allows throttled entities to
be nominated as the next buddy, which later leads to a NULL pointer
dereference in pick_next_task_fair().

This patch initializes cfs_rq->throttle_count at the first enqueue:
laziness allows us to skip locking every rq at group creation. The
lazy approach will also allow skipping the full sub-tree scan when
throttling the hierarchy (not in this patch).
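
A sketch of the lazy sync, as it appeared at the top of
check_enqueue_throttle() in the mainline commit (the
throttle_uptodate flag and helper names follow mainline and may
differ in this backport):

  static void check_enqueue_throttle(struct cfs_rq *cfs_rq)
  {
          if (!cfs_bandwidth_used())
                  return;

          /* Synchronize the hierarchical throttle counter lazily: */
          if (unlikely(!cfs_rq->throttle_uptodate)) {
                  struct rq *rq = rq_of(cfs_rq);
                  struct cfs_rq *pcfs_rq;
                  struct task_group *tg;

                  cfs_rq->throttle_uptodate = 1;

                  /* Get the closest up-to-date ancestor (leaves go first): */
                  for (tg = cfs_rq->tg->parent; tg; tg = tg->parent) {
                          pcfs_rq = tg->cfs_rq[cpu_of(rq)];
                          if (pcfs_rq->throttle_uptodate)
                                  break;
                  }
                  if (tg) {
                          cfs_rq->throttle_count = pcfs_rq->throttle_count;
                          cfs_rq->throttled_clock_task = rq_clock_task(rq);
                  }
          }

          /* ... existing throttle check continues here ... */
  }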

8663e24d56dc sched/fair: Reorder cgroup creation code

A future patch needs rq->lock held _after_ we link the task_group into
the hierarchy. In order to avoid taking every rq->lock twice, reorder
things a little and create online_fair_sched_group() to be called
after we link the task_group.

All this code is still run from css_alloc(), so css_online() isn't in
fact used for this.
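
Roughly, per the mainline commit (a sketch; the exact backported code
may differ), sched_online_group() links the task_group first and only
then calls the new hook, which takes each rq->lock in turn:

  void online_fair_sched_group(struct task_group *tg)
  {
          struct sched_entity *se;
          struct rq *rq;
          int i;

          for_each_possible_cpu(i) {
                  rq = cpu_rq(i);
                  se = tg->se[i];

                  raw_spin_lock_irq(&rq->lock);
                  post_init_entity_util_avg(se);
                  raw_spin_unlock_irq(&rq->lock);
          }
  }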

55e16d30bd99 sched/fair: Rework throttle_count sync

Since we already take rq->lock when creating a cgroup, use it to also
sync the throttle_count and avoid the extra state and enqueue path
branch.
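
A sketch of the reworked sync, per the mainline commit (names may
differ in this backport): online_fair_sched_group() calls
sync_throttle() under the rq->lock it already holds, copying the
parent's state and replacing both the throttle_uptodate flag and the
enqueue-path branch above:

  static void sync_throttle(struct task_group *tg, int cpu)
  {
          struct cfs_rq *pcfs_rq, *cfs_rq;

          if (!cfs_bandwidth_used())
                  return;

          if (!tg->parent)
                  return;

          cfs_rq = tg->cfs_rq[cpu];
          pcfs_rq = tg->parent->cfs_rq[cpu];

          cfs_rq->throttle_count = pcfs_rq->throttle_count;
          cfs_rq->throttled_clock_task = rq_clock_task(cpu_rq(cpu));
  }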

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: bsegall@google.com
Cc: linux-kernel@vger.kernel.org
[ Fixed build warning. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>

The patches have been combined because applying them separately would
cause a KABI breakage and introduce a dummy function.

Orabug: 27787518

Conflicts:
kernel/sched/fair.c
kernel/sched/core.c
kernel/sched/sched.h

Signed-off-by: Gayatri Vasudevan <gayatri.vasudevan@oracle.com>
Reviewed-by: Mridula Shastry <mridula.c.shastry@oracle.com>