sched: Move the loadavg code to a more obvious location
A previous commit f33dfff75d968 ("sched/fair: Rewrite runnable load
and utilization average tracking") introduced a regression in the
global load average as reported by uptime: the active load average
computation must be invoked periodically to update the delta for each
runqueue. Fix this by pulling in upstream commit 3289bdb42 ("sched:
Move the loadavg code to a more obvious location") instead of a
quick fix.
Before the fix:

            procs_
 when       running  load average
 ========   =======  =================
 13:32:46   1        0.65, 0.22, 0.08
 13:33:47   129      0.78, 0.33, 0.12
 13:34:47   129      0.74, 0.41, 0.16
 13:35:47   129      0.60, 0.42, 0.18
 13:36:47   129      0.77, 0.49, 0.22
 13:37:47   129      0.78, 0.55, 0.26
After the fix:

            procs_
 when       running  load average
 ========   =======  =================
 19:46:35   1        0.58, 0.38, 0.16
 19:47:35   129      74.02, 21.09, 7.27
 19:48:35   129      103.16, 39.08, 14.31
 19:49:35   129      114.25, 53.95, 20.98
 19:52:36   257      172.40, 97.26, 42.96
 19:53:37   257      221.54, 124.95, 55.87
 19:54:37   257      237.13, 147.05, 67.80
Original upstream commit message:
I could not find the loadavg code.. turns out it was hidden in a file
called proc.c. It further got mingled up with the cruft per rq load
indexes (which we really want to get rid of).
Move the per rq load indexes into the fair.c load-balance code (that's
the only thing that uses them) and rename proc.c to loadavg.c so we
can find it again.
Orabug: 26266279
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Paul Gortmaker <paul.gortmaker@windriver.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
[ Did minor cleanups to the code. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 3289bdb429884c0279bf9ab72dff7b934f19dfc6)
Conflicts:
kernel/sched/fair.c
kernel/sched/loadavg.c
kernel/sched/sched.h
Signed-off-by: Vijay Kumar <vijay.ac.kumar@oracle.com>
Signed-off-by: Atish Patra <atish.patra@oracle.com>
Signed-off-by: Allen Pais <allen.pais@oracle.com>
Reviewed-by: Dhaval Giani <dhaval.giani@oracle.com>