(cherry picked from commit a61246c96195fc5f7500f6842e883b9eb1567d8d) Signed-off-by: Calum Mackay <calum.mackay@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Tested-by: alfredo.ramirez@oracle.com Signed-off-by: Brian Maly <brian.maly@oracle.com>
Allow the low-level unbound workqueue cpumask to be modified through
sysfs. This is done by traversing the entire workqueue list
and calling apply_wqattrs_prepare() on the unbound workqueues
with the new low-level mask. Only after all the preparations are done
do we commit them all together.
Ordered workqueues are excluded from the low-level unbound workqueue
cpumask handling for now; they will be handled in the near future.
All the (default & per-node) pwqs are mandatorily controlled by
the low level cpumask. If the user configured cpumask doesn't overlap
with the low level cpumask, the low level cpumask will be used for the
wq instead.
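As a rough illustration of that fallback, here is a minimal sketch (not the actual wq_calc_node_cpumask()/apply_wqattrs_prepare() code; the helper name below is made up):

#include <linux/cpumask.h>

/* Sketch: restrict the user-configured mask by the low-level mask, with fallback. */
static void wq_effective_cpumask(struct cpumask *effective,
				 const struct cpumask *user_cpumask,
				 const struct cpumask *wq_unbound_cpumask)
{
	cpumask_and(effective, user_cpumask, wq_unbound_cpumask);

	/* no overlap: use the low-level cpumask alone */
	if (cpumask_empty(effective))
		cpumask_copy(effective, wq_unbound_cpumask);
}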
The comment of wq_calc_node_cpumask() is updated and explicitly
requires that its first argument should be the attrs of the default
pwq.
The default wq_unbound_cpumask is cpu_possible_mask. The workqueue
subsystem doesn't know its best default value, let the system manager
or the other subsystem set it when needed.
Changes from V8:
merge the calculating code for the attrs of the default pwq together.
minor change the code&comments for saving the user configured attrs.
remove unnecessary list_del().
minor update the comment of wq_calc_node_cpumask().
update the comment of workqueue_set_unbound_cpumask();
Cc: Christoph Lameter <cl@linux.com> Cc: Kevin Hilman <khilman@linaro.org> Cc: Lai Jiangshan <laijs@cn.fujitsu.com> Cc: Mike Galbraith <bitbucket@online.de> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Tejun Heo <tj@kernel.org> Cc: Viresh Kumar <viresh.kumar@linaro.org> Cc: Frederic Weisbecker <fweisbec@gmail.com> Original-patch-by: Frederic Weisbecker <fweisbec@gmail.com> Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com> Signed-off-by: Tejun Heo <tj@kernel.org>
(cherry picked from commit 042f7df15a4fff8eec42873f755aea848dcdedd1)
Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Create a cpumask that limits the affinity of all unbound workqueues.
This cpumask is controlled through a file at the root of the workqueue
sysfs directory.
It works on a lower-level than the per WQ_SYSFS workqueues cpumask files
such that the effective cpumask applied for a given unbound workqueue is
the intersection of /sys/devices/virtual/workqueue/$WORKQUEUE/cpumask and
the new /sys/devices/virtual/workqueue/cpumask file.
This patch implements the basic infrastructure and the read interface.
wq_unbound_cpumask is initially set to cpu_possible_mask.
Cc: Christoph Lameter <cl@linux.com> Cc: Kevin Hilman <khilman@linaro.org> Cc: Lai Jiangshan <laijs@cn.fujitsu.com> Cc: Mike Galbraith <bitbucket@online.de> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Tejun Heo <tj@kernel.org> Cc: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com> Signed-off-by: Tejun Heo <tj@kernel.org>
(cherry picked from commit b05a79280b346eb24ddb73b39988398015291075)
Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Jann Horn [Mon, 25 Jun 2018 14:25:44 +0000 (16:25 +0200)]
scsi: sg: mitigate read/write abuse
As Al Viro noted in commit 128394eff343 ("sg_write()/bsg_write() is not fit
to be called under KERNEL_DS"), sg improperly accesses userspace memory
outside the provided buffer, permitting kernel memory corruption via
splice(). But it doesn't just do it on ->write(), also on ->read().
As a band-aid, make sure that the ->read() and ->write() handlers can not
be called in weird contexts (kernel context or credentials different from
file opener), like for ib_safe_file_access().
If someone needs to use these interfaces from different security contexts,
a new interface should be written that goes through the ->ioctl() handler.
I've mostly copypasted ib_safe_file_access() over as sg_safe_file_access()
because I couldn't find a good common header - please tell me if you know a
better way.
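For reference, a sketch of what such a check can look like, modelled on ib_safe_file_access(); helper names such as uaccess_kernel() vary across kernel versions, so treat this as an approximation rather than the exact sg_check_file_access():

static int sg_check_file_access(struct file *filp, const char *caller)
{
	/* reject callers whose credentials changed after open() */
	if (filp->f_cred != current_real_cred()) {
		pr_err_once("%s: credentials changed since open(), rejecting\n",
			    caller);
		return -EPERM;
	}
	/* reject calls made under KERNEL_DS (e.g. via splice) */
	if (uaccess_kernel()) {
		pr_err_once("%s: called from kernel context, rejecting\n",
			    caller);
		return -EACCES;
	}
	return 0;
}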
[mkp: s/_safe_/_check_/]
Fixes: 1da177e4c3f4 ("Linux-2.6.12-rc2") Cc: <stable@vger.kernel.org> Signed-off-by: Jann Horn <jannh@google.com> Acked-by: Douglas Gilbert <dgilbert@interlog.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Orabug: 28824718
CVE: CVE-2017-13168
Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Signed-off-by: Allen Pais <allen.pais@oracle.com>
(cherry picked from commit 26b5b874aff5659a7e26e5b1997e3df2c41fa7fd) Signed-off-by: Brian Maly <brian.maly@oracle.com>
Conflicts:
drivers/scsi/sg.c
Alexandre Chartre [Tue, 16 Oct 2018 18:19:46 +0000 (20:19 +0200)]
x86/speculation: Retpoline should always be available on Skylake
Now that we can dynamically toggle retpoline on or off, retpoline
should always be available even on Skylake platforms, without
having to explicitly select retpoline at boot time.
x86/speculation: Add sysfs entry to enable/disable retpoline
Add /sys/kernel/debug/x86/retpoline_enabled to enable/disable retpoline.
Enabling retpoline will also enable IBRS for the firmware.
Note that IBRS and retpoline can't be enabled together. Enabling retpoline
while IBRS is already enabled will automatically disable IBRS. Similarly,
enabling IBRS while retpoline is already enabled will automatically disable
retpoline.
On Skylake, retpoline is not provided and can't be enabled unless the system
has been explicitly booted with retpoline (using spectre_v2=retpoline or
spectre_v2_heuristics=skylake=off).
Also fix the behavior when retpoline is not available (!CONFIG_RETPOLINE):
now we will try using IBRS (if it is available) instead of not using any
mitigation.
Signed-off-by: Alexandre Chartre <alexandre.chartre@oracle.com> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
(cherry picked from UEK5 commit d75554157882d9b4df91f0b2bbc4907e2731781e)
[Backport: a large part of d75554157882d9b4df91f0b2bbc4907e2731781e
was already ported in previous commit ("x86/speculation: switch to IBRS
when loading a non-retpoline module"). This ports the remaining part
which effectively adds the retpoline_enabled sysfs entry.
Also we issue a warning when enabling retpoline and a non-retpoline
module is loaded.]
Signed-off-by: Alexandre Chartre <alexandre.chartre@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
x86/speculation: Switch to IBRS when loading a non-retpoline module
When retpoline is used and a non-retpoline module gets loaded, IBRS
is enabled in addition to retpoline. Now that we can dynamically
disable retpoline, do so and use IBRS only. That way, IBRS and
retpoline will never be enabled together.
Changes are located in kernel/module.c. Additional changes are from
parts of UEK5 commit d75554157882d9b4df91f0b2bbc4907e2731781e
("x86/speculation: Add sysfs entry to enable/disable retpoline")
to provide the mechanism for switching between retpoline and IBRS.
Signed-off-by: Alexandre Chartre <alexandre.chartre@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Now that X86_FEATURE_RETPOLINE is always set, some assembly
alternatives can be simplified or even removed. Also add an early
check of retpoline_enabled_key in CALL_NOSPEC to avoid an extra
call into the thunk when retpoline is disabled.
Signed-off-by: Alexandre Chartre <alexandre.chartre@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Reviewed-by: Zhenzhong Duan <zhenzhong.duan@oracle.com>
(cherry picked from UEK5 commit eb88d822befdc73952ae7c00cfcbce9ff5aad574)
Signed-off-by: Alexandre Chartre <alexandre.chartre@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Alexandre Chartre [Thu, 30 Aug 2018 13:51:47 +0000 (15:51 +0200)]
x86/speculation: Use static key to enable/disable retpoline
Change the way retpoline is enabled.
Up to now, the retpoline feature was enabled using alternatives,
depending on whether the X86_FEATURE_RETPOLINE cpu capability was set
or not. The use of alternatives prevented enabling or disabling
retpoline after the boot sequence.
Now, retpoline is enabled using jump labels controlled by the
retpoline_enabled_key static key. This enables retpoline to be
selectively enabled or disabled even after the boot sequence.
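A minimal sketch of that mechanism using the generic static-key API (the key name comes from this series; everything else here is illustrative):

#include <linux/jump_label.h>

DEFINE_STATIC_KEY_FALSE(retpoline_enabled_key);

/* fast-path callers test the key instead of a boot-time alternative */
static inline bool retpoline_active(void)
{
	return static_branch_unlikely(&retpoline_enabled_key);
}

static void retpoline_set_enabled(bool enable)
{
	if (enable)
		static_branch_enable(&retpoline_enabled_key);
	else
		static_branch_disable(&retpoline_enabled_key);
}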
Signed-off-by: Alexandre Chartre <alexandre.chartre@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Signed-off-by: Alexandre Chartre <alexandre.chartre@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Jason Baron [Wed, 3 Aug 2016 20:46:36 +0000 (13:46 -0700)]
jump_label: remove bug.h, atomic.h dependencies for HAVE_JUMP_LABEL
The current jump_label.h includes bug.h for things such as WARN_ON().
This makes the header problematic for inclusion by kernel.h or any
headers that kernel.h includes, since bug.h includes kernel.h (circular
dependency). The inclusion of atomic.h is similarly problematic. Thus,
this should make jump_label.h 'includable' from most places.
Link: http://lkml.kernel.org/r/7060ce35ddd0d20b33bf170685e6b0fab816bdf2.1467837322.git.jbaron@akamai.com Signed-off-by: Jason Baron <jbaron@akamai.com> Cc: "David S. Miller" <davem@davemloft.net> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Chris Metcalf <cmetcalf@mellanox.com> Cc: Heiko Carstens <heiko.carstens@de.ibm.com> Cc: Joe Perches <joe@perches.com> Cc: Martin Schwidefsky <schwidefsky@de.ibm.com> Cc: Michael Ellerman <mpe@ellerman.id.au> Cc: Paul Mackerras <paulus@samba.org> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 1f69bf9c6137602cd028c96b4f8329121ec89231)
Signed-off-by: Alexandre Chartre <alexandre.chartre@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
CPU 1                                   CPU 2

static_key_slow_inc()
 atomic_inc_not_zero()
  -> key.enabled == 0, no increment
 jump_label_lock()
 atomic_inc_return()
  -> key.enabled == 1 now
                                        static_key_slow_inc()
                                         atomic_inc_not_zero()
                                          -> key.enabled == 1, inc to 2
                                         return
                                        ** static key is wrong!
 jump_label_update()
 jump_label_unlock()
Testing the static key at the point marked by (**) will follow the
wrong path for jumps that have not been patched yet. This can
actually happen when creating many KVM virtual machines with userspace
LAPIC emulation; just run several copies of the following program:
    int main(void)
    {
        for (;;) {
            int kvmfd = open("/dev/kvm", O_RDONLY);
            int vmfd = ioctl(kvmfd, KVM_CREATE_VM, 0);

            close(ioctl(vmfd, KVM_CREATE_VCPU, 1));
            close(vmfd);
            close(kvmfd);
        }
        return 0;
    }
Every KVM_CREATE_VCPU ioctl will attempt a static_key_slow_inc() call.
The static key's purpose is to skip NULL pointer checks and indeed one
of the processes eventually dereferences NULL.
As explained in the commit that introduced the bug:
jump_label_update() needs key.enabled to be true. The solution adopted
here is to temporarily make key.enabled == -1, and go down the
slow path when key.enabled <= 0.
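A condensed sketch of the adopted approach (trimmed from the upstream fix; comments abbreviated):

void static_key_slow_inc(struct static_key *key)
{
	int v, v1;

	/*
	 * Fast path: only increment while enabled > 0.  The value -1 marks a
	 * key whose first enable is still patching code, so those callers
	 * fall through and serialize on jump_label_lock() instead.
	 */
	for (v = atomic_read(&key->enabled); v > 0; v = v1) {
		v1 = atomic_cmpxchg(&key->enabled, v, v + 1);
		if (v1 == v)
			return;
	}

	jump_label_lock();
	if (atomic_read(&key->enabled) == 0) {
		atomic_set(&key->enabled, -1);	/* send others down the slow path */
		jump_label_update(key);
		atomic_set(&key->enabled, 1);	/* publish only after patching */
	} else {
		atomic_inc(&key->enabled);
	}
	jump_label_unlock();
}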
Reported-by: Dmitry Vyukov <dvyukov@google.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: <stable@vger.kernel.org> # v4.3+ Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Fixes: 706249c222f6 ("locking/static_keys: Rework update logic") Link: http://lkml.kernel.org/r/1466527937-69798-1-git-send-email-pbonzini@redhat.com
[ Small stylistic edits to the changelog and the code. ] Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from 4c5ea0a9cd02d6aa8adc86e100b2a4cff8d614ff)
Signed-off-by: Alexandre Chartre <alexandre.chartre@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
jump_label: make static_key_enabled() work on static_key_true/false types too
static_key_enabled() can be used on struct static_key but not on its
wrapper types static_key_true and static_key_false. The function is
useful for debugging and management of static keys. Update it so that
it can be used for the wrapper types too.
Signed-off-by: Tejun Heo <tj@kernel.org> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org>
(cherry picked from fa128fd735bd236b6b04d3fedfed7a784137c185)
Signed-off-by: Alexandre Chartre <alexandre.chartre@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Signed-off-by: Alexandre Chartre <alexandre.chartre@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Signed-off-by: Jonathan Corbet <corbet@lwn.net> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Jason Baron <jbaron@akamai.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20150907131803.54c027e1@lwn.net Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from edcd591c77a48da753456f92daf8bb50fe9bac93)
Signed-off-by: Alexandre Chartre <alexandre.chartre@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Signed-off-by: Alexandre Chartre <alexandre.chartre@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Andy Lutomirski [Thu, 12 Nov 2015 20:59:03 +0000 (12:59 -0800)]
x86/asm: Add asm macros for static keys/jump labels
Unfortunately, we can only do this if HAVE_JUMP_LABEL. In
principle, we could do some serious surgery on the core jump
label infrastructure to keep the patch infrastructure available
on x86 on all builds, but that's probably not worth it.
Implementing the macros using a conditional branch as a fallback
seems like a bad idea: we'd have to clobber flags.
This limitation can't cause silent failures -- trying to include
asm/jump_label.h at all on a non-HAVE_JUMP_LABEL kernel will
error out. The macro's users are responsible for handling this
issue themselves.
Signed-off-by: Andy Lutomirski <luto@kernel.org> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Link: http://lkml.kernel.org/r/63aa45c4b692e8469e1876d6ccbb5da707972990.1447361906.git.luto@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 2671c3e4fe2a34bd9bf2eecdf5d1149d4b55dbdf)
Signed-off-by: Alexandre Chartre <alexandre.chartre@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Signed-off-by: Alexandre Chartre <alexandre.chartre@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Peter Zijlstra [Wed, 12 Aug 2015 19:04:22 +0000 (21:04 +0200)]
jump_label/x86: Work around asm build bug on older/backported GCCs
Boris reported that gcc version 4.4.4 20100503 (Red Hat
4.4.4-2) fails to build linux-next kernels that have
this fresh commit via the locking tree:
11276d5306b8 ("locking/static_keys: Add a new static_key interface")
The problem appears to be that even though @key and @branch are
compile time constants, it doesn't see the following expression
as an immediate value:
&((char *)key)[branch]
More recent GCCs don't appear to have this problem.
In particular, Red Hat backported the 'asm goto' feature into 4.4;
'normal' 4.4 compilers will not have this feature and thus will not
run into this asm.
The workaround is to supply both values to the asm as immediates
and do the addition in asm.
Suggested-by: H. Peter Anvin <hpa@zytor.com> Reported-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Tested-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from d420acd816c07c7be31bd19d09cbcb16e5572fa6)
Signed-off-by: Alexandre Chartre <alexandre.chartre@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Signed-off-by: Alexandre Chartre <alexandre.chartre@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Peter Zijlstra [Fri, 24 Jul 2015 13:06:37 +0000 (15:06 +0200)]
locking/static_keys: Rework update logic
Instead of spreading the branch_default logic all over the place,
concentrate it into the one jump_label_type() function.
This does mean we need to actually increment/decrement the enabled
count _before_ calling the update path, otherwise jump_label_type()
will not see the right state.
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from 706249c222f68471b6f8e9e8e9b77665c404b226)
Signed-off-by: Alexandre Chartre <alexandre.chartre@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Signed-off-by: Alexandre Chartre <alexandre.chartre@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Peter Zijlstra [Fri, 24 Jul 2015 12:55:40 +0000 (14:55 +0200)]
jump_label, locking/static_keys: Rename JUMP_LABEL_TYPE_* and related helpers to the static_key* pattern
Rename the JUMP_LABEL_TYPE_* macros to be JUMP_TYPE_* and move the
inline helpers into kernel/jump_label.c, since that's the only place
they're ever used.
Also rename the helpers where it's all about static keys.
This is the second step in removing the naming confusion that has led to
a stream of avoidable bugs such as:
a833581e372a ("x86, perf: Fix static_key bug in load_mm_cr4()")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from a1efb01feca597b2abbc89873b40ef8ec6690168)
Signed-off-by: Alexandre Chartre <alexandre.chartre@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Peter Zijlstra [Fri, 24 Jul 2015 12:45:44 +0000 (14:45 +0200)]
jump_label: Rename JUMP_LABEL_{EN,DIS}ABLE to JUMP_LABEL_{JMP,NOP}
Since, with the branch_default bits, we have already stepped away from
ENABLE meaning a JMP and DISABLE meaning a NOP, and are going to make it
even worse, rename them to make it all clearer.
This way we don't mix multiple levels of logic attributes, but have a
plain 'physical' name for what the current instruction patching status
of a jump label is.
This is a first step in removing the naming confusion that has led to
a stream of avoidable bugs such as:
a833581e372a ("x86, perf: Fix static_key bug in load_mm_cr4()")
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: linux-kernel@vger.kernel.org
[ Beefed up the changelog. ] Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 76b235c6bcb16062d663e2ee96db0b69f2e6bc14)
Signed-off-by: Alexandre Chartre <alexandre.chartre@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Jason Baron <jbaron@akamai.com> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org> Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
(cherry picked from commit bed831f9a251968272dae10a83b512c7db256ef0)
Signed-off-by: Alexandre Chartre <alexandre.chartre@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
x86/speculation: Protect against userspace-userspace spectreRSB
The article "Spectre Returns! Speculation Attacks using the Return Stack
Buffer" [1] describes two new (sub-)variants of spectrev2-like attacks,
making use solely of the RSB contents even on CPUs that don't fallback to
BTB on RSB underflow (Skylake+).
Mitigate userspace-userspace attacks by always unconditionally filling RSB on
context switch when the generic spectrev2 mitigation has been enabled.
[1] https://arxiv.org/pdf/1807.07940.pdf
Signed-off-by: Jiri Kosina <jkosina@suse.cz> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Josh Poimboeuf <jpoimboe@redhat.com> Acked-by: Tim Chen <tim.c.chen@linux.intel.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Borislav Petkov <bp@suse.de> Cc: David Woodhouse <dwmw@amazon.co.uk> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/nycvar.YFH.7.76.1807261308190.997@cbobk.fhfr.pm
(cherry picked from commit fdf82a7856b32d905c39afc85e34364491e46346)
Signed-off-by: Alejandro Jimenez <alejandro.j.jimenez@oracle.com> Reviewed-by: Mark Kanda <mark.kanda@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Conflicts:
arch/x86/kernel/cpu/bugs.c
(UEK4 has the relevant code in arch/x86/kernel/cpu/bugs_64.c.
Also, the upstream patch removes the function is_skylake_era(),
but this patch does not since it is still used in the UEK code)
x86/spectre_v2: Remove remaining references to lfence mitigation
The LFENCE mitigation alone was deemed to be insufficient and the
related code was mostly removed by commit cf3b86bc07619
(Revert "x86/spec_ctrl: Add 'nolfence' knob to disable fallback for
spectre_v2 mitigation")
This patch cleans up two additional references that still exist.
Signed-off-by: Alejandro Jimenez <alejandro.j.jimenez@oracle.com> Reviewed-by: Mark Kanda <mark.kanda@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
The commit being reverted is poorly justified: I can find no discussion
of it in email, and it clearly causes a problem.
If a device which is being recovered fails and is subsequently
re-added to an array, there could easily have been changes to the
array *before* the point where the recovery was up to. So the
recovery must start again from the beginning.
If a spare is being recovered and fails, then when it is re-added we
really should do a bitmap-based recovery up to the recovery-offset,
and then a full recovery from there. Before this reversion, we only
did the "full recovery from there", which is not correct. After this
reversion we will do a full recovery from the start, which is safer
but not ideal.
It will be left to a future patch to arrange the two different styles
of recovery.
Reported-and-tested-by: Nate Dailey <nate.dailey@stratus.com> Signed-off-by: NeilBrown <neilb@suse.com> Cc: stable@vger.kernel.org (3.14+) Fixes: 7eb418851f32 ("md: allow a partially recovered device to be hot-added to an array.")
(cherry pick from upstream commit d01552a76d71f9879af448e9142389ee9be6e95b)
Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Daniel Jordan [Wed, 18 Jul 2018 16:23:25 +0000 (09:23 -0700)]
x86/bugs: ssbd_ibrs_selected called prematurely
spectre_v2_select_mitigation considers the SSB mitigation that's been
selected in its own decision; specifically, it calls ssbd_ibrs_selected
when deciding whether to use IBRS. The problem is, the SSB mitigation
hasn't been chosen yet, so ssbd_ibrs_selected will always return the
same result, possibly the wrong one.
Address this by splitting SSB initialization into two phases. The
first, ssb_select_mitigation, picks the SSB mitigation mode, getting far
enough for spectre_v2_select_mitigation's needs, and the second,
ssb_init, sets up the necessary SSB state based on the mode. The second
phase is necessary because it relies on the outcome of
spectre_v2_select_mitigation.
Fixes: 648702f587df ("x86/bugs/IBRS: Disable SSB (RDS) if IBRS is selected for spectre_v2.") Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Reviewed-by: Alexandre Chartre <alexandre.chartre@oracle.com>
(cherry picked from commit 86a51ae1ce42d9f76e09e3c87c20c3625c6a1a6c)
Qing Huang [Mon, 8 Oct 2018 23:50:53 +0000 (16:50 -0700)]
net/mlx4_core: print firmware version during driver loading
When debugging firmware related issues, it's very helpful to have the
firmware version info printed in the kernel log when the driver is
loaded. It's easier to match error/warning messages with different
FW versions in the log rather than running a separate tool to get
the information during remote debugging.
Henry Willard [Wed, 3 Jan 2018 19:29:58 +0000 (11:29 -0800)]
mm: numa: Do not trap faults on shared data section pages.
Workloads consisting of a large number of processes running the same program
with a very large shared data segment may experience performance problems
when numa balancing attempts to migrate the shared cow pages. This manifests
itself with many processes or tasks in TASK_UNINTERRUPTIBLE state waiting
for the shared pages to be migrated.
The program listed below simulates the conditions with these results when
run with 288 processes on a 144 core/8 socket machine.
  Average throughput      Average throughput      Average throughput
  with numa_balancing=0   with numa_balancing=1   with numa_balancing=1
                          without the patch       with the patch
  ---------------------   ---------------------   ---------------------
  2118782                 2021534                 2107979
Complex production environments show less variability and fewer poorly
performing outliers accompanied with a smaller number of processes waiting
on NUMA page migration with this patch applied. In some cases, %iowait drops
from 16%-26% to 0.
// SPDX-License-Identifier: GPL-2.0
/*
* Copyright (c) 2017 Oracle and/or its affiliates. All rights reserved.
*/
This patch changes change_pte_range() to skip shared copy-on-write pages when
called from change_prot_numa().
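A sketch of the resulting check in the prot_numa path of change_pte_range()'s PTE-scanning loop (simplified; is_cow_mapping() comes from mm/internal.h, hence the include noted in the conflicts below):

		if (prot_numa) {
			struct page *page;

			page = vm_normal_page(vma, addr, oldpte);
			if (!page || PageKsm(page))
				continue;

			/* skip shared copy-on-write pages */
			if (is_cow_mapping(vma->vm_flags) &&
			    page_mapcount(page) != 1)
				continue;
		}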
NOTE: change_prot_numa() is nominally called from task_numa_work() and
queue_pages_test_walk(). task_numa_work() is the auto NUMA balancing path, and
queue_pages_test_walk() is part of explicit NUMA policy management. However,
queue_pages_test_walk() only calls change_prot_numa() when MPOL_MF_LAZY is
specified and currently that is not allowed, so change_prot_numa() is only
called from auto NUMA balancing.
In the case of explicit NUMA policy management, shared pages are not migrated
unless MPOL_MF_MOVE_ALL is specified, and MPOL_MF_MOVE_ALL depends on
CAP_SYS_NICE. Currently, there is no way to pass information about
MPOL_MF_MOVE_ALL to change_pte_range. This will have to be fixed if
MPOL_MF_LAZY is enabled and MPOL_MF_MOVE_ALL is to be honored in lazy
migration mode.
task_numa_work() skips the read-only VMAs of programs and shared libraries.
NOTE2: This patch was accepted upstream and should appear in 4.15 or 4.16
V2:
- Combined patch and cover letter
- Added note about applicability of MPOL_MF_MOVE_ALL
Signed-off-by: Henry Willard <henry.willard@oracle.com> Reviewed-by: Håkon Bugge <haakon.bugge@oracle.com> Reviewed-by: Steve Sistare <steven.sistare@oracle.com> Acked-by: Mel Gorman <mgorman@suse.de> Signed-off-by: Somasundaram Krishnasamy <somasundaram.krishnasamy@oracle.com>
Orabug: 28814880 Signed-off-by: Brian Maly <brian.maly@oracle.com>
Conflicts:
mm/mprotect.c (Add #include "internal.h")
Similar test results are seen on UEK4 when running the above program
with and without this patch and with and without auto numa balancing enabled
(cherry picked from commit 04489748048c1bd12d2a14ffe46d7de5dcb408c5) Signed-off-by: Gayatri Vasudevan <gayatri.vasudevan@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Mike Kravetz [Thu, 18 Oct 2018 21:31:47 +0000 (14:31 -0700)]
hugetlbfs: dirty pages as they are added to pagecache
Some test systems were experiencing negative huge page reserve
counts and incorrect file block counts. This was traced to
/proc/sys/vm/drop_caches removing clean pages from hugetlbfs
file pagecaches. When non-hugetlbfs explicit code removes the
pages, the appropriate accounting is not performed.
This can be recreated as follows:
fallocate -l 2M /dev/hugepages/foo
echo 1 > /proc/sys/vm/drop_caches
fallocate -l 2M /dev/hugepages/foo
grep -i huge /proc/meminfo
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
HugePages_Total: 2048
HugePages_Free: 2047
HugePages_Rsvd: 18446744073709551615
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 4194304 kB
ls -lsh /dev/hugepages/foo
4.0M -rw-r--r--. 1 root root 2.0M Oct 17 20:05 /dev/hugepages/foo
To address this issue, dirty pages as they are added to pagecache.
This can easily be reproduced with fallocate as shown above. Read
faulted pages will eventually end up being marked dirty. But there
is a window where they are clean and could be impacted by code such
as drop_caches. So, just dirty them all as they are added to the
pagecache.
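A sketch of where the dirtying happens, shown as an addition to huge_add_to_page_cache() (simplified; the surrounding accounting in the real function is omitted):

int huge_add_to_page_cache(struct page *page, struct address_space *mapping,
			   pgoff_t idx)
{
	int err = add_to_page_cache(page, mapping, idx, GFP_KERNEL);

	if (err)
		return err;
	ClearPagePrivate(page);

	/*
	 * Mark the page dirty right away so clean-page reclaim such as
	 * drop_caches never discards it behind hugetlbfs' back.
	 */
	set_page_dirty(page);
	return 0;
}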
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com> Reviewed-by: Larry Bassel <larry.bassel@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Ka-Cheong Poon [Tue, 2 Oct 2018 14:50:02 +0000 (07:50 -0700)]
rds: RDS (tcp) hangs on sendto() to unresponding address
In rds_send_mprds_hash(), if the calculated hash value is non-zero and
the MPRDS connections are not yet up, it will wait. But it should not
wait if the send is non-blocking. In this case, it should just use the
base c_path for sending the message.
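A sketch of the resulting logic in rds_send_mprds_hash() (condensed; it assumes the existing RDS_MPATH_HASH helper and the c_npaths/c_hs_waitq fields of the RDS multipath code):

static int rds_send_mprds_hash(struct rds_sock *rs,
			       struct rds_connection *conn, int nonblock)
{
	int hash;

	if (conn->c_npaths == 0)
		hash = RDS_MPATH_HASH(rs, RDS_MPATH_WORKERS);
	else
		hash = RDS_MPATH_HASH(rs, conn->c_npaths);

	if (conn->c_npaths == 0 && hash != 0) {
		/*
		 * The MP-RDS paths are not up yet.  A blocking sender may
		 * wait for them; a non-blocking sender must not, so it just
		 * falls back to the base c_path.
		 */
		if (nonblock)
			return 0;
		if (wait_event_interruptible(conn->c_hs_waitq,
					     conn->c_npaths != 0))
			hash = 0;
		if (conn->c_npaths == 1)
			hash = 0;
	}
	return hash;
}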
Scott Mayhew [Tue, 5 Dec 2017 18:55:44 +0000 (13:55 -0500)]
nfs: fix a deadlock in nfs client initialization
The following deadlock can occur between a process that, while walking
the client list during nfsv4 server trunking detection, is waiting for a
client to finish initializing, and another process waiting for the
nfs_clid_init_mutex so it can initialize that client:
Process 1                               Process 2
---------                               ---------
spin_lock(&nn->nfs_client_lock);
list_add_tail(&CLIENTA->cl_share_link,
        &nn->nfs_client_list);
spin_unlock(&nn->nfs_client_lock);
                                        spin_lock(&nn->nfs_client_lock);
                                        list_add_tail(&CLIENTB->cl_share_link,
                                                &nn->nfs_client_list);
                                        spin_unlock(&nn->nfs_client_lock);
                                        mutex_lock(&nfs_clid_init_mutex);
                                        nfs41_walk_client_list(clp, result, cred);
                                        nfs_wait_client_init_complete(CLIENTA);
(waiting for nfs_clid_init_mutex)
Make sure nfs_match_client() only evaluates clients that have completed
initialization in order to prevent that deadlock.
This patch also fixes v4.0 trunking behavior by not marking the client
NFS_CS_READY until the clientid has been confirmed.
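The idea in nfs_match_client() amounts to something like the following (hypothetical excerpt; the exact state test in the real patch may differ):

	list_for_each_entry(clp, &nn->nfs_client_list, cl_share_link) {
		/* Don't match a client that is still initializing; doing so
		 * is what sets up the deadlock described above. */
		if (clp->cl_cons_state != NFS_CS_READY)
			continue;

		/* ... existing comparisons (version, address, etc.) ... */
	}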
Signed-off-by: Scott Mayhew <smayhew@redhat.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Orabug: 28486463
(cherry picked from commit c156618e15101a9cc8c815108fec0300a0ec6637) Signed-off-by: Calum Mackay <calum.mackay@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Reviewed-by: Håkon Bugge <haakon.bugge@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Signed-off-by: Allen Pais <allen.pais@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Mike Kravetz [Wed, 10 Oct 2018 21:18:38 +0000 (14:18 -0700)]
hugetlbfs: check for reserve page count going negative
When modifying hugetlb reserve page count, check for the value going
negative. This should not happen in practice. However, if it does
log a message and stack trace to aid in debug.
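A minimal sketch of such a check (placement and message are illustrative; resv_huge_pages is unsigned, hence the cast):

	/* after adjusting h->resv_huge_pages somewhere in mm/hugetlb.c */
	if (unlikely((long)h->resv_huge_pages < 0))
		WARN_ONCE(1, "hugetlb: reserve count went negative (%ld)\n",
			  (long)h->resv_huge_pages);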
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com> Reviewed-by: Larry Bassel <larry.bassel@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Mike Kravetz [Sun, 7 Oct 2018 01:39:42 +0000 (18:39 -0700)]
hugetlbfs: introduce truncation/fault mutex to avoid races
The following hugetlbfs truncate/page fault race can be recreated
with programs doing something like the following.
A hugetlbfs file is mmap(MAP_SHARED) with a size of 4 pages. At
mmap time, 4 huge pages are reserved for the file/mapping. So,
the global reserve count is 4. In addition, since this is a shared
mapping an entry for 4 pages is added to the file's reserve map.
The first 3 of the 4 pages are faulted into the file. As a result,
the global reserve count is now 1.
Task A starts to fault in the last page (routines hugetlb_fault,
hugetlb_no_page). It allocates a huge page (alloc_huge_page).
The reserve map indicates there is a reserved page, so this is
used and the global reserve count goes to 0.
Now, task B truncates the file to size 0. It starts by setting the
inode size to 0 (hugetlb_vmtruncate). It then unmaps all mappings
of the file (hugetlb_vmdelete_list). Since task A's page table
lock is not held at the time, truncation is not blocked. Truncation
removes the 3 pages from the file (remove_inode_hugepages). When
cleaning up the reserved pages (hugetlb_unreserve_pages), it notices
the reserve map was for 4 pages. However, it has only freed 3 pages,
so it assumes there is still (4 - 3) = 1 reserved page. It then
decrements the global reserve count by 1 and the count goes negative.
Task A then continues the page fault process and adds its newly
acquired page to the page cache. Note that the index of this page
is beyond the size of the truncated file (0). The page fault process
then notices the file has been truncated and exits. However, the
page is left in the cache associated with the file.
Now, if the file is immediately deleted the truncate code runs again.
It will find and free the one page associated with the file. When
cleaning up reserves, it notices the reserve map is empty. Yet, one
page was freed. So the global reserve count is decremented by (0 - 1) = -1.
This returns the global count to 0 as it should be. But, it is
possible for someone else to mmap this file/range before it is deleted.
If this happens, a reserve map entry for the allocated page is created
and the reserved page is forever leaked.
To avoid all these conditions, let's simply prevent faults to a file
while it is being truncated. Add a new truncation specific rw mutex
to hugetlbfs inode extensions. Faults take the mutex in read mode;
truncation takes it in write mode.
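A sketch of the locking scheme (field and helper names here are illustrative, not necessarily those used by the patch):

struct hugetlbfs_inode_info {
	struct rw_semaphore	trunc_rwsem;	/* illustrative name */
	/* ... existing fields ... */
};

/* fault path: many faults may run in parallel */
static void hugetlbfs_fault_lock(struct hugetlbfs_inode_info *info)
{
	down_read(&info->trunc_rwsem);
}

/* truncate path: excludes all faults for the duration of the truncate */
static void hugetlbfs_truncate_lock(struct hugetlbfs_inode_info *info)
{
	down_write(&info->trunc_rwsem);
}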
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com> Reviewed-by: Larry Bassel <larry.bassel@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Mike Kravetz [Fri, 31 Mar 2017 22:12:07 +0000 (15:12 -0700)]
mm/hugetlb.c: don't call region_abort if region_chg fails
Changes to hugetlbfs reservation maps are a two-step process. The first
step is a call to region_chg to determine what needs to be changed and
prepare that change. This should be followed by a call to
region_add to commit the change, or region_abort to abort the change.
The error path in hugetlb_reserve_pages called region_abort after a
failed call to region_chg. As a result, the adds_in_progress counter in
the reservation map is off by 1. This is caught by a VM_BUG_ON in
resv_map_release when the reservation map is freed.
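In other words, the pairing in hugetlb_reserve_pages() should look roughly like this (sketch; the labels and the failure condition are placeholders):

	chg = region_chg(resv_map, from, to);
	if (chg < 0)
		return -ENOMEM;		/* nothing prepared: no region_abort() */

	/* ... allocate pages / charge quotas ... */
	if (failed)
		goto out_err;

	region_add(resv_map, from, to);	/* commit the prepared change */
	return 0;

out_err:
	region_abort(resv_map, from, to); /* abort only a successful region_chg() */
	return ret;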
syzkaller fuzzer (when using an injected kmalloc failure) found this
bug, that resulted in the following:
(cherry picked from commit ff8c0c53c47530ffea82c22a0a6df6332b56c957) Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com> Reviewed-by: Larry Bassel <larry.bassel@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
When dequeue_huge_page_vma() in alloc_huge_page() fails, we fall back on
alloc_buddy_huge_page() to directly create a hugepage from the buddy
allocator.
In that case, however, if alloc_buddy_huge_page() succeeds we don't
decrement h->resv_huge_pages, which means that successful
hugetlb_fault() returns without releasing the reserve count. As a
result, subsequent hugetlb_fault() might fail despite that there are
still free hugepages.
This patch simply adds decrementing code on that code path.
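A sketch of where the decrement goes in alloc_huge_page() (simplified; allocator and helper signatures differ somewhat between kernel versions):

	page = dequeue_huge_page_vma(h, vma, addr, avoid_reserve, chg);
	if (!page) {
		spin_unlock(&hugetlb_lock);
		page = alloc_buddy_huge_page(h, vma, addr);	/* signature approximate */
		if (!page)
			goto out_uncharge_cgroup;
		if (!avoid_reserve && vma_has_reserves(vma, chg)) {
			/* a reserve was consumed even on this fallback path */
			SetPagePrivate(page);
			h->resv_huge_pages--;
		}
		spin_lock(&hugetlb_lock);
		list_move(&page->lru, &h->hugepage_activelist);
	}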
I reproduced this problem when testing the v4.3 kernel in the following situation:
- the test machine/VM is a NUMA system,
- hugepage overcommitting is enabled,
- most of the hugepages are allocated and there's only one free hugepage
which is on node 0 (for example),
- another program, which calls set_mempolicy(MPOL_BIND) to bind itself to
node 1, tries to allocate a hugepage,
- the allocation should fail, but the reserve count is still held.
Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: David Rientjes <rientjes@google.com> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Mel Gorman <mgorman@suse.de> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Hillf Danton <hillf.zj@alibaba-inc.com> Cc: Mike Kravetz <mike.kravetz@oracle.com> Cc: <stable@vger.kernel.org> [3.16+] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Orabug: 28734496
(cherry picked from commit a88c769548047b21f76fd71e04b6a3300ff17160) Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com> Reviewed-by: Larry Bassel <larry.bassel@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Pull the upstream commit 6b8675897338 to update just the bnxt_en driver.
Signed-off-by: Somasundaram Krishnasamy <somasundaram.krishnasamy@oracle.com> Reviewed-by: Brian Maly <brian.maly@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
With the bpf_xdp_adjust_tail helper, xdp's data_end pointer can be changed as
well (only "decreasing" the pointer's location is going to be supported).
Changing this pointer will change the packet's size.
For the bnxt driver we will just calculate the packet's length unconditionally.
Acked-by: Alexei Starovoitov <ast@kernel.org> Signed-off-by: Nikita V. Shirokov <tehnerd@tehnerd.com> Acked-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: Daniel Borkmann <daniel@iogearbox.net>
(cherry picked from commit b968e735c79767a3c91217fbae691581aa557d8d)
Orabug: 27988326
Signed-off-by: Somasundaram Krishnasamy <somasundaram.krishnasamy@oracle.com> Reviewed-by: Brian Maly <brian.maly@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Pull the upstream commit de8f3a83b0a0 to update only the bnxt_en driver.
Signed-off-by: Somasundaram Krishnasamy <somasundaram.krishnasamy@oracle.com> Reviewed-by: Brian Maly <brian.maly@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Michael Chan [Tue, 11 Jul 2017 17:05:35 +0000 (13:05 -0400)]
bnxt_en: Fix bug in ethtool -L.
When changing channels from combined to rx/tx or vice versa, the code
uses the wrong "sh" parameter to determine if we are reserving rings
for shared or non-shared mode. It should be using the ethtool requested
"sh" parameter instead of the current "sh" parameter.
Fix it by passing the "sh" parameter to bnxt_reserve_rings(). For
ethtool, we will pass in the requested "sh" parameter.
Fixes: 391be5c27364 ("bnxt_en: Implement new scheme to reserve tx rings.") Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
[ Upstream commit 3b6b34df342553a7522561e34288f5bb803aa9aa ]
Orabug: 27988326
Signed-off-by: Somasundaram Krishnasamy <somasundaram.krishnasamy@oracle.com> Reviewed-by: Brian Maly <brian.maly@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Martin KaFai Lau [Fri, 16 Jun 2017 00:29:13 +0000 (17:29 -0700)]
bpf: bnxt: Report bpf_prog ID during XDP_QUERY_PROG
Add support to bnxt to report bpf_prog ID during XDP_QUERY_PROG.
Signed-off-by: Martin KaFai Lau <kafai@fb.com> Cc: Michael Chan <michael.chan@broadcom.com> Acked-by: Alexei Starovoitov <ast@fb.com> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit 8902965f8cb23bba8aa7f3be293ec2f3067b82c6)
Orabug: 27988326
Signed-off-by: Somasundaram Krishnasamy <somasundaram.krishnasamy@oracle.com> Reviewed-by: Brian Maly <brian.maly@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Michael Chan [Mon, 29 May 2017 23:06:08 +0000 (19:06 -0400)]
bnxt_en: Optimize doorbell write operations for newer chips (reapply).
Older chips require the doorbells to be written twice, but newer chips
do not. Add a new common function bnxt_db_write() to write all
doorbells appropriately depending on the chip. Eliminating the extra
doorbell on newer chips has a significant performance improvement
on pktgen.
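A sketch of such a helper (the flag name is an assumption; the point is the single conditional second write):

static inline void bnxt_db_write(struct bnxt *bp, void __iomem *db, u32 val)
{
	writel(val, db);
	/* older chips need the doorbell written twice */
	if (bp->flags & BNXT_FLAG_DOUBLE_DB)	/* flag name is illustrative */
		writel(val, db);
}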
Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit 434c975a8fe2f70b70ac09ea5ddd008e0528adfa)
Orabug: 27988326
Signed-off-by: Somasundaram Krishnasamy <somasundaram.krishnasamy@oracle.com> Reviewed-by: Brian Maly <brian.maly@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Conflicts:
drivers/net/ethernet/broadcom/bnxt/bnxt.c
Michael Chan [Tue, 4 Apr 2017 22:14:16 +0000 (18:14 -0400)]
bnxt_en: Use short TX BDs for the XDP TX ring.
No offload is performed on the XDP_TX ring so we can use the short TX
BDs. This has the effect of doubling the size of the XDP TX ring so
that it now matches the size of the rx ring by default.
Signed-off-by: Michael Chan <michael.chan@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit 932dbf83ba18bdb871e0c03a4ffdd9785f7a9c07)
Orabug: 27988326
Signed-off-by: Somasundaram Krishnasamy <somasundaram.krishnasamy@oracle.com> Reviewed-by: Brian Maly <brian.maly@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Michael Chan [Tue, 4 Apr 2017 22:14:13 +0000 (18:14 -0400)]
bnxt_en: Add ethtool mac loopback self test (reapply).
The mac loopback self test operates in polling mode. To support that,
we need to add functions to open and close the NIC half way. The half
open mode allows the rings to operate without IRQ and NAPI. We
use the XDP transmit function to send the loopback packet.
Signed-off-by: Somasundaram Krishnasamy <somasundaram.krishnasamy@oracle.com> Reviewed-by: Brian Maly <brian.maly@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Michael Chan [Mon, 6 Feb 2017 21:55:43 +0000 (16:55 -0500)]
bnxt_en: Add support for XDP_TX action.
Add dedicated transmit function and transmit completion handler for
XDP. The XDP transmit logic and completion logic are different than
regular TX ring. The TX buffer is recycled back to the RX ring when
it completes.
v3: Improved the buffer recyling scheme for XDP_TX.
v2: Add trace_xdp_exception().
Add dma_sync.
Signed-off-by: Michael Chan <michael.chan@broadcom.com> Tested-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
[Upstream commit 38413406277fd060f46855ad527f6f8d4cf2652d]
Orabug: 27988326
Signed-off-by: Somasundaram Krishnasamy <somasundaram.krishnasamy@oracle.com> Reviewed-by: Brian Maly <brian.maly@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Michael Chan [Mon, 6 Feb 2017 21:55:42 +0000 (16:55 -0500)]
bnxt_en: Add basic XDP support.
Add basic ndo_xdp support to setup and query program, configure the NIC
to run in rx page mode, and support XDP_PASS, XDP_DROP, XDP_ABORTED
actions only.
v3: Pass modified offset and length to stack for XDP_PASS.
Remove Kconfig option.
Signed-off-by: Michael Chan <michael.chan@broadcom.com> Tested-by: Andy Gospodarek <gospo@broadcom.com> Signed-off-by: David S. Miller <davem@davemloft.net>
[Upstream commit c6d30e8391b85e00eb544e6cf047ee0160ee9938]
Orabug: 27988326
Signed-off-by: Somasundaram Krishnasamy <somasundaram.krishnasamy@oracle.com> Reviewed-by: Brian Maly <brian.maly@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Shannon Nelson [Thu, 20 Sep 2018 21:59:06 +0000 (14:59 -0700)]
net: enable RPS on vlan devices
This patch modifies the RPS processing code so that it searches
for a matching vlan interface on the packet and then uses the
RPS settings of the vlan interface. If no vlan interface
is found or the vlan interface does not have RPS enabled,
it will fall back to the RPS settings of the underlying device.
In supporting VMs where we can't control the OS being used,
we'd like to separate the VM cpu processing from the host's
cpus as a way to help mitigate the impact of the L1TF issue.
When running the VM's traffic on a vlan we can stick the Rx
processing on one set of CPUs separate from the VM's CPUs.
Yes, choosing to use this will cause a bit of throughput pain
when the packets are actually passed into the VM and have to
move from one cache to another.
Signed-off-by: Silviu Smarandache <silviu.smarandache@oracle.com> Reviewed-by: Shannon Nelson <shannon.nelson@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Ankur Arora [Thu, 30 Aug 2018 11:17:49 +0000 (04:17 -0700)]
xen-blkback: hold write vbd-lock while swapping the vbd
All paths holding a vbd handle or dereferencing it hold the read vbd-lock.
Swapping the device takes a write-trylock on the vbd. So, we fail a swap
if, for instance, there is an active IO operation.
Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Reviewed-by: Bhavesh Davda <bhavesh.davda@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Ankur Arora [Thu, 30 Aug 2018 10:34:54 +0000 (03:34 -0700)]
xen-blkback: implement swapping of active vbd
Currently we disallow any change of major:minor of the vbd once created.
The danger is in the user switching backends where the contents of the
backend device are dissimilar.
However, changing the vbd can be quite useful -- for instance by switching
from a backend which is not multi-pathed (or raid'd) to one that is.
As for blkback itself, it is used purely as a passthrough device so there
is no state outside struct vbd which would be impacted with this change.
This patch allows a vbd swap, communicating the state of the swap via the
xenbus paths:
Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Reviewed-by: Bhavesh Davda <bhavesh.davda@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Reviewed-by: Bhavesh Davda <bhavesh.davda@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Ankur Arora [Thu, 30 Aug 2018 01:53:18 +0000 (18:53 -0700)]
xen-blkback: refactor backend_changed()
Add xenbus_scan_be_params() to handle vbd param parsing. This gets
called from backend_changed().
Also place all the parameters (major, minor, cdrom, readonly) in struct
blkif_params. The readonly param is an integer replacement of mode --
this allows us to avoid tracking an additional pointer.
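A sketch of the params struct described above (field types and order are assumptions based on this description):

struct blkif_params {
	unsigned int	major;
	unsigned int	minor;
	unsigned int	cdrom;		/* non-zero for a cdrom backend */
	unsigned int	readonly;	/* integer replacement for mode */
};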
Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Reviewed-by: Bhavesh Davda <bhavesh.davda@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Reviewed-by: Bhavesh Davda <bhavesh.davda@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Jann Horn points out that the vmacache_flush_all() function is not only
potentially expensive, it's buggy too. It also happens to be entirely
unnecessary, because the sequence number overflow case can be avoided by
simply making the sequence number be 64-bit. That doesn't even grow the
data structures in question, because the other adjacent fields are
already 64-bit.
So simplify the whole thing by just making the sequence number overflow
case go away entirely, which gets rid of all the complications and makes
the code faster too. Win-win.
[ Oleg Nesterov points out that the VMACACHE_FULL_FLUSHES statistics
also just goes away entirely with this ]
Reported-by: Jann Horn <jannh@google.com> Suggested-by: Will Deacon <will.deacon@arm.com> Acked-by: Davidlohr Bueso <dave@stgolabs.net> Cc: Oleg Nesterov <oleg@redhat.com> Cc: stable@kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 7a9cdebdcc17e426fb5287e4a82db1dfe86339b2) Signed-off-by: Brian Maly <brian.maly@oracle.com>
Conflicts:
include/linux/mm_types.h
include/linux/mm_types_task.h
mm/debug.c
The other end is using 4KB RDS fragsize (Solaris Super Cluster).
This end is UEK4 (4.1.12-94.8.4.el6uek.x86_64).
The message being copied arrived over a 4KB RDS frag size connection.
But during the above check ic->i_frag_sz is 16KB.
This can happen during a reconnect at the connection setup phase.
We start off with ic->i_frag_sz as 16KB. Then settle down at 4KB.
Failing this check
if (copied % ic->i_frag_sz == 0) {
can result in sg not getting set correctly.
Say, "copied" = 4KB but ic->i_frag_sz is 16KB when it should be 4KB.
During a race condition with a reconnect, ic->i_frag_sz can be 16KB
even though once the connection is set up it settled down to 4KB.
It can change from 4KB to 16KB and back to 4KB during connection setup
due to reconnect.
We started seeing this crash after bug 26848749, but prior to that the
same scenario could result in data being copied to userspace from an
incorrect "sg", resulting in data corruption.
Michael J. Ruhl [Sun, 9 Apr 2017 17:15:51 +0000 (10:15 -0700)]
IB/core: For multicast functions, verify that LIDs are multicast LIDs
The Infiniband spec defines "A multicast address is defined by a
MGID and a MLID" (section 10.5). Currently the MLID value is not
validated.
Add a check to verify that the MLID value is in the correct address
range.
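The added validation amounts to roughly the following (sketch based on the description; IB_MULTICAST_LID_BASE had to be defined locally, per the conflict note below):

	if (!rdma_is_multicast_addr((struct in6_addr *)gid->raw) ||
	    lid < be16_to_cpu(IB_MULTICAST_LID_BASE) ||
	    lid == be16_to_cpu(IB_LID_PERMISSIVE))
		return -EINVAL;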
Fixes: 0c33aeedb2cf ("[IB] Add checks to multicast attach and detach") Cc: stable@vger.kernel.org Reviewed-by: Ira Weiny <ira.weiny@intel.com> Reviewed-by: Dasaratharaman Chandramouli <dasaratharaman.chandramouli@intel.com> Signed-off-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Signed-off-by: Dennis Dalessandro <dennis.dalessandro@intel.com> Reviewed-by: Leon Romanovsky <leonro@mellanox.com> Signed-off-by: Doug Ledford <dledford@redhat.com>
(cherry picked from commit 8561eae60ff9417a50fa1fb2b83ae950dc5c1e21)
The reason for applying this patch is a bug in ibacm, which goes
undetected without this patch. For further information, see
https://patchwork.kernel.org/patch/10008461/
Signed-off-by: Brian Maly <brian.maly@oracle.com>
Conflicts:
* Had to add the define IB_MULTICAST_LID_BASE in
include/rdma/ib_verbs.h, due to missing commit b4e64397dabc
("IB/rdmavt: Break rdma_vt main include header file up").
Jeff Layton [Mon, 3 Aug 2015 11:44:53 +0000 (07:44 -0400)]
sunrpc: increase UNX_MAXNODENAME from 32 to __NEW_UTS_LEN bytes
The current limit of 32 bytes artificially limits the name string that
we end up stuffing into NFSv4.x client ID blobs. If you have multiple
hosts with long hostnames that only differ near the end, then this can
cause NFSv4 client ID collisions.
Linux nodenames are actually limited to __NEW_UTS_LEN bytes (64), so use
that as the limit instead. Also, use XDR_QUADLEN to specify the slack
length, just for clarity and in case someone in the future changes this
to something not evenly divisible by 4.
Reported-by: Michael Skralivetsky <michael.skralivetsky@primarydata.com> Signed-off-by: Jeff Layton <jeff.layton@primarydata.com> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Orabug: 28660177
(cherry picked from commit 24a9a9610ce3ba36fd87c1d2f2c9106de6b7e832) Signed-off-by: Calum Mackay <calum.mackay@oracle.com> Reviewed-by: Srinivas Eeda <srinivas.eeda@oracle.com> Reviewed-by: John Sobecki <john.sobecki@oracle.com> Tested-by: Joe Jin <joe.jin@oracle.com>
net: rds: Use address family to designate IPv4 or IPv6 addresses
The condition to interpret the supplied address in
rds_cancel_sent_to() was the length of the supplied user-data. We need
to tighten the API here to the extent possible, subject to SKGXP
passing in sizeof(struct sockaddr_storage) as the optlen, independent
of the actual address size.
We will 1) make sure the data passed in to the kernel is "big enough",
2) that the address is either AF_INET or AF_INET6, and 3) that the
bound address is ipv6_addr_v4mapped() in the AF_INET case and not in
the AF_INET6 case.
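A sketch of the resulting checks (illustrative; local variable names and the exact error codes are assumptions):

	struct sockaddr *sa = (struct sockaddr *)optval;

	if (optlen < sizeof(struct sockaddr_in))
		return -EINVAL;

	switch (sa->sa_family) {
	case AF_INET:
		if (!ipv6_addr_v4mapped(&rs->rs_bound_addr))
			return -EINVAL;
		break;
	case AF_INET6:
		if (optlen < sizeof(struct sockaddr_in6))
			return -EINVAL;
		if (ipv6_addr_v4mapped(&rs->rs_bound_addr))
			return -EINVAL;
		break;
	default:
		return -EAFNOSUPPORT;
	}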
We are under rcu read lock protection at that point:
	rcu_read_lock();
	d = atomic_long_read(&ns->stashed);
	if (!d)
		goto slow;
	dentry = (struct dentry *)d;
	if (!lockref_get_not_dead(&dentry->d_lockref))
		goto slow;
	rcu_read_unlock();
but don't use a proper RCU API on the free path, therefore a parallel
__d_free() could free it at the same time. We need to mark the stashed
dentry with DCACHE_RCUACCESS so that __d_free() will be called after all
readers leave RCU.
Fixes: e149ed2b805f ("take the targets of /proc/*/ns/* symlinks to separate fs") Cc: Alexander Viro <viro@zeniv.linux.org.uk> Cc: Andrew Morton <akpm@linux-foundation.org> Reported-by: Andrey Konovalov <andreyknvl@google.com> Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 073c516ff73557a8f7315066856c04b50383ac34) Signed-off-by: Brian Maly <brian.maly@oracle.com>
Conflicts:
fs/nsfs.c
Reviewed-by: Somasundaram Krishnasamy <somasundaram.krishnasamy@oracle.com> Signed-off-by: Allen Pais <allen.pais@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Orabug: 28604628 Reviewed-by: Mark Kanda <mark.kanda@oracle.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Scott Bauer [Thu, 26 Apr 2018 17:51:08 +0000 (11:51 -0600)]
cdrom: Fix info leak/OOB read in cdrom_ioctl_drive_status
Like d88b6d04: "cdrom: information leak in cdrom_ioctl_media_changed()"
There is another cast from unsigned long to int which causes
a bounds check to fail with specially crafted input. The value is
then used as an index in the slot array in cdrom_slot_status().
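A sketch of the fix in cdrom_ioctl_drive_status() (drivers/cdrom/cdrom.c): keeping the comparison in the unsigned type means a huge value cannot be truncated into a small, in-range int:
	if (arg >= cdi->capacity)	/* was: if (((int)arg >= cdi->capacity)) */
		return -EINVAL;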
Signed-off-by: Scott Bauer <scott.bauer@intel.com> Signed-off-by: Scott Bauer <sbauer@plzdonthack.me> Cc: stable@vger.kernel.org Signed-off-by: Jens Axboe <axboe@kernel.dk>
(cherry picked from commit 8f3fafc9c2f0ece10832c25f7ffcb07c97a32ad4)
Seunghun Han [Wed, 14 Mar 2018 23:12:56 +0000 (16:12 -0700)]
ACPICA: acpi: acpica: fix acpi operand cache leak in nseval.c
I found an ACPI cache leak in the case where ACPI initialization terminates early and boot continues.
When early termination occurs due to a malicious ACPI table, the Linux kernel
shuts down the ACPI subsystem and continues the boot process. While the kernel
is shutting down ACPI, kmem_cache_destroy() reports an Acpi-Operand cache leak.
I analyzed this memory leak in detail and found that acpi_ns_evaluate()
only removes Info->return_object in the AE_CTRL_RETURN_VALUE case. But when
errors occur, the status value is not AE_CTRL_RETURN_VALUE and
Info->return_object is also not NULL; this causes the ACPI operand memory leak.
This cache leak is a security threat because an old kernel (<= 4.9) shows
memory locations of kernel functions in the stack dump. Malicious users
could use this information to defeat kernel ASLR.
I made a patch to fix the ACPI operand cache leak.
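A sketch of the kind of fix described, assuming the error path of acpi_ns_evaluate() (drivers/acpi/acpica/nseval.c) drops the stale reference; the exact hunk may differ:
	if (ACPI_FAILURE(status)) {
		if (info->return_object) {
			/* drop the operand reference that would otherwise leak */
			acpi_ut_remove_reference(info->return_object);
			info->return_object = NULL;
		}
		goto cleanup;
	}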
Signed-off-by: Seunghun Han <kkamagui@gmail.com> Signed-off-by: Erik Schmauss <erik.schmauss@intel.com> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
(cherry picked from commit 97f3c0a4b0579b646b6b10ae5a3d59f0441cc12c)
Signed-off-by: Victor Erminpour <victor.erminpour@oracle.com> Reviewed-by: Jack Vogel <jack.vogel@oracle.com> Reviewed-by: John Haxby <john.haxby@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Lu Fengqi [Mon, 13 Jun 2016 01:36:46 +0000 (09:36 +0800)]
btrfs: fix check_shared for fiemap ioctl
check_shared only identified an extent as shared when its references had a
different root_id or a different object_id. However, if an extent is
referenced from different offsets of the same file, it should also be
identified as shared. In addition, check_shared's loop complexity is at least
n^3, so an extent with too many references can even cause a soft lockup.
First, add all delayed refs to the ref_tree and calculate unique_refs;
if unique_refs is greater than one, return BACKREF_FOUND_SHARED.
Then add the on-disk references (inline/keyed) to the ref_tree one by one
and recalculate unique_refs to check whether it is greater than one.
Because we can return SHARED as soon as there are two references,
the time complexity is close to constant.
Reported-by: Tsutomu Itoh <t-itoh@jp.fujitsu.com> Signed-off-by: Lu Fengqi <lufq.fnst@cn.fujitsu.com> Signed-off-by: David Sterba <dsterba@suse.com>
(cherry picked from commit afce772e87c36c7f07f230a76d525025aaf09e41) Signed-off-by: Brian Maly <brian.maly@oracle.com>
Conflicts:
fs/btrfs/backref.c - Modified comment so that patch applies.
Signed-off-by: Divya Indi <divya.indi@oracle.com>
Orabug: 24716710 Signed-off-by: Brian Maly <brian.maly@oracle.com>
Jiri Kosina [Mon, 18 Jun 2018 07:59:54 +0000 (09:59 +0200)]
x86/pti: Don't report XenPV as vulnerable
A Xen PV domain kernel is by design not affected by Meltdown, as it enforces
split CR3 itself. Let's not report such systems as "Vulnerable" in sysfs
(we're also already forcing PTI to off in X86_HYPER_XEN_PV cases); the
security of the system ultimately depends on the presence of the mitigation
in the hypervisor, which can't easily be detected from DomU, so let's report
that instead.
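A sketch of the resulting sysfs report (upstream form; the UEK4 backport compares x86_hyper to x86_hyper_xen instead, see the conflict note below):
	if (boot_cpu_has(X86_FEATURE_PTI))
		return sprintf(buf, "Mitigation: PTI\n");
	if (hypervisor_is_type(X86_HYPER_XEN_PV))
		return sprintf(buf, "Unknown (XEN PV detected, hypervisor mitigation required)\n");
	return sprintf(buf, "Vulnerable\n");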
Reported-and-tested-by: Mike Latimer <mlatimer@suse.com> Signed-off-by: Jiri Kosina <jkosina@suse.cz> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Juergen Gross <jgross@suse.com> Cc: Borislav Petkov <bp@suse.de> Link: https://lkml.kernel.org/r/nycvar.YFH.7.76.1806180959080.6203@cbobk.fhfr.pm
[ Merge the user-visible string into a single line. ] Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 6cb2b08ff92460290979de4be91363e5d1b6cec1)
Conflicts:
arch/x86/kernel/cpu/bugs.c
In UEK4, these changes are made in arch/x86/kernel/cpu/bugs_64.c.
Context around the headers was slightly different (there were some extra
headers relative to the cherry-picked patch).
There is no X86_HYPER_XEN_PV; instead, compare x86_hyper to x86_hyper_xen.
Signed-off-by: Patrick Colp <patrick.colp@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
We're consistently hitting deadlocks here with XFS on recent kernels.
After some digging through the crash files, it looks like everyone in
the system is waiting for XFS to reclaim memory.
xfs_log_force_lsn is waiting for logs to get cleaned, which is waiting
for IO, which is waiting for workers to complete the IO, which is waiting
for worker threads that don't exist yet.
I think we should be using WQ_MEM_RECLAIM to make sure this thread
pool makes progress when we're not able to allocate new workers.
[dchinner: make all workqueues WQ_MEM_RECLAIM]
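For illustration, one of the affected allocations in fs/xfs/xfs_super.c ends up looking roughly like this (the workqueue chosen and any additional flags are representative, not an exact hunk):
	mp->m_log_workqueue = alloc_workqueue("xfs-log/%s",
			WQ_MEM_RECLAIM | WQ_FREEZABLE, 0, mp->m_fsname);
	if (!mp->m_log_workqueue)
		return -ENOMEM;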
Signed-off-by: Chris Mason <clm@fb.com> Reviewed-by: Dave Chinner <dchinner@redhat.com> Signed-off-by: Dave Chinner <david@fromorbit.com>
(cherry picked from commit 7a29ac474a47eb8cf212b45917683ae89d6fa13b)
Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com> Reviewed-by: Shan Hai <shan.hai@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Patrick Colp [Tue, 28 Aug 2018 23:22:41 +0000 (16:22 -0700)]
x86/spec_ctrl: Only set SPEC_CTRL_IBRS_FIRMWARE if IBRS is actually in use
Currently the SPEC_CTRL_IBRS_FIRMWARE flag always gets set as long as
IBRS is supported by the hardware. However, as best as can be determined
from the documentation, if IBRS has been disabled (e.g., spectre_v2=off) then
SPEC_CTRL_IBRS_FIRMWARE should not be set:
nospectre_v2 [X86] Disable all mitigations for the Spectre variant 2
(indirect branch prediction) vulnerability. System may
allow data leaks with this option, which is equivalent
to spectre_v2=off.
and:
spectre_v2= [X86] Control mitigation of Spectre variant 2
(indirect branch speculation) vulnerability.
off - unconditionally disable
Add a check in set_ibrs_firmware() to only set SPEC_CTRL_IBRS_FIRMWARE if
ibrs_disabled is not also set.
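A minimal sketch of the described check; set_ibrs_firmware(), ibrs_disabled and SPEC_CTRL_IBRS_FIRMWARE come from the text above, while the use_ibrs bookkeeping shown is only an assumption about the surrounding UEK code:
static void set_ibrs_firmware(void)
{
	/* only advertise IBRS around firmware calls if IBRS is actually in
	 * use, i.e. not disabled via spectre_v2=off / nospectre_v2 */
	if (!ibrs_disabled)
		use_ibrs |= SPEC_CTRL_IBRS_FIRMWARE;
}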
Signed-off-by: Patrick Colp <patrick.colp@oracle.com> Reviewed-by: Kanth Ghatraju <kanth.ghatraju@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
In case an skb in out_of_order_queue is the result of
multiple skbs coalescing, we would like to keep a proper gso_segs
counter, so that a future tcp_drop() can report an accurate
number.
I chose to not implement this tracking for skbs in receive queue,
since they are not dropped, unless socket is disconnected.
Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Acked-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: David Woodhouse <dwmw@amazon.co.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(adapted from v4.9.x commit 36ee106e844187e3fc612c9b87f12e5e23e9d8a5)
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com> Reviewed-by: Jack Vogel <jack.vogel@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
In order to be able to give better diagnostics and detect
malicious traffic, we need to have better sk->sk_drops tracking.
Fixes: 9f5afeae5152 ("tcp: use an RB tree for ooo receive queue") Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Acked-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(adapted from v4.9.x commit 94623c7463f3424776408df2733012c42b52395a)
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com> Reviewed-by: Jack Vogel <jack.vogel@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
In case an attacker feeds tiny packets completely out of order,
tcp_collapse_ofo_queue() might scan the whole rb-tree, performing
expensive copies, but not changing socket memory usage at all.
1) Do not attempt to collapse tiny skbs.
2) Add logic to exit early when too many tiny skbs are detected.
We prefer not doing aggressive collapsing (which copies packets)
for pathological flows, and revert to tcp_prune_ofo_queue() which
will be less expensive.
In the future, we might add the possibility of terminating flows
that are proven to be malicious.
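A rough sketch of the two checks described above, inside the tcp_collapse_ofo_queue() range loop (illustrative; the backported hunk may differ):
		if (range_truesize != head->truesize ||
		    end - start >= SKB_WITH_OVERHEAD(SK_MEM_QUANTUM)) {
			/* worth collapsing: the range is not made of tiny skbs */
			tcp_collapse(sk, NULL, &tp->out_of_order_queue,
				     head, skb, start, end);
		} else {
			sum_tiny += range_truesize;
			if (sum_tiny > sk->sk_rcvbuf >> 3)
				return;		/* too many tiny ranges: give up early */
		}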
Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(adapted from v4.9.x commit a878681484a0992ee3dfbd7826439951f9f82a69)
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com> Reviewed-by: Jack Vogel <jack.vogel@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Right after a TCP flow is created, receiving tiny out-of-order
packets always hits the condition:
if (atomic_read(&sk->sk_rmem_alloc) >= sk->sk_rcvbuf)
tcp_clamp_window(sk);
tcp_clamp_window() increases sk_rcvbuf to match sk_rmem_alloc
(guarded by tcp_rmem[2])
Calling tcp_collapse_ofo_queue() in this case is not useful,
and offers an O(N^2) attack surface to malicious peers.
It is better not to attempt anything before full queue capacity is reached,
forcing the attacker to spend a lot of resources and allowing us to
detect the abuse more easily.
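A minimal sketch of the early exit described above, placed in tcp_prune_queue() before any collapsing is attempted (surrounding context may differ in this backport):
	/* nothing to do if the receive queue is not actually over capacity */
	if (atomic_read(&sk->sk_rmem_alloc) <= sk->sk_rcvbuf)
		return 0;

	tcp_collapse_ofo_queue(sk);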
Signed-off-by: Eric Dumazet <edumazet@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Acked-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(adapted from v4.9.x commit fdf258ed5dd85b57cf0e0e66500be98d38d42d02)
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com> Reviewed-by: Jack Vogel <jack.vogel@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Juha-Matti Tilli reported that malicious peers could inject tiny
packets in out_of_order_queue, forcing very expensive calls
to tcp_collapse_ofo_queue() and tcp_prune_ofo_queue() for
every incoming packet. The out_of_order_queue rb-tree can contain
thousands of nodes, and iterating over all of them is not nice.
Before linux-4.9, we would have pruned all packets in ofo_queue
in one go, every XXXX packets. XXXX depends on sk_rcvbuf and skbs
truesize, but is about 7000 packets with tcp_rmem[2] default of 6 MB.
Since we plan to increase tcp_rmem[2] in the future to cope with
modern BDPs, we cannot revert to the old behavior without great pain.
Strategy taken in this patch is to purge ~12.5 % of the queue capacity.
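A simplified sketch of the batched pruning described above (tcp_prune_ofo_queue() on the rb-tree queue; statistics and some details are omitted, and helper names like rb_to_skb() may differ in this backport):
	goal = sk->sk_rcvbuf >> 3;		/* purge ~12.5% of the queue capacity */
	node = &tp->ooo_last_skb->rbnode;
	do {
		prev = rb_prev(node);
		rb_erase(node, &tp->out_of_order_queue);
		goal -= rb_to_skb(node)->truesize;
		tcp_drop(sk, rb_to_skb(node));
		if (!prev || goal <= 0) {
			sk_mem_reclaim(sk);
			goal = sk->sk_rcvbuf >> 3;	/* start a new batch if still needed */
		}
		node = prev;
	} while (node);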
Fixes: 36a6503fedda ("tcp: refine tcp_prune_ofo_queue() to not drop all packets") Signed-off-by: Eric Dumazet <edumazet@google.com> Reported-by: Juha-Matti Tilli <juha-matti.tilli@iki.fi> Acked-by: Yuchung Cheng <ycheng@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(adapted from v4.9.x commit 2d08921c8da26bdce3d8848ef6f32068f594d7d4)
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com> Reviewed-by: Jack Vogel <jack.vogel@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Yaogong Wang [Wed, 7 Sep 2016 21:49:28 +0000 (14:49 -0700)]
tcp: use an RB tree for ooo receive queue
Over the years, TCP BDP has increased by several orders of magnitude,
and some people are considering reaching the 2 Gbytes limit.
Even with current window scale limit of 14, ~1 Gbytes maps to ~740,000
MSS.
In presence of packet losses (or reorders), TCP stores incoming packets
into an out of order queue, and number of skbs sitting there waiting for
the missing packets to be received can be in the 10^5 range.
Most packets are appended to the tail of this queue, and when
packets can finally be transferred to receive queue, we scan the queue
from its head.
However, in presence of heavy losses, we might have to find an arbitrary
point in this queue, involving a linear scan for every incoming packet,
throwing away cpu caches.
This patch converts it to an RB tree, to get bounded latencies.
Yaogong wrote a preliminary patch about 2 years ago.
Eric did the rebase, added ofo_last_skb cache, polishing and tests.
Tested with network dropping between 1 and 10 % packets, with good
success (about 30 % increase of throughput in stress tests)
Next step would be to also use an RB tree for the write queue at sender
side ;)
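The core data-structure change in struct tcp_sock, sketched (field comments added here for illustration):
	struct rb_root	out_of_order_queue;	/* was: struct sk_buff_head */
	struct sk_buff	*ooo_last_skb;		/* cache of rightmost skb, keeps tail appends cheap */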
Signed-off-by: Yaogong Wang <wygivan@google.com> Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Yuchung Cheng <ycheng@google.com> Cc: Neal Cardwell <ncardwell@google.com> Cc: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi> Acked-By: Ilpo Järvinen <ilpo.jarvinen@helsinki.fi> Signed-off-by: David S. Miller <davem@davemloft.net>
(adapted from v4.9.x commit 9f5afeae51526b3ad7b7cb21ee8b145ce6ea7a7a)
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com> Reviewed-by: Jack Vogel <jack.vogel@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Eric Dumazet [Wed, 17 Aug 2016 21:17:09 +0000 (14:17 -0700)]
tcp: refine tcp_prune_ofo_queue() to not drop all packets
Over the years, TCP BDP has increased a lot, and is typically
on the order of ~10 Mbytes with the help of clever Congestion Control
modules.
In presence of packet losses, TCP stores incoming packets into an out of
order queue, and number of skbs sitting there waiting for the missing
packets to be received can match the BDP (~10 Mbytes)
In some cases, TCP needs to make room for incoming skbs, and current
strategy can simply remove all skbs in the out of order queue as a last
resort, incurring a huge penalty, both for receiver and sender.
Unfortunately these 'last resort events' are quite frequent, forcing
sender to send all packets again, stalling the flow and wasting a lot of
resources.
This patch cleans only a part of the out of order queue in order
to meet the memory constraints.
Signed-off-by: Eric Dumazet <edumazet@google.com> Cc: Neal Cardwell <ncardwell@google.com> Cc: Yuchung Cheng <ycheng@google.com> Cc: Soheil Hassas Yeganeh <soheil@google.com> Cc: C. Stephen Gun <csg@google.com> Cc: Van Jacobson <vanj@google.com> Acked-by: Soheil Hassas Yeganeh <soheil@google.com> Acked-by: Yuchung Cheng <ycheng@google.com> Acked-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
(adapted from v4.9.x commit 36a6503feddadbbad415fb3891e80f94c10a9b21)
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com> Reviewed-by: Jack Vogel <jack.vogel@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Eric Dumazet [Fri, 15 May 2015 19:39:27 +0000 (12:39 -0700)]
tcp: introduce tcp_under_memory_pressure()
Introduce an optimized version of sk_under_memory_pressure()
for TCP. Our intent is to use it in fast paths.
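A sketch of the helper as introduced upstream (include/net/tcp.h); the memcg branch may look different in this backport:
static inline bool tcp_under_memory_pressure(const struct sock *sk)
{
	if (mem_cgroup_sockets_enabled && sk->sk_cgrp)
		return !!sk->sk_cgrp->memory_pressure;

	return tcp_memory_pressure;
}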
Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
(adapted from v4.9.x commit b8da51ebb1aa93908350f95efae73aecbc2e266c)
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com> Reviewed-by: Jack Vogel <jack.vogel@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Eric Dumazet [Fri, 1 Apr 2016 15:52:19 +0000 (08:52 -0700)]
tcp: increment sk_drops for dropped rx packets
Now that ss can report sk_drops, we can instruct TCP to increment
this per-socket counter when it drops an incoming frame, to refine
monitoring and debugging.
The following patch takes care of listener drops.
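A sketch of the small helper this change introduces in net/ipv4/tcp_input.c:
static void tcp_drop(struct sock *sk, struct sk_buff *skb)
{
	sk_drops_add(sk, skb);	/* account the drop on this socket */
	__kfree_skb(skb);
}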
Signed-off-by: Eric Dumazet <edumazet@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
(adapted from v4.9.x commit 532182cd610782db8c18230c2747626562032205)
Signed-off-by: Shannon Nelson <shannon.nelson@oracle.com> Reviewed-by: Jack Vogel <jack.vogel@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
George Kennedy [Thu, 2 Aug 2018 18:51:42 +0000 (14:51 -0400)]
x86/entry/64: Ensure %ebx handling correct in xen_failsafe_callback
error_entry and error_exit communicate the user vs. kernel status of
the frame using %ebx. Make sure %ebx is set properly in
xen_failsafe_callback().
This fix is different from the one provided in the XSA to minimize
risk of a regression. This alternative fix was suggested
by Andy Lutomirski in the original patch (Commit b3681dd548d0
("x86/entry/64: Remove %ebx handling from error_entry/exit"))
Signed-off-by: George Kennedy <george.kennedy@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Andi Kleen [Fri, 24 Aug 2018 17:03:50 +0000 (10:03 -0700)]
x86/speculation/l1tf: Increase l1tf memory limit for Nehalem+
On Nehalem and newer core CPUs the CPU cache internally uses 44 bits
physical address space. The L1TF workaround is limited by this internal
cache address width, and needs to have one bit free there for the
mitigation to work.
Older client systems report only 36bit physical address space so the range
check decides that L1TF is not mitigated for a 36bit phys/32GB system with
some memory holes.
But since these actually have the larger internal cache width this warning
is bogus because it would only really be needed if the system had more than
43bits of memory.
Add a new internal x86_cache_bits field. Normally it is the same as the
physical bits field reported by CPUID, but for Nehalem and newer, force it to
be at least 44 bits.
Change the L1TF memory size warning to use the new cache_bits field to
avoid bogus warnings and remove the bogus comment about memory size.
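A sketch of the idea; cpu_is_nehalem_or_newer() is a hypothetical helper standing in for the real per-model checks:
	c->x86_cache_bits = c->x86_phys_bits;	/* default: what CPUID reports */
	if (cpu_is_nehalem_or_newer(c) && c->x86_cache_bits < 44)
		c->x86_cache_bits = 44;		/* internal cache uses 44 address bits */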
Fixes: 17dbca119312 ("x86/speculation/l1tf: Add sysfs reporting for l1tf") Reported-by: George Anchev <studio@anchev.net> Reported-by: Christopher Snowhill <kode54@gmail.com> Signed-off-by: Andi Kleen <ak@linux.intel.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: x86@kernel.org Cc: linux-kernel@vger.kernel.org Cc: Michael Hocko <mhocko@suse.com> Cc: vbabka@suse.cz Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20180824170351.34874-1-andi@firstfloor.org
(cherry picked from commit cc51e5428ea54f575d49cfcede1d4cb3a72b4ec4)
Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Liam Merwick <liam.merwick@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Conflicts:
arch/x86/include/asm/processor.h
arch/x86/kernel/cpu/common.c
arch/x86/kernel/cpu/bugs.c
Contextual: different content; bugs_64.c was modified instead of bugs.c
Vlastimil Babka [Thu, 23 Aug 2018 14:21:29 +0000 (16:21 +0200)]
x86/speculation/l1tf: Suggest what to do on systems with too much RAM
Two users have reported [1] that they have an "extremely unlikely" system
with more than MAX_PA/2 memory and L1TF mitigation is not effective.
Make the warning more helpful by suggesting the proper mem=X kernel boot
parameter to make it effective and a link to the L1TF document to help
decide if the mitigation is worth the unusable RAM.
Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Liam Merwick <liam.merwick@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Conflicts:
arch/x86/kernel/cpu/bugs.c
Contextual: bugs_64.c was modified instead of bugs.c
Vlastimil Babka [Thu, 23 Aug 2018 13:44:18 +0000 (15:44 +0200)]
x86/speculation/l1tf: Fix off-by-one error when warning that system has too much RAM
Two users have reported [1] that they have an "extremely unlikely" system
with more than MAX_PA/2 memory and L1TF mitigation is not effective. In
fact it's a CPU with 36bits phys limit (64GB) and 32GB memory, but due to
holes in the e820 map, the main region is almost 500MB over the 32GB limit:
Suggestions to use 'mem=32G' to enable the L1TF mitigation while losing the
500MB revealed that there's an off-by-one error in the check in
l1tf_select_mitigation().
l1tf_pfn_limit() returns the last usable pfn (inclusive) and the range
check in the mitigation path does not take this into account.
Instead of amending the range check, make l1tf_pfn_limit() return the first
PFN which is over the limit, which is less error prone. Adjust the other
users accordingly.
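Combined with the 32-bit overflow fix in the next entry below, the resulting helper looks roughly like this (whether it is computed from x86_phys_bits or the newer x86_cache_bits depends on which of the related patches are applied):
static inline unsigned long long l1tf_pfn_limit(void)
{
	/* first PFN that is over the limit (exclusive), in a 64-bit type */
	return BIT_ULL(boot_cpu_data.x86_phys_bits - 1 - PAGE_SHIFT);
}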
Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Liam Merwick <liam.merwick@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Vlastimil Babka [Mon, 20 Aug 2018 09:58:35 +0000 (11:58 +0200)]
x86/speculation/l1tf: Fix overflow in l1tf_pfn_limit() on 32bit
On 32bit PAE kernels on 64bit hardware with enough physical bits,
l1tf_pfn_limit() will overflow unsigned long. This in turn affects
max_swapfile_size() and can lead to swapon returning -EINVAL. This has been
observed in a 32bit guest with 42 bits physical address size, where
max_swapfile_size() overflows exactly to 1 << 32, thus zero, and produces
the following warning to dmesg:
[ 6.396845] Truncating oversized swap area, only using 0k out of 2047996k
Fix this by using unsigned long long instead.
Fixes: 17dbca119312 ("x86/speculation/l1tf: Add sysfs reporting for l1tf") Fixes: 377eeaa8e11f ("x86/speculation/l1tf: Limit swap file size to MAX_PA/2") Reported-by: Dominique Leuenberger <dimstar@suse.de> Reported-by: Adrian Schroeter <adrian@suse.de> Signed-off-by: Vlastimil Babka <vbabka@suse.cz> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Acked-by: Andi Kleen <ak@linux.intel.com> Acked-by: Michal Hocko <mhocko@suse.com> Cc: "H . Peter Anvin" <hpa@zytor.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Michal Hocko <mhocko@kernel.org> Cc: stable@vger.kernel.org Link: https://lkml.kernel.org/r/20180820095835.5298-1-vbabka@suse.cz
(cherry picked from commit 9df9516940a61d29aedf4d91b483ca6597e7d480)
Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Liam Merwick <liam.merwick@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Sean Christopherson [Fri, 17 Aug 2018 17:27:36 +0000 (10:27 -0700)]
x86/speculation/l1tf: Exempt zeroed PTEs from inversion
It turns out that we should *not* invert all not-present mappings,
because the all zeroes case is obviously special.
clear_page() does not undergo the XOR logic to invert the address bits,
i.e. PTE, PMD and PUD entries that have not been individually written
will have val=0 and so will trigger __pte_needs_invert(). As a result,
{pte,pmd,pud}_pfn() will return the wrong PFN value, i.e. all ones
(adjusted by the max PFN mask) instead of zero. A zeroed entry is ok
because the page at physical address 0 is reserved early in boot
specifically to mitigate L1TF, so explicitly exempt such entries from the
inversion when reading the PFN.
This manifested as an unexpected mprotect(..., PROT_NONE) failure when called
on a VMA that has VM_PFNMAP and was mmap'd as something other than
PROT_NONE but never used. mprotect() sends the PROT_NONE request down
prot_none_walk(), which walks the PTEs to check the PFNs.
prot_none_pte_entry() gets the bogus PFN from pte_pfn() and returns
-EACCES because it thinks mprotect() is trying to adjust a high MMIO
address.
[ This is a very modified version of Sean's original patch, but all
credit goes to Sean for doing this and also pointing out that
sometimes the __pte_needs_invert() function only gets the protection
bits, not the full eventual pte. But zero remains special even in
just protection bits, so that's ok. - Linus ]
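A sketch of the resulting check (arch/x86/include/asm/pgtable-invert.h upstream): an all-zero entry is never inverted:
static inline bool __pte_needs_invert(u64 val)
{
	return val && !(val & _PAGE_PRESENT);
}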
Fixes: f22cc87f6c1f ("x86/speculation/l1tf: Invert all not present mappings") Signed-off-by: Sean Christopherson <sean.j.christopherson@intel.com> Acked-by: Andi Kleen <ak@linux.intel.com> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Josh Poimboeuf <jpoimboe@redhat.com> Cc: Michal Hocko <mhocko@suse.com> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: Dave Hansen <dave.hansen@intel.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit f19f5c49bbc3ffcc9126cc245fc1b24cc29f4a37)
Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Liam Merwick <liam.merwick@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Liam Merwick <liam.merwick@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Conflicts:
arch/x86/kernel/cpu/bugs.c
Contextual: bugs_64.c was modified instead of bugs.c
Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Liam Merwick <liam.merwick@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
Conflicts: bad_spectre_microcode was in arch/x86/kernel/cpu/scattered.c Signed-off-by: Brian Maly <brian.maly@oracle.com>
Thomas Gleixner [Sun, 12 Aug 2018 18:41:45 +0000 (20:41 +0200)]
KVM: x86: SVM: Call x86_spec_ctrl_set_guest/host() with interrupts disabled
Mikhail reported the following lockdep splat:
WARNING: possible irq lock inversion dependency detected
CPU 0/KVM/10284 just changed the state of lock: 000000000d538a88 (&st->lock){+...}, at:
speculative_store_bypass_update+0x10b/0x170
but this lock was taken by another, HARDIRQ-safe lock
in the past:
(&(&sighand->siglock)->rlock){-.-.}
and interrupts could create inverse lock ordering between them.
In svm_vcpu_run() speculative_store_bypass_update() is called with
interrupts enabled via x86_virt_spec_ctrl_set_guest/host().
This is actually a false positive, because GIF=0 so interrupts are
disabled even if IF=1; however, we can easily move the invocations of
x86_virt_spec_ctrl_set_guest/host() into the interrupt disabled region to
cure it, and it's a good idea to keep the GIF=0/IF=1 area as small
and self-contained as possible.
Fixes: 1f50ddb4f418 ("x86/speculation: Handle HT correctly on AMD") Reported-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Tested-by: Mikhail Gavrilov <mikhail.v.gavrilov@gmail.com> Cc: Joerg Roedel <joro@8bytes.org> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Radim Krčmář <rkrcmar@redhat.com> Cc: Matthew Wilcox <willy@infradead.org> Cc: Borislav Petkov <bp@suse.de> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Tom Lendacky <thomas.lendacky@amd.com> Cc: kvm@vger.kernel.org Cc: x86@kernel.org Cc: stable@vger.kernel.org Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry picked from commit 024d83cadc6b2af027e473720f3c3da97496c318)
Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Liam Merwick <liam.merwick@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
Conflicts: contextual Signed-off-by: Brian Maly <brian.maly@oracle.com>
Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Liam Merwick <liam.merwick@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Ashok Raj [Wed, 28 Feb 2018 10:28:43 +0000 (11:28 +0100)]
x86/microcode: Do not upload microcode if CPUs are offline
Avoid loading microcode if any of the CPUs are offline, and issue a
warning. Having different microcode revisions on the system at any time
is outright dangerous.
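A sketch of the check described; the helper name check_online_cpus() and the message text are assumptions here:
static int check_online_cpus(void)
{
	if (num_online_cpus() == num_present_cpus())
		return 0;

	pr_err("Not all CPUs online, aborting microcode update.\n");
	return -EINVAL;
}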
Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Liam Merwick <liam.merwick@oracle.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Conflicts:
arch/x86/kernel/cpu/microcode/core.c
Contextual
In the for() loop, if (optlen > 0) && (optptr[1] == 0), the code enters an
infinite loop.
Test: receiving a packet whose IP header length is greater than 20 and whose
first IP option byte is 0 reproduces this issue.
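An illustrative guard only (the real change is in cipso_v4_optptr(), net/ipv4/cipso_ipv4.c, and may be shaped differently): a zero option-length byte must terminate the scan, otherwise optptr/optlen never advance:
		default:
			if (optptr[1] < 2)
				return NULL;	/* malformed length: stop instead of looping */
			taglen = optptr[1];
			break;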
Signed-off-by: yujuan.qi <yujuan.qi@mediatek.com> Acked-by: Paul Moore <paul@paul-moore.com> Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit 40413955ee265a5e42f710940ec78f5450d49149) Signed-off-by: Somasundaram Krishnasamy <somasundaram.krishnasamy@oracle.com> Reviewed-by: Brian Maly <brian.maly@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>