Thomas Gleixner [Sun, 29 Apr 2018 13:26:40 +0000 (15:26 +0200)]
x86/speculation: Add prctl for Speculative Store Bypass mitigation
Add prctl based control for Speculative Store Bypass mitigation and make it
the default mitigation for Intel and AMD.
Andi Kleen provided the following rationale (slightly redacted):
There are multiple levels of impact of Speculative Store Bypass:
1) JITed sandbox.
It cannot invoke system calls, but can do PRIME+PROBE and may have call
interfaces to other code
2) Native code process.
No protection inside the process at this level.
3) Kernel.
4) Between processes.
The prctl tries to protect against attacks mounted from case (1).
If the untrusted code can do random system calls then control is already
lost in a much worse way. So there needs to be system call protection in
some way (using a JIT not allowing them or seccomp). Or rather if the
process can subvert its environment somehow to do the prctl it can already
execute arbitrary code, which is much worse than SSB.
To put it differently, the point of the prctl is to not allow JITed code
to read data it shouldn't read from its JITed sandbox. If it already has
escaped its sandbox then it can already read everything it wants in its
address space, and do much worse.
The ability to control Speculative Store Bypass allows the protection to be
enabled selectively without affecting overall system performance.
Based on an initial patch from Tim Chen. Completely rewritten.
Mihai Carabas [Fri, 18 May 2018 09:42:44 +0000 (12:42 +0300)]
x86: thread_info.h: move RDS from index 5 to 23
In UEK4, the thread flags field is split in two parts:
- the lower bits of the word, which are usually used for "pending work-to-be-done"
- the upper bits of the word
A comment in arch/x86/include/asm/thread_info.h:88 notes that the lower bits
are hard-coded in entry_64.S, where a mask of 0x0000ffff is used to check the
state of the thread and determine whether it will return to userspace or not.
Because we used bit 5, which is in the lower-bits part, one of the checked
conditions was always true and the program never returned from the kernel.
To solve the issue, we moved RDS to bit 23, which was free.
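A minimal sketch of the resulting defines (bit values as described above;
surrounding flags omitted, not the verbatim patch):

    /* arch/x86/include/asm/thread_info.h - sketch */
    #define TIF_RDS     23                 /* above the 0x0000ffff work mask
                                              tested in entry_64.S */
    #define _TIF_RDS    (1 << TIF_RDS)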
Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Thomas Gleixner [Sun, 29 Apr 2018 13:21:42 +0000 (15:21 +0200)]
x86/process: Allow runtime control of Speculative Store Bypass
The Speculative Store Bypass vulnerability can be mitigated with the
Reduced Data Speculation (RDS) feature. To allow finer grained control of
this eventually expensive mitigation a per task mitigation control is
required.
Add a new TIF_RDS flag and put it into the group of TIF flags which are
evaluated for mismatch in switch_to(). If these bits differ in the previous
and the next task, then the slow path function __switch_to_xtra() is
invoked. Implement the TIF_RDS dependent mitigation control in the slow
path.
If the prctl for controlling Speculative Store Bypass is disabled or no
task uses the prctl then there is no overhead in the switch_to() fast
path.
Update the KVM related speculation control functions to take TIF_RDS into
account as well.
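A minimal sketch of the slow-path toggle, assuming the helper names used by
this series (x86_spec_ctrl_base, SPEC_CTRL_RDS):

    /* __switch_to_xtra() sketch: only reached when prev/next TIF bits differ */
    if ((tifp ^ tifn) & _TIF_RDS)
        wrmsrl(MSR_IA32_SPEC_CTRL,
               (tifn & _TIF_RDS) ? x86_spec_ctrl_base | SPEC_CTRL_RDS
                                 : x86_spec_ctrl_base);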
Based on a patch from Tim Chen. Completely rewritten.
Thomas Gleixner [Sun, 29 Apr 2018 13:20:11 +0000 (15:20 +0200)]
prctl: Add speculation control prctls
Add two new prctls to control aspects of speculation related vulnerabilities
and their mitigations, providing finer grained control over performance
impacting mitigations.
PR_GET_SPECULATION_CTRL returns the state of the speculation misfeature
which is selected with arg2 of prctl(2). The return value uses bits 0-2 with
the following meaning:
Bit  Define           Description
0    PR_SPEC_PRCTL    Mitigation can be controlled per task by
                      PR_SET_SPECULATION_CTRL
1    PR_SPEC_ENABLE   The speculation feature is enabled, mitigation is
                      disabled
2    PR_SPEC_DISABLE  The speculation feature is disabled, mitigation is
                      enabled
If all bits are 0 the CPU is not affected by the speculation misfeature.
If PR_SPEC_PRCTL is set, then the per task control of the mitigation is
available. If not set, prctl(PR_SET_SPECULATION_CTRL) for the speculation
misfeature will fail.
PR_SET_SPECULATION_CTRL allows controlling the speculation misfeature, which
is selected by arg2 of prctl(2), per task. arg3 is used to hand in the
control value, i.e. either PR_SPEC_ENABLE or PR_SPEC_DISABLE.
The common return values are:
EINVAL  prctl is not implemented by the architecture or the unused prctl()
        arguments are not 0
ENODEV  arg2 selects an unsupported speculation misfeature
PR_SET_SPECULATION_CTRL has these additional return values:
ERANGE  arg3 is incorrect, i.e. it is neither PR_SPEC_ENABLE nor PR_SPEC_DISABLE
ENXIO   prctl control of the selected speculation misfeature is disabled
The first supported controllable speculation misfeature is
PR_SPEC_STORE_BYPASS. Add the define so this can be shared between
architectures.
Based on an initial patch from Tim Chen and mostly rewritten.
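A hedged userspace usage sketch (the constant values below match the uapi
additions from this series; prefer the definitions from your prctl.h when
present):

    #include <stdio.h>
    #include <sys/prctl.h>

    #ifndef PR_SET_SPECULATION_CTRL
    #define PR_GET_SPECULATION_CTRL 52
    #define PR_SET_SPECULATION_CTRL 53
    #define PR_SPEC_STORE_BYPASS    0
    #define PR_SPEC_DISABLE         (1UL << 2)
    #endif

    int main(void)
    {
        /* Disable the misfeature (i.e. enable the mitigation) for this task */
        if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS,
                  PR_SPEC_DISABLE, 0, 0))
            perror("PR_SET_SPECULATION_CTRL");

        /* Read back the state; bits 0-2 as in the table above */
        printf("state: 0x%x\n",
               prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, 0, 0, 0));
        return 0;
    }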
Thomas Gleixner [Sun, 29 Apr 2018 13:01:37 +0000 (15:01 +0200)]
x86/speculation: Create spec-ctrl.h to avoid include hell
Having everything in nospec-branch.h creates a hell of dependencies when
adding the prctl based switching mechanism. Move everything which is not
required in nospec-branch.h to spec-ctrl.h and fix up the includes in the
relevant files.
x86/bugs/AMD: Add support to disable RDS on Fam[15,16,17]h if requested
AMD does not need the Speculative Store Bypass mitigation to be enabled.
The parameters for this are already available and can be done via MSR
C001_1020. Each family uses a different bit in that MSR for this.
[ tglx: Expose the bit mask via a variable and move the actual MSR fiddling
into the bugs code as that's the right thing to do and also required
to prepare for dynamic enable/disable ]
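A sketch of the MSR fiddling this moves into the bugs code (MSR C001_1020 is
MSR_AMD64_LS_CFG; the mask variable is the one the note above mentions):

    u64 msrval;

    rdmsrl(MSR_AMD64_LS_CFG, msrval);
    msrval |= x86_amd_ls_cfg_rds_mask;  /* the bit differs per Fam 15h/16h/17h */
    wrmsrl(MSR_AMD64_LS_CFG, msrval);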
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 1115a859f33276fe8afb31c60cf9d8e657872558) Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
Conflicts:
arch/x86/kernel/cpu/bugs.c
[It is called bugs_64.c]
x86/bugs/intel: Set proper CPU features and setup RDS
Intel CPUs expose methods to:
- Detect whether the RDS capability is available via CPUID.7.0.EDX[31],
- Enable RDS by setting bit 2 of the SPEC_CTRL MSR (0x48),
- Learn from MSR_IA32_ARCH_CAPABILITIES bit 4 that there is no need to
  enable RDS.
With that in mind, if spec_store_bypass_disable=[auto,on] is selected, set
the SPEC_CTRL MSR at boot time to enable RDS if the platform requires it.
Note that this does not fix the KVM case where the SPEC_CTRL is exposed to
guests which can muck with it; see the patch titled:
KVM/SVM/VMX/x86/spectre_v2: Support the combination of guest and host IBRS.
And for the firmware (IBRS to be set), see the patch titled:
x86/spectre_v2: Read SPEC_CTRL MSR during boot and re-use reserved bits
[ tglx: Disentangled it from the Intel implementation and kept the call order ]
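A boot-time sketch of the detection and enable path, with feature names
assumed from this series:

    u64 ia32_cap = 0;

    if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES))
        rdmsrl(MSR_IA32_ARCH_CAPABILITIES, ia32_cap);

    if (!(ia32_cap & (1 << 4)) &&              /* bit 4: no need to enable RDS */
        boot_cpu_has(X86_FEATURE_RDS)) {       /* CPUID.7.0.EDX[31] */
        x86_spec_ctrl_base |= SPEC_CTRL_RDS;   /* bit 2 of SPEC_CTRL (0x48) */
        wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
    }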
x86/bugs: Provide boot parameters for the spec_store_bypass_disable mitigation
Contemporary high performance processors use a common industry-wide
optimization known as "Speculative Store Bypass" in which loads from
addresses to which a recent store has occurred may (speculatively) see an
older value. Intel refers to this feature as "Memory Disambiguation" which
is part of their "Smart Memory Access" capability.
Memory Disambiguation can expose a cache side-channel attack against such
speculatively read values. An attacker can create exploit code that allows
them to read memory outside of a sandbox environment (for example,
malicious JavaScript in a web page), or to perform more complex attacks
against code running within the same privilege level, e.g. via the stack.
As a first step to mitigate against such attacks, provide a boot command
line control knob:
By default affected x86 processors will power on with Speculative
Store Bypass enabled. Hence the provided kernel parameters are written
from the point of view of whether to enable a mitigation or not.
The parameters are as follows:
- auto - Kernel detects whether your CPU model contains an implementation
of Speculative Store Bypass and picks the most appropriate
mitigation.
- on - disable Speculative Store Bypass
- off - enable Speculative Store Bypass
[ tglx: Reordered the checks so that the whole evaluation is not done
when the CPU does not support RDS ]
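For example (a hedged illustration of the knob described above), booting with
"spec_store_bypass_disable=on" forces the mitigation on regardless of CPU
detection, while "spec_store_bypass_disable=auto" leaves the choice to the
kernel.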
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 0cc5fa00b0a88dad140b4e5c2cead9951ad36822) Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
Conflicts:
arch/x86/include/asm/cpufeatures.h
[It is called cpufeature.h]
[We also need to use the scattered function to set the flag similar
to how the rest of CPUID.7.0.EDX[31] are done]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Borislav Petkov <bp@suse.de> Reviewed-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit c456442cd3a59eeb1d60293c26cbe2ff2c4e42cf) Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
Conflicts:
arch/x86/include/asm/cpufeatures.h
[It is a different file - cpufeature.h]
arch/x86/kernel/cpu/bugs.c
[As well, called bugs_64.c]
arch/x86/kernel/cpu/common.c
[Location of cpu_set_bug_bits is different and also had to drop the __initconst]
Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Link: http://lkml.kernel.org/r/20160906184254.94440-1-andriy.shevchenko@linux.intel.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit f5fbf848303c8704d0e1a1e7cabd08fd0a49552f) Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
Conflicts:
arch/x86/platform/atom/punit_atom_debug.c
[Does not exist]
drivers/pci/pci-mid.c
drivers/powercap/intel_rapl.c
[We never added the support for it]
x86/bugs, KVM: Support the combination of guest and host IBRS
A guest may modify the SPEC_CTRL MSR from the value used by the
kernel. Since the kernel doesn't use IBRS, this means a value of zero is
what is needed in the host.
But the 336996-Speculative-Execution-Side-Channel-Mitigations.pdf refers to
the other bits as reserved so the kernel should respect the boot time
SPEC_CTRL value and use that.
This makes it possible to deal with future extensions to the SPEC_CTRL
interface, if any at all.
Note: This uses wrmsrl() instead of native_wrmsrl(). It does not make any
difference as paravirt will overwrite the callq *0xfff.. with the wrmsrl
assembler code.
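A sketch of the helper pair this adds around VM entry/exit, assuming the
names from the patch:

    /* Switch SPEC_CTRL between the host base value and the guest's value */
    void x86_spec_ctrl_set_guest(u64 guest_spec_ctrl)
    {
        if (boot_cpu_has(X86_FEATURE_IBRS) &&
            x86_spec_ctrl_base != guest_spec_ctrl)
            wrmsrl(MSR_IA32_SPEC_CTRL, guest_spec_ctrl);
    }

    void x86_spec_ctrl_restore_host(u64 guest_spec_ctrl)
    {
        if (boot_cpu_has(X86_FEATURE_IBRS) &&
            x86_spec_ctrl_base != guest_spec_ctrl)
            wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
    }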
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Borislav Petkov <bp@suse.de> Reviewed-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 5cf687548705412da47c9cec342fd952d71ed3d5) Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
Conflicts:
arch/x86/kvm/svm.c
arch/x86/kvm/vmx.c
[We need to preserve the check for ibrs_inuse - which we can do now in the
functions]
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Konrad Rzeszutek Wilk [Sun, 13 May 2018 23:06:42 +0000 (23:06 +0000)]
x86/bugs/IBRS: Use variable instead of defines for enabling IBRS
This follows what the "x86/bugs: Read SPEC_CTRL MSR during boot and re-use
reserved bits" patch does - that is, respect the other bits of the
SPEC_CTRL MSR (if any at all).
This necessitates converting all the assembler macros over, along with all
the various uses of SPEC_CTRL guarded by 'use_ibrs'.
Note the not-so-obvious change in the assembler macro from 'cmp' to 'test'
to verify that the right bit is set.
And to make sure it works with the IBRS support, we need to recognize it in
x86_spec_ctrl_set().
This is not upstreamed. It builds on top of the IBRS backport.
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Conflicts:
arch/x86/kernel/cpu/bugs_64.c
Konrad Rzeszutek Wilk [Sat, 12 May 2018 01:11:04 +0000 (21:11 -0400)]
x86/bugs: Read SPEC_CTRL MSR during boot and re-use reserved bits
The 336996-Speculative-Execution-Side-Channel-Mitigations.pdf refers to all
the other bits as reserved. The Intel SDM glossary defines reserved as
implementation specific - aka unknown.
As such, at bootup this must be taken into account and proper masking
applied for the bits in use.
A copy of this document is available at
https://bugzilla.kernel.org/show_bug.cgi?id=199511
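A sketch of the boot-time read and the masking it implies, using the names
from this series:

    /* At boot: capture whatever (reserved) bits firmware left set */
    if (boot_cpu_has(X86_FEATURE_IBRS))
        rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);

    /* Later writes only OR in the bits we know about */
    void x86_spec_ctrl_set(u64 val)
    {
        if (val & ~SPEC_CTRL_IBRS)
            WARN_ONCE(1, "SPEC_CTRL MSR value 0x%16llx is unknown.\n", val);
        else
            wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base | val);
    }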
Suggested-by: Jon Masters <jcm@redhat.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Reviewed-by: Borislav Petkov <bp@suse.de> Reviewed-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 1b86883ccb8d5d9506529d42dbe1a5257cb30b18) Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
Conflicts:
arch/x86/include/asm/nospec-branch.h
[As we don't have the firmware_restrict_branch_speculation_start and
firmware_restrict_branch_speculation_end and end up with a different
name. See commit 473ad76ea8d76f34555d764a3d5820bc1b33cabf
"x86/speculation: Use IBRS if available before calling into firmware"]
arch/x86/kernel/cpu/bugs.c
[File is called bugs_64.c in UEK4]
[Also, the backport needs nospec-branch.h in different files, and we can't
use __ro_after_init]
Konrad Rzeszutek Wilk [Sat, 12 May 2018 00:12:34 +0000 (20:12 -0400)]
x86/bugs/IBRS: Turn on IBRS in spectre_v2_select_mitigation
instead of during early bootup. This makes bootup much faster: we may
get an NMI (watchdog) during boot before we make it to
spectre_v2_select_mitigation, which would otherwise mean running with
IBRS enabled.
Fixes: XYZ ("x86/bugs/IBRS: Use variable instead of defines for enabling IBRS") Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Hannes Reinecke [Fri, 30 Sep 2016 09:01:14 +0000 (11:01 +0200)]
scsi: libfc: Revisit kref handling
The kref handling in fc_rport is a mess. This patch updates
the kref handling according to the following rules:
- Take a reference whenever scheduling a workqueue
- Take a reference whenever an ELS command is sent
- Drop the reference at the end of the workqueue function
- Drop the reference at the end of handling ELS replies
- Take a reference when allocating an rport
- Drop the reference when removing an rport
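As a hedged illustration of the first rule (call sites vary across libfc;
the destructor name is taken from the libfc template of this era):

    /* Take a reference before scheduling; drop it if the work was not queued */
    kref_get(&rdata->kref);
    if (!schedule_delayed_work(&rdata->retry_work, delay))
        kref_put(&rdata->kref, lport->tt.rport_destroy);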
Signed-off-by: Hannes Reinecke <hare@suse.com> Acked-by: Johannes Thumshirn <jth@kernel.org> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
OraBug: 27363267
(cherry picked from commit 4d2095cc42a2d8062590891f929d9d694cbd927f) Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Fred Herard <fred.herard@oracle.com> Signed-off-by: Rajan Shanmugavelu <rajan.shanmugavelu@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Hannes Reinecke [Fri, 5 Aug 2016 12:55:02 +0000 (14:55 +0200)]
scsi: libfc: reset exchange manager during LOGO handling
FC-LS mandates that we should invalidate all sequences before sending a
LOGO. And we should set the event to RPORT_EV_STOP when a LOGO request
has been received to signal that all exchanges are terminated.
Signed-off-by: Hannes Reinecke <hare@suse.com> Reviewed-by: Chad Dupuis <chad.dupuis@qlogic.com> Tested-by: Chad Dupuis <chad.dupuis@qlogic.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
OraBug: 27363267
(cherry picked from commit 649eb8693857e9b9fca009fba4eb7e80f9f3a326) Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Fred Herard <fred.herard@oracle.com> Signed-off-by: Rajan Shanmugavelu <rajan.shanmugavelu@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Hannes Reinecke [Fri, 5 Aug 2016 12:55:01 +0000 (14:55 +0200)]
scsi: libfc: send LOGO for PLOGI failure
When running in point-to-multipoint mode PLOGI is done after FLOGI
completed. So when the PLOGI fails we should be sending a LOGO to the
remote port.
[mkp: Applied by hand]
Signed-off-by: Hannes Reinecke <hare@suse.com> Reviewed-by: Chad Dupuis <chad.dupuis@qlogic.com> Tested-by: Chad Dupuis <chad.dupuis@qlogic.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
OraBug: 27363267
(cherry picked from commit d391966a03846176a78ef8d53898de8b4302a2be) Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Fred Herard <fred.herard@oracle.com> Signed-off-by: Rajan Shanmugavelu <rajan.shanmugavelu@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Hannes Reinecke [Fri, 5 Aug 2016 12:55:00 +0000 (14:55 +0200)]
scsi: libfc: Issue PRLI after a PRLO has been received
When receiving a PRLO it just means that the operating parameters have
changed. It does _not_ mean that the port doesn't want to communicate
with us. So instead of implicitly logging out we should be issuing a
PRLI to figure out the new operating parameters. We can always recover
once PRLI fails.
Signed-off-by: Hannes Reinecke <hare@suse.com> Reviewed-by: Chad Dupuis <chad.dupuis@qlogic.com> Tested-by: Chad Dupuis <chad.dupuis@qlogic.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
OraBug: 27363267
(cherry picked from commit 166f310b629c046b7f5ca846adf978cda47b06c2) Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Fred Herard <fred.herard@oracle.com> Signed-off-by: Rajan Shanmugavelu <rajan.shanmugavelu@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Hannes Reinecke [Tue, 24 May 2016 06:11:58 +0000 (08:11 +0200)]
libfc: Update rport reference counting
Originally libfc would just be initializing the refcount to '1', and
using the disc_mutex to synchronize if and when the final put should be
happening. This has a race condition as the mutex might be delayed,
causing other threads to access an invalid structure. This patch
updates the rport reference counting to increase the reference every
time 'rport_lookup' is called, and decreases the reference
correspondingly. This removes the need to hold 'disc_mutex' when
removing the structure, and avoids the above race condition.
Signed-off-by: Hannes Reinecke <hare@suse.com> Acked-by: Vasu Dev <vasu.dev@intel.com> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
OraBug: 27363267
(cherry picked from commit baa6719f902af9c03e528b08dfb847de295b5137) Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Reviewed-by: Fred Herard <fred.herard@oracle.com> Signed-off-by: Rajan Shanmugavelu <rajan.shanmugavelu@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Elena Ufimtseva [Fri, 27 Apr 2018 23:53:51 +0000 (19:53 -0400)]
amd/kvm: do not intercept new MSRs for spectre v2 mitigation
Do not intercept MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD on AMD
for Spectre v2 mitigation.
As IBRS is not used on AMD, an attempt to intercept MSR_IA32_SPEC_CTRL
would crash the guest with an injected GP fault.
Also change the comment about the 'always' field in the
svm_direct_access_msrs structure for clarity.
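A sketch of the relevant rows in svm.c's direct_access_msrs (the 'always'
field says whether the MSR is always direct access, or only becomes so once
the guest touches it):

    static const struct svm_direct_access_msrs {
        u32 index;   /* MSR number */
        bool always; /* direct access to the MSR, regardless of guest state */
    } direct_access_msrs[] = {
        /* ... */
        { .index = MSR_IA32_SPEC_CTRL, .always = false },
        { .index = MSR_IA32_PRED_CMD,  .always = false },
        /* ... */
    };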
Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Reviewed-by: Håkon Bugge <haakon.bugge@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Set rm->atomic.op_active to 0 when rds_pin_pages() fails or the
user-supplied address is invalid; this prevents a NULL pointer
dereference in rds_atomic_free_op().
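A sketch of the shape of the fix (surrounding names approximated from
net/rds/rdma.c):

    ret = rds_pin_pages(args->local_addr, 1, &page, 1);
    if (ret != 1) {
        /* Mark the op inactive so rds_atomic_free_op() does not try
         * to unpin a page that was never pinned. */
        rm->atomic.op_active = 0;
        goto err;
    }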
Signed-off-by: Mohamed Ghannam <simo.ghannam@gmail.com> Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com> Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit 7d11f77f84b27cef452cee332f4e469503084737)
Reviewed-by: Shan Hai <shan.hai@oracle.com> Signed-off-by: Allen Pais <allen.pais@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
There's no need to be printing a raw kernel pointer to the kernel log at
every boot. So just remove it, and change the whole message to use the
correct dev_info() call at the same time.
Reported-by: Wang Qize <wang_qize@venustech.com.cn> Cc: All applicable <stable@vger.kernel.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
(cherry picked from commit 43cdd1b716b26f6af16da4e145b6578f98798bf6)
Reviewed-by: Shan Hai <shan.hai@oracle.com> Signed-off-by: Allen Pais <allen.pais@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Venkat Venkatsubra [Wed, 16 May 2018 21:08:50 +0000 (14:08 -0700)]
net: ipv4: add support for ECMP hash policy choice
This patch adds support for ECMP hash policy choice via a new sysctl
called fib_multipath_hash_policy and also adds support for L4 hashes.
The current values for fib_multipath_hash_policy are:
0 - layer 3 (default)
1 - layer 4
If there's an skb hash already set and it matches the chosen policy then it
will be used instead of being calculated (currently only for L4).
In L3 mode we always calculate the hash due to the ICMP error special
case; the flow dissector's field consistentification should handle the
address order, thus we can remove the address reversals.
If the skb is provided we always use it for the hash calculation;
otherwise we fall back to fl4, that is, if skb is NULL, fl4 has to be set.
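A sketch of the policy choice, with helper names assumed from the backport:

    /* Pick L3 or L4 multipath hashing per the sysctl */
    switch (net->ipv4.sysctl_fib_multipath_hash_policy) {
    case 0:                                  /* layer 3 (default) */
        hash = fib_multipath_hash_l3(fl4);               /* hypothetical */
        break;
    case 1:                                  /* layer 4 */
        if (skb && skb->l4_hash)
            hash = skb->hash;                /* reuse a matching skb hash */
        else
            hash = fib_multipath_hash_l4(skb, fl4);      /* hypothetical */
        break;
    }

The knob can then be flipped at runtime with
"sysctl -w net.ipv4.fib_multipath_hash_policy=1".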
Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com> Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit bf4e0a3db97eb882368fd82980b3b1fa0b5b9778)
Reviewed-by: Zhu Yanjun <yanjun.zhu@oracle.com> Signed-off-by: Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Conflicts:
include/net/ip_fib.h
include/net/netns/ipv4.h
include/net/route.h
net/ipv4/fib_semantics.c
net/ipv4/icmp.c
net/ipv4/route.c
net/ipv4/sysctl_net_ipv4.c
host h1 with IP 12.0.0.1 has 2 paths to host h3 at 15.0.0.5:
root@h1:~# ip ro ls
...
12.0.0.0/24 dev swp1 proto kernel scope link src 12.0.0.1
15.0.0.0/16
        nexthop via 12.0.0.2 dev swp1 weight 1
        nexthop via 12.0.0.3 dev swp1 weight 1
...
If the link between tor3 and tor1 is down, as is the link between tor1
and tor2, then tor1 is effectively cut off from h1. Yet the route lookups
in h1 are alternating between the 2 routes: ping 15.0.0.5 gets one and
ssh 15.0.0.5 gets the other. Connections that attempt to use the
12.0.0.2 nexthop fail since that neighbor is not reachable:
root@h1:~# ip neigh show
...
12.0.0.3 dev swp1 lladdr 00:02:00:00:00:1b REACHABLE
12.0.0.2 dev swp1 FAILED
...
The failed path can be avoided by considering known neighbor information
when selecting next hops. If the neighbor lookup fails we have no
knowledge about the nexthop, so give it a shot. If there is an entry
then only select the nexthop if the state is sane. This is similar to
what fib_detect_death does.
To maintain backward compatibility use of the neighbor information is
based on a new sysctl, fib_multipath_use_neigh.
Signed-off-by: David Ahern <dsa@cumulusnetworks.com> Reviewed-by: Julian Anastasov <ja@ssi.bg> Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit a6db4494d218c2e559173661ee972e048dc04fdd)
Reviewed-by: Zhu Yanjun <yanjun.zhu@oracle.com> Signed-off-by: Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Conflicts:
net/ipv4/fib_semantics.c
net/ipv4/sysctl_net_ipv4.c
Signed-off-by: Peter Nørlund <pch@ordbogen.com> Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit 0e884c78ee19e902f300ed147083c28a0c6302f0)
Reviewed-by: Zhu Yanjun <yanjun.zhu@oracle.com> Signed-off-by: Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Conflicts:
include/net/ip_fib.h
net/ipv4/fib_semantics.c
The bug can be easily triggered if an extra delay (e.g. 10ms) is added
between the test of DMF_FREEING & DMF_DELETING and dm_get() in
dm_get_from_kobject().
To fix it, we need to ensure the test of DMF_FREEING & DMF_DELETING and
dm_get() are done in an atomic way, so _minor_lock is used.
The other callers of dm_get() have also been checked to be OK: some
callers invoke dm_get() under _minor_lock, some callers invoke it under
_hash_lock, and dm_start_request() invoke it after increasing
md->open_count.
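A sketch of the fix shape in dm_get_from_kobject(), reconstructed from the
description above:

    struct mapped_device *md;

    md = container_of(kobj, struct mapped_device, kobj_holder.kobj);

    spin_lock(&_minor_lock);
    if (test_bit(DMF_FREEING, &md->flags) || dm_deleting_md(md)) {
        md = NULL;
        goto out;
    }
    dm_get(md);
out:
    spin_unlock(&_minor_lock);
    return md;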
Cc: stable@vger.kernel.org Signed-off-by: Hou Tao <houtao1@huawei.com> Signed-off-by: Mike Snitzer <snitzer@redhat.com>
(cherry picked from commit b9a41d21dceadf8104812626ef85dc56ee8a60ed)
Reviewed-by: Shan Hai <shan.hai@oracle.com> Signed-off-by: Allen Pais <allen.pais@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
NFS: only invalidate dentrys that are clearly invalid.
Since commit bafc9b754f75 ("vfs: More precise tests in d_invalidate")
in v3.18, a return of '0' from ->d_revalidate() will cause the dentry
to be invalidated even if it has filesystems mounted on it or on a
descendant. The mounted filesystem is unmounted.
This means we need to be careful not to return 0 unless the directory
referred to truly is invalid. So -ESTALE or -ENOENT should invalidate
the directory. Other errors such as -EPERM or -ERESTARTSYS should be
returned from ->d_revalidate() so they are propagated to the caller.
A particular problem can be demonstrated by:
1/ mount an NFS filesystem using NFSv3 on /mnt
2/ mount any other filesystem on /mnt/foo
3/ ls /mnt/foo
4/ turn off network, or otherwise make the server unable to respond
5/ ls /mnt/foo &
6/ cat /proc/$!/stack # note that nfs_lookup_revalidate is in the call stack
7/ kill -9 $! # this results in -ERESTARTSYS being returned
8/ observe that /mnt/foo has been unmounted.
This patch changes nfs_lookup_revalidate() to only treat
-ESTALE from nfs_lookup_verify_inode() and
-ESTALE or -ENOENT from ->lookup()
as indicating an invalid inode. Other errors are returned.
Also nfs_check_inode_attributes() is changed to return -ESTALE rather
than -EIO. This is consistent with the error returned in similar
circumstances from nfs_update_inode().
As this bug allows any user to unmount a filesystem mounted on an NFS
filesystem, this fix is suitable for stable kernels.
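A sketch of the resulting error filtering (function name from the patch;
labels as used in fs/nfs/dir.c):

    error = nfs_lookup_verify_inode(inode, flags);
    if (error == -ESTALE || error == -ENOENT)
        goto out_bad;      /* dentry truly invalid: return 0 */
    if (error)
        goto out_error;    /* e.g. -EPERM, -ERESTARTSYS: propagate */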
Fixes: bafc9b754f75 ("vfs: More precise tests in d_invalidate") Cc: stable@vger.kernel.org (v3.18+) Signed-off-by: NeilBrown <neilb@suse.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Orabug: 27870824
(cherry picked from commit cc89684c9a265828ce061037f1f79f4a68ccd3f7) Signed-off-by: Calum Mackay <calum.mackay@oracle.com> Reviewed-by: Jack Vogel <jack.vogel@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
David Ahern [Tue, 16 May 2017 06:19:17 +0000 (23:19 -0700)]
net: Improve handling of failures on link and route dumps
In general, rtnetlink dumps do not anticipate failure to dump a single
object (e.g., link or route) on a single pass. As both route and link
objects have grown via more attributes, that is no longer a given.
netlink dumps can handle a failure if the dump function returns an
error; specifically, netlink_dump adds the return code to the response
if it is <= 0 so userspace is notified of the failure. The missing
piece is the rtnetlink dump functions returning the error.
Fix route and link dump functions to return the errors if no object is
added to an skb (detected by skb->len != 0). IPv6 route dumps
(rt6_dump_route) already return the error; this patch updates IPv4 and
link dumps. Other dump functions may need to be adjusted as well.
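A sketch of the dump-loop shape after the fix:

    err = rtnl_fill_ifinfo(skb, dev, ...);
    if (err < 0) {
        if (likely(skb->len))
            goto out;      /* partial dump: userspace will continue */
        goto out_err;      /* nothing added: return err to userspace */
    }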
Reported-by: Jan Moskyto Matejka <mq@ucw.cz> Signed-off-by: David Ahern <dsahern@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
Orabug: 27959177
(cherry picked from commit f6c5775ff0bfa62b072face6bf1d40f659f194b2)
Add missing err variable for the cherry-pick in rtnetlink.c Signed-off-by: Jack Vogel <jack.vogel@oracle.com> Reviewed-by: Dan Duval <dan.duval@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Bytes b4 ffff8801f582d750: ae 01 ff ff 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a ........ZZZZZZZZ
Object ffff8801f582d760: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b kkkkkkkkkkkkkkkk
Object ffff8801f582d770: 6b 6b 6b 6b 6b 6b 6b a5 kkkkkkk.
Redzone ffff8801f582d778: bb bb bb bb bb bb bb bb ........
Padding ffff8801f582d8b8: 5a 5a 5a 5a 5a 5a 5a 5a ZZZZZZZZ
Memory state around the buggy address:
 ffff8801f582d600: fb fb fb fc fc fc fc fc fc fc fc fc fc fc fc fc
 ffff8801f582d680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>ffff8801f582d700: fc fc fc fc fc fc fc fc fc fc fc fc fb fb fb fc
The shared memory policy is not protected against parallel removal by
another thread, which is normally protected by the mmap_sem.
do_get_mempolicy(), however, drops the lock midway while we can still
access it later.
The early premature up_read is a historical artifact from the times when
put_user was called in this path (see https://lwn.net/Articles/124754/),
but that is gone since 8bccd85ffbaf ("[PATCH] Implement sys_* do_*
layering in the memory policy layer."). With the current mempolicy
reference count model, the premature release introduces the issue.
Fix the issue by removing the premature release.
Link: http://lkml.kernel.org/r/1502950924-27521-1-git-send-email-zhongjiang@huawei.com Signed-off-by: zhong jiang <zhongjiang@huawei.com> Acked-by: Michal Hocko <mhocko@suse.com> Cc: Minchan Kim <minchan@kernel.org> Cc: Vlastimil Babka <vbabka@suse.cz> Cc: David Rientjes <rientjes@google.com> Cc: Mel Gorman <mgorman@techsingularity.net> Cc: <stable@vger.kernel.org> [2.6+] Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Orabug: 27963519
CVE: CVE-2018-10675
(cherry picked from commit 73223e4e2e3867ebf033a5a8eb2e5df0158ccc99) Signed-off-by: Jack Vogel <jack.vogel@oracle.com> Reviewed-by: Shan Hai <shan.hai@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
The memmap options sent to the udl framebuffer driver were not being
checked for all sets of possible crazy values. Fix this up by properly
bounding the allowed values.
Eric Sandeen [Tue, 17 Apr 2018 06:07:27 +0000 (23:07 -0700)]
xfs: set format back to extents if xfs_bmap_extents_to_btree fails
If xfs_bmap_extents_to_btree fails in a mode where we call
xfs_iroot_realloc(-1) to de-allocate the root, set the
format back to extents.
Otherwise we can assume we can dereference ifp->if_broot
based on the XFS_DINODE_FMT_BTREE format, and crash.
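A sketch of the error-path fix as described (macro names from xfs):

    /* On failure, undo the root allocation and restore the extents format,
     * so nothing later dereferences ifp->if_broot for a non-btree fork. */
    xfs_iroot_realloc(ip, -1, whichfork);
    XFS_IFORK_FMT_SET(ip, whichfork, XFS_DINODE_FMT_EXTENTS);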
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=199423 Signed-off-by: Eric Sandeen <sandeen@redhat.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com> Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
(cherry picked from commit 2c4306f719b083d17df2963bc761777576b8ad1b)
Orabug: 27963576
CVE: CVE-2018-10323 Signed-off-by: Brian Maly <brian.maly@oracle.com>
Conflicts:
fs/xfs/libxfs/xfs_bmap.c
Drop a part of the patch which is inapplicable to the current kernel.
The WARN_ON_ONCE in the dropped part is introduced in the upstream
commit 2fcc319d2467 as a debugging facility since v4.11 kernel.
Signed-off-by: Shan Hai <shan.hai@oracle.com> Reviewed-by: Jack Vogel <jack.vogel@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
In the function l2cap_parse_conf_rsp and in the function
l2cap_parse_conf_req the following variable is declared without
initialization:
struct l2cap_conf_efs efs;
In addition, when parsing input configuration parameters in both of
these functions, the switch case for handling EFS elements may skip the
memcpy call that will write to the efs variable:
...
case L2CAP_CONF_EFS:
        if (olen == sizeof(efs))
                memcpy(&efs, (void *)val, olen);
...
The olen in the above if is attacker-controlled, and regardless of that
if, in both of these functions the efs variable would eventually be
added to the outgoing configuration request that is being built.
So by sending a configuration request, or response, that contains an
L2CAP_CONF_EFS element, but with an element length that is not
sizeof(efs) - the memcpy to the uninitialized efs variable can be
avoided, and the uninitialized variable would be returned to the
attacker (16 bytes).
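One possible hardening, sketched here (not necessarily the exact shipped
fix), is to reject the element rather than silently skipping the copy:

    case L2CAP_CONF_EFS:
            if (olen != sizeof(efs))
                    return -EPROTO;   /* never leave efs uninitialized */
            memcpy(&efs, (void *)val, olen);
            break;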
The capability check in nfnetlink_rcv() verifies that the caller
has CAP_NET_ADMIN in the namespace that "owns" the netlink socket.
However, nfnl_cthelper_list is shared by all net namespaces on the
system. An unprivileged user can create user and net namespaces
in which he holds CAP_NET_ADMIN to bypass the netlink_net_capable()
check:
Add capable() checks in nfnetlink_cthelper, as this is cleaner than
trying to generalize the solution.
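A sketch of the added check (the helper and error code are standard kernel
idioms; the exact call sites are in nfnl_cthelper):

    /* Require CAP_NET_ADMIN in the initial user namespace, since
     * nfnl_cthelper_list is global to all net namespaces. */
    if (!capable(CAP_NET_ADMIN))
        return -EPERM;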
Signed-off-by: Kevin Cernekee <cernekee@chromium.org> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
(cherry picked from commit 4b380c42f7d00a395feede754f0bc2292eebe6e5)
Kevin Cernekee [Wed, 6 Dec 2017 20:12:27 +0000 (12:12 -0800)]
netlink: Add netns check on taps
Currently, a nlmon link inside a child namespace can observe systemwide
netlink activity. Filter the traffic so that nlmon can only sniff
netlink messages from its own netns.
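A sketch of the filter's shape (assuming the tap delivery path in
net/netlink/af_netlink.c):

    /* Deliver to a tap device only if it lives in the same netns
     * as the socket that sent the message. */
    if (!net_eq(dev_net(dev), sock_net(sk)))
        return 0;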
Test case:
vpnns -- bash -c "ip link add nlmon0 type nlmon; \
                  ip link set nlmon0 up; \
                  tcpdump -i nlmon0 -q -w /tmp/nlmon.pcap -U" &

sudo ip xfrm state add src 10.1.1.1 dst 10.1.1.2 proto esp \
    spi 0x1 mode transport \
    auth sha1 0x6162633132330000000000000000000000000000 \
    enc aes 0x00000000000000000000000000000000
grep --binary abc123 /tmp/nlmon.pcap
Signed-off-by: Kevin Cernekee <cernekee@chromium.org> Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit 93c647643b48f0131f02e45da3bd367d80443291)
The patching path for vmmcall writes the 3-byte opcode 0F 01 C1 (vmcall)
into guest memory; however, the write_mmio tracepoint always prints 8 bytes
through *(u64 *)val since kvm splits the mmio access into 8 bytes. This
leaks 5 bytes from the kernel stack (CVE-2017-17741). This patch fixes
it by just accessing the bytes which we operate on.
Before patch:
syz-executor-5567 [007] .... 51370.561696: kvm_mmio: mmio write len 3 gpa 0x10 val 0x1ffff10077c1010f
After patch:
syz-executor-13416 [002] .... 51302.299573: kvm_mmio: mmio write len 3 gpa 0x10 val 0xc1010f
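A sketch of the fix's effect (names assumed): copy only the bytes the
access covers before handing a value to the tracepoint:

    u64 data = 0;

    memcpy(&data, val, min(len, 8));   /* never read past 'len' bytes */
    trace_kvm_mmio(KVM_TRACE_MMIO_WRITE, len, gpa, data);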
Reported-by: Dmitry Vyukov <dvyukov@google.com> Reviewed-by: Darren Kenny <darren.kenny@oracle.com> Reviewed-by: Marc Zyngier <marc.zyngier@arm.com> Tested-by: Marc Zyngier <marc.zyngier@arm.com> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Radim Krčmář <rkrcmar@redhat.com> Cc: Marc Zyngier <marc.zyngier@arm.com> Cc: Christoffer Dall <christoffer.dall@linaro.org> Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com> Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry-picked from commit 653c41ac4729261cb356ee1aff0f3f4f342be1eb)
Chuck Lever [Tue, 11 Apr 2017 17:22:46 +0000 (13:22 -0400)]
xprtrdma: Detect unreachable NFS/RDMA servers more reliably
Current NFS clients rely on connection loss to determine when to
retransmit. In particular, for protocols like NFSv4, clients no
longer rely on RPC timeouts to drive retransmission: NFSv4 servers
are required to terminate a connection when they need a client to
retransmit pending RPCs.
When a server is no longer reachable, either because it has crashed
or because the network path has broken, the server cannot actively
terminate a connection. Thus NFS clients depend on transport-level
keepalive to determine when a connection must be replaced and
pending RPCs retransmitted.
However, RDMA RC connections do not have a native keepalive
mechanism. If an NFS/RDMA server crashes after a client has sent
RPCs successfully (an RC ACK has been received for all OTW RDMA
requests), there is no way for the client to know the connection is
moribund.
In addition, new RDMA requests are subject to the RPC-over-RDMA
credit limit. If the client has consumed all granted credits with
NFS traffic, it is not allowed to send another RDMA request until
the server replies. Thus it has no way to send a true keepalive when
the workload has already consumed all credits with pending RPCs.
To address this, forcibly disconnect a transport when an RPC times
out. This prevents moribund connections from stopping the
detection of failover or other configuration changes on the server.
Note that even if the connection is still good, retransmitting
any RPC will trigger a disconnect thanks to this logic in
xprt_rdma_send_request:
        /* Must suppress retransmit to maintain credits */
        if (req->rl_connect_cookie == xprt->connect_cookie)
                goto drop_connection;
        req->rl_connect_cookie = xprt->connect_cookie;
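The resulting timeout hook itself is tiny; a sketch matching the
description:

    /* Force a fresh connection when an RPC times out */
    static void xprt_rdma_timer(struct rpc_xprt *xprt, struct rpc_task *task)
    {
        xprt_force_disconnect(xprt);
    }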
Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Orabug: 27587008
(cherry picked from commit 33849792cbcdae2b04819cfb09fe3dca0a84a11e) Signed-off-by: Calum Mackay <calum.mackay@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Chuck Lever [Tue, 11 Apr 2017 17:22:38 +0000 (13:22 -0400)]
sunrpc: Export xprt_force_disconnect()
xprt_force_disconnect() is already invoked from the socket
transport. I want to invoke xprt_force_disconnect() from the
RPC-over-RDMA transport, which is a separate module from sunrpc.ko.
Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Orabug: 27587008
(cherry picked from commit e2a4f4fbefc5e5b7b4435f73711b7be94f780584) Signed-off-by: Calum Mackay <calum.mackay@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Chuck Lever [Wed, 8 Feb 2017 22:00:51 +0000 (17:00 -0500)]
sunrpc: Allow xprt->ops->timer method to sleep
The transport lock is needed to protect the xprt_adjust_cwnd() call
in xs_udp_timer, but it is not necessary for accessing the
rq_reply_bytes_recvd or tk_status fields. It is correct to sublimate
the lock into UDP's xs_udp_timer method, where it is required.
The ->timer method has to take the transport lock if needed, but it
can now sleep safely, or even call back into the RPC scheduler.
This is more a clean-up than a fix, but the "issue" was introduced
by my transport switch patches back in 2005.
Fixes: 46c0ee8bc4ad ("RPC: separate xprt_timer implementations") Signed-off-by: Chuck Lever <chuck.lever@oracle.com> Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Orabug: 27587008
(cherry picked from commit b977b644ccf821ab1269582f7efe1d0d85faa1f6) Signed-off-by: Calum Mackay <calum.mackay@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Haozhong Zhang [Wed, 2 May 2018 01:12:08 +0000 (21:12 -0400)]
KVM: nVMX: fix guest CR4 loading when emulating L2 to L1 exit
When KVM emulates an exit from L2 to L1, it loads L1 CR4 into the
guest CR4. Before this CR4 loading, the guest CR4 refers to L2
CR4. Because these two CR4's are at different levels of the guest, we
should use vmx_set_cr4() rather than kvm_set_cr4() here. The latter, which
is used to handle guest writes to its CR4, checks the guest change to
CR4 and may fail if the change is invalid.
The failure may cause trouble. Consider we start
a L1 guest with non-zero L1 PCID in use,
(i.e. L1 CR4.PCIDE == 1 && L1 CR3.PCID != 0)
and
a L2 guest with L2 PCID disabled,
(i.e. L2 CR4.PCIDE == 0)
and following events may happen:
1. If kvm_set_cr4() is used in load_vmcs12_host_state() to load L1 CR4
into guest CR4 (in VMCS01) for L2 to L1 exit, it will fail because
of PCID check. As a result, the guest CR4 recorded in L0 KVM (i.e.
vcpu->arch.cr4) is left to the value of L2 CR4.
2. Later, if L1 attempts to change its CR4, e.g., clearing VMXE bit,
kvm_set_cr4() in L0 KVM will think L1 also wants to enable PCID,
because the wrong L2 CR4 is used by L0 KVM as L1 CR4. As L1
CR3.PCID != 0, L0 KVM will inject GP to L1 guest.
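The fix itself is a one-liner in load_vmcs12_host_state(); a sketch:

    /* was: kvm_set_cr4(vcpu, vmcs12->host_cr4), which re-runs the
     * guest-write checks and can fail as described above */
    vmx_set_cr4(vcpu, vmcs12->host_cr4);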
Fixes: 4704d0befb072 ("KVM: nVMX: Exiting from L2 to L1") Cc: qemu-stable@nongnu.org Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry-picked from commit 8eb3f87d903168bdbd1222776a6b1e281f50513e)
Orabug: 27720128 Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
x86/microcode: probe CPU features on microcode update
Probe for updated CPUID features each time the microcode is
loaded. Specifically this means when the sysfs cpu/microcode
nodes are created (which is when the microcode is first loaded)
or from a user trigger via the sysfs microcode/reload interface.
x86/microcode: microcode_write() should not reference boot_cpu_data
microcode_write() internally calls the AMD or Intel microcode
update logic, both of which update the cpu_data(cpu)->microcode
value. For probing speculation features, however, we call
init_scattered_cpuid_features() with boot_cpu_data, which is stale
and might hold an old microcode version value.
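A sketch of the repair implied above:

    /* Re-probe with the per-CPU data the update path just refreshed,
     * instead of the stale boot_cpu_data copy. */
    init_scattered_cpuid_features(&cpu_data(cpu));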
Jann Horn [Tue, 14 Nov 2017 00:03:44 +0000 (01:03 +0100)]
mm/pagewalk.c: report holes in hugetlb ranges
This matters at least for the mincore syscall, which will otherwise copy
uninitialized memory from the page allocator to userspace. It is
probably also a correctness error for /proc/$pid/pagemap, but I haven't
tested that.
Removing the `walk->hugetlb_entry` condition in walk_hugetlb_range() has
no effect because the caller already checks for that.
This only reports holes in hugetlb ranges to callers who have specified
a hugetlb_entry callback.
This issue was found using an AFL-based fuzzer.
v2:
- don't crash on ->pte_hole==NULL (Andrew Morton)
- add Cc stable (Andrew Morton)
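A sketch of the resulting walk_hugetlb_range() loop body (matches the
described change; note the backport adjusts the huge_pte_offset()
interface, per the conflict note below):

    pte = huge_pte_offset(walk->mm, addr & hmask);
    if (pte)
        err = walk->hugetlb_entry(pte, hmask, addr, next, walk);
    else if (walk->pte_hole)
        err = walk->pte_hole(addr, next, walk);   /* report the hole */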
(cherry picked from commit 373c4557d2aa362702c4c2d41288fb1e54990b7c)
Conflict: one line adjust due to huge_pte_offset() interface diff Signed-off-by: Jack Vogel <jack.vogel@oracle.com> Reviewed-by: Shan Hai <shan.hai@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
David Howells [Thu, 12 Oct 2017 15:00:41 +0000 (16:00 +0100)]
KEYS: don't let add_key() update an uninstantiated key
Currently, when passed a key that already exists, add_key() will call the
key's ->update() method if such exists. But this is heavily broken in the
case where the key is uninstantiated because it doesn't call
__key_instantiate_and_link(). Consequently, it doesn't do most of the
things that are supposed to happen when the key is instantiated, such as
setting the instantiation state, clearing KEY_FLAG_USER_CONSTRUCT and
awakening tasks waiting on it, and incrementing key->user->nikeys.
It also never takes key_construction_mutex, which means that
->instantiate() can run concurrently with ->update() on the same key. In
the case of the "user" and "logon" key types this causes a memory leak, at
best. Maybe even worse, the ->update() methods of the "encrypted" and
"trusted" key types actually just dereference a NULL pointer when passed an
uninstantiated key.
Change key_create_or_update() to wait interruptibly for the key to finish
construction before continuing.
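A sketch of the change in key_create_or_update(), using the existing
wait_for_key_construction() helper (error-path labels assumed):

    /* Wait interruptibly for a half-constructed key before updating it */
    ret = wait_for_key_construction(key, true);
    if (ret < 0) {
        key_ref_put(key_ref);
        key_ref = ERR_PTR(ret);
        goto error;
    }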
This patch only affects *uninstantiated* keys. For now we still allow a
negatively instantiated key to be updated (thereby positively
instantiating it), although that's broken too (the next patch fixes it)
and I'm not sure that anyone actually uses that functionality either.
Here is a simple reproducer for the bug using the "encrypted" key type
(requires CONFIG_ENCRYPTED_KEYS=y), though as noted above the bug
pertained to more than just the "encrypted" key type:
(cherry picked from commit 60ff5b2f547af3828aebafd54daded44cfb0807a) Signed-off-by: Jack Vogel <jack.vogel@oracle.com> Reviewed-by: Shan Hai <shan.hai@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
(cherry picked from commit 36274ab8c596f1240c606bb514da329add2a1bcd) Signed-off-by: Jack Vogel <jack.vogel@oracle.com> Reviewed-by: Shan Hai <shan.hai@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Buddy Lumpkin [Thu, 15 Mar 2018 06:57:13 +0000 (06:57 +0000)]
vmscan: Support multiple kswapd threads per node
Page replacement is handled in the Linux Kernel in one of two ways:
1) Asynchronously via kswapd
2) Synchronously, via direct reclaim
At page allocation time the allocating task is immediately given a page
from the zone free list allowing it to go right back to work doing
whatever it was doing; Probably directly or indirectly executing
business logic.
Just prior to satisfying the allocation, the free page count is checked to
see if it has reached the zone low watermark and if so, kswapd is awakened.
Kswapd will start scanning pages looking for inactive pages to evict to
make room for new page allocations. The work of kswapd allows tasks to
continue allocating memory from their respective zone free list without
incurring any delay.
When the demand for free pages exceeds the rate that kswapd tasks can
supply them, page allocation works differently. Once the allocating task
finds that the number of free pages is at or below the zone min
watermark, the task will no longer pull pages from the free list.
Instead, the task will run the same CPU-bound routines as kswapd to
satisfy its own allocation by scanning and evicting pages. This is
called a direct reclaim.
The time spent performing a direct reclaim can be substantial, often
taking tens to hundreds of milliseconds for small order-0 allocations, to
half a second or more for order-9 huge-page allocations. In fact, kswapd
is not actually required on a Linux system. It exists for the sole
purpose of optimizing performance by preventing direct reclaims.
When memory shortfall is sufficient to trigger direct reclaims, they can
occur in any task that is running on the system. A single aggressive
memory allocating task can set the stage for collateral damage to occur
in small tasks that rarely allocate additional memory. Consider the
impact of injecting an additional 100ms of latency when nscd allocates
memory to facilitate caching of a DNS query.
The presence of direct reclaims 10 years ago was a fairly reliable
indicator that too much was being asked of a Linux system. Kswapd was
likely wasting time scanning pages that were ineligible for eviction.
Adding RAM or reducing the working set size would usually make the
problem go away. Since then hardware has evolved to bring a new struggle
for kswapd. Storage speeds have increased by orders of magnitude while
CPU clock speeds have actually slowed down. This presents a throughput
problem for a single threaded kswapd that will get worse with each
generation of new hardware.
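A heavily hedged sketch of the core idea (all names hypothetical; the
actual UEK patch keeps the KABI workaround described in the NOTE further
below):

    /* Spawn several kswapd tasks per NUMA node instead of one */
    for (i = 0; i < kswapd_threads; i++) {
        kswapd_task[nid][i] = kthread_run(kswapd, pgdat, "kswapd%d:%d", nid, i);
        if (IS_ERR(kswapd_task[nid][i]))
            kswapd_task[nid][i] = NULL;
    }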
------------
Test Details
------------
The tests below were designed with the assumption that a kswapd
bottleneck is best demonstrated using filesystem reads. This way, the
inactive list will be full of clean pages, simplifying the analysis and
allowing kswapd to achieve the highest possible steal rate. Maximum
steal rates for kswapd are likely to be the same or lower for any other
mix of page types on the system.
Tests were run on a 2U Oracle X7-2L with 52 Intel Xeon Skylake 2GHz
cores, 756GB of RAM and 8 x 3.6 TB NVMe Solid State Disk drives. Each
drive has an XFS filesystem mounted separately as /d0 through /d7. NVMe
drives require multiple concurrent streams to show their potential, so I
created 11 250GB zero-filled files on each drive so that I could test
with parallel reads.
The test script runs in multiple stages. At each stage, the number of dd
tasks run concurrently is increased by 2. I did not include all of the
test output for brevity.
During each stage dd tasks are launched to read from each drive in a
round robin fashion until the specified number of tasks for the stage
has been reached. Then iostat, vmstat and top are started in the
background with 10 second intervals. After five minutes, all of the dd
tasks are killed and the iostat, vmstat and top output is parsed in
order to report the following:
CPU consumption
- sy: aggregate kernel mode CPU consumption from vmstat output. The
value doesn't tend to fluctuate much so I just grab the highest value.
Each sample is averaged over 10 seconds
- dd_cpu: for all of the dd tasks averaged across the top samples since
there is a lot of variation.
Throughput
- in Kbytes
- Command is iostat -x -d 10 -g total
This first test performs reads using O_DIRECT in order to show the peak
throughput that can be obtained using these drives. It also demonstrates
how rapidly throughput scales as the number of dd tasks are increased.
Throughput peaked with 40 dd tasks at 26265978.60 KB/s. Very little
system CPU was consumed; as expected, the drives DMA directly into the
user address space when using O_DIRECT.
The remaining tests do not use O_DIRECT. We drop the page cache before
testing and stop the test as soon as kswapd wakes up.
Each read has to pause after the buffer in kernel space is populated
while that data is added to the pagecache and copied into the user
address space. For this reason, more parallel streams are required to
achieve peak throughput. The copy operation consumes substantially more
CPU than direct IO as expected.
The next test measures throughput after kswapd starts running. This is
the same test only we wait for kswapd to wake up before we start
collecting metrics.
The script actually keeps track of a few things that were not mentioned
earlier. It tracks direct reclaims and page scans by watching the
metrics in /proc/vmstat. CPU consumption for kswapd is tracked the same
way it is tracked for dd.
Since the test is 100% reads, you can assume that the page steal rate
for kswapd and direct reclaims is almost identical to the scan rate.
Look closely at the scan statistics and the CPU consumption numbers and
it should be clear that the bulk of the CPU consumption is occurring in
the context of the dd tasks due to direct reclaims, not kswapd.
With 58 dd tasks, throughput is roughly the same as what we saw without
memory pressure. Ten additional kswapd tasks (5 per node) resulted in a
17% increase in aggregate kernel mode CPU consumption.
NOTE: The kswapd tasks were originally tracked with an array of task
structs in each pgdata structure. Sadly, any changes to the pg_data_t
resulted in KABI breakage. Look for the following definition that was
used as a workaround:
Currently F-RTO may repeatedly send new data packets on non-recurring
timeouts in CA_Loss mode. This is a bug because F-RTO (RFC5682)
should only be used on either new recovery or recurring timeouts.
This exacerbates the recovery progress during frequent timeout &
repair, because we prioritize sending new data packets instead of
repairing the holes when the bandwidth is already scarce.
Fix it by correcting the test of a new recovery episode.
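A sketch of the corrected test in tcp_enter_loss(), per the description
(variable names assumed from the upstream fix):

    /* F-RTO only on a new recovery episode or a recurring timeout */
    bool new_recovery = icsk->icsk_ca_state < TCP_CA_Recovery;

    tp->frto = sysctl_tcp_frto &&
               (new_recovery || icsk->icsk_retransmits) &&
               !inet_csk(sk)->icsk_mtup.probe_size;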
Signed-off-by: Yuchung Cheng <ycheng@google.com> Signed-off-by: Neal Cardwell <ncardwell@google.com> Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit f82b681a511f4d61069e9586a9cf97bdef371ef3)
Reviewed-by: Hakon Bugge <haakon.bugge@oracle.com> Signed-off-by: Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Commit c682e8474bd4 ("net/rds: reduce memory footprint during
ib_post_recv in IB transport") introduces an SG list instead of a
single contiguous fragment. When rebuilding the caches, it attempts
to release the number of fragments used by the new connection,
independent of the actual number of fragments used by the cache. This
leads to a kernel crash. Instead, release the correct number of
fragments.
Now that all rng implementations have switched over to the new
interface, we can remove the old low-level interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
(cherry picked from commit 94f1bb15bed84ad6c893916b7e7b9db6f1d7eec6) Signed-off-by: Brian Maly <brian.maly@oracle.com>
This patch converts the DRBG implementation to the new low-level
rng interface.
This allows us to get rid of struct drbg_gen by using the new RNG
API instead.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Acked-by: Stephan Mueller <smueller@chronox.de>
(cherry picked from commit 8fded5925d0a733c46f8d0b5edd1c9b315882b1d) Signed-off-by: Brian Maly <brian.maly@oracle.com>
Conflicts:
crypto/drbg.c
This patch converts the ANSI CPRNG implementation to the new
low-level rng interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au> Acked-by: Neil Horman <nhorman@tuxdriver.com>
(cherry picked from commit e7c2422a839bfc6876a2f7a9b283bb2963f0287b) Signed-off-by: Brian Maly <brian.maly@oracle.com>
This patch converts the KRNG implementation to the new low-level
rng interface.
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
(cherry picked from commit e33cf2c5aab7d0012e7890089e89ae2466c2449c) Signed-off-by: Brian Maly <brian.maly@oracle.com>
When args->nr_local is 0, nr_pages also ends up 0 due to a size
calculation in rds_rm_size(), and is later used to allocate
pages for DMA; this bug produces a heap out-of-bounds write access
to a specific memory region.
Signed-off-by: Mohamed Ghannam <simo.ghannam@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit c095508770aebf1b9218e77026e48345d719b17c) Signed-off-by: Brian Maly <brian.maly@oracle.com>
Also, put_net() on the right cpu may reorder with left's cpu
list_replace_init(&cleanup_list, ..), and then cleanup_list
will be corrupted.
Since cleanup_net() is executed in worker thread, while
put_net(peer) can happen everywhere, there should be
enough time for concurrent get_net_ns_by_id() to pick
the peer up, and the race does not seem to be unlikely.
The patch fixes the problem in standard way.
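A sketch of that standard fix in get_net_ns_by_id(): refuse a peer whose
refcount has already dropped to zero instead of blindly taking a reference:

    rcu_read_lock();
    peer = idr_find(&net->netns_ids, id);
    if (peer)
        peer = maybe_get_net(peer);   /* NULL if count already hit zero */
    rcu_read_unlock();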
(Also, there is possible problem in peernet2id_alloc(), which requires
check for net::count under nsid_lock and maybe_get_net(peer), but
in current stable kernel it's used under rtnl_lock() and it has to be
safe. Openvswitch began to use peernet2id_alloc(), and possibly it should
be fixed too. While this is not in stable kernel yet, so I'll send
a separate message to netdev@ later).
Cc: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com> Fixes: 0c7aecd4bde4 "netns: add rtnl cmd to add and get peer netns ids" Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com> Reviewed-by: "Eric W. Biederman" <ebiederm@xmission.com> Signed-off-by: Eric W. Biederman <ebiederm@xmission.com> Reviewed-by: Eric Dumazet <edumazet@google.com> Acked-by: Nicolas Dichtel <nicolas.dichtel@6wind.com> Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit 21b5944350052d2583e82dd59b19a9ba94a007f0) Signed-off-by: Brian Maly <brian.maly@oracle.com>
Conflicts:
net/core/net_namespace.c
Benjamin Coddington [Mon, 6 Jun 2016 22:07:59 +0000 (18:07 -0400)]
vhost/scsi: fix reuse of &vq->iov[out] in response
The address of the iovec &vq->iov[out] is not guaranteed to contain the scsi
command's response iovec throughout the lifetime of the command. Rather, it
is more likely to contain an iovec from an immediately following command
after looping back around to vhost_get_vq_desc(). Pass along the iovec
entirely instead.
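A sketch of the change (field names approximated from vhost/scsi.c):

    /* Submission: save a private copy of the response iovec in the command */
    cmd->tvc_resp_iov = vq->iov[out];

    /* Completion: build the iterator from the saved copy, not vq->iov[out] */
    iov_iter_init(&iov_iter, READ, &cmd->tvc_resp_iov, 1, sizeof(v_rsp));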
Fixes: 79c14141a487 ("vhost/scsi: Convert completion path to use copy_to_iter") Cc: stable@vger.kernel.org Signed-off-by: Benjamin Coddington <bcodding@redhat.com> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit a77ec83a57890240c546df00ca5df1cdeedb1cc3)
x86/kernel/traps.c: fix trace_die_notifier return value
When triggering an int3 directly, the trace_die_notifier() actually returns 1
(whereas all other notifiers return 0), and that 1 value was being interpreted
as an indicator that DTrace handled the trap and that emulation is needed. The
code, from that point on, took a branch that is only to be used when the trap
occurs in kernel code, which is not good when it was actually triggered from
userspace.
Signed-off-by: Kris Van Hees <kris.van.hees@oracle.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Andy Lutomirski [Tue, 27 Mar 2018 16:28:03 +0000 (18:28 +0200)]
x86/entry/64: Don't use IST entry for #BP stack
There's nothing IST-worthy about #BP/int3. We don't allow kprobes
in the small handful of places in the kernel that run at CPL0 with
an invalid stack, and 32-bit kernels have used normal interrupt
gates for #BP forever.
Furthermore, we don't allow kprobes in places that have usergs while
in kernel mode, so "paranoid" is also unnecessary.
Signed-off-by: Andy Lutomirski <luto@kernel.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: stable@vger.kernel.org Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit d8ba61ba58c88d5207c1ba2f7d9a2280e7d03be9)
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
gregkh@linuxfoundation.org [Tue, 27 Mar 2018 16:27:55 +0000 (18:27 +0200)]
kvm/x86: fix icebp instruction handling
The undocumented 'icebp' instruction (aka 'int1') works pretty much like
'int3' in the absence of in-circuit probing equipment (except,
obviously, that it raises #DB instead of raising #BP), and is used by
some validation test-suites as such.
But Andy Lutomirski noticed that his test suite acted differently in kvm
than on bare hardware.
The reason is that kvm used an inexact test for the icebp instruction:
it just assumed that an all-zero VM exit qualification value meant that
the VM exit was due to icebp.
That is not unlike the guess that do_debug() does for the actual
exception handling case, but it's purely a heuristic, not an absolute
rule. do_debug() does it because it wants to ascribe _some_ reasons to
the #DB that happened, and an empty %dr6 value means that 'icebp' is the
most likely cause and we have no better information.
But kvm can just do it right, because unlike the do_debug() case, kvm
actually sees the real reason for the #DB in the VM-exit interruption
information field.
So instead of relying on an inexact heuristic, just use the actual VM
exit information that says "it was 'icebp'".
Right now the 'icebp' instruction isn't technically documented by Intel,
but that will hopefully change. The special "privileged software
exception" information _is_ actually mentioned in the Intel SDM, even
though the cause of it isn't enumerated.
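A sketch of an exact check along the lines of the upstream helper:

    static inline bool is_icebp(u32 intr_info)
    {
        /* "privileged software exception" == icebp/int1 */
        return (intr_info & (INTR_INFO_INTR_TYPE_MASK | INTR_INFO_VALID_MASK))
            == (INTR_TYPE_PRIV_SW_EXCEPTION | INTR_INFO_VALID_MASK);
    }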
Reported-by: Andy Lutomirski <luto@kernel.org> Tested-by: Paolo Bonzini <pbonzini@redhat.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 32d43cd391bacb5f0814c2624399a5dad3501d09)
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Annoyingly, modify_user_hw_breakpoint() unnecessarily complicates the
modification of a breakpoint - simplify it and remove the pointless
local variables.
Also update the stale Docbook while at it.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Acked-by: Thomas Gleixner <tglx@linutronix.de> Cc: <stable@vger.kernel.org> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Arnaldo Carvalho de Melo <acme@redhat.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Stephane Eranian <eranian@google.com> Cc: Vince Weaver <vincent.weaver@maine.edu> Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit f67b15037a7a50c57f72e69a6d59941ad90a0f0f) Signed-off-by: Brian Maly <brian.maly@oracle.com>
Jianchao Wang [Wed, 4 Apr 2018 02:07:31 +0000 (10:07 +0800)]
scsi: iscsi_tcp: set BDI_CAP_STABLE_WRITES when data digest enabled
iscsi_tcp first sends out data, then calculates and sends the data
digest. If we don't have BDI_CAP_STABLE_WRITES, the page cache can be
written to while writeback is still in progress. Consequently, a wrong
digest is calculated and sent to the target.
To fix this, set BDI_CAP_STABLE_WRITES when the data digest is enabled,
in the iscsi_tcp .slave_configure callback.
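A minimal sketch of the callback, assuming this kernel's embedded
(non-pointer) backing_dev_info layout:

    static int iscsi_sw_tcp_slave_configure(struct scsi_device *sdev)
    {
        struct iscsi_sw_tcp_host *tcp_sw_host = iscsi_host_priv(sdev->host);
        struct iscsi_session *session = tcp_sw_host->session;
        struct iscsi_conn *conn = session->leadconn;

        /* stable pages keep data unchanged until the digest is sent */
        if (conn->datadgst_en)
            sdev->request_queue->backing_dev_info.capabilities
                |= BDI_CAP_STABLE_WRITES;
        return 0;
    }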
Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com> Acked-by: Chris Leech <cleech@redhat.com> Acked-by: Lee Duncan <lduncan@suse.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
(backport upstream commit 89d0c804392bb962553f23dc4c119d11b6bd1675)
Reviewed-by: Junxiao Bi <junxiao.bi@oracle.com> Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Ming Lei [Fri, 14 Apr 2017 19:58:29 +0000 (13:58 -0600)]
block: fix bio_will_gap() for first bvec with offset
Commit 729204ef49ec ("block: relax check on sg gap") allows us to merge
bios if both are physically contiguous. This change can merge a huge
number of small bios; with mkfs, for example, mkfs.ntfs running time
can be decreased to ~1/10.
But if one rq starts with a non-aligned buffer (the 1st bvec's bv_offset
is non-zero) and we allow the merge, it is quite difficult to respect
the sg gap limit, especially the max segment size, and we risk an
unaligned virtual boundary. This patch avoids the issue by disallowing
the merge if the req starts with an unaligned buffer.
Also add comments to explain why the merged segment can't end on an
unaligned virt boundary.
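A sketch of the added check at the top of bio_will_gap(), using the
bio_get_first_bvec() style helper from the related patches:

    static inline bool bio_will_gap(struct request_queue *q,
                                    struct bio *prev, struct bio *next)
    {
        if (bio_has_data(prev) && queue_virt_boundary(q)) {
            struct bio_vec pb;

            /*
             * Don't merge if the request starts with an unaligned
             * buffer; respecting the sg gap / max segment size
             * limits is otherwise quite difficult.
             */
            bio_get_first_bvec(prev, &pb);
            if (pb.bv_offset)
                return true;
        }
        return false;  /* sketch: the real helper then checks the gap */
    }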
Fixes: 729204ef49ec ("block: relax check on sg gap") Tested-by: Johannes Thumshirn <jthumshirn@suse.de> Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de> Signed-off-by: Ming Lei <ming.lei@redhat.com>
Rewrote parts of the commit message and comments.
Signed-off-by: Shan Hai <shan.hai@oracle.com> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Ming Lei [Sat, 17 Dec 2016 10:49:09 +0000 (18:49 +0800)]
block: relax check on sg gap
If the last bvec of the 1st bio and the 1st bvec of the next
bio are physically contiguous, and the latter can be merged
into the last segment of the 1st bio, we should consider that
they don't violate the sg gap (or virt boundary) limit.
Both Vitaly and Dexuan reported that lots of unmergeable small bios
are observed when running mkfs on Hyper-V virtual storage, and
performance becomes quite low. This patch fixes that performance
issue.
The same issue should exist on NVMe, since it sets a virt boundary too.
Reported-by: Vitaly Kuznetsov <vkuznets@redhat.com> Reported-by: Dexuan Cui <decui@microsoft.com> Tested-by: Dexuan Cui <decui@microsoft.com> Cc: Keith Busch <keith.busch@intel.com> Signed-off-by: Ming Lei <ming.lei@canonical.com> Signed-off-by: Jens Axboe <axboe@fb.com>
(cherry picked from commit 729204ef49ec00b788ce23deb9eb922a5769f55d)
Signed-off-by: Shan Hai <shan.hai@oracle.com> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Ming Lei [Sat, 12 Mar 2016 14:56:19 +0000 (22:56 +0800)]
block: don't optimize for non-cloned bio in bio_get_last_bvec()
For a !BIO_CLONED bio, we can use .bi_vcnt safely, but that
doesn't mean we can simply return .bi_io_vec[.bi_vcnt - 1],
because the start position may have been moved into the middle
of a bvec, for example by splitting in the middle of a bvec.
Fixes: 7bcd79ac50d9(block: bio: introduce helpers to get the 1st and last bvec) Cc: stable@vger.kernel.org Reported-by: Kent Overstreet <kent.overstreet@gmail.com> Signed-off-by: Ming Lei <ming.lei@canonical.com> Signed-off-by: Jens Axboe <axboe@fb.com>
(cherry picked from commit 90d0f0f11588ec692c12f9009089b398be395184)
Signed-off-by: Shan Hai <shan.hai@oracle.com> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Ming Lei [Fri, 26 Feb 2016 15:40:51 +0000 (23:40 +0800)]
block: check virt boundary in bio_will_gap()
In the following patch, the way of figuring out
the last bvec will be changed, with a bit of cost introduced,
so return immediately if the queue doesn't have a virt
boundary limit. Actually most devices do not have this
limit.
Reviewed-by: Sagi Grimberg <sagig@mellanox.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Ming Lei <ming.lei@canonical.com> Signed-off-by: Jens Axboe <axboe@fb.com>
(cherry picked from commit e0af29171aa8912e1ca95023b75ef336cd70d661)
Signed-off-by: Shan Hai <shan.hai@oracle.com> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Olga Kornievskaia [Mon, 14 Sep 2015 23:54:36 +0000 (19:54 -0400)]
Failing to send a CLOSE if file is opened WRONLY and server reboots on a 4.x mount
A test case is as the description says:
open(foobar, O_WRONLY);
sleep() --> reboot the server
close(foobar)
The bug is that in nfs4state.c, in nfs4_reclaim_open_state(), a few
lines before going to restart, there is
clear_bit(NFS4CLNT_RECLAIM_NOGRACE, &state->flags).
NFS4CLNT_RECLAIM_NOGRACE is a flag for the client state, not the open
owner state. The value of NFS4CLNT_RECLAIM_NOGRACE is 4, which is the
value of NFS_O_WRONLY_STATE in nfs4_state->flags. So clearing it wipes
out the state, and when we go to close it, "call_close" doesn't get set
because the state flag is not set, and the CLOSE doesn't go on the wire.
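Concretely (flag name per the upstream fix), the clear_bit() needs to
operate on the matching flags word:

    /* buggy: bit 4 of state->flags is NFS_O_WRONLY_STATE, wiped by accident */
    clear_bit(NFS4CLNT_RECLAIM_NOGRACE, &state->flags);

    /* fixed: clear the open-state reclaim flag instead */
    clear_bit(NFS_STATE_RECLAIM_NOGRACE, &state->flags);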
Signed-off-by: Olga Kornievskaia <aglo@umich.edu> Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Orabug: 27848303
(cherry picked from commit a41cbe86df3afbc82311a1640e20858c0cd7e065) Signed-off-by: Calum Mackay <calum.mackay@oracle.com> Reviewed-by: Manjunath Patil <manjunath.b.patil@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Theodore Ts'o [Tue, 27 Mar 2018 03:54:10 +0000 (23:54 -0400)]
ext4: add validity checks for bitmap block numbers
A privileged attacker can cause a crash by mounting a crafted ext4
image which triggers an out-of-bounds read in the function
ext4_valid_block_bitmap() in fs/ext4/balloc.c.
While reflinking an inode, we create a new inode in the orphan directory,
then take an EX lock on it, reflink the original inode to the orphan
inode, and release the EX lock. Once the lock is released, another node
might request it in PR mode, which causes a downconvert of the lock to
PR mode.
Later we attempt to initialize the security acl for the orphan inode and
move it to the reflink destination. However, while doing this we don't
take an EX lock on the inode. So effectively we are doing this, and
accessing the journal for this inode, while holding only a PR lock.
While accessing the journal, we set
ci->ci_last_trans = journal->j_trans_id
At this point, if there is another downconvert request on this inode
from another node (PR->NL), we will trip on the condition in
ocfs2_ci_checkpointed(), because we hold the lock in PR mode and
journal->j_trans_id is not greater than ci_last_trans for the inode.
Fix this by taking the orphan inode cluster lock in EX mode before
initializing security and moving the orphan inode to the reflink
destination. Use the __tracker variant while taking the inode lock to
avoid recursive locking in the ocfs2_init_security_and_acl() call chain.
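A hedged sketch of the EX-mode tracker locking described above (helper
signatures assumed from the ocfs2 lock-tracker API; error handling
elided):

    struct ocfs2_lock_holder oh;
    int had_lock;

    /* EX (ex = 1) lock; the tracker makes recursion safe */
    had_lock = ocfs2_inode_lock_tracker(new_orphan_inode, NULL, 1, &oh);
    if (had_lock < 0)
        return had_lock;

    error = ocfs2_init_security_and_acl(dir, new_orphan_inode,
                                        &dentry->d_name);

    ocfs2_inode_unlock_tracker(new_orphan_inode, 1, &oh, had_lock);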
Signed-off-by: Ashish Samant <ashish.samant@oracle.com> Reviewed-by: Junxiao Bi <junxiao.bi@oracle.com> Acked-by: Jun Piao <piaojun@huawei.com> Reviewed-by: Joseph Qi <jiangqi903@gmail.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Dmitry Torokhov [Mon, 23 Oct 2017 23:46:00 +0000 (16:46 -0700)]
Input: gtco - fix potential out-of-bound access
parse_hid_report_descriptor() has a while (i < length) loop, which
only guarantees that there's at least 1 byte in the buffer, but the
loop body can read multiple bytes, causing an out-of-bounds access.
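A hypothetical sketch of the bounds check the loop needs (HID short
items encode a 0/1/2/4-byte data size in the low two bits):

    while (i < length) {
        unsigned char prefix = report[i++];
        unsigned char size = prefix & 0x03;   /* HID short-item size field */

        if (size == 3)                        /* encoding: 3 means 4 bytes */
            size = 4;
        if (i + size > length)                /* data would run off the end */
            break;
        /* ... interpret 'size' data bytes at report[i] ... */
        i += size;
    }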
Signed-off-by: Tim Tianyang Chen <tianyang.chen@oracle.com> Reviewed-by: Brian Maly <brian.maly@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Dmitry Torokhov [Sat, 7 Oct 2017 18:07:47 +0000 (11:07 -0700)]
Input: ims-psu - check if CDC union descriptor is sane
Before trying to use the CDC union descriptor, validate that it is sane
by checking that intf->altsetting->extra is big enough and that the
descriptor's bLength is neither too big nor too small.
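A hedged sketch of such a walk (helper name and exact limits assumed):

    static struct usb_cdc_union_desc *
    find_union_desc(struct usb_interface *intf)
    {
        unsigned char *buf = intf->altsetting->extra;
        int buflen = intf->altsetting->extralen;
        struct usb_cdc_union_desc *d;

        while (buflen >= sizeof(*d)) {
            d = (struct usb_cdc_union_desc *)buf;

            if (d->bLength < 2 || d->bLength > buflen)
                return NULL;    /* insane length: bail out */

            if (d->bDescriptorType == USB_DT_CS_INTERFACE &&
                d->bDescriptorSubType == USB_CDC_UNION_TYPE)
                return d->bLength >= sizeof(*d) ? d : NULL;

            buflen -= d->bLength;
            buf += d->bLength;
        }
        return NULL;
    }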
Signed-off-by: Tim Tianyang Chen <tianyang.chen@oracle.com> Reviewed-by: Jack Vogel <jack.vogel@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Alex Williamson [Mon, 2 Oct 2017 18:39:09 +0000 (12:39 -0600)]
vfio/pci: Virtualize Maximum Payload Size
With virtual PCI-Express chipsets, we now see userspace/guest drivers
trying to match the physical MPS setting to a virtual downstream port.
Of course a lone physical device surrounded by virtual interconnects
cannot make a correct decision for a proper MPS setting. Instead,
let's virtualize the MPS control register so that writes through to
hardware are disallowed. Userspace drivers like QEMU assume they can
write anything to the device and we'll filter out anything dangerous.
Since mismatched MPS can lead to AER and other faults, let's add it
to the kernel side rather than relying on userspace virtualization to
handle it.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Reviewed-by: Eric Auger <eric.auger@redhat.com>
(cherry picked from commit 523184972b282cd9ca17a76f6ca4742394856818) Signed-off-by: Wim ten Have <wim.ten.have@oracle.com> Reviewed-by: Ross Philipson <ross.philipson@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Alex Williamson [Mon, 26 Sep 2016 19:52:16 +0000 (13:52 -0600)]
vfio-pci: Virtualize PCIe & AF FLR
We use a BAR restore trick to try to detect when a user has performed
a device reset, possibly through FLR or other backdoors, to put things
back into a working state. This is important for backdoor resets, but
we can actually just virtualize the "front door" resets provided via
PCIe and AF FLR. Set these bits as virtualized + writable, allowing
the default write to set them in vconfig, then we can simply check the
bit, perform an FLR of our own, and clear the bit. We don't actually
have the granularity in PCI to specify the type of reset we want to
do, but generally devices don't implement both PCIe and AF FLR and
we'll favor these over other types of reset, so we should generally
line up. We do test whether the device provides the requested FLR type
to stay consistent with hardware capabilities though.
This seems to fix several instances of devices getting into bad states
with userspace drivers, like dpdk, running inside a VM.
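A sketch of the virtualized FLR handling in the PCIe capability write
path (variable names assumed; per the description the bit lands in
vconfig via the default write, is checked, acted on, and cleared):

    if ((cap & PCI_EXP_DEVCAP_FLR) && (ctrl & PCI_EXP_DEVCTL_BCR_FLR)) {
        /* clear the virtual bit, then perform a real FLR ourselves */
        *ctrl_reg &= cpu_to_le16((u16)~PCI_EXP_DEVCTL_BCR_FLR);
        pci_try_reset_function(vdev->pdev);
    }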
Signed-off-by: Alex Williamson <alex.williamson@redhat.com> Reviewed-by: Greg Rose <grose@lightfleet.com>
(cherry picked from commit ddf9dc0eb5314d6dac8b19b1cc37c739c6896e7e) Signed-off-by: Wim ten Have <wim.ten.have@oracle.com> Reviewed-by: Ross Philipson <ross.philipson@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Jianchao Wang [Thu, 19 Apr 2018 10:02:46 +0000 (18:02 +0800)]
uek-rpm: Disable DMA CMA
Change the following kernel config:
CONFIG_CMA_SIZE_MBYTES=16
to
CONFIG_CMA_SIZE_MBYTES=0
Orabug: 27892359
Reviewed-by: Thomas Tai <thomas.tai@oracle.com> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Commit c5f6ce97c1210 tries to address multiple resets but fails, as
work_busy doesn't involve any synchronization and can fail. This is
easily reproducible, as can be seen by the WARNING below, which is
triggered with the line:
WARN_ON(dev->ctrl.state == NVME_CTRL_RESETTING)
Allowing multiple resets can also result in multiple controller
removals if different conditions inside nvme_reset_work fail, which
might deadlock on device_release_driver.
This patch addresses the problem by using the controller state to
decide whether a reset should be queued or not, as the state change is
synchronized using the controller spinlock. Also, cancel_work_sync is
used to make sure remove cancels the reset_work and waits for it to
finish. This patch also changes the return value from -ENODEV to the
more appropriate -EBUSY if nvme_reset fails to change state.
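A sketch of the state-gated reset entry point, per the description
(workqueue name assumed from the nvme code of this era):

    int nvme_reset(struct nvme_dev *dev)
    {
        /* the state machine change is serialized by the ctrl lock */
        if (!nvme_change_ctrl_state(&dev->ctrl, NVME_CTRL_RESETTING))
            return -EBUSY;
        if (!queue_work(nvme_workq, &dev->reset_work))
            return -EBUSY;
        return 0;
    }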
Reviewed-by: Thomas Tai <thomas.tai@oracle.com> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Jianchao Wang [Thu, 15 Feb 2018 11:13:41 +0000 (19:13 +0800)]
nvme-pci: Fix nvme queue cleanup if IRQ setup fails
This patch fixes nvme queue cleanup if requesting an IRQ handler for
the queue's vector fails. It does this by resetting the cq_vector to
the uninitialized value of -1 so it is ignored for a controller reset.
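Roughly, per the description, the error path becomes (sketch; helper
signature assumed from the post-4.15 code):

    result = queue_request_irq(nvmeq);
    if (result < 0) {
        /* back to "uninitialized" so a controller reset skips this queue */
        nvmeq->cq_vector = -1;
        return result;
    }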
Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
[changelog updates, removed misc whitespace changes] Signed-off-by: Keith Busch <keith.busch@intel.com>
(cherry picked from f25a2dfc20e3a3ed8fe6618c331799dd7bd01190)
Orabug: 27892359
Reviewed-by: Thomas Tai <thomas.tai@oracle.com> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Keith Busch [Tue, 27 Jun 2017 23:44:05 +0000 (17:44 -0600)]
nvme/pci: Fix stuck nvme reset
The controller state is set to resetting prior to disabling the
controller, so this patch accounts for that state when deciding if it
needs to freeze the queues. Without this, an 'nvme reset /dev/nvme0'
blocks forever because the queues were never frozen.
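A sketch of the adjusted check (assuming it sits where queues are
frozen during disable):

    /* a reset has already moved us from LIVE to RESETTING */
    if (dev->ctrl.state == NVME_CTRL_LIVE ||
        dev->ctrl.state == NVME_CTRL_RESETTING)
        nvme_start_freeze(&dev->ctrl);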
Fixes: 82b057caefaf ("nvme-pci: fix multiple ctrl removal scheduling") Signed-off-by: Keith Busch <keith.busch@intel.com> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@kernel.dk>
(cherry picked from ebef7368571d88f0f80b817e6898075c62265b4e)
Orabug: 27892359
Reviewed-by: Thomas Tai <thomas.tai@oracle.com> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Keith Busch [Wed, 5 Oct 2016 20:32:45 +0000 (16:32 -0400)]
nvme: don't schedule multiple resets
The queue_work only fails if the work is pending, but not yet running. If
the work is running, the work item would get requeued, triggering a
double reset. If the first reset fails for any reason, the second
reset triggers:
WARN_ON(dev->ctrl.state == NVME_CTRL_RESETTING)
Hitting that schedules controller deletion for a second time, which
potentially takes a reference on the device that is being deleted.
If the reset occurs at the same time as a hot removal event, this causes
a double-free.
This patch has the reset helper function check if the work is busy
prior to queueing, and changes all places that schedule resets to use
this function. Since most users don't want to sync with that work, the
"flush_work" is moved to the only caller that wants to sync.
Signed-off-by: Keith Busch <keith.busch@intel.com> Reviewed-by: Sagi Grimberg<sagi@grimberg.me> Reviewed-by: Christoph Hellwig <hch@lst.de> Signed-off-by: Jens Axboe <axboe@fb.com>
(cherry picked from c5f6ce97c12104668784ee17fb927c52a944d3d8)
Orabug: 27892359
Reviewed-by: Thomas Tai <thomas.tai@oracle.com> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Junichi Nomura [Wed, 14 Oct 2015 05:02:15 +0000 (05:02 +0000)]
blk-mq: fix use-after-free in blk_mq_free_tag_set()
tags is freed in blk_mq_free_rq_map() and should not be used after that.
The problem doesn't manifest if CONFIG_CPUMASK_OFFSTACK is false, because
free_cpumask_var() is a no-op.
tags->cpumask is allocated in blk_mq_init_tags(), so it's natural to
free the cpumask in its counterpart, blk_mq_free_tags().
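A sketch of the resulting teardown, with the cpumask freed next to the
rest of the tag state (bt_free() as in the blk-mq tag code of this era):

    void blk_mq_free_tags(struct blk_mq_tags *tags)
    {
        bt_free(&tags->bitmap_tags);
        bt_free(&tags->breserved_tags);
        free_cpumask_var(tags->cpumask);   /* freed by its allocator's peer */
        kfree(tags);
    }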
Fixes: f26cdc8536ad ("blk-mq: Shared tag enhancements") Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com> Cc: Keith Busch <keith.busch@intel.com> Reviewed-by: Jeff Moyer <jmoyer@redhat.com> Signed-off-by: Jens Axboe <axboe@fb.com>
(cherry picked from f42d79ab67322e51b92dd7aa965e310c71352a64 )
Orabug: 27892359
Reviewed-by: Thomas Tai <thomas.tai@oracle.com> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
A malicious USB device with crafted descriptors can cause the kernel
to access unallocated memory by setting the bNumInterfaces value too
high in a configuration descriptor. Although the value is adjusted
during parsing, this adjustment is skipped in one of the error return
paths.
This patch prevents the problem by setting bNumInterfaces to 0
initially. The existing code already sets it to the proper value
after parsing is complete.
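A sketch of the described initialization in the config parser (variable
names assumed from the parsing code):

    nintf = nintf_orig = config->desc.bNumInterfaces;
    config->desc.bNumInterfaces = 0;   /* set to the real value again
                                          once parsing succeeds */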
Adrian Salido [Tue, 25 Apr 2017 23:55:26 +0000 (16:55 -0700)]
driver core: platform: fix race condition with driver_override
The driver_override implementation is susceptible to a race condition
when different threads are reading vs. storing a different driver
override. Add locking to avoid the race condition.
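A sketch of the store path with the added locking (mirroring the PCI
driver_override pattern; the show path takes the same device_lock()
around its snprintf):

    /* store side: swap the pointer under device_lock() */
    device_lock(dev);
    old = pdev->driver_override;
    if (strlen(driver_override)) {
        pdev->driver_override = driver_override;
    } else {
        kfree(driver_override);        /* an empty write clears the override */
        pdev->driver_override = NULL;
    }
    device_unlock(dev);
    kfree(old);                        /* free outside the lock */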
Signed-off-by: Tim Tianyang Chen <tianyang.chen@oracle.com> Reviewed-by: Jack Vogel <jack.vogel@oracle.com> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Fix this by not overwriting the port1 argument in usb_alloc_dev(), but
storing the raw port number as required by OF in an additional variable,
raw_port.
The problem is that trace_printk() cannot handle a format string like
%p*, which prints out content from behind a pointer: it stores only the
pointer value. When the trace file is opened, that pointer is
dereferenced, which causes problems. To use ftrace with such a format
string, __trace_printk() needs to be used; it stores the formatted
string in the trace buffer instead.
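Illustratively (addr is a hypothetical local):

    /* trace_printk() with a constant fmt records only the pointer
     * argument; %pI4 is expanded when the trace file is read, by
     * which time the pointed-to data may be gone */
    trace_printk("dst %pI4\n", &addr);

    /* __trace_printk() formats immediately into the ring buffer */
    __trace_printk(_THIS_IP_, "dst %pI4\n", &addr);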
In commit id 7ea004358b97c ("xfs: don't leave EFIs on AIL on mount
failure") I accidentally reverted aa6a6227435cb06c ("xfs: toggle
readonly state around xfs_log_mount_finish"), which caused a regression
in generic/417. Put back the code that enables iunlink processing on a
ro mount.
Setting dev->hard_mtu to 0 will cause a divide error in
usbnet_probe. Protect against devices with bogus CDC Ethernet
functional descriptors by ignoring a zero wMaxSegmentSize.
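A sketch of the guard, assuming it sits in the CDC Ethernet bind path
where hard_mtu is derived:

    u16 maxseg = le16_to_cpu(info->ether->wMaxSegmentSize);

    /* dev->hard_mtu == 0 would later divide by zero in usbnet_probe */
    if (maxseg)
        dev->hard_mtu = maxseg;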
Signed-off-by: Bjørn Mork <bjorn@mork.no> Acked-by: Oliver Neukum <oneukum@suse.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Brian Maly <brian.maly@oracle.com>
Conflicts:
drivers/net/usb/cdc_ether.c
whitespace correction
Zhou Chengming [Fri, 6 Jan 2017 01:32:32 +0000 (09:32 +0800)]
sysctl: Drop reference added by grab_header in proc_sys_readdir
Fixes CVE-2016-9191: proc_sys_readdir doesn't drop the reference
added by grab_header when returning from the !dir_emit_dots path.
It can cause any path that calls unregister_sysctl_table to
wait forever.
One cgroup maintainer mentioned that "cgroup is trying to offline
a cpuset css, which takes place under cgroup_mutex. The offlining
ends up trying to drain active usages of a sysctl table which apparently
is not happening."
The real reason is that proc_sys_readdir doesn't drop the reference
added by grab_header when returning from the !dir_emit_dots path. So
this cpuset offline path will wait here forever.
See here for details: http://www.openwall.com/lists/oss-security/2016/11/04/13
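The fix releases the header on that early return; a sketch:

    if (!dir_emit_dots(file, ctx)) {
        sysctl_head_finish(head);   /* drop the ref taken by grab_header */
        return 0;
    }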
Fixes: f0c3b5093add ("[readdir] convert procfs") Cc: stable@vger.kernel.org Reported-by: CAI Qian <caiqian@redhat.com> Tested-by: Yang Shukui <yangshukui@huawei.com> Signed-off-by: Zhou Chengming <zhouchengming1@huawei.com> Acked-by: Al Viro <viro@ZenIV.linux.org.uk> Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
(cherry picked from commit 93362fa47fe98b62e4a34ab408c4a418432e7939)