www.infradead.org Git - users/jedix/linux-maple.git/log
Thomas Gleixner [Sun, 29 Apr 2018 13:26:40 +0000 (15:26 +0200)]
x86/speculation: Add prctl for Speculative Store Bypass mitigation

Add prctl based control for Speculative Store Bypass mitigation and make it
the default mitigation for Intel and AMD.

Andi Kleen provided the following rationale (slightly redacted):

 There are multiple levels of impact of Speculative Store Bypass:

 1) JITed sandbox.
    It cannot invoke system calls, but can do PRIME+PROBE and may have call
    interfaces to other code

 2) Native code process.
    No protection inside the process at this level.

 3) Kernel.

 4) Between processes.

 The prctl tries to protect against attacks mounted from case (1).

 If the untrusted code can do random system calls then control is already
 lost in a much worse way. So there needs to be system call protection in
 some way (a JIT that does not allow them, or seccomp). Or rather, if the
 process can somehow subvert its environment to do the prctl, it can already
 execute arbitrary code, which is much worse than SSB.

 To put it differently, the point of the prctl is to not allow JITed code
 to read data it shouldn't read from its JITed sandbox. If it already has
 escaped its sandbox then it can already read everything it wants in its
 address space, and do much worse.

 The ability to control Speculative Store Bypass allows the protection to be
 enabled selectively without affecting overall system performance.

Based on an initial patch from Tim Chen. Completely rewritten.

OraBug: 28041771
CVE: CVE-2018-3639

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
(cherry picked from commit a73ec77ee17ec556fe7f165d00314cb7c047b1ac)
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
 Conflicts:
Documentation/admin-guide/kernel-parameters.txt
arch/x86/kernel/cpu/bugs.c

Signed-off-by: Brian Maly <brian.maly@oracle.com>
Mihai Carabas [Fri, 18 May 2018 09:42:44 +0000 (12:42 +0300)]
x86: thread_info.h: move RDS from index 5 to 23

In UEK4, the thread flags field is split into two parts:
- the lower bits of the word, which are usually used for "pending work-to-be-done"
- the upper bits of the word

There is a comment at arch/x86/include/asm/thread_info.h:88 which says that
the lower bits are hard-coded in entry_64.S. In entry_64.S a mask of 0x0000ffff
is used to check the state of the thread and determine whether it should
return to userspace or not. Because we used bit "5", which is in the lower
bits part, one of the checked conditions was always true and the program
never returned from the kernel.

To solve the issue, we moved RDS to bit 23, which was free.
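
To illustrate the constraint, a minimal sketch with simplified names (the
real masks live in arch/x86/include/asm/thread_info.h and entry_64.S):

    /*
     * Sketch only: the low 16 bits are tested as one "pending work" mask
     * on the exit-to-userspace path, so a feature flag such as RDS must
     * live in the upper bits of the word.
     */
    #define TIF_RDS            23                 /* moved out of the 0..15 work range */
    #define _TIF_RDS           (1 << TIF_RDS)
    #define _TIF_WORK_MASK     0x0000ffff         /* mask hard-coded in entry_64.S */

    static inline int has_pending_work(unsigned long flags)
    {
            /* with RDS at bit 5 this would have been true for every task */
            return (flags & _TIF_WORK_MASK) != 0;
    }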

OraBug: 28041771
CVE: CVE-2018-3639

Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
Thomas Gleixner [Sun, 29 Apr 2018 13:21:42 +0000 (15:21 +0200)]
x86/process: Allow runtime control of Speculative Store Bypass

The Speculative Store Bypass vulnerability can be mitigated with the
Reduced Data Speculation (RDS) feature. To allow finer-grained control of
this potentially expensive mitigation, a per-task mitigation control is
required.

Add a new TIF_RDS flag and put it into the group of TIF flags which are
evaluated for mismatch in switch_to(). If these bits differ in the previous
and the next task, then the slow path function __switch_to_xtra() is
invoked. Implement the TIF_RDS dependent mitigation control in the slow
path.

If the prctl for controlling Speculative Store Bypass is disabled or no
task uses the prctl then there is no overhead in the switch_to() fast
path.
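
A minimal sketch of that slow path, assuming the SPEC_CTRL_RDS bit and the
x86_spec_ctrl_base variable introduced by the related patches (names are
simplified, not the exact backport code):

    /*
     * Sketch: the SPEC_CTRL MSR is only rewritten when the RDS TIF bit
     * differs between the outgoing and incoming task, so the common
     * switch_to() path stays free of MSR accesses.
     */
    static void spec_ctrl_update_on_switch(unsigned long tifp, unsigned long tifn)
    {
            u64 msr = x86_spec_ctrl_base;

            if (!((tifp ^ tifn) & _TIF_RDS))
                    return;                         /* no change, stay on the fast path */

            if (tifn & _TIF_RDS)
                    msr |= SPEC_CTRL_RDS;           /* bit 2: disable store bypass */
            wrmsrl(MSR_IA32_SPEC_CTRL, msr);
    }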

Update the KVM related speculation control functions to take TIF_RDS into
account as well.

Based on a patch from Tim Chen. Completely rewritten.

OraBug: 28041771
CVE: CVE-2018-3639

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
(cherry picked from commit 885f82bfbc6fefb6664ea27965c3ab9ac4194b8c)
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
 Conflicts:
arch/x86/include/asm/msr-index.h
arch/x86/include/asm/thread_info.h
arch/x86/kernel/cpu/bugs.c
arch/x86/kernel/process.c
[u64->u32]

Signed-off-by: Brian Maly <brian.maly@oracle.com>
Thomas Gleixner [Sun, 29 Apr 2018 13:20:11 +0000 (15:20 +0200)]
prctl: Add speculation control prctls

Add two new prctls to control aspects of speculation related vulnerabilities
and their mitigations, providing finer-grained control over performance
impacting mitigations.

PR_GET_SPECULATION_CTRL returns the state of the speculation misfeature
which is selected with arg2 of prctl(2). The return value uses bits 0-2 with
the following meaning:

Bit  Define           Description
0    PR_SPEC_PRCTL    Mitigation can be controlled per task by
                      PR_SET_SPECULATION_CTRL
1    PR_SPEC_ENABLE   The speculation feature is enabled, mitigation is
                      disabled
2    PR_SPEC_DISABLE  The speculation feature is disabled, mitigation is
                      enabled

If all bits are 0 the CPU is not affected by the speculation misfeature.

If PR_SPEC_PRCTL is set, then the per task control of the mitigation is
available. If not set, prctl(PR_SET_SPECULATION_CTRL) for the speculation
misfeature will fail.

PR_SET_SPECULATION_CTRL allows controlling the speculation misfeature, which
is selected by arg2 of prctl(2), per task. arg3 is used to hand in the
control value, i.e. either PR_SPEC_ENABLE or PR_SPEC_DISABLE.

The common return values are:

EINVAL  prctl is not implemented by the architecture or the unused prctl()
        arguments are not 0
ENODEV  arg2 is selecting a not supported speculation misfeature

PR_SET_SPECULATION_CTRL has these additional return values:

ERANGE  arg3 is incorrect, i.e. it's not either PR_SPEC_ENABLE or PR_SPEC_DISABLE
ENXIO   prctl control of the selected speculation misfeature is disabled

The first supported controllable speculation misfeature is
PR_SPEC_STORE_BYPASS. Add the define so this can be shared between
architectures.
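
For illustration, a user-space consumer of the interface might look like the
sketch below; the PR_* numeric values are the ones from the upstream uapi
header and are only provided here as a fallback for older headers:

    #include <stdio.h>
    #include <sys/prctl.h>

    #ifndef PR_SET_SPECULATION_CTRL                 /* upstream uapi values */
    #define PR_GET_SPECULATION_CTRL  52
    #define PR_SET_SPECULATION_CTRL  53
    #define PR_SPEC_STORE_BYPASS     0
    #define PR_SPEC_PRCTL            (1UL << 0)
    #define PR_SPEC_ENABLE           (1UL << 1)
    #define PR_SPEC_DISABLE          (1UL << 2)
    #endif

    int main(void)
    {
            int state = prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS, 0, 0, 0);

            if (state < 0)
                    return 1;                       /* EINVAL or ENODEV, see above */
            if (state == 0) {
                    printf("CPU not affected\n");
            } else if (state & PR_SPEC_PRCTL) {
                    /* per-task control available: enable the mitigation for us */
                    prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_STORE_BYPASS,
                          PR_SPEC_DISABLE, 0, 0);
            }
            return 0;
    }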

Based on an initial patch from Tim Chen and mostly rewritten.

OraBug: 28041771
CVE: CVE-2018-3639

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
(cherry picked from commit b617cfc858161140d69cc0b5cc211996b557a1c7)
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
 Conflicts:
Documentation/userspace-api/index.rst
include/linux/nospec.h
include/uapi/linux/prctl.h
kernel/sys.c

Signed-off-by: Brian Maly <brian.maly@oracle.com>
Thomas Gleixner [Sun, 29 Apr 2018 13:01:37 +0000 (15:01 +0200)]
x86/speculation: Create spec-ctrl.h to avoid include hell

Having everything in nospec-branch.h creates a hell of dependencies when
adding the prctl based switching mechanism. Move everything which is not
required in nospec-branch.h to spec-ctrl.h and fix up the includes in the
relevant files.

OraBug: 28041771
CVE: CVE-2018-3639

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 28a2775217b17208811fa43a9e96bd1fdf417b86)
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
 Conflicts:
arch/x86/kernel/cpu/bugs.c
arch/x86/kvm/svm.c
arch/x86/kvm/vmx.c

Signed-off-by: Brian Maly <brian.maly@oracle.com>
Konrad Rzeszutek Wilk [Thu, 26 Apr 2018 02:04:25 +0000 (22:04 -0400)]
x86/KVM/VMX: Expose SPEC_CTRL Bit(2) to the guest

Expose the CPUID.7.EDX[31] bit to the guest, and also guard against various
combinations of SPEC_CTRL MSR values.

The handling of the MSR (to take into account the host value of SPEC_CTRL
Bit(2)) is taken care of in patch:

  KVM/SVM/VMX/x86/spectre_v2: Support the combination of guest and host IBRS

OraBug: 28041771
CVE: CVE-2018-3639

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit da39556f66f5cfe8f9c989206974f1cb16ca5d7c)
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
 Conflicts:
arch/x86/kvm/cpuid.c
arch/x86/kvm/vmx.c

Signed-off-by: Brian Maly <brian.maly@oracle.com>
Konrad Rzeszutek Wilk [Thu, 26 Apr 2018 02:04:24 +0000 (22:04 -0400)]
x86/bugs/AMD: Add support to disable RDS on Fam[15,16,17]h if requested

AMD does not need the Speculative Store Bypass mitigation to be enabled.

The parameters for this are already available; the mitigation is enabled via
MSR C001_1020. Each family uses a different bit in that MSR for this.

[ tglx: Expose the bit mask via a variable and move the actual MSR fiddling
   into the bugs code as that's the right thing to do and also required
to prepare for dynamic enable/disable ]
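
In code, the idea amounts to roughly the following sketch; the register and
bit positions follow the upstream patch, while the helper and variable names
here are illustrative:

    /*
     * Sketch: MSR C001_1020 (LS_CFG) carries the disable bit, but its
     * position depends on the CPU family, so it is exposed as a mask
     * that the bugs code applies when the mitigation is requested.
     */
    #define MSR_AMD64_LS_CFG        0xc0011020

    static u64 x86_amd_rds_mask;                    /* set during CPU identification */

    static void amd_init_rds_mask(unsigned int family)
    {
            switch (family) {
            case 0x15: x86_amd_rds_mask = 1ULL << 54; break;
            case 0x16: x86_amd_rds_mask = 1ULL << 33; break;
            case 0x17: x86_amd_rds_mask = 1ULL << 10; break;
            }
    }

    static void amd_enable_rds(void)
    {
            u64 val;

            if (!x86_amd_rds_mask)
                    return;
            rdmsrl(MSR_AMD64_LS_CFG, val);
            wrmsrl(MSR_AMD64_LS_CFG, val | x86_amd_rds_mask);
    }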

OraBug: 28041771
CVE: CVE-2018-3639

Suggested-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 764f3c21588a059cd783c6ba0734d4db2d72822d)
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
 Conflicts:
arch/x86/include/asm/cpufeatures.h
arch/x86/kernel/cpu/amd.c
arch/x86/kernel/cpu/bugs.c
arch/x86/kernel/cpu/common.c

Signed-off-by: Brian Maly <brian.maly@oracle.com>
Konrad Rzeszutek Wilk [Thu, 26 Apr 2018 02:04:23 +0000 (22:04 -0400)]
x86/bugs: Whitelist allowed SPEC_CTRL MSR values

Intel and AMD SPEC_CTRL (0x48) MSR semantics may differ in the
future (or in fact use different MSRs for the same functionality).

As such a run-time mechanism is required to whitelist the appropriate MSR
values.

[ tglx: Made the variable __ro_after_init ]

OraBug: 28041771
CVE: CVE-2018-3639

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 1115a859f33276fe8afb31c60cf9d8e657872558)
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
 Conflicts:
arch/x86/kernel/cpu/bugs.c
[It is called bugs_64.c]

Signed-off-by: Brian Maly <brian.maly@oracle.com>
Konrad Rzeszutek Wilk [Thu, 26 Apr 2018 02:04:22 +0000 (22:04 -0400)]
x86/bugs/intel: Set proper CPU features and setup RDS

Intel CPUs expose methods to:

 - detect whether the RDS capability is available via CPUID.7.0.EDX[31],

 - enable RDS by setting bit 2 of the SPEC_CTRL MSR (0x48),

 - indicate via MSR_IA32_ARCH_CAPABILITIES Bit(4) that there is no need to
   enable RDS.

With that in mind, if spec_store_bypass_disable=[auto,on] is selected, set
the SPEC_CTRL MSR at boot time to enable RDS if the platform requires it.
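
Taken together, the boot-time sequence amounts to something like the sketch
below; the constant names follow the related patches, and the ARCH_CAPABILITIES
check uses the Bit(4) "no RDS needed" indication listed above:

    /*
     * Sketch: detect RDS via CPUID, skip the MSR write when
     * ARCH_CAPABILITIES says store bypass cannot happen, otherwise set
     * bit 2 of SPEC_CTRL (0x48) and remember it in the base value.
     */
    static void __init ssb_select_mitigation_sketch(void)
    {
            u64 arch_cap = 0;

            if (!boot_cpu_has(X86_FEATURE_RDS))
                    return;

            if (boot_cpu_has(X86_FEATURE_ARCH_CAPABILITIES))
                    rdmsrl(MSR_IA32_ARCH_CAPABILITIES, arch_cap);
            if (arch_cap & (1ULL << 4))             /* platform not affected */
                    return;

            x86_spec_ctrl_base |= SPEC_CTRL_RDS;    /* bit 2 of MSR 0x48 */
            wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
    }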

Note that this does not fix the KVM case where the SPEC_CTRL MSR is exposed
to guests which can muck with it, see the patch titled:
 KVM/SVM/VMX/x86/spectre_v2: Support the combination of guest and host IBRS.

And for the firmware (IBRS to be set), see patch titled:
 x86/spectre_v2: Read SPEC_CTRL MSR during boot and re-use reserved bits

[ tglx: Disentangled it from the intel implementation and kept the call order ]

OraBug: 28041771
CVE: CVE-2018-3639

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 772439717dbf703b39990be58d8d4e3e4ad0598a)
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
 Conflicts:
arch/x86/include/asm/msr-index.h

[Different file names]
arch/x86/kernel/cpu/bugs.c
arch/x86/kernel/cpu/common.c
arch/x86/kernel/cpu/cpu.h
arch/x86/kernel/cpu/intel.c

Signed-off-by: Brian Maly <brian.maly@oracle.com>
Konrad Rzeszutek Wilk [Thu, 26 Apr 2018 02:04:21 +0000 (22:04 -0400)]
x86/bugs: Provide boot parameters for the spec_store_bypass_disable mitigation

Contemporary high performance processors use a common industry-wide
optimization known as "Speculative Store Bypass" in which loads from
addresses to which a recent store has occurred may (speculatively) see an
older value. Intel refers to this feature as "Memory Disambiguation" which
is part of their "Smart Memory Access" capability.

Memory Disambiguation can expose a cache side-channel attack against such
speculatively read values. An attacker can create exploit code that allows
them to read memory outside of a sandbox environment (for example,
malicious JavaScript in a web page), or to perform more complex attacks
against code running within the same privilege level, e.g. via the stack.

As a first step to mitigate against such attacks, provide two boot command
line control knobs:

 nospec_store_bypass_disable
 spec_store_bypass_disable=[off,auto,on]

By default affected x86 processors will power on with Speculative
Store Bypass enabled. Hence the provided kernel parameters are written
from the point of view of whether to enable a mitigation or not.
The parameters are as follows:

 - auto - Kernel detects whether your CPU model contains an implementation
  of Speculative Store Bypass and picks the most appropriate
  mitigation.

 - on   - disable Speculative Store Bypass
 - off  - enable Speculative Store Bypass

[ tglx: Reordered the checks so that the whole evaluation is not done
   when the CPU does not support RDS ]
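
A condensed sketch of how such a knob is typically wired up with
early_param() (illustrative only; the real parser also folds in the RDS
capability checks mentioned in the tglx note above):

    /* Sketch: map the command line value to a mitigation mode. */
    enum ssb_mitigation_cmd { SSB_CMD_NONE, SSB_CMD_AUTO, SSB_CMD_ON };

    static enum ssb_mitigation_cmd ssb_cmd __initdata = SSB_CMD_AUTO;

    static int __init ssb_parse_cmdline(char *str)
    {
            if (!str)
                    return -EINVAL;

            if (!strcmp(str, "off"))
                    ssb_cmd = SSB_CMD_NONE;         /* leave store bypass enabled */
            else if (!strcmp(str, "on"))
                    ssb_cmd = SSB_CMD_ON;           /* always disable store bypass */
            else if (!strcmp(str, "auto"))
                    ssb_cmd = SSB_CMD_AUTO;         /* pick per CPU model */
            return 0;
    }
    early_param("spec_store_bypass_disable", ssb_parse_cmdline);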

OraBug: 28041771
CVE: CVE-2018-3639

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 24f7fc83b9204d20f878c57cb77d261ae825e033)
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
 Conflicts:
Documentation/admin-guide/kernel-parameters.txt

[It is called Documentation/kernel-parameters.txt]

arch/x86/include/asm/cpufeatures.h

[It is called cpufeature.h]

arch/x86/kernel/cpu/bugs.c

[And it is bugs_64.c]

Signed-off-by: Brian Maly <brian.maly@oracle.com>
Konrad Rzeszutek Wilk [Sat, 28 Apr 2018 20:34:17 +0000 (22:34 +0200)]
x86/cpufeatures: Add X86_FEATURE_RDS

Add the CPU feature bit CPUID.7.0.EDX[31] which indicates whether the CPU
supports Reduced Data Speculation.

[ tglx: Split it out from a later patch ]

OraBug: 28041771
CVE: CVE-2018-3639

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 0cc5fa00b0a88dad140b4e5c2cead9951ad36822)
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
 Conflicts:
arch/x86/include/asm/cpufeatures.h
[It is called cpufeature.h]
[We also need to use the scattered function to set the flag, similar
 to how the other CPUID.7.0.EDX bits are handled]

Signed-off-by: Brian Maly <brian.maly@oracle.com>
Konrad Rzeszutek Wilk [Thu, 26 Apr 2018 02:04:20 +0000 (22:04 -0400)]
x86/bugs: Expose /sys/../spec_store_bypass

Add the sysfs file for the new vulnerability. It does not do much except
show the word 'Vulnerable' for recent x86 cores.

Intel cores prior to family 6 are known not to be vulnerable, and so are
some Atoms and some Xeon Phi.

It assumes that older Cyrix, Centaur, etc. cores are immune.

OraBug: 28041771
CVE: CVE-2018-3639

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit c456442cd3a59eeb1d60293c26cbe2ff2c4e42cf)
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
 Conflicts:
arch/x86/include/asm/cpufeatures.h
[It is a different file - cpufeature.h]

arch/x86/kernel/cpu/bugs.c
[As well, called bugs_64.c]

arch/x86/kernel/cpu/common.c
[Location of cpu_set_bug_bits is different and also had to drop the __initconst]

Signed-off-by: Brian Maly <brian.maly@oracle.com>
Piotr Luc [Wed, 12 Oct 2016 18:05:20 +0000 (20:05 +0200)]
x86/cpu/intel: Add Knights Mill to Intel family

Add CPUID of Knights Mill (KNM) processor to Intel family list.

OraBug: 28041771
CVE: CVE-2018-3639

Signed-off-by: Piotr Luc <piotr.luc@intel.com>
Reviewed-by: Dave Hansen <dave.hansen@intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Josh Poimboeuf <jpoimboe@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20161012180520.30976-1-piotr.luc@intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 0047f59834e5947d45f34f5f12eb330d158f700b)
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
Andy Shevchenko [Tue, 6 Sep 2016 18:42:54 +0000 (21:42 +0300)]
x86/cpu: Rename Merrifield2 to Moorefield

Merrifield2 is actually Moorefield.

Rename it accordingly and drop the tail digit from Merrifield1.

OraBug: 28041771
CVE: CVE-2018-3639

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: http://lkml.kernel.org/r/20160906184254.94440-1-andriy.shevchenko@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit f5fbf848303c8704d0e1a1e7cabd08fd0a49552f)
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
 Conflicts:
arch/x86/platform/atom/punit_atom_debug.c
[Does not exist]
drivers/pci/pci-mid.c
drivers/powercap/intel_rapl.c
[We never added the support for it]

Signed-off-by: Brian Maly <brian.maly@oracle.com>
Konrad Rzeszutek Wilk [Thu, 26 Apr 2018 02:04:19 +0000 (22:04 -0400)]
x86/bugs, KVM: Support the combination of guest and host IBRS

A guest may modify the SPEC_CTRL MSR from the value used by the
kernel. Since the kernel doesn't use IBRS, this means a value of zero is
what is needed in the host.

But the 336996-Speculative-Execution-Side-Channel-Mitigations.pdf refers to
the other bits as reserved so the kernel should respect the boot time
SPEC_CTRL value and use that.

This makes it possible to deal with future extensions to the SPEC_CTRL
interface, if any at all.

Note: This uses wrmsrl() instead of native_wrmsrl(). It does not make any
difference as paravirt will overwrite the callq *0xfff.. with the wrmsrl
assembler code.
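
A minimal sketch of the intended behaviour (simplified; the backported
helpers additionally honour the ibrs_inuse check mentioned in the conflict
note below):

    /*
     * Sketch: switch between the host's boot-time SPEC_CTRL value and
     * the guest's value around VM entry/exit, skipping the MSR write
     * whenever the two already match.
     */
    static void x86_spec_ctrl_set_guest_sketch(u64 guest_spec_ctrl)
    {
            if (guest_spec_ctrl != x86_spec_ctrl_base)
                    wrmsrl(MSR_IA32_SPEC_CTRL, guest_spec_ctrl);
    }

    static void x86_spec_ctrl_restore_host_sketch(u64 guest_spec_ctrl)
    {
            if (guest_spec_ctrl != x86_spec_ctrl_base)
                    wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
    }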

OraBug: 28041771
CVE: CVE-2018-3639

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 5cf687548705412da47c9cec342fd952d71ed3d5)
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
 Conflicts:
arch/x86/kvm/svm.c
arch/x86/kvm/vmx.c
[We need to preserve the check for ibrs_inuse - which we can do now in the
     functions]

Signed-off-by: Brian Maly <brian.maly@oracle.com>
Konrad Rzeszutek Wilk [Sat, 12 May 2018 00:16:26 +0000 (20:16 -0400)]
x86/bugs/IBRS: Warn if IBRS is enabled during boot.

It should never be. But in case it is, let's warn about it and clear it.

OraBug: 28041771
CVE: CVE-2018-3639

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
Konrad Rzeszutek Wilk [Sun, 13 May 2018 23:06:42 +0000 (23:06 +0000)]
x86/bugs/IBRS: Use variable instead of defines for enabling IBRS

This follows what "x86/bugs: Read SPEC_CTRL MSR during boot and re-use
reserved bits" patch does - that is respect the other bits of the
SPEC CTRL MSR (if any at all).

This necessitates converting all the assembler macros over, along with all
the various uses of the SPEC_CTRL MSR guarded by 'use_ibrs'.

Note the not so obvious change in the assembler macro from 'cmp' to
'test' to verify that the right bit is set.

And to make sure it works with the IBRS support we need to
recognize it in x86_spec_ctrl_set.

This is not upstreamed. It builds on top of the IBRS backport.

OraBug: 28041771
CVE: CVE-2018-3639

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
Conflicts:
arch/x86/kernel/cpu/bugs_64.c

Signed-off-by: Brian Maly <brian.maly@oracle.com>
Konrad Rzeszutek Wilk [Sat, 12 May 2018 01:11:04 +0000 (21:11 -0400)]
x86/bugs: Read SPEC_CTRL MSR during boot and re-use reserved bits

The 336996-Speculative-Execution-Side-Channel-Mitigations.pdf refers to all
the other bits as reserved. The Intel SDM glossary defines reserved as
implementation specific - aka unknown.

As such, at bootup this must be taken into account and proper masking
applied for the bits in use.

A copy of this document is available at
https://bugzilla.kernel.org/show_bug.cgi?id=199511
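
In code form, the rule amounts to roughly the sketch below; at this point
IBRS is the only bit the kernel knowingly manages, and later patches add RDS
to the mask:

    /*
     * Sketch: read the firmware-provided value once, keep the reserved
     * bits in a base variable, and only ever OR in bits we know about.
     */
    static u64 x86_spec_ctrl_base;          /* reserved bits preserved from boot */

    static void __init spec_ctrl_save_boot_value(void)
    {
            if (boot_cpu_has(X86_FEATURE_IBRS))
                    rdmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base);
    }

    static void x86_spec_ctrl_set_sketch(u64 val)
    {
            if (WARN_ON_ONCE(val & ~SPEC_CTRL_IBRS))   /* only known bits allowed */
                    return;
            wrmsrl(MSR_IA32_SPEC_CTRL, x86_spec_ctrl_base | val);
    }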

[ tglx: Made x86_spec_ctrl_base __ro_after_init ]

OraBug: 28041771
CVE: CVE-2018-3639

Suggested-by: Jon Masters <jcm@redhat.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 1b86883ccb8d5d9506529d42dbe1a5257cb30b18)
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
 Conflicts:
arch/x86/include/asm/nospec-branch.h
[As we don't have the firmware_restrict_branch_speculation_start and
 firmware_restrict_branch_speculation_end and end up with a different
 name. See commit 473ad76ea8d76f34555d764a3d5820bc1b33cabf
 "x86/speculation: Use IBRS if available before calling into firmware"]

arch/x86/kernel/cpu/bugs.c
[File is called bugs_64.c in UEK4]

[Also the backport needs nospec-branch.h in different files, and we can't
 use __ro_after_init]

Signed-off-by: Brian Maly <brian.maly@oracle.com>
Konrad Rzeszutek Wilk [Sat, 12 May 2018 01:56:31 +0000 (21:56 -0400)]
x86/bugs: Concentrate bug reporting into a separate function

Those sysfs functions have a similar preamble; as such, add common
code to handle them.

OraBug: 28041771
CVE: CVE-2018-3639

Suggested-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit d1059518b4789cabe34bb4b714d07e6089c82ca1)
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
 Conflicts:
arch/x86/kernel/cpu/bugs.c

[As it does not exist in UEK4. It is called bugs_64.c]

Signed-off-by: Brian Maly <brian.maly@oracle.com>
Konrad Rzeszutek Wilk [Thu, 26 Apr 2018 02:04:16 +0000 (22:04 -0400)]
x86/bugs: Concentrate bug detection into a separate function

Combine the various pieces of logic that go through all those
x86_cpu_id matching structures into one function.

OraBug: 28041771
CVE: CVE-2018-3639

Suggested-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Reviewed-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 4a28bfe3267b68e22c663ac26185aa16c9b879ef)
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
 Conflicts:
arch/x86/kernel/cpu/common.c

Signed-off-by: Brian Maly <brian.maly@oracle.com>
Konrad Rzeszutek Wilk [Sat, 12 May 2018 00:12:34 +0000 (20:12 -0400)]
x86/bugs/IBRS: Turn on IBRS in spectre_v2_select_mitigation

instead of during early bootup. This makes the bootup much faster, as we
may get an NMI (watchdog) while booting before we make it to
spectre_v2_select_mitigation, which would otherwise mean we would be
running with IBRS enabled.

OraBug: 28041771
CVE: CVE-2018-3639

Fixes: XYZ ("x86/bugs/IBRS: Use variable instead of defines for enabling IBRS")
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
Konrad Rzeszutek Wilk [Fri, 4 May 2018 00:45:44 +0000 (00:45 +0000)]
x86/msr: Add SPEC_CTRL_IBRS..

Add it instead of using the defines we have, to ease backporting.

OraBug: 28041771
CVE: CVE-2018-3639

Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Mihai Carabas <mihai.carabas@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
Hannes Reinecke [Fri, 30 Sep 2016 09:01:14 +0000 (11:01 +0200)]
scsi: libfc: Revisit kref handling

The kref handling in fc_rport is a mess. This patch updates
the kref handling according to the following rules:

- Take a reference whenever scheduling a workqueue
- Take a reference whenever an ELS command is sent
- Drop the reference at the end of the workqueue function
- Drop the reference at the end of handling ELS replies
- Take a reference when allocating an rport
- Drop the reference when removing an rport

Signed-off-by: Hannes Reinecke <hare@suse.com>
Acked-by: Johannes Thumshirn <jth@kernel.org>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
OraBug: 27363267
(cherry picked from commit 4d2095cc42a2d8062590891f929d9d694cbd927f)
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Fred Herard <fred.herard@oracle.com>
Signed-off-by: Rajan Shanmugavelu <rajan.shanmugavelu@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
Hannes Reinecke [Fri, 5 Aug 2016 12:55:02 +0000 (14:55 +0200)]
scsi: libfc: reset exchange manager during LOGO handling

FC-LS mandates that we should invalidate all sequences before sending a
LOGO. And we should set the event to RPORT_EV_STOP when a LOGO request
has been received to signal that all exchanges are terminated.

Signed-off-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Chad Dupuis <chad.dupuis@qlogic.com>
Tested-by: Chad Dupuis <chad.dupuis@qlogic.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
OraBug: 27363267
(cherry picked from commit 649eb8693857e9b9fca009fba4eb7e80f9f3a326)
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Fred Herard <fred.herard@oracle.com>
Signed-off-by: Rajan Shanmugavelu <rajan.shanmugavelu@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
Hannes Reinecke [Fri, 5 Aug 2016 12:55:01 +0000 (14:55 +0200)]
scsi: libfc: send LOGO for PLOGI failure

When running in point-to-multipoint mode PLOGI is done after FLOGI
completed. So when the PLOGI fails we should be sending a LOGO to the
remote port.

[mkp: Applied by hand]

Signed-off-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Chad Dupuis <chad.dupuis@qlogic.com>
Tested-by: Chad Dupuis <chad.dupuis@qlogic.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
OraBug: 27363267
(cherry picked from commit d391966a03846176a78ef8d53898de8b4302a2be)
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Fred Herard <fred.herard@oracle.com>
Signed-off-by: Rajan Shanmugavelu <rajan.shanmugavelu@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
Hannes Reinecke [Fri, 5 Aug 2016 12:55:00 +0000 (14:55 +0200)]
scsi: libfc: Issue PRLI after a PRLO has been received

When receiving a PRLO it just means that the operating parameters have
changed; it does _not_ mean that the port doesn't want to communicate
with us.  So instead of implicitly logging out we should be issuing a
PRLI to figure out the new operating parameters.  We can always recover
once PRLI fails.

Signed-off-by: Hannes Reinecke <hare@suse.com>
Reviewed-by: Chad Dupuis <chad.dupuis@qlogic.com>
Tested-by: Chad Dupuis <chad.dupuis@qlogic.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
OraBug: 27363267
(cherry picked from commit 166f310b629c046b7f5ca846adf978cda47b06c2)
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Fred Herard <fred.herard@oracle.com>
Signed-off-by: Rajan Shanmugavelu <rajan.shanmugavelu@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
Hannes Reinecke [Tue, 24 May 2016 06:11:58 +0000 (08:11 +0200)]
libfc: Update rport reference counting

Originally libfc would just initialize the refcount to '1' and use the
disc_mutex to synchronize if and when the final put should happen.  This
has a race condition as the mutex might be delayed, causing other threads
to access an invalid structure.  This patch updates the rport reference
counting to increase the reference every time 'rport_lookup' is called,
and decreases the reference correspondingly.  This removes the need to
hold 'disc_mutex' when removing the structure, and avoids the above race
condition.

Signed-off-by: Hannes Reinecke <hare@suse.com>
Acked-by: Vasu Dev <vasu.dev@intel.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
OraBug: 27363267
(cherry picked from commit baa6719f902af9c03e528b08dfb847de295b5137)
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Fred Herard <fred.herard@oracle.com>
Signed-off-by: Rajan Shanmugavelu <rajan.shanmugavelu@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
Elena Ufimtseva [Fri, 27 Apr 2018 23:53:51 +0000 (19:53 -0400)]
amd/kvm: do not intercept new MSRs for spectre v2 mitigation

Do not intercept MSR_IA32_SPEC_CTRL and MSR_IA32_PRED_CMD on AMD
for the Spectre v2 mitigation.
As IBRS is not used on AMD, an attempt to intercept MSR_IA32_SPEC_CTRL
would have the guest crash with an injected GP fault.
Also change the comment about the 'always' field in the
svm_direct_access_msrs structure for clarity.

OraBug: 27370258

Signed-off-by: Elena Ufimtseva <elena.ufimtseva@oracle.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Reviewed-by: Håkon Bugge <haakon.bugge@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
Mohamed Ghannam [Wed, 3 Jan 2018 21:06:06 +0000 (21:06 +0000)]
RDS: null pointer dereference in rds_atomic_free_op

Orabug: 27422832
CVE: CVE-2018-5333

Set rm->atomic.op_active to 0 when rds_pin_pages() fails or the
user-supplied address is invalid. This prevents a NULL pointer
dereference in rds_atomic_free_op().
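
An abbreviated sketch of the error path after the change (not the literal
diff):

    err:
            if (page)
                    put_page(page);
            rm->atomic.op_active = 0;       /* nothing for rds_atomic_free_op() to undo */
            kfree(rm->atomic.op_notifier);
            return ret;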

Signed-off-by: Mohamed Ghannam <simo.ghannam@gmail.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit 7d11f77f84b27cef452cee332f4e469503084737)

Reviewed-by: Shan Hai <shan.hai@oracle.com>
Signed-off-by: Allen Pais <allen.pais@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
Greg Kroah-Hartman [Fri, 19 Jan 2018 09:06:03 +0000 (10:06 +0100)]
ACPI: sbshc: remove raw pointer from printk() message

Orabug: 27501257
CVE: CVE-2018-5750

There's no need to be printing a raw kernel pointer to the kernel log at
every boot.  So just remove it, and change the whole message to use the
correct dev_info() call at the same time.

Reported-by: Wang Qize <wang_qize@venustech.com.cn>
Cc: All applicable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
(cherry picked from commit 43cdd1b716b26f6af16da4e145b6578f98798bf6)

Reviewed-by: Shan Hai <shan.hai@oracle.com>
Signed-off-by: Allen Pais <allen.pais@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
Li Jinyue [Thu, 14 Dec 2017 09:04:54 +0000 (17:04 +0800)]
futex: Prevent overflow by strengthening input validation

Orabug: 27539548
CVE: CVE-2018-6927

UBSAN reports signed integer overflow in kernel/futex.c:

 UBSAN: Undefined behaviour in kernel/futex.c:2041:18
 signed integer overflow:
 0 - -2147483648 cannot be represented in type 'int'

Add a sanity check to catch negative values of nr_wake and nr_requeue.
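
The core of the change is a small guard at the top of the requeue path,
sketched here:

            /* reject negative counts before they can be negated or summed */
            if (nr_wake < 0 || nr_requeue < 0)
                    return -EINVAL;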

Signed-off-by: Li Jinyue <lijinyue@huawei.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: peterz@infradead.org
Cc: dvhart@infradead.org
Cc: stable@vger.kernel.org
Link: https://lkml.kernel.org/r/1513242294-31786-1-git-send-email-lijinyue@huawei.com
(cherry picked from commit fbe0e839d1e22d88810f3ee3e2f1479be4c0aa4a)
Signed-off-by: Brian Maly <brian.maly@oracle.com>
Conflicts:
        kernel/futex.c

Reviewed-by: Shan Hai <shan.hai@oracle.com>
Signed-off-by: Allen Pais <allen.pais@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
Venkat Venkatsubra [Wed, 16 May 2018 21:08:50 +0000 (14:08 -0700)]
net: ipv4: add support for ECMP hash policy choice

This patch adds support for ECMP hash policy choice via a new sysctl
called fib_multipath_hash_policy and also adds support for L4 hashes.
The current values for fib_multipath_hash_policy are:
 0 - layer 3 (default)
 1 - layer 4
If there's an skb hash already set and it matches the chosen policy, then it
will be used instead of being calculated (currently only for L4).
In L3 mode we always calculate the hash due to the ICMP error special
case; the flow dissector's field consistentification should handle the
address order, thus we can remove the address reversals.
If the skb is provided we always use it for the hash calculation,
otherwise we fall back to fl4; that is, if skb is NULL, fl4 has to be set.

Orabug: 27547114

Signed-off-by: Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit bf4e0a3db97eb882368fd82980b3b1fa0b5b9778)

Reviewed-by: Zhu Yanjun <yanjun.zhu@oracle.com>
Signed-off-by: Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
Conflicts:
include/net/ip_fib.h
include/net/netns/ipv4.h
include/net/route.h
net/ipv4/fib_semantics.c
net/ipv4/icmp.c
net/ipv4/route.c
net/ipv4/sysctl_net_ipv4.c

Signed-off-by: Brian Maly <brian.maly@oracle.com>
David Ahern [Thu, 7 Apr 2016 14:21:00 +0000 (07:21 -0700)]
net: ipv4: Consider failed nexthops in multipath routes

Multipath route lookups should consider knowledge about next hops and not
select a hop that is known to be failed.

Example:

                     [h2]                   [h3]   15.0.0.5
                      |                      |
                     3|                     3|
                    [SP1]                  [SP2]--+
                     1  2                   1     2
                     |  |     /-------------+     |
                     |   \   /                    |
                     |     X                      |
                     |    / \                     |
                     |   /   \---------------\    |
                     1  2                     1   2
         12.0.0.2  [TOR1] 3-----------------3 [TOR2] 12.0.0.3
                     4                         4
                      \                       /
                        \                    /
                         \                  /
                          -------|   |-----/
                                 1   2
                                [TOR3]
                                  3|
                                   |
                                  [h1]  12.0.0.1

host h1 with IP 12.0.0.1 has 2 paths to host h3 at 15.0.0.5:

    root@h1:~# ip ro ls
    ...
    12.0.0.0/24 dev swp1  proto kernel  scope link  src 12.0.0.1
    15.0.0.0/16
            nexthop via 12.0.0.2  dev swp1 weight 1
            nexthop via 12.0.0.3  dev swp1 weight 1
    ...

If the link between tor3 and tor1 is down and the link between tor1
and tor2 is also down, then tor1 is effectively cut off from h1. Yet the
route lookups in h1 alternate between the 2 routes: ping 15.0.0.5 gets one
and ssh 15.0.0.5 gets the other. Connections that attempt to use the
12.0.0.2 nexthop fail since that neighbor is not reachable:

    root@h1:~# ip neigh show
    ...
    12.0.0.3 dev swp1 lladdr 00:02:00:00:00:1b REACHABLE
    12.0.0.2 dev swp1  FAILED
    ...

The failed path can be avoided by considering known neighbor information
when selecting next hops. If the neighbor lookup fails we have no
knowledge about the nexthop, so give it a shot. If there is an entry
then only select the nexthop if the state is sane. This is similar to
what fib_detect_death does.

To maintain backward compatibility use of the neighbor information is
based on a new sysctl, fib_multipath_use_neigh.
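
The selection logic can be sketched as follows (illustrative; the helper
name and the locking around the neighbour state read are simplified):

    /*
     * Sketch: consult the neighbour table for the candidate nexthop.
     * No entry means no knowledge, so the hop is still eligible;
     * an existing entry is only trusted when its state is valid.
     */
    static bool nexthop_usable(const struct fib_nh *nh)
    {
            struct neighbour *n;
            bool usable = true;

            n = neigh_lookup(&arp_tbl, &nh->nh_gw, nh->nh_dev);
            if (n) {
                    usable = !!(n->nud_state & NUD_VALID);
                    neigh_release(n);
            }
            return usable;
    }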

Orabug: 27547114

Signed-off-by: David Ahern <dsa@cumulusnetworks.com>
Reviewed-by: Julian Anastasov <ja@ssi.bg>
Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit a6db4494d218c2e559173661ee972e048dc04fdd)

Reviewed-by: Zhu Yanjun <yanjun.zhu@oracle.com>
Signed-off-by: Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
Conflicts:
net/ipv4/fib_semantics.c
net/ipv4/sysctl_net_ipv4.c

Signed-off-by: Brian Maly <brian.maly@oracle.com>
Peter Nørlund [Wed, 30 Sep 2015 08:12:21 +0000 (10:12 +0200)]
ipv4: L3 hash-based multipath

Replaces the per-packet multipath with a hash-based multipath using
source and destination address.

Orabug: 27547114

Signed-off-by: Peter Nørlund <pch@ordbogen.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit 0e884c78ee19e902f300ed147083c28a0c6302f0)

Reviewed-by: Zhu Yanjun <yanjun.zhu@oracle.com>
Signed-off-by: Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
Conflicts:
include/net/ip_fib.h
net/ipv4/fib_semantics.c

Signed-off-by: Brian Maly <brian.maly@oracle.com>
Hou Tao [Wed, 1 Nov 2017 07:42:36 +0000 (15:42 +0800)]
dm: fix race between dm_get_from_kobject() and __dm_destroy()

Orabug: 27677556
CVE: CVE-2017-18203

The following BUG_ON was hit when testing repeat creation and removal of
DM devices:

    kernel BUG at drivers/md/dm.c:2919!
    CPU: 7 PID: 750 Comm: systemd-udevd Not tainted 4.1.44
    Call Trace:
     [<ffffffff81649e8b>] dm_get_from_kobject+0x34/0x3a
     [<ffffffff81650ef1>] dm_attr_show+0x2b/0x5e
     [<ffffffff817b46d1>] ? mutex_lock+0x26/0x44
     [<ffffffff811df7f5>] sysfs_kf_seq_show+0x83/0xcf
     [<ffffffff811de257>] kernfs_seq_show+0x23/0x25
     [<ffffffff81199118>] seq_read+0x16f/0x325
     [<ffffffff811de994>] kernfs_fop_read+0x3a/0x13f
     [<ffffffff8117b625>] __vfs_read+0x26/0x9d
     [<ffffffff8130eb59>] ? security_file_permission+0x3c/0x44
     [<ffffffff8117bdb8>] ? rw_verify_area+0x83/0xd9
     [<ffffffff8117be9d>] vfs_read+0x8f/0xcf
     [<ffffffff81193e34>] ? __fdget_pos+0x12/0x41
     [<ffffffff8117c686>] SyS_read+0x4b/0x76
     [<ffffffff817b606e>] system_call_fastpath+0x12/0x71

The bug can be easily triggered, if an extra delay (e.g. 10ms) is added
between the test of DMF_FREEING & DMF_DELETING and dm_get() in
dm_get_from_kobject().

To fix it, we need to ensure the test of DMF_FREEING & DMF_DELETING and
dm_get() are done in an atomic way, so _minor_lock is used.

The other callers of dm_get() have also been checked to be OK: some
callers invoke dm_get() under _minor_lock, some callers invoke it under
_hash_lock, and dm_start_request() invoke it after increasing
md->open_count.
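
The resulting function is small enough to sketch in full, following the
upstream change:

    struct mapped_device *dm_get_from_kobject(struct kobject *kobj)
    {
            struct mapped_device *md;

            md = container_of(kobj, struct mapped_device, kobj_holder.kobj);

            spin_lock(&_minor_lock);                /* same lock __dm_destroy() takes */
            if (test_bit(DMF_FREEING, &md->flags) || dm_deleting_md(md)) {
                    md = NULL;
                    goto out;
            }
            dm_get(md);                             /* reference taken under the lock */
    out:
            spin_unlock(&_minor_lock);

            return md;
    }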

Cc: stable@vger.kernel.org
Signed-off-by: Hou Tao <houtao1@huawei.com>
Signed-off-by: Mike Snitzer <snitzer@redhat.com>
(cherry picked from commit b9a41d21dceadf8104812626ef85dc56ee8a60ed)

Reviewed-by: Shan Hai <shan.hai@oracle.com>
Signed-off-by: Allen Pais <allen.pais@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
NeilBrown [Wed, 5 Jul 2017 02:22:20 +0000 (12:22 +1000)]
NFS: only invalidate dentries that are clearly invalid.

Since commit bafc9b754f75 ("vfs: More precise tests in d_invalidate")
in v3.18, a return of '0' from ->d_revalidate() will cause the dentry
to be invalidated even if it has filesystems mounted on it or on a
descendant.  The mounted filesystem is unmounted.

This means we need to be careful not to return 0 unless the directory
referred to truly is invalid.  So -ESTALE or -ENOENT should invalidate
the directory.  Other errors such as -EPERM or -ERESTARTSYS should be
returned from ->d_revalidate() so they are propagated to the caller.

A particular problem can be demonstrated by:

1/ mount an NFS filesystem using NFSv3 on /mnt
2/ mount any other filesystem on /mnt/foo
3/ ls /mnt/foo
4/ turn off network, or otherwise make the server unable to respond
5/ ls /mnt/foo &
6/ cat /proc/$!/stack # note that nfs_lookup_revalidate is in the call stack
7/ kill -9 $! # this results in -ERESTARTSYS being returned
8/ observe that /mnt/foo has been unmounted.

This patch changes nfs_lookup_revalidate() to only treat
  -ESTALE from nfs_lookup_verify_inode() and
  -ESTALE or -ENOENT from ->lookup()
as indicating an invalid inode.  Other errors are returned.

Also nfs_check_inode_attributes() is changed to return -ESTALE rather
than -EIO.  This is consistent with the error returned in similar
circumstances from nfs_update_inode().

As this bug allows any user to unmount a filesystem mounted on an NFS
filesystem, this fix is suitable for stable kernels.

Fixes: bafc9b754f75 ("vfs: More precise tests in d_invalidate")
Cc: stable@vger.kernel.org (v3.18+)
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Orabug: 27870824

(cherry picked from commit cc89684c9a265828ce061037f1f79f4a68ccd3f7)
Signed-off-by: Calum Mackay <calum.mackay@oracle.com>
Reviewed-by: Jack Vogel <jack.vogel@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
David Ahern [Tue, 16 May 2017 06:19:17 +0000 (23:19 -0700)]
net: Improve handling of failures on link and route dumps

In general, rtnetlink dumps do not anticipate failure to dump a single
object (e.g., link or route) on a single pass. As both route and link
objects have grown via more attributes, that is no longer a given.

netlink dumps can handle a failure if the dump function returns an
error; specifically, netlink_dump adds the return code to the response
if it is <= 0 so userspace is notified of the failure. The missing
piece is the rtnetlink dump functions returning the error.

Fix route and link dump functions to return the errors if no object is
added to an skb (detected by skb->len == 0). IPv6 route dumps
(rt6_dump_route) already return the error; this patch updates IPv4 and
link dumps. Other dump functions may need to be adjusted as well.

Reported-by: Jan Moskyto Matejka <mq@ucw.cz>
Signed-off-by: David Ahern <dsahern@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Orabug: 27959177
(cherry picked from commit f6c5775ff0bfa62b072face6bf1d40f659f194b2)
Add missing err variable for the cherry-pick in rtnetlink.c
Signed-off-by: Jack Vogel <jack.vogel@oracle.com>
Reviewed-by: Dan Duval <dan.duval@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
zhong jiang [Fri, 18 Aug 2017 22:16:24 +0000 (15:16 -0700)]
mm/mempolicy: fix use after free when calling get_mempolicy

I hit a use-after-free issue when executing trinity and reproduced it
with KASAN enabled.  The related call trace is as follows.

  BUG: KASan: use after free in SyS_get_mempolicy+0x3c8/0x960 at addr ffff8801f582d766
  Read of size 2 by task syz-executor1/798

  INFO: Allocated in mpol_new.part.2+0x74/0x160 age=3 cpu=1 pid=799
     __slab_alloc+0x768/0x970
     kmem_cache_alloc+0x2e7/0x450
     mpol_new.part.2+0x74/0x160
     mpol_new+0x66/0x80
     SyS_mbind+0x267/0x9f0
     system_call_fastpath+0x16/0x1b
  INFO: Freed in __mpol_put+0x2b/0x40 age=4 cpu=1 pid=799
     __slab_free+0x495/0x8e0
     kmem_cache_free+0x2f3/0x4c0
     __mpol_put+0x2b/0x40
     SyS_mbind+0x383/0x9f0
     system_call_fastpath+0x16/0x1b
  INFO: Slab 0xffffea0009cb8dc0 objects=23 used=8 fp=0xffff8801f582de40 flags=0x200000000004080
  INFO: Object 0xffff8801f582d760 @offset=5984 fp=0xffff8801f582d600

  Bytes b4 ffff8801f582d750: ae 01 ff ff 00 00 00 00 5a 5a 5a 5a 5a 5a 5a 5a  ........ZZZZZZZZ
  Object ffff8801f582d760: 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b 6b  kkkkkkkkkkkkkkkk
  Object ffff8801f582d770: 6b 6b 6b 6b 6b 6b 6b a5                          kkkkkkk.
  Redzone ffff8801f582d778: bb bb bb bb bb bb bb bb                          ........
  Padding ffff8801f582d8b8: 5a 5a 5a 5a 5a 5a 5a 5a                          ZZZZZZZZ
  Memory state around the buggy address:
  ffff8801f582d600: fb fb fb fc fc fc fc fc fc fc fc fc fc fc fc fc
  ffff8801f582d680: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
  >ffff8801f582d700: fc fc fc fc fc fc fc fc fc fc fc fc fb fb fb fc

A !shared memory policy is not protected against parallel removal by
another thread; such removal is normally prevented by holding the
mmap_sem.  do_get_mempolicy, however, drops the lock midway while we can
still access the policy later.

The early premature up_read is a historical artifact from times when
put_user was called in this path (see https://lwn.net/Articles/124754/),
but that is gone since 8bccd85ffbaf ("[PATCH] Implement sys_* do_*
layering in the memory policy layer.").  With the current mempolicy
refcount model, the premature release introduces the use-after-free.

Fix the issue by removing the premature release.

Link: http://lkml.kernel.org/r/1502950924-27521-1-git-send-email-zhongjiang@huawei.com
Signed-off-by: zhong jiang <zhongjiang@huawei.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Minchan Kim <minchan@kernel.org>
Cc: Vlastimil Babka <vbabka@suse.cz>
Cc: David Rientjes <rientjes@google.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: <stable@vger.kernel.org> [2.6+]
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Orabug: 27963519
CVE: CVE-2018-10675
(cherry picked from commit 73223e4e2e3867ebf033a5a8eb2e5df0158ccc99)
Signed-off-by: Jack Vogel <jack.vogel@oracle.com>
Reviewed-by: Shan Hai <shan.hai@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
Greg Kroah-Hartman [Wed, 21 Mar 2018 15:45:53 +0000 (16:45 +0100)]
drm: udl: Properly check framebuffer mmap offsets

Orabug: 27963530
CVE: CVE-2018-8781

The mmap options sent to the udl framebuffer driver were not being
checked for all sets of possible crazy values.  Fix this up by properly
bounding the allowed values.

Reported-by: Eyal Itkin <eyalit@checkpoint.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: https://patchwork.freedesktop.org/patch/msgid/20180321154553.GA18454@kroah.com
(cherry picked from commit 3b82a4db8eaccce735dffd50b4d4e1578099b8e8)

Reviewed-by: Jack Vogel <jack.vogel@oracle.com>
Signed-off-by: Allen Pais <allen.pais@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
Eric Sandeen [Tue, 17 Apr 2018 06:07:27 +0000 (23:07 -0700)]
xfs: set format back to extents if xfs_bmap_extents_to_btree fails

If xfs_bmap_extents_to_btree fails in a mode where we call
xfs_iroot_realloc(-1) to de-allocate the root, set the
format back to extents.

Otherwise we can assume we can dereference ifp->if_broot
based on the XFS_DINODE_FMT_BTREE format, and crash.

Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=199423
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Darrick J. Wong <darrick.wong@oracle.com>
Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
(cherry picked from commit 2c4306f719b083d17df2963bc761777576b8ad1b)

Orabug: 27963576
CVE: CVE-2018-10323
Signed-off-by: Brian Maly <brian.maly@oracle.com>
Conflicts:
fs/xfs/libxfs/xfs_bmap.c
Drop a part of the patch which is inapplicable to the current kernel.
The WARN_ON_ONCE in the dropped part was introduced as a debugging
facility by upstream commit 2fcc319d2467 in the v4.11 kernel.

Signed-off-by: Shan Hai <shan.hai@oracle.com>
Reviewed-by: Jack Vogel <jack.vogel@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
Håkon Bugge [Sat, 5 May 2018 13:13:50 +0000 (15:13 +0200)]
Revert "mlx4: change the ICM table allocations to lowest needed size"

This reverts UEK4-QU7 commit:
e7567cf1d53bddbe233dabea5c632658671dae9e
("mlx4: change the ICM table allocations to lowest needed size")

Orabug: 27980030

Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
Reviewed-by: Chuck Anderson <chuck.anderson@oracle.com>
Signed-off-by: Qing Huang <qing.huang@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
Ben Seri [Fri, 8 Dec 2017 14:14:47 +0000 (15:14 +0100)]
Bluetooth: Prevent stack info leak from the EFS element.

Orabug: 28030514
CVE: CVE-2017-1000410

In the function l2cap_parse_conf_rsp and in the function
l2cap_parse_conf_req the following variable is declared without
initialization:

struct l2cap_conf_efs efs;

In addition, when parsing input configuration parameters in both of
these functions, the switch case for handling EFS elements may skip the
memcpy call that will write to the efs variable:

...
case L2CAP_CONF_EFS:
if (olen == sizeof(efs))
memcpy(&efs, (void *)val, olen);
...

The olen in the above if is attacker-controlled, and regardless of that
check, in both of these functions the efs variable would eventually be
added to the outgoing configuration request that is being built:

So by sending a configuration request, or response, that contains an
L2CAP_CONF_EFS element, but with an element length that is not
sizeof(efs) - the memcpy to the uninitialized efs variable can be
avoided, and the uninitialized variable would be returned to the
attacker (16 bytes).

This issue has been assigned CVE-2017-1000410

Cc: Marcel Holtmann <marcel@holtmann.org>
Cc: Gustavo Padovan <gustavo@padovan.org>
Cc: Johan Hedberg <johan.hedberg@gmail.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Ben Seri <ben@armis.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 06e7e776ca4d36547e503279aeff996cbb292c16)
Signed-off-by: Brian Maly <brian.maly@oracle.com>
Kevin Cernekee [Sun, 3 Dec 2017 20:12:45 +0000 (12:12 -0800)]
netfilter: nfnetlink_cthelper: Add missing permission checks

The capability check in nfnetlink_rcv() verifies that the caller
has CAP_NET_ADMIN in the namespace that "owns" the netlink socket.
However, nfnl_cthelper_list is shared by all net namespaces on the
system.  An unprivileged user can create user and net namespaces
in which he holds CAP_NET_ADMIN to bypass the netlink_net_capable()
check:

    $ nfct helper list
    nfct v1.4.4: netlink error: Operation not permitted
    $ vpnns -- nfct helper list
    {
            .name = ftp,
            .queuenum = 0,
            .l3protonum = 2,
            .l4protonum = 6,
            .priv_data_len = 24,
            .status = enabled,
    };

Add capable() checks in nfnetlink_cthelper, as this is cleaner than
trying to generalize the solution.
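
The added guard, repeated at the top of the affected request handlers, is
essentially the following sketch:

        /* nfnl_cthelper_list is global, so require CAP_NET_ADMIN in the
         * initial user namespace rather than only in the caller's one. */
        if (!capable(CAP_NET_ADMIN))
                return -EPERM;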

Signed-off-by: Kevin Cernekee <cernekee@chromium.org>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
(cherry picked from commit 4b380c42f7d00a395feede754f0bc2292eebe6e5)

Orabug: 27260771
CVE: CVE-2017-17448

Signed-off-by: Shan Hai <shan.hai@oracle.com>
Reviewed-by: Jack Vogel <jack.vogel@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
Kevin Cernekee [Wed, 6 Dec 2017 20:12:27 +0000 (12:12 -0800)]
netlink: Add netns check on taps

Currently, a nlmon link inside a child namespace can observe systemwide
netlink activity.  Filter the traffic so that nlmon can only sniff
netlink messages from its own netns.

Test case:

    vpnns -- bash -c "ip link add nlmon0 type nlmon; \
                      ip link set nlmon0 up; \
                      tcpdump -i nlmon0 -q -w /tmp/nlmon.pcap -U" &
    sudo ip xfrm state add src 10.1.1.1 dst 10.1.1.2 proto esp \
        spi 0x1 mode transport \
        auth sha1 0x6162633132330000000000000000000000000000 \
        enc aes 0x00000000000000000000000000000000
    grep --binary abc123 /tmp/nlmon.pcap

Signed-off-by: Kevin Cernekee <cernekee@chromium.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit 93c647643b48f0131f02e45da3bd367d80443291)

Orabug: 27260799
CVE: CVE-2017-17449

Signed-off-by: Shan Hai <shan.hai@oracle.com>
Reviewed-by: Jack Vogel <jack.vogel@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
Wanpeng Li [Fri, 13 Apr 2018 00:39:53 +0000 (20:39 -0400)]
KVM: Fix stack-out-of-bounds read in write_mmio

commit e39d200fa5bf5b94a0948db0dae44c1b73b84a56 upstream.

Reported by syzkaller:

  BUG: KASAN: stack-out-of-bounds in write_mmio+0x11e/0x270 [kvm]
  Read of size 8 at addr ffff8803259df7f8 by task syz-executor/32298

  CPU: 6 PID: 32298 Comm: syz-executor Tainted: G           OE    4.15.0-rc2+ #18
  Hardware name: LENOVO ThinkCentre M8500t-N000/SHARKBAY, BIOS FBKTC1AUS 02/16/2016
  Call Trace:
   dump_stack+0xab/0xe1
   print_address_description+0x6b/0x290
   kasan_report+0x28a/0x370
   write_mmio+0x11e/0x270 [kvm]
   emulator_read_write_onepage+0x311/0x600 [kvm]
   emulator_read_write+0xef/0x240 [kvm]
   emulator_fix_hypercall+0x105/0x150 [kvm]
   em_hypercall+0x2b/0x80 [kvm]
   x86_emulate_insn+0x2b1/0x1640 [kvm]
   x86_emulate_instruction+0x39a/0xb90 [kvm]
   handle_exception+0x1b4/0x4d0 [kvm_intel]
   vcpu_enter_guest+0x15a0/0x2640 [kvm]
   kvm_arch_vcpu_ioctl_run+0x549/0x7d0 [kvm]
   kvm_vcpu_ioctl+0x479/0x880 [kvm]
   do_vfs_ioctl+0x142/0x9a0
   SyS_ioctl+0x74/0x80
   entry_SYSCALL_64_fastpath+0x23/0x9a

The patched-vmmcall path writes the 3-byte opcode 0F 01 C1 (vmcall)
to guest memory; however, the write_mmio tracepoint always prints 8 bytes
through *(u64 *)val since kvm splits the mmio access into 8 bytes. This
leaks 5 bytes from the kernel stack (CVE-2017-17741).  This patch fixes
it by just accessing the bytes which we operate on.

Before patch:

syz-executor-5567  [007] .... 51370.561696: kvm_mmio: mmio write len 3 gpa 0x10 val 0x1ffff10077c1010f

After patch:

syz-executor-13416 [002] .... 51302.299573: kvm_mmio: mmio write len 3 gpa 0x10 val 0xc1010f
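
For illustration only (this sketch is not part of the patch; buffer
contents are made up), a minimal user-space model of the difference
between always tracing 8 bytes and tracing only the bytes written:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Model of the traced buffer: 3 bytes were written, the rest is stale. */
    static void trace_mmio_write(const unsigned char *val, unsigned int len)
    {
        uint64_t buggy = 0, fixed = 0;

        memcpy(&buggy, val, sizeof(buggy)); /* old behaviour: always 8 bytes (*(u64 *)val) */
        memcpy(&fixed, val, len);           /* fixed behaviour: only the bytes written    */

        printf("len %u buggy 0x%016llx fixed 0x%llx\n", len,
               (unsigned long long)buggy, (unsigned long long)fixed);
    }

    int main(void)
    {
        unsigned char buf[8] = { 0x0f, 0x01, 0xc1,               /* the 3 written bytes   */
                                 0x77, 0x00, 0xff, 0xff, 0x01 }; /* stale neighbour bytes */

        /* On a little-endian machine "buggy" shows the stale bytes, "fixed" is 0xc1010f. */
        trace_mmio_write(buf, 3);
        return 0;
    }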

Reported-by: Dmitry Vyukov <dvyukov@google.com>
Reviewed-by: Darren Kenny <darren.kenny@oracle.com>
Reviewed-by: Marc Zyngier <marc.zyngier@arm.com>
Tested-by: Marc Zyngier <marc.zyngier@arm.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Radim Krčmář <rkrcmar@redhat.com>
Cc: Marc Zyngier <marc.zyngier@arm.com>
Cc: Christoffer Dall <christoffer.dall@linaro.org>
Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Cc: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry-picked from commit 653c41ac4729261cb356ee1aff0f3f4f342be1eb)

Orabug: 27290606
CVE: CVE-2017-17741

Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agoxprtrdma: Detect unreachable NFS/RDMA servers more reliably
Chuck Lever [Tue, 11 Apr 2017 17:22:46 +0000 (13:22 -0400)]
xprtrdma: Detect unreachable NFS/RDMA servers more reliably

Current NFS clients rely on connection loss to determine when to
retransmit. In particular, for protocols like NFSv4, clients no
longer rely on RPC timeouts to drive retransmission: NFSv4 servers
are required to terminate a connection when they need a client to
retransmit pending RPCs.

When a server is no longer reachable, either because it has crashed
or because the network path has broken, the server cannot actively
terminate a connection. Thus NFS clients depend on transport-level
keepalive to determine when a connection must be replaced and
pending RPCs retransmitted.

However, RDMA RC connections do not have a native keepalive
mechanism. If an NFS/RDMA server crashes after a client has sent
RPCs successfully (an RC ACK has been received for all OTW RDMA
requests), there is no way for the client to know the connection is
moribund.

In addition, new RDMA requests are subject to the RPC-over-RDMA
credit limit. If the client has consumed all granted credits with
NFS traffic, it is not allowed to send another RDMA request until
the server replies. Thus it has no way to send a true keepalive when
the workload has already consumed all credits with pending RPCs.

To address this, forcibly disconnect a transport when an RPC times
out. This prevents moribund connections from stopping the
detection of failover or other configuration changes on the server.

Note that even if the connection is still good, retransmitting
any RPC will trigger a disconnect thanks to this logic in
xprt_rdma_send_request:

    /* Must suppress retransmit to maintain credits */
    if (req->rl_connect_cookie == xprt->connect_cookie)
        goto drop_connection;
    req->rl_connect_cookie = xprt->connect_cookie;

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Orabug: 27587008

(cherry picked from commit 33849792cbcdae2b04819cfb09fe3dca0a84a11e)
Signed-off-by: Calum Mackay <calum.mackay@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agosunrpc: Export xprt_force_disconnect()
Chuck Lever [Tue, 11 Apr 2017 17:22:38 +0000 (13:22 -0400)]
sunrpc: Export xprt_force_disconnect()

xprt_force_disconnect() is already invoked from the socket
transport. I want to invoke xprt_force_disconnect() from the
RPC-over-RDMA transport, which is a separate module from sunrpc.ko.

Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Orabug: 27587008

(cherry picked from commit e2a4f4fbefc5e5b7b4435f73711b7be94f780584)
Signed-off-by: Calum Mackay <calum.mackay@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agosunrpc: Allow xprt->ops->timer method to sleep
Chuck Lever [Wed, 8 Feb 2017 22:00:51 +0000 (17:00 -0500)]
sunrpc: Allow xprt->ops->timer method to sleep

The transport lock is needed to protect the xprt_adjust_cwnd() call
in xs_udp_timer, but it is not necessary for accessing the
rq_reply_bytes_recvd or tk_status fields. It is correct to sublimate
the lock into UDP's xs_udp_timer method, where it is required.

The ->timer method has to take the transport lock if needed, but it
can now sleep safely, or even call back into the RPC scheduler.

This is more a clean-up than a fix, but the "issue" was introduced
by my transport switch patches back in 2005.

Fixes: 46c0ee8bc4ad ("RPC: separate xprt_timer implementations")
Signed-off-by: Chuck Lever <chuck.lever@oracle.com>
Signed-off-by: Anna Schumaker <Anna.Schumaker@Netapp.com>
Orabug: 27587008

(cherry picked from commit b977b644ccf821ab1269582f7efe1d0d85faa1f6)
Signed-off-by: Calum Mackay <calum.mackay@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agoKVM: nVMX: fix guest CR4 loading when emulating L2 to L1 exit
Haozhong Zhang [Wed, 2 May 2018 01:12:08 +0000 (21:12 -0400)]
KVM: nVMX: fix guest CR4 loading when emulating L2 to L1 exit

When KVM emulates an exit from L2 to L1, it loads L1 CR4 into the
guest CR4. Before this CR4 loading, the guest CR4 refers to L2
CR4. Because these two CR4's are in different levels of guest, we
should vmx_set_cr4() rather than kvm_set_cr4() here. The latter, which
is used to handle guest writes to its CR4, checks the guest change to
CR4 and may fail if the change is invalid.

The failure may cause trouble. Consider we start
  a L1 guest with non-zero L1 PCID in use,
     (i.e. L1 CR4.PCIDE == 1 && L1 CR3.PCID != 0)
and
  a L2 guest with L2 PCID disabled,
     (i.e. L2 CR4.PCIDE == 0)
and the following events may happen:

1. If kvm_set_cr4() is used in load_vmcs12_host_state() to load L1 CR4
   into guest CR4 (in VMCS01) for L2 to L1 exit, it will fail because
   of PCID check. As a result, the guest CR4 recorded in L0 KVM (i.e.
   vcpu->arch.cr4) is left to the value of L2 CR4.

2. Later, if L1 attempts to change its CR4, e.g., clearing VMXE bit,
   kvm_set_cr4() in L0 KVM will think L1 also wants to enable PCID,
   because the wrong L2 CR4 is used by L0 KVM as L1 CR4. As L1
   CR3.PCID != 0, L0 KVM will inject GP to L1 guest.

Fixes: 4704d0befb072 ("KVM: nVMX: Exiting from L2 to L1")
Cc: qemu-stable@nongnu.org
Signed-off-by: Haozhong Zhang <haozhong.zhang@intel.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
(cherry-picked from commit 8eb3f87d903168bdbd1222776a6b1e281f50513e)
Orabug: 27720128
Signed-off-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agox86/microcode: probe CPU features on microcode update
Ankur Arora [Wed, 25 Apr 2018 22:12:25 +0000 (18:12 -0400)]
x86/microcode: probe CPU features on microcode update

Probe for updated CPUID features each time the microcode is
loaded. Specifically this means when the sysfs cpu/microcode
nodes are created (which is when the microcode is first loaded)
or from a user trigger via the sysfs microcode/reload interface.

Orabug: 27878230

Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
(cherry-picked from commit c97a2bf2aa93390b23fbf9adb943e494fee18a18)

conflicts:
  arch/x86/kernel/cpu/microcode/core.c
  arch/x86/include/asm/processor.h

[Backport: call init_scattered_cpuid_features() instead of
 get_cpu_cap().]

Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agox86/microcode: microcode_write() should not reference boot_cpu_data
Ankur Arora [Wed, 25 Apr 2018 21:55:02 +0000 (17:55 -0400)]
x86/microcode: microcode_write() should not reference boot_cpu_data

microcode_write() internally calls the AMD or Intel microcode
update logic, both of which update the cpu_data(cpu)->microcode
value. For probing speculation features however, we call
init_scattered_cpuid_features() with boot_cpu_data which is stale
and might have an old value of microcode version.

Fix this by using cpu_data() instead.

Orabug: 27878230

Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Reviewed-by: John Haxby <john.haxby@oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
(cherry-picked from commit ea35f733dca011782ef2a3647e7f3ac284dbed2e)

Conflict:
  arch/x86/kernel/cpu/microcode/core.c

[Backport: modify arch/x86/kernel/microcode_core.c instead.]

Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agox86/cpufeatures: use cpu_data in init_scattered_cpuid_flags()
Ankur Arora [Wed, 25 Apr 2018 21:38:58 +0000 (17:38 -0400)]
x86/cpufeatures: use cpu_data in init_scattered_cpuid_flags()

Post SMP init, cpu_data() contains the current cpuinfo state with
the boot_cpu_data potentially going stale.

Switch all boot_cpu_data references to cpu_data() in
init_scattered_cpuid_flags().

Orabug: 27878230

Signed-off-by: Ankur Arora <ankur.a.arora@oracle.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Krish Sadhukhan <krish.sadhukhan@oracle.com>
Reviewed-by: John Haxby <john.haxby@oracle.com>
Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
(cherry-picked from commit 6258d6d6ce2b9cf41830e8616ffc7841b3fddca9)

Conflict:
  arch/x86/kernel/cpu/common.c

[Backport: Modify init_scattered_cpuid_flags() instead of
 init_speculation_control().]

Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agomm/pagewalk.c: report holes in hugetlb ranges
Jann Horn [Tue, 14 Nov 2017 00:03:44 +0000 (01:03 +0100)]
mm/pagewalk.c: report holes in hugetlb ranges

This matters at least for the mincore syscall, which will otherwise copy
uninitialized memory from the page allocator to userspace.  It is
probably also a correctness error for /proc/$pid/pagemap, but I haven't
tested that.

Removing the `walk->hugetlb_entry` condition in walk_hugetlb_range() has
no effect because the caller already checks for that.

This only reports holes in hugetlb ranges to callers who have specified
a hugetlb_entry callback.

This issue was found using an AFL-based fuzzer.

v2:
 - don't crash on ->pte_hole==NULL (Andrew Morton)
 - add Cc stable (Andrew Morton)

Fixes: 1e25a271c8ac ("mincore: apply page table walker on do_mincore()")
Signed-off-by: Jann Horn <jannh@google.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Orabug: 27913118
CVE: CVE-2017-16994

(cherry picked from commit 373c4557d2aa362702c4c2d41288fb1e54990b7c)
Conflict: one line adjust due to huge_pte_offset() interface diff
Signed-off-by: Jack Vogel <jack.vogel@oracle.com>
Reviewed-by: Shan Hai <shan.hai@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agoKEYS: don't let add_key() update an uninstantiated key
David Howells [Thu, 12 Oct 2017 15:00:41 +0000 (16:00 +0100)]
KEYS: don't let add_key() update an uninstantiated key

Currently, when passed a key that already exists, add_key() will call the
key's ->update() method if such exists.  But this is heavily broken in the
case where the key is uninstantiated because it doesn't call
__key_instantiate_and_link().  Consequently, it doesn't do most of the
things that are supposed to happen when the key is instantiated, such as
setting the instantiation state, clearing KEY_FLAG_USER_CONSTRUCT and
awakening tasks waiting on it, and incrementing key->user->nikeys.

It also never takes key_construction_mutex, which means that
->instantiate() can run concurrently with ->update() on the same key.  In
the case of the "user" and "logon" key types this causes a memory leak, at
best.  Maybe even worse, the ->update() methods of the "encrypted" and
"trusted" key types actually just dereference a NULL pointer when passed an
uninstantiated key.

Change key_create_or_update() to wait interruptibly for the key to finish
construction before continuing.

This patch only affects *uninstantiated* keys.  For now we still allow a
negatively instantiated key to be updated (thereby positively
instantiating it), although that's broken too (the next patch fixes it)
and I'm not sure that anyone actually uses that functionality either.

Here is a simple reproducer for the bug using the "encrypted" key type
(requires CONFIG_ENCRYPTED_KEYS=y), though as noted above the bug
pertained to more than just the "encrypted" key type:

    #include <stdlib.h>
    #include <unistd.h>
    #include <keyutils.h>

    int main(void)
    {
        int ringid = keyctl_join_session_keyring(NULL);

        if (fork()) {
            for (;;) {
                const char payload[] = "update user:foo 32";

                usleep(rand() % 10000);
                add_key("encrypted", "desc", payload, sizeof(payload), ringid);
                keyctl_clear(ringid);
            }
        } else {
            for (;;)
                request_key("encrypted", "desc", "callout_info", ringid);
        }
    }

It causes:

    BUG: unable to handle kernel NULL pointer dereference at 0000000000000018
    IP: encrypted_update+0xb0/0x170
    PGD 7a178067 P4D 7a178067 PUD 77269067 PMD 0
    PREEMPT SMP
    CPU: 0 PID: 340 Comm: reproduce Tainted: G      D         4.14.0-rc1-00025-g428490e38b2e #796
    Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Bochs 01/01/2011
    task: ffff8a467a39a340 task.stack: ffffb15c40770000
    RIP: 0010:encrypted_update+0xb0/0x170
    RSP: 0018:ffffb15c40773de8 EFLAGS: 00010246
    RAX: 0000000000000000 RBX: ffff8a467a275b00 RCX: 0000000000000000
    RDX: 0000000000000005 RSI: ffff8a467a275b14 RDI: ffffffffb742f303
    RBP: ffffb15c40773e20 R08: 0000000000000000 R09: ffff8a467a275b17
    R10: 0000000000000020 R11: 0000000000000000 R12: 0000000000000000
    R13: 0000000000000000 R14: ffff8a4677057180 R15: ffff8a467a275b0f
    FS:  00007f5d7fb08700(0000) GS:ffff8a467f200000(0000) knlGS:0000000000000000
    CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    CR2: 0000000000000018 CR3: 0000000077262005 CR4: 00000000001606f0
    Call Trace:
     key_create_or_update+0x2bc/0x460
     SyS_add_key+0x10c/0x1d0
     entry_SYSCALL_64_fastpath+0x1f/0xbe
    RIP: 0033:0x7f5d7f211259
    RSP: 002b:00007ffed03904c8 EFLAGS: 00000246 ORIG_RAX: 00000000000000f8
    RAX: ffffffffffffffda RBX: 000000003b2a7955 RCX: 00007f5d7f211259
    RDX: 00000000004009e4 RSI: 00000000004009ff RDI: 0000000000400a04
    RBP: 0000000068db8bad R08: 000000003b2a7955 R09: 0000000000000004
    R10: 000000000000001a R11: 0000000000000246 R12: 0000000000400868
    R13: 00007ffed03905d0 R14: 0000000000000000 R15: 0000000000000000
    Code: 77 28 e8 64 34 1f 00 45 31 c0 31 c9 48 8d 55 c8 48 89 df 48 8d 75 d0 e8 ff f9 ff ff 85 c0 41 89 c4 0f 88 84 00 00 00 4c 8b 7d c8 <49> 8b 75 18 4c 89 ff e8 24 f8 ff ff 85 c0 41 89 c4 78 6d 49 8b
    RIP: encrypted_update+0xb0/0x170 RSP: ffffb15c40773de8
    CR2: 0000000000000018

Cc: <stable@vger.kernel.org> # v2.6.12+
Reported-by: Eric Biggers <ebiggers@google.com>
Signed-off-by: David Howells <dhowells@redhat.com>
cc: Eric Biggers <ebiggers@google.com>

Orabug: 27913330
CVE: CVE-2017-15299

(cherry picked from commit 60ff5b2f547af3828aebafd54daded44cfb0807a)
Signed-off-by: Jack Vogel <jack.vogel@oracle.com>
Reviewed-by: Shan Hai <shan.hai@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agodrm/vmwgfx: NULL pointer dereference in vmw_surface_define_ioctl()
Murray McAllister [Mon, 27 Mar 2017 09:12:53 +0000 (11:12 +0200)]
drm/vmwgfx: NULL pointer dereference in vmw_surface_define_ioctl()

Before memory allocations vmw_surface_define_ioctl() checks the
upper-bounds of a user-supplied size, but does not check if the
supplied size is 0.

Add check to avoid NULL pointer dereferences.

Cc: <stable@vger.kernel.org>
Signed-off-by: Murray McAllister <murray.mcallister@insomniasec.com>
Reviewed-by: Sinclair Yeh <syeh@vmware.com>
Orabug: 27913367
CVE: CVE-2017-7294

(cherry picked from commit 36274ab8c596f1240c606bb514da329add2a1bcd)
Signed-off-by: Jack Vogel <jack.vogel@oracle.com>
Reviewed-by: Shan Hai <shan.hai@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agovmscan: Support multiple kswapd threads per node
Buddy Lumpkin [Thu, 15 Mar 2018 06:57:13 +0000 (06:57 +0000)]
vmscan: Support multiple kswapd threads per node

Page replacement is handled in the Linux Kernel in one of two ways:

1) Asynchronously via kswapd
2) Synchronously, via direct reclaim

At page allocation time the allocating task is immediately given a page
from the zone free list allowing it to go right back to work doing
whatever it was doing, probably directly or indirectly executing
business logic.

Just prior to satisfying the allocation, the free page count is checked to
see if it has reached the zone low watermark and, if so, kswapd is awakened.
Kswapd will start scanning pages looking for inactive pages to evict to
make room for new page allocations. The work of kswapd allows tasks to
continue allocating memory from their respective zone free list without
incurring any delay.

When the demand for free pages exceeds the rate that kswapd tasks can
supply them, page allocation works differently. Once the allocating task
finds that the number of free pages is at or below the zone min
watermark, the task will no longer pull pages from the free list.
Instead, the task will run the same CPU-bound routines as kswapd to
satisfy its own allocation by scanning and evicting pages. This is
called a direct reclaim.

The time spent performing a direct reclaim can be substantial, often
taking tens to hundreds of milliseconds for small order0 allocations to
half a second or more for order9 huge-page allocations. In fact, kswapd
is not actually required on a linux system. It exists for the sole
purpose of optimizing performance by preventing direct reclaims.
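
To make the two paths concrete, here is a simplified user-space model of
the watermark decision described above (names and thresholds are
illustrative, not the kernel's actual logic):

    #include <stdio.h>

    enum alloc_path { FAST_PATH, WAKE_KSWAPD, DIRECT_RECLAIM };

    /* Compare a zone's free page count against its low and min watermarks. */
    static enum alloc_path classify_allocation(long free_pages, long low_wmark,
                                               long min_wmark)
    {
        if (free_pages <= min_wmark)
            return DIRECT_RECLAIM; /* allocating task scans and evicts pages itself  */
        if (free_pages <= low_wmark)
            return WAKE_KSWAPD;    /* take a free page now, kswapd reclaims behind us */
        return FAST_PATH;          /* plenty of free pages, nothing else to do        */
    }

    int main(void)
    {
        static const char *names[] = { "fast path", "wake kswapd", "direct reclaim" };
        long low = 1000, min = 250;
        long samples[] = { 5000, 800, 100 };

        for (unsigned int i = 0; i < sizeof(samples) / sizeof(samples[0]); i++)
            printf("free=%ld -> %s\n", samples[i],
                   names[classify_allocation(samples[i], low, min)]);
        return 0;
    }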

When memory shortfall is sufficient to trigger direct reclaims, they can
occur in any task that is running on the system. A single aggressive
memory allocating task can set the stage for collateral damage to occur
in small tasks that rarely allocate additional memory. Consider the
impact of injecting an additional 100ms of latency when nscd allocates
memory to facilitate caching of a DNS query.

The presence of direct reclaims 10 years ago was a fairly reliable
indicator that too much was being asked of a Linux system. Kswapd was
likely wasting time scanning pages that were ineligible for eviction.
Adding RAM or reducing the working set size would usually make the
problem go away. Since then hardware has evolved to bring a new struggle
for kswapd. Storage speeds have increased by orders of magnitude while
CPU clock speeds have actually slowed down. This presents a throughput
problem for a single threaded kswapd that will get worse with each
generation of new hardware.

------------
Test Details
------------

The tests below were designed with the assumption that a kswapd
bottleneck is best demonstrated using filesystem reads. This way, the
inactive list will be full of clean pages, simplifying the analysis and
allowing kswapd to achieve the highest possible steal rate. Maximum
steal rates for kswapd are likely to be the same or lower for any other
mix of page types on the system.

Tests were run on a 2U Oracle X7-2L with 52 Intel Xeon Skylake 2GHz
cores, 756GB of RAM and 8 x 3.6 TB NVMe Solid State Disk drives. Each
drive has an XFS filesystem mounted separately as /d0 through /d7. NVMe
drives require multiple concurrent streams to show their potential, so I
created 11 250GB zero-filled files on each drive so that I could test
with parallel reads.

The test script runs in multiple stages. At each stage, the number of dd
tasks run concurrently is increased by 2. I did not include all of the
test output for brevity.

During each stage dd tasks are launched to read from each drive in a
round robin fashion until the specified number of tasks for the stage
has been reached. Then iostat, vmstat and top are started in the
background with 10 second intervals. After five minutes, all of the dd
tasks are killed and the iostat, vmstat and top output is parsed in
order to report the following:

CPU consumption
- sy: aggregate kernel mode CPU consumption from vmstat output. The
  value doesn't tend to fluctuate much so I just grab the highest value.
  Each sample is averaged over 10 seconds
- dd_cpu: for all of the dd tasks averaged across the top samples since
  there is a lot of variation.

Throughput
- in Kbytes
- Command is iostat -x -d 10 -g total

This first test performs reads using O_DIRECT in order to show the peak
throughput that can be obtained using these drives. It also demonstrates
how rapidly throughput scales as the number of dd tasks are increased.

The dd command for this test looks like this:

Command Used: dd iflag=direct if=/d${i}/$n of=/dev/null bs=4M

dd sy dd_cpu throughput
6  1  4.52   14966994.50
10 1  4.94   21503269.37
16 1  4.70   25791251.00
22 1  5.02   26139553.00
28 1  4.85   26242989.00
34 2  4.53   26253264.20
40 2  3.82   26265978.60
46 2  3.39   26256091.80
52 2  3.06   26256913.60
58 2  2.74   26256988.40
64 2  2.50   26256534.20
70 2  2.27   26255088.00
76 2  2.12   26247909.00
80 2  1.99   26251164.80

Throughput peaked with 40 dd tasks at 26265978.60 KB/s. Very little
system CPU was consumed, as expected, since the drives DMA directly into
the user address space when using O_DIRECT.

The remaining tests do not use O_DIRECT. We drop the page cache before
testing and stop the test as soon as kswapd wakes up.

dd sy dd_cpu throughput
6  2  30.34  5245348.50
10 3  32.53  7735288.00
16 5  32.78  11059243.20
22 6  30.77  13371912.80
28 8  31.52  16092092.00
34 10 30.12  18000076.80
40 11 29.34  19368494.40
46 11 26.45  20450313.60
52 13 25.47  21249290.40
58 13 23.75  22008188.80
64 13 21.38  22298248.80
70 15 21.39  22442940.80
76 14 18.82  22876260.80
80 15 19.45  23143716.80

Each read has to pause after the buffer in kernel space is populated
while that data is added to the pagecache and copied into the user
address space. For this reason, more parallel streams are required to
achieve peak throughput. The copy operation consumes substantially more
CPU than direct IO as expected.

The next test measures throughput after kswapd starts running. This is
the same test only we wait for kswapd to wake up before we start
collecting metrics.

The script actually keeps track of a few things that were not mentioned
earlier. It tracks direct reclaims and page scans by watching the
metrics in /proc/vmstat. CPU consumption for kswapd is tracked the same
way it is tracked for dd.

Since the test is 100% reads, you can assume that the page steal rate
for kswapd and direct reclaims is almost identical to the scan rate.

1 kswapd thread per node
dd sy dd_cpu kswapd0 kswapd1 throughput  dr's  pgscan_kswapd pgscan_direct
10 4  31.56  24.86   23.56   7828015.60  0     460668253     0
16 7  36.10  65.94   71.74   11149401.80 0     900894848     0
22 10 37.79  91.94   94.25   14271445.20 179   1149779893    4236610
28 14 46.04  86.71   84.39   14633873.80 14624 829782742     346638611
34 16 41.23  85.76   84.04   16058195.40 16303 927594668     386370834
40 20 47.65  69.78   68.79   15381517.80 28538 566746661     676222823
46 22 45.76  64.39   64.50   15941522.40 32567 510483237     771659039
52 25 47.40  60.56   63.15   15189850.20 34504 422051924     816932307
58 29 48.21  53.44   57.19   15191931.40 38313 330596133     907319630
64 32 49.88  51.08   51.10   15073485.20 41939 233133908     993009680
70 36 50.71  51.32   51.54   15265733.00 43348 209193894     1026357511
76 40 51.78  51.39   54.38   15091290.20 43804 192167798     1037072462
80 44 52.95  48.91   55.95   15009935.60 44218 177718893     1046802606

Look closely at the scan statistics and the CPU consumption numbers and
it should be clear that the bulk of the CPU consumption is occurring in
the context of the dd tasks due to direct reclaims, not kswapd.

Same test, more kswapd tasks:

6 kswapd threads per node
dd sy dd_cpu kswapd0 kswapd1 throughput  dr's  pgscan_kswapd pgscan_direct
10 4  33.19  6.98    6.44    8184050.97  0     460877355     0
16 10 41.61  27.99   28.26   11533556.80 0     941456735     0
22 12 39.00  28.12   29.67   14303265.20 10    1170356251    237431
28 15 37.53  38.01   40.42   16449001.40 30    1355387318    711292
34 19 38.87  49.81   51.33   18094928.20 0     1495622630    0
40 22 37.62  56.93   59.27   19562580.80 0     1618461307    0
46 25 36.51  64.00   66.34   20800868.60 0     1715162179    0
52 28 36.89  70.68   74.60   21650189.60 0     1787311285    0
58 34 37.44  80.59   81.43   22395273.00 1190  1794721827    28089474
64 44 50.22  67.36   76.96   21848111.20 18150 1105060289    429342513
70 46 55.57  56.59   64.22   18766118.20 27918 724659653     660301547
76 50 42.37  67.79   75.83   23688889.40 18603 1171174674    440088674
80 49 40.14  72.05   79.60   22350506.00 15680 1310470634    370843890

With 58 dd tasks, throughput is roughly the same as what we saw without
memory pressure. Ten additional kswapd tasks (5 per node) resulted in a
17% increase in aggregate kernel mode CPU consumption.

NOTE: The kswapd tasks were originally tracked with an array of task
structs in each pgdata structure. Sadly, any changes to the pg_data_t
resulted in KABI breakage. Look for the following definition that was
used as a workaround:

static struct task_struct *kswapd_list[MAX_NUMNODES][MAX_KSWAPD_THREADS];

Orabug: 27913411

Signed-off-by: Buddy Lumpkin <buddy.lumpkin@oracle.com>
Reviewed-by: Khalid Aziz <khalid.aziz@oracle.com>
Reviewed-by: Pavel Tatashin <pasha.tatashin@oracle.com>
Reviewed-by: John Sobecki <john.sobecki@oracle.com>
Reviewed-by: Henry Willard <henry.willard@oracle.com>

Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agotcp: don't use F-RTO on non-recurring timeouts
Yuchung Cheng [Mon, 13 Jul 2015 19:10:20 +0000 (12:10 -0700)]
tcp: don't use F-RTO on non-recurring timeouts

Currently F-RTO may repeatedly send new data packets on non-recurring
timeouts in CA_Loss mode. This is a bug because F-RTO (RFC5682)
should only be used on either new recovery or recurring timeouts.

This exacerbates the recovery progress during frequent timeout &
repair, because we prioritize sending new data packets instead of
repairing the holes when the bandwidth is already scarce.

Fix it by correcting the test of a new recovery episode.

Orabug: 27901860

Signed-off-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit f82b681a511f4d61069e9586a9cf97bdef371ef3)

Reviewed-by: Hakon Bugge <haakon.bugge@oracle.com>
Signed-off-by: Venkat Venkatsubra <venkat.x.venkatsubra@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agonet/rds: ib: Release correct number of frags
Håkon Bugge [Tue, 24 Apr 2018 11:13:48 +0000 (13:13 +0200)]
net/rds: ib: Release correct number of frags

Commit c682e8474bd4 ("net/rds: reduce memory footprint during
ib_post_recv in IB transport") introduces an SG list instead of a
single contiguous fragment. When rebuilding the caches, it attempts
to release the number of fragments used by the new connection,
independent of the actual number of fragments used by the cache. This
leads to a kernel crash. Instead, release the correct number of
fragments.

Orabug: 27924161

Signed-off-by: Håkon Bugge <haakon.bugge@oracle.com>
Reviewed-by: Zhu Yanjun <yanjun.zhu@oracle.com>
Reviewed-by: Ka-Cheong Poon <ka-cheong.poon@oracle.com>
Acked-by: Santosh Shilimkar <santosh.shilimkar@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agocrypto: rng - Remove old low-level rng interface
Herbert Xu [Tue, 21 Apr 2015 02:46:46 +0000 (10:46 +0800)]
crypto: rng - Remove old low-level rng interface

Orabug: 27926676
CVE: CVE-2017-15116

Now that all rng implementations have switched over to the new
interface, we can remove the old low-level interface.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
(cherry picked from commit 94f1bb15bed84ad6c893916b7e7b9db6f1d7eec6)
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agocrypto: drbg - Convert to new rng interface
Herbert Xu [Tue, 21 Apr 2015 02:46:41 +0000 (10:46 +0800)]
crypto: drbg - Convert to new rng interface

Orabug: 27926676
CVE: CVE-2017-15116

This patch converts the DRBG implementation to the new low-level
rng interface.

This allows us to get rid of struct drbg_gen by using the new RNG
API instead.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Stephan Mueller <smueller@chronox.de>
(cherry picked from commit 8fded5925d0a733c46f8d0b5edd1c9b315882b1d)
Signed-off-by: Brian Maly <brian.maly@oracle.com>
Conflicts:
crypto/drbg.c

Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agocrypto: ansi_cprng - Convert to new rng interface
Herbert Xu [Tue, 21 Apr 2015 02:46:44 +0000 (10:46 +0800)]
crypto: ansi_cprng - Convert to new rng interface

Orabug:  27926676
CVE: CVE-2017-15116

This patch converts the ANSI CPRNG implementation to the new
low-level rng interface.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
(cherry picked from commit e7c2422a839bfc6876a2f7a9b283bb2963f0287b)
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agocrypto: krng - Convert to new rng interface
Herbert Xu [Tue, 21 Apr 2015 02:46:45 +0000 (10:46 +0800)]
crypto: krng - Convert to new rng interface

Orabug: 27926676
CVE: CVE-2017-15116

This patch converts the KRNG implementation to the new low-level
rng interface.

Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
(cherry picked from commit e33cf2c5aab7d0012e7890089e89ae2466c2449c)
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agoRDS: Heap OOB write in rds_message_alloc_sgs()
Mohamed Ghannam [Tue, 2 Jan 2018 19:44:34 +0000 (19:44 +0000)]
RDS: Heap OOB write in rds_message_alloc_sgs()

Orabug: 27934066
CVE: CVE-2018-5332

When args->nr_local is 0, nr_pages also ends up 0 due to the size
calculation in rds_rm_size(). That value is later used to allocate
pages for DMA, so this bug produces a heap out-of-bounds write
to a specific memory region.
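
As an aside (this is not the patch itself; names and sizing are simplified
stand-ins), a user-space sketch of the kind of guard that addresses this:
reject nr_local == 0 before any size calculation is done:

    #include <errno.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Simplified stand-in for the args size calculation and page allocation. */
    static int setup_rdma_pages(unsigned int nr_local)
    {
        if (nr_local == 0)          /* guard: an empty request is invalid */
            return -EINVAL;

        size_t nr_pages = nr_local; /* stand-in for rds_rm_size()         */
        void **pages = calloc(nr_pages, sizeof(*pages));
        if (!pages)
            return -ENOMEM;

        /* ... build one scatter-gather entry per page; with nr_pages == 0
         * that setup would write past a zero-sized allocation ... */
        free(pages);
        return 0;
    }

    int main(void)
    {
        printf("nr_local=0 -> %d\n", setup_rdma_pages(0)); /* rejected early */
        printf("nr_local=4 -> %d\n", setup_rdma_pages(4)); /* succeeds       */
        return 0;
    }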

Signed-off-by: Mohamed Ghannam <simo.ghannam@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit c095508770aebf1b9218e77026e48345d719b17c)
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agonet: Fix double free and memory corruption in get_net_ns_by_id()
Eric W. Biederman [Tue, 19 Dec 2017 17:27:56 +0000 (11:27 -0600)]
net: Fix double free and memory corruption in get_net_ns_by_id()

Orabug: 27934789
CVE: CVE-2017-15129

(I can trivially verify that that idr_remove in cleanup_net happens
 after the network namespace count has dropped to zero --EWB)

Function get_net_ns_by_id() does not check for net::count
after it has found a peer in netns_ids idr.

It may dereference a peer, after its count has already been
finally decremented. This leads to double free and memory
corruption:

put_net(peer)                                   rtnl_lock()
atomic_dec_and_test(&peer->count) [count=0]     ...
__put_net(peer)                                 get_net_ns_by_id(net, id)
  spin_lock(&cleanup_list_lock)
  list_add(&net->cleanup_list, &cleanup_list)
  spin_unlock(&cleanup_list_lock)
queue_work()                                      peer = idr_find(&net->netns_ids, id)
  |                                               get_net(peer) [count=1]
  |                                               ...
  |                                               (use after final put)
  v                                               ...
  cleanup_net()                                   ...
    spin_lock(&cleanup_list_lock)                 ...
    list_replace_init(&cleanup_list, ..)          ...
    spin_unlock(&cleanup_list_lock)               ...
    ...                                           ...
    ...                                           put_net(peer)
    ...                                             atomic_dec_and_test(&peer->count) [count=0]
    ...                                               spin_lock(&cleanup_list_lock)
    ...                                               list_add(&net->cleanup_list, &cleanup_list)
    ...                                               spin_unlock(&cleanup_list_lock)
    ...                                             queue_work()
    ...                                           rtnl_unlock()
    rtnl_lock()                                   ...
    for_each_net(tmp) {                           ...
      id = __peernet2id(tmp, peer)                ...
      spin_lock_irq(&tmp->nsid_lock)              ...
      idr_remove(&tmp->netns_ids, id)             ...
      ...                                         ...
      net_drop_ns()                               ...
net_free(peer)                            ...
    }                                             ...
  |
  v
  cleanup_net()
    ...
    (Second free of peer)

Also, put_net() on the right cpu may reorder with left's cpu
list_replace_init(&cleanup_list, ..), and then cleanup_list
will be corrupted.

Since cleanup_net() is executed in worker thread, while
put_net(peer) can happen everywhere, there should be
enough time for concurrent get_net_ns_by_id() to pick
the peer up, and the race does not seem to be unlikely.
The patch fixes the problem in the standard way.
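
As an illustration (not taken from the patch), a small user-space model of
that standard fix: a conditional "get" that refuses to take a reference on
an object whose count has already dropped to zero, analogous to
atomic_inc_not_zero():

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* Plain get: blindly increments, even if the object is being freed. */
    static void obj_get(atomic_int *count)
    {
        atomic_fetch_add(count, 1);
    }

    /* Conditional get: only succeeds while the count is still non-zero. */
    static bool obj_maybe_get(atomic_int *count)
    {
        int old = atomic_load(count);

        do {
            if (old == 0)
                return false; /* too late: the object is already being torn down */
        } while (!atomic_compare_exchange_weak(count, &old, old + 1));

        return true;
    }

    int main(void)
    {
        atomic_int live = 1, dying = 0;

        obj_get(&live); /* count 1 -> 2 */
        printf("live:  maybe_get=%d count=%d\n",
               obj_maybe_get(&live), atomic_load(&live));
        printf("dying: maybe_get=%d count=%d\n",
               obj_maybe_get(&dying), atomic_load(&dying));
        return 0;
    }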

(Also, there is possible problem in peernet2id_alloc(), which requires
check for net::count under nsid_lock and maybe_get_net(peer), but
in current stable kernel it's used under rtnl_lock() and it has to be
safe. Openswitch begun to use peernet2id_alloc(), and possibly it should
be fixed too. While this is not in stable kernel yet, so I'll send
a separate message to netdev@ later).

Cc: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: Kirill Tkhai <ktkhai@virtuozzo.com>
Fixes: 0c7aecd4bde4 "netns: add rtnl cmd to add and get peer netns ids"
Reviewed-by: Andrey Ryabinin <aryabinin@virtuozzo.com>
Reviewed-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
Reviewed-by: Eric Dumazet <edumazet@google.com>
Acked-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit 21b5944350052d2583e82dd59b19a9ba94a007f0)
Signed-off-by: Brian Maly <brian.maly@oracle.com>
Conflicts:
net/core/net_namespace.c

Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agovhost/scsi: fix reuse of &vq->iov[out] in response
Benjamin Coddington [Mon, 6 Jun 2016 22:07:59 +0000 (18:07 -0400)]
vhost/scsi: fix reuse of &vq->iov[out] in response

The address of the iovec &vq->iov[out] is not guaranteed to contain the scsi
command's response iovec throughout the lifetime of the command.  Rather, it
is more likely to contain an iovec from an immediately following command
after looping back around to vhost_get_vq_desc().  Pass along the iovec
entirely instead.

Fixes: 79c14141a487 ("vhost/scsi: Convert completion path to use copy_to_iter")
Cc: stable@vger.kernel.org
Signed-off-by: Benjamin Coddington <bcodding@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
(cherry picked from commit a77ec83a57890240c546df00ca5df1cdeedb1cc3)

Orabug: 27928330

Signed-off-by: Gayatri Vasudevan <gayatri.vasudevan@oracle.com>
Reviewed-by: Mridula Shastry <mridula.c.shastry@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agokernel.spec: add requires system-release for OL7
Brian Maly [Wed, 2 May 2018 23:46:35 +0000 (19:46 -0400)]
kernel.spec: add requires system-release for OL7

Orabug: 27955380

Add "Requires: system-release" to OL7 specfile

Signed-off-by: Brian Maly <brian.maly@oracle.com>
Reviewed-by: Victor Erminpour <victor.erminpour@oracle.com>
7 years agox86/kernel/traps.c: fix trace_die_notifier return value
Kris Van Hees [Wed, 18 Apr 2018 12:58:27 +0000 (15:58 +0300)]
x86/kernel/traps.c: fix trace_die_notifier return value

When triggering an int3 directly, the trace_die_notifier() actually returns 1
(whereas all other notifiers return 0), and that 1 value was being interpreted
as an indicator that DTrace handled the trap and that emulation is needed.  The
codei, from that point on, took a branch that is only to be used when the trap
occurs in kernel code, which is not good when it was actually triggered from
userspace.

OraBug: 27895315
CVE: CVE-2018-8897

Signed-off-by: Kris Van Hees <kris.van.hees@oracle.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agox86/entry/64: Dont use IST entry for #BP stack
Andy Lutomirski [Tue, 27 Mar 2018 16:28:03 +0000 (18:28 +0200)]
x86/entry/64: Dont use IST entry for #BP stack

There's nothing IST-worthy about #BP/int3.  We don't allow kprobes
in the small handful of places in the kernel that run at CPL0 with
an invalid stack, and 32-bit kernels have used normal interrupt
gates for #BP forever.

Furthermore, we don't allow kprobes in places that have usergs while
in kernel mode, so "paranoid" is also unnecessary.

OraBug: 27895315
CVE: CVE-2018-8897

Signed-off-by: Andy Lutomirski <luto@kernel.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: stable@vger.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit d8ba61ba58c88d5207c1ba2f7d9a2280e7d03be9)

Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agokvm/x86: fix icebp instruction handling
gregkh@linuxfoundation.org [Tue, 27 Mar 2018 16:27:55 +0000 (18:27 +0200)]
kvm/x86: fix icebp instruction handling

The undocumented 'icebp' instruction (aka 'int1') works pretty much like
'int3' in the absence of in-circuit probing equipment (except,
obviously, that it raises #DB instead of raising #BP), and is used by
some validation test-suites as such.

But Andy Lutomirski noticed that his test suite acted differently in kvm
than on bare hardware.

The reason is that kvm used an inexact test for the icebp instruction:
it just assumed that an all-zero VM exit qualification value meant that
the VM exit was due to icebp.

That is not unlike the guess that do_debug() does for the actual
exception handling case, but it's purely a heuristic, not an absolute
rule.  do_debug() does it because it wants to ascribe _some_ reasons to
the #DB that happened, and an empty %dr6 value means that 'icebp' is the
most likely cause and we have no better information.

But kvm can just do it right, because unlike the do_debug() case, kvm
actually sees the real reason for the #DB in the VM-exit interruption
information field.

So instead of relying on an inexact heuristic, just use the actual VM
exit information that says "it was 'icebp'".

Right now the 'icebp' instruction isn't technically documented by Intel,
but that will hopefully change.  The special "privileged software
exception" information _is_ actually mentioned in the Intel SDM, even
though the cause of it isn't enumerated.

OraBug: 27895356
CVE: CVE-2018-1087

Reported-by: Andy Lutomirski <luto@kernel.org>
Tested-by: Paolo Bonzini <pbonzini@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 32d43cd391bacb5f0814c2624399a5dad3501d09)

Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Mihai Carabas <mihai.carabas@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agoperf/hwbp: Simplify the perf-hwbp code, fix documentation
Linus Torvalds [Tue, 27 Mar 2018 01:39:07 +0000 (15:39 -1000)]
perf/hwbp: Simplify the perf-hwbp code, fix documentation

Orabug: 27947602
CVE: CVE-2018-1000199

Annoyingly, modify_user_hw_breakpoint() unnecessarily complicates the
modification of a breakpoint - simplify it and remove the pointless
local variables.

Also update the stale Docbook while at it.

Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Acked-by: Thomas Gleixner <tglx@linutronix.de>
Cc: <stable@vger.kernel.org>
Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit f67b15037a7a50c57f72e69a6d59941ad90a0f0f)
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agoscsi: iscsi_tcp: set BDI_CAP_STABLE_WRITES when data digest enabled
Jianchao Wang [Wed, 4 Apr 2018 02:07:31 +0000 (10:07 +0800)]
scsi: iscsi_tcp: set BDI_CAP_STABLE_WRITES when data digest enabled

iscsi tcp will first send out data, then calculate and send the data
digest. If we don't have BDI_CAP_STABLE_WRITES, the page cache can be
written to in spite of the ongoing writeback. Consequently, a wrong digest
will be computed and sent to the target.

To fix this, set BDI_CAP_STABLE_WRITES when data digest is enabled
in iscsi_tcp .slave_configure callback.

Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Acked-by: Chris Leech <cleech@redhat.com>
Acked-by: Lee Duncan <lduncan@suse.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
(backport upstream commit 89d0c804392bb962553f23dc4c119d11b6bd1675)

conflict:
drivers/scsi/iscsi_tcp.c

Orabug: 27726302

Reviewed-by: Junxiao Bi <junxiao.bi@oracle.com>
Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agoblock: fix bio_will_gap() for first bvec with offset
Ming Lei [Fri, 14 Apr 2017 19:58:29 +0000 (13:58 -0600)]
block: fix bio_will_gap() for first bvec with offset

Commit 729204ef49ec("block: relax check on sg gap") allows us to merge
bios, if both are physically contiguous.  This change can merge a huge
number of small bios, through mkfs for example, mkfs.ntfs running time
can be decreased to ~1/10.

But if one rq starts with a non-aligned buffer (the 1st bvec's bv_offset
is non-zero) and if we allow the merge, it is quite difficult to respect
sg gap limit, especially the max segment size, or we risk having an
unaligned virtual boundary.  This patch tries to avoid the issue by
disallowing a merge, if the req starts with an unaligned buffer.

Also add comments to explain why the merged segment can't end in
unaligned virt boundary.

Fixes: 729204ef49ec ("block: relax check on sg gap")
Tested-by: Johannes Thumshirn <jthumshirn@suse.de>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Signed-off-by: Ming Lei <ming.lei@redhat.com>
Rewrote parts of the commit message and comments.

Signed-off-by: Jens Axboe <axboe@fb.com>
(cherry picked from commit 5a8d75a1b8c99bdc926ba69b7b7dbe4fae81a5af)

Orabug: 27775588

Signed-off-by: Shan Hai <shan.hai@oracle.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agoblock: relax check on sg gap
Ming Lei [Sat, 17 Dec 2016 10:49:09 +0000 (18:49 +0800)]
block: relax check on sg gap

If the last bvec of the 1st bio and the 1st bvec of the next
bio are physically contigious, and the latter can be merged
to last segment of the 1st bio, we should think they don't
violate sg gap(or virt boundary) limit.

Both Vitaly and Dexuan reported lots of unmergeable small bios
are observed when running mkfs on Hyper-V virtual storage, and
performance becomes quite low. This patch fixes that performance
issue.

The same issue should exist on NVMe, since it sets virt boundary too.

Reported-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Reported-by: Dexuan Cui <decui@microsoft.com>
Tested-by: Dexuan Cui <decui@microsoft.com>
Cc: Keith Busch <keith.busch@intel.com>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
(cherry picked from commit 729204ef49ec00b788ce23deb9eb922a5769f55d)

Orabug: 27775588

Signed-off-by: Shan Hai <shan.hai@oracle.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agoblock: don't optimize for non-cloned bio in bio_get_last_bvec()
Ming Lei [Sat, 12 Mar 2016 14:56:19 +0000 (22:56 +0800)]
block: don't optimize for non-cloned bio in bio_get_last_bvec()

For !BIO_CLONED bio, we can use .bi_vcnt safely, but it
doesn't mean we can just simply return .bi_io_vec[.bi_vcnt - 1]
because the start position may have been moved in the middle of
the bvec, such as splitting in the middle of bvec.

Fixes: 7bcd79ac50d9(block: bio: introduce helpers to get the 1st and last bvec)
Cc: stable@vger.kernel.org
Reported-by: Kent Overstreet <kent.overstreet@gmail.com>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
(cherry picked from commit 90d0f0f11588ec692c12f9009089b398be395184)

Orabug: 27775588

Signed-off-by: Shan Hai <shan.hai@oracle.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agoblock: merge: get the 1st and last bvec via helpers
Ming Lei [Fri, 26 Feb 2016 15:40:53 +0000 (23:40 +0800)]
block: merge: get the 1st and last bvec via helpers

This patch applies the two introduced helpers to
figure out the 1st and last bvec.

Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
(cherry picked from commit e827091cb1bcd8e718ac3657845fb809c0b93324)

Orabug: 27775588

Signed-off-by: Shan Hai <shan.hai@oracle.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agoblock: get the 1st and last bvec via helpers
Ming Lei [Fri, 26 Feb 2016 15:40:52 +0000 (23:40 +0800)]
block: get the 1st and last bvec via helpers

This patch applies the two introduced helpers to
figure out the 1st and last bvec, and fixes the
original way after bio splitting.

Cc: stable@vger.kernel.org
Reported-by: Sagi Grimberg <sagig@dev.mellanox.co.il>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
(cherry picked from commit 25e71a99f10e444cd00bb2ebccb11e1c9fb672b1)

Orabug: 27775588

Signed-off-by: Shan Hai <shan.hai@oracle.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agoblock: check virt boundary in bio_will_gap()
Ming Lei [Fri, 26 Feb 2016 15:40:51 +0000 (23:40 +0800)]
block: check virt boundary in bio_will_gap()

In the following patch, the way for figuring out
the last bvec will be changed with a bit cost introduced,
so return immediately if the queue doesn't have virt
boundary limit. Actually most devices do not have
this limit.

Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
(cherry picked from commit e0af29171aa8912e1ca95023b75ef336cd70d661)

Orabug: 27775588

Signed-off-by: Shan Hai <shan.hai@oracle.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agoblock: bio: introduce helpers to get the 1st and last bvec
Ming Lei [Fri, 26 Feb 2016 15:40:50 +0000 (23:40 +0800)]
block: bio: introduce helpers to get the 1st and last bvec

The bio passed to bio_will_gap() may be fast cloned from upper
layer(dm, md, bcache, fs, ...), or from bio splitting in block
core.

Unfortunately bio_will_gap() just figures out the last bvec via
'bi_io_vec[prev->bi_vcnt - 1]' directly, and this way is obviously
wrong.

This patch introduces two helpers for getting the first and last
bvec of one bio for fixing the issue.

Cc: stable@vger.kernel.org
Reported-by: Sagi Grimberg <sagig@dev.mellanox.co.il>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Ming Lei <ming.lei@canonical.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
(cherry picked from commit 7bcd79ac50d9d83350a835bdb91c04ac9e098412)

Orabug: 27775588

Signed-off-by: Shan Hai <shan.hai@oracle.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agoFailing to send a CLOSE if file is opened WRONLY and server reboots on a 4.x mount
Olga Kornievskaia [Mon, 14 Sep 2015 23:54:36 +0000 (19:54 -0400)]
Failing to send a CLOSE if file is opened WRONLY and server reboots on a 4.x mount

A test case is as the description says:
open(foobar, O_WRONLY);
sleep()  --> reboot the server
close(foobar)

The bug is that in nfs4state.c, in nfs4_reclaim_open_state(), a few
lines before going to restart, there is
clear_bit(NFS4CLNT_RECLAIM_NOGRACE, &state->flags).

NFS4CLNT_RECLAIM_NOGRACE is a flag for the client states not open
owner states. The value of NFS4CLNT_RECLAIM_NOGRACE is 4, which is also the
value of NFS_O_WRONLY_STATE in nfs4_state->flags. So clearing it wipes
out that state, and when we go to close the file, “call_close” doesn't get
set because the state flag is not set, so the CLOSE doesn't go on the wire.
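
For illustration only (bit values taken from the description above, the
surrounding structures are simplified), a user-space sketch of how clearing
a client-state bit number on the wrong flags word wipes the WRONLY bit:

    #include <stdio.h>

    /* Both bit numbers happen to be 4, as described above. */
    #define NFS4CLNT_RECLAIM_NOGRACE 4 /* belongs to clp->cl_state */
    #define NFS_O_WRONLY_STATE       4 /* belongs to state->flags  */

    static void clear_bit(int nr, unsigned long *word)
    {
        *word &= ~(1UL << nr);
    }

    int main(void)
    {
        unsigned long open_state_flags = 1UL << NFS_O_WRONLY_STATE; /* open for write */

        /* The bug: a client-state bit number applied to the open-state word. */
        clear_bit(NFS4CLNT_RECLAIM_NOGRACE, &open_state_flags);

        printf("WRONLY still set? %s\n",
               (open_state_flags >> NFS_O_WRONLY_STATE) & 1 ? "yes" : "no");
        return 0;
    }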

Signed-off-by: Olga Kornievskaia <aglo@umich.edu>
Signed-off-by: Trond Myklebust <trond.myklebust@primarydata.com>
Orabug: 27848303

(cherry picked from commit a41cbe86df3afbc82311a1640e20858c0cd7e065)
Signed-off-by: Calum Mackay <calum.mackay@oracle.com>
Reviewed-by: Manjunath Patil <manjunath.b.patil@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agoext4: add validity checks for bitmap block numbers
Theodore Ts'o [Tue, 27 Mar 2018 03:54:10 +0000 (23:54 -0400)]
ext4: add validity checks for bitmap block numbers

A privileged attacker can cause a crash by mounting a crafted ext4
image which triggers an out-of-bounds read in the function
ext4_valid_block_bitmap() in fs/ext4/balloc.c.

This issue has been assigned CVE-2018-1093.

BugLink: https://bugzilla.kernel.org/show_bug.cgi?id=199181
BugLink: https://bugzilla.redhat.com/show_bug.cgi?id=1560782
Reported-by: Wen Xu <wen.xu@gatech.edu>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@vger.kernel.org
(cherry picked from commit 7dac4a1726a9c64a517d595c40e95e2d0d135f6f)

Orabug: 27854373
CVE: CVE-2018-1093

Signed-off-by: Tim Tianyang Chen <tianyang.chen@oracle.com>
Reviewed-by: Jack Vogel <jack.vogel@oracle.com>
conflict:
    fs/ext4/balloc.c
    fs/ext4/ialloc.c
    Commit 6a797d2737 is missing, so EFSCORRUPTED is not
    defined. Use EIO instead.

Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agoocfs2: Take inode cluster lock before moving reflinked inode from orphan dir
Ashish Samant [Fri, 2 Mar 2018 23:04:28 +0000 (15:04 -0800)]
ocfs2: Take inode cluster lock before moving reflinked inode from orphan dir

Orabug: 27869411

While reflinking an inode, we create a new inode in orphan directory, then
take EX lock on it, reflink the original inode to orphan inode and release
EX lock. Once the lock is released another node might request it in PR mode
which causes downconvert of the lock to PR mode.

Later we attempt to initialize security acl for the orphan inode and move
it to the reflink destination. However, while doing this we dont take EX
lock on the inode. So effectively, we are doing this and accessing the
journal for this inode while holding PR lock. While accessing the journal,
we make

ci->ci_last_trans = journal->j_trans_id

At this point, if there is another downconvert request on this inode from
another node (PR->NL), we will trip on the following condition in
ocfs2_ci_checkpointed()

BUG_ON(lockres->l_level != DLM_LOCK_EX && !checkpointed);

because we hold the lock in PR mode and journal->j_trans_id is not greater
than ci_last_trans for the inode.

Fix this by taking orphan inode cluster lock in EX mode before
initializing security and moving orphan inode to reflink destination.
Use the __tracker variant while taking inode lock to avoid recursive
locking in the ocfs2_init_security_and_acl() call chain.

Signed-off-by: Ashish Samant <ashish.samant@oracle.com>
Reviewed-by: Junxiao Bi <junxiao.bi@oracle.com>
Acked-by: Jun Piao <piaojun@huawei.com>
Reviewed-by: Joseph Qi <jiangqi903@gmail.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agoInput: gtco - fix potential out-of-bound access
Dmitry Torokhov [Mon, 23 Oct 2017 23:46:00 +0000 (16:46 -0700)]
Input: gtco - fix potential out-of-bound access

parse_hid_report_descriptor() has a while (i < length) loop, which
only guarantees that there's at least 1 byte in the buffer, but the
loop body can read multiple bytes which causes out-of-bounds access.
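
As a generic illustration (this is not the driver's actual parser), a
self-contained sketch of the usual fix pattern for this class of bug:
before consuming a multi-byte item, check that the whole item fits in the
remaining buffer:

    #include <stddef.h>
    #include <stdio.h>

    /* Toy descriptor format: one length byte followed by that many payload bytes. */
    static int parse_items(const unsigned char *data, size_t length)
    {
        size_t i = 0;

        while (i < length) {
            size_t item_len = 1 + data[i];   /* header byte plus payload */

            if (item_len > length - i)       /* item would run past the buffer */
                return -1;

            /* ... safely process data[i + 1] .. data[i + item_len - 1] ... */
            i += item_len;
        }
        return 0;
    }

    int main(void)
    {
        const unsigned char good[]      = { 2, 0xaa, 0xbb, 1, 0xcc };
        const unsigned char truncated[] = { 2, 0xaa, 0xbb, 3, 0xcc }; /* claims 3, has 1 */

        printf("good:      %d\n", parse_items(good, sizeof(good)));           /* 0  */
        printf("truncated: %d\n", parse_items(truncated, sizeof(truncated))); /* -1 */
        return 0;
    }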

Reported-by: Andrey Konovalov <andreyknvl@google.com>
Reviewed-by: Andrey Konovalov <andreyknvl@google.com>
Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
(cherry picked from commit a50829479f58416a013a4ccca791336af3c584c7)

Orabug: 27869844
CVE: CVE-2017-16643

Signed-off-by: Tim Tianyang Chen <tianyang.chen@oracle.com>
Reviewed-by: Brian Maly <brian.maly@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agoInput: ims-psu - check if CDC union descriptor is sane
Dmitry Torokhov [Sat, 7 Oct 2017 18:07:47 +0000 (11:07 -0700)]
Input: ims-psu - check if CDC union descriptor is sane

Before trying to use the CDC union descriptor, try to validate that it
is sane by checking that intf->altsetting->extra is big enough and that
descriptor bLength is not too big and not too small.

Reported-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
(cherry picked from commit ea04efee7635c9120d015dcdeeeb6988130cb67a)

Orabug: 27870333
CVE: CVE-2017-16645

Signed-off-by: Tim Tianyang Chen <tianyang.chen@oracle.com>
Reviewed-by: Jack Vogel <jack.vogel@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agovfio/pci: Virtualize Maximum Payload Size
Alex Williamson [Mon, 2 Oct 2017 18:39:09 +0000 (12:39 -0600)]
vfio/pci: Virtualize Maximum Payload Size

With virtual PCI-Express chipsets, we now see userspace/guest drivers
trying to match the physical MPS setting to a virtual downstream port.
Of course a lone physical device surrounded by virtual interconnects
cannot make a correct decision for a proper MPS setting.  Instead,
let's virtualize the MPS control register so that writes through to
hardware are disallowed.  Userspace drivers like QEMU assume they can
write anything to the device and we'll filter out anything dangerous.
Since mismatched MPS can lead to AER and other faults, let's add it
to the kernel side rather than relying on userspace virtualization to
handle it.

OraBug: 27876914

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Eric Auger <eric.auger@redhat.com>
(cherry picked from commit 523184972b282cd9ca17a76f6ca4742394856818)
Signed-off-by: Wim ten Have <wim.ten.have@oracle.com>
Reviewed-by: Ross Philipson <ross.philipson@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agovfio-pci: Virtualize PCIe & AF FLR
Alex Williamson [Mon, 26 Sep 2016 19:52:16 +0000 (13:52 -0600)]
vfio-pci: Virtualize PCIe & AF FLR

We use a BAR restore trick to try to detect when a user has performed
a device reset, possibly through FLR or other backdoors, to put things
back into a working state.  This is important for backdoor resets, but
we can actually just virtualize the "front door" resets provided via
PCIe and AF FLR.  Set these bits as virtualized + writable, allowing
the default write to set them in vconfig, then we can simply check the
bit, perform an FLR of our own, and clear the bit.  We don't actually
have the granularity in PCI to specify the type of reset we want to
do, but generally devices don't implement both PCIe and AF FLR and
we'll favor these over other types of reset, so we should generally
line up.  We do test whether the device provides the requested FLR type
to stay consistent with hardware capabilities though.

This seems to fix several instances of devices getting into bad states
with userspace drivers, like dpdk, running inside a VM.
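
Very roughly, the handling has this shape (an illustrative fragment
only, not the actual vfio-pci patch; 'vdevctl' and 'pdev' are assumed
local names): the FLR bit is accepted into the virtual Device Control
register, and vfio then performs the reset itself rather than letting
the write reach hardware.

  if (vdevctl & PCI_EXP_DEVCTL_BCR_FLR) {
          vdevctl &= ~PCI_EXP_DEVCTL_BCR_FLR;     /* clear the virtual bit */
          if (pcie_has_flr(pdev))                 /* only if the device has FLR */
                  pci_try_reset_function(pdev);   /* controlled reset path */
  }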

OraBug: 27876914

Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
Reviewed-by: Greg Rose <grose@lightfleet.com>
(cherry picked from commit ddf9dc0eb5314d6dac8b19b1cc37c739c6896e7e)
Signed-off-by: Wim ten Have <wim.ten.have@oracle.com>
Reviewed-by: Ross Philipson <ross.philipson@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agouek-rpm: Disable DMA CMA
Jianchao Wang [Thu, 19 Apr 2018 10:02:46 +0000 (18:02 +0800)]
uek-rpm: Disable DMA CMA

Change the following kernel config:
CONFIG_CMA_SIZE_MBYTES=16
to
CONFIG_CMA_SIZE_MBYTES=0

Orabug: 27892359

Reviewed-by: Thomas Tai <thomas.tai@oracle.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agonvme-pci: fix multiple ctrl removal scheduling
Rakesh Pandit [Thu, 19 Apr 2018 09:06:32 +0000 (17:06 +0800)]
nvme-pci: fix multiple ctrl removal scheduling

Commit c5f6ce97c1210 tries to address multiple resets but fails because
work_busy doesn't involve any synchronization and can give a stale
answer.  This is easily reproducible, as can be seen from the WARNING
below, which is triggered by the line:

WARN_ON(dev->ctrl.state == NVME_CTRL_RESETTING)

Allowing multiple resets can also result in multiple controller removals
if different conditions inside nvme_reset_work fail, which might
deadlock on device_release_driver.

[  480.327007] WARNING: CPU: 3 PID: 150 at drivers/nvme/host/pci.c:1900 nvme_reset_work+0x36c/0xec0
[  480.327008] Modules linked in: rfcomm fuse nf_conntrack_netbios_ns nf_conntrack_broadcast...
[  480.327044]  btusb videobuf2_core ghash_clmulni_intel snd_hwdep cfg80211 acer_wmi hci_uart..
[  480.327065] CPU: 3 PID: 150 Comm: kworker/u16:2 Not tainted 4.12.0-rc1+ #13
[  480.327065] Hardware name: Acer Predator G9-591/Mustang_SLS, BIOS V1.10 03/03/2016
[  480.327066] Workqueue: nvme nvme_reset_work
[  480.327067] task: ffff880498ad8000 task.stack: ffffc90002218000
[  480.327068] RIP: 0010:nvme_reset_work+0x36c/0xec0
[  480.327069] RSP: 0018:ffffc9000221bdb8 EFLAGS: 00010246
[  480.327070] RAX: 0000000000460000 RBX: ffff880498a98128 RCX: dead000000000200
[  480.327070] RDX: 0000000000000001 RSI: ffff8804b1028020 RDI: ffff880498a98128
[  480.327071] RBP: ffffc9000221be50 R08: 0000000000000000 R09: 0000000000000000
[  480.327071] R10: ffffc90001963ce8 R11: 000000000000020d R12: ffff880498a98000
[  480.327072] R13: ffff880498a53500 R14: ffff880498a98130 R15: ffff880498a98128
[  480.327072] FS:  0000000000000000(0000) GS:ffff8804c1cc0000(0000) knlGS:0000000000000000
[  480.327073] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  480.327074] CR2: 00007ffcf3c37f78 CR3: 0000000001e09000 CR4: 00000000003406e0
[  480.327074] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[  480.327075] DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
[  480.327075] Call Trace:
[  480.327079]  ? __switch_to+0x227/0x400
[  480.327081]  process_one_work+0x18c/0x3a0
[  480.327082]  worker_thread+0x4e/0x3b0
[  480.327084]  kthread+0x109/0x140
[  480.327085]  ? process_one_work+0x3a0/0x3a0
[  480.327087]  ? kthread_park+0x60/0x60
[  480.327102]  ret_from_fork+0x2c/0x40
[  480.327103] Code: e8 5a dc ff ff 85 c0 41 89 c1 0f.....

This patch addresses the problem by using the controller state to
decide whether a reset should be queued or not, as state changes are
synchronized using the controller spinlock.  Also, cancel_work_sync is
used to make sure remove cancels the reset_work and waits for it to
finish.  This patch also changes the return value from -ENODEV to the
more appropriate -EBUSY if nvme_reset fails to change state.
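
A self-contained sketch of the synchronization pattern (illustrative
names, a mutex standing in for the controller spinlock, and the work
queueing elided): the state transition is the only gate for starting a
reset, so a second caller gets -EBUSY instead of queueing a duplicate.

  #include <stdio.h>
  #include <pthread.h>

  enum ctrl_state { CTRL_NEW, CTRL_LIVE, CTRL_RESETTING, CTRL_DELETING };

  struct ctrl {
          enum ctrl_state state;
          pthread_mutex_t lock;
  };

  /* Try to move to RESETTING; only one caller can win, so at most one
   * reset is ever outstanding. */
  static int ctrl_change_to_resetting(struct ctrl *c)
  {
          int ok = 0;

          pthread_mutex_lock(&c->lock);
          if (c->state == CTRL_NEW || c->state == CTRL_LIVE) {
                  c->state = CTRL_RESETTING;
                  ok = 1;
          }
          pthread_mutex_unlock(&c->lock);
          return ok;
  }

  static int ctrl_reset(struct ctrl *c)
  {
          if (!ctrl_change_to_resetting(c))
                  return -16;             /* -EBUSY: reset already in flight */
          /* queue_work(reset_work) would go here */
          return 0;
  }

  int main(void)
  {
          struct ctrl c = { .state = CTRL_LIVE,
                            .lock  = PTHREAD_MUTEX_INITIALIZER };

          printf("first reset:  %d\n", ctrl_reset(&c));    /* 0 */
          printf("second reset: %d\n", ctrl_reset(&c));    /* -16 */
          return 0;
  }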

Fixes: c5f6ce97c1210 ("nvme: don't schedule multiple resets")
Signed-off-by: Rakesh Pandit <rakesh@tuxera.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Signed-off-by: Christoph Hellwig <hch@lst.de>
(cherry picked from commit 82b057caefaff2a891f821a617d939f46e03e844)
Orabug: 27892359

Reviewed-by: Thomas Tai <thomas.tai@oracle.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agonvme-pci: Fix nvme queue cleanup if IRQ setup fails
Jianchao Wang [Thu, 15 Feb 2018 11:13:41 +0000 (19:13 +0800)]
nvme-pci: Fix nvme queue cleanup if IRQ setup fails

This patch fixes nvme queue cleanup if requesting an IRQ handler for
the queue's vector fails. It does this by resetting the cq_vector to
the uninitialized value of -1 so it is ignored for a controller reset.

Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
[changelog updates, removed misc whitespace changes]
Signed-off-by: Keith Busch <keith.busch@intel.com>
(cherry picked from commit f25a2dfc20e3a3ed8fe6618c331799dd7bd01190)
Orabug: 27892359

Reviewed-by: Thomas Tai <thomas.tai@oracle.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agonvme/pci: Fix stuck nvme reset
Keith Busch [Tue, 27 Jun 2017 23:44:05 +0000 (17:44 -0600)]
nvme/pci: Fix stuck nvme reset

The controller state is set to resetting prior to disabling the
controller, so this patch accounts for that state when deciding if it
needs to freeze the queues. Without this, an 'nvme reset /dev/nvme0'
blocks forever because the queues were never frozen.

Fixes: 82b057caefaf ("nvme-pci: fix multiple ctrl removal scheduling")
Signed-off-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@kernel.dk>
(cherry picked from commit ebef7368571d88f0f80b817e6898075c62265b4e)
Orabug: 27892359

Reviewed-by: Thomas Tai <thomas.tai@oracle.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agonvme: don't schedule multiple resets
Keith Busch [Wed, 5 Oct 2016 20:32:45 +0000 (16:32 -0400)]
nvme: don't schedule multiple resets

The queue_work only fails if the work is pending, but not yet running. If
the work is running, the work item would get requeued, triggering a
double reset. If the first reset fails for any reason, the second
reset triggers:

WARN_ON(dev->ctrl.state == NVME_CTRL_RESETTING)

Hitting that schedules controller deletion for a second time, which
potentially takes a reference on the device that is being deleted.
If the reset occurs at the same time as a hot removal event, this causes
a double-free.

This patch has the reset helper function check if the work is busy
prior to queueing, and changes all places that schedule resets to use
this function. Since most users don't want to sync with that work, the
"flush_work" is moved to the only caller that wants to sync.

Signed-off-by: Keith Busch <keith.busch@intel.com>
Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
(cherry picked from commit c5f6ce97c12104668784ee17fb927c52a944d3d8)
Orabug: 27892359

Reviewed-by: Thomas Tai <thomas.tai@oracle.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agoblk-mq: fix use-after-free in blk_mq_free_tag_set()
Junichi Nomura [Wed, 14 Oct 2015 05:02:15 +0000 (05:02 +0000)]
blk-mq: fix use-after-free in blk_mq_free_tag_set()

tags is freed in blk_mq_free_rq_map() and should not be used after that.
The problem doesn't manifest if CONFIG_CPUMASK_OFFSTACK is false because
free_cpumask_var() is a nop.

tags->cpumask is allocated in blk_mq_init_tags(), so it's natural to
free the cpumask in its counterpart, blk_mq_free_tags().
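
A tiny standalone illustration of the ownership rule being restored
(hypothetical names): whatever the init function allocates, its
counterpart free function releases, so callers never touch
tags->cpumask after the tags structure itself is gone.

  #include <stdlib.h>

  struct tags {
          unsigned long *cpumask;         /* allocated together with tags */
  };

  static struct tags *tags_init(void)
  {
          struct tags *t = calloc(1, sizeof(*t));

          if (!t)
                  return NULL;
          t->cpumask = calloc(1, sizeof(unsigned long));
          if (!t->cpumask) {
                  free(t);
                  return NULL;
          }
          return t;
  }

  /* Counterpart of tags_init(): frees everything it allocated.  Callers
   * must not free t->cpumask themselves afterwards. */
  static void tags_free(struct tags *t)
  {
          free(t->cpumask);
          free(t);
  }

  int main(void)
  {
          struct tags *t = tags_init();

          if (t)
                  tags_free(t);
          return 0;
  }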

Fixes: f26cdc8536ad ("blk-mq: Shared tag enhancements")
Signed-off-by: Jun'ichi Nomura <j-nomura@ce.jp.nec.com>
Cc: Keith Busch <keith.busch@intel.com>
Reviewed-by: Jeff Moyer <jmoyer@redhat.com>
Signed-off-by: Jens Axboe <axboe@fb.com>
(cherry picked from commit f42d79ab67322e51b92dd7aa965e310c71352a64)
Orabug: 27892359

Reviewed-by: Thomas Tai <thomas.tai@oracle.com>
Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Jianchao Wang <jianchao.w.wang@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agoUSB: core: prevent malicious bNumInterfaces overflow
Alan Stern [Tue, 12 Dec 2017 19:25:13 +0000 (14:25 -0500)]
USB: core: prevent malicious bNumInterfaces overflow

commit 48a4ff1c7bb5a32d2e396b03132d20d552c0eca7 upstream.

A malicious USB device with crafted descriptors can cause the kernel
to access unallocated memory by setting the bNumInterfaces value too
high in a configuration descriptor.  Although the value is adjusted
during parsing, this adjustment is skipped in one of the error return
paths.

This patch prevents the problem by setting bNumInterfaces to 0
initially.  The existing code already sets it to the proper value
after parsing is complete.
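
Sketched as a small standalone parser (names are illustrative, not the
USB core code): the count starts at zero and is only raised once the
entries have actually been validated, so an early error return can
never leave a stale, attacker-supplied value behind.

  #include <stdio.h>

  struct config {
          int num_interfaces;
  };

  /* 'claimed' comes from the untrusted descriptor; 'available' is how
   * many interface structures were really parsed. */
  static int parse_config(struct config *cfg, int claimed, int available)
  {
          cfg->num_interfaces = 0;                /* safe default */

          if (claimed <= 0)
                  return -1;                      /* error path: count stays 0 */

          /* ... parse and validate interfaces ... */

          cfg->num_interfaces = claimed < available ? claimed : available;
          return 0;
  }

  int main(void)
  {
          struct config cfg;

          parse_config(&cfg, 255, 2);             /* bogus huge claim is clamped */
          printf("interfaces: %d\n", cfg.num_interfaces);
          return 0;
  }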

Orabug: 27895909

Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Reported-by: Andrey Konovalov <andreyknvl@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 4c5ae6a301a5415d1334f6c655bebf91d475bd89)
Signed-off-by: Srinivas Eeda <srinivas.eeda@oracle.com>
Reviewed-by: junxiao.bi@oracle.com
Reviewed-by: jack.schwartz@oracle.com
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agodriver core: platform: fix race condition with driver_override
Adrian Salido [Tue, 25 Apr 2017 23:55:26 +0000 (16:55 -0700)]
driver core: platform: fix race condition with driver_override

The driver_override implementation is susceptible to a race condition
when different threads are reading vs storing a different driver
override.  Add locking to avoid the race condition.
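
The race is the classic read-vs-replace pattern on a heap-allocated
string.  A minimal standalone sketch of the fix, with a plain mutex
standing in for the device lock taken by the real patch:

  #include <pthread.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  static pthread_mutex_t override_lock = PTHREAD_MUTEX_INITIALIZER;
  static char *driver_override;            /* may be replaced at any time */

  /* store: swap in the new string and free the old one under the lock */
  static void override_store(const char *new_value)
  {
          char *new_copy = strdup(new_value);
          char *old;

          pthread_mutex_lock(&override_lock);
          old = driver_override;
          driver_override = new_copy;
          pthread_mutex_unlock(&override_lock);

          free(old);                        /* no reader can still hold it */
  }

  /* show: copy the current value out while holding the same lock, so the
   * string cannot be freed underneath us */
  static void override_show(char *buf, size_t len)
  {
          pthread_mutex_lock(&override_lock);
          snprintf(buf, len, "%s", driver_override ? driver_override : "");
          pthread_mutex_unlock(&override_lock);
  }

  int main(void)
  {
          char buf[64];

          override_store("vfio-pci");
          override_show(buf, sizeof(buf));
          printf("%s\n", buf);
          return 0;
  }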

Fixes: 3d713e0e382e ("driver core: platform: add device binding path 'driver_override'")
Cc: stable@vger.kernel.org
Signed-off-by: Adrian Salido <salidoa@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 6265539776a0810b7ce6398c27866ddb9c6bd154)

Orabug: 27897874
CVE: CVE-2017-12146

Signed-off-by: Tim Tianyang Chen <tianyang.chen@oracle.com>
Reviewed-by: Jack Vogel <jack.vogel@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agousb/core: usb_alloc_dev(): fix setting of ->portnum
Nicolai Stange [Thu, 17 Mar 2016 22:53:02 +0000 (23:53 +0100)]
usb/core: usb_alloc_dev(): fix setting of ->portnum

With commit 69bec7259853 ("USB: core: let USB device know device node"),
the port1 argument of usb_alloc_dev() gets overwritten as follows:

  ... usb_alloc_dev(..., unsigned port1)
  {
    ...
    if (!parent->parent) {
      port1 = usb_hcd_find_raw_port_number(..., port1);
    }
    ...
  }

Later on, this now overwritten port1 gets assigned to ->portnum:

  dev->portnum = port1;

However, since xhci_find_raw_port_number() isn't idempotent, the
aforementioned commit causes a number of KASAN splats like the following:

  BUG: KASAN: slab-out-of-bounds in xhci_find_raw_port_number+0x98/0x170
                                       at addr ffff8801d9311670
  Read of size 8 by task kworker/2:1/87
  [...]
  Workqueue: usb_hub_wq hub_event
   0000000000000188 000000005814b877 ffff8800cba17588 ffffffff8191447e
   0000000041b58ab3 ffffffff82a03209 ffffffff819143a2 ffffffff82a252f4
   ffff8801d93115e0 0000000000000188 ffff8801d9311628 ffff8800cba17588
  Call Trace:
   [<ffffffff8191447e>] dump_stack+0xdc/0x15e
   [<ffffffff819143a2>] ? _atomic_dec_and_lock+0xa2/0xa2
   [<ffffffff814e2cd1>] ? print_section+0x61/0xb0
   [<ffffffff814e4939>] print_trailer+0x179/0x2c0
   [<ffffffff814f0d84>] object_err+0x34/0x40
   [<ffffffff814f4388>] kasan_report_error+0x2f8/0x8b0
   [<ffffffff814eb91e>] ? __slab_alloc+0x5e/0x90
   [<ffffffff812178c0>] ? __lock_is_held+0x90/0x130
   [<ffffffff814f5091>] kasan_report+0x71/0xa0
   [<ffffffff814ec082>] ? kmem_cache_alloc_trace+0x212/0x560
   [<ffffffff81d99468>] ? xhci_find_raw_port_number+0x98/0x170
   [<ffffffff814f33d4>] __asan_load8+0x64/0x70
   [<ffffffff81d99468>] xhci_find_raw_port_number+0x98/0x170
   [<ffffffff81db0105>] xhci_setup_addressable_virt_dev+0x235/0xa10
   [<ffffffff81d9ea51>] xhci_setup_device+0x3c1/0x1430
   [<ffffffff8121cddd>] ? trace_hardirqs_on+0xd/0x10
   [<ffffffff81d9fac0>] ? xhci_setup_device+0x1430/0x1430
   [<ffffffff81d9fad3>] xhci_address_device+0x13/0x20
   [<ffffffff81d2081a>] hub_port_init+0x55a/0x1550
   [<ffffffff81d28705>] hub_event+0xef5/0x24d0
   [<ffffffff81d27810>] ? hub_port_debounce+0x2f0/0x2f0
   [<ffffffff8195e1ee>] ? debug_object_deactivate+0x1be/0x270
   [<ffffffff81210203>] ? print_rt_rq+0x53/0x2d0
   [<ffffffff8121657d>] ? trace_hardirqs_off+0xd/0x10
   [<ffffffff8226acfb>] ? _raw_spin_unlock_irqrestore+0x5b/0x60
   [<ffffffff81250000>] ? irq_domain_set_hwirq_and_chip+0x30/0xb0
   [<ffffffff81256339>] ? debug_lockdep_rcu_enabled+0x39/0x40
   [<ffffffff812178c0>] ? __lock_is_held+0x90/0x130
   [<ffffffff81196877>] process_one_work+0x567/0xec0
  [...]

Afterwards, xhci reports some functional errors:

  xhci_hcd 0000:00:14.0: ERROR: unexpected setup address command completion
                                code 0x11.
  xhci_hcd 0000:00:14.0: ERROR: unexpected setup address command completion
                                code 0x11.
  usb 4-3: device not accepting address 2, error -22

Fix this by not overwriting the port1 argument in usb_alloc_dev(), but
storing the raw port number as required by OF in an additional variable,
raw_port.
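
In other words, the fix has roughly this shape (a simplified sketch in
the style of the snippet above, not the exact diff):

  ... usb_alloc_dev(..., unsigned port1)
  {
    unsigned raw_port = port1;
    ...
    if (!parent->parent) {
      raw_port = usb_hcd_find_raw_port_number(..., port1);
    }
    ...
    dev->portnum = port1;   /* the original argument, now left intact */
    /* raw_port is only used for the OF device-node lookup */
    ...
  }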

Orabug: 27908746

Fixes: 69bec7259853 ("USB: core: let USB device know device node")
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Acked-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
(cherry picked from commit 7222c832254a75dcd67d683df75753d4a4e125bb)
Signed-off-by: Todd Vierling <todd.vierling@oracle.com>
Reviewed-by: Jack Vogel <jack.vogel@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
7 years agoctf: drop the run-as-root error
Nick Alcock [Thu, 12 Apr 2018 13:11:53 +0000 (14:11 +0100)]
ctf: drop the run-as-root error

Best practices continue to suggest running dwarf2ctf with nonroot
privileges, but established practice depends on being able to run
as root.

Orabug: 27852654
Signed-off-by: Nick Alcock <nick.alcock@oracle.com>
Reviewed-by: Tomas Jedlicka <tomas.jedlicka@oracle.com>
7 years agords: Node crashes when trace buffer is opened
Ka-Cheong Poon [Mon, 9 Apr 2018 16:38:48 +0000 (09:38 -0700)]
rds: Node crashes when trace buffer is opened

The problem is that trace_printk() cannot handle a format string like
%p* which prints out the contents referenced by a pointer.  It stores
only the pointer value.  When the trace file is opened later, that
pointer is de-referenced and causes problems.  To use ftrace with such
a format string, __trace_printk() needs to be used; it stores the
formatted string in the trace buffer instead.
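
For illustration (the connection field name is made up), the difference
is roughly:

  /* Problematic: with a constant format, trace_printk() records only the
   * pointer argument; the %p extension is expanded when the trace file is
   * read later, dereferencing memory that may be long gone. */
  trace_printk("rds conn to %pI4 dropped\n", &conn->faddr);

  /* Safe: __trace_printk() expands the format immediately and stores the
   * resulting string in the ring buffer. */
  __trace_printk(_THIS_IP_, "rds conn to %pI4 dropped\n", &conn->faddr);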

Orabug: 27846191

Signed-off-by: Ka-Cheong Poon <ka-cheong.poon@oracle.com>
Reviewed-by: Håkon Bugge <haakon.bugge@oracle.com>
Reviewed-by: Yuval Shaia <yuval.shaia@oracle.com>
Reviewed-by: Tom Hromatka <tom.hromatka@oracle.com>
7 years agoxfs: fix accidental reversion of aa6a6227435cb
Darrick J. Wong [Thu, 12 Apr 2018 05:13:34 +0000 (22:13 -0700)]
xfs: fix accidental reversion of aa6a6227435cb

In commit id 7ea004358b97c ("xfs: don't leave EFIs on AIL on mount
failure") I accidentally reverted aa6a6227435cb06c ("xfs: toggle
readonly state around xfs_log_mount_finish"), which caused a regression
in generic/417.  Put back the code that enables iunlink processing on a
ro mount.

Orabug: 27845869

Signed-off-by: Darrick J. Wong <darrick.wong@oracle.com>
Reviewed-by: Jack Vogel <jack.vogel@oracle.com>
7 years agonet: cdc_ether: fix divide by 0 on bad descriptors
Bjørn Mork [Mon, 6 Nov 2017 14:37:22 +0000 (15:37 +0100)]
net: cdc_ether: fix divide by 0 on bad descriptors

Orabug: 27841392
CVE: CVE-2017-16649

Setting dev->hard_mtu to 0 will cause a divide error in
usbnet_probe. Protect against devices with bogus CDC Ethernet
functional descriptors by ignoring a zero wMaxSegmentSize.
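
The guard amounts to something like this (a sketch, not the exact
upstream hunk):

  /* Only trust wMaxSegmentSize when it is non-zero; a zero hard_mtu
   * would later show up as a divisor in usbnet and oops the kernel. */
  if (info->ether->wMaxSegmentSize)
          dev->hard_mtu = le16_to_cpu(info->ether->wMaxSegmentSize);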

Signed-off-by: Bjørn Mork <bjorn@mork.no>
Acked-by: Oliver Neukum <oneukum@suse.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
Conflicts:
drivers/net/usb/cdc_ether.c
whitespace correction

Reviewed-by: Jack Vogel <jack.vogel@oracle.com>
7 years agosysctl: Drop reference added by grab_header in proc_sys_readdir
Zhou Chengming [Fri, 6 Jan 2017 01:32:32 +0000 (09:32 +0800)]
sysctl: Drop reference added by grab_header in proc_sys_readdir

Fixes CVE-2016-9191: proc_sys_readdir doesn't drop the reference
added by grab_header when returning from the !dir_emit_dots path.
This can cause any path that calls unregister_sysctl_table to
wait forever.

The calltrace of CVE-2016-9191:

[ 5535.960522] Call Trace:
[ 5535.963265]  [<ffffffff817cdaaf>] schedule+0x3f/0xa0
[ 5535.968817]  [<ffffffff817d33fb>] schedule_timeout+0x3db/0x6f0
[ 5535.975346]  [<ffffffff817cf055>] ? wait_for_completion+0x45/0x130
[ 5535.982256]  [<ffffffff817cf0d3>] wait_for_completion+0xc3/0x130
[ 5535.988972]  [<ffffffff810d1fd0>] ? wake_up_q+0x80/0x80
[ 5535.994804]  [<ffffffff8130de64>] drop_sysctl_table+0xc4/0xe0
[ 5536.001227]  [<ffffffff8130de17>] drop_sysctl_table+0x77/0xe0
[ 5536.007648]  [<ffffffff8130decd>] unregister_sysctl_table+0x4d/0xa0
[ 5536.014654]  [<ffffffff8130deff>] unregister_sysctl_table+0x7f/0xa0
[ 5536.021657]  [<ffffffff810f57f5>] unregister_sched_domain_sysctl+0x15/0x40
[ 5536.029344]  [<ffffffff810d7704>] partition_sched_domains+0x44/0x450
[ 5536.036447]  [<ffffffff817d0761>] ? __mutex_unlock_slowpath+0x111/0x1f0
[ 5536.043844]  [<ffffffff81167684>] rebuild_sched_domains_locked+0x64/0xb0
[ 5536.051336]  [<ffffffff8116789d>] update_flag+0x11d/0x210
[ 5536.057373]  [<ffffffff817cf61f>] ? mutex_lock_nested+0x2df/0x450
[ 5536.064186]  [<ffffffff81167acb>] ? cpuset_css_offline+0x1b/0x60
[ 5536.070899]  [<ffffffff810fce3d>] ? trace_hardirqs_on+0xd/0x10
[ 5536.077420]  [<ffffffff817cf61f>] ? mutex_lock_nested+0x2df/0x450
[ 5536.084234]  [<ffffffff8115a9f5>] ? css_killed_work_fn+0x25/0x220
[ 5536.091049]  [<ffffffff81167ae5>] cpuset_css_offline+0x35/0x60
[ 5536.097571]  [<ffffffff8115aa2c>] css_killed_work_fn+0x5c/0x220
[ 5536.104207]  [<ffffffff810bc83f>] process_one_work+0x1df/0x710
[ 5536.110736]  [<ffffffff810bc7c0>] ? process_one_work+0x160/0x710
[ 5536.117461]  [<ffffffff810bce9b>] worker_thread+0x12b/0x4a0
[ 5536.123697]  [<ffffffff810bcd70>] ? process_one_work+0x710/0x710
[ 5536.130426]  [<ffffffff810c3f7e>] kthread+0xfe/0x120
[ 5536.135991]  [<ffffffff817d4baf>] ret_from_fork+0x1f/0x40
[ 5536.142041]  [<ffffffff810c3e80>] ? kthread_create_on_node+0x230/0x230

One cgroup maintainer mentioned that "cgroup is trying to offline
a cpuset css, which takes place under cgroup_mutex.  The offlining
ends up trying to drain active usages of a sysctl table which apparently
is not happening."
The real reason is that proc_sys_readdir doesn't drop the reference
added by grab_header when returning from the !dir_emit_dots path, so
this cpuset offline path will wait here forever.

See here for details: http://www.openwall.com/lists/oss-security/2016/11/04/13
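
A standalone sketch of the refcount discipline being restored here
(hypothetical helpers named after the real ones): every return path,
including the early "no dots emitted" exit, drops the reference taken
at the top.

  #include <stdio.h>

  struct header { int refcount; };

  static void grab_header(struct header *h) { h->refcount++; }
  static void head_finish(struct header *h) { h->refcount--; }

  /* Mirrors the shape of proc_sys_readdir(): take a reference up front
   * and make sure even the early exit releases it. */
  static int readdir_like(struct header *h, int emit_dots_ok)
  {
          int ret = 0;

          grab_header(h);

          if (!emit_dots_ok)
                  goto out;       /* previously a bare return leaked 'h' */

          /* ... iterate over the entries ... */

  out:
          head_finish(h);
          return ret;
  }

  int main(void)
  {
          struct header h = { 0 };

          readdir_like(&h, 0);
          printf("refcount after early exit: %d\n", h.refcount);  /* 0 */
          return 0;
  }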

Fixes: f0c3b5093add ("[readdir] convert procfs")
Cc: stable@vger.kernel.org
Reported-by: CAI Qian <caiqian@redhat.com>
Tested-by: Yang Shukui <yangshukui@huawei.com>
Signed-off-by: Zhou Chengming <zhouchengming1@huawei.com>
Acked-by: Al Viro <viro@ZenIV.linux.org.uk>
Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>
(cherry picked from commit 93362fa47fe98b62e4a34ab408c4a418432e7939)

Orabug: 27841944
CVE: CVE-2016-9191
Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
Reviewed-by: John Haxby <john.haxby@oracle.com>
7 years agoRevert "sysctl: Drop reference added by grab_header in proc_sys_readdir"
Jack Vogel [Wed, 11 Apr 2018 04:35:49 +0000 (21:35 -0700)]
Revert "sysctl: Drop reference added by grab_header in proc_sys_readdir"

This reverts commit ae3ad84cc98a31bc65525115e61bd9f0c2334cc1. The commit
has a bogus CVE number which needs to be corrected. Simply changing the
metadata here.

Signed-off-by: Jack Vogel <jack.vogel@oracle.com>