Chuck Anderson [Mon, 31 Oct 2016 22:52:23 +0000 (15:52 -0700)]
Merge branch topic/uek-4.1/upstream-cherry-picks of git://ca-git.us.oracle.com/linux-uek into uek/uek-4.1
* topic/uek-4.1/upstream-cherry-picks:
sched: panic on corrupted stack end
ecryptfs: forbid opening files without mmap handler
proc: prevent stacking filesystems on top
Until now, hitting this BUG_ON caused a recursive oops (because oops
handling involves do_exit(), which calls into the scheduler, which in
turn raises an oops), which caused stuff below the stack to be
overwritten until a panic happened (e.g. via an oops in interrupt
context, caused by the overwritten CPU index in the thread_info).
Just panic directly.
Signed-off-by: Jann Horn <jannh@google.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 29d6455178a09e1dc340380c582b13356227e8df) Signed-off-by: Brian Maly <brian.maly@oracle.com>
Conflicts:
kernel/sched/core.c
This prevents users from triggering a stack overflow through a recursive
invocation of pagefault handling that involves mapping procfs files into
virtual memory.
Signed-off-by: Jann Horn <jannh@google.com> Acked-by: Tyler Hicks <tyhicks@canonical.com> Cc: stable@vger.kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 2f36db71009304b3f0b95afacd8eba1f9f046b87) Signed-off-by: Brian Maly <brian.maly@oracle.com>
This prevents stacking filesystems (ecryptfs and overlayfs) from using
procfs as lower filesystem. There is too much magic going on inside
procfs, and there is no good reason to stack stuff on top of procfs.
(For example, procfs does access checks in VFS open handlers, and
ecryptfs by design calls open handlers from a kernel thread that doesn't
drop privileges or so.)
Signed-off-by: Jann Horn <jannh@google.com> Cc: stable@vger.kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit e54ad7f1ee263ffa5a2de9c609d58dfa27b21cd9) Signed-off-by: Brian Maly <brian.maly@oracle.com>
Chuck Anderson [Mon, 31 Oct 2016 10:48:30 +0000 (03:48 -0700)]
Merge branch topic/uek-4.1/upstream-cherry-picks of git://ca-git.us.oracle.com/linux-uek into uek/uek-4.1
* topic/uek-4.1/upstream-cherry-picks:
btrfs: Handle unaligned length in extent_same
panic, x86: Fix re-entrance problem due to panic on NMI
kernel/watchdog.c: perform all-CPU backtrace in case of hard lockup
Fix compilation error introduced by "cancel the setfilesize transation when io error happen"
cancel the setfilesize transation when io error happen
mm/hugetlb: optimize minimum size (min_size) accounting
Btrfs: fix device replace of a missing RAID 5/6 device
Btrfs: add RAID 5/6 BTRFS_RBIO_REBUILD_MISSING operation
bpf: fix double-fdput in replace_map_fd_with_map_ptr()
Mark Fasheh [Mon, 8 Jun 2015 22:05:25 +0000 (15:05 -0700)]
btrfs: Handle unaligned length in extent_same
The extent-same code rejects requests with an unaligned length. This
poses a problem when we want to dedupe the tail extent of files as we
skip cloning the portion between i_size and the extent boundary.
If we don't clone the entire extent, it won't be deleted. So the
combination of these behaviors winds up giving us worst-case dedupe on
many files.
We can fix this by allowing a length that extends to i_size and
internally aligning it to the end of the block. This is what
btrfs_ioctl_clone() does, so we can just copy that check over.
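A minimal sketch of the check described above (names are illustrative, not the actual btrfs source): an unaligned dedupe length is accepted only when it reaches i_size, and is then rounded up to the block boundary internally.

```c
#include <stdint.h>

/* Hypothetical helper modelling the extent-same length check: returns
 * the internally used length, or 0 if the request must be rejected. */
uint64_t dedupe_effective_len(uint64_t off, uint64_t len,
                              uint64_t isize, uint64_t blocksize)
{
    if (len % blocksize == 0)
        return len;                       /* already block-aligned: accept */
    if (off + len == isize)               /* tail extent: align internally */
        return (len + blocksize - 1) / blocksize * blocksize;
    return 0;                             /* unaligned mid-file request: reject */
}
```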
Signed-off-by: Mark Fasheh <mfasheh@suse.de> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit e1d227a42ea2b4664f94212bd1106b9a3413ffb8) Signed-off-by: Divya Indi <divya.indi@oracle.com>
Orabug: 24696342
Hidehiro Kawai [Mon, 14 Dec 2015 10:19:09 +0000 (11:19 +0100)]
panic, x86: Fix re-entrance problem due to panic on NMI
If panic on NMI happens just after panic() on the same CPU, panic() is
recursively called. Kernel stalls, as a result, after failing to acquire
panic_lock.
To avoid this problem, don't call panic() in NMI context if we've
already entered panic().
For that, introduce nmi_panic() macro to reduce code duplication. In
the case of panic on NMI, don't return from NMI handlers if another CPU
already panicked.
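An illustrative model (not the kernel source) of the nmi_panic() idea: the first CPU to panic records itself with an atomic compare-and-exchange; any later panic attempt, e.g. from an NMI on the same or another CPU, loses the race and must not re-enter panic().

```c
#include <stdatomic.h>

#define PANIC_CPU_INVALID (-1)

atomic_int panic_cpu = PANIC_CPU_INVALID;

/* Returns 1 if the caller won the race and may call panic(), else 0. */
int try_enter_panic(int this_cpu)
{
    int expected = PANIC_CPU_INVALID;
    return atomic_compare_exchange_strong(&panic_cpu, &expected, this_cpu);
}
```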
Signed-off-by: Hidehiro Kawai <hidehiro.kawai.ez@hitachi.com> Acked-by: Michal Hocko <mhocko@suse.com> Cc: Aaron Tomlin <atomlin@redhat.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: Andy Lutomirski <luto@kernel.org> Cc: Baoquan He <bhe@redhat.com> Cc: Chris Metcalf <cmetcalf@ezchip.com> Cc: David Hildenbrand <dahi@linux.vnet.ibm.com> Cc: Don Zickus <dzickus@redhat.com> Cc: "Eric W. Biederman" <ebiederm@xmission.com> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Gobinda Charan Maji <gobinda.cemk07@gmail.com> Cc: HATAYAMA Daisuke <d.hatayama@jp.fujitsu.com> Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Ingo Molnar <mingo@kernel.org> Cc: Javi Merino <javi.merino@arm.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: kexec@lists.infradead.org Cc: linux-doc@vger.kernel.org Cc: lkml <linux-kernel@vger.kernel.org> Cc: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Cc: Michal Nazarewicz <mina86@mina86.com> Cc: Nicolas Iooss <nicolas.iooss_linux@m4x.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Prarit Bhargava <prarit@redhat.com> Cc: Rasmus Villemoes <linux@rasmusvillemoes.dk> Cc: Rusty Russell <rusty@rustcorp.com.au> Cc: Seth Jennings <sjenning@redhat.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Ulrich Obergfell <uobergfe@redhat.com> Cc: Vitaly Kuznetsov <vkuznets@redhat.com> Cc: Vivek Goyal <vgoyal@redhat.com> Link: http://lkml.kernel.org/r/20151210014626.25437.13302.stgit@softrs
[ Cleanup comments, fixup formatting. ] Signed-off-by: Borislav Petkov <bp@suse.de> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
(cherry picked from commit 1717f2096b543cede7a380c858c765c41936bc35)
Jiri Kosina [Fri, 6 Nov 2015 02:44:41 +0000 (18:44 -0800)]
kernel/watchdog.c: perform all-CPU backtrace in case of hard lockup
In many cases of hardlockup reports, it's actually not possible to know
why it triggered, because the CPU that got stuck is usually waiting on a
resource (with IRQs disabled) that some other CPU is holding.
IOW, we are often looking at the stacktrace of the victim and not the
actual offender.
Introduce a sysctl / cmdline parameter that makes it possible to have the
hardlockup detector perform an all-CPU backtrace.
Signed-off-by: Jiri Kosina <jkosina@suse.cz> Reviewed-by: Aaron Tomlin <atomlin@redhat.com> Cc: Ulrich Obergfell <uobergfe@redhat.com> Acked-by: Don Zickus <dzickus@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 55537871ef666b4153fd1ef8782e4a13fee142cc)
Zhaohongjiang [Mon, 12 Oct 2015 04:28:39 +0000 (15:28 +1100)]
cancel the setfilesize transation when io error happen
When I ran xfstests case 073, the remount process was blocked waiting
for transactions to drop to zero. I found that an I/O error had happened
and the setfilesize transaction was not released properly. We should
cancel the setfilesize transaction when the I/O error happens in this case.
Reproduction steps:
1. dd if=/dev/zero of=xfs1.img bs=1M count=2048
2. mkfs.xfs xfs1.img
3. losetup /dev/loop0 ./xfs1.img
4. mount -t xfs /dev/loop0 /home/test_dir/
5. mkdir /home/test_dir/test
6. mkfs.xfs -dfile,name=image,size=2g
7. mount -t xfs -o loop image /home/test_dir/test
8. cp a file bigger than 2g to /home/test_dir/test
9. mount -t xfs -o remount,ro /home/test_dir/test
[ dchinner: moved io error detection to xfs_setfilesize_ioend() after
transaction context restoration. ]
It was observed that minimum size accounting associated with the
hugetlbfs min_size mount option may not perform optimally and as
expected. As huge pages/reservations are released from the filesystem
and given back to the global pools, they are reserved for subsequent
filesystem use as long as the subpool reserved count is less than
subpool minimum size. It does not take into account used pages within
the filesystem. The filesystem size limits are not exceeded and this is
technically not a bug. However, better behavior would be to wait for
the number of used pages/reservations associated with the filesystem to
drop below the minimum size before taking reservations to satisfy
minimum size.
An optimization is also made to the hugepage_subpool_get_pages() routine
which is called when pages/reservations are allocated. This does not
change behavior, but simply avoids the accounting if all reservations
have already been taken (subpool reserved count == 0).
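A sketch, with assumed field shapes, of the hugepage_subpool_get_pages() accounting described above: an allocation first consumes the subpool's reserved-for-minimum count, and the added optimization skips the arithmetic entirely once all minimum reservations are used up.

```c
/* Illustrative model, not the kernel code.  Returns how many pages the
 * global pool must still provide after consuming subpool reserves. */
long subpool_get_pages(long *rsv_hpages, long delta)
{
    long needed = delta;

    if (*rsv_hpages == 0)          /* optimization: nothing reserved, skip */
        return needed;

    if (delta > *rsv_hpages) {
        needed = delta - *rsv_hpages;  /* reserves cover part of the request */
        *rsv_hpages = 0;
    } else {
        needed = 0;                    /* reserves cover the whole request */
        *rsv_hpages -= delta;
    }
    return needed;
}
```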
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com> Acked-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: Hillf Danton <hillf.zj@alibaba-inc.com> Cc: David Rientjes <rientjes@google.com> Cc: Dave Hansen <dave.hansen@linux.intel.com> Cc: "Kirill A. Shutemov" <kirill.shutemov@linux.intel.com> Cc: Paul Gortmaker <paul.gortmaker@windriver.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Orabug: 24450029
(cherry picked from commit 09a95e29cb30a3930db22d340ddd072a82b6b0db) Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Omar Sandoval [Fri, 19 Jun 2015 18:52:51 +0000 (11:52 -0700)]
Btrfs: fix device replace of a missing RAID 5/6 device
The original implementation of device replace on RAID 5/6 seems to have
missed support for replacing a missing device. When this is attempted,
we end up calling bio_add_page() on a bio with a NULL ->bi_bdev, which
crashes when we try to dereference it. This happens because
btrfs_map_block() has no choice but to return us the missing device
because RAID 5/6 don't have any alternate mirrors to read from, and a
missing device has a NULL bdev.
The idea implemented here is to handle the missing device case
separately, which should only happen when we're replacing a missing RAID
5/6 device. We use the new BTRFS_RBIO_REBUILD_MISSING operation to
reconstruct the data from parity, check it with
scrub_recheck_block_checksum(), and write it out with
scrub_write_block_to_dev_replace().
Reported-by: Philip <bugzilla@philip-seeger.de>
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=96141 Signed-off-by: Omar Sandoval <osandov@fb.com> Signed-off-by: Chris Mason <clm@fb.com>
Orabug: 24447930
Signed-off-by: Divya Indi <divya.indi@oracle.com>
(cherry picked from commit 73ff61dbe5edeb1799d7e91c8b0641f87feb75fa)
The current RAID 5/6 recovery code isn't quite prepared to handle
missing devices. In particular, it expects a bio that we previously
attempted to use in the read path, meaning that it has valid pages
allocated. However, missing devices have a NULL blkdev, and we can't
call bio_add_page() on a bio with a NULL blkdev. We could do manual
manipulation of bio->bi_io_vec, but that's pretty gross. So instead, add
a separate path that allows us to manually add pages to the rbio.
Signed-off-by: Omar Sandoval <osandov@fb.com> Signed-off-by: Chris Mason <clm@fb.com>
Orabug: 24447930 Signed-off-by: Divya Indi <divya.indi@oracle.com>
(cherry picked from commit b4ee1782686d5b7a97826d67fdeaefaedbca23ce)
Roman Kagan [Wed, 18 May 2016 14:48:20 +0000 (17:48 +0300)]
kvm:vmx: more complete state update on APICv on/off
The function to update APICv on/off state (in particular, to deactivate
it when enabling Hyper-V SynIC) is incomplete: it doesn't adjust
APICv-related fields among secondary processor-based VM-execution
controls. As a result, Windows 2012 guests get stuck when SynIC-based
auto-EOI interrupt intersected with e.g. an IPI in the guest.
In addition, the MSR intercept bitmap isn't updated every time "virtualize
x2APIC mode" is toggled. This path can only be triggered by a malicious
guest, because Windows doesn't use x2APIC but rather its own synthetic
APIC access MSRs; however a guest running in a SynIC-enabled VM could
switch to x2APIC and thus obtain direct access to host APIC MSRs
(CVE-2016-4440).
The patch fixes those omissions.
Signed-off-by: Roman Kagan <rkagan@virtuozzo.com> Reported-by: Steve Rutherford <srutherford@google.com> Reported-by: Yang Zhang <yang.zhang.wz@gmail.com> Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Orabug: 23347009
CVE: CVE-2016-4440 Signed-off-by: Manjunath Govindashetty <manjunath.govindashetty@oracle.com>
Ashish Samant [Thu, 18 Aug 2016 20:54:26 +0000 (13:54 -0700)]
fuse: direct-io: don't dirty ITER_BVEC pages
When reading from a loop device backed by a fuse file it deadlocks on
lock_page().
This is because the page is already locked by the read() operation done on
the loop device. In this case we don't want to either lock the page or
dirty it.
So do what fs/direct-io.c does: only dirty the page for ITER_IOVEC vectors.
bpf: fix double-fdput in replace_map_fd_with_map_ptr()
When bpf(BPF_PROG_LOAD, ...) was invoked with a BPF program whose bytecode
references a non-map file descriptor as a map file descriptor, the error
handling code called fdput() twice instead of once (in __bpf_map_get() and
in replace_map_fd_with_map_ptr()). If the file descriptor table of the
current task is shared, this causes f_count to be decremented too much,
allowing the struct file to be freed while it is still in use
(use-after-free). This can be exploited to gain root privileges by an
unprivileged user.
This bug was introduced in
commit 0246e64d9a5f ("bpf: handle pseudo BPF_LD_IMM64 insn"), but is only
exploitable since
commit 1be7f75d1668 ("bpf: enable non-root eBPF programs") because
previously, CAP_SYS_ADMIN was required to reach the vulnerable code.
(posted publicly according to request by maintainer)
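A toy model of the double-fdput bug (the struct and helpers are illustrative, not kernel code): f_count starts at 2, one reference held by the shared fd table and one by the caller. The buggy error path drops the caller's reference twice, freeing the file while the fd table still uses it.

```c
struct file { int f_count; int freed; };

void fdput(struct file *f)
{
    if (--f->f_count == 0)
        f->freed = 1;              /* struct file released */
}

/* Error path resembling __bpf_map_get(): fd is not a map, put and fail. */
int bpf_map_get_err(struct file *f) { fdput(f); return -22; }

/* Pre-fix caller: puts again even though the callee already did. */
void buggy_replace_map_fd(struct file *f)
{
    if (bpf_map_get_err(f) < 0)
        fdput(f);                  /* BUG: second fdput of the same reference */
}
```

The fix is simply to drop exactly one of the two fdput() calls on that error path.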
Signed-off-by: Jann Horn <jannh@google.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Acked-by: Alexei Starovoitov <ast@kernel.org> Acked-by: Daniel Borkmann <daniel@iogearbox.net> Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit 8358b02bf67d3a5d8a825070e1aa73f25fb2e4c7)
Mukesh Kacker [Thu, 27 Oct 2016 13:07:16 +0000 (06:07 -0700)]
mlx4_ib: remove WARN_ON() based on incorrect assumptions
A WARN_ON() was inserted when user data was introduced to the
ibv_cmd_alloc_shpd() by another infiniband provider to make
sure that no user data is sent to older providers such as this
one which do not expect it.
It assumed that when no user data is sent, udata->inlen is zero.
The user-kernel API, however, always sends at least 8 octets
(which may not be initialized in the case of provider libraries that
do not use user data).
We remove the WARN_ON and rely on providers not to touch that
field if their companion library is not expected to initialize it.
Ashok Vairavan [Thu, 27 Oct 2016 19:31:52 +0000 (12:31 -0700)]
nvme: fix max_segments integer truncation
The block layer uses an unsigned short for max_segments. The way we
calculate the value for NVMe tends to generate very large 32-bit values,
which after integer truncation may lead to a zero value instead of
the desired outcome.
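The failure mode is easy to demonstrate: assigning a large 32-bit value to an unsigned short silently truncates, and 0x10000 becomes exactly 0. Clamping first, as the fix does conceptually (exact driver code not reproduced here), keeps a sane value.

```c
#include <stdint.h>

unsigned short max_segments_buggy(uint32_t computed)
{
    return (unsigned short)computed;        /* silent 32 -> 16 bit truncation */
}

unsigned short max_segments_fixed(uint32_t computed)
{
    /* clamp to the unsigned short range before assigning */
    return computed > 0xffffu ? 0xffffu : (unsigned short)computed;
}
```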
Signed-off-by: Christoph Hellwig <hch@lst.de> Reported-by: Jeff Lien <Jeff.Lien@hgst.com> Tested-by: Jeff Lien <Jeff.Lien@hgst.com> Reviewed-by: Keith Busch <keith.busch@intel.com> Signed-off-by: Jens Axboe <axboe@fb.com>
Orabug: 24928835
Cherry picked commit: 45686b6198bd824f083ff5293f191d78db9d708a
Conflicts:
The UEK4 QU2 nvme module doesn't have a core.c file; all the functions
reside in pci.c. Hence, this patch was manually ported to the respective
function in pci.c.
drivers/nvme/host/core.c
Santosh Shilimkar [Fri, 14 Oct 2016 23:47:49 +0000 (16:47 -0700)]
mlx4_core/ib: set the IB port MTU to 2K
Commit 096335b3f983 ("mlx4_core: Allow dynamic MTU configuration for IB
ports") overwrote the default port MTU and set it to 4K. Since this
directly impacts the supported HW VLs, and Oracle workloads heavily use
all 8 supported VLs for traffic classification, the 2K default needs to
be kept as is.
We initialise it to the default of 2K so that the feature (dynamic MTU
configuration) is still available for non-DB users to set the desired
MTU value using sysctl.
Also for CX2 cards, commit 596c5ff4b7b3 ("net/mlx4: adjust initial
value of vl_cap in mlx4_SET_PORT") broke the vl_cap which made the
supported VLs to 4 irrespective of MTU size.
Knut Omang [Fri, 21 Oct 2016 06:23:39 +0000 (08:23 +0200)]
sif: cq: transfer headroom attribute to user mode
This commit makes sure old libsif versions work with the driver while
providing a forward compatible way of making additional changes to the
extra headroom in the CQs.
We anticipate being able to trim down the extra entries once we have PQP
errors handled transparently. This commit ensures that the headroom is
only set in one place, on the driver side, and that user mode can just
pick up the configured headroom from the kernel.
This is done by providing the used headroom in a formerly reserved
32-bit field, so no changes to the packet size are necessary.
Nevertheless, we increment the ABI version from 3.6 to 3.7 to allow
libsif to detect whether the headroom field can be trusted.
Knut Omang [Thu, 13 Oct 2016 10:07:33 +0000 (12:07 +0200)]
sif: Add vendor flag to support testing without oversized CQs
After introduction of extra CQ entries to reduce risk of
having duplicate completions overflow a CQ, we no longer can
trigger various CQ overflow scenarios without running a lot of
requests. We need to be able to test with a minimal set of operations
to allow co-sim based tests for further analysis.
Introduce a new vendor_flag no_x_cqe = 0x80 to turn off
the allocation of extra CQEs.
This patch fixes an incomplete patch in commit "cq: Add
additional SIF visible cqes to CQ". The max_cqe
capability reported by query_device is incorrect because
it includes the SIF visible cqes.
Signed-off-by: Wei Lin Guay <wei.lin.guay@oracle.com> Reviewed-by: Knut Omang <knut.omang@oracle.com>
Fix an issue where modify_qp_hw from SQE to RTS returns
EPSC_MODIFY_CANNOT_CHANGE_QP_ATTR. In sif, the QP transition from SQE to
RTS must explicitly set req_access_error to 0.
Signed-off-by: Wei Lin Guay <wei.lin.guay@oracle.com> Reviewed-by: Knut Omang <knut.omang@oracle.com>
Shared pd is not an IBTA-defined feature, but an Oracle Linux extension.
Even though PSIF can share a pd easily, it must comply with the Oracle
ib_core implementation, which requires a new pd "object" when reusing a
pd (via the share_pd verb).
Without a new pd "object", a NULL pointer dereference occurs during the
pd clean-up phase. Thus, this patch creates a new pd "object" when
reusing a pd, and this pd "object" points to the original pd index.
Signed-off-by: Wei Lin Guay <wei.lin.guay@oracle.com> Reviewed-by: HÃ¥kon Bugge <haakon.bugge@oracle.com>
Francisco Triviño [Fri, 30 Sep 2016 08:35:14 +0000 (10:35 +0200)]
sif: eq: Add timeout to the threaded interrupt handler
This commit implements a timeout that prevents soft lockup issues when
the threaded interrupt function (sif_intr_worker) keeps processing
events for a long period. If the timeout is reached, the threaded
handler returns IRQ_HANDLED even if there are more events to be
processed. In such a case, the coalescing mechanism will generate
an IRQ for the last event.
RDS: IB: fix panic with handlers running post teardown
The shutdown CQE-reaping loop takes care of emptying the CQs before they
are destroyed, and once the tasklets are killed, the handlers are not
expected to run.
But because of a core tasklet BUG, a tasklet handler could still run
after tasklet_kill, which can lead to a kernel panic. A fix for the core
tasklet code was proposed and accepted upstream, but it comes with the
baggage of fixing quite a few bad users of it. Also, for receive, we
have an additional kthread to take care of.
The BUG fix done as part of Orabug 2446085 made the additional
assumption that the reaping code won't reap all the CQEs after the QP
has moved to error state, which was not correct. The QP is moved to
error state as part of rdma_disconnect() and all the CQEs are reaped by
the loop properly.
Any handler running after the above and trying to access the qp/cq
resources is exposed to race conditions. This patch fixes the race by
making sure that handlers return without any action post teardown.
Linus Torvalds [Thu, 13 Oct 2016 20:07:36 +0000 (13:07 -0700)]
mm: remove gup_flags FOLL_WRITE games from __get_user_pages()
This is an ancient bug that was actually attempted to be fixed once
(badly) by me eleven years ago in commit 4ceb5db9757a ("Fix
get_user_pages() race for write access") but that was then undone due to
problems on s390 by commit f33ea7f404e5 ("fix get_user_pages bug").
In the meantime, the s390 situation has long been fixed, and we can now
fix it by checking the pte_dirty() bit properly (and do it better). The
s390 dirty bit was implemented in abf09bed3cce ("s390/mm: implement
software dirty bits") which made it into v3.9. Earlier kernels will
have to look at the page state itself.
Also, the VM has become more scalable, and what used to be a purely
theoretical race back then has become easier to trigger.
To fix it, we introduce a new internal FOLL_COW flag to mark the "yes,
we already did a COW" rather than play racy games with FOLL_WRITE that
is very fundamental, and then use the pte dirty flag to validate that
the FOLL_COW flag is still valid.
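A simplified sketch of the check described above (the real kernel test also involves FOLL_FORCE and the pte accessor functions): a FOLL_WRITE follow of a read-only pte is allowed only if we already broke COW (FOLL_COW set) and the pte is dirty, proving our COWed page is still in place.

```c
#define FOLL_WRITE 0x01
#define FOLL_COW   0x4000

struct fake_pte { int writable; int dirty; };

int can_follow_write(struct fake_pte pte, unsigned int flags)
{
    if (!(flags & FOLL_WRITE))
        return 1;                  /* read-only follow: always fine */
    /* write follow: pte must be writable, or we must have done the COW
     * and the dirty bit must validate that it is still in effect */
    return pte.writable || ((flags & FOLL_COW) && pte.dirty);
}
```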
Reported-and-tested-by: Phil "not Paul" Oester <kernel@linuxace.com> Acked-by: Hugh Dickins <hughd@google.com> Reviewed-by: Michal Hocko <mhocko@suse.com> Cc: Andy Lutomirski <luto@kernel.org> Cc: Kees Cook <keescook@chromium.org> Cc: Oleg Nesterov <oleg@redhat.com> Cc: Willy Tarreau <w@1wt.eu> Cc: Nick Piggin <npiggin@gmail.com> Cc: Greg Thelen <gthelen@google.com> Cc: stable@vger.kernel.org Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 19be0eaffa3ac7d8eb6784ad9bdbc7d67ed8e619)
Orabug: 24926639
Conflicts:
include/linux/mm.h
mm/gup.c Signed-off-by: Chuck Anderson <chuck.anderson@oracle.com>
Vitaly Kuznetsov [Thu, 30 Jun 2016 15:56:36 +0000 (17:56 +0200)]
x86/acpi: store ACPI ids from MADT for future usage
Currently we don't save ACPI ids (unlike LAPIC ids which go to
x86_cpu_to_apicid) from MADT and we may need this information later.
Particularly, ACPI ids is the only existent way for a PVHVM Xen guest
to figure out Xen's idea of its vCPUs ids before these CPUs boot and
in some cases these ids diverge from Linux's cpu ids.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
(cherry picked from commit 3e9e57fad3d8530aa30787f861c710f598ddc4e7) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
Filipe Manco [Thu, 15 Sep 2016 15:10:46 +0000 (17:10 +0200)]
xen-netback: fix error handling on netback_probe()
In case of error during netback_probe() (e.g. an entry missing on the
xenstore) netback_remove() is called on the new device, which will set
the device backend state to XenbusStateClosed by calling
set_backend_state(). However, the backend state wasn't initialized by
netback_probe() at this point, which will cause an invalid transition
and set_backend_state() to BUG().
Initialize the backend state at the beginning of netback_probe() to
XenbusStateInitialising, and create two new valid state transitions on
set_backend_state(), from XenbusStateInitialising to XenbusStateClosed,
and from XenbusStateInitialising to XenbusStateInitWait.
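A partial model (only the states relevant here; names shortened) of the valid-transition idea: initializing the backend to Initialising in probe and adding two transitions out of it lets a probe-time failure reach Closed without tripping the BUG() on an invalid transition.

```c
enum xb_state { XB_INITIALISING, XB_INITWAIT, XB_CONNECTED, XB_CLOSED };

int valid_transition(enum xb_state from, enum xb_state to)
{
    switch (from) {
    case XB_INITIALISING:          /* the two transitions added by this fix */
        return to == XB_INITWAIT || to == XB_CLOSED;
    case XB_INITWAIT:
        return to == XB_CONNECTED || to == XB_CLOSED;
    default:                       /* other states omitted in this sketch */
        return 0;
    }
}
```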
Signed-off-by: Filipe Manco <filipe.manco@neclab.eu> Acked-by: Wei Liu <wei.liu2@citrix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
(cherry picked from commit cce94483e47e8e3d74cf4475dea33f9fd4b6ad9f) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
We pass xen_vcpu_id mapping information to hypercalls which require
uint32_t type so it would be cleaner to have it as uint32_t. The
initializer to -1 can be dropped as we always do the mapping before using
it and we never check the 'not set' value anyway.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
(cherry picked from commit 55467dea2967259f21f4f854fc99d39cc5fea60e) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
Jan Beulich [Mon, 15 Aug 2016 15:02:38 +0000 (09:02 -0600)]
xenbus: don't look up transaction IDs for ordinary writes
This should really only be done for XS_TRANSACTION_END messages, or
else at least some of the xenstore-* tools don't work anymore.
Fixes: 0beef634b8 ("xenbus: don't BUG() on user mode induced condition") Reported-by: Richard Schütz <rschuetz@uni-koblenz.de> Cc: <stable@vger.kernel.org> Signed-off-by: Jan Beulich <jbeulich@suse.com> Tested-by: Richard Schütz <rschuetz@uni-koblenz.de> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
(cherry picked from commit 9a035a40f7f3f6708b79224b86c5777a3334f7ea) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
Bob Liu [Wed, 27 Jul 2016 09:42:04 +0000 (17:42 +0800)]
xen-blkfront: free resources if xlvbd_alloc_gendisk fails
Current code forgets to free resources in the failure path of
xlvbd_alloc_gendisk(), this patch fix it.
Signed-off-by: Bob Liu <bob.liu@oracle.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
(cherry picked from commit 4e876c2bd37fbb5c37a4554a79cf979d486f0e82) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
xen: add static initialization of steal_clock op to xen_time_ops
pv_time_ops might be overwritten with xen_time_ops after the
steal_clock operation has been initialized already. To prevent calling
a now uninitialized function pointer add the steal_clock static
initialization to xen_time_ops.
Signed-off-by: Juergen Gross <jgross@suse.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
(cherry picked from commit d34c30cc1fa80f509500ff192ea6bc7d30671061) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
Vitaly Kuznetsov [Thu, 30 Jun 2016 15:56:43 +0000 (17:56 +0200)]
xen/pvhvm: run xen_vcpu_setup() for the boot CPU
Historically we didn't call VCPUOP_register_vcpu_info for CPU0 for
PVHVM guests (while we had it for PV and ARM guests). This is usually
fine as we can use vcpu info in the shared_info page but when we try
booting on a vCPU with Xen's vCPU id > 31 (e.g. when we try to kdump
after crashing on this CPU) we're not able to boot.
Switch to always doing VCPUOP_register_vcpu_info for the boot CPU.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
(cherry picked from commit ee42d665d3f5db975caf87baf101a57235ddb566) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
Vitaly Kuznetsov [Thu, 30 Jun 2016 15:56:42 +0000 (17:56 +0200)]
xen/evtchn: use xen_vcpu_id mapping
Use the newly introduced xen_vcpu_id mapping to get Xen's idea of vCPU
id for CPU0.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
(cherry picked from commit cbbb4682394c45986a34d8c77a02e7a066e30235) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
Vitaly Kuznetsov [Thu, 30 Jun 2016 15:56:41 +0000 (17:56 +0200)]
xen/events: fifo: use xen_vcpu_id mapping
EVTCHNOP_init_control has vCPU id as a parameter and Xen's idea of
vCPU id should be used. Use the newly introduced xen_vcpu_id mapping
to convert it from Linux's id.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
(cherry picked from commit be78da1cf43db4c1a9e13af8b6754199a89d5d75) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
Vitaly Kuznetsov [Thu, 30 Jun 2016 15:56:40 +0000 (17:56 +0200)]
xen/events: use xen_vcpu_id mapping in events_base
EVTCHNOP_bind_ipi and EVTCHNOP_bind_virq pass vCPU id as a parameter
and Xen's idea of vCPU id should be used. Use the newly introduced
xen_vcpu_id mapping to convert it from Linux's id.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
(cherry picked from commit 8058c0b897e7d1ba5c900cb17eb82aa0d88fca53) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
Vitaly Kuznetsov [Thu, 30 Jun 2016 15:56:39 +0000 (17:56 +0200)]
x86/xen: use xen_vcpu_id mapping when pointing vcpu_info to shared_info
shared_info page has space for 32 vcpu info slots for first 32 vCPUs
but these are the first 32 vCPUs from Xen's perspective and we should
map them accordingly with the newly introduced xen_vcpu_id mapping.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
(cherry picked from commit e15a8621935cac527b4e0ed4078d24c3e5ef73a6) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
Vitaly Kuznetsov [Thu, 30 Jun 2016 15:56:38 +0000 (17:56 +0200)]
x86/xen: use xen_vcpu_id mapping for HYPERVISOR_vcpu_op
HYPERVISOR_vcpu_op() passes Linux's idea of vCPU id as a parameter
while Xen's idea is expected. In some cases these ideas diverge so we
need to do remapping.
Convert all callers of HYPERVISOR_vcpu_op() to use xen_vcpu_nr().
Leave xen_fill_possible_map() and xen_filter_cpu_maps() intact as
they're only being called by PV guests before percpu areas are
initialized. While the issue could be solved by switching to
early_percpu for xen_vcpu_id I think it's not worth it: PV guests will
probably never get to the point where their idea of vCPU id diverges
from Xen's.
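A minimal model of the remapping described above: hypercalls want Xen's idea of a vCPU id, which can diverge from Linux's cpu number, so callers go through a per-cpu lookup instead of passing cpu directly. NR_CPUS and the plain array stand in for the kernel's percpu data here.

```c
#define NR_CPUS 8
unsigned int xen_vcpu_id[NR_CPUS];

/* Translate Linux's cpu number into Xen's vCPU id. */
unsigned int xen_vcpu_nr(int cpu)
{
    return xen_vcpu_id[cpu];
}
```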
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
(cherry picked from commit ad5475f9faf5186b7f59de2c6481ee3e211f1ed7) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
Vitaly Kuznetsov [Thu, 30 Jun 2016 15:56:37 +0000 (17:56 +0200)]
xen: introduce xen_vcpu_id mapping
It may happen that Xen's and Linux's ideas of vCPU id diverge. In
particular, when we crash on a secondary vCPU we may want to do kdump
and unlike plain kexec where we do migrate_to_reboot_cpu() we try
booting on the vCPU which crashed. This doesn't work very well for
PVHVM guests as we have a number of hypercalls where we pass vCPU id
as a parameter. These hypercalls either fail or do something
unexpected.
To solve the issue introduce percpu xen_vcpu_id mapping. ARM and PV
guests get direct mapping for now. Boot CPU for PVHVM guest gets its
id from CPUID. With secondary CPUs it is a bit trickier. Currently,
we initialize IPI vectors before these CPUs boot
so we can't use CPUID. Use ACPI ids from MADT instead.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
(cherry picked from commit 88e957d6e47f1232ad15b21e54a44f1147ea8c1b) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
Vitaly Kuznetsov [Thu, 30 Jun 2016 15:56:35 +0000 (17:56 +0200)]
x86/xen: update cpuid.h from Xen-4.7
Update cpuid.h header from xen hypervisor tree to get
XEN_HVM_CPUID_VCPU_ID_PRESENT definition.
Signed-off-by: Vitaly Kuznetsov <vkuznets@redhat.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
(cherry picked from commit de2f5537b397249e91cafcbed4de64a24818542e) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
David Vrabel [Mon, 11 Jul 2016 14:45:51 +0000 (15:45 +0100)]
xen/evtchn: add IOCTL_EVTCHN_RESTRICT
IOCTL_EVTCHN_RESTRICT limits the file descriptor to being able to bind
to interdomain event channels from a specific domain. Event channels
that are already bound continue to work for sending and receiving
notifications.
This is useful as part of deprivileging a user space PV backend or
device model (QEMU). E.g., once the device model has bound to the
ioreq server event channels it can restrict the file handle so an
exploited DM cannot use it to create or bind to arbitrary event
channels.
Signed-off-by: David Vrabel <david.vrabel@citrix.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
(cherry picked from commit fbc872c38c8fed31948c85683b5326ee5ab9fccc) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
Jan Beulich [Thu, 7 Jul 2016 08:05:46 +0000 (02:05 -0600)]
xen-blkfront: prefer xenbus_scanf() over xenbus_gather()
... for single items being collected: It is more typesafe (as the
compiler can check format string and to-be-written-to variable match)
and requires one less parameter to be passed.
Jan Beulich [Thu, 7 Jul 2016 08:05:21 +0000 (02:05 -0600)]
xen-blkback: prefer xenbus_scanf() over xenbus_gather()
... for single items being collected: it is more typesafe (as the
compiler can check that the format string and the to-be-written-to
variable match) and requires one less parameter to be passed.
Paul Gortmaker [Thu, 14 Jul 2016 00:18:59 +0000 (20:18 -0400)]
x86/xen: Audit and remove any unnecessary uses of module.h
Historically a lot of these existed because we did not have
a distinction between what was modular code and what was providing
support to modules via EXPORT_SYMBOL and friends. That changed
when we forked out support for the latter into the export.h file.
This means we should be able to reduce the usage of module.h
in code that is obj-y in a Makefile or bool in Kconfig. The advantage
in doing so is that module.h itself sources about 15 other headers,
adding significantly to what we feed cpp, and it can obscure which
headers we are effectively using.
Since module.h was the source for init.h (for __init) and for
export.h (for EXPORT_SYMBOL) we consider each obj-y/bool instance
for the presence of either and replace as needed.
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com> Acked-by: Juergen Gross <jgross@suse.com> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com> Cc: David Vrabel <david.vrabel@citrix.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: xen-devel@lists.xenproject.org Link: http://lkml.kernel.org/r/20160714001901.31603-7-paul.gortmaker@windriver.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit 7a2463dcacee3f2f36c78418c201756372eeea6b) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
Jan Beulich [Sat, 9 Jul 2016 00:35:30 +0000 (17:35 -0700)]
Input: xen-kbdfront - prefer xenbus_write() over xenbus_printf() where possible
... as being the simpler variant.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
(cherry picked from commit cd6763be8f553c7db421d38ddcb36466fb8512cd) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
Up to now, reading the stolen time of a remote CPU was not possible in a
performant way under Xen. This made it impossible to support runqueue
steal time via paravirt_steal_rq_enabled.
With the addition of an appropriate hypervisor interface this is now
possible, so add the support.
Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
(cherry picked from commit 6ba286ad845799b135e5af73d1fbc838fa79f709) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
Jan Beulich [Wed, 6 Jul 2016 07:00:14 +0000 (01:00 -0600)]
xen-pciback: drop superfluous variables
req_start is simply an alias of the "offset" function parameter, and
req_end is used just once in each function. (And both variables were
loop-invariant anyway, so should at least have been initialized outside
the loop.)
Signed-off-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
(cherry picked from commit 1ad6344acfbf19288573b4a5fa0b07cbb5af27d7) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
Jan Beulich [Wed, 6 Jul 2016 06:59:35 +0000 (00:59 -0600)]
xen-pciback: short-circuit read path used for merging write values
There's no point calling xen_pcibk_config_read() here - all it'll do is
return whatever conf_space_read() returns for the field which was found
here (and which would be found there again). Also there's no point
clearing tmp_val before the call.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
(cherry picked from commit ee87d6d0d36d98c550f99274a81841033226e3bf) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
Jan Beulich [Wed, 6 Jul 2016 06:58:58 +0000 (00:58 -0600)]
xen-pciback: use const and unsigned in bar_init()
Signed-off-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
(cherry picked from commit 585203609c894db11dea724b743c04d0c9927f39) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
Jan Beulich [Wed, 6 Jul 2016 06:58:19 +0000 (00:58 -0600)]
xen-pciback: simplify determination of 64-bit memory resource
Other than for raw BAR values, flags are properly separated in the
internal representation.
Signed-off-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
(cherry picked from commit c8670c22e04e4e42e752cc5b53922106b3eedbda) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
Jan Beulich [Wed, 6 Jul 2016 06:57:43 +0000 (00:57 -0600)]
xen-pciback: fold read_dev_bar() into its now single caller
Signed-off-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
(cherry picked from commit 6ad2655d87d2d35c1de4500402fae10fe7b30b4a) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
Jan Beulich [Wed, 6 Jul 2016 06:57:07 +0000 (00:57 -0600)]
xen-pciback: drop rom_init()
It is now identical to bar_init().
Signed-off-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
(cherry picked from commit 664093bb6b797c8ba0a525ee0a36ad8cbf89413e) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
Jan Beulich [Wed, 6 Jul 2016 06:56:27 +0000 (00:56 -0600)]
xen-pciback: drop unused function parameter of read_dev_bar()
Signed-off-by: Jan Beulich <jbeulich@suse.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
(cherry picked from commit 6c6e4caa2006ab82587a3648967314ec92569a98) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
The kernel.h macro DIV_ROUND_UP performs the computation
(((n) + (d) - 1) / (d)) but is perhaps more readable.
The Coccinelle script used to make this change is as follows:
@haskernel@
@@
#include <linux/kernel.h>
@depends on haskernel@
expression n,d;
@@
(
- (n + d - 1) / d
+ DIV_ROUND_UP(n,d)
|
- (n + (d - 1)) / d
+ DIV_ROUND_UP(n,d)
)
Signed-off-by: Amitoj Kaur Chawla <amitoj1606@gmail.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
(cherry picked from commit 585423c8c4d2f39a2c299bc6dd16433e6141fba5) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
Bhaktipriya Shridhar [Tue, 31 May 2016 16:56:30 +0000 (22:26 +0530)]
xen: xenbus: Remove create_workqueue
System workqueues have been able to handle high levels of concurrency
for a long time now and there's no reason to use dedicated workqueues
just to gain concurrency. Replace dedicated xenbus_frontend_wq with the
use of system_wq.
Unlike a dedicated per-cpu workqueue created with create_workqueue(),
system_wq allows multiple work items to overlap executions even on
the same CPU; however, a per-cpu workqueue doesn't have any CPU
locality or global ordering guarantees unless the target CPU is
explicitly specified, and thus the increase of local concurrency shouldn't
make any difference.
In this case there is only a single work item, so the increase in
concurrency level from switching to system_wq should not make any difference.
Signed-off-by: Bhaktipriya Shridhar <bhaktipriya96@gmail.com> Acked-by: Tejun Heo <tj@kernel.org> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
(cherry picked from commit 5ee405d9d234ee5641741c07a654e4c6ba3e2a9d) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
Bhaktipriya Shridhar [Wed, 1 Jun 2016 14:15:08 +0000 (19:45 +0530)]
xen: xen-pciback: Remove create_workqueue
System workqueues have been able to handle high levels of concurrency
for a long time now and there's no reason to use dedicated workqueues
just to gain concurrency. Replace dedicated xen_pcibk_wq with the
use of system_wq.
Unlike a dedicated per-cpu workqueue created with create_workqueue(),
system_wq allows multiple work items to overlap executions even on
the same CPU; however, a per-cpu workqueue doesn't have any CPU
locality or global ordering guarantees unless the target CPU is
explicitly specified and thus the increase of local concurrency shouldn't
make any difference.
Since the work items could be pending, flush_work() has been used in
xen_pcibk_disconnect(). xen_pcibk_xenbus_remove() calls free_pdev()
which in turn calls xen_pcibk_disconnect() for every pdev to ensure that
there is no pending task while disconnecting the driver.
Signed-off-by: Bhaktipriya Shridhar <bhaktipriya96@gmail.com> Acked-by: Tejun Heo <tj@kernel.org> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
(cherry picked from commit 429eafe60943bdfa33b15540ab2db5642a1f8c3c) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
Boris Ostrovsky [Tue, 21 Jun 2016 14:17:33 +0000 (10:17 -0400)]
xen/PMU: Log VPMU initialization error at lower level
This will match how PMU errors are reported at check_hw_exists()'s
msr_fail label, which is reached when VPMU initialization fails.
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Reviewed-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Acked-by: Juergen Gross <jgross@suse.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
(cherry picked from commit 6ab9507ed96a6c0b24174d3430064a90b3dddd0a) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
The pv_time_ops structure contains a function pointer for the
"steal_clock" functionality used only by KVM and Xen on ARM. Xen on x86
uses its own mechanism to account for the "stolen" time a thread wasn't
able to run due to hypervisor scheduling.
Add support for this feature in Xen's architecture-independent time
handling by moving it out of the ARM arch into drivers/xen, and remove
the x86 Xen hack.
Signed-off-by: Juergen Gross <jgross@suse.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
(cherry picked from commit ecb23dc6f2eff0ce64dd60351a81f376f13b12cc) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
Muhammad Falak R Wani [Sun, 24 Apr 2016 12:03:32 +0000 (20:03 +0800)]
xen: use vma_pages().
Replace explicit computation of vma page count by a call to
vma_pages().
Signed-off-by: Muhammad Falak R Wani <falakreyaz@gmail.com> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
(cherry picked from commit c7ebf9d9c6b4e9402b978da0b0785db4129c1f79) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
XEN: EFI: Move x86 specific codes to architecture directory
Move the x86-specific code to the architecture directory and export
those EFI runtime service functions. This will be useful for
initializing runtime services on ARM later.
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org> Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Tested-by: Julien Grall <julien.grall@arm.com> Signed-off-by: Stefano Stabellini <sstabellini@kernel.org>
(cherry picked from commit a62ed500307bfaf4c1a818b69f7c1e7df1039a16) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
xen/hvm/params: Add a new delivery type for event-channel in HVM_PARAM_CALLBACK_IRQ
This new delivery type which is for ARM shares the same value with
HVM_PARAM_CALLBACK_TYPE_VECTOR which is for x86.
val[15:8] holds the flags and val[7:0] is a PPI number.
Within the flags, bit 8 indicates whether the interrupt mode is edge (1)
or level (0), and bit 9 indicates whether the interrupt polarity is
active-low (1) or active-high (0).
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org> Acked-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Reviewed-by: Julien Grall <julien.grall@arm.com> Tested-by: Julien Grall <julien.grall@arm.com>
(cherry picked from commit 383ff518a79fe3dcece579b9d30be77b219d10f8) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
Xen: xlate: Use page_to_xen_pfn instead of page_to_pfn
Make xen_xlate_map_ballooned_pages work with 64K pages. In that case
kernel pages are 64K in size but Xen pages remain 4K in size; Xen pfns
refer to 4K pages.
Signed-off-by: Shannon Zhao <shannon.zhao@linaro.org> Reviewed-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Reviewed-by: Julien Grall <julien.grall@arm.com> Tested-by: Julien Grall <julien.grall@arm.com>
(cherry picked from commit 975fac3c4f38e0b47514abdb689548a8e9971081) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
Andy Lutomirski [Thu, 30 Jul 2015 21:31:31 +0000 (14:31 -0700)]
x86/xen: Probe target addresses in set_aliased_prot() before the hypercall
The update_va_mapping hypercall can fail if the VA isn't present
in the guest's page tables. Under certain loads, this can
result in an OOPS when the target address is in unpopulated vmap
space.
While we're at it, add comments to help explain what's going on.
This isn't a great long-term fix. This code should probably be
changed to use something like set_memory_ro.
Signed-off-by: Andy Lutomirski <luto@kernel.org> Cc: Andrew Cooper <andrew.cooper3@citrix.com> Cc: Andy Lutomirski <luto@amacapital.net> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com> Cc: Borislav Petkov <bp@alien8.de> Cc: Brian Gerst <brgerst@gmail.com> Cc: David Vrabel <dvrabel@cantab.net> Cc: Denys Vlasenko <dvlasenk@redhat.com> Cc: H. Peter Anvin <hpa@zytor.com> Cc: Jan Beulich <jbeulich@suse.com> Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Sasha Levin <sasha.levin@oracle.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: security@kernel.org <security@kernel.org> Cc: <stable@vger.kernel.org> Cc: xen-devel <xen-devel@lists.xen.org> Link: http://lkml.kernel.org/r/0b0e55b995cda11e7829f140b833ef932fcabe3a.1438291540.git.luto@kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
(cherry picked from commit aa1acff356bbedfd03b544051f5b371746735d89) Signed-off-by: Bob Liu <bob.liu@oracle.com>
Orabug: 24820937
The current NVMe driver allocates an I/O queue per CPU and an IRQ per
queue; because of this design, a large number of IRQs are allocated to
the I/O queues on a large NUMA/SMP system with multiple NVMe devices
installed.
This can cause CPU hotplug operations to fail on such a system: after a
certain number of CPU cores are offlined, the remaining online cores do
not have enough IRQ vectors to accept the large number of migrated IRQs,
so no further cores can be hotplugged.
This patch fixes it by providing a way to reduce the NVMe queue IRQs to
an acceptable number.
Signed-off-by: Shan Hai <shan.hai@oracle.com> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>
Jens Axboe [Wed, 24 Aug 2016 21:38:01 +0000 (15:38 -0600)]
blk-mq: improve warning for running a queue on the wrong CPU
__blk_mq_run_hw_queue() currently warns if we are running the queue on a
CPU that isn't set in its mask. However, this can happen if a CPU is
being offlined, and the workqueue handling will place the work on CPU0
instead. Improve the warning so that it only triggers if the batch cpu
in the hardware queue is currently online. If it triggers for that
case, then it's indicative of a flow problem in blk-mq, so we want to
retain it for that case.
There are several race conditions while freezing a queue.
When unfreezing a queue, there is a small window between decrementing
q->mq_freeze_depth to zero and the percpu_ref_reinit() call on
q->mq_usage_counter. If another context calls blk_mq_freeze_queue_start()
in that window, q->mq_freeze_depth is increased from zero to one and
percpu_ref_kill() is called on q->mq_usage_counter, which is already
killed. The percpu refcount should be re-initialized before being killed
again.
Also, there is a race condition while switching to percpu mode.
percpu_ref_switch_to_percpu() and percpu_ref_kill() must not be
executed at the same time as the following scenario is possible:
1. q->mq_usage_counter is initialized in atomic mode.
(atomic counter: 1)
2. After the disk registration, a process like systemd-udev starts
accessing the disk, and successfully increases the refcount
by percpu_ref_tryget_live() in blk_mq_queue_enter().
(atomic counter: 2)
3. In the final stage of initialization, q->mq_usage_counter is being
switched to percpu mode by percpu_ref_switch_to_percpu() in
blk_mq_finish_init(). But if CONFIG_PREEMPT_VOLUNTARY is enabled,
the process is rescheduled in the middle of switching when calling
wait_event() in __percpu_ref_switch_to_percpu().
(atomic counter: 2)
4. CPU hotplug handling for blk-mq calls percpu_ref_kill() to freeze
request queue. q->mq_usage_counter is decreased and marked as
DEAD. Wait until all requests have finished.
(atomic counter: 1)
5. The process rescheduled in step 3 is resumed and finishes
all remaining work in __percpu_ref_switch_to_percpu().
A bias value is added to atomic counter of q->mq_usage_counter.
(atomic counter: PERCPU_COUNT_BIAS + 1)
6. A request issued in step 2 is finished and q->mq_usage_counter
is decreased by blk_mq_queue_exit(). q->mq_usage_counter is DEAD,
so atomic counter is decreased and no release handler is called.
(atomic counter: PERCPU_COUNT_BIAS)
7. CPU hotplug handling in step 4 will wait forever, as
q->mq_usage_counter will never reach zero.
Also, percpu_ref_reinit() and percpu_ref_kill() must not be executed
at the same time, because both functions could call
__percpu_ref_switch_to_percpu(), which adds the bias value and
initializes the percpu counter.
Fix those races by serializing with per-queue mutex.
Signed-off-by: Akinobu Mita <akinobu.mita@gmail.com> Acked-by: Tejun Heo <tj@kernel.org> Reviewed-by: Ming Lei <tom.leiming@gmail.com>
(cherry picked from https://patchwork.kernel.org/patch/7269471/)
Signed-off-by: Shan Hai <shan.hai@oracle.com> Reviewed-by: Martin K. Petersen <martin.petersen@oracle.com>