www.infradead.org Git - users/jedix/linux-maple.git/log
8 years ago  xfs: skip stale inodes in xfs_iflush_cluster
Dave Chinner [Wed, 18 May 2016 03:54:23 +0000 (13:54 +1000)]
xfs: skip stale inodes in xfs_iflush_cluster

Orabug: 25256936

[ Upstream commit 7d3aa7fe970791f1a674b14572a411accf2f4d4e ]

We don't write back stale inodes so we should skip them in
xfs_iflush_cluster, too.
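
The change boils down to skipping any inode that is already marked stale
before trying to flush it as part of the cluster. Below is a minimal
userspace sketch of that pattern; the structure, flag name and flush loop
are illustrative stand-ins, not the real xfs_iflush_cluster() code.

#include <stdio.h>

/* Illustrative stand-ins: the real code walks struct xfs_inode entries
 * gathered for the cluster and tests the stale flag under i_flags_lock. */
#define INODE_STALE 0x01

struct fake_inode {
        unsigned long ino;
        unsigned int flags;
};

static void flush_cluster(struct fake_inode *inodes, int n)
{
        for (int i = 0; i < n; i++) {
                /* Stale inodes are never written back, so skip them
                 * instead of trying to flush them. */
                if (inodes[i].flags & INODE_STALE)
                        continue;
                printf("flushing inode %lu\n", inodes[i].ino);
        }
}

int main(void)
{
        struct fake_inode cluster[] = {
                { 100, 0 }, { 101, INODE_STALE }, { 102, 0 },
        };

        flush_cluster(cluster, 3);
        return 0;
}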

cc: <stable@vger.kernel.org> # 3.10.x-
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dave Chinner <david@fromorbit.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 4b126ba9fab396983926238ab018c727a086996e)
Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years ago  xfs: fix inode validity check in xfs_iflush_cluster
Dave Chinner [Wed, 18 May 2016 03:54:22 +0000 (13:54 +1000)]
xfs: fix inode validity check in xfs_iflush_cluster

Orabug: 25256935

[ Upstream commit 51b07f30a71c27405259a0248206ed4e22adbee2 ]

Some careless idiot(*) wrote crap code in commit 1a3e8f3 ("xfs:
convert inode cache lookups to use RCU locking") back in late 2010,
and so xfs_iflush_cluster checks the wrong inode for whether it is
still valid under RCU protection. Fix it to lock and check the
correct inode.

(*) Careless-idiot: Dave Chinner <dchinner@redhat.com>

cc: <stable@vger.kernel.org> # 3.10.x-
Discovered-by: Brian Foster <bfoster@redhat.com>
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dave Chinner <david@fromorbit.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 3eeb7e764de82bba421e04154bb62b8dc518c3a1)
Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years ago  xfs: xfs_iflush_cluster fails to abort on error
Dave Chinner [Wed, 18 May 2016 03:53:42 +0000 (13:53 +1000)]
xfs: xfs_iflush_cluster fails to abort on error

Orabug: 25256934

[ Upstream commit b1438f477934f5a4d5a44df26f3079a7575d5946 ]

When a failure due to an inode buffer occurs, the error handling
fails to abort the inode writeback correctly. This can result in the
inode being reclaimed whilst still in the AIL, leading to
use-after-free situations as well as filesystems that cannot be
unmounted as the inode log items left in the AIL never get removed.

Fix this by ensuring fatal errors from xfs_imap_to_bp() result in
the inode flush being aborted correctly.

cc: <stable@vger.kernel.org> # 3.10.x-
Reported-by: Shyam Kaushik <shyam@zadarastorage.com>
Diagnosed-by: Shyam Kaushik <shyam@zadarastorage.com>
Tested-by: Shyam Kaushik <shyam@zadarastorage.com>
Signed-off-by: Dave Chinner <dchinner@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dave Chinner <david@fromorbit.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 6659d79405e4c4ec1abac74aaf551fb822081112)
Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years ago  crypto: caam - fix caam_jr_alloc() ret code
Catalin Vasile [Fri, 6 May 2016 13:18:53 +0000 (16:18 +0300)]
crypto: caam - fix caam_jr_alloc() ret code

Orabug: 25256932

[ Upstream commit e930c765ca5c6b039cd22ebfb4504ea7b5dab43d ]

caam_jr_alloc() used to return NULL if a JR device could not be
allocated for a session. In turn, every user of this function used the
IS_ERR() function to verify whether anything went wrong, which does NOT
catch NULL values. This made the kernel crash if the sanity check
failed, because the driver continued to think it had allocated a valid
JR device instance for the session and at some point tried to do a
caam_jr_free() on a NULL JR device pointer.
This patch fixes that issue.
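
The reason the IS_ERR() checks in the callers miss a NULL return is easy
to show in a small self-contained program; the two helpers below are
simplified copies of the kernel's ERR_PTR()/IS_ERR() pattern from
include/linux/err.h, the rest is illustrative.

#include <errno.h>
#include <stdio.h>

/* Simplified versions of the kernel's error-pointer helpers. */
#define MAX_ERRNO 4095
#define IS_ERR_VALUE(x) ((unsigned long)(x) >= (unsigned long)-MAX_ERRNO)

static inline void *ERR_PTR(long error)
{
        return (void *)error;
}

static inline int IS_ERR(const void *ptr)
{
        return IS_ERR_VALUE((unsigned long)ptr);
}

int main(void)
{
        void *jr_null = NULL;              /* old failure return value   */
        void *jr_err  = ERR_PTR(-ENODEV);  /* error-pointer style return */

        /* Callers only test IS_ERR(), so a NULL return slips through and
         * is later used as if it were a valid job ring device. */
        printf("IS_ERR(NULL)             = %d\n", IS_ERR(jr_null)); /* 0 */
        printf("IS_ERR(ERR_PTR(-ENODEV)) = %d\n", IS_ERR(jr_err));  /* 1 */
        return 0;
}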

Cc: <stable@vger.kernel.org>
Signed-off-by: Catalin Vasile <cata.vasile@nxp.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit ceee7b2f9dbb0c8ac05b00f2095662587c98f861)
Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years ago  USB: serial: quatech2: fix use-after-free in probe error path
Johan Hovold [Sun, 8 May 2016 18:08:02 +0000 (20:08 +0200)]
USB: serial: quatech2: fix use-after-free in probe error path

Orabug: 25256931

[ Upstream commit 028c49f5e02a257c94129cd815f7c8485f51d4ef ]

The interface read URB is submitted in attach, but was only unlinked by
the driver at disconnect.

In case of a late probe error (e.g. due to failed minor allocation),
disconnect is never called and we would end up with active URBs for an
unbound interface. This in turn could lead to deallocated memory being
dereferenced in the completion callback.

Fixes: f7a33e608d9a ("USB: serial: add quatech2 usb to serial driver")
Cc: stable <stable@vger.kernel.org> # v3.5: 40d04738491d
Signed-off-by: Johan Hovold <johan@kernel.org>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 7124f5dfff7b22fd91546ce7dbf9e80b56f0d61f)
Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years ago  USB: serial: mxuport: fix use-after-free in probe error path
Johan Hovold [Sun, 8 May 2016 18:08:01 +0000 (20:08 +0200)]
USB: serial: mxuport: fix use-after-free in probe error path

Orabug: 25256930

[ Upstream commit 9e45284984096314994777f27e1446dfbfd2f0d7 ]

The interface read and event URBs are submitted in attach, but were
never explicitly unlinked by the driver. Instead the URBs would have
been killed by usb-serial core on disconnect.

In case of a late probe error (e.g. due to failed minor allocation),
disconnect is never called and we could end up with active URBs for an
unbound interface. This in turn could lead to deallocated memory being
dereferenced in the completion callbacks.

Fixes: ee467a1f2066 ("USB: serial: add Moxa UPORT 12XX/14XX/16XX
driver")
Cc: stable <stable@vger.kernel.org> # v3.14
Signed-off-by: Johan Hovold <johan@kernel.org>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 17e958ca3f508c361325688ab1da7144febb35fe)
Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years ago  USB: serial: keyspan: fix use-after-free in probe error path
Johan Hovold [Sun, 8 May 2016 18:07:58 +0000 (20:07 +0200)]
USB: serial: keyspan: fix use-after-free in probe error path

Orabug: 25256929

[ Upstream commit 35be1a71d70775e7bd7e45fa6d2897342ff4c9d2 ]

The interface instat and indat URBs were submitted in attach, but never
unlinked in release before deallocating the corresponding transfer
buffers.

In the case of a late probe error (e.g. due to failed minor allocation),
disconnect would not have been called before release, causing the
buffers to be freed while the URBs are still in use. We'd also end up
with active URBs for an unbound interface.

Fixes: f9c99bb8b3a1 ("USB: usb-serial: replace shutdown with disconnect,
release")
Cc: stable <stable@vger.kernel.org> # v2.6.31
Signed-off-by: Johan Hovold <johan@kernel.org>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit d2e9eeb2504c73c09c415e5d4fc4cc33ffc4cb9f)
Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years ago  USB: serial: io_edgeport: fix memory leaks in probe error path
Johan Hovold [Sun, 8 May 2016 18:07:57 +0000 (20:07 +0200)]
USB: serial: io_edgeport: fix memory leaks in probe error path

Orabug: 25256928

[ Upstream commit c8d62957d450cc1a22ce3242908709fe367ddc8e ]

URBs and buffers allocated in attach for Epic devices would never be
deallocated in case of a later probe error (e.g. failure to allocate
minor numbers) as disconnect is then never called.

Fix by moving deallocation to release and making sure that the
URBs are first unlinked.

Fixes: f9c99bb8b3a1 ("USB: usb-serial: replace shutdown with disconnect,
release")
Cc: stable <stable@vger.kernel.org> # v2.6.31
Signed-off-by: Johan Hovold <johan@kernel.org>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 90eb29f547e7de02d663257a3e8b042b80940925)
Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years ago  USB: serial: io_edgeport: fix memory leaks in attach error path
Johan Hovold [Sun, 8 May 2016 18:07:56 +0000 (20:07 +0200)]
USB: serial: io_edgeport: fix memory leaks in attach error path

Orabug: 25256927

[ Upstream commit c5c0c55598cefc826d6cfb0a417eeaee3631715c ]

Private data, URBs and buffers allocated for Epic devices during
attach were never released on errors (e.g. missing endpoints).

Fixes: 6e8cf7751f9f ("USB: add EPIC support to the io_edgeport driver")
Cc: stable <stable@vger.kernel.org> # v2.6.21
Signed-off-by: Johan Hovold <johan@kernel.org>
Acked-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 7bbc1379766b1887cd7b3c225c78acca5b82ec5e)
Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years ago  mfd: omap-usb-tll: Fix scheduling while atomic BUG
Roger Quadros [Mon, 9 May 2016 08:28:37 +0000 (11:28 +0300)]
mfd: omap-usb-tll: Fix scheduling while atomic BUG

Orabug: 25256926

[ Upstream commit b49b927f16acee626c56a1af4ab4cb062f75b5df ]

We shouldn't be calling clk_prepare_enable()/clk_prepare_disable()
in an atomic context.

Fixes the following issue:

[    5.830970] ehci-omap: OMAP-EHCI Host Controller driver
[    5.830974] driver_register 'ehci-omap'
[    5.895849] driver_register 'wl1271_sdio'
[    5.896870] BUG: scheduling while atomic: udevd/994/0x00000002
[    5.896876] 4 locks held by udevd/994:
[    5.896904]  #0:  (&dev->mutex){......}, at: [<c049597c>] __driver_attach+0x60/0xac
[    5.896923]  #1:  (&dev->mutex){......}, at: [<c049598c>] __driver_attach+0x70/0xac
[    5.896946]  #2:  (tll_lock){+.+...}, at: [<c04c2630>] omap_tll_enable+0x2c/0xd0
[    5.896966]  #3:  (prepare_lock){+.+...}, at: [<c05ce9c8>] clk_prepare_lock+0x48/0xe0
[    5.897042] Modules linked in: wlcore_sdio(+) ehci_omap(+) dwc3_omap snd_soc_ts3a225e leds_is31fl319x bq27xxx_battery_i2c tsc2007 bq27xxx_battery bq2429x_charger ina2xx tca8418_keypad as5013 leds_tca6507 twl6040_vibra gpio_twl6040 bmp085_i2c(+) palmas_gpadc usb3503 palmas_pwrbutton bmg160_i2c(+) bmp085 bma150(+) bmg160_core bmp280 input_polldev snd_soc_omap_mcbsp snd_soc_omap_mcpdm snd_soc_omap snd_pcm_dmaengine
[    5.897048] Preemption disabled at:[<  (null)>]   (null)
[    5.897051]
[    5.897059] CPU: 0 PID: 994 Comm: udevd Not tainted 4.6.0-rc5-letux+ #233
[    5.897062] Hardware name: Generic OMAP5 (Flattened Device Tree)
[    5.897076] [<c010e714>] (unwind_backtrace) from [<c010af34>] (show_stack+0x10/0x14)
[    5.897087] [<c010af34>] (show_stack) from [<c040aa7c>] (dump_stack+0x88/0xc0)
[    5.897099] [<c040aa7c>] (dump_stack) from [<c020c558>] (__schedule_bug+0xac/0xd0)
[    5.897111] [<c020c558>] (__schedule_bug) from [<c06f3d44>] (__schedule+0x88/0x7e4)
[    5.897120] [<c06f3d44>] (__schedule) from [<c06f46d8>] (schedule+0x9c/0xc0)
[    5.897129] [<c06f46d8>] (schedule) from [<c06f4904>] (schedule_preempt_disabled+0x14/0x20)
[    5.897140] [<c06f4904>] (schedule_preempt_disabled) from [<c06f64e4>] (mutex_lock_nested+0x258/0x43c)
[    5.897150] [<c06f64e4>] (mutex_lock_nested) from [<c05ce9c8>] (clk_prepare_lock+0x48/0xe0)
[    5.897160] [<c05ce9c8>] (clk_prepare_lock) from [<c05d0e7c>] (clk_prepare+0x10/0x28)
[    5.897169] [<c05d0e7c>] (clk_prepare) from [<c04c2668>] (omap_tll_enable+0x64/0xd0)
[    5.897180] [<c04c2668>] (omap_tll_enable) from [<c04c1728>] (usbhs_runtime_resume+0x18/0x17c)
[    5.897192] [<c04c1728>] (usbhs_runtime_resume) from [<c049d404>] (pm_generic_runtime_resume+0x2c/0x40)
[    5.897202] [<c049d404>] (pm_generic_runtime_resume) from [<c049f180>] (__rpm_callback+0x38/0x68)
[    5.897210] [<c049f180>] (__rpm_callback) from [<c049f220>] (rpm_callback+0x70/0x88)
[    5.897218] [<c049f220>] (rpm_callback) from [<c04a0a00>] (rpm_resume+0x4ec/0x7ec)
[    5.897227] [<c04a0a00>] (rpm_resume) from [<c04a0f48>] (__pm_runtime_resume+0x4c/0x64)
[    5.897236] [<c04a0f48>] (__pm_runtime_resume) from [<c04958dc>] (driver_probe_device+0x30/0x70)
[    5.897246] [<c04958dc>] (driver_probe_device) from [<c04959a4>] (__driver_attach+0x88/0xac)
[    5.897256] [<c04959a4>] (__driver_attach) from [<c04940f8>] (bus_for_each_dev+0x50/0x84)
[    5.897267] [<c04940f8>] (bus_for_each_dev) from [<c0494e40>] (bus_add_driver+0xcc/0x1e4)
[    5.897276] [<c0494e40>] (bus_add_driver) from [<c0496914>] (driver_register+0xac/0xf4)
[    5.897286] [<c0496914>] (driver_register) from [<c01018e0>] (do_one_initcall+0x100/0x1b8)
[    5.897296] [<c01018e0>] (do_one_initcall) from [<c01c7a54>] (do_init_module+0x58/0x1c0)
[    5.897304] [<c01c7a54>] (do_init_module) from [<c01c8a3c>] (SyS_finit_module+0x88/0x90)
[    5.897313] [<c01c8a3c>] (SyS_finit_module) from [<c0107120>] (ret_fast_syscall+0x0/0x1c)
[    5.912697] ------------[ cut here ]------------
[    5.912711] WARNING: CPU: 0 PID: 994 at kernel/sched/core.c:2996 _raw_spin_unlock+0x28/0x58
[    5.912717] DEBUG_LOCKS_WARN_ON(val > preempt_count())

Cc: <stable@vger.kernel.org>
Reported-by: H. Nikolaus Schaller <hns@goldelico.com>
Tested-by: H. Nikolaus Schaller <hns@goldelico.com>
Signed-off-by: Roger Quadros <rogerq@ti.com>
Signed-off-by: Lee Jones <lee.jones@linaro.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit ec391566f8e498f5f3d92ff6c6abfdf6ac5a164b)
Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years ago  ext4: fix oops on corrupted filesystem
Jan Kara [Thu, 5 May 2016 15:10:15 +0000 (11:10 -0400)]
ext4: fix oops on corrupted filesystem

Orabug: 25256925

[ Upstream commit 74177f55b70e2f2be770dd28684dd6d17106a4ba ]

When the filesystem is corrupted in the right way, ext4_mark_iloc_dirty()
in ext4_orphan_add() can return an error and we subsequently remove the
inode from the in-memory orphan list. However, this deletion is done with
list_del(&EXT4_I(inode)->i_orphan) and thus we leave the i_orphan
list_head with stale content. Later we can look at this content, causing
list corruption, an oops, or other issues. The reported trace looked
like:

WARNING: CPU: 0 PID: 46 at lib/list_debug.c:53 __list_del_entry+0x6b/0x100()
list_del corruption, 0000000061c1d6e0->next is LIST_POISON1
0000000000100100)
CPU: 0 PID: 46 Comm: ext4.exe Not tainted 4.1.0-rc4+ #250
Stack:
 60462947 62219960 602ede24 62219960
 602ede24 603ca293 622198f0 602f02eb
 62219950 6002c12c 62219900 601b4d6b
Call Trace:
 [<6005769c>] ? vprintk_emit+0x2dc/0x5c0
 [<602ede24>] ? printk+0x0/0x94
 [<600190bc>] show_stack+0xdc/0x1a0
 [<602ede24>] ? printk+0x0/0x94
 [<602ede24>] ? printk+0x0/0x94
 [<602f02eb>] dump_stack+0x2a/0x2c
 [<6002c12c>] warn_slowpath_common+0x9c/0xf0
 [<601b4d6b>] ? __list_del_entry+0x6b/0x100
 [<6002c254>] warn_slowpath_fmt+0x94/0xa0
 [<602f4d09>] ? __mutex_lock_slowpath+0x239/0x3a0
 [<6002c1c0>] ? warn_slowpath_fmt+0x0/0xa0
 [<60023ebf>] ? set_signals+0x3f/0x50
 [<600a205a>] ? kmem_cache_free+0x10a/0x180
 [<602f4e88>] ? mutex_lock+0x18/0x30
 [<601b4d6b>] __list_del_entry+0x6b/0x100
 [<601177ec>] ext4_orphan_del+0x22c/0x2f0
 [<6012f27c>] ? __ext4_journal_start_sb+0x2c/0xa0
 [<6010b973>] ? ext4_truncate+0x383/0x390
 [<6010bc8b>] ext4_write_begin+0x30b/0x4b0
 [<6001bb50>] ? copy_from_user+0x0/0xb0
 [<601aa840>] ? iov_iter_fault_in_readable+0xa0/0xc0
 [<60072c4f>] generic_perform_write+0xaf/0x1e0
 [<600c4166>] ? file_update_time+0x46/0x110
 [<60072f0f>] __generic_file_write_iter+0x18f/0x1b0
 [<6010030f>] ext4_file_write_iter+0x15f/0x470
 [<60094e10>] ? unlink_file_vma+0x0/0x70
 [<6009b180>] ? unlink_anon_vmas+0x0/0x260
 [<6008f169>] ? free_pgtables+0xb9/0x100
 [<600a6030>] __vfs_write+0xb0/0x130
 [<600a61d5>] vfs_write+0xa5/0x170
 [<600a63d6>] SyS_write+0x56/0xe0
 [<6029fcb0>] ? __libc_waitpid+0x0/0xa0
 [<6001b698>] handle_syscall+0x68/0x90
 [<6002633d>] userspace+0x4fd/0x600
 [<6002274f>] ? save_registers+0x1f/0x40
 [<60028bd7>] ? arch_prctl+0x177/0x1b0
 [<60017bd5>] fork_handler+0x85/0x90

Fix the problem by using list_del_init() as we always should with
i_orphan list.
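
The difference between the two primitives is easy to demonstrate in a
standalone program. The list helpers below are a stripped-down
re-implementation of the kernel's include/linux/list.h with the same
poison-on-delete behaviour; they are for illustration only.

#include <stdio.h>

struct list_head {
        struct list_head *next, *prev;
};

#define LIST_POISON1 ((struct list_head *)0x100)
#define LIST_POISON2 ((struct list_head *)0x122)

static void INIT_LIST_HEAD(struct list_head *h)
{
        h->next = h;
        h->prev = h;
}

static void list_add(struct list_head *new, struct list_head *head)
{
        new->next = head->next;
        new->prev = head;
        head->next->prev = new;
        head->next = new;
}

static void __list_del(struct list_head *entry)
{
        entry->prev->next = entry->next;
        entry->next->prev = entry->prev;
}

/* list_del(): unlink and poison; deleting the entry again walks poison. */
static void list_del(struct list_head *entry)
{
        __list_del(entry);
        entry->next = LIST_POISON1;
        entry->prev = LIST_POISON2;
}

/* list_del_init(): unlink and re-initialise; a later delete is harmless. */
static void list_del_init(struct list_head *entry)
{
        __list_del(entry);
        INIT_LIST_HEAD(entry);
}

int main(void)
{
        struct list_head orphan_list, inode;

        INIT_LIST_HEAD(&orphan_list);

        list_add(&inode, &orphan_list);
        list_del_init(&inode);   /* entry points at itself afterwards */
        list_del_init(&inode);   /* still safe, nothing to corrupt    */

        list_add(&inode, &orphan_list);
        list_del(&inode);        /* entry now holds the poison values */
        printf("after list_del: next=%p prev=%p\n",
               (void *)inode.next, (void *)inode.prev);
        /* A second list_del(&inode) here would follow LIST_POISON1,
         * which is exactly the corruption the warning above reports. */
        return 0;
}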

CC: stable@vger.kernel.org
Reported-by: Vegard Nossum <vegard.nossum@oracle.com>
Signed-off-by: Jan Kara <jack@suse.cz>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit f245ed015ed94730b9a9fea17029d7d79fa88dd7)
Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years ago  thunderbolt: Fix double free of drom buffer
Andreas Noever [Sun, 10 Apr 2016 10:48:27 +0000 (12:48 +0200)]
thunderbolt: Fix double free of drom buffer

Orabug: 25256924

[ Upstream commit 2ffa9a5d76a75abbc1f95c17959fced666095bdd ]

If tb_drom_read() fails, sw->drom is freed but not set to NULL.  sw->drom
is then freed again in the error path of tb_switch_alloc().

The bug can be triggered by unplugging a thunderbolt device shortly after
it is detected by the thunderbolt driver.

Clear sw->drom if tb_drom_read() fails.
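
The pattern behind the fix is the usual "free and clear" idiom. A tiny
userspace sketch of it follows; struct sw, drom_read() and the caller's
error path are made-up stand-ins for the thunderbolt code, not the real
API.

#include <stdlib.h>

struct sw {
        void *drom;
};

/* Reading may fail after the buffer was allocated; freeing it without
 * clearing the pointer would let a later error path free it again. */
static int drom_read(struct sw *sw, int fail)
{
        sw->drom = malloc(256);
        if (!sw->drom)
                return -1;
        if (fail) {
                free(sw->drom);
                sw->drom = NULL;        /* the fix: clear after freeing */
                return -1;
        }
        return 0;
}

int main(void)
{
        struct sw sw = { 0 };

        if (drom_read(&sw, 1) < 0) {
                /* Caller's error path: with drom cleared this is a
                 * harmless free(NULL) instead of a double free. */
                free(sw.drom);
        }
        return 0;
}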

[bhelgaas: add Fixes:, stable versions of interest]
Fixes: 343fcb8c70d7 ("thunderbolt: Fix nontrivial endpoint devices.")
Signed-off-by: Andreas Noever <andreas.noever@gmail.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
CC: stable@vger.kernel.org # v3.17+
CC: Lukas Wunner <lukas@wunner.de>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 1dab35a8a62e5e7c49bf89b844b6efbdebdd34c1)
Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years ago  Fix OpenSSH pty regression on close
Brian Bloniarz [Sun, 6 Mar 2016 21:16:30 +0000 (13:16 -0800)]
Fix OpenSSH pty regression on close

Orabug: 25256923

[ Upstream commit 0f40fbbcc34e093255a2b2d70b6b0fb48c3f39aa ]

OpenSSH expects the (non-blocking) read() of pty master to return
EAGAIN only if it has received all of the slave-side output after
it has received SIGCHLD. This used to work on pre-3.12 kernels.

This fix effectively forces non-blocking read() and poll() to
block for parallel i/o to complete for all ttys. It also unwinds
these changes:

1) f8747d4a466ab2cafe56112c51b3379f9fdb7a12
   tty: Fix pty master read() after slave closes

2) 52bce7f8d4fc633c9a9d0646eef58ba6ae9a3b73
   pty, n_tty: Simplify input processing on final close

3) 1a48632ffed61352a7810ce089dc5a8bcd505a60
   pty: Fix input race when closing

Inspired by analysis and patch from Marc Aurele La France <tsi@tuyoix.net>

Reported-by: Volth <openssh@volth.com>
Reported-by: Marc Aurele La France <tsi@tuyoix.net>
BugLink: https://bugzilla.mindrot.org/show_bug.cgi?id=52
BugLink: https://bugzilla.mindrot.org/show_bug.cgi?id=2492
Signed-off-by: Brian Bloniarz <brian.bloniarz@gmail.com>
Reviewed-by: Peter Hurley <peter@hurleysoftware.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 29a7543e33dbb168c1e044a3ad411ce96286924b)
Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years ago  ext4: clean up error handling when orphan list is corrupted
Theodore Ts'o [Sat, 30 Apr 2016 04:49:54 +0000 (00:49 -0400)]
ext4: clean up error handling when orphan list is corrupted

Orabug: 25256922

[ Upstream commit 7827a7f6ebfcb7f388dc47fddd48567a314701ba ]

Instead of just printing warning messages, if the orphan list is
corrupted, declare the file system is corrupted.  If there are any
reserved inodes in the orphaned inode list, declare the file system
corrupted and stop right away to avoid doing more potential damage to
the file system.

Cc: stable@vger.kernel.org
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit e0934da7eb12fe444a9b9225a9f2c9281d3e2b8a)
Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years ago  ext4: fix hang when processing corrupted orphaned inode list
Theodore Ts'o [Sat, 30 Apr 2016 04:48:54 +0000 (00:48 -0400)]
ext4: fix hang when processing corrupted orphaned inode list

Orabug: 25256921

[ Upstream commit c9eb13a9105e2e418f72e46a2b6da3f49e696902 ]

If the orphaned inode list contains inode #5, ext4_iget() returns a
bad inode (since the bootloader inode should never be referenced
directly).  Because of the bad inode, we end up processing the inode
repeatedly and this hangs the machine.

This can be reproduced via:

   mke2fs -t ext4 /tmp/foo.img 100
   debugfs -w -R "ssv last_orphan 5" /tmp/foo.img
   mount -o loop /tmp/foo.img /mnt

(But don't do this if you are using an unpatched kernel if you care
about the system staying functional.  :-)

This bug was found by the port of American Fuzzy Lop into the kernel
to find file system problems[1].  (Since it *only* happens if inode #5
shows up on the orphan list --- 3, 7, 8, etc. won't do it, it's not
surprising that AFL needed two hours before it found it.)

[1] http://events.linuxfoundation.org/sites/events/files/slides/AFL%20filesystem%20fuzzing%2C%20Vault%202016_0.pdf

Cc: stable@vger.kernel.org
Reported-by: Vegard Nossum <vegard.nossum@oracle.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 06c6dcb86328d114709b2f464ce9c6f015acad70)
Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years ago  staging: comedi: das1800: fix possible NULL dereference
H Hartley Sweeten [Fri, 8 Apr 2016 17:14:58 +0000 (10:14 -0700)]
staging: comedi: das1800: fix possible NULL dereference

Orabug: 25256919

[ Upstream commit d375278d666760e195693b57415ba0a125cadd55 ]

DMA is optional with this driver. If it was not enabled the devpriv->dma
pointer will be NULL.

Fix the possible NULL pointer dereference when trying to disable the DMA
channels in das1800_ai_cancel() and tidy up the comments to fix the
checkpatch.pl issues:
WARNING: line over 80 characters

It's probably harmless in das1800_ai_setup_dma() because the 'desc' pointer
will not be used if DMA is disabled but fix it there also.

Fixes: 99dfc3357e98 ("staging: comedi: das1800: remove depends on ISA_DMA_API limitation")
Cc: <stable@vger.kernel.org> # 4.0+
Signed-off-by: H Hartley Sweeten <hsweeten@visionengravers.com>
Reviewed-by: Ian Abbott <abbotti@mev.co.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 788da6ee7dc349b45047212f7dbce31e3fdc82b0)
Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years ago  serial: 8250_pci: fix divide error bug if baud rate is 0
David Müller [Wed, 27 Apr 2016 09:58:32 +0000 (11:58 +0200)]
serial: 8250_pci: fix divide error bug if baud rate is 0

Orabug: 25256918

[ Upstream commit 6f210c18c1c0f016772c8cd51ae12a02bfb9e7ef ]

Since commit 21947ba654a6 ("serial: 8250_pci: replace switch-case by
formula"), the 8250 driver crashes in the byt_set_termios() function
with a divide error. This is caused by the fact that a baud rate of 0 (B0)
is not handled properly. Fix it by falling back to B9600 in this case.
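
A minimal sketch of the guard, assuming a classic UART divisor
calculation; the real byt_set_termios() computes its ratio differently,
so the clock value and formula below are only illustrative.

#include <stdio.h>

/* A requested rate of 0 (B0) would otherwise divide by zero. */
static unsigned int uart_divisor(unsigned int clock_hz, unsigned int baud)
{
        if (baud == 0)          /* the fix: fall back to 9600 baud for B0 */
                baud = 9600;
        return clock_hz / (16 * baud);
}

int main(void)
{
        printf("divisor for B0:     %u\n", uart_divisor(1843200, 0));
        printf("divisor for 115200: %u\n", uart_divisor(1843200, 115200));
        return 0;
}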

Signed-off-by: David Müller <d.mueller@elsoft.ch>
Fixes: 21947ba654a6 ("serial: 8250_pci: replace switch-case by formula")
Cc: stable@vger.kernel.org
Suggested-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit c91d8c2d38843168a5b58bdc6b809495f5f31863)
Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years ago  usb: gadget: mass_storage: Fix freeing luns sysfs implementation
Krzysztof Opasiak [Fri, 31 Jul 2015 11:37:45 +0000 (13:37 +0200)]
usb: gadget: mass_storage: Fix freeing luns sysfs implementation

Orabug: 25256917

[ Upstream commit 5542f58c95aef67bc9016855f7c0bd6117b43100 ]

Use device_is_registered() instead of the sysfs flag to determine if
we should free the sysfs representation of a particular LUN.

The sysfs flag in fsg_common determines whether LUN attributes should
be exposed using sysfs. This flag is used when creating and freeing
LUNs. Unfortunately, there is no guarantee that this flag will not
change between the creation and removal of a particular LUN, especially
because of lun.0, which is created while allocating the function
instance. This may lead to a resource leak or a NULL pointer
dereference:

[   62.539925] Unable to handle kernel NULL pointer dereference at virtual address 00000044
[   62.548014] pgd = ec994000
[   62.550679] [00000044] *pgd=6d7be831, *pte=00000000, *ppte=00000000
[   62.556933] Internal error: Oops: 17 [#1] PREEMPT SMP ARM
[   62.562310] Modules linked in: g_mass_storage(+)
[   62.566916] CPU: 2 PID: 613 Comm: insmod Not tainted 4.2.0-rc4-00077-ge29ee91-dirty #125
[   62.574984] Hardware name: SAMSUNG EXYNOS (Flattened Device Tree)
[   62.581061] task: eca56e80 ti: eca76000 task.ti: eca76000
[   62.586450] PC is at kernfs_find_ns+0x8/0xe8
[   62.590698] LR is at kernfs_find_and_get_ns+0x30/0x48
[   62.595732] pc : [<c01277c0>]    lr : [<c0127b88>]    psr: 40010053
[   62.595732] sp : eca77c40  ip : eca77c38  fp : 000008c1
[   62.607187] r10: 00000001  r9 : c0082f38  r8 : ed41ce40
[   62.612395] r7 : c05c1484  r6 : 00000000  r5 : 00000000  r4 : c0814488
[   62.618904] r3 : 00000000  r2 : 00000000  r1 : c05c1484  r0 : 00000000
[   62.625417] Flags: nZcv  IRQs on  FIQs off  Mode SVC_32  ISA ARM  Segment user
[   62.632620] Control: 10c5387d  Table: 6c99404a  DAC: 00000015
[   62.638348] Process insmod (pid: 613, stack limit = 0xeca76210)
[   62.644251] Stack: (0xeca77c40 to 0xeca78000)
[   62.648594] 7c40: c0814488 00000000 00000000 c05c1484 ed41ce40 c0127b88 00000000 c0824888
[   62.656753] 7c60: ed41d038 ed41d030 ed41d000 c012af4c 00000000 c0824858 ed41d038 c02e3314
[   62.664912] 7c80: ed41d030 00000000 ed41ce04 c02d9e8c c070eda8 eca77cb4 000008c1 c058317c
[   62.673071] 7ca0: 000008c1 ed41d030 ed41ce00 ed41ce04 ed41d000 c02da044 ed41cf48 c0375870
[   62.681230] 7cc0: ed9d3c04 ed9d3c00 ed52df80 bf000940 fffffff0 c03758f4 c03758c0 00000000
[   62.689389] 7ce0: bf000564 c03614e0 ed9d3c04 bf000194 c0082f38 00000001 00000000 c0000100
[   62.697548] 7d00: c0814488 c0814488 c086b1dc c05893a8 00000000 ed7e8320 00000000 c0128b88
[   62.705707] 7d20: ed8a6b40 00000000 00000000 ed410500 ed8a6b40 c0594818 ed7e8320 00000000
[   62.713867] 7d40: 00000000 c0129f20 00000000 c082c444 ed8a6b40 c012a684 00001000 00000000
[   62.722026] 7d60: c0594818 c082c444 00000000 00000000 ed52df80 ed52df80 00000000 00000000
[   62.730185] 7d80: 00000000 00000000 00000001 00000002 ed8e9b70 ed52df80 bf0006d0 00000000
[   62.738345] 7da0: ed8e9b70 ed410500 ed618340 c036129c ed8c1c00 bf0006d0 c080b158 ed8c1c00
[   62.746504] 7dc0: bf0006d0 c080b158 ed8c1c08 ed410500 c0082f38 ed618340 000008c1 c03640ac
[   62.754663] 7de0: 00000000 bf0006d0 c082c8dc c080b158 c080b158 c03642d4 00000000 bf003000
[   62.762822] 7e00: 00000000 c0009784 00000000 00000001 00000000 c05849b0 00000002 ee7ab780
[   62.770981] 7e20: 00000002 ed4105c0 0000c53e 000000d0 c0808600 eca77e5c 00000004 00000000
[   62.779140] 7e40: bf000000 c0095680 c08075a0 ee001f00 ed4105c0 c00cadc0 ed52df80 bf000780
[   62.787300] 7e60: ed4105c0 bf000780 00000001 bf0007c8 c0082f38 ed618340 000008c1 c0083e24
[   62.795459] 7e80: 00000001 bf000780 00000001 eca77f58 00000001 bf000780 00000001 c00857f4
[   62.803618] 7ea0: bf00078c 00007fff 00000000 c00835b4 eca77f58 00000000 c0082fac eca77f58
[   62.811777] 7ec0: f05038c0 0003b008 bf000904 00000000 00000000 bf00078c 6e72656b 00006c65
[   62.819936] 7ee0: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[   62.828095] 7f00: 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000
[   62.836255] 7f20: 00000000 00000000 00000000 00000000 00000000 00000000 00000003 0003b008
[   62.844414] 7f40: 0000017b c000f5c8 eca76000 00000000 0003b008 c0085df8 f04ef000 0001b8a9
[   62.852573] 7f60: f0503258 f05030c2 f0509fe8 00000968 00000dc8 00000000 00000000 00000000
[   62.860732] 7f80: 00000029 0000002a 00000011 00000000 0000000a 00000000 33f6eb00 0003b008
[   62.868892] 7fa0: bef01cac c000f400 33f6eb00 0003b008 00000003 0003b008 00000000 00000003
[   62.877051] 7fc0: 33f6eb00 0003b008 bef01cac 0000017b 00000000 0003b008 0000000b 0003b008
[   62.885210] 7fe0: bef01ae0 bef01ad0 0001dc23 b6e8c162 800b0070 00000003 c0c0c0c0 c0c0c0c0
[   62.893380] [<c01277c0>] (kernfs_find_ns) from [<c0824888>] (pm_qos_latency_tolerance_attr_group+0x0/0x10)
[   62.903005] Code: e28dd00c e8bd80f0 e92d41f0 e2923000 (e1d0e4b4)
[   62.909115] ---[ end trace 02fb4373ef095c7b ]---

Acked-by: Michal Nazarewicz <mina86@mina86.com>
Signed-off-by: Krzysztof Opasiak <k.opasiak@samsung.com>
Signed-off-by: Felipe Balbi <balbi@ti.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit a27b3984f7fdea785e67195f6de3d06c4835e325)
Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years ago  Bluetooth: vhci: purge unhandled skbs
Jiri Slaby [Sat, 19 Mar 2016 10:49:43 +0000 (11:49 +0100)]
Bluetooth: vhci: purge unhandled skbs

Orabug: 25256916

[ Upstream commit 13407376b255325fa817798800117a839f3aa055 ]

The write handler allocates skbs and queues them into data->readq.
The read side should read them, if there are any. If there are none,
the skbs should be dropped by hdev->flush. But this happens only if the
device is HCI_UP, i.e. the hdev->power_on work was already triggered.
When it was not, the skbs stay allocated in the queue when /dev/vhci is
closed. So purge the queue in ->release.

Program to reproduce:
#include <err.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#include <sys/stat.h>
#include <sys/types.h>
#include <sys/uio.h>

int main()
{
        char buf[] = { 0xff, 0 };
        struct iovec iov = {
                .iov_base = buf,
                .iov_len = sizeof(buf),
        };
        int fd;

        while (1) {
                fd = open("/dev/vhci", O_RDWR);
                if (fd < 0)
                        err(1, "open");

                usleep(50);

                if (writev(fd, &iov, 1) < 0)
                        err(1, "writev");

                usleep(50);

                close(fd);
        }

        return 0;
}

Result:
kmemleak: 4609 new suspected memory leaks
unreferenced object 0xffff88059f4d5440 (size 232):
  comm "vhci", pid 1084, jiffies 4294912542 (age 37569.296s)
  hex dump (first 32 bytes):
    20 f0 23 87 05 88 ff ff 20 f0 23 87 05 88 ff ff   .#..... .#.....
    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
  backtrace:
...
    [<ffffffff81ece010>] __alloc_skb+0x0/0x5a0
    [<ffffffffa021886c>] vhci_create_device+0x5c/0x580 [hci_vhci]
    [<ffffffffa0219436>] vhci_write+0x306/0x4c8 [hci_vhci]

Fixes: 23424c0d31 (Bluetooth: Add support creating virtual AMP controllers)
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
Cc: stable 3.13+ <stable@vger.kernel.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit b2df540f5e80e7d8106fe3e8113f22bc359ed15c)
Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years ago  Bluetooth: vhci: fix open_timeout vs. hdev race
Jiri Slaby [Sat, 19 Mar 2016 10:05:18 +0000 (11:05 +0100)]
Bluetooth: vhci: fix open_timeout vs. hdev race

Orabug: 25256915

[ Upstream commit 373a32c848ae3a1c03618517cce85f9211a6facf ]

Both vhci_get_user and vhci_release race with the open_timeout work.
They both contain cancel_delayed_work_sync, but do not test whether the
work actually created hdev or not. Since the work can be in progress
and _sync will wait for it to finish, data->hdev can be allocated by
the time cancel_delayed_work_sync returns. But the call sites check 'if
(data->hdev)' *before* cancel_delayed_work_sync.

As a result:
* vhci_get_user allocates a second hdev and puts it into
  data->hdev. The former is leaked.
* vhci_release does not release data->hdev properly as it thinks there
  is none.

Fix both cases by moving the actual test *after* the call to
cancel_delayed_work_sync.

This can be hit by this program:
#include <err.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

#include <sys/stat.h>
#include <sys/types.h>

int main(int argc, char **argv)
{
        int fd;

        srand(time(NULL));

        while (1) {
                const int delta = (rand() % 200 - 100) * 100;

                fd = open("/dev/vhci", O_RDWR);
                if (fd < 0)
                        err(1, "open");

                usleep(1000000 + delta);

                close(fd);
        }

        return 0;
}

And the result is:
BUG: KASAN: use-after-free in skb_queue_tail+0x13e/0x150 at addr ffff88006b0c1228
Read of size 8 by task kworker/u13:1/32068
=============================================================================
BUG kmalloc-192 (Tainted: G            E     ): kasan: bad access detected
-----------------------------------------------------------------------------

Disabling lock debugging due to kernel taint
INFO: Allocated in vhci_open+0x50/0x330 [hci_vhci] age=260 cpu=3 pid=32040
...
kmem_cache_alloc_trace+0x150/0x190
vhci_open+0x50/0x330 [hci_vhci]
misc_open+0x35b/0x4e0
chrdev_open+0x23b/0x510
...
INFO: Freed in vhci_release+0xa4/0xd0 [hci_vhci] age=9 cpu=2 pid=32040
...
__slab_free+0x204/0x310
vhci_release+0xa4/0xd0 [hci_vhci]
...
INFO: Slab 0xffffea0001ac3000 objects=16 used=13 fp=0xffff88006b0c1e00 flags=0x5fffff80004080
INFO: Object 0xffff88006b0c1200 @offset=4608 fp=0xffff88006b0c0600
Bytes b4 ffff88006b0c11f0: 09 df 00 00 01 00 00 00 00 00 00 00 00 00 00 00  ................
Object ffff88006b0c1200: 00 06 0c 6b 00 88 ff ff 00 00 00 00 00 00 00 00  ...k............
Object ffff88006b0c1210: 10 12 0c 6b 00 88 ff ff 10 12 0c 6b 00 88 ff ff  ...k.......k....
Object ffff88006b0c1220: c0 46 c2 6b 00 88 ff ff c0 46 c2 6b 00 88 ff ff  .F.k.....F.k....
Object ffff88006b0c1230: 01 00 00 00 01 00 00 00 e0 ff ff ff 0f 00 00 00  ................
Object ffff88006b0c1240: 40 12 0c 6b 00 88 ff ff 40 12 0c 6b 00 88 ff ff  @..k....@..k....
Object ffff88006b0c1250: 50 0d 6e a0 ff ff ff ff 00 02 00 00 00 00 ad de  P.n.............
Object ffff88006b0c1260: 00 00 00 00 00 00 00 00 ab 62 02 00 01 00 00 00  .........b......
Object ffff88006b0c1270: 90 b9 19 81 ff ff ff ff 38 12 0c 6b 00 88 ff ff  ........8..k....
Object ffff88006b0c1280: 03 00 20 00 ff ff ff ff ff ff ff ff 00 00 00 00  .. .............
Object ffff88006b0c1290: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Object ffff88006b0c12a0: 00 00 00 00 00 00 00 00 00 80 cd 3d 00 88 ff ff  ...........=....
Object ffff88006b0c12b0: 00 20 00 00 00 00 00 00 00 00 00 00 00 00 00 00  . ..............
Redzone ffff88006b0c12c0: bb bb bb bb bb bb bb bb                          ........
Padding ffff88006b0c13f8: 00 00 00 00 00 00 00 00                          ........
CPU: 3 PID: 32068 Comm: kworker/u13:1 Tainted: G    B       E      4.4.6-0-default #1
Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.8.1-0-g4adadbd-20151112_172657-sheep25 04/01/2014
Workqueue: hci0 hci_cmd_work [bluetooth]
 00000000ffffffff ffffffff81926cfa ffff88006be37c68 ffff88006bc27180
 ffff88006b0c1200 ffff88006b0c1234 ffffffff81577993 ffffffff82489320
 ffff88006bc24240 0000000000000046 ffff88006a100000 000000026e51eb80
Call Trace:
...
 [<ffffffff81ec8ebe>] ? skb_queue_tail+0x13e/0x150
 [<ffffffffa06e027c>] ? vhci_send_frame+0xac/0x100 [hci_vhci]
 [<ffffffffa0c61268>] ? hci_send_frame+0x188/0x320 [bluetooth]
 [<ffffffffa0c61515>] ? hci_cmd_work+0x115/0x310 [bluetooth]
 [<ffffffff811a1375>] ? process_one_work+0x815/0x1340
 [<ffffffff811a1f85>] ? worker_thread+0xe5/0x11f0
 [<ffffffff811a1ea0>] ? process_one_work+0x1340/0x1340
 [<ffffffff811b3c68>] ? kthread+0x1c8/0x230
...
Memory state around the buggy address:
 ffff88006b0c1100: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
 ffff88006b0c1180: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>ffff88006b0c1200: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                  ^
 ffff88006b0c1280: fb fb fb fb fb fb fb fb fc fc fc fc fc fc fc fc
 ffff88006b0c1300: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc

Fixes: 23424c0d31 (Bluetooth: Add support creating virtual AMP controllers)
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: stable 3.13+ <stable@vger.kernel.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 497519c4a3bf22c5d7fa2e6da6424e07e6ac543c)
Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years ago  xfs: disallow rw remount on fs with unknown ro-compat features
Eric Sandeen [Tue, 5 Apr 2016 21:05:41 +0000 (07:05 +1000)]
xfs: disallow rw remount on fs with unknown ro-compat features

Orabug: 25256914

[ Upstream commit d0a58e833931234c44e515b5b8bede32bd4e6eed ]

Today, a kernel which refuses to mount a filesystem read-write
due to unknown ro-compat features can still transition to read-write
via the remount path.  The old kernel is most likely none the wiser,
because it's unaware of the new feature, and isn't using it.  However,
writing to the filesystem may well corrupt metadata related to that
new feature, and moving to a newer kernel which understands the feature
will have problems.

Right now the only ro-compat feature we have is the free inode btree,
which showed up in v3.16.  It would be good to push this back to
all the active stable kernels, I think, so that if anyone is using
newer mkfs (which enables the finobt feature) with older kernel
releases, they'll be protected.
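
Conceptually the fix is a feature-bit check on the read-write remount
path. A hedged userspace sketch of that check follows; the bit values
are illustrative, with the free inode btree (finobt) standing in for the
only real ro-compat feature mentioned above.

#include <stdbool.h>
#include <stdio.h>

#define RO_COMPAT_FINOBT  0x1           /* known ro-compat feature */
#define RO_COMPAT_KNOWN   (RO_COMPAT_FINOBT)

/* Refuse a read-write (re)mount when the superblock advertises
 * ro-compat features this kernel does not understand. */
static bool rw_mount_allowed(unsigned int sb_ro_compat_features)
{
        return (sb_ro_compat_features & ~RO_COMPAT_KNOWN) == 0;
}

int main(void)
{
        printf("known features only: %s\n",
               rw_mount_allowed(RO_COMPAT_FINOBT) ? "rw ok" : "ro only");
        printf("unknown feature set: %s\n",
               rw_mount_allowed(RO_COMPAT_FINOBT | 0x8) ? "rw ok" : "ro only");
        return 0;
}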

Cc: <stable@vger.kernel.org> # 3.10.x-
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Bill O'Donnell <billodo@redhat.com>
Reviewed-by: Dave Chinner <dchinner@redhat.com>
Signed-off-by: Dave Chinner <david@fromorbit.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 9c153a82fb6f9a177d0ec2034005b4245973a28f)
Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years ago  perf/x86/intel/pt: Generate PMI in the STOP region as well
Sasha Levin [Mon, 30 May 2016 21:50:36 +0000 (17:50 -0400)]
perf/x86/intel/pt: Generate PMI in the STOP region as well

Orabug: 25256913

[ Upstream commit ab92b232ae05c382c3df0e3d6a5c6d16b639ac8c ]

Currently, the PT driver always sets the PMI bit one region (page) before
the STOP region so that we can wake up the consumer before we run out of
room in the buffer and have to disable the event. However, we also need
an interrupt in the last output region, so that we actually get to disable
the event (if no more room from new data is available at that point),
otherwise hardware just quietly refuses to start, but the event is
scheduled in and we end up losing trace data till the event gets removed.

For a cpu-wide event it is even worse since there may not be any
re-scheduling at all and no chance for the ring buffer code to notice
that its buffer is filled up and the event needs to be disabled (so that
the consumer can re-enable it when it finishes reading the data out). In
other words, all the trace data will be lost after the buffer gets filled
up.

This patch makes PT also generate a PMI when the last output region is
full.

Reported-by: Markus Metzger <markus.t.metzger@intel.com>
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: <stable@vger.kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: vince@deater.net
Link: http://lkml.kernel.org/r/1462886313-13660-2-git-send-email-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 9a243f2469d31682394673bfe37954789ba50176)
Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years ago  efi: Fix out-of-bounds read in variable_matches()
Laszlo Ersek [Fri, 2 Dec 2016 22:18:05 +0000 (14:18 -0800)]
efi: Fix out-of-bounds read in variable_matches()

Orabug: 25227159

[ Upstream commit 630ba0cc7a6dbafbdee43795617c872b35cde1b4 ]

The variable_matches() function can currently read "var_name[len]", for
example when:

 - var_name[0] == 'a',
 - len == 1
 - match_name points to the NUL-terminated string "ab".

This function is supposed to accept "var_name" inputs that are not
NUL-terminated (hence the "len" parameter). Document the function, and
access "var_name[*match]" only if "*match" is smaller than "len".

Reported-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Cc: Peter Jones <pjones@redhat.com>
Cc: Matthew Garrett <mjg59@coreos.com>
Cc: Jason Andryuk <jandryuk@gmail.com>
Cc: Jani Nikula <jani.nikula@linux.intel.com>
Cc: <stable@vger.kernel.org> # v3.10+
Link: http://thread.gmane.org/gmane.comp.freedesktop.xorg.drivers.intel/86906
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 83619523130164cb525e35b6001755cd185055fb)

Conflicts:

drivers/firmware/efi/vars.c

Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years ago  nf_conntrack: avoid kernel pointer value leak in slab name
Linus Torvalds [Sat, 14 May 2016 18:11:44 +0000 (11:11 -0700)]
nf_conntrack: avoid kernel pointer value leak in slab name

Orabug: 25227158

[ Upstream commit 31b0b385f69d8d5491a4bca288e25e63f1d945d0 ]

The slab name ends up being visible in the directory structure under
/sys, and even if you don't have access rights to the file you can see
the filenames.

Just use a 64-bit counter instead of the pointer to the 'net' structure
to generate a unique name.

This code will go away in 4.7 when the conntrack code moves to a single
kmemcache, but this is the backportable simple solution to avoiding
leaking kernel pointers to user space.
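
The idea reduces to deriving the slab name from a counter rather than
from a kernel pointer, roughly as in the sketch below; the name format
and the plain (non-atomic) counter are illustrative only.

#include <stdio.h>

static unsigned long long conntrack_cachep_id;

/* Old scheme: embed the 'net' pointer in the name, which then shows up
 * under /sys and leaks a kernel address.  New scheme: a monotonically
 * increasing counter is just as unique and leaks nothing. */
static void make_slab_name(char *buf, size_t size)
{
        snprintf(buf, size, "nf_conntrack_%llu", ++conntrack_cachep_id);
}

int main(void)
{
        char name[64];

        make_slab_name(name, sizeof(name));
        printf("%s\n", name);   /* nf_conntrack_1 */
        make_slab_name(name, sizeof(name));
        printf("%s\n", name);   /* nf_conntrack_2 */
        return 0;
}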

Fixes: 5b3501faa874 ("netfilter: nf_conntrack: per netns nf_conntrack_cachep")
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Acked-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: stable@vger.kernel.org
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 8de861b16d946b7c0ce246fcbb65d550c357c65c)

Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years ago  workqueue: fix rebind bound workers warning
Wanpeng Li [Wed, 11 May 2016 09:55:18 +0000 (17:55 +0800)]
workqueue: fix rebind bound workers warning

Orabug: 25227155

[ Upstream commit f7c17d26f43d5cc1b7a6b896cd2fa24a079739b9 ]

------------[ cut here ]------------
WARNING: CPU: 0 PID: 16 at kernel/workqueue.c:4559 rebind_workers+0x1c0/0x1d0
Modules linked in:
CPU: 0 PID: 16 Comm: cpuhp/0 Not tainted 4.6.0-rc4+ #31
Hardware name: IBM IBM System x3550 M4 Server -[7914IUW]-/00Y8603, BIOS -[D7E128FUS-1.40]- 07/23/2013
 0000000000000000 ffff881037babb58 ffffffff8139d885 0000000000000010
 0000000000000000 0000000000000000 0000000000000000 ffff881037babba8
 ffffffff8108505d ffff881037ba0000 000011cf3e7d6e60 0000000000000046
Call Trace:
 dump_stack+0x89/0xd4
 __warn+0xfd/0x120
 warn_slowpath_null+0x1d/0x20
 rebind_workers+0x1c0/0x1d0
 workqueue_cpu_up_callback+0xf5/0x1d0
 notifier_call_chain+0x64/0x90
 ? trace_hardirqs_on_caller+0xf2/0x220
 ? notify_prepare+0x80/0x80
 __raw_notifier_call_chain+0xe/0x10
 __cpu_notify+0x35/0x50
 notify_down_prepare+0x5e/0x80
 ? notify_prepare+0x80/0x80
 cpuhp_invoke_callback+0x73/0x330
 ? __schedule+0x33e/0x8a0
 cpuhp_down_callbacks+0x51/0xc0
 cpuhp_thread_fun+0xc1/0xf0
 smpboot_thread_fn+0x159/0x2a0
 ? smpboot_create_threads+0x80/0x80
 kthread+0xef/0x110
 ? wait_for_completion+0xf0/0x120
 ? schedule_tail+0x35/0xf0
 ret_from_fork+0x22/0x50
 ? __init_kthread_worker+0x70/0x70
---[ end trace eb12ae47d2382d8f ]---
notify_down_prepare: attempt to take down CPU 0 failed

This bug can be reproduced with the config below, with nohz_full= covering all CPUs:

CONFIG_BOOTPARAM_HOTPLUG_CPU0=y
CONFIG_DEBUG_HOTPLUG_CPU0=y
CONFIG_NO_HZ_FULL=y

As Thomas pointed out:

| If a down prepare callback fails, then DOWN_FAILED is invoked for all
| callbacks which have successfully executed DOWN_PREPARE.
|
| But, workqueue has actually two notifiers. One which handles
| UP/DOWN_FAILED/ONLINE and one which handles DOWN_PREPARE.
|
| Now look at the priorities of those callbacks:
|
| CPU_PRI_WORKQUEUE_UP        = 5
| CPU_PRI_WORKQUEUE_DOWN      = -5
|
| So the call order on DOWN_PREPARE is:
|
| CB 1
| CB ...
| CB workqueue_up() -> Ignores DOWN_PREPARE
| CB ...
| CB X ---> Fails
|
| So we call up to CB X with DOWN_FAILED
|
| CB 1
| CB ...
| CB workqueue_up() -> Handles DOWN_FAILED
| CB ...
| CB X-1
|
| So the problem is that the workqueue stuff handles DOWN_FAILED in the up
| callback, while it should do it in the down callback. Which is not a good idea
| either because it wants to be called early on rollback...
|
| Brilliant stuff, isn't it? The hotplug rework will solve this problem because
| the callbacks become symmetric, but for the existing mess, we need some
| workaround in the workqueue code.

The boot CPU handles housekeeping duty (unbound timers, workqueues,
timekeeping, ...) on behalf of full dynticks CPUs. It must remain
online when nohz full is enabled. There is a priority assigned to every
notifier_block:

workqueue_cpu_up > tick_nohz_cpu_down > workqueue_cpu_down

So the tick_nohz_cpu_down callback fails when down-preparing CPU 0,
and the notifier_blocks behind tick_nohz_cpu_down will not be called
any more, which means the workers are actually not unbound. The hotplug
state machine then falls back to undo and onlines CPU 0 again. Workers
are rebound unconditionally even though they were never unbound,
triggering the warning in this process.

This patch fixes it by catching the !DISASSOCIATED case to avoid
rebinding bound workers.

Cc: Tejun Heo <tj@kernel.org>
Cc: Lai Jiangshan <jiangshanlai@gmail.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Frédéric Weisbecker <fweisbec@gmail.com>
Cc: stable@vger.kernel.org
Suggested-by: Lai Jiangshan <jiangshanlai@gmail.com>
Signed-off-by: Wanpeng Li <wanpeng.li@hotmail.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 3a1b9a74de5c10995d1fd842e3d2344d4e826ae1)

Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years ago  tools lib traceevent: Do not reassign parg after collapse_tree()
Steven Rostedt [Wed, 11 May 2016 19:09:36 +0000 (15:09 -0400)]
tools lib traceevent: Do not reassign parg after collapse_tree()

Orabug: 25227153

[ Upstream commit 106b816cb46ebd87408b4ed99a2e16203114daa6 ]

At the end of process_filter(), collapse_tree() was changed to update
the parg parameter, but the reassignment after the call wasn't removed.

What happens is that "current_op" gets modified and freed, and parg is
assigned to the newly allocated argument. But after the call to
collapse_tree(), parg is assigned again to the just-freed "current_op",
and this causes the tool to crash.

The current_op variable must also be assigned to NULL in case of error,
otherwise it will cause it to be free()ed twice.
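
The corrected calling convention can be illustrated with a standalone
program: the helper consumes (frees) its input and hands the result back
through an out parameter, so the caller must not reassign from, or free,
the consumed pointer afterwards. All names below are toy stand-ins, not
the real traceevent API.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Toy stand-in for a filter argument node. */
struct arg {
        char name[64];
};

/*
 * Like the reworked collapse_tree(): it consumes 'node' (frees it) and
 * hands the result back through 'out'.  After the call the caller must
 * only use *out; the pointer it passed in is gone.
 */
static int collapse(struct arg *node, struct arg **out)
{
        struct arg *collapsed = malloc(sizeof(*collapsed));

        if (!collapsed) {
                free(node);
                *out = NULL;    /* error: leave nothing to free twice */
                return -1;
        }
        snprintf(collapsed->name, sizeof(collapsed->name),
                 "collapsed(%s)", node->name);
        free(node);             /* 'node' is dead from here on */
        *out = collapsed;
        return 0;
}

int main(void)
{
        struct arg *current_op = malloc(sizeof(*current_op));
        struct arg *parg = NULL;

        if (!current_op)
                return 1;
        strcpy(current_op->name, "op");

        if (collapse(current_op, &parg))
                return 1;
        current_op = NULL;      /* don't keep a pointer to freed memory */

        /* The bug pattern was the equivalent of "parg = current_op;"
         * right here, resurrecting the just-freed pointer. */
        printf("%s\n", parg->name);
        free(parg);
        return 0;
}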

Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Acked-by: Namhyung Kim <namhyung@kernel.org>
Cc: stable@vger.kernel.org # 3.14+
Fixes: 42d6194d133c ("tools lib traceevent: Refactor process_filter()")
Link: http://lkml.kernel.org/r/20160511150936.678c18a1@gandalf.local.home
Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 47348e3fcc9aeeb64d792d05c735b37b2c22fbac)

Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years ago  perf/core: Disable the event on a truncated AUX record
Alexander Shishkin [Tue, 10 May 2016 13:18:33 +0000 (16:18 +0300)]
perf/core: Disable the event on a truncated AUX record

Orabug: 25227152

[ Upstream commit 9f448cd3cbcec8995935e60b27802ae56aac8cc0 ]

When the PMU driver reports a truncated AUX record, it effectively means
that there is no more usable room in the event's AUX buffer (even though
there may still be some room, so that perf_aux_output_begin() doesn't take
action). At this point the consumer still has to be woken up and the event
has to be disabled, otherwise the event will just keep spinning between
perf_aux_output_begin() and perf_aux_output_end() until its context gets
unscheduled.

Again, for cpu-wide events this means never, so once in this condition,
they will be forever losing data.

Fix this by disabling the event and waking up the consumer in case of a
truncated AUX record.

Reported-by: Markus Metzger <markus.t.metzger@intel.com>
Signed-off-by: Alexander Shishkin <alexander.shishkin@linux.intel.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: <stable@vger.kernel.org>
Cc: Arnaldo Carvalho de Melo <acme@infradead.org>
Cc: Arnaldo Carvalho de Melo <acme@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Jiri Olsa <jolsa@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Stephane Eranian <eranian@google.com>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Vince Weaver <vincent.weaver@maine.edu>
Cc: vince@deater.net
Link: http://lkml.kernel.org/r/1462886313-13660-3-git-send-email-alexander.shishkin@linux.intel.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 6f24160677ab5a2783b7f1f2ea5c2a23526d494e)

Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years ago  x86/tsc: Read all ratio bits from MSR_PLATFORM_INFO
Chen Yu [Fri, 6 May 2016 03:33:39 +0000 (11:33 +0800)]
x86/tsc: Read all ratio bits from MSR_PLATFORM_INFO

Orabug: 25227151

[ Upstream commit 886123fb3a8656699dff40afa0573df359abeb18 ]

Currently we read the TSC ratio as: ratio = (MSR_PLATFORM_INFO >> 8) & 0x1f;

Thus we get bits 8-12 of MSR_PLATFORM_INFO; however, according to the SDM
(35.5), the ratio bits are bits 8-15.

Ignoring the upper bits can result in an incorrect tsc ratio, which causes the
TSC calibration and the Local APIC timer frequency to be incorrect.

Fix this problem by masking 0xff instead.
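
A short demonstration of the difference between the two masks, using a
made-up MSR value whose ratio field is 42.

#include <stdint.h>
#include <stdio.h>

int main(void)
{
        /* Bits 15:8 of MSR_PLATFORM_INFO hold the maximum non-turbo
         * ratio; masking with 0x1f keeps only bits 12:8 and truncates
         * any ratio of 32 or more. */
        uint64_t msr_platform_info = 0x2a00ULL;  /* ratio 42 (0x2a) */

        unsigned int wrong = (msr_platform_info >> 8) & 0x1f;  /* 10 */
        unsigned int right = (msr_platform_info >> 8) & 0xff;  /* 42 */

        printf("masked with 0x1f: %u\n", wrong);
        printf("masked with 0xff: %u\n", right);
        return 0;
}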

[ tglx: Massaged changelog ]

Fixes: 7da7c1561366 "x86, tsc: Add static (MSR) TSC calibration on Intel Atom SoCs"
Signed-off-by: Chen Yu <yu.c.chen@intel.com>
Cc: "Rafael J. Wysocki" <rafael@kernel.org>
Cc: stable@vger.kernel.org
Cc: Bin Gao <bin.gao@intel.com>
Cc: Len Brown <lenb@kernel.org>
Link: http://lkml.kernel.org/r/1462505619-5516-1-git-send-email-yu.c.chen@intel.com
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit fbb78d42a4923cca5a9322ae495a6e2c1d4455f5)

Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years ago  mm, cma: prevent nr_isolated_* counters from going negative
Hugh Dickins [Thu, 5 May 2016 23:22:15 +0000 (16:22 -0700)]
mm, cma: prevent nr_isolated_* counters from going negative

Orabug: 25227150

[ Upstream commit 14af4a5e9b26ad251f81c174e8a43f3e179434a5 ]

/proc/sys/vm/stat_refresh warns that nr_isolated_anon and
nr_isolated_file go increasingly negative under compaction: this would
add delay when there should be none, or no delay when there should be
one.  The bug in compaction was due to a recent mmotm patch, but a much
older instance of the bug was also noticed in
isolate_migratepages_range(), which is used for CMA and gigantic
hugepage allocations.

The bug is caused by putback_movable_pages() in an error path
decrementing the isolated counters without them being previously
incremented by acct_isolated().  Fix isolate_migratepages_range() by
removing the error-path putback, thus reaching acct_isolated() with
migratepages still isolated, and leaving putback to caller like most
other places do.

Fixes: edc2ca612496 ("mm, compaction: move pageblock checks up from isolate_migratepages_range()")
[vbabka@suse.cz: expanded the changelog]
Signed-off-by: Hugh Dickins <hughd@google.com>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit c58ca6100b3d1b9890776d9fdd8aa12352d49bc8)

Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years ago  ACPICA: Dispatcher: Update thread ID for recursive method calls
Prarit Bhargava [Wed, 4 May 2016 05:48:56 +0000 (13:48 +0800)]
ACPICA: Dispatcher: Update thread ID for recursive method calls

Orabug: 25227148

[ Upstream commit 93d68841a23a5779cef6fb9aa0ef32e7c5bd00da ]

ACPICA commit 7a3bd2d962f221809f25ddb826c9e551b916eb25

Set the mutex owner thread ID.
Original patch from: Prarit Bhargava <prarit@redhat.com>

Link: https://bugzilla.kernel.org/show_bug.cgi?id=115121
Link: https://github.com/acpica/acpica/commit/7a3bd2d9
Signed-off-by: Prarit Bhargava <prarit@redhat.com>
Tested-by: Andy Lutomirski <luto@kernel.org> # On a Dell XPS 13 9350
Signed-off-by: Bob Moore <robert.moore@intel.com>
Signed-off-by: Lv Zheng <lv.zheng@intel.com>
Cc: All applicable <stable@vger.kernel.org>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit a10c059ad73b3b84c349592e72d445d9c73a1e27)

Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years ago  tracing: Don't display trigger file for events that can't be enabled
Chunyu Hu [Tue, 3 May 2016 11:34:34 +0000 (19:34 +0800)]
tracing: Don't display trigger file for events that can't be enabled

Orabug: 25227147

[ Upstream commit 854145e0a8e9a05f7366d240e2f99d9c1ca6d6dd ]

Currently, register functions for events are called through the 'reg'
field of the event class directly, without any check, when setting up
triggers.

Triggers for events that don't support registration through debugfs
(events under events/ftrace are for trace-cmd to read the event format,
and most of them don't have a register function except
events/ftrace/functionx) can't be enabled at all, and an oops will be
hit when setting up a trigger for those events, so just not creating
those trigger files is an easy way to avoid the oops.

Link: http://lkml.kernel.org/r/1462275274-3911-1-git-send-email-chuhu@redhat.com
Cc: stable@vger.kernel.org # 3.14+
Fixes: 85f2b08268c01 ("tracing: Add basic event trigger framework")
Signed-off-by: Chunyu Hu <chuhu@redhat.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 3216eb22857e515aa89f594f84c0ef2a36a59b08)

Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years ago  EDAC: i7core, sb_edac: Don't return NOTIFY_BAD from mce_decoder callback
Tony Luck [Fri, 29 Apr 2016 13:42:25 +0000 (15:42 +0200)]
EDAC: i7core, sb_edac: Don't return NOTIFY_BAD from mce_decoder callback

Orabug: 25227145

[ Upstream commit c4fc1956fa31003bfbe4f597e359d751568e2954 ]

Both of these drivers can return NOTIFY_BAD, but this terminates
processing other callbacks that were registered later on the chain.
Since the driver did nothing to log the error it seems wrong to prevent
other interested parties from seeing it. E.g. neither of them had even
bothered to check the type of the error to see if it was a memory error
before returning NOTIFY_BAD.
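
Why NOTIFY_BAD cuts the chain short can be shown with a simplified
notifier walk in plain C. The constants mirror include/linux/notifier.h;
the chain-walking loop and the callbacks are illustrative stand-ins for
the real atomic notifier code. Returning a value without the STOP bit
set (e.g. NOTIFY_DONE or NOTIFY_OK) lets the remaining callbacks run.

#include <stdio.h>

#define NOTIFY_DONE      0x0000
#define NOTIFY_OK        0x0001
#define NOTIFY_STOP_MASK 0x8000
#define NOTIFY_BAD       (NOTIFY_STOP_MASK | 0x0002)

typedef int (*notifier_fn)(void *data);

static int edac_decoder(void *data)     /* old behaviour: NOTIFY_BAD */
{
        (void)data;
        return NOTIFY_BAD;
}

static int other_consumer(void *data)
{
        (void)data;
        printf("other consumer sees the MCE\n");
        return NOTIFY_OK;
}

/* Simplified chain walk: a return value with the STOP bit set ends the
 * walk, so callbacks registered later never see the event. */
static void call_chain(notifier_fn *fns, int n, void *data)
{
        for (int i = 0; i < n; i++)
                if (fns[i](data) & NOTIFY_STOP_MASK)
                        break;
}

int main(void)
{
        notifier_fn chain[] = { edac_decoder, other_consumer };

        call_chain(chain, 2, NULL);  /* prints nothing: chain cut short */
        return 0;
}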

Signed-off-by: Tony Luck <tony.luck@intel.com>
Acked-by: Aristeu Rozanski <aris@redhat.com>
Acked-by: Mauro Carvalho Chehab <mchehab@osg.samsung.com>
Cc: linux-edac <linux-edac@vger.kernel.org>
Cc: <stable@vger.kernel.org>
Link: http://lkml.kernel.org/r/72937355dd92318d2630979666063f8a2853495b.1461864507.git.tony.luck@intel.com
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit cb4a26d103973231b1a4f6b4c1495f41aaeb9980)

Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years ago  mm: check __PG_HWPOISON separately from PAGE_FLAGS_CHECK_AT_*
Naoya Horiguchi [Thu, 6 Aug 2015 22:47:08 +0000 (15:47 -0700)]
mm: check __PG_HWPOISON separately from PAGE_FLAGS_CHECK_AT_*

Orabug: 25227144

[ Upstream commit f4c18e6f7b5bbb5b528b3334115806b0d76f50f9 ]

The race condition addressed in commit add05cecef80 ("mm: soft-offline:
don't free target page in successful page migration") was not closed
completely, because that can happen not only for soft-offline, but also
for hard-offline.  Consider that a slab page is about to be freed into
buddy pool, and then an uncorrected memory error hits the page just
after entering __free_one_page(), then VM_BUG_ON_PAGE(page->flags &
PAGE_FLAGS_CHECK_AT_PREP) is triggered, despite the fact that it's not
necessary because the data on the affected page is not consumed.

To solve it, this patch drops __PG_HWPOISON from page flag checks at
allocation/free time.  I think it's justified because __PG_HWPOISON
flags is defined to prevent the page from being reused, and setting it
outside the page's alloc-free cycle is a designed behavior (not a bug.)

In recent months, I was annoyed by the BUG_ON hit when a soft-offlined page
remains on the LRU cache list for a while, which is avoided by calling
put_page() instead of putback_lru_page() in page migration's success
path.  This means that this patch reverts a major change from commit
add05cecef80 about the new refcounting rule of soft-offlined pages, so
"reuse window" revives.  This will be closed by a subsequent patch.

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Dean Nelson <dnelson@redhat.com>
Cc: Tony Luck <tony.luck@intel.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Hugh Dickins <hughd@google.com>
Cc: David Rientjes <rientjes@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 9de27bd72b3aba88cf7847e8834cc54745ea3352)

Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years agomm: soft-offline: don't free target page in successful page migration
Naoya Horiguchi [Wed, 24 Jun 2015 23:56:50 +0000 (16:56 -0700)]
mm: soft-offline: don't free target page in successful page migration

Orabug: 25227143

[ Upstream commit add05cecef803f3372c5fc1d2a964171872daf9f ]

Stress testing showed that soft offline events for a process iterating
"mmap-pagefault-munmap" loop can trigger
VM_BUG_ON(PAGE_FLAGS_CHECK_AT_PREP) in __free_one_page():

  Soft offlining page 0x70fe1 at 0x70100008d000
  Soft offlining page 0x705fb at 0x70300008d000
  page:ffffea0001c3f840 count:0 mapcount:0 mapping:          (null) index:0x2
  flags: 0x1fffff80800000(hwpoison)
  page dumped because: VM_BUG_ON_PAGE(page->flags & ((1 << 25) - 1))
  ------------[ cut here ]------------
  kernel BUG at /src/linux-dev/mm/page_alloc.c:585!
  invalid opcode: 0000 [#1] SMP DEBUG_PAGEALLOC
  Modules linked in: cfg80211 rfkill crc32c_intel microcode ppdev parport_pc pcspkr serio_raw virtio_balloon parport i2c_piix4 virtio_blk virtio_net ata_generic pata_acpi floppy
  CPU: 3 PID: 1779 Comm: test_base_madv_ Not tainted 4.0.0-v4.0-150511-1451-00009-g82360a3730e6 #139
  RIP: free_pcppages_bulk+0x52a/0x6f0
  Call Trace:
    drain_pages_zone+0x3d/0x50
    drain_local_pages+0x1d/0x30
    on_each_cpu_mask+0x46/0x80
    drain_all_pages+0x14b/0x1e0
    soft_offline_page+0x432/0x6e0
    SyS_madvise+0x73c/0x780
    system_call_fastpath+0x12/0x17
  Code: ff 89 45 b4 48 8b 45 c0 48 83 b8 a8 00 00 00 00 0f 85 e3 fb ff ff 0f 1f 00 0f 0b 48 8b 7d 90 48 c7 c6 e8 95 a6 81 e8 e6 32 02 00 <0f> 0b 8b 45 cc 49 89 47 30 41 8b 47 18 83 f8 ff 0f 85 10 ff ff
  RIP  [<ffffffff811a806a>] free_pcppages_bulk+0x52a/0x6f0
   RSP <ffff88007a117d28>
  ---[ end trace 53926436e76d1f35 ]---

When soft offline successfully migrates page, the source page is supposed
to be freed.  But there is a race condition where a source page looks
isolated (i.e. the refcount is 0 and PageHWPoison is set) but is somehow
still linked to the pcplist.  Then another soft offline event calls
drain_all_pages() and tries to free such hwpoisoned page, which is
forbidden.

This odd page state seems to happen due to the race between put_page() in
putback_lru_page() and __pagevec_lru_add_fn().  But I don't want to play
with tweaking drain code as done in commit 9ab3b598d2df "mm: hwpoison:
drop lru_add_drain_all() in __soft_offline_page()", or to change page
freeing code for this soft offline's purpose.

Instead, let's think about the difference between hard offline and soft
offline.  There is an interesting difference in how to isolate the in-use
page between these: hard offline marks the target page with PageHWPoison
first and doesn't free it, keeping its refcount at 1.  OTOH, soft offline
tries to free the target page first and then marks it PageHWPoison.  This
difference might be the source of complexity and result in bugs like the
above.  So making soft offline isolate the page while keeping its refcount
can be a solution to this problem.

We can pass the page migration code a "reason" that identifies the caller,
so let's use it to avoid calling putback_lru_page() when called from soft
offline; this effectively does the isolation for soft offline.  With this
change, soft offline's target pages are never reused without changing the
migratetype, so this patch also removes the related code.
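
In code, the idea is roughly the following branch in migration's success
path (a sketch, not the exact diff):

  /*
   * Sketch: when migration was initiated by memory-failure soft offline,
   * drop migration's reference directly instead of putting the source page
   * back on the LRU; soft offline still holds its own reference, so the
   * page stays isolated until it is marked HWPoison.
   */
  if (reason != MR_MEMORY_FAILURE)
          putback_lru_page(page);
  else
          put_page(page);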

Signed-off-by: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
Cc: Andi Kleen <andi@firstfloor.org>
Cc: Tony Luck <tony.luck@intel.com>
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 6936c1672176fa8403818ff95cff1f4a972b787d)

Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years agomm/huge_memory: replace VM_NO_THP VM_BUG_ON with actual VMA check
Konstantin Khlebnikov [Thu, 28 Apr 2016 23:18:32 +0000 (16:18 -0700)]
mm/huge_memory: replace VM_NO_THP VM_BUG_ON with actual VMA check

Orabug: 25227142

[ Upstream commit 3486b85a29c1741db99d0c522211c82d2b7a56d0 ]

Khugepaged detects its own VMAs by checking vm_file and vm_ops, but this way
it cannot distinguish private /dev/zero mappings from other special
mappings like /dev/hpet, which has no vm_ops and populates PTEs in mmap.

This fixes false-positive VM_BUG_ON and prevents installing THP where
they are not expected.
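
The resulting check is approximately the following (a sketch; field names
per the mm code of that era, and the helper name is made up here):

  #include <linux/mm.h>
  #include <linux/huge_mm.h>

  /* Sketch: decide whether a VMA is suitable for THP collapse. */
  static bool thp_vma_suitable(struct vm_area_struct *vma)
  {
          if (!vma->anon_vma || vma->vm_ops)      /* not a plain anon mapping */
                  return false;
          /* Previously a VM_BUG_ON(); now simply report "not suitable". */
          return !(vma->vm_flags & VM_NO_THP);
  }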

Link: http://lkml.kernel.org/r/CACT4Y+ZmuZMV5CjSFOeXviwQdABAgT7T+StKfTqan9YDtgEi5g@mail.gmail.com
Fixes: 78f11a255749 ("mm: thp: fix /dev/zero MAP_PRIVATE and vm_flags cleanups")
Signed-off-by: Konstantin Khlebnikov <koct9i@gmail.com>
Reported-by: Dmitry Vyukov <dvyukov@google.com>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Andrea Arcangeli <aarcange@redhat.com>
Cc: stable <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 9684dc001e986b11763b30c7bd71c0091fce28f8)

Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years agodrm/dp/mst: Get validated port ref in drm_dp_update_payload_part1()
cpaul@redhat.com [Fri, 22 Apr 2016 20:08:46 +0000 (16:08 -0400)]
drm/dp/mst: Get validated port ref in drm_dp_update_payload_part1()

Orabug: 25227141

[ Upstream commit 263efde31f97c498e1ebad30e4d2906609d7ad6b ]

We can thank KASAN for finding this, otherwise I probably would have spent
hours on it. This fixes a somewhat harder to trigger kernel panic, occurring
while enabling MST, where the port we were currently updating the payload on
would have all of its refs dropped before we finished what we were doing:

==================================================================
BUG: KASAN: use-after-free in drm_dp_update_payload_part1+0xb3f/0xdb0 [drm_kms_helper] at addr ffff8800d29de018
Read of size 4 by task Xorg/973
=============================================================================
BUG kmalloc-2048 (Tainted: G    B   W      ): kasan: bad access detected
-----------------------------------------------------------------------------

INFO: Allocated in drm_dp_add_port+0x1aa/0x1ed0 [drm_kms_helper] age=16477 cpu=0 pid=2175
___slab_alloc+0x472/0x490
__slab_alloc+0x20/0x40
kmem_cache_alloc_trace+0x151/0x190
drm_dp_add_port+0x1aa/0x1ed0 [drm_kms_helper]
drm_dp_send_link_address+0x526/0x960 [drm_kms_helper]
drm_dp_check_and_send_link_address+0x1ac/0x210 [drm_kms_helper]
drm_dp_mst_link_probe_work+0x77/0xd0 [drm_kms_helper]
process_one_work+0x562/0x1350
worker_thread+0xd9/0x1390
kthread+0x1c5/0x260
ret_from_fork+0x22/0x40
INFO: Freed in drm_dp_free_mst_port+0x50/0x60 [drm_kms_helper] age=7521 cpu=0 pid=2175
__slab_free+0x17f/0x2d0
kfree+0x169/0x180
drm_dp_free_mst_port+0x50/0x60 [drm_kms_helper]
drm_dp_destroy_connector_work+0x2b8/0x490 [drm_kms_helper]
process_one_work+0x562/0x1350
worker_thread+0xd9/0x1390
kthread+0x1c5/0x260
ret_from_fork+0x22/0x40

which on this T460s, would eventually lead to kernel panics in somewhat
random places later in intel_mst_enable_dp() if we got lucky enough.

Signed-off-by: Lyude <cpaul@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Dave Airlie <airlied@redhat.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 85489830a875eee4c5357b2ffa217f06676c1211)

Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years agoworkqueue: fix ghost PENDING flag while doing MQ IO
Roman Pen [Tue, 26 Apr 2016 11:15:35 +0000 (13:15 +0200)]
workqueue: fix ghost PENDING flag while doing MQ IO

Orabug: 25227140

[ Upstream commit 346c09f80459a3ad97df1816d6d606169a51001a ]

The bug in a workqueue leads to a stalled IO request in MQ ctx->rq_list
with the following backtrace:

[  601.347452] INFO: task kworker/u129:5:1636 blocked for more than 120 seconds.
[  601.347574]       Tainted: G           O    4.4.5-1-storage+ #6
[  601.347651] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  601.348142] kworker/u129:5  D ffff880803077988     0  1636      2 0x00000000
[  601.348519] Workqueue: ibnbd_server_fileio_wq ibnbd_dev_file_submit_io_worker [ibnbd_server]
[  601.348999]  ffff880803077988 ffff88080466b900 ffff8808033f9c80 ffff880803078000
[  601.349662]  ffff880807c95000 7fffffffffffffff ffffffff815b0920 ffff880803077ad0
[  601.350333]  ffff8808030779a0 ffffffff815b01d5 0000000000000000 ffff880803077a38
[  601.350965] Call Trace:
[  601.351203]  [<ffffffff815b0920>] ? bit_wait+0x60/0x60
[  601.351444]  [<ffffffff815b01d5>] schedule+0x35/0x80
[  601.351709]  [<ffffffff815b2dd2>] schedule_timeout+0x192/0x230
[  601.351958]  [<ffffffff812d43f7>] ? blk_flush_plug_list+0xc7/0x220
[  601.352208]  [<ffffffff810bd737>] ? ktime_get+0x37/0xa0
[  601.352446]  [<ffffffff815b0920>] ? bit_wait+0x60/0x60
[  601.352688]  [<ffffffff815af784>] io_schedule_timeout+0xa4/0x110
[  601.352951]  [<ffffffff815b3a4e>] ? _raw_spin_unlock_irqrestore+0xe/0x10
[  601.353196]  [<ffffffff815b093b>] bit_wait_io+0x1b/0x70
[  601.353440]  [<ffffffff815b056d>] __wait_on_bit+0x5d/0x90
[  601.353689]  [<ffffffff81127bd0>] wait_on_page_bit+0xc0/0xd0
[  601.353958]  [<ffffffff81096db0>] ? autoremove_wake_function+0x40/0x40
[  601.354200]  [<ffffffff81127cc4>] __filemap_fdatawait_range+0xe4/0x140
[  601.354441]  [<ffffffff81127d34>] filemap_fdatawait_range+0x14/0x30
[  601.354688]  [<ffffffff81129a9f>] filemap_write_and_wait_range+0x3f/0x70
[  601.354932]  [<ffffffff811ced3b>] blkdev_fsync+0x1b/0x50
[  601.355193]  [<ffffffff811c82d9>] vfs_fsync_range+0x49/0xa0
[  601.355432]  [<ffffffff811cf45a>] blkdev_write_iter+0xca/0x100
[  601.355679]  [<ffffffff81197b1a>] __vfs_write+0xaa/0xe0
[  601.355925]  [<ffffffff81198379>] vfs_write+0xa9/0x1a0
[  601.356164]  [<ffffffff811c59d8>] kernel_write+0x38/0x50

The underlying device is a null_blk, with default parameters:

  queue_mode    = MQ
  submit_queues = 1

Verification that nullb0 has something inflight:

root@pserver8:~# cat /sys/block/nullb0/inflight
       0        1
root@pserver8:~# find /sys/block/nullb0/mq/0/cpu* -name rq_list -print -exec cat {} \;
...
/sys/block/nullb0/mq/0/cpu2/rq_list
CTX pending:
        ffff8838038e2400
...

During debugging it became clear that the stalled request is always inserted in
the rq_list from the following path:

   save_stack_trace_tsk + 34
   blk_mq_insert_requests + 231
   blk_mq_flush_plug_list + 281
   blk_flush_plug_list + 199
   wait_on_page_bit + 192
   __filemap_fdatawait_range + 228
   filemap_fdatawait_range + 20
   filemap_write_and_wait_range + 63
   blkdev_fsync + 27
   vfs_fsync_range + 73
   blkdev_write_iter + 202
   __vfs_write + 170
   vfs_write + 169
   kernel_write + 56

So blk_flush_plug_list() was called with from_schedule == true.

If from_schedule is true, that means that finally blk_mq_insert_requests()
offloads execution of __blk_mq_run_hw_queue() and uses kblockd workqueue,
i.e. it calls kblockd_schedule_delayed_work_on().

That means that we race with another CPU, which is about to execute
__blk_mq_run_hw_queue() work.

Further debugging shows the following traces from different CPUs:

  CPU#0                                  CPU#1
  ----------------------------------     -------------------------------
  request A inserted
  STORE hctx->ctx_map[0] bit marked
  kblockd_schedule...() returns 1
  <schedule to kblockd workqueue>
                                         request B inserted
                                         STORE hctx->ctx_map[1] bit marked
                                         kblockd_schedule...() returns 0
  *** WORK PENDING bit is cleared ***
  flush_busy_ctxs() is executed, but
  bit 1, set by CPU#1, is not observed

As a result, request B stayed pending forever.

This behaviour can be explained by speculative LOAD of hctx->ctx_map on
CPU#0, which is reordered with clear of PENDING bit and executed _before_
actual STORE of bit 1 on CPU#1.

The proper fix is an explicit full barrier (mfence on x86), which guarantees
that the clear of the PENDING bit is executed before any possible
speculative LOADs or STOREs inside the actual work function.
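
A minimal illustration of where the barrier goes (a sketch of the ordering
the fix establishes, not the literal patch):

  #include <linux/workqueue.h>

  /*
   * Sketch: after the PENDING bit is cleared, a full barrier prevents the
   * CPU from hoisting loads done by the work function (such as reading
   * hctx->ctx_map in flush_busy_ctxs()) above the clear, so bits set by a
   * CPU whose kblockd_schedule...() saw PENDING already set are not missed.
   */
  clear_bit(WORK_STRUCT_PENDING_BIT, work_data_bits(work));
  smp_mb();               /* pairs with the barrier implied by queueing */
  work->func(work);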

Signed-off-by: Roman Pen <roman.penyaev@profitbricks.com>
Cc: Gioh Kim <gi-oh.kim@profitbricks.com>
Cc: Michael Wang <yun.wang@profitbricks.com>
Cc: Tejun Heo <tj@kernel.org>
Cc: Jens Axboe <axboe@kernel.dk>
Cc: linux-block@vger.kernel.org
Cc: linux-kernel@vger.kernel.org
Cc: stable@vger.kernel.org
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 14794cfb6c9bcca151dbe940f3d7b9f9f818499f)

Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years agoALSA: hda - Add dock support for ThinkPad X260
Conrad Kostecki [Fri, 2 Dec 2016 21:16:34 +0000 (13:16 -0800)]
ALSA: hda - Add dock support for ThinkPad X260

Orabug: 25227139

[ Upstream commit 037e119738120c1cdc460c6ae33871c3000531f3 ]

Fixes audio output on a ThinkPad X260, when using Lenovo CES 2013
docking station series (basic, pro, ultra).

Signed-off-by: Conrad Kostecki <ck+linuxkernel@bl4ckb0x.de>
Cc: <stable@vger.kernel.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 2519c9fc644c0fc14c54518bab9440765e0ea99a)

Conflicts:

sound/pci/hda/patch_realtek.c

Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years agoiio: ak8975: Fix NULL pointer exception on early interrupt
Krzysztof Kozlowski [Mon, 4 Apr 2016 05:54:59 +0000 (14:54 +0900)]
iio: ak8975: Fix NULL pointer exception on early interrupt

Orabug: 25227138

[ Upstream commit 07d2390e36ee5b3265e9cc8305f2a106c8721e16 ]

In certain probe conditions the interrupt came right after registering
the handler, causing a NULL pointer exception because of an uninitialized
waitqueue:

$ udevadm trigger
i2c-gpio i2c-gpio-1: using pins 143 (SDA) and 144 (SCL)
i2c-gpio i2c-gpio-3: using pins 53 (SDA) and 52 (SCL)
Unable to handle kernel NULL pointer dereference at virtual address 00000000
pgd = e8b38000
[00000000] *pgd=00000000
Internal error: Oops: 5 [#1] SMP ARM
Modules linked in: snd_soc_i2s(+) i2c_gpio(+) snd_soc_idma snd_soc_s3c_dma snd_soc_core snd_pcm_dmaengine snd_pcm snd_timer snd soundcore ac97_bus spi_s3c64xx pwm_samsung dwc2 exynos_adc phy_exynos_usb2 exynosdrm exynos_rng rng_core rtc_s3c
CPU: 0 PID: 717 Comm: data-provider-m Not tainted 4.6.0-rc1-next-20160401-00011-g1b8d87473b9e-dirty #101
Hardware name: SAMSUNG EXYNOS (Flattened Device Tree)
(...)
(__wake_up_common) from [<c0379624>] (__wake_up+0x38/0x4c)
(__wake_up) from [<c0a41d30>] (ak8975_irq_handler+0x28/0x30)
(ak8975_irq_handler) from [<c0386720>] (handle_irq_event_percpu+0x88/0x140)
(handle_irq_event_percpu) from [<c038681c>] (handle_irq_event+0x44/0x68)
(handle_irq_event) from [<c0389c40>] (handle_edge_irq+0xf0/0x19c)
(handle_edge_irq) from [<c0385e04>] (generic_handle_irq+0x24/0x34)
(generic_handle_irq) from [<c05ee360>] (exynos_eint_gpio_irq+0x50/0x68)
(exynos_eint_gpio_irq) from [<c0386720>] (handle_irq_event_percpu+0x88/0x140)
(handle_irq_event_percpu) from [<c038681c>] (handle_irq_event+0x44/0x68)
(handle_irq_event) from [<c0389a70>] (handle_fasteoi_irq+0xb4/0x194)
(handle_fasteoi_irq) from [<c0385e04>] (generic_handle_irq+0x24/0x34)
(generic_handle_irq) from [<c03860b4>] (__handle_domain_irq+0x5c/0xb4)
(__handle_domain_irq) from [<c0301774>] (gic_handle_irq+0x54/0x94)
(gic_handle_irq) from [<c030c910>] (__irq_usr+0x50/0x80)

The bug was reproduced on exynos4412-trats2 (with a max77693 device also
using i2c-gpio) after building max77693 as a module.
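
The ordering problem can be illustrated with a small probe-time sketch
(struct and function names here are illustrative; the point is that the
waitqueue must be initialised before the IRQ can possibly fire):

  #include <linux/interrupt.h>
  #include <linux/wait.h>

  struct mag_data {
          wait_queue_head_t data_ready_queue;
  };

  static irqreturn_t mag_irq_handler(int irq, void *p)
  {
          struct mag_data *data = p;

          wake_up(&data->data_ready_queue);       /* oopses if not initialised */
          return IRQ_HANDLED;
  }

  static int mag_setup_irq(struct device *dev, struct mag_data *data, int irq)
  {
          /* Initialise the waitqueue first, then register the handler. */
          init_waitqueue_head(&data->data_ready_queue);
          return devm_request_irq(dev, irq, mag_irq_handler,
                                  IRQF_TRIGGER_RISING, "mag-drdy", data);
  }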

Cc: <stable@vger.kernel.org>
Fixes: 94a6d5cf7caa ("iio:ak8975 Implement data ready interrupt handling")
Signed-off-by: Krzysztof Kozlowski <k.kozlowski@samsung.com>
Tested-by: Gregor Boirie <gregor.boirie@parrot.com>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit e7e16bb686861933ff351e945432bd607fc9fa02)

Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years agoregmap: spmi: Fix regmap_spmi_ext_read in multi-byte case
Jack Pham [Fri, 15 Apr 2016 06:37:26 +0000 (23:37 -0700)]
regmap: spmi: Fix regmap_spmi_ext_read in multi-byte case

Orabug: 25227137

[ Upstream commit dec8e8f6e6504aa3496c0f7cc10c756bb0e10f44 ]

Specifically for the case of reads that use the Extended Register
Read Long command, a multi-byte read operation is broken up into
8-byte chunks.  However, the call to spmi_ext_register_readl() incorrectly
passes 'val_size', which will always fail when greater than 8.  The
argument should instead be 'len'.
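
A hedged sketch of the chunked-read loop (the helper name is made up and the
loop is simplified; the chunk size and the val_size/len distinction follow
the description above):

  #include <linux/kernel.h>
  #include <linux/spmi.h>

  static int ext_read_chunked(struct spmi_device *sdev, u16 addr,
                              u8 *val, size_t val_size)
  {
          int err;

          while (val_size) {
                  size_t len = min_t(size_t, val_size, 8);  /* 8-byte chunks */

                  /* The bug passed val_size here, which fails when > 8. */
                  err = spmi_ext_register_readl(sdev, addr, val, len);
                  if (err)
                          return err;

                  addr += len;
                  val += len;
                  val_size -= len;
          }
          return 0;
  }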

Fixes: c9afbb05a9ff ("regmap: spmi: support base and extended register spaces")
Signed-off-by: Jack Pham <jackp@codeaurora.org>
Signed-off-by: Mark Brown <broonie@kernel.org>
Cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 24a50739c3e4e9947f4f4a28a023c348d7bea042)

Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years agoRevert "usb: hub: do not clear BOS field during reset device"
Greg Kroah-Hartman [Sat, 20 Feb 2016 22:19:34 +0000 (14:19 -0800)]
Revert "usb: hub: do not clear BOS field during reset device"

Orabug: 25227136

This reverts commit 522dc09ca0451d118795e8a05c7b3717f3d1ad8b.

Tony writes:

This upstream commit is causing an oops:
d8f00cd685f5 ("usb: hub: do not clear BOS field during reset device")

This patch has already been included in several -stable kernels.  Here
are the affected kernels:
4.5.0-rc4 (current git)
4.4.2
4.3.6 (currently in review)
4.1.18
3.18.27
3.14.61

How to reproduce the problem:
Boot kernel with slub debugging enabled (otherwise memory corruption
will cause random oopses later instead of immediately)
Plug in USB 3.0 disk to xhci USB 3.0 port
dd if=/dev/sdc of=/dev/null bs=65536
(where /dev/sdc is the USB 3.0 disk)
Unplug USB cable while dd is still going
Oops is immediate:

Reported-by: Tony Battersby <tonyb@cybernetics.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit f94e3630d7cf37e59d9c8772e38d4d73fa59e59e)

Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years agox86/mm/xen: Suppress hugetlbfs in PV guests
Jan Beulich [Thu, 21 Apr 2016 06:27:04 +0000 (00:27 -0600)]
x86/mm/xen: Suppress hugetlbfs in PV guests

Orabug: 25227135

[ Upstream commit 103f6112f253017d7062cd74d17f4a514ed4485c ]

Huge pages are not normally available to PV guests. Not suppressing
hugetlbfs use results in an endless loop of page faults when user mode
code tries to access a hugetlbfs mapped area (since the hypervisor
denies such PTEs to be created, but error indications can't be
propagated out of xen_set_pte_at(), just like for various of its
siblings), and - once killed - in an oops like this:

  kernel BUG at .../fs/hugetlbfs/inode.c:428!
  invalid opcode: 0000 [#1] SMP
  ...
  RIP: e030:[<ffffffff811c333b>]  [<ffffffff811c333b>] remove_inode_hugepages+0x25b/0x320
  ...
  Call Trace:
   [<ffffffff811c3415>] hugetlbfs_evict_inode+0x15/0x40
   [<ffffffff81167b3d>] evict+0xbd/0x1b0
   [<ffffffff8116514a>] __dentry_kill+0x19a/0x1f0
   [<ffffffff81165b0e>] dput+0x1fe/0x220
   [<ffffffff81150535>] __fput+0x155/0x200
   [<ffffffff81079fc0>] task_work_run+0x60/0xa0
   [<ffffffff81063510>] do_exit+0x160/0x400
   [<ffffffff810637eb>] do_group_exit+0x3b/0xa0
   [<ffffffff8106e8bd>] get_signal+0x1ed/0x470
   [<ffffffff8100f854>] do_signal+0x14/0x110
   [<ffffffff810030e9>] prepare_exit_to_usermode+0xe9/0xf0
   [<ffffffff814178a5>] retint_user+0x8/0x13

This is CVE-2016-3961 / XSA-174.
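
The suppression itself is tiny; on x86 it boils down to keying hugetlbfs
support off the PSE feature, which Xen PV guests do not get (sketch of the
arch header change, relying on the generic hugepages_supported() hook that
appears in the following log entry):

  /* x86 sketch: hugetlbfs is only advertised when PSE is really usable,
   * which is never the case in a Xen PV guest. */
  #define hugepages_supported() cpu_has_pse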

Reported-by: Vitaly Kuznetsov <vkuznets@redhat.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: David Vrabel <david.vrabel@citrix.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Juergen Gross <JGross@suse.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Luis R. Rodriguez <mcgrof@suse.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Toshi Kani <toshi.kani@hp.com>
Cc: stable@vger.kernel.org
Cc: xen-devel <xen-devel@lists.xenproject.org>
Link: http://lkml.kernel.org/r/57188ED802000078000E431C@prv-mh.provo.novell.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 936d0871135e41fba0dc15095319ca106f55a584)

Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years agomm: hugetlb: allow hugepages_supported to be architecture specific
Dominik Dingel [Fri, 17 Jul 2015 23:23:37 +0000 (16:23 -0700)]
mm: hugetlb: allow hugepages_supported to be architecture specific

Orabug: 25227134

[ Upstream commit 2531c8cf56a640cd7d17057df8484e570716a450 ]

s390 has a constant hugepage size; by setting HPAGE_SHIFT we also change
e.g. the pageblock_order, which should be independent with respect to
hugepage support.

With this patch every architecture is free to define how to check
for hugepage support.
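
The generic fallback is roughly the following (sketch of the common header):

  /* Architectures without their own definition keep the old behaviour,
   * where HPAGE_SHIFT == 0 means "no hugepage support" (powerpc, for
   * instance, sets it to 0 at boot when unsupported). */
  #ifndef hugepages_supported
  #define hugepages_supported() (HPAGE_SHIFT != 0)
  #endif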

Signed-off-by: Dominik Dingel <dingel@linux.vnet.ibm.com>
Acked-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Christian Borntraeger <borntraeger@de.ibm.com>
Cc: Michael Holzheu <holzheu@linux.vnet.ibm.com>
Cc: Gerald Schaefer <gerald.schaefer@de.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit ec8d85066c3a598ba747f578232d3dc89b33fa13)

Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years agodrm/radeon: forbid mapping of userptr bo through radeon device file
Jérôme Glisse [Tue, 19 Apr 2016 13:07:50 +0000 (09:07 -0400)]
drm/radeon: forbid mapping of userptr bo through radeon device file

Orabug: 25227133

[ Upstream commit b5dcec693f87cb8475f2291c0075b2422addd3d6 ]

Allowing a userptr bo, which is basically a list of pages from some vma
(either anonymous pages or file-backed pages), would lead to serious
corruption of kernel structures and counters (because we overwrite
the page->mapping field when mapping the buffer).

This would already be blocked if the buffer was populated before anyone
tried to mmap it, because then TTM_PAGE_FLAG_SG would be set in the
ttm_tt flags. But that flag is checked before ttm_tt_populate() in the ttm
vm fault handler.

So to be safe just add a check to verify_access() callback.
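
A sketch of the verify_access() guard (driver-internal types; the function
name is illustrative and the helpers are as named in the radeon/TTM code of
that era):

  static int radeon_verify_access_sketch(struct ttm_buffer_object *bo,
                                         struct file *filp)
  {
          struct radeon_bo *rbo = container_of(bo, struct radeon_bo, tbo);

          /* Refuse CPU mmap of userptr BOs through the radeon device file. */
          if (radeon_ttm_tt_has_userptr(bo->ttm))
                  return -EPERM;

          return drm_vma_node_verify_access(&rbo->gem_base.vma_node, filp);
  }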

Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 2719d3c05d4c93054e7c5838cc5b1906f72b465e)

Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years agoALSA: pcxhr: Fix missing mutex unlock
Takashi Iwai [Thu, 21 Apr 2016 15:37:54 +0000 (17:37 +0200)]
ALSA: pcxhr: Fix missing mutex unlock

Orabug: 25227132

[ Upstream commit 67f3754b51f22b18c4820fb84062f658c30e8644 ]

The commit [9bef72bdb26e: ALSA: pcxhr: Use nonatomic PCM ops]
converted to non-atomic PCM ops, but shamelessly with an unbalanced
mutex locking, which leads to the hangup easily.  Fix it.

Fixes: 9bef72bdb26e ('ALSA: pcxhr: Use nonatomic PCM ops')
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=116441
Cc: <stable@vger.kernel.org> # 3.18+
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit faaa136bc7a4ed96ffe9585b495ec8bfac0032a8)

Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years agofutex: Handle unlock_pi race gracefully
Sebastian Andrzej Siewior [Fri, 15 Apr 2016 12:35:39 +0000 (14:35 +0200)]
futex: Handle unlock_pi race gracefully

Orabug: 25227131

[ Upstream commit 89e9e66ba1b3bde9d8ea90566c2aee20697ad681 ]

If userspace calls UNLOCK_PI unconditionally without trying the TID -> 0
transition in user space first then the user space value might not have the
waiters bit set. This opens the following race:

CPU0                 CPU1
uval = get_user(futex)
    lock(hb)
lock(hb)
    futex |= FUTEX_WAITERS
    ....
    unlock(hb)

cmpxchg(futex, uval, newval)

So the cmpxchg fails and returns -EINVAL to user space, which is wrong because
the futex value is valid.

To handle this (yes, yet another) corner case gracefully, check for a flag
change and retry.
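
The retry decision can be sketched as follows (simplified; the real code
lives in the pi-futex unlock path and redoes the cmpxchg with the freshly
read value when -EAGAIN is returned):

  #include <linux/futex.h>
  #include <linux/types.h>

  /*
   * Sketch: after a failed user-space cmpxchg, decide whether the mismatch
   * is only the kernel having set FUTEX_WAITERS (retry) or a genuinely
   * inconsistent value (-EINVAL).
   */
  static int unlock_pi_check_sketch(u32 curval, u32 uval)
  {
          if (curval == uval)
                  return 0;                              /* no race at all */
          if ((curval & FUTEX_TID_MASK) == (uval & FUTEX_TID_MASK))
                  return -EAGAIN;                        /* flags changed: retry */
          return -EINVAL;                                /* truly inconsistent */
  }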

[ tglx: Massaged changelog and slightly reworked implementation ]

Fixes: ccf9e6a80d9e ("futex: Make unlock_pi more robust")
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: stable@vger.kernel.org
Cc: Davidlohr Bueso <dave@stgolabs.net>
Cc: Darren Hart <dvhart@linux.intel.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Link: http://lkml.kernel.org/r/1460723739-5195-1-git-send-email-bigeasy@linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 28f83d2daecd97392c9a0580c00853a19a76a8bc)

Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years agousb: gadget: f_fs: Fix use-after-free
Lars-Peter Clausen [Thu, 14 Apr 2016 15:01:17 +0000 (17:01 +0200)]
usb: gadget: f_fs: Fix use-after-free

Orabug: 25227130

[ Upstream commit 38740a5b87d53ceb89eb2c970150f6e94e00373a ]

When using asynchronous read or write operations on the USB endpoints the
issuer of the IO request is notified by calling the ki_complete() callback
of the submitted kiocb when the URB has been completed.

Calling this ki_complete() callback will free the kiocb. Make sure that the
structure is no longer accessed beyond that point, otherwise undefined
behaviour might occur.
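
The shape of the fix is an ordering rule around the completion call (hedged
sketch; the function name and arguments are illustrative, not the gadget
code verbatim):

  #include <linux/fs.h>
  #include <linux/slab.h>

  /* Sketch: release our own resources before ki_complete() runs, and touch
   * nothing afterwards, because the completion may free the kiocb and
   * whatever embeds it. */
  static void async_io_done_sketch(struct kiocb *kiocb, void *buf, long ret)
  {
          kfree(buf);                          /* tear down our state first */
          kiocb->ki_complete(kiocb, ret, ret);
          /* no accesses to kiocb (or its container) beyond this point */
  }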

Fixes: 2e4c7553cd6f ("usb: gadget: f_fs: add aio support")
Cc: <stable@vger.kernel.org> # v3.15+
Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 0763ce11708553fc7b2124f184ce2e4bb0cb186d)

Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years agocrypto: ccp - Prevent information leakage on export
Tom Lendacky [Wed, 13 Apr 2016 15:52:25 +0000 (10:52 -0500)]
crypto: ccp - Prevent information leakage on export

Orabug: 25227129

[ Upstream commit f709b45ec461b548c41a00044dba1f1b572783bf ]

Prevent information from leaking to userspace by doing a memset to 0 of
the export state structure before setting the structure values and copying
it. This prevents un-initialized padding areas from being copied into the
export area.
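
The hardening pattern is simply zeroing the on-stack export blob before
filling it (sketch with an illustrative state struct, not the driver's real
layout):

  #include <linux/string.h>
  #include <linux/types.h>

  struct export_state_sketch {
          u32 type;
          u8  digest[64];
          /* compiler-inserted padding would otherwise leak stack contents */
  };

  static void fill_export_sketch(void *out, u32 type, const u8 *digest, size_t n)
  {
          struct export_state_sketch state;

          memset(&state, 0, sizeof(state));       /* the added hardening */
          state.type = type;
          memcpy(state.digest, digest, n);        /* n assumed <= 64 here */
          memcpy(out, &state, sizeof(state));
  }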

Cc: <stable@vger.kernel.org> # 3.14.x-
Reported-by: Ben Hutchings <ben@decadent.org.uk>
Signed-off-by: Tom Lendacky <thomas.lendacky@amd.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 3fa5e41c8667e3856ba3191b7156157b8f8aaf1c)

Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years agocrypto: sha1-mb - use correct pointer while completing jobs
Xiaodong Liu [Tue, 12 Apr 2016 09:45:51 +0000 (09:45 +0000)]
crypto: sha1-mb - use correct pointer while completing jobs

Orabug: 25227128

[ Upstream commit 0851561d9c965df086ef8a53f981f5f95a57c2c8 ]

In sha_complete_job, an incorrect mcryptd_hash_request_ctx pointer is used
when checking and completing other jobs. If the memory of the first
completed request is freed while other jobs are still being completed in
the function, the kernel will crash since a NULL pointer ends up in RIP.

Cc: <stable@vger.kernel.org>
Signed-off-by: Xiaodong Liu <xiaodong.liu@intel.com>
Acked-by: Tim Chen <tim.c.chen@linux.intel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit f6a93797957220ead95f575047f43e1d4d085205)

Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years agonl80211: check netlink protocol in socket release notification
Dmitry Ivanov [Wed, 6 Apr 2016 14:23:18 +0000 (17:23 +0300)]
nl80211: check netlink protocol in socket release notification

Orabug: 25227127

[ Upstream commit 8f815cdde3e550e10c2736990d791f60c2ce43eb ]

A non-privileged user can create a netlink socket with the same port_id as
used by an existing open nl80211 netlink socket (e.g. as used by a hostapd
process) with a different protocol number.

Closing this socket will then lead to the notification going to nl80211's
socket release notification handler, and possibly cause an action such as
removing a virtual interface.

Fix this issue by checking that the netlink protocol is NETLINK_GENERIC.
Since generic netlink has no notifier chain of its own, we can't fix the
problem more generically.
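
Sketch of the added check in the socket-release notifier (callback shape per
the netlink notifier API; the teardown itself is elided):

  #include <linux/netlink.h>
  #include <linux/notifier.h>

  static int port_release_notify_sketch(struct notifier_block *nb,
                                        unsigned long state, void *_notify)
  {
          struct netlink_notify *notify = _notify;

          /* Only react to generic-netlink sockets being released. */
          if (state != NETLINK_URELEASE || notify->protocol != NETLINK_GENERIC)
                  return NOTIFY_DONE;

          /* ... tear down state registered for notify->portid ... */
          return NOTIFY_DONE;
  }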

Fixes: 026331c4d9b5 ("cfg80211/mac80211: allow registering for and sending action frames")
Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Ivanov <dima@ubnt.com>
[rewrite commit message]
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 2c8c83f64c0fb9764b486f14a08f2e3e51bcbc40)

Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years agoInput: gtco - fix crash on detecting device without endpoints
Vladis Dronov [Thu, 31 Mar 2016 17:53:42 +0000 (10:53 -0700)]
Input: gtco - fix crash on detecting device without endpoints

Orabug: 25227121

[ Upstream commit 162f98dea487206d9ab79fc12ed64700667a894d ]

The gtco driver expects at least one valid endpoint. If given malicious
descriptors that specify 0 for the number of endpoints, it will crash in
the probe function. Ensure there is at least one endpoint on the interface
before using it.
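
The guard itself is a one-line sanity check early in probe (sketch; helper
name is made up):

  #include <linux/usb.h>

  /* Sketch: bail out before touching endpoint[0] when the interface
   * (possibly from a malicious device) advertises no endpoints at all. */
  static int probe_check_endpoints(struct usb_interface *intf)
  {
          struct usb_host_interface *alt = intf->cur_altsetting;

          if (alt->desc.bNumEndpoints < 1)
                  return -ENODEV;

          /* alt->endpoint[0].desc is safe to use from here on */
          return 0;
  }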

Also let's fix a minor coding style issue.

The full correct report of this issue can be found in the public
Red Hat Bugzilla:

https://bugzilla.redhat.com/show_bug.cgi?id=1283385

Reported-by: Ralf Spenneberg <ralf@spenneberg.net>
Signed-off-by: Vladis Dronov <vdronov@redhat.com>
Cc: stable@vger.kernel.org
Signed-off-by: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 464508beeb30376f277fbfbfd9137cf19cbfa785)

Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years agouserfaultfd: prevent khugepaged to merge if userfaultfd is armed
Andrea Arcangeli [Fri, 4 Sep 2015 22:46:27 +0000 (15:46 -0700)]
userfaultfd: prevent khugepaged to merge if userfaultfd is armed

Orabug: 21685254

If userfaultfd is armed on a certain vma we can't "fill" the holes with
zeroes or we'll break userland on-demand paging.  The holes, if the
userfault is armed, are really missing information (not zeroes) that
userland has to load from the network or elsewhere.

The same issue happens for wrprotected ptes that we can't just convert
into a single writable pmd_trans_huge.

We could however in theory still merge across zeropages if only
VM_UFFD_MISSING is set (so if VM_UFFD_WP is not set)...  that could be
slightly improved but it'd be much more complex code for a tiny corner
case.

Signed-off-by: Andrea Arcangeli <aarcange@redhat.com>
Acked-by: Pavel Emelyanov <xemul@parallels.com>
Cc: Sanidhya Kashyap <sanidhya.gatech@gmail.com>
Cc: zhang.zhanghailiang@huawei.com
Cc: "Kirill A. Shutemov" <kirill@shutemov.name>
Cc: Andres Lagar-Cavilla <andreslc@google.com>
Cc: Dave Hansen <dave.hansen@intel.com>
Cc: Paolo Bonzini <pbonzini@redhat.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Hugh Dickins <hughd@google.com>
Cc: Peter Feiner <pfeiner@google.com>
Cc: "Dr. David Alan Gilbert" <dgilbert@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: "Huangpeng (Peter)" <peter.huangpeng@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit c1294d05de5df1ab8c93aa13c531782ede907e14)
Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Dhaval Giani <dhaval.giani@oracle.com>
Conflicts:
mm/huge_memory.c

Signed-off-by: Dhaval Giani <dhaval.giani@oracle.com>
8 years agocrypto: skcipher - Fix blkcipher walk OOM crash
Herbert Xu [Tue, 13 Sep 2016 06:43:29 +0000 (14:43 +0800)]
crypto: skcipher - Fix blkcipher walk OOM crash

[ Upstream commit acdb04d0b36769b3e05990c488dc74d8b7ac8060 ]

When we need to allocate a temporary blkcipher_walk_next and it
fails, the code is supposed to take the slow path of processing
the data block by block.  However, due to an unrelated change
we instead end up dereferencing the NULL pointer.

This patch fixes it by moving the unrelated bsize setting out
of the way so that we enter the slow path as intended.

Fixes: 7607bd8ff03b ("[CRYPTO] blkcipher: Added blkcipher_walk_virt_block")
Cc: stable@vger.kernel.org
Reported-by: xiakaixu <xiakaixu@huawei.com>
Reported-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Tested-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
(cherry picked from commit a55614f1d6e3299837301b0e0a8599c3463db6bd)
Signed-off-by: Todd Vierling <todd.vierling@oracle.com>
Reviewed-by: Ethan Zhao <ethan.zhao@oracle.com>
8 years agocrypto: cryptd - initialize child shash_desc on import
Ard Biesheuvel [Thu, 1 Sep 2016 13:25:43 +0000 (14:25 +0100)]
crypto: cryptd - initialize child shash_desc on import

[ Upstream commit 0bd2223594a4dcddc1e34b15774a3a4776f7749e ]

When calling .import() on a cryptd ahash_request, the structure members
that describe the child transform in the shash_desc need to be initialized
like they are when calling .init().
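
The fix is essentially re-establishing the child transform in the descriptor
before importing, mirroring what .init() does (sketch close to the shape of
the real handler; obtaining the child transform is left out):

  #include <crypto/hash.h>
  #include <crypto/internal/hash.h>

  static int hash_import_sketch(struct ahash_request *req, const void *in,
                                struct crypto_shash *child)
  {
          struct shash_desc *desc = ahash_request_ctx(req);

          desc->tfm = child;                /* the missing initialisation */
          desc->flags = req->base.flags;

          return crypto_shash_import(desc, in);
  }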

Cc: stable@vger.kernel.org
Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
(cherry picked from commit b8e51107942182fdcd111c5fc3e601b3b8c9e8f4)
Signed-off-by: Todd Vierling <todd.vierling@oracle.com>
Reviewed-by: Ethan Zhao <ethan.zhao@oracle.com>
8 years agocrypto: scatterwalk - Fix test in scatterwalk_done
Herbert Xu [Tue, 12 Jul 2016 05:17:57 +0000 (13:17 +0800)]
crypto: scatterwalk - Fix test in scatterwalk_done

[ Upstream commit 5f070e81bee35f1b7bd1477bb223a873ff657803 ]

When there is more data to be processed, the current test in
scatterwalk_done may prevent us from calling pagedone even when
we should.

In particular, if we're on an SG entry spanning multiple pages
where the last page is not a full page, we will incorrectly skip
calling pagedone on the second last page.

This patch fixes this by adding a separate test for whether we've
reached the end of a page.

Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
(cherry picked from commit 812925986c9f0398f9a15228580cf4d38fdd1577)
Signed-off-by: Todd Vierling <todd.vierling@oracle.com>
Reviewed-by: Ethan Zhao <ethan.zhao@oracle.com>
8 years agocrypto: gcm - Filter out async ghash if necessary
Herbert Xu [Wed, 15 Jun 2016 14:27:05 +0000 (22:27 +0800)]
crypto: gcm - Filter out async ghash if necessary

[ Upstream commit b30bdfa86431afbafe15284a3ad5ac19b49b88e3 ]

As it is if you ask for a sync gcm you may actually end up with
an async one because it does not filter out async implementations
of ghash.

This patch fixes this by adding the necessary filter when looking
for ghash.
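
The general pattern for requesting a synchronous ghash is to mask out
CRYPTO_ALG_ASYNC at allocation time (sketch of the allocation pattern, not
the literal gcm template change):

  #include <crypto/hash.h>
  #include <linux/crypto.h>

  static struct crypto_ahash *alloc_sync_ghash(void)
  {
          /* type 0 (async bit clear) plus CRYPTO_ALG_ASYNC in the mask
           * filters out asynchronous implementations. */
          return crypto_alloc_ahash("ghash", 0, CRYPTO_ALG_ASYNC);
  }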

Cc: stable@vger.kernel.org
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <alexander.levin@verizon.com>
(cherry picked from commit 843137840f140051965e0611fa95c831b5064737)
Signed-off-by: Todd Vierling <todd.vierling@oracle.com>
Reviewed-by: Ethan Zhao <ethan.zhao@oracle.com>
8 years agoPKCS#7: pkcs7_validate_trust(): initialize the _trusted output argument
Nicolai Stange [Sun, 20 Mar 2016 22:23:46 +0000 (23:23 +0100)]
PKCS#7: pkcs7_validate_trust(): initialize the _trusted output argument

[ Upstream commit e54358915d0a00399c11c2c23ae1be674cba188a ]

Despite what the DocBook comment to pkcs7_validate_trust() says, the
*_trusted argument is never set to false.

pkcs7_validate_trust() only positively sets *_trusted upon encountering
a trusted PKCS#7 SignedInfo block.

This is quite unfortunate since its callers, system_verify_data() for
example, depend on pkcs7_validate_trust() clearing *_trusted on non-trust.

Indeed, UBSAN splats when attempting to load the uninitialized local
variable 'trusted' from system_verify_data() in pkcs7_validate_trust():

  UBSAN: Undefined behaviour in crypto/asymmetric_keys/pkcs7_trust.c:194:14
  load of value 82 is not a valid value for type '_Bool'
  [...]
  Call Trace:
    [<ffffffff818c4d35>] dump_stack+0xbc/0x117
    [<ffffffff818c4c79>] ? _atomic_dec_and_lock+0x169/0x169
    [<ffffffff8194113b>] ubsan_epilogue+0xd/0x4e
    [<ffffffff819419fa>] __ubsan_handle_load_invalid_value+0x111/0x158
    [<ffffffff819418e9>] ? val_to_string.constprop.12+0xcf/0xcf
    [<ffffffff818334a4>] ? x509_request_asymmetric_key+0x114/0x370
    [<ffffffff814b83f0>] ? kfree+0x220/0x370
    [<ffffffff818312c2>] ? public_key_verify_signature_2+0x32/0x50
    [<ffffffff81835e04>] pkcs7_validate_trust+0x524/0x5f0
    [<ffffffff813c391a>] system_verify_data+0xca/0x170
    [<ffffffff813c3850>] ? top_trace_array+0x9b/0x9b
    [<ffffffff81510b29>] ? __vfs_read+0x279/0x3d0
    [<ffffffff8129372f>] mod_verify_sig+0x1ff/0x290
    [...]

The implication is that pkcs7_validate_trust() effectively grants trust
when it really shouldn't have.

Fix this by explicitly setting *_trusted to false at the very beginning
of pkcs7_validate_trust().
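
The fix reduces to initialising the output argument on entry (sketch of the
function's opening; the verification walk is elided and the name is suffixed
to mark it as illustrative):

  #include <crypto/pkcs7.h>
  #include <linux/key.h>

  /* *_trusted is defensively cleared before any early return, so a caller
   * never consumes an uninitialised value on the "no trust" path. */
  int pkcs7_validate_trust_sketch(struct pkcs7_message *pkcs7,
                                  struct key *trust_keyring, bool *_trusted)
  {
          *_trusted = false;

          /* ... walk the SignedInfo blocks, set *_trusted = true on match ... */
          return -ENOKEY;
  }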

Cc: <stable@vger.kernel.org>
Signed-off-by: Nicolai Stange <nicstange@gmail.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit fb7a806fe08816a6fd2eb258db5e28218624bd07)
Signed-off-by: Todd Vierling <todd.vierling@oracle.com>
Reviewed-by: Ethan Zhao <ethan.zhao@oracle.com>
8 years agocrypto: public_key: select CRYPTO_AKCIPHER
Arnd Bergmann [Wed, 18 May 2016 14:55:56 +0000 (16:55 +0200)]
crypto: public_key: select CRYPTO_AKCIPHER

[ Upstream commit bad6a185b4d6f81d0ed2b6e4c16307969f160b95 ]

In some rare randconfig builds, we can end up with
ASYMMETRIC_PUBLIC_KEY_SUBTYPE enabled but CRYPTO_AKCIPHER disabled,
which fails to link because of the reference to crypto_alloc_akcipher:

crypto/built-in.o: In function `public_key_verify_signature':
:(.text+0x110e4): undefined reference to `crypto_alloc_akcipher'

This adds a Kconfig 'select' statement to ensure the dependency
is always there.

Cc: <stable@vger.kernel.org>
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 4dbfb8be1989118cd013c058c2f0ce199ecaf93f)
Signed-off-by: Todd Vierling <todd.vierling@oracle.com>
Reviewed-by: Ethan Zhao <ethan.zhao@oracle.com>
8 years agocrypto: hash - Fix page length clamping in hash walk
Herbert Xu [Wed, 4 May 2016 09:52:56 +0000 (17:52 +0800)]
crypto: hash - Fix page length clamping in hash walk

[ Upstream commit 13f4bb78cf6a312bbdec367ba3da044b09bf0e29 ]

The crypto hash walk code is broken when supplied with an offset
greater than or equal to PAGE_SIZE.  This patch fixes it by adjusting
walk->pg and walk->offset when this happens.
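
The adjustment amounts to folding an over-large offset into the page pointer
(fragment-level sketch using the walk fields named above, not the exact
hunk):

  /* When a scatterlist entry's offset is >= PAGE_SIZE, advance the page
   * pointer and keep only the intra-page part of the offset. */
  walk->pg = sg_page(walk->sg) + (walk->offset >> PAGE_SHIFT);
  walk->offset = offset_in_page(walk->offset);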

Cc: <stable@vger.kernel.org>
Reported-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 1669540814a1cf57ff50e037c5bf84f831ea4e09)
Signed-off-by: Todd Vierling <todd.vierling@oracle.com>
Reviewed-by: Ethan Zhao <ethan.zhao@oracle.com>
8 years agoocfs2: fix not enough credit panic
Junxiao Bi [Tue, 1 Nov 2016 06:42:20 +0000 (14:42 +0800)]
ocfs2: fix not enough credit panic

The following panic was caught when running the ocfs2 disconfig single test
(block size 512 and cluster size 8192). ocfs2_journal_dirty() returned
-ENOSPC, which means the credits were used up. The total credit should
include 3 times "num_dx_leaves" from ocfs2_dx_dir_rebalance(),
because 2 times will be consumed in ocfs2_dx_dir_transfer_leaf() and
1 time will be consumed in ocfs2_dx_dir_new_cluster()->
__ocfs2_dx_dir_new_cluster()->ocfs2_dx_dir_format_cluster(). But only
two times are included in ocfs2_dx_dir_rebalance_credits(); fix it.

[34377.331151] ------------[ cut here ]------------
[34377.332007] kernel BUG at fs/ocfs2/journal.c:775!
[34377.344107] invalid opcode: 0000 [#1] SMP
[34377.346090] Modules linked in: ocfs2 nfsd lockd grace nfs_acl auth_rpcgss sunrpc autofs4 ocfs2_dlmfs ocfs2_stack_o2cb ocfs2_dlm ocfs2_nodemanager ocfs2_stackglue configfs sd_mod sg ip6t_REJECT nf_reject_ipv6 nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter ip6_tables be2iscsi iscsi_boot_sysfs bnx2i cnic uio cxgb4i cxgb4 cxgb3i libcxgbi cxgb3 mdio ib_iser rdma_cm ib_cm iw_cm ib_sa ib_mad ib_core ib_addr ipv6 iscsi_tcp libiscsi_tcp libiscsi scsi_transport_iscsi ppdev xen_kbdfront xen_netfront fb_sys_fops sysimgblt sysfillrect syscopyarea parport_pc parport acpi_cpufreq i2c_piix4 i2c_core pcspkr ext4 jbd2 mbcache xen_blkfront floppy pata_acpi ata_generic ata_piix dm_mirror dm_region_hash dm_log dm_mod
[34377.346090] CPU: 2 PID: 10601 Comm: dd Not tainted 4.1.12-71.el6uek.bug24939243.x86_64 #2
[34377.346090] Hardware name: Xen HVM domU, BIOS 4.4.4OVM 02/11/2016
[34377.346090] task: ffff8800b6de6200 ti: ffff8800a7d48000 task.ti: ffff8800a7d48000
[34377.346090] RIP: 0010:[<ffffffffa06e7397>]  [<ffffffffa06e7397>] ocfs2_journal_dirty+0xa7/0xb0 [ocfs2]
[34377.346090] RSP: 0018:ffff8800a7d4b6d8  EFLAGS: 00010286
[34377.346090] RAX: 00000000ffffffe4 RBX: 00000000814d0a9c RCX: 00000000000004f9
[34377.346090] RDX: ffffffffa008e990 RSI: ffffffffa008f1ee RDI: ffff8800622b6460
[34377.346090] RBP: ffff8800a7d4b6f8 R08: ffffffffa008f288 R09: ffff8800622b6460
[34377.346090] R10: 0000000000000000 R11: 0000000000000282 R12: 0000000002c8421e
[34377.346090] R13: ffff88006d0cad00 R14: ffff880092beef60 R15: 0000000000000070
[34377.346090] FS:  00007f9b83e92700(0000) GS:ffff8800be880000(0000) knlGS:0000000000000000
[34377.346090] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[34377.346090] CR2: 00007fb2c0d1a000 CR3: 0000000008f80000 CR4: 00000000000406e0
[34377.346090] Stack:
[34377.346090]  00000000814d0a9c ffff88005fe61e00 ffff88006e995c00 ffff880009847c00
[34377.346090]  ffff8800a7d4b768 ffffffffa06c0999 ffff88009d3c2a10 ffff88005fe61e30
[34377.346090]  ffff8800997ce500 ffff8800997ce980 ffff880092beef60 000000100000000e
[34377.346090] Call Trace:
[34377.346090]  [<ffffffffa06c0999>] ocfs2_dx_dir_transfer_leaf+0x159/0x1a0 [ocfs2]
[34377.346090]  [<ffffffffa06c3eeb>] ocfs2_dx_dir_rebalance+0xd9b/0xea0 [ocfs2]
[34377.346090]  [<ffffffffa06dedb2>] ? ocfs2_inode_cache_io_unlock+0x12/0x20 [ocfs2]
[34377.346090]  [<ffffffffa0761180>] ? ocfs2_refcount_tree_et_ops+0x60/0xfffffffffffe4b31 [ocfs2]
[34377.346090]  [<ffffffffa06e7730>] ? ocfs2_journal_access_dl+0x20/0x20 [ocfs2]
[34377.346090]  [<ffffffffa06c6b63>] ocfs2_find_dir_space_dx+0xd3/0x300 [ocfs2]
[34377.346090]  [<ffffffffa06c9709>] ocfs2_prepare_dx_dir_for_insert+0x219/0x450 [ocfs2]
[34377.346090]  [<ffffffffa06c9b16>] ocfs2_prepare_dir_for_insert+0x1d6/0x580 [ocfs2]
[34377.346090]  [<ffffffffa06dee90>] ? ocfs2_read_inode_block+0x10/0x20 [ocfs2]
[34377.346090]  [<ffffffffa06f38e2>] ocfs2_mknod+0x5a2/0x1400 [ocfs2]
[34377.346090]  [<ffffffffa06f4933>] ocfs2_create+0x73/0x180 [ocfs2]
[34377.346090]  [<ffffffff81211de8>] vfs_create+0xd8/0x100
[34377.346090]  [<ffffffff8120f5fd>] ? lookup_real+0x1d/0x60
[34377.346090]  [<ffffffff81212535>] lookup_open+0x185/0x1c0
[34377.346090]  [<ffffffff8121571d>] do_last+0x36d/0x780
[34377.346090]  [<ffffffff812a85d6>] ? security_file_alloc+0x16/0x20
[34377.346090]  [<ffffffff81215bc2>] path_openat+0x92/0x470
[34377.346090]  [<ffffffff81215fea>] do_filp_open+0x4a/0xa0
[34377.346090]  [<ffffffff8132c570>] ? find_next_zero_bit+0x10/0x20
[34377.346090]  [<ffffffff812232ec>] ? __alloc_fd+0xac/0x150
[34377.346090]  [<ffffffff8120459a>] do_sys_open+0x11a/0x230
[34377.346090]  [<ffffffff810259d3>] ? syscall_trace_enter_phase1+0x153/0x180
[34377.346090]  [<ffffffff812046ee>] SyS_open+0x1e/0x20
[34377.346090]  [<ffffffff816cb6ee>] system_call_fastpath+0x12/0x71
[34377.346090] Code: 1d 3f 29 09 00 48 85 db 74 1f 48 8b 03 0f 1f 80 00 00 00 00 48 8b 7b 08 48 83 c3 10 4c 89 e6 ff d0 48 8b 03 48 85 c0 75 eb eb 90 <0f> 0b eb fe 0f 1f 44 00 00 55 48 89 e5 41 57 41 56 41 55 41 54
[34377.346090] RIP  [<ffffffffa06e7397>] ocfs2_journal_dirty+0xa7/0xb0 [ocfs2]
[34377.346090]  RSP <ffff8800a7d4b6d8>
[34377.615401] ---[ end trace 91ac5312a6ee1288 ]---
[34377.618919] Kernel panic - not syncing: Fatal exception
[34377.619910] Kernel Offset: disabled

Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
8 years agoocfs2: fix deadlock on mmapped page in ocfs2_write_begin_nolock()
Eric Ren [Fri, 30 Sep 2016 22:11:32 +0000 (15:11 -0700)]
ocfs2: fix deadlock on mmapped page in ocfs2_write_begin_nolock()

The testcase "mmaptruncate" of ocfs2-test deadlocks occasionally.

In this testcase, we create a 2*CLUSTER_SIZE file and mmap() on it;
there are 2 processes repeatedly performing the following operations
respectively: one is doing memset(mmaped_addr + 2*CLUSTER_SIZE - 1, 'a',
1), while the other is doing ftruncate(fd, 2*CLUSTER_SIZE) and then
ftruncate(fd, CLUSTER_SIZE) again and again.

This is the backtrace when the deadlock happens:

   __wait_on_bit_lock+0x50/0xa0
   __lock_page+0xb7/0xc0
   ocfs2_write_begin_nolock+0x163f/0x1790 [ocfs2]
   ocfs2_page_mkwrite+0x1c7/0x2a0 [ocfs2]
   do_page_mkwrite+0x66/0xc0
   handle_mm_fault+0x685/0x1350
   __do_page_fault+0x1d8/0x4d0
   trace_do_page_fault+0x37/0xf0
   do_async_page_fault+0x19/0x70
   async_page_fault+0x28/0x30

In ocfs2_write_begin_nolock(), we first grab the pages and then allocate
disk space for this write; ocfs2_try_to_free_truncate_log() will be
called if -ENOSPC is returned; if we're lucky to get enough clusters,
which is usually the case, we start over again.

But in ocfs2_free_write_ctxt() the target page isn't unlocked, so we
will deadlock when trying to grab the target page again.

Also, -ENOMEM might be returned in ocfs2_grab_pages_for_write().
Another deadlock will happen in __do_page_mkwrite() if
ocfs2_page_mkwrite() returns non-VM_FAULT_LOCKED, and along with a
locked target page.

These two errors fail on the same path, so fix them by unlocking the
target page manually before ocfs2_free_write_ctxt().

Jan Kara helps me clear out the JBD2 part, and suggest the hint for root
cause.

Changes since v1:
1. Also put ENOMEM error case into consideration.

Link: http://lkml.kernel.org/r/1474173902-32075-1-git-send-email-zren@suse.com
Signed-off-by: Eric Ren <zren@suse.com>
Reviewed-by: He Gang <ghe@suse.com>
Acked-by: Joseph Qi <joseph.qi@huawei.com>
Cc: Mark Fasheh <mfasheh@suse.de>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Junxiao Bi <junxiao.bi@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit c33f0785bf292cf1d15f4fbe42869c63e205b21c)

Conflicts:

fs/ocfs2/aops.c

Orabug: 24939243

Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
8 years agoocfs2/dlm: fix race between convert and migration
Joseph Qi [Mon, 19 Sep 2016 21:43:55 +0000 (14:43 -0700)]
ocfs2/dlm: fix race between convert and migration

Commit ac7cf246dfdb ("ocfs2/dlm: fix race between convert and recovery")
checks whether the lockres master has changed to identify whether the new
master has finished recovery.  This introduces a race: right after the
old master does umount (meaning the master will change), a new convert
request comes in.

In this case, it will reset lockres state to DLM_RECOVERING and then
retry convert, and then fail with lockres->l_action being set to
OCFS2_AST_INVALID, which will cause an inconsistent lock level between
ocfs2 and dlm, and finally a BUG.

Since dlm recovery will clear lock->convert_pending in
dlm_move_lockres_to_recovery_list, we can use it to correctly identify
the race case between convert and recovery.  So fix it.

Fixes: ac7cf246dfdb ("ocfs2/dlm: fix race between convert and recovery")
Link: http://lkml.kernel.org/r/57CE1569.8010704@huawei.com
Signed-off-by: Joseph Qi <joseph.qi@huawei.com>
Signed-off-by: Jun Piao <piaojun@huawei.com>
Cc: Mark Fasheh <mfasheh@suse.de>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Junxiao Bi <junxiao.bi@oracle.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit e6f0c6e6170fec175fe676495f29029aecdf486c)

Orabug: 24939243

Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
8 years agoocfs2: solve a problem of crossing the boundary in updating backups
jiangyiwen [Fri, 25 Mar 2016 21:21:35 +0000 (14:21 -0700)]
ocfs2: solve a problem of crossing the boundary in updating backups

In update_backups() there exists a problem of crossing the boundary as
follows:

Assume that the LUN is resized to 1TB (cluster_size is 32KB); it will then
contain clusters 0~33554431. In update_backups(), the super block is backed
up at the 1TB location, which is the 33554432nd cluster, so the access
crosses the boundary.

Signed-off-by: Yiwen Jiang <jiangyiwen@huawei.com>
Reviewed-by: Joseph Qi <joseph.qi@huawei.com>
Cc: Xue jiufei <xuejiufei@huawei.com>
Cc: Mark Fasheh <mfasheh@suse.de>
Cc: Joel Becker <jlbec@evilplan.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 584dca3440732afa84fbca07567bb66e1453936a)

Orabug: 24939243

Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
8 years agoocfs2: use spinlock_irqsave() to downconvert lock in ocfs2_osb_dump()
jiangyiwen [Tue, 15 Mar 2016 21:53:01 +0000 (14:53 -0700)]
ocfs2: use spinlock_irqsave() to downconvert lock in ocfs2_osb_dump()

Commit a75e9ccabd92 ("ocfs2: use spinlock irqsave for downconvert lock")
missed one place in ocfs2_osb_dump(), so a deadlock scenario still
exists.

    ocfs2_wake_downconvert_thread
    ocfs2_rw_unlock
    ocfs2_dio_end_io
    dio_complete
    .....
    bio_endio
    req_bio_endio
    ....
    scsi_io_completion
    blk_done_softirq
    __do_softirq
    do_softirq
    irq_exit
    do_IRQ
    ocfs2_osb_dump
    cat /sys/kernel/debug/ocfs2/${uuid}/fs_state

This patch likewise uses spin_lock_irqsave() in place of spin_lock() to
solve this situation.
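
The change is the standard irqsave variant around the downconvert state
(sketch; the statistics formatting is elided):

  /* Sketch: dc_task_lock is also taken from the softirq completion path
   * shown in the chain above, so ocfs2_osb_dump() must disable interrupts
   * while holding it. */
  unsigned long flags;

  spin_lock_irqsave(&osb->dc_task_lock, flags);
  /* ... format the downconvert thread statistics ... */
  spin_unlock_irqrestore(&osb->dc_task_lock, flags);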

Signed-off-by: Yiwen Jiang <jiangyiwen@huawei.com>
Reviewed-by: Joseph Qi <joseph.qi@huawei.com>
Cc: Mark Fasheh <mfasheh@suse.de>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Junxiao Bi <junxiao.bi@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit bfd97a0320d338b2fce422adeddd512466ef2390)

Orabug: 24939243

Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
8 years agoocfs2: access orphan dinode before delete entry in ocfs2_orphan_del
Joseph Qi [Thu, 14 Jan 2016 23:17:44 +0000 (15:17 -0800)]
ocfs2: access orphan dinode before delete entry in ocfs2_orphan_del

Currently, ocfs2_orphan_del finds and deletes the entry first, and
then accesses the orphan dir dinode.  This will be a problem once
ocfs2_journal_access_di fails.  In that case, the entry will be removed from
the orphan dir, but in fact the inode hasn't been deleted successfully.  In
other words, the file is missing but not actually deleted.  So we should
access the orphan dinode first, like unlink and rename do.

Signed-off-by: Joseph Qi <joseph.qi@huawei.com>
Reviewed-by: Jiufei Xue <xuejiufei@huawei.com>
Cc: Mark Fasheh <mfasheh@suse.de>
Cc: Joel Becker <jlbec@evilplan.org>
Reviewed-by: Junxiao Bi <junxiao.bi@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 074a6c655f6da12cb1123c8a84bfd8d781138800)

Orabug: 24939243

Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
8 years agoocfs2/dlm: do not insert a new mle when another process is already migrating
xuejiufei [Thu, 14 Jan 2016 23:17:41 +0000 (15:17 -0800)]
ocfs2/dlm: do not insert a new mle when another process is already migrating

When two processes are migrating the same lockres,
dlm_add_migration_mle() returns -EEXIST but inserts a new mle into the hash
list.  dlm_migrate_lockres() will detach the old mle and free the new
one, which is already in the hash list, and that will corrupt the list.

Signed-off-by: Jiufei Xue <xuejiufei@huawei.com>
Reviewed-by: Joseph Qi <joseph.qi@huawei.com>
Cc: Mark Fasheh <mfasheh@suse.de>
Cc: Joel Becker <jlbec@evilplan.org>
Reviewed-by: Junxiao Bi <junxiao.bi@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 32e493265b2be96404aaa478fb2913be29b06887)

Orabug: 24939243

Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
8 years agoocfs2: fix slot overwritten if storage link down during mount
jiangyiwen [Thu, 14 Jan 2016 23:17:33 +0000 (15:17 -0800)]
ocfs2: fix slot overwritten if storage link down during mount

The following case will lead to a slot being overwritten.

N1                               N2
mount ocfs2 volume, find and
allocate slot 0, then set
osb->slot_num to 0, begin to
write slot info to disk
                                 mount ocfs2 volume, wait for super lock
write block fail because of
storage link down, unlock
super lock
                                 got super lock and also allocate slot 0
                                 then unlock super lock

mount fail and then dismount,
since osb->slot_num is 0, try to
put invalid slot to disk. And it
will succeed if storage link
restores.
                                 N2 slot info is now overwritten

Once another node, say N3, mounts, it will find and allocate slot 0 again,
which will lead to a mount hang because the journal has already been locked
by N2.  So when writing the slot info fails, invalidate the slot in advance
to avoid overwriting it.

[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Yiwen Jiang <jiangyiwen@huawei.com>
Reviewed-by: Joseph Qi <joseph.qi@huawei.com>
Cc: Mark Fasheh <mfasheh@suse.de>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Junxiao Bi <junxiao.bi@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 1247017f43a93eae3d64b7c25f3637dc545f5a47)

Orabug: 24939243

Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
8 years agoocfs2/dlm: return appropriate value when dlm_grab() returns NULL
Xue jiufei [Thu, 14 Jan 2016 23:17:29 +0000 (15:17 -0800)]
ocfs2/dlm: return appropriate value when dlm_grab() returns NULL

dlm_grab() may return NULL when the node is doing unmount.  When doing
code review, we found that some dlm handlers may return an error to the
caller when dlm_grab() returns NULL, making the caller BUG or causing other
problems.
Here is an example:

Node 1                                 Node 2
receives migration message
from node 3, and send
migrate request to others
                                     start unmounting

                                     receives migrate request
                                     from node 1 and call
                                     dlm_migrate_request_handler()

                                     unmount thread unregisters
                                     domain handlers and removes
                                     dlm_context from dlm_domains

                                     dlm_migrate_request_handlers()
                                     returns -EINVAL to node 1
Exits migration without clearing the
migration state or sending an assert
master message to node 3, which causes
node 3 to hang.
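
A minimal sketch of the intended behaviour (the handler signature is the
usual o2net form; this is not the exact upstream diff): report success
when the domain is already being torn down, so the sender can finish its
own state machine.

    int dlm_migrate_request_handler(struct o2net_msg *msg, u32 len,
                                    void *data, void **ret_data)
    {
            struct dlm_ctxt *dlm = data;

            /* Domain is unmounting: nothing to migrate here, but do not
             * return an error that would stall the requesting node. */
            if (!dlm_grab(dlm))
                    return 0;

            /* ... normal migrate request processing ... */

            dlm_put(dlm);
            return 0;
    }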

Signed-off-by: Jiufei Xue <xuejiufei@huawei.com>
Reviewed-by: Joseph Qi <joseph.qi@huawei.com>
Reviewed-by: Yiwen Jiang <jiangyiwen@huawei.com>
Cc: Mark Fasheh <mfasheh@suse.de>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Junxiao Bi <junxiao.bi@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit c372f2193a2e73d5936bf37259ae63ca388b4cbc)

Orabug: 24939243

Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
8 years agoocfs2/dlm: wait until DLM_LOCK_RES_SETREF_INPROG is cleared in dlm_deref_lockres_worker
jiangyiwen [Thu, 14 Jan 2016 23:17:23 +0000 (15:17 -0800)]
ocfs2/dlm: wait until DLM_LOCK_RES_SETREF_INPROG is cleared in dlm_deref_lockres_worker

Commit f3f854648de6 ("ocfs2_dlm: Ensure correct ordering of set/clear
refmap bit on lockres") still leaves a race which can't ensure the
ordering is exactly correct.

Node1               Node2                    Node3
umount, migrate
lockres to Node2
                    migrate finished,
                    send migrate request
                    to Node3
                                              received migrate request,
                                              create a migration_mle,
                                              respond to Node2.
                    set DLM_LOCK_RES_SETREF_INPROG
                    and send assert master to
                    Node3
                                              delete migration_mle in
                                              assert_master_handler,
                                              Node3 umount without response
                                              dlm_thread purge
                                              this lockres, send drop
                                              deref message to Node2
                    found the flag of
                    DLM_LOCK_RES_SETREF_INPROG
                    is set, dispatch
                    dlm_deref_lockres_worker to
                    clear refmap, but in function of
                    dlm_deref_lockres_worker,
                    only if node in refmap it wait
                    DLM_LOCK_RES_SETREF_INPROG
                    to be cleared. So worker is
                    done successfully

                                              purge lockres, send
                                              assert master response
                                              to Node1, and finish umount
                    set Node3 in refmap, and it
                    won't be cleared forever, thus
                    lead to umount hung

So wait until DLM_LOCK_RES_SETREF_INPROG is cleared in
dlm_deref_lockres_worker.

Signed-off-by: Yiwen Jiang <jiangyiwen@huawei.com>
Reviewed-by: Joseph Qi <joseph.qi@huawei.com>
Reviewed-by: Junxiao Bi <junxiao.bi@oracle.com>
Cc: Mark Fasheh <mfasheh@suse.de>
Cc: Joel Becker <jlbec@evilplan.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit b5560143385e18b4109ad6951c7719705e3dd995)

Orabug: 24939243

Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
8 years agoocfs2/dlm: fix a race between purge and migration
Xue jiufei [Thu, 14 Jan 2016 23:17:18 +0000 (15:17 -0800)]
ocfs2/dlm: fix a race between purge and migration

We found a race between purge and migration during code review.
Node A puts a lockres on the purge list before receiving the migrate
message from node B, which is the master.  Node A then calls
dlm_mig_lockres_handler() to handle this message.

dlm_mig_lockres_handler
  dlm_lookup_lockres
  >>>>>> race window, dlm_run_purge_list may run and send
         deref message to master, waiting the response
  spin_lock(&res->spinlock);
  res->state |= DLM_LOCK_RES_MIGRATING;
  spin_unlock(&res->spinlock);
  dlm_mig_lockres_handler returns

  >>>>>> dlm_thread receives the response from master for the deref
  message and triggers the BUG because the lockres has the state
  DLM_LOCK_RES_MIGRATING with the following message:

dlm_purge_lockres:209 ERROR: 6633EB681FA7474A9C280A4E1A836F0F: res
M0000000000000000030c0300000000 in use after deref

Signed-off-by: Jiufei Xue <xuejiufei@huawei.com>
Reviewed-by: Joseph Qi <joseph.qi@huawei.com>
Reviewed-by: Yiwen Jiang <jiangyiwen@huawei.com>
Cc: Mark Fasheh <mfasheh@suse.de>
Cc: Joel Becker <jlbec@evilplan.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 30bee898f86506893883ffb8db20d8101a29b5f5)

Orabug: 24939243

Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
8 years agoocfs2/dlm: clear migration_pending when migration target goes down
xuejiufei [Tue, 29 Dec 2015 22:54:29 +0000 (14:54 -0800)]
ocfs2/dlm: clear migration_pending when migration target goes down

We have found a BUG on res->migration_pending when migrating lock
resources.  The situation is as follows.

dlm_mark_lockres_migrating
  res->migration_pending = 1;
  __dlm_lockres_reserve_ast
  dlm_lockres_release_ast returns with res->migration_pending still set
      because other threads have reserved asts
  wait dlm_migration_can_proceed returns 1
  >>>>>>> o2hb found that target goes down and remove target
          from domain_map
  dlm_migration_can_proceed returns 1
  dlm_mark_lockres_migrating returns -ESHUTDOWN with
      res->migration_pending still set.

When re-entering dlm_mark_lockres_migrating(), it will trigger the
BUG_ON because res->migration_pending is still set.  So clear
migration_pending when the target is down.
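
A hedged sketch of the fix, using the fields named in the trace above
(not the exact upstream diff): if the target has dropped out of the
domain map while we waited, clear migration_pending before bailing out.

    if (!test_bit(target, dlm->domain_map)) {
            /* target died: undo the pending flag set earlier so a
             * retry does not trip BUG_ON(res->migration_pending) */
            spin_lock(&res->spinlock);
            res->migration_pending = 0;
            spin_unlock(&res->spinlock);
            ret = -ESHUTDOWN;
    }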

Signed-off-by: Jiufei Xue <xuejiufei@huawei.com>
Reviewed-by: Joseph Qi <joseph.qi@huawei.com>
Cc: Mark Fasheh <mfasheh@suse.de>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Junxiao Bi <junxiao.bi@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit cc28d6d80f6ab494b10f0e2ec949eacd610f66e3)

Orabug: 24939243

Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
8 years agoocfs2: fix BUG when calculate new backup super
Joseph Qi [Tue, 29 Dec 2015 22:54:06 +0000 (14:54 -0800)]
ocfs2: fix BUG when calculate new backup super

When resizing, it first extends the last gd.  Once a backup super should
be placed in that gd, it calculates the new backup super location and
updates the corresponding values.

But it currently doesn't consider the situation where the backup super
has already been accounted for.  In that case, it still sets the bit in
the gd bitmap and then decreases bg_free_bits_count, which leads to a
corrupted gd and triggers the BUG in ocfs2_block_group_set_bits:

    BUG_ON(le16_to_cpu(bg->bg_free_bits_count) < num_bits);

So check whether the backup super has already been accounted for before
doing the updates.
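
A hedged sketch of that check (the bitmap helpers are the names assumed
here): only account the backup super when its bit is not already set, so
bg_free_bits_count is never decremented twice.

    if (!ocfs2_test_bit(backup_bit, bitmap)) {
            ocfs2_set_bit(backup_bit, bitmap);
            le16_add_cpu(&gd->bg_free_bits_count, -1);
    }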

Signed-off-by: Joseph Qi <joseph.qi@huawei.com>
Reviewed-by: Jiufei Xue <xuejiufei@huawei.com>
Reviewed-by: Yiwen Jiang <jiangyiwen@huawei.com>
Cc: Mark Fasheh <mfasheh@suse.de>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 5c9ee4cbf2a945271f25b89b137f2c03bbc3be33)

Orabug: 24939243

Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
8 years agoocfs2: should reclaim the inode if '__ocfs2_mknod_locked' returns an error
alex chen [Fri, 6 Nov 2015 02:44:10 +0000 (18:44 -0800)]
ocfs2: should reclaim the inode if '__ocfs2_mknod_locked' returns an error

In ocfs2_mknod_locked, if '__ocfs2_mknod_locked' returns an error, we
should reclaim the inode successfully claimed above; otherwise, the
inode can never be reused. The case is described below:

ocfs2_mknod
    ocfs2_mknod_locked
        ocfs2_claim_new_inode
                Successfully claim the inode
        __ocfs2_mknod_locked
            ocfs2_journal_access_di
            Failed because of -ENOMEM or other reasons, the inode
                        lockres has not been initialized yet.

    iput(inode)
        ocfs2_evict_inode
            ocfs2_delete_inode
                ocfs2_inode_lock
                    ocfs2_inode_lock_full_nested
                        __ocfs2_cluster_lock
                                Return -EINVAL because of the inode
                                lockres has not been initialized.

                So the following operations are not performed
                ocfs2_wipe_inode
                        ocfs2_remove_inode
                                ocfs2_free_dinode
                                        ocfs2_free_suballoc_bits

Signed-off-by: Alex Chen <alex.chen@huawei.com>
Reviewed-by: Joseph Qi <joseph.qi@huawei.com>
Cc: Mark Fasheh <mfasheh@suse.de>
Cc: Joel Becker <jlbec@evilplan.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit b1529a41f777a48f95d4af29668b70ffe3360e1b)

Orabug: 24939243

Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
8 years agoocfs2: fix race between mount and delete node/cluster
Joseph Qi [Fri, 6 Nov 2015 02:44:07 +0000 (18:44 -0800)]
ocfs2: fix race between mount and delete node/cluster

There is a race between mount and delete node/cluster which can leave
o2hb_thread in a malfunctioning dead loop.

    o2hb_thread
    {
        o2nm_depend_this_node();
        <<<<<< race window, node may have already been deleted, and then
               enter the loop, o2hb thread will be malfunctioning
               because of no configured nodes found.
        while (!kthread_should_stop() &&
               !reg->hr_unclean_stop && !reg->hr_aborted_start) {
    }

So checking the return value of o2nm_depend_this_node() is needed.  If
the node has been deleted, do not enter the loop and let the mount fail.
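
A minimal sketch of the check, reusing the fields quoted above (error
handling simplified):

    ret = o2nm_depend_this_node();
    if (ret) {
            /* the node was deleted under us: abort the start and let
             * the mount fail instead of looping with no config */
            reg->hr_aborted_start = 1;
            goto bail;
    }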

Signed-off-by: Joseph Qi <joseph.qi@huawei.com>
Cc: Mark Fasheh <mfasheh@suse.com>
Cc: Joel Becker <jlbec@evilplan.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 0986fe9b50f425ec81f25a1a85aaf3574b31d801)

Orabug: 24939243

Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
8 years agoocfs2/dlm: unlock lockres spinlock before dlm_lockres_put
Joseph Qi [Thu, 22 Oct 2015 20:32:29 +0000 (13:32 -0700)]
ocfs2/dlm: unlock lockres spinlock before dlm_lockres_put

dlm_lockres_put will call dlm_lockres_release if it is the last
reference, and then it may call dlm_print_one_lock_resource and
take lockres spinlock.

So unlock lockres spinlock before dlm_lockres_put to avoid deadlock.
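
A minimal sketch of the reordering (the surrounding dlm code is
omitted): release the spinlock first, then drop the reference.

    spin_lock(&res->spinlock);
    /* ... final checks on the lockres ... */
    spin_unlock(&res->spinlock);
    dlm_lockres_put(res);   /* may print the lockres and re-take the lock */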

Signed-off-by: Joseph Qi <joseph.qi@huawei.com>
Cc: Mark Fasheh <mfasheh@suse.de>
Cc: Joel Becker <jlbec@evilplan.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit b67de018b37a97548645a879c627d4188518e907)

Orabug: 24939243

Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
8 years agoocfs2: avoid access invalid address when read o2dlm debug messages
Yiwen Jiang [Fri, 4 Sep 2015 22:44:37 +0000 (15:44 -0700)]
ocfs2: avoid access invalid address when read o2dlm debug messages

The following case can lead to a lockres being freed while it is still in use.

cat /sys/kernel/debug/o2dlm/locking_state dlm_thread
lockres_seq_start
    -> lock dlm->track_lock
    -> get resA
                                                resA->refs decrease to 0,
                                                call dlm_lockres_release,
                                                and wait for "cat" unlock.
Although resA->refs is already set to 0,
increase resA->refs, and then unlock
                                                lock dlm->track_lock
                                                    -> list_del_init()
                                                    -> unlock
                                                    -> free resA

In such a race, an invalid address access may occur.  So we should
delete the lockres from the res->tracking list before resA->refs
decreases to 0.
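
A hedged sketch of the ordering change, using the fields already named
above: take the lockres off the tracking list while a reference is still
held, so the debugfs iterator can never pick up a dying lockres.

    spin_lock(&dlm->track_lock);
    if (!list_empty(&res->tracking))
            list_del_init(&res->tracking);
    spin_unlock(&dlm->track_lock);

    dlm_lockres_put(res);   /* the final reference may now go away safely */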

Signed-off-by: Yiwen Jiang <jiangyiwen@huawei.com>
Reviewed-by: Joseph Qi <joseph.qi@huawei.com>
Cc: Joel Becker <jlbec@evilplan.org>
Signed-off-by: Mark Fasheh <mfasheh@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit f57a22ddecd6f26040a67e2c12880f98f88b6e00)

Orabug: 24939243

Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
8 years agoocfs2: fix a tiny case that inode can not removed
Yiwen Jiang [Fri, 4 Sep 2015 22:44:25 +0000 (15:44 -0700)]
ocfs2: fix a tiny case that inode can not removed

When running dirop_fileop_racer we found a case where an inode
cannot be removed.

Two nodes, say Node A and Node B, mount the same ocfs2 volume.  Create
two dirs /race/1/ and /race/2/ in the filesystem.

  Node A                            Node B
  rm -r /race/2/
                                    mv /race/1/ /race/2/
  call ocfs2_unlink(), get
  the EX mode of /race/2/
                                    wait for B unlock /race/2/
  decrease i_nlink of /race/2/ to 0,
  and add inode of /race/2/ into
  orphan dir, unlock /race/2/
                                    got EX mode of /race/2/. because
                                    /race/1/ is dir, so inc i_nlink
                                    of /race/2/ and update into disk,
                                    unlock /race/2/
  because i_nlink of /race/2/
  is not zero, this inode will
  always remain in orphan dir

This patch fixes this case by testing whether i_nlink of the new dir is zero.

Signed-off-by: Yiwen Jiang <jiangyiwen@huawei.com>
Reviewed-by: Mark Fasheh <mfasheh@suse.de>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Joseph Qi <joseph.qi@huawei.com>
Cc: Xue jiufei <xuejiufei@huawei.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 928dda1f9433f024ac48c3d97ae683bf83dd0e42)

Orabug: 24939243

Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
8 years agoocfs2: trusted xattr missing CAP_SYS_ADMIN check
Sanidhya Kashyap [Fri, 4 Sep 2015 22:44:08 +0000 (15:44 -0700)]
ocfs2: trusted xattr missing CAP_SYS_ADMIN check

The trusted extended attributes are only visible to processes which
have the CAP_SYS_ADMIN capability, but the check is missing in the ocfs2
trusted xattr_handler list.  The check is important because trusted
xattrs will be used to implement mechanisms in userspace that other
ordinary processes should not have access to.
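
A sketch of the missing check in the trusted ->list handler (the
signature follows the xattr_handler interface of that era and the body
is simplified, not the exact upstream diff):

    static size_t ocfs2_xattr_trusted_list(struct dentry *dentry, char *list,
                                           size_t list_size, const char *name,
                                           size_t name_len, int type)
    {
            const size_t prefix_len = XATTR_TRUSTED_PREFIX_LEN;
            const size_t total_len = prefix_len + name_len + 1;

            /* hide trusted.* from processes without CAP_SYS_ADMIN */
            if (!capable(CAP_SYS_ADMIN))
                    return 0;

            if (list && total_len <= list_size) {
                    memcpy(list, XATTR_TRUSTED_PREFIX, prefix_len);
                    memcpy(list + prefix_len, name, name_len);
                    list[prefix_len + name_len] = '\0';
            }
            return total_len;
    }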

Signed-off-by: Sanidhya Kashyap <sanidhya.gatech@gmail.com>
Reviewed-by: Mark Fasheh <mfasheh@suse.de>
Cc: Joel Becker <jlbec@evilplan.org>
Cc: Taesoo kim <taesoo@gatech.edu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 0f5e7b41f91814447defc34e915fc5d6e52266d9)

Orabug: 24939243

Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
8 years agoocfs2: set filesystem read-only when ocfs2_delete_entry failed.
jiangyiwen [Fri, 4 Sep 2015 22:44:06 +0000 (15:44 -0700)]
ocfs2: set filesystem read-only when ocfs2_delete_entry failed.

In ocfs2_rename, if ocfs2_delete_entry(old) fails we are left with an
inode that has two entries (old and new).  Thus, the filesystem will be
inconsistent.

The case is described below:

ocfs2_rename
    -> ocfs2_start_trans
    -> ocfs2_add_entry(new)
    -> ocfs2_delete_entry(old)
        -> __ocfs2_journal_access *failed* because of -ENOMEM
    -> ocfs2_commit_trans

So the filesystem should be set read-only at that point.
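
A hedged sketch of the error handling (ocfs2_error() is the existing
helper that records a filesystem error and, with the default mount
options, forces the volume read-only; the argument list shown is
illustrative):

    status = ocfs2_delete_entry(handle, old_dir, &old_entry_lookup);
    if (status < 0) {
            mlog_errno(status);
            /* the new entry already exists: do not run with both */
            ocfs2_error(osb->sb,
                        "failed to delete old dirent during rename\n");
            goto bail;
    }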

Signed-off-by: Yiwen Jiang <jiangyiwen@huawei.com>
Cc: Joseph Qi <joseph.qi@huawei.com>
Cc: Joel Becker <jlbec@evilplan.org>
Reviewed-by: Mark Fasheh <mfasheh@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 807a7907114c7c703017ed7a96477a2eeb0d08e0)

Orabug: 24939243

Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
8 years agoocfs2: fix NULL pointer dereference in function ocfs2_abort_trigger()
Xue jiufei [Wed, 24 Jun 2015 23:55:20 +0000 (16:55 -0700)]
ocfs2: fix NULL pointer dereference in function ocfs2_abort_trigger()

ocfs2_abort_trigger() uses bh->b_assoc_map to get the sb.  But there is
no code in ocfs2 that sets bh->b_assoc_map, so calling this function
triggers a NULL pointer dereference.  We can get the sb from
bh->b_bdev->bd_super instead of b_assoc_map.
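
A minimal sketch of the fix (the message string is illustrative):

    static void ocfs2_abort_trigger(struct jbd2_buffer_trigger_type *triggers,
                                    struct buffer_head *bh)
    {
            /* b_assoc_map is never set by ocfs2, so reach the superblock
             * through the block device instead */
            ocfs2_error(bh->b_bdev->bd_super,
                        "JBD2 has aborted our journal, ocfs2 cannot continue\n");
    }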

[akpm@linux-foundation.org: update comment, per Joseph]
Signed-off-by: joyce.xue <xuejiufei@huawei.com>
Cc: Joseph Qi <joseph.qi@huawei.com>
Cc: Mark Fasheh <mfasheh@suse.com>
Cc: Joel Becker <jlbec@evilplan.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 74e364ad1b13fd518a0bd4e5aec56d5e8706152f)

Orabug: 24939243

Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
8 years agokvm:vmx: more complete state update on APICv on/off
Roman Kagan [Wed, 18 May 2016 14:48:20 +0000 (17:48 +0300)]
kvm:vmx: more complete state update on APICv on/off

The function to update APICv on/off state (in particular, to deactivate
it when enabling Hyper-V SynIC) is incomplete: it doesn't adjust
APICv-related fields among secondary processor-based VM-execution
controls.  As a result, Windows 2012 guests get stuck when SynIC-based
auto-EOI interrupt intersected with e.g. an IPI in the guest.

In addition, the MSR intercept bitmap isn't updated every time "virtualize
x2APIC mode" is toggled.  This path can only be triggered by a malicious
guest, because Windows didn't use x2APIC but rather their own synthetic
APIC access MSRs; however a guest running in a SynIC-enabled VM could
switch to x2APIC and thus obtain direct access to host APIC MSRs
(CVE-2016-4440).

The patch fixes those omissions.

Signed-off-by: Roman Kagan <rkagan@virtuozzo.com>
Reported-by: Steve Rutherford <srutherford@google.com>
Reported-by: Yang Zhang <yang.zhang.wz@gmail.com>
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Orabug: 23347009
CVE: CVE-2016-4440
Signed-off-by: Manjunath Govindashetty <manjunath.govindashetty@oracle.com>
8 years agovfs: add vfs_select_inode() helper
Miklos Szeredi [Thu, 21 Jul 2016 20:30:58 +0000 (13:30 -0700)]
vfs: add vfs_select_inode() helper

Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Cc: <stable@vger.kernel.org> # v4.2+
Orabug: 24363418
CVE: CVE-2016-6198, CVE-2016-6197
Based on mainline v4.6 commit 54d5ca871e72f2bb172ec9323497f01cd5091ec7
Conflicts:
  include/linux/dcache.h - code base
Signed-off-by: Chuck Anderson <chuck.anderson@oracle.com>
9 years agoext4: only call ext4_truncate when size <= isize
Josef Bacik [Mon, 22 Jun 2015 04:31:26 +0000 (00:31 -0400)]
ext4: only call ext4_truncate when size <= isize

Orabug: 23598757

At LSF we decided that if we truncate up from isize we shouldn't trim
fallocated blocks that were fallocated with KEEP_SIZE and are past the
new i_size.  This patch fixes ext4 to do this.

[ Completely reworked patch so that i_disksize would actually get set
  when truncating up.  Also reworked the code for handling truncate so
  that it's easier to handle. -- tytso ]

Signed-off-by: Josef Bacik <jbacik@fb.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Reviewed-by: Lukas Czerner <lczerner@redhat.com>
(cherry picked from commit 3da40c7b089810ac9cf2bb1e59633f619f3a7312)
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
 Conflicts:
fs/ext4/inode.c

9 years agobtrfs: put delayed item hook into inode
David Sterba [Thu, 19 Nov 2015 13:15:51 +0000 (14:15 +0100)]
btrfs: put delayed item hook into inode

Orabug: 23513043

Inodes for delayed iput allocate a trivial helper structure, let's place
the list hook directly into the inode and save a kmalloc (killing a
__GFP_NOFAIL as a bonus) at the cost of increasing size of btrfs_inode.

The inode can be put into the delayed_iputs list more than once and we
have to keep the count. This means we can't use the list_splice to
process a bunch of inodes because we'd lose track of the count if the
inode is put onto the delayed iputs list again while it's being processed.

Signed-off-by: David Sterba <dsterba@suse.com>
(cherry picked from commit 8089fe62c6603860f6796ca80519b92391292f21)
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
Conflicts:
fs/btrfs/inode.c

9 years agoRevert "net: preserve IP control block during GSO segmentation"
Dan Duval [Thu, 16 Jun 2016 22:45:07 +0000 (18:45 -0400)]
Revert "net: preserve IP control block during GSO segmentation"

Orabug: 23522263

This reverts commit a9e889c735eef731df3fce53fb997277f3b0b498.

Yuval Shaia and team discovered that the aforementioned commit
resulted in a substantial (900-to-1) performance loss on a TCP
bandwidth test over IPoIB.  Upstream is aware of the problem
but there is currently no solution in hand, so just revert the
offending commit.

Note that the original commit purports to fix a kernel crash,
but so far we haven't seen the crash.

Signed-off-by: Dan Duval <dan.duval@oracle.com>
Signed-off-by: Yuval Shaia <yuval.shaia@oracle.com>
Signed-off-by: Ajaykumar Hotchandani <ajaykumar.hotchandani@oracle.com>
Tested-by: Rose Wang <rose.wang@oracle.com>
9 years agoCorrect backport of fa3c776 ("Thermal: Ignore invalid trip points")
Mike Galbraith [Sat, 23 Apr 2016 00:30:39 +0000 (20:30 -0400)]
Correct backport of fa3c776 ("Thermal: Ignore invalid trip points")

Orabug: 23331182

Backport of 81ad4276b505e987dd8ebbdf63605f92cd172b52 failed to adjust
for intervening ->get_trip_temp() argument type change, thus causing
stack protector to panic.

drivers/thermal/thermal_core.c: In function ‘thermal_zone_device_register’:
drivers/thermal/thermal_core.c:1569:41: warning: passing argument 3 of
‘tz->ops->get_trip_temp’ from incompatible pointer type [-Wincompatible-pointer-types]
   if (tz->ops->get_trip_temp(tz, count, &trip_temp))
                                         ^
drivers/thermal/thermal_core.c:1569:41: note: expected ‘long unsigned int *’
but argument is of type ‘int *’

CC: <stable@vger.kernel.org> #3.18,#4.1
Signed-off-by: Mike Galbraith <umgwanakikbuti@gmail.com>
Signed-off-by: Dan Duval <dan.duval@oracle.com>
(cherry picked from commit 5640c4c37eee293451388cd5ee74dfed3a30f32d)

Conflict:

drivers/thermal/thermal_core.c

9 years agotcp_cubic: do not set epoch_start in the future
Eric Dumazet [Thu, 17 Sep 2015 15:38:00 +0000 (08:38 -0700)]
tcp_cubic: do not set epoch_start in the future

Orabug: 23331181

[ Upstream commit c2e7204d180f8efc80f27959ca9cf16fa17f67db ]

Tracking idle time in bictcp_cwnd_event() is imprecise, as epoch_start
is normally set at ACK processing time, not at send time.

Doing a proper fix would need to add an additional state variable,
and does not seem worth the trouble, given CUBIC bug has been there
forever before Jana noticed it.

Let's simply not set epoch_start in the future, otherwise
bictcp_update() could overflow and CUBIC would again
grow cwnd too fast.

This was detected thanks to a packetdrill test Neal wrote that was flaky
before applying this fix.

Fixes: 30927520dbae ("tcp_cubic: better follow cubic curve after idle period")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: Neal Cardwell <ncardwell@google.com>
Signed-off-by: Yuchung Cheng <ycheng@google.com>
Cc: Jana Iyengar <jri@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit af05df01c7a9f4b3cf3dc29365f4c7c4e2cd53ff)

Signed-off-by: Dan Duval <dan.duval@oracle.com>
9 years agousb: xhci: fix xhci locking up during hcd remove
Roger Quadros [Fri, 29 May 2015 14:01:49 +0000 (17:01 +0300)]
usb: xhci: fix xhci locking up during hcd remove

Orabug: 23331180

[ Upstream commit ad6b1d914a9e07f3b9a9ae3396f3c840d0070539 ]

The problem seems to be that if a new device is detected
while we have already removed the shared HCD, then many of the
xhci operations (e.g.  xhci_alloc_dev(), xhci_setup_device())
hang as the command never completes.

I don't think XHCI can operate without the shared HCD as we've
already called xhci_halt() in xhci_only_stop_hcd() when shared HCD
goes away. We need to prevent new commands from being queued
not only when HCD is dying but also when HCD is halted.
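
A hedged sketch of the guard added to command queueing (the flag names
are the xhci driver's own; the surrounding function is omitted):

    if ((xhci->xhc_state & XHCI_STATE_DYING) ||
        (xhci->xhc_state & XHCI_STATE_HALTED)) {
            /* the shared HCD is gone: refuse new commands instead of
             * letting callers wait on a completion that never fires */
            xhci_dbg(xhci, "xHCI dying or halted, can't queue command\n");
            return -ESHUTDOWN;
    }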

The following lockup was detected while testing the otg state
machine.

[  178.199951] xhci-hcd xhci-hcd.0.auto: xHCI Host Controller
[  178.205799] xhci-hcd xhci-hcd.0.auto: new USB bus registered, assigned bus number 1
[  178.214458] xhci-hcd xhci-hcd.0.auto: hcc params 0x0220f04c hci version 0x100 quirks 0x00010010
[  178.223619] xhci-hcd xhci-hcd.0.auto: irq 400, io mem 0x48890000
[  178.230677] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002
[  178.237796] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[  178.245358] usb usb1: Product: xHCI Host Controller
[  178.250483] usb usb1: Manufacturer: Linux 4.0.0-rc1-00024-g6111320 xhci-hcd
[  178.257783] usb usb1: SerialNumber: xhci-hcd.0.auto
[  178.267014] hub 1-0:1.0: USB hub found
[  178.272108] hub 1-0:1.0: 1 port detected
[  178.278371] xhci-hcd xhci-hcd.0.auto: xHCI Host Controller
[  178.284171] xhci-hcd xhci-hcd.0.auto: new USB bus registered, assigned bus number 2
[  178.294038] usb usb2: New USB device found, idVendor=1d6b, idProduct=0003
[  178.301183] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
[  178.308776] usb usb2: Product: xHCI Host Controller
[  178.313902] usb usb2: Manufacturer: Linux 4.0.0-rc1-00024-g6111320 xhci-hcd
[  178.321222] usb usb2: SerialNumber: xhci-hcd.0.auto
[  178.329061] hub 2-0:1.0: USB hub found
[  178.333126] hub 2-0:1.0: 1 port detected
[  178.567585] dwc3 48890000.usb: usb_otg_start_host 0
[  178.572707] xhci-hcd xhci-hcd.0.auto: remove, state 4
[  178.578064] usb usb2: USB disconnect, device number 1
[  178.586565] xhci-hcd xhci-hcd.0.auto: USB bus 2 deregistered
[  178.592585] xhci-hcd xhci-hcd.0.auto: remove, state 1
[  178.597924] usb usb1: USB disconnect, device number 1
[  178.603248] usb 1-1: new high-speed USB device number 2 using xhci-hcd
[  190.597337] INFO: task kworker/u4:0:6 blocked for more than 10 seconds.
[  190.604273]       Not tainted 4.0.0-rc1-00024-g6111320 #1058
[  190.610228] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  190.618443] kworker/u4:0    D c05c0ac0     0     6      2 0x00000000
[  190.625120] Workqueue: usb_otg usb_otg_work
[  190.629533] [<c05c0ac0>] (__schedule) from [<c05c10ac>] (schedule+0x34/0x98)
[  190.636915] [<c05c10ac>] (schedule) from [<c05c1318>] (schedule_preempt_disabled+0xc/0x10)
[  190.645591] [<c05c1318>] (schedule_preempt_disabled) from [<c05c23d0>] (mutex_lock_nested+0x1ac/0x3fc)
[  190.655353] [<c05c23d0>] (mutex_lock_nested) from [<c046cf8c>] (usb_disconnect+0x3c/0x208)
[  190.664043] [<c046cf8c>] (usb_disconnect) from [<c0470cf0>] (_usb_remove_hcd+0x98/0x1d8)
[  190.672535] [<c0470cf0>] (_usb_remove_hcd) from [<c0485da8>] (usb_otg_start_host+0x50/0xf4)
[  190.681299] [<c0485da8>] (usb_otg_start_host) from [<c04849a4>] (otg_set_protocol+0x5c/0xd0)
[  190.690153] [<c04849a4>] (otg_set_protocol) from [<c0484b88>] (otg_set_state+0x170/0xbfc)
[  190.698735] [<c0484b88>] (otg_set_state) from [<c0485740>] (otg_statemachine+0x12c/0x470)
[  190.707326] [<c0485740>] (otg_statemachine) from [<c0053c84>] (process_one_work+0x1b4/0x4a0)
[  190.716162] [<c0053c84>] (process_one_work) from [<c00540f8>] (worker_thread+0x154/0x44c)
[  190.724742] [<c00540f8>] (worker_thread) from [<c0058f88>] (kthread+0xd4/0xf0)
[  190.732328] [<c0058f88>] (kthread) from [<c000e810>] (ret_from_fork+0x14/0x24)
[  190.739898] 5 locks held by kworker/u4:0/6:
[  190.744274]  #0:  ("%s""usb_otg"){.+.+.+}, at: [<c0053bf4>] process_one_work+0x124/0x4a0
[  190.752799]  #1:  ((&otgd->work)){+.+.+.}, at: [<c0053bf4>] process_one_work+0x124/0x4a0
[  190.761326]  #2:  (&otgd->fsm.lock){+.+.+.}, at: [<c048562c>] otg_statemachine+0x18/0x470
[  190.769934]  #3:  (usb_bus_list_lock){+.+.+.}, at: [<c0470ce8>] _usb_remove_hcd+0x90/0x1d8
[  190.778635]  #4:  (&dev->mutex){......}, at: [<c046cf8c>] usb_disconnect+0x3c/0x208
[  190.786700] INFO: task kworker/1:0:14 blocked for more than 10 seconds.
[  190.793633]       Not tainted 4.0.0-rc1-00024-g6111320 #1058
[  190.799567] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  190.807783] kworker/1:0     D c05c0ac0     0    14      2 0x00000000
[  190.814457] Workqueue: usb_hub_wq hub_event
[  190.818866] [<c05c0ac0>] (__schedule) from [<c05c10ac>] (schedule+0x34/0x98)
[  190.826252] [<c05c10ac>] (schedule) from [<c05c4e40>] (schedule_timeout+0x13c/0x1ec)
[  190.834377] [<c05c4e40>] (schedule_timeout) from [<c05c19f0>] (wait_for_common+0xbc/0x150)
[  190.843062] [<c05c19f0>] (wait_for_common) from [<bf068a3c>] (xhci_setup_device+0x164/0x5cc [xhci_hcd])
[  190.852986] [<bf068a3c>] (xhci_setup_device [xhci_hcd]) from [<c046b7f4>] (hub_port_init+0x3f4/0xb10)
[  190.862667] [<c046b7f4>] (hub_port_init) from [<c046eb64>] (hub_event+0x704/0x1018)
[  190.870704] [<c046eb64>] (hub_event) from [<c0053c84>] (process_one_work+0x1b4/0x4a0)
[  190.878919] [<c0053c84>] (process_one_work) from [<c00540f8>] (worker_thread+0x154/0x44c)
[  190.887503] [<c00540f8>] (worker_thread) from [<c0058f88>] (kthread+0xd4/0xf0)
[  190.895076] [<c0058f88>] (kthread) from [<c000e810>] (ret_from_fork+0x14/0x24)
[  190.902650] 5 locks held by kworker/1:0/14:
[  190.907023]  #0:  ("usb_hub_wq"){.+.+.+}, at: [<c0053bf4>] process_one_work+0x124/0x4a0
[  190.915454]  #1:  ((&hub->events)){+.+.+.}, at: [<c0053bf4>] process_one_work+0x124/0x4a0
[  190.924070]  #2:  (&dev->mutex){......}, at: [<c046e490>] hub_event+0x30/0x1018
[  190.931768]  #3:  (&port_dev->status_lock){+.+.+.}, at: [<c046eb50>] hub_event+0x6f0/0x1018
[  190.940558]  #4:  (&bus->usb_address0_mutex){+.+.+.}, at: [<c046b458>] hub_port_init+0x58/0xb10

Signed-off-by: Roger Quadros <rogerq@ti.com>
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 5d0b7d4792890a775d3765ba3e6e444b28c836cc)

Signed-off-by: Dan Duval <dan.duval@oracle.com>
9 years agousb: xhci: fix wild pointers in xhci_mem_cleanup
Lu Baolu [Fri, 8 Apr 2016 13:25:09 +0000 (16:25 +0300)]
usb: xhci: fix wild pointers in xhci_mem_cleanup

Orabug: 23331178

[ Upstream commit 71504062a7c34838c3fccd92c447f399d3cb5797 ]

This patch fixes some wild pointers produced by xhci_mem_cleanup.
These wild pointers will cause a system crash if xhci_mem_cleanup()
is called twice.
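
A hedged sketch of the pattern applied in xhci_mem_cleanup() (field
names as in that era's struct xhci_hcd; only a subset is shown):

    kfree(xhci->usb2_ports);
    xhci->usb2_ports = NULL;
    kfree(xhci->usb3_ports);
    xhci->usb3_ports = NULL;
    kfree(xhci->port_array);
    xhci->port_array = NULL;
    kfree(xhci->rh_bw);
    xhci->rh_bw = NULL;
    kfree(xhci->ext_caps);
    xhci->ext_caps = NULL;
    /* a second cleanup call now sees NULL pointers and does nothing */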

Reported-and-tested-by: Pengcheng Li <lpc.li@hisilicon.com>
Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com>
Cc: stable@vger.kernel.org
Signed-off-by: Mathias Nyman <mathias.nyman@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit bd713f99caa8c08b9bbc57f7c19eefc2eca25fde)

Signed-off-by: Dan Duval <dan.duval@oracle.com>
9 years agodebugfs: Make automount point inodes permanently empty
Seth Forshee [Wed, 9 Mar 2016 15:18:07 +0000 (09:18 -0600)]
debugfs: Make automount point inodes permanently empty

Orabug: 23331176

[ Upstream commit 87243deb88671f70def4c52dfa7ca7830707bd31 ]

Starting with 4.1 the tracing subsystem has its own filesystem
which is automounted in the tracing subdirectory of debugfs.
Prior to this debugfs could be bind mounted in a cloned mount
namespace, but if tracefs has been mounted under debugfs this
now fails because there is a locked child mount. This creates
a regression for container software which bind mounts debugfs
to satisfy the assumption of some userspace software.

In other pseudo filesystems such as proc and sysfs we're already
creating mountpoints like this in such a way that no dirents can
be created in the directories, allowing them to be exceptions to
some MNT_LOCKED tests. In fact we're already do this for the
tracefs mountpoint in sysfs.

Do the same in debugfs_create_automount(), since the intention
here is clearly to create a mountpoint. This fixes the regression,
as locked child mounts on permanently empty directories do not
cause a bind mount to fail.

Cc: stable@vger.kernel.org # v4.1+
Signed-off-by: Seth Forshee <seth.forshee@canonical.com>
Acked-by: Serge Hallyn <serge.hallyn@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 6650ec2e27f912376ea9ec02fe2a144954ce8394)

Signed-off-by: Dan Duval <dan.duval@oracle.com>
9 years agoBtrfs: fix file/data loss caused by fsync after rename and new inode
Filipe Manana [Wed, 30 Mar 2016 22:37:21 +0000 (23:37 +0100)]
Btrfs: fix file/data loss caused by fsync after rename and new inode

Orabug: 23331175

[ Upstream commit 56f23fdbb600e6087db7b009775b95ce07cc3195 ]

If we rename an inode A (be it a file or a directory), create a new
inode B with the old name of inode A and under the same parent directory,
fsync inode B and then power fail, at log tree replay time we end up
removing inode A completely. If inode A is a directory then all its files
are gone too.

Example scenarios where this happens:
This is reproducible with the following steps, taken from a couple of
test cases written for fstests which are going to be submitted upstream
soon:

   # Scenario 1

   mkfs.btrfs -f /dev/sdc
   mount /dev/sdc /mnt
   mkdir -p /mnt/a/x
   echo "hello" > /mnt/a/x/foo
   echo "world" > /mnt/a/x/bar
   sync
   mv /mnt/a/x /mnt/a/y
   mkdir /mnt/a/x
   xfs_io -c fsync /mnt/a/x
   <power failure happens>

   The next time the fs is mounted, log tree replay happens and
   the directory "y" does not exist nor do the files "foo" and
   "bar" exist anywhere (neither in "y" nor in "x", nor the root
   nor anywhere).

   # Scenario 2

   mkfs.btrfs -f /dev/sdc
   mount /dev/sdc /mnt
   mkdir /mnt/a
   echo "hello" > /mnt/a/foo
   sync
   mv /mnt/a/foo /mnt/a/bar
   echo "world" > /mnt/a/foo
   xfs_io -c fsync /mnt/a/foo
   <power failure happens>

   The next time the fs is mounted, log tree replay happens and the
   file "bar" does not exists anymore. A file with the name "foo"
   exists and it matches the second file we created.

Another related problem that does not involve file/data loss is when a
new inode is created with the name of a deleted snapshot and we fsync it:

   mkfs.btrfs -f /dev/sdc
   mount /dev/sdc /mnt
   mkdir /mnt/testdir
   btrfs subvolume snapshot /mnt /mnt/testdir/snap
   btrfs subvolume delete /mnt/testdir/snap
   rmdir /mnt/testdir
   mkdir /mnt/testdir
   xfs_io -c fsync /mnt/testdir # or fsync some file inside /mnt/testdir
   <power failure>

   The next time the fs is mounted the log replay procedure fails because
   it attempts to delete the snapshot entry (which has dir item key type
   of BTRFS_ROOT_ITEM_KEY) as if it were a regular (non-root) entry,
   resulting in the following error that causes mount to fail:

   [52174.510532] BTRFS info (device dm-0): failed to delete reference to snap, inode 257 parent 257
   [52174.512570] ------------[ cut here ]------------
   [52174.513278] WARNING: CPU: 12 PID: 28024 at fs/btrfs/inode.c:3986 __btrfs_unlink_inode+0x178/0x351 [btrfs]()
   [52174.514681] BTRFS: Transaction aborted (error -2)
   [52174.515630] Modules linked in: btrfs dm_flakey dm_mod overlay crc32c_generic ppdev xor raid6_pq acpi_cpufreq parport_pc tpm_tis sg parport tpm evdev i2c_piix4 proc
   [52174.521568] CPU: 12 PID: 28024 Comm: mount Tainted: G        W       4.5.0-rc6-btrfs-next-27+ #1
   [52174.522805] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS by qemu-project.org 04/01/2014
   [52174.524053]  0000000000000000 ffff8801df2a7710 ffffffff81264e93 ffff8801df2a7758
   [52174.524053]  0000000000000009 ffff8801df2a7748 ffffffff81051618 ffffffffa03591cd
   [52174.524053]  00000000fffffffe ffff88015e6e5000 ffff88016dbc3c88 ffff88016dbc3c88
   [52174.524053] Call Trace:
   [52174.524053]  [<ffffffff81264e93>] dump_stack+0x67/0x90
   [52174.524053]  [<ffffffff81051618>] warn_slowpath_common+0x99/0xb2
   [52174.524053]  [<ffffffffa03591cd>] ? __btrfs_unlink_inode+0x178/0x351 [btrfs]
   [52174.524053]  [<ffffffff81051679>] warn_slowpath_fmt+0x48/0x50
   [52174.524053]  [<ffffffffa03591cd>] __btrfs_unlink_inode+0x178/0x351 [btrfs]
   [52174.524053]  [<ffffffff8118f5e9>] ? iput+0xb0/0x284
   [52174.524053]  [<ffffffffa0359fe8>] btrfs_unlink_inode+0x1c/0x3d [btrfs]
   [52174.524053]  [<ffffffffa038631e>] check_item_in_log+0x1fe/0x29b [btrfs]
   [52174.524053]  [<ffffffffa0386522>] replay_dir_deletes+0x167/0x1cf [btrfs]
   [52174.524053]  [<ffffffffa038739e>] fixup_inode_link_count+0x289/0x2aa [btrfs]
   [52174.524053]  [<ffffffffa038748a>] fixup_inode_link_counts+0xcb/0x105 [btrfs]
   [52174.524053]  [<ffffffffa038a5ec>] btrfs_recover_log_trees+0x258/0x32c [btrfs]
   [52174.524053]  [<ffffffffa03885b2>] ? replay_one_extent+0x511/0x511 [btrfs]
   [52174.524053]  [<ffffffffa034f288>] open_ctree+0x1dd4/0x21b9 [btrfs]
   [52174.524053]  [<ffffffffa032b753>] btrfs_mount+0x97e/0xaed [btrfs]
   [52174.524053]  [<ffffffff8108e1b7>] ? trace_hardirqs_on+0xd/0xf
   [52174.524053]  [<ffffffff8117bafa>] mount_fs+0x67/0x131
   [52174.524053]  [<ffffffff81193003>] vfs_kern_mount+0x6c/0xde
   [52174.524053]  [<ffffffffa032af81>] btrfs_mount+0x1ac/0xaed [btrfs]
   [52174.524053]  [<ffffffff8108e1b7>] ? trace_hardirqs_on+0xd/0xf
   [52174.524053]  [<ffffffff8108c262>] ? lockdep_init_map+0xb9/0x1b3
   [52174.524053]  [<ffffffff8117bafa>] mount_fs+0x67/0x131
   [52174.524053]  [<ffffffff81193003>] vfs_kern_mount+0x6c/0xde
   [52174.524053]  [<ffffffff8119590f>] do_mount+0x8a6/0x9e8
   [52174.524053]  [<ffffffff811358dd>] ? strndup_user+0x3f/0x59
   [52174.524053]  [<ffffffff81195c65>] SyS_mount+0x77/0x9f
   [52174.524053]  [<ffffffff814935d7>] entry_SYSCALL_64_fastpath+0x12/0x6b
   [52174.561288] ---[ end trace 6b53049efb1a3ea6 ]---

Fix this by forcing a transaction commit when such cases happen.
This means we check in the commit root of the subvolume tree if there
was any other inode with the same reference when the inode we are
fsync'ing is a new inode (created in the current transaction).

Test cases for fstests, covering all the scenarios given above, were
submitted upstream for fstests:

  * fstests: generic test for fsync after renaming directory
    https://patchwork.kernel.org/patch/8694281/

  * fstests: generic test for fsync after renaming file
    https://patchwork.kernel.org/patch/8694301/

  * fstests: add btrfs test for fsync after snapshot deletion
    https://patchwork.kernel.org/patch/8670671/

Cc: stable@vger.kernel.org
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 033ad030df0ea932a21499582fea59e1df95769b)

Signed-off-by: Dan Duval <dan.duval@oracle.com>
9 years agoBtrfs: fix fsync after truncate when no_holes feature is enabled
Filipe Manana [Thu, 25 Jun 2015 03:17:46 +0000 (04:17 +0100)]
Btrfs: fix fsync after truncate when no_holes feature is enabled

Orabug: 23331174

[ Upstream commit a89ca6f24ffe435edad57de02eaabd37a2c6bff6 ]

When we have the no_holes feature enabled, if we truncate a file to a
smaller size, truncate it again to a size greater than or equal to its
original size and then fsync it, the log tree will not have any
information about the hole covering the range
[truncate_1_offset, new_file_size[.  This means that if the fsync log is
replayed, the file will remain with the state it had before both
truncate operations.

Without the no_holes feature this does not happen, since when the inode
is logged (full sync flag is set) it will find in the fs/subvol tree a
leaf with a generation matching the current transaction id that has an
explicit extent item representing the hole.

Fix this by adding an explicit extent item representing a hole between
the last extent and the inode's i_size if we are doing a full sync.

The issue is easy to reproduce with the following test case for fstests:

  . ./common/rc
  . ./common/filter
  . ./common/dmflakey

  _need_to_be_root
  _supported_fs generic
  _supported_os Linux
  _require_scratch
  _require_dm_flakey

  # This test was motivated by an issue found in btrfs when the btrfs
  # no-holes feature is enabled (introduced in kernel 3.14). So enable
  # the feature if the fs being tested is btrfs.
  if [ $FSTYP == "btrfs" ]; then
      _require_btrfs_fs_feature "no_holes"
      _require_btrfs_mkfs_feature "no-holes"
      MKFS_OPTIONS="$MKFS_OPTIONS -O no-holes"
  fi

  rm -f $seqres.full

  _scratch_mkfs >>$seqres.full 2>&1
  _init_flakey
  _mount_flakey

  # Create our test files and make sure everything is durably persisted.
  $XFS_IO_PROG -f -c "pwrite -S 0xaa 0 64K"         \
                  -c "pwrite -S 0xbb 64K 61K"       \
                  $SCRATCH_MNT/foo | _filter_xfs_io
  $XFS_IO_PROG -f -c "pwrite -S 0xee 0 64K"         \
                  -c "pwrite -S 0xff 64K 61K"       \
                  $SCRATCH_MNT/bar | _filter_xfs_io
  sync

  # Now truncate our file foo to a smaller size (64Kb) and then truncate
  # it to the size it had before the shrinking truncate (125Kb). Then
  # fsync our file. If a power failure happens after the fsync, we expect
  # our file to have a size of 125Kb, with the first 64Kb of data having
  # the value 0xaa and the second 61Kb of data having the value 0x00.
  $XFS_IO_PROG -c "truncate 64K" \
               -c "truncate 125K" \
               -c "fsync" \
               $SCRATCH_MNT/foo

  # Do something similar to our file bar, but the first truncation sets
  # the file size to 0 and the second truncation expands the size to the
  # double of what it was initially.
  $XFS_IO_PROG -c "truncate 0" \
               -c "truncate 253K" \
               -c "fsync" \
               $SCRATCH_MNT/bar

  _load_flakey_table $FLAKEY_DROP_WRITES
  _unmount_flakey

  # Allow writes again, mount to trigger log replay and validate file
  # contents.
  _load_flakey_table $FLAKEY_ALLOW_WRITES
  _mount_flakey

  # We expect foo to have a size of 125Kb, the first 64Kb of data all
  # having the value 0xaa and the remaining 61Kb to be a hole (all bytes
  # with value 0x00).
  echo "File foo content after log replay:"
  od -t x1 $SCRATCH_MNT/foo

  # We expect bar to have a size of 253Kb and no extents (any byte read
  # from bar has the value 0x00).
  echo "File bar content after log replay:"
  od -t x1 $SCRATCH_MNT/bar

  status=0
  exit

The expected file contents in the golden output are:

  File foo content after log replay:
  0000000 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
  *
  0200000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  *
  0372000
  File bar content after log replay:
  0000000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  *
  0772000

Without this fix, their contents are:

  File foo content after log replay:
  0000000 aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa aa
  *
  0200000 bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb bb
  *
  0372000
  File bar content after log replay:
  0000000 ee ee ee ee ee ee ee ee ee ee ee ee ee ee ee ee
  *
  0200000 ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff ff
  *
  0372000 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
  *
  0772000

A test case submission for fstests follows soon.

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
Signed-off-by: Chris Mason <clm@fb.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 091537b5b3102af4765256d2b527a2a1de4fb184)

Signed-off-by: Dan Duval <dan.duval@oracle.com>
9 years agoBtrfs: fix fsync xattr loss in the fast fsync path
Filipe Manana [Fri, 19 Jun 2015 23:44:51 +0000 (00:44 +0100)]
Btrfs: fix fsync xattr loss in the fast fsync path

Orabug: 23331173

[ Upstream commit 36283bf777d963fac099213297e155d071096994 ]

After commit 4f764e515361 ("Btrfs: remove deleted xattrs on fsync log
replay"), we can end up in a situation where during log replay we end up
deleting xattrs that were never deleted when their file was last fsynced.

This happens in the fast fsync path (flag BTRFS_INODE_NEEDS_FULL_SYNC is
not set in the inode) if the inode has the flag BTRFS_INODE_COPY_EVERYTHING
set, the xattr was added in a past transaction and the leaf where the
xattr is located was not updated (COWed or created) in the current
transaction. In this scenario the xattr item never ends up in the log
tree, so at log replay time the replay code deletes the xattr from the
fs/subvol tree, as it thinks that xattr was deleted prior to the last
fsync.

Fix this by always logging all xattrs, which is the simplest and most
reliable way to detect deleted xattrs and replay the deletes at log replay
time.

This issue is reproducible with the following test case for fstests:

  seq=`basename $0`
  seqres=$RESULT_DIR/$seq
  echo "QA output created by $seq"

  here=`pwd`
  tmp=/tmp/$$
  status=1 # failure is the default!

  _cleanup()
  {
      _cleanup_flakey
      rm -f $tmp.*
  }
  trap "_cleanup; exit \$status" 0 1 2 3 15

  # get standard environment, filters and checks
  . ./common/rc
  . ./common/filter
  . ./common/dmflakey
  . ./common/attr

  # real QA test starts here

  # We create a lot of xattrs for a single file. Only btrfs and xfs are currently
  # able to store such a large mount of xattrs per file, other filesystems such
  # as ext3/4 and f2fs for example, fail with ENOSPC even if we attempt to add
  # less than 1000 xattrs with very small values.
  _supported_fs btrfs xfs
  _supported_os Linux
  _need_to_be_root
  _require_scratch
  _require_dm_flakey
  _require_attrs
  _require_metadata_journaling $SCRATCH_DEV

  rm -f $seqres.full

  _scratch_mkfs >> $seqres.full 2>&1
  _init_flakey
  _mount_flakey

  # Create the test file with some initial data and make sure everything is
  # durably persisted.
  $XFS_IO_PROG -f -c "pwrite -S 0xaa 0 32k" $SCRATCH_MNT/foo | _filter_xfs_io
  sync

  # Add many small xattrs to our file.
  # We create such a large amount because it's needed to trigger the issue found
  # in btrfs - we need to have an amount that causes the fs to have at least 3
  # btree leafs with xattrs stored in them, and it must work on any leaf size
  # (maximum leaf/node size is 64Kb).
  num_xattrs=2000
  for ((i = 1; i <= $num_xattrs; i++)); do
      name="user.attr_$(printf "%04d" $i)"
      $SETFATTR_PROG -n $name -v "val_$(printf "%04d" $i)" $SCRATCH_MNT/foo
  done

  # Sync the filesystem to force a commit of the current btrfs transaction, this
  # is a necessary condition to trigger the bug on btrfs.
  sync

  # Now update our file's data and fsync the file.
  # After a successful fsync, if the fsync log/journal is replayed we expect to
  # see all the xattrs we added before with the same values (and the updated file
  # data of course). Btrfs used to delete some of these xattrs when it replayed
  # its fsync log/journal.
  $XFS_IO_PROG -c "pwrite -S 0xbb 8K 16K" \
               -c "fsync" \
               $SCRATCH_MNT/foo | _filter_xfs_io

  # Simulate a crash/power loss.
  _load_flakey_table $FLAKEY_DROP_WRITES
  _unmount_flakey

  # Allow writes again and mount. This makes the fs replay its fsync log.
  _load_flakey_table $FLAKEY_ALLOW_WRITES
  _mount_flakey

  echo "File content after crash and log replay:"
  od -t x1 $SCRATCH_MNT/foo

  echo "File xattrs after crash and log replay:"
  for ((i = 1; i <= $num_xattrs; i++)); do
      name="user.attr_$(printf "%04d" $i)"
      echo -n "$name="
      $GETFATTR_PROG --absolute-names -n $name --only-values $SCRATCH_MNT/foo
      echo
  done

  status=0
  exit

The golden output expects all xattrs to be available, and with the correct
values, after the fsync log is replayed.

Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit db4043dffa5b9f77160dfba088bbbbaa64dd9ed1)

Signed-off-by: Dan Duval <dan.duval@oracle.com>
9 years agoassoc_array: don't call compare_object() on a node
Jerome Marchand [Wed, 6 Apr 2016 13:06:48 +0000 (14:06 +0100)]
assoc_array: don't call compare_object() on a node

Orabug: 23331172

[ Upstream commit 8d4a2ec1e0b41b0cf9a0c5cd4511da7f8e4f3de2 ]

Changes since V1: fixed the description and added KASan warning.

In assoc_array_insert_into_terminal_node(), we call the
compare_object() method on all non-empty slots, even when they're
not leaves, passing a pointer to an unexpected structure to
compare_object(). Currently it causes an out-of-bounds read access
in keyring_compare_object, detected by KASan (see below). The issue
is easily reproduced with the keyutils testsuite.
Only call compare_object() when the slot is a leaf.
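
A minimal sketch of the guard in the slot-scanning loop (the helper
names are those used inside lib/assoc_array.c; the goto target is
illustrative):

    ptr = node->slots[i];
    if (!assoc_array_ptr_is_leaf(ptr))
            continue;   /* never hand a metadata pointer to ->compare_object() */
    if (ops->compare_object(assoc_array_ptr_to_leaf(ptr), index_key))
            goto found_leaf;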

KASan warning:
==================================================================
BUG: KASAN: slab-out-of-bounds in keyring_compare_object+0x213/0x240 at addr ffff880060a6f838
Read of size 8 by task keyctl/1655
=============================================================================
BUG kmalloc-192 (Not tainted): kasan: bad access detected
-----------------------------------------------------------------------------

Disabling lock debugging due to kernel taint
INFO: Allocated in assoc_array_insert+0xfd0/0x3a60 age=69 cpu=1 pid=1647
___slab_alloc+0x563/0x5c0
__slab_alloc+0x51/0x90
kmem_cache_alloc_trace+0x263/0x300
assoc_array_insert+0xfd0/0x3a60
__key_link_begin+0xfc/0x270
key_create_or_update+0x459/0xaf0
SyS_add_key+0x1ba/0x350
entry_SYSCALL_64_fastpath+0x12/0x76
INFO: Slab 0xffffea0001829b80 objects=16 used=8 fp=0xffff880060a6f550 flags=0x3fff8000004080
INFO: Object 0xffff880060a6f740 @offset=5952 fp=0xffff880060a6e5d1

Bytes b4 ffff880060a6f730: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Object ffff880060a6f740: d1 e5 a6 60 00 88 ff ff 0e 00 00 00 00 00 00 00  ...`............
Object ffff880060a6f750: 02 cf 8e 60 00 88 ff ff 02 c0 8e 60 00 88 ff ff  ...`.......`....
Object ffff880060a6f760: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Object ffff880060a6f770: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Object ffff880060a6f780: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Object ffff880060a6f790: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Object ffff880060a6f7a0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Object ffff880060a6f7b0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Object ffff880060a6f7c0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Object ffff880060a6f7d0: 02 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Object ffff880060a6f7e0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
Object ffff880060a6f7f0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
CPU: 0 PID: 1655 Comm: keyctl Tainted: G    B           4.5.0-rc4-kasan+ #291
Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
 0000000000000000 000000001b2800b4 ffff880060a179e0 ffffffff81b60491
 ffff88006c802900 ffff880060a6f740 ffff880060a17a10 ffffffff815e2969
 ffff88006c802900 ffffea0001829b80 ffff880060a6f740 ffff880060a6e650
Call Trace:
 [<ffffffff81b60491>] dump_stack+0x85/0xc4
 [<ffffffff815e2969>] print_trailer+0xf9/0x150
 [<ffffffff815e9454>] object_err+0x34/0x40
 [<ffffffff815ebe50>] kasan_report_error+0x230/0x550
 [<ffffffff819949be>] ? keyring_get_key_chunk+0x13e/0x210
 [<ffffffff815ec62d>] __asan_report_load_n_noabort+0x5d/0x70
 [<ffffffff81994cc3>] ? keyring_compare_object+0x213/0x240
 [<ffffffff81994cc3>] keyring_compare_object+0x213/0x240
 [<ffffffff81bc238c>] assoc_array_insert+0x86c/0x3a60
 [<ffffffff81bc1b20>] ? assoc_array_cancel_edit+0x70/0x70
 [<ffffffff8199797d>] ? __key_link_begin+0x20d/0x270
 [<ffffffff8199786c>] __key_link_begin+0xfc/0x270
 [<ffffffff81993389>] key_create_or_update+0x459/0xaf0
 [<ffffffff8128ce0d>] ? trace_hardirqs_on+0xd/0x10
 [<ffffffff81992f30>] ? key_type_lookup+0xc0/0xc0
 [<ffffffff8199e19d>] ? lookup_user_key+0x13d/0xcd0
 [<ffffffff81534763>] ? memdup_user+0x53/0x80
 [<ffffffff819983ea>] SyS_add_key+0x1ba/0x350
 [<ffffffff81998230>] ? key_get_type_from_user.constprop.6+0xa0/0xa0
 [<ffffffff828bcf4e>] ? retint_user+0x18/0x23
 [<ffffffff8128cc7e>] ? trace_hardirqs_on_caller+0x3fe/0x580
 [<ffffffff81004017>] ? trace_hardirqs_on_thunk+0x17/0x19
 [<ffffffff828bc432>] entry_SYSCALL_64_fastpath+0x12/0x76
Memory state around the buggy address:
 ffff880060a6f700: fc fc fc fc fc fc fc fc 00 00 00 00 00 00 00 00
 ffff880060a6f780: 00 00 00 00 00 00 00 00 00 00 00 fc fc fc fc fc
>ffff880060a6f800: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
                                        ^
 ffff880060a6f880: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
 ffff880060a6f900: fc fc fc fc fc fc 00 00 00 00 00 00 00 00 00 00
==================================================================

Signed-off-by: Jerome Marchand <jmarchan@redhat.com>
Signed-off-by: David Howells <dhowells@redhat.com>
cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 32d1b6727390b22cc58d28eb9d7b2d7055e588b7)

Signed-off-by: Dan Duval <dan.duval@oracle.com>
9 years agomac80211: properly deal with station hashtable insert errors
Johannes Berg [Thu, 31 Mar 2016 15:22:45 +0000 (17:22 +0200)]
mac80211: properly deal with station hashtable insert errors

Orabug: 23331171

[ Upstream commit 62b14b241ca6f790a17ccd9dd9f62ce1b006d406 ]

The original hand-implemented hash-table in mac80211 couldn't result
in insertion errors, and while converting to rhashtable I evidently
forgot to check the errors.

This surfaced now only because Ben is adding many identical keys and
that resulted in hidden insertion errors.
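
A minimal sketch of the idea (sta_info_hash_add() is the small wrapper
around the rhashtable insert in mac80211; the unwind label is
illustrative): check the insert result instead of assuming it succeeds.

    err = sta_info_hash_add(local, sta);
    if (err)
            goto out_drop_sta;   /* unwind the partially-added station */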

Cc: stable@vger.kernel.org
Fixes: 7bedd0cfad4e1 ("mac80211: use rhashtable for station table")
Reported-by: Ben Greear <greearb@candelatech.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit d2bccdc948e21a48d593181c05667d454f423fcd)

Signed-off-by: Dan Duval <dan.duval@oracle.com>
9 years agodrm/i915: Fix race condition in intel_dp_destroy_mst_connector()
Lyude [Wed, 16 Mar 2016 19:18:04 +0000 (15:18 -0400)]
drm/i915: Fix race condition in intel_dp_destroy_mst_connector()

Orabug: 23331170

[ Upstream commit 9e60290dbafdf577766e5fc5f2fdb3be450cf9a6 ]

After unplugging a DP MST display from the system, we have to go through
and destroy all of the DRM connectors associated with it since none of
them are valid anymore. Unfortunately, intel_dp_destroy_mst_connector()
doesn't do a good enough job of ensuring that, throughout the destruction
process, no modesettings can be done with the connectors. As it is
right now, intel_dp_destroy_mst_connector() works like this:

* Take all modeset locks
* Clear the configuration of the crtc on the connector, if there is one
* Drop all modeset locks, this is required because of circular
  dependency issues that arise with trying to remove the connector from
  sysfs with modeset locks held
* Unregister the connector
* Take all modeset locks, again
* Do the rest of the required cleaning for destroying the connector
* Finally drop all modeset locks for good

This only works sometimes. During the destruction process, it's very
possible that a userspace application will attempt to do a modesetting
using the connector. When we drop the modeset locks, an ioctl handler
such as drm_mode_setcrtc has the oppurtunity to take all of the modeset
locks from us. When this happens, one thing leads to another and
eventually we end up committing a mode with the non-existent connector:

[drm:intel_dp_link_training_clock_recovery [i915]] *ERROR* failed to enable link training
[drm:intel_dp_aux_ch] dp_aux_ch timeout status 0x7cf0001f
[drm:intel_dp_start_link_train [i915]] *ERROR* failed to start channel equalization
[drm:intel_dp_aux_ch] dp_aux_ch timeout status 0x7cf0001f
[drm:intel_mst_pre_enable_dp [i915]] *ERROR* failed to allocate vcpi

And in some cases, such as with the T460s using an MST dock, this
results in breaking modesetting and/or panicking the system.

To work around this, we now unregister the connector at the very
beginning of intel_dp_destroy_mst_connector(), grab all the modesetting
locks, and then hold them until we finish the rest of the function.

CC: stable@vger.kernel.org
Signed-off-by: Lyude <cpaul@redhat.com>
Signed-off-by: Rob Clark <rclark@redhat.com>
Reviewed-by: Ville Syrjälä <ville.syrjala@linux.intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Link: http://patchwork.freedesktop.org/patch/msgid/1458155884-13877-1-git-send-email-cpaul@redhat.com
(cherry picked from commit 1f7717552ef1306be3b7ed28c66c6eff550e3a23)
Signed-off-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
Signed-off-by: Dan Duval <dan.duval@oracle.com>
(cherry picked from commit e4ad83b41115d3e57318874474cb655f3d6f6378)

Conflict:

drivers/gpu/drm/i915/intel_dp_mst.c

9 years agousb: renesas_usbhs: fix to avoid using a disabled ep in usbhsg_queue_done()
Yoshihiro Shimoda [Mon, 4 Apr 2016 11:40:20 +0000 (20:40 +0900)]
usb: renesas_usbhs: fix to avoid using a disabled ep in usbhsg_queue_done()

Orabug: stable_rc4

[ Upstream commit 4fccb0767fdbdb781a9c5b5c15ee7b219443c89d ]

This patch fixes an issue where usbhsg_queue_done() may cause a kernel
panic when the DMA callback is running and usb_ep_disable() is called
by an interrupt handler. (In particular, we can reproduce this issue
using g_audio with the usb-dmac driver.)

An example flow:
 usbhsf_dma_complete (on tasklet)
  --> usbhsf_pkt_handler (on tasklet)
   --> usbhsg_queue_done (on tasklet)
    *** interrupt happened and usb_ep_disable() is called ***
    --> usbhsg_queue_pop (on tasklet)
     Then, oops happened.
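
Conceptually, the guard needed on the tasklet path looks like the sketch
below. The lock and lookup helpers used here (usbhs_lock()/usbhs_unlock(),
usbhsg_pipe_to_uep(), __usbhsg_queue_pop(), usbhsg_pkt_to_ureq()) are
assumptions for illustration and not necessarily the driver's exact API.

static void usbhsg_queue_done(struct usbhs_priv *priv, struct usbhs_pkt *pkt)
{
        struct usbhsg_uep *uep;
        unsigned long flags;

        /* Re-check under the driver lock: usb_ep_disable() may have run
         * from interrupt context after this tasklet was scheduled, in
         * which case the endpoint must not be touched any more.
         */
        usbhs_lock(priv, flags);                  /* assumed lock API */
        uep = usbhsg_pipe_to_uep(pkt->pipe);      /* assumed helper; NULL once disabled */
        if (uep)
                __usbhsg_queue_pop(uep, usbhsg_pkt_to_ureq(pkt), 0);
        usbhs_unlock(priv, flags);
}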

Fixes: e73a989 ("usb: renesas_usbhs: add DMAEngine support")
Cc: <stable@vger.kernel.org> # v3.1+
Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
Signed-off-by: Felipe Balbi <felipe.balbi@linux.intel.com>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit be851fafe2b011c725e107b8b70e6ec72bcf0d05)

Signed-off-by: Dan Duval <dan.duval@oracle.com>
9 years agoext4: add lockdep annotations for i_data_sem
Theodore Ts'o [Fri, 1 Apr 2016 05:31:28 +0000 (01:31 -0400)]
ext4: add lockdep annotations for i_data_sem

Orabug: 23331167

[ Upstream commit daf647d2dd58cec59570d7698a45b98e580f2076 ]

With the internal Quota feature, mke2fs creates empty quota inodes and
quota usage tracking is enabled as soon as the file system is mounted.
Since quotacheck is no longer preallocating all of the blocks in the
quota inode that are likely needed to be written to, we are now seeing
a lockdep false positive caused by needing to allocate a quota block
from inside ext4_map_blocks(), while holding i_data_sem for a data
inode.  This results in this complaint:

  Possible unsafe locking scenario:

        CPU0                    CPU1
        ----                    ----
   lock(&ei->i_data_sem);
                                lock(&s->s_dquot.dqio_mutex);
                                lock(&ei->i_data_sem);
   lock(&s->s_dquot.dqio_mutex);
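
The usual cure for this kind of false positive is to put the quota
inode's i_data_sem into its own lockdep subclass, so that allocating a
quota block while holding a data inode's i_data_sem is not reported as
recursive locking on a single class. A minimal sketch of that approach
(enum names and hook point are illustrative, not the verbatim patch):

/* Distinct lockdep subclasses for i_data_sem: ordinary inodes, inodes
 * locked second in a two-inode operation, and quota inodes whose blocks
 * may be allocated while another inode's i_data_sem is already held.
 */
enum {
        I_DATA_SEM_NORMAL = 0,
        I_DATA_SEM_OTHER,
        I_DATA_SEM_QUOTA,
};

/* At quota-inode setup time (illustrative hook point): */
lockdep_set_subclass(&EXT4_I(quota_inode)->i_data_sem, I_DATA_SEM_QUOTA);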

Google-Bug-Id: 27907753

Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Cc: stable@vger.kernel.org
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 031b34ddc5b7714d957f7e8ed229e1c1a4c22a8f)

Signed-off-by: Dan Duval <dan.duval@oracle.com>
9 years agoUSB: cypress_m8: add endpoint sanity check
Oliver Neukum [Thu, 31 Mar 2016 16:04:25 +0000 (12:04 -0400)]
USB: cypress_m8: add endpoint sanity check

Orabug: 23331166

[ Upstream commit c55aee1bf0e6b6feec8b2927b43f7a09a6d5f754 ]

An attack using a device with missing endpoints exists: without a
sanity check in probe, the driver goes on to dereference endpoints
that the device does not provide.
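
The probe-time check has roughly the shape below; the function name,
message text and return value are illustrative assumptions rather than
the exact patch.

static int cypress_port_probe(struct usb_serial_port *port) /* name illustrative */
{
        /* Refuse to bind if the interrupt endpoints the driver relies on
         * are absent; later code would otherwise dereference NULL URBs.
         */
        if (!port->interrupt_in_urb || !port->interrupt_out_urb) {
                dev_err(&port->dev, "required endpoint is missing\n");
                return -ENODEV;
        }

        /* ... normal port setup continues here ... */
        return 0;
}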

CVE-2016-3137

Signed-off-by: Oliver Neukum <ONeukum@suse.com>
CC: stable@vger.kernel.org
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 45f4b9ca0cf8e53df5adc20d11ffb4b2076dd2c5)

Signed-off-by: Dan Duval <dan.duval@oracle.com>
9 years agoUSB: mct_u232: add sanity checking in probe
Oliver Neukum [Thu, 31 Mar 2016 16:04:24 +0000 (12:04 -0400)]
USB: mct_u232: add sanity checking in probe

Orabug: 23331165

[ Upstream commit 4e9a0b05257f29cf4b75f3209243ed71614d062e ]

An attack using the lack of sanity checking in probe is known. This
patch checks for the existence of a second port.
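
In outline, the second-port check looks like the sketch below; the exact
field tested and the error message are assumptions for illustration.

static int mct_u232_port_probe(struct usb_serial_port *port)
{
        struct usb_serial *serial = port->serial;

        /* The driver reads modem status via the second port's
         * interrupt-in endpoint; refuse to bind if it is not there.
         */
        if (serial->num_ports < 2 || !serial->port[1]->interrupt_in_urb) {
                dev_err(&port->dev, "expected endpoint missing\n");
                return -ENODEV;
        }

        /* ... rest of the original probe ... */
        return 0;
}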

CVE-2016-3136

Signed-off-by: Oliver Neukum <ONeukum@suse.com>
CC: stable@vger.kernel.org
[johan: add error message ]
Signed-off-by: Johan Hovold <johan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
(cherry picked from commit 4b8d00f5056e278b053ca183e15f4a8e48d79336)

Signed-off-by: Dan Duval <dan.duval@oracle.com>