In the listxattrs handler, we were not listing all the xattrs that are
packed in the same btree item, which happens when multiple xattrs have
names that, when crc32c hashed, produce the same checksum value.
Fix this by processing them all.
The following test case for xfstests reproduces the issue:
seq=`basename $0`
seqres=$RESULT_DIR/$seq
echo "QA output created by $seq"
tmp=/tmp/$$
status=1 # failure is the default!
trap "_cleanup; exit \$status" 0 1 2 3 15
_cleanup()
{
cd /
rm -f $tmp.*
}
# get standard environment, filters and checks
. ./common/rc
. ./common/filter
. ./common/attr
# real QA test starts here
_supported_fs generic
_supported_os Linux
_require_scratch
_require_attrs
rm -f $seqres.full
_scratch_mkfs >>$seqres.full 2>&1
_scratch_mount
# Create our test file with a few xattrs. The first 3 xattrs have a name
# that when given as input to a crc32c function result in the same checksum.
# This made btrfs list only one of the xattrs through listxattrs system call
# (because it packs xattrs with the same name checksum into the same btree
# item).
touch $SCRATCH_MNT/testfile
$SETFATTR_PROG -n user.foobar -v 123 $SCRATCH_MNT/testfile
$SETFATTR_PROG -n user.WvG1c1Td -v qwerty $SCRATCH_MNT/testfile
$SETFATTR_PROG -n user.J3__T_Km3dVsW_ -v hello $SCRATCH_MNT/testfile
$SETFATTR_PROG -n user.something -v pizza $SCRATCH_MNT/testfile
$SETFATTR_PROG -n user.ping -v pong $SCRATCH_MNT/testfile
# Now call getfattr with --dump, which calls the listxattrs system call.
# It should list all the xattrs we have set before.
$GETFATTR_PROG --absolute-names --dump $SCRATCH_MNT/testfile | _filter_scratch
status=0
exit
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit daac7ba61a0d338c66b70c47d205ba7465718155)
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
When listing an inode's xattrs we have a time window where we race against
a concurrent operation for adding a new hard link for our inode that makes
us not return any xattr to user space. In order for this to happen, the
first xattr of our inode needs to be at slot 0 of a leaf and the previous
leaf must still have room for an inode ref (or extref) item, and this can
happen because an inode's listxattrs callback does not lock the inode's
i_mutex (nor does the VFS do it for us), but adding a hard link to an
inode makes the VFS lock the inode's i_mutex before calling the inode's
link callback.
The race illustrated by the following sequence diagram is possible:
      CPU 1                                        CPU 2

 btrfs_listxattr()
   searches for key (257 XATTR_ITEM 0)
   gets path with path->nodes[0] == leaf X
   and path->slots[0] == N
   because path->slots[0] is >=
   btrfs_header_nritems(leaf X), it calls
   btrfs_next_leaf()
   btrfs_next_leaf()
     releases the path
                                               adds key (257 INODE_REF 666)
                                               to the end of leaf X (slot N),
                                               and leaf X now has N + 1 items
     searches for the key (257 INODE_REF 256),
     with path->keep_locks == 1, because that
     is the last key it saw in leaf X before
     releasing the path
     ends up at leaf X again and it verifies
     that the key (257 INODE_REF 256) is no
     longer the last key in leaf X, so it
     returns with path->nodes[0] == leaf X
     and path->slots[0] == N, pointing to
     the new item with key (257 INODE_REF 666)
   btrfs_listxattr's loop iteration sees that
   the type of the key pointed by the path is
   different from the type BTRFS_XATTR_ITEM_KEY
   and so it breaks the loop and stops looking
   for more xattr items
     --> the application doesn't get any xattr
         listed for our inode
So fix this by breaking the loop only if the key's type is greater than
BTRFS_XATTR_ITEM_KEY and skipping the current key if its type is smaller.
Cc: stable@vger.kernel.org
Signed-off-by: Filipe Manana <fdmanana@suse.com>
(cherry picked from commit f1cd1f0b7d1b5d4aaa5711e8f4e4898b0045cb6d)
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
If we move a directory to a new parent and later log that parent and don't
explicitly log the old parent, when we replay the log we can end up with
entries for the moved directory in both the old and new parent directories.
Besides it being illegal to have directories with multiple hard links in
Linux, it also resulted in leaving the inode item with a link count of 1.
A similar issue also happens if we move a regular file - after the log tree
is replayed the file has a link in both the old and new parent directories,
when it should exist only in the new parent directory.
Sample reproducer:
$ mkfs.btrfs -f /dev/sdc
$ mount /dev/sdc /mnt
$ mkdir /mnt/x
$ mkdir /mnt/y
$ touch /mnt/x/foo
$ mkdir /mnt/y/z
$ sync
$ ln /mnt/x/foo /mnt/x/bar
$ mv /mnt/y/z /mnt/x/z
< power fail >
$ mount /dev/sdc /mnt
$ ls -1Ri /mnt
/mnt:
257 x
258 y
/mnt/x:
259 bar
259 foo
260 z
/mnt/x/z:
/mnt/y:
260 z
/mnt/y/z:
$ umount /dev/sdc
$ btrfs check /dev/sdc
Checking filesystem on /dev/sdc
UUID: a67e2c4a-a4b4-4fdc-b015-9d9af1e344be
checking extents
checking free space cache
checking fs roots
root 5 inode 260 errors 2000, link count wrong
unresolved ref dir 257 index 4 namelen 1 name z filetype 2 errors 0
unresolved ref dir 258 index 2 namelen 1 name z filetype 2 errors 0
(...)
Attempting to remove the directory becomes impossible:
$ mount /dev/sdc /mnt
$ rmdir /mnt/y/z
$ ls -lh /mnt/y
ls: cannot access /mnt/y/z: No such file or directory
total 0
d????????? ? ? ? ? ? z
$ rmdir /mnt/x/z
rmdir: failed to remove ‘/mnt/x/z’: Stale file handle
$ ls -lh /mnt/x
ls: cannot access /mnt/x/z: Stale file handle
total 0
-rw-r--r-- 2 root root 0 Apr 6 18:06 bar
-rw-r--r-- 2 root root 0 Apr 6 18:06 foo
d????????? ? ? ? ? ? z
So make sure that on rename we set the last_unlink_trans value for our
inode, even if it's a directory, to the value of the current transaction's
ID and that, if the new parent directory is logged, we fall back to a
transaction commit.
A test case for fstests is being submitted as well.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
(cherry picked from commit 657ed1aa4898c8304500e0d13f240d5a67e8be5f)
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
We have two cases where we end up deleting a file at log replay time
when we should not. For this to happen the file must have been renamed
and a directory inode must have been fsynced/logged.
Two examples that exercise these two cases are listed below.
Case 1)
$ mkfs.btrfs -f /dev/sdb
$ mount /dev/sdb /mnt
$ mkdir -p /mnt/a/b
$ mkdir /mnt/c
$ touch /mnt/a/b/foo
$ sync
$ mv /mnt/a/b/foo /mnt/c/
# Create file bar just to make sure the fsync on directory a/ does
# something and it's not a no-op.
$ touch /mnt/a/bar
$ xfs_io -c "fsync" /mnt/a
< power fail / crash >
The next time the filesystem is mounted, the log replay procedure
deletes file foo.
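Case 2)
The reproducer for this second case was dropped from this changelog; the
following is a hedged reconstruction, consistent with the description
below (the fsync target file has a hard link in another directory, that
link is removed, and a file is renamed out of that same directory):
$ mkfs.btrfs -f /dev/sdb
$ mount /dev/sdb /mnt
$ mkdir /mnt/a
$ mkdir /mnt/b
$ mkdir /mnt/c
$ touch /mnt/a/foo
$ ln /mnt/a/foo /mnt/b/foo_link
$ touch /mnt/b/bar
$ sync
$ unlink /mnt/b/foo_link
$ mv /mnt/b/bar /mnt/c/
$ xfs_io -c "fsync" /mnt/a/foo
< power fail / crash >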
The next time the filesystem is mounted, the log replay procedure
deletes file bar.
The reason why the files are deleted is that when we log inodes
other than the fsync target inode, we ignore their last_unlink_trans
value and leave the log without enough information to later replay the
rename operations. So we need to look at the last_unlink_trans values
and fall back to a transaction commit if they are greater than the
id of the last committed transaction.
So fix this by looking at the last_unlink_trans values and falling back
to transaction commits when needed. Also, when logging other inodes (for
case 1 we logged descendants of the fsync target inode while for case 2
we logged ancestors) we need to care about concurrent tasks updating
the last_unlink_trans of inodes we are logging (which was already an
existing problem in check_parent_dirs_for_sync()). Since we cannot
acquire their inode mutex (vfs' struct inode ->i_mutex), as that causes
deadlocks with other concurrent operations that acquire the i_mutex of
2 inodes (other fsyncs or renames for example), we need to serialize on
the log_mutex of the inode we are logging. A task setting a new value for
an inode's last_unlink_trans must acquire the inode's log_mutex and it
must do this update before doing the actual unlink operation (which is
already the case except when deleting a snapshot). Conversely the task
logging the inode must first log the inode and then check the inode's
last_unlink_trans value while holding its log_mutex, as if its value is
not greater than the id of the last committed transaction it means it
logged a safe state of the inode's items, while if its value is greater
than the id of the last committed transaction it means the inode state
it has logged might not be safe (the concurrent task might have just
updated last_unlink_trans but hasn't yet done the unlink operation) and
therefore a transaction commit must be done.
Test cases for xfstests follow in separate patches.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit 2be63d5ce929603d4e7cedabd9e992eb34a0ff95)
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
We have one more case where after a log tree is replayed we get
inconsistent metadata leading to stale directory entries, due to
some directories having entries pointing to some inode while the
inode does not have a matching BTRFS_INODE_[REF|EXTREF]_KEY item.
To trigger the problem we need to have a file with multiple hard links
belonging to different parent directories. Then if one of those hard
links is removed and we fsync the file using one of its other links
that belongs to a different parent directory, we end up not logging
the fact that the removed hard link no longer exists in the
parent directory.
Simple reproducer:
seq=`basename $0`
seqres=$RESULT_DIR/$seq
echo "QA output created by $seq"
tmp=/tmp/$$
status=1 # failure is the default!
trap "_cleanup; exit \$status" 0 1 2 3 15
_cleanup()
{
_cleanup_flakey
rm -f $tmp.*
}
# get standard environment, filters and checks
. ./common/rc
. ./common/filter
. ./common/dmflakey
# real QA test starts here
_need_to_be_root
_supported_fs generic
_supported_os Linux
_require_scratch
_require_dm_flakey
_require_metadata_journaling $SCRATCH_DEV
# Create our test directory and file.
mkdir $SCRATCH_MNT/testdir
touch $SCRATCH_MNT/foo
ln $SCRATCH_MNT/foo $SCRATCH_MNT/testdir/foo2
ln $SCRATCH_MNT/foo $SCRATCH_MNT/testdir/foo3
# Make sure everything done so far is durably persisted.
sync
# Now we remove one of our file's hardlinks in the directory testdir.
unlink $SCRATCH_MNT/testdir/foo3
# We now fsync our file using the "foo" link, which has a parent that
# is not the directory "testdir".
$XFS_IO_PROG -c "fsync" $SCRATCH_MNT/foo
# Silently drop all writes and unmount to simulate a crash/power
# failure.
_load_flakey_table $FLAKEY_DROP_WRITES
_unmount_flakey
# Allow writes again, mount to trigger journal/log replay.
_load_flakey_table $FLAKEY_ALLOW_WRITES
_mount_flakey
# After the journal/log is replayed we expect to not see the "foo3"
# link anymore and we should be able to remove all names in the
# directory "testdir" and then remove it (no stale directory entries
# left after the journal/log replay).
echo "Entries in testdir:"
ls -1 $SCRATCH_MNT/testdir
generic/107 3s ... - output mismatch (see .../results/generic/107.out.bad)
--- tests/generic/107.out 2015-08-01 01:39:45.807462161 +0100
+++ /home/fdmanana/git/hub/xfstests/results//generic/107.out.bad
@@ -1,3 +1,5 @@
QA output created by 107
Entries in testdir:
foo2
+foo3
+rmdir: failed to remove '/home/fdmanana/btrfs-tests/scratch_1/testdir': Directory not empty
...
_check_btrfs_filesystem: filesystem on /dev/sdc is inconsistent \
(see /home/fdmanana/git/hub/xfstests/results//generic/107.full)
_check_dmesg: something found in dmesg (see .../results/generic/107.dmesg)
Ran: generic/107
Failures: generic/107
Failed 1 of 1 tests
$ cat /home/fdmanana/git/hub/xfstests/results//generic/107.full
(...)
checking fs roots
root 5 inode 257 errors 200, dir isize wrong
unresolved ref dir 257 index 3 namelen 4 name foo3 filetype 1 errors 5, no dir item, no inode ref
(...)
So fix this by logging all parent inodes, current and old ones, to make
sure we do not get stale entries after log replay. A simpler solution,
such as triggering a full transaction commit, is not used because it would
imply a full transaction commit when an inode is fsynced in the same
transaction that modified it and reloaded it after eviction (because its
last_unlink_trans is set to the same value as its last_trans as of the
commit with the title "Btrfs: fix stale dir entries after unlink, inode
eviction and fsync"), and it would also make fstest generic/066 fail
since one of the fsyncs triggers a full commit and the next fsync would
then not find the inode in the log anymore (and therefore not remove the
xattr).
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit 18aa09229741364280d0a1670597b5207fc05b8d)
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
If we delete a snapshot, fsync its parent directory and crash/power fail
before the next transaction commit, on the next mount when we attempt to
replay the log tree of the root containing the parent directory we will
fail and prevent the filesystem from mounting, which is solvable by wiping
out the log trees with the btrfs-zero-log tool but very inconvenient as
we will lose any data and metadata fsynced before the parent directory
was fsynced.
For example:
$ mkfs.btrfs -f /dev/sdc
$ mount /dev/sdc /mnt
$ mkdir /mnt/testdir
$ btrfs subvolume snapshot /mnt /mnt/testdir/snap
$ btrfs subvolume delete /mnt/testdir/snap
$ xfs_io -c "fsync" /mnt/testdir
< crash / power failure and reboot >
$ mount /dev/sdc /mnt
mount: mount(2) failed: No such file or directory
And in dmesg/syslog we get the following message and trace:
This happens because when we are replaying the log and processing the
directory entry pointing to the snapshot in the subvolume tree, we treat
its btrfs_dir_item item as having a location with a key type matching
BTRFS_INODE_ITEM_KEY, which is wrong because the type matches
BTRFS_ROOT_ITEM_KEY and therefore must be processed differently, as the
object id refers to a root number and not to an inode in the root
containing the parent directory.
So fix this by triggering a transaction commit if an fsync against the
parent directory is requested after deleting a snapshot. This is the
simplest approach for a rare use case. An alternative that avoids the
transaction commit would require more code: explicitly deleting the
snapshot at log replay time (factoring out common code from ioctl.c:
btrfs_ioctl_snap_destroy()), special care at fsync time to remove the
log tree of the snapshot's root from the log root of the root of tree
roots, amongst other steps.
A test case for xfstests that triggers the issue follows.
seq=`basename $0`
seqres=$RESULT_DIR/$seq
echo "QA output created by $seq"
tmp=/tmp/$$
status=1 # failure is the default!
trap "_cleanup; exit \$status" 0 1 2 3 15
_cleanup()
{
_cleanup_flakey
cd /
rm -f $tmp.*
}
# get standard environment, filters and checks
. ./common/rc
. ./common/filter
. ./common/dmflakey
# real QA test starts here
_need_to_be_root
_supported_fs btrfs
_supported_os Linux
_require_scratch
_require_dm_target flakey
_require_metadata_journaling $SCRATCH_DEV
# Create a snapshot at the root of our filesystem (mount point path), delete it,
# fsync the mount point path, crash and mount to replay the log. This should
# succeed and after the filesystem is mounted the snapshot should not be visible
# anymore.
_run_btrfs_util_prog subvolume snapshot $SCRATCH_MNT $SCRATCH_MNT/snap1
_run_btrfs_util_prog subvolume delete $SCRATCH_MNT/snap1
$XFS_IO_PROG -c "fsync" $SCRATCH_MNT
_flakey_drop_and_remount
[ -e $SCRATCH_MNT/snap1 ] && \
echo "Snapshot snap1 still exists after log replay"
# Similar scenario as above, but this time the snapshot is created inside a
# directory and not directly under the root (mount point path).
mkdir $SCRATCH_MNT/testdir
_run_btrfs_util_prog subvolume snapshot $SCRATCH_MNT $SCRATCH_MNT/testdir/snap2
_run_btrfs_util_prog subvolume delete $SCRATCH_MNT/testdir/snap2
$XFS_IO_PROG -c "fsync" $SCRATCH_MNT/testdir
_flakey_drop_and_remount
[ -e $SCRATCH_MNT/testdir/snap2 ] && \
echo "Snapshot snap2 still exists after log replay"
_unmount_flakey
echo "Silence is golden"
status=0
exit
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Tested-by: Liu Bo <bo.li.liu@oracle.com>
Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit 1ec9a1ae1e30c733077c0b288c4301b66b7a81f2)
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
If we create a symlink, fsync its parent directory, crash/power fail and
mount the filesystem, we end up with an empty symlink, which is not only
useless but also not allowed in Linux (the symlink(2) man page is explicit
about that). So we just need to make sure to fully log an inode if it's a
symlink, to ensure its inline extent gets logged, ensuring the same
behaviour as ext3, ext4, xfs, reiserfs, f2fs, nilfs2, etc.
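A reproducer in the style of the other examples in this series (a hedged
sketch; the device and mount point are placeholders):
$ mkfs.btrfs -f /dev/sdc
$ mount /dev/sdc /mnt
$ ln -s /mnt/bar /mnt/foo
$ xfs_io -c "fsync" /mnt
< crash / power failure and reboot >
$ mount /dev/sdc /mnt
$ readlink /mnt/foo
Before the fix, log replay leaves /mnt/foo as an empty symlink and
readlink prints nothing; with the fix it prints /mnt/bar.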
Signed-off-by: Filipe Manana <fdmanana@suse.com>
(cherry picked from commit 3f9749f6e9edcf8ec569fb542efc3be35e06e84a)
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
Journal replay runs when doing recovery for a dead node. To avoid the
impact of stale cache, all blocks of the dead node's journal inode were
reloaded from disk. This hurts performance; checking whether a block is
already cached before reloading it improves performance a lot. In my
test environment, the time spent doing recovery improved from 120s
to 1s.
Manish Chopra [Thu, 30 Jun 2016 06:35:19 +0000 (02:35 -0400)]
qede: qede_poll refactoring
This patch cleans up the qede_poll() routine a bit
and allows qede_poll() to do a single iteration to handle
TX completion [as under heavy TX load qede_poll() might
run for an indefinite time in the while(1) loop for TX
completion processing and cause a CPU stall].
Signed-off-by: Manish <manish.chopra@qlogic.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Orabug: 24361996
Signed-off-by: Manjunath Govindashetty <manjunath.govindashetty@oracle.com>
Manish Chopra [Thu, 30 Jun 2016 06:35:18 +0000 (02:35 -0400)]
qede: Add support for handling IP fragmented packets.
When handling IP fragmented packets with a csum in their
transport header, the csum isn't changed as part of the
fragmentation. As a result, the fragment containing the
transport header would have the correct csum of the original
packet, but one that mismatches the actual packet that
passes on the wire. As a result, on the receive path HW would
give an indication that the packet has an incorrect csum,
which would cause qede to discard the incoming packet.
Since HW also delivers a notification of IP fragments,
change the driver behavior to pass such incoming packets
to the stack and let it make the decision whether they need
to be dropped.
Signed-off-by: Manish <manish.chopra@qlogic.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Orabug: 24361996
Signed-off-by: Manjunath Govindashetty <manjunath.govindashetty@oracle.com>
Static checker warnings:
drivers/net/ethernet/qlogic/qede/qede_ethtool.c:435 qede_get_coalesce()
warn: passing casted pointer '&coal->rx_coalesce_usecs' to
'edev->ops->common->get_coalesce()' 32 vs 16.
The u32 pointer is being typecast to a u16 pointer, which may fail on
big-endian platforms.
Fixes: d552fa84cb35 ("qede: Add support for coalescing config read/update.")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Sudarsana Reddy Kalluru <sudarsana.kalluru@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Orabug: 24361996
Signed-off-by: Manjunath Govindashetty <manjunath.govindashetty@oracle.com>
The data types for rx/tx_coalesce_usecs should be u16.
Fixes: commit 722003ac40c2 ("qed: Add support for coalescing config read/update.")
Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: Sudarsana Reddy Kalluru <sudarsana.kalluru@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Orabug: 24361996
Signed-off-by: Manjunath Govindashetty <manjunath.govindashetty@oracle.com>
qed: Add support for coalescing config read/update.
This patch adds support for configuring the device tx/rx coalescing
timeout values on the order of microseconds. It also adds APIs for
upper layer drivers for reading/updating the coalescing values.
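These are the values that a protocol driver such as qede can expose
through ethtool's coalescing interface; for example (the interface name
is a placeholder):
$ ethtool -c eth0                          # read rx/tx coalescing values
$ ethtool -C eth0 rx-usecs 24 tx-usecs 48  # update them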
Signed-off-by: Sudarsana Reddy Kalluru <sudarsana.kalluru@qlogic.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Orabug: 24361996
Signed-off-by: Manjunath Govindashetty <manjunath.govindashetty@oracle.com>
The query API reads the dcbx data from the device shared memory and
returns it to the caller. The config API configures the user provided
dcbx values on the device, and initiates the dcbx negotiation with
the peer.
Signed-off-by: Sudarsana Reddy Kalluru <sudarsana.kalluru@qlogic.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Orabug: 24361996
Signed-off-by: Manjunath Govindashetty <manjunath.govindashetty@oracle.com>
Dan Carpenter [Tue, 7 Jun 2016 12:04:16 +0000 (15:04 +0300)]
qed: potential overflow in qed_cxt_src_t2_alloc()
In the current code "ent_per_page" could be more than "conn_num", making
"conn_num" negative after the subtraction. In the next iteration
through the loop the negative value is then treated as a very high
positive one, meaning we don't put a limit on "ent_num". This could
lead to memory corruption.
Fixes: dbb799c39717 ('qed: Initialize hardware for new protocols')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Orabug: 24361996
Signed-off-by: Manjunath Govindashetty <manjunath.govindashetty@oracle.com>
Yuval Mintz [Sun, 5 Jun 2016 10:11:16 +0000 (13:11 +0300)]
qed: PF to reply to unknown messages
If a future VF were to send the PF an unknown message, the PF today would
not send a reply. This would have 2 bad effects:
a. The VF would have to time out on the request.
b. If the VF were to send an additional message to the PF, firmware would
mark it as malicious.
Instead, if there's some valid reply-address on the message, let the PF
answer and tell the VF it doesn't know the message.
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Orabug: 24361996
Signed-off-by: Manjunath Govindashetty <manjunath.govindashetty@oracle.com>
Yuval Mintz [Sun, 5 Jun 2016 10:11:15 +0000 (13:11 +0300)]
qed: PF enforce MAC limitation of VFs
The only limitation relating to MACs that the PF enforces today on its
VFs is in case it has a forced-unicast MAC address for them, in which
case they can't configure other unicast addresses.
Specifically, the PF isn't enforcing the number of MAC addresses a VF
can configure, regardless of the number of such filters agreed upon by
PF and VF during the acquisition process.
PF's shadow-config is now extended to also contain information about its
VFs' unicast addresses configuration, allowing such enforcement.
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Orabug: 24361996
Signed-off-by: Manjunath Govindashetty <manjunath.govindashetty@oracle.com>
Yuval Mintz [Sun, 5 Jun 2016 10:11:14 +0000 (13:11 +0300)]
qed: Move doorbell calculation from VF to PF
Today, the VF is aware of its queues context-ids, and calculates the
doorbell address when opening its queues on its own.
The configuration of doorbells in HW can at some point in the future be
changed by the PF [HW has several configurable features that might affect
doorbell addresses, e.g., dpm support]; this would break compatibility
with older VFs, as their calculated doorbell addresses would be incorrect
for such a configuration.
In order to avoid such a backward compatibility failure, let the PF make
the calculation of the doorbell offset based on the context-id, and pass
that to the VF.
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Orabug: 24361996
Signed-off-by: Manjunath Govindashetty <manjunath.govindashetty@oracle.com>
Yuval Mintz [Sun, 5 Jun 2016 10:11:13 +0000 (13:11 +0300)]
qed: Make PF more robust against malicious VF
There are several requests the VF can make toward the PF which the driver
would pass to firmware without checking the validity first - specifically,
opening queues and updating vports. Such configurations might cause the
firmware to assert.
This adds validation of the legality of said configurations on the PF side
before passing it onward via ramrod to firmware.
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Orabug: 24361996
Signed-off-by: Manjunath Govindashetty <manjunath.govindashetty@oracle.com>
Yuval Mintz [Sun, 5 Jun 2016 10:11:12 +0000 (13:11 +0300)]
qed: PF-VF resource negotiation
One of the goals of the VF's first message to the PF [acquire]
is to learn about the number of resources available to it [macs, vlans,
etc.]. This is done via negotiation - the VF requests a set of resources,
which the PF either approves, or disapproves and sends a smaller set of
resources as an alternative. In this latter case, the VF is then expected
to either abort the probe or re-send the acquire message with fewer
required resources.
While this infrastructure has existed since the initial submission of qed
SRIOV support, it's in fact completely inoperative - the PF isn't really
looking into the resources the VF has asked for and is never going to
reply to the VF that it lacks resources.
This patch addresses this flow, fixing it and allowing the PF and VF
to actually agree on a set of resources.
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Orabug: 24361996
Signed-off-by: Manjunath Govindashetty <manjunath.govindashetty@oracle.com>
Yuval Mintz [Sun, 5 Jun 2016 10:11:11 +0000 (13:11 +0300)]
qed: Relax VF firmware requirements
The current driver requires an exact match between VF and PF storm
firmware; any difference would fail the VF acquire message, causing the
VF probe to be aborted.
While there are still dependencies between the two, the recent FW
submission has relaxed the match requirement - instead of an exact
match, there's now a 'fastpath' HSI major/minor scheme, where VFs and
PFs that match in their major number can co-exist even if their minor
is different.
In order to accommodate this change some changes in the vf-start init
flow had to be made, as the VF start ramrod now has to be sent only
after the PF learns which fastpath HSI its VF requires.
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Orabug: 24361996
Signed-off-by: Manjunath Govindashetty <manjunath.govindashetty@oracle.com>
Yuval Mintz [Sat, 4 Jun 2016 05:20:16 +0000 (08:20 +0300)]
qed: Fix next-ptr chains for BE / 32-bit
Commit a91eb52abb50 ("qed: Revisit chain implementation") contains an
incorrect implementation for BE platforms, as the device's regpairs
containing addresses are LE and they're not converted correctly when
read back.
In addition, it raises a compilation warning for 32-bit platforms where
dma_addr_t is a 32-bit variable.
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Orabug: 24361996
Signed-off-by: Manjunath Govindashetty <manjunath.govindashetty@oracle.com>
Yuval Mintz [Fri, 3 Jun 2016 11:35:35 +0000 (14:35 +0300)]
qed: Initialize hardware for new protocols
RoCE and iSCSI would require some added/changed HW configuration in order
to run properly; the biggest single change is the requirement of
allocating and mapping host memory for several HW blocks that aren't
being used by qede [SRC, QM, TM, etc.].
In addition, whereas qede is only using context memory for HW blocks, the
new protocol would also require task memories to be added.
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Orabug: 24361996
Signed-off-by: Manjunath Govindashetty <manjunath.govindashetty@oracle.com>
Yuval Mintz [Fri, 3 Jun 2016 11:35:34 +0000 (14:35 +0300)]
qed: Add iscsi/rdma personalities
This patch adds in the ecore 2 new personalities in addition to
QED_PCI_ETH - QED_PCI_ISCSI and QED_PCI_ETH_ROCE.
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Orabug: 24361996
Signed-off-by: Manjunath Govindashetty <manjunath.govindashetty@oracle.com>
Yuval Mintz [Fri, 3 Jun 2016 11:35:33 +0000 (14:35 +0300)]
qed: Add common HSI for new protocols
This adds the qed portion of the RoCE & iSCSI firmware HSI,
as well as adding several new common HSI files which would be required
by both qed and qed* protocols.
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Orabug: 24361996
Signed-off-by: Manjunath Govindashetty <manjunath.govindashetty@oracle.com>
Yuval Mintz [Fri, 3 Jun 2016 11:35:32 +0000 (14:35 +0300)]
qed: Revisit chain implementation
RoCE driver is going to need a 32-bit chain [the current chain
implementation for qed* supports only 16-bit producer/consumer chains].
This patch adds said support, as well as doing other slight tweaks and
modifications to qed's chain API.
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Orabug: 24361996
Signed-off-by: Manjunath Govindashetty <manjunath.govindashetty@oracle.com>
Yuval Mintz [Thu, 2 Jun 2016 07:23:29 +0000 (10:23 +0300)]
qed: Utilize FW 8.10.3.0
The new QED firmware contains several fixes, including:
- Wrong classification of packets in 4-port devices.
- Anti-spoof interoperability with encapsulated packets.
- Tx-switching of encapsulated packets.
It also slightly improves Tx performance of the device.
In addition, this firmware contains the necessary logic for
supporting iscsi & rdma, for which we plan on pushing protocol
drivers in the imminent future.
Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Orabug: 24361996
Signed-off-by: Manjunath Govindashetty <manjunath.govindashetty@oracle.com>
Mauricio Faria de Oliveira [Tue, 7 Jun 2016 23:13:08 +0000 (20:13 -0300)]
lpfc: fix oops in lpfc_sli4_scmd_to_wqidx_distr() from lpfc_send_taskmgmt()
The lpfc_sli4_scmd_to_wqidx_distr() function expects the scsi_cmnd
'lpfc_cmd->pCmd' not to be null, and to point to the midlayer command.
That's not true in the .eh_(device|target|bus)_reset_handler path,
because lpfc_send_taskmgmt() sends commands not from the midlayer, so
does not set 'lpfc_cmd->pCmd'.
That is true in the .queuecommand path because lpfc_queuecommand()
stores the scsi_cmnd from midlayer in lpfc_cmd->pCmd; and lpfc_cmd is
stored by lpfc_scsi_prep_cmnd() in piocbq->context1 -- which is passed
to lpfc_sli4_scmd_to_wqidx_distr() as lpfc_cmd parameter.
This problem can be hit on SCSI EH, and immediately with sg_reset.
These 2 test-cases demonstrate the problem/fix with next-20160601.
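The test cases themselves are not reproduced here; the quickest way to
exercise the affected .eh_device_reset_handler path, and thus
lpfc_send_taskmgmt(), is a device reset via sg_reset (the device name is
a placeholder):
$ sg_reset -d /dev/sdX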
Fixes: 8b0dff14164d ("lpfc: Add support for using block multi-queue")
Cc: <stable@vger.kernel.org> # v4.2+
Signed-off-by: Mauricio Faria de Oliveira <mauricfo@linux.vnet.ibm.com>
Reviewed-by: Johannes Thumshirn <jthumshirn@suse.de>
Acked-by: James Smart <james.smart@broadcom.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Ashok Vairavan <ashok.vairavan@oracle.com>
If __key_link_begin() failed then "edit" would be uninitialized. I've
added a check to fix that.
This allows a random user to crash the kernel, though it's quite
difficult to achieve. There are three ways it can be done as the user
would have to cause an error to occur in __key_link():
(1) Cause the kernel to run out of memory. In practice, this is difficult
to achieve without ENOMEM cropping up elsewhere and aborting the
attempt.
(2) Revoke the destination keyring between the keyring ID being looked up
and it being tested for revocation. In practice, this is difficult to
time correctly because the KEYCTL_REJECT function can only be used
from the request-key upcall process. Further, users can only make use
of what's in /sbin/request-key.conf, though this does include a
rejection debugging test - which means that the destination keyring
has to be the caller's session keyring in practice.
(3) Have just enough key quota available to create a key, a new session
keyring for the upcall and a link in the session keyring, but not then
sufficient quota to create a link in the nominated destination keyring
so that it fails with EDQUOT.
The bug can be triggered using option (3) above using something like the
following:
echo 80 >/proc/sys/kernel/keys/root_maxbytes
keyctl request2 user debug:fred negate @t
The above sets the quota to something much lower (80) to make the bug
easier to trigger, but this is dependent on the system. Note also that
the name of the keyring created contains a random number that may be
between 1 and 10 characters in size, so may throw the test off by
changing the amount of quota used.
Assuming the failure occurs, something like the following will be seen:
Fixes: f70e2e06196a ('KEYS: Do preallocation for __key_link()')
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David Howells <dhowells@redhat.com>
cc: stable@vger.kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
(cherry picked from commit 38327424b40bcebe2de92d07312c89360ac9229a)
Reviewed-by: Chuck Anderson <chuck.anderson@oracle.com>
Signed-off-by: Brian Maly <brian.maly@oracle.com>
Commit 2d8747c28478f85d9f04292780b1432edd2a384e ("blk-mq: avoid inserting
requests before establishing new mapping") introduced an extraneous call
to mutex_unlock(). The result is that the mutex gets unlocked twice.
Elena Ufimtseva [Thu, 21 Jul 2016 21:25:27 +0000 (17:25 -0400)]
xen-pciback: mark device to be hidden on AER error trigger
Some platforms are configured to reboot the machine upon an
AER unrecoverable error and some virtualized systems are subject
to the security risks described in XSA-124.
This patch allows for simple containment of AER unrecoverable errors
together with killing the guest upon receiving a fatal AER error.
The patch stores in xenstore the SBDF of the passed-through device that
triggered the AER unrecoverable error. This will allow xend to make the
device unassignable until the next reboot or a special hypervisor
hypercall.
Eric Dumazet [Sun, 10 Jul 2016 08:04:02 +0000 (10:04 +0200)]
tcp: make challenge acks less predictable
Yue Cao claims that the current host rate limiting of challenge ACKs
(RFC 5961) could leak enough information to allow a patient attacker
to hijack TCP sessions. He will soon provide details in an academic
paper.
This patch increases the default limit from 100 to 1000, and adds
some randomization so that the attacker can no longer hijack
sessions without spending a considerable amount of probes.
Based on initial analysis and patch from Linus.
Note that we also have per socket rate limiting, so it is tempting
to remove the host limit in the future.
v2: randomize the count of challenge acks per second, not the period.
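The per-host limit is exposed as a sysctl, so the new default can be
inspected or tuned (output shown assumes this patch is applied):
$ sysctl net.ipv4.tcp_challenge_ack_limit
net.ipv4.tcp_challenge_ack_limit = 1000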
Fixes: 282f23c6ee34 ("tcp: implement RFC 5961 3.2")
Reported-by: Yue Cao <ycao009@ucr.edu>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Suggested-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Yuchung Cheng <ycheng@google.com>
Cc: Neal Cardwell <ncardwell@google.com>
Acked-by: Neal Cardwell <ncardwell@google.com>
Acked-by: Yuchung Cheng <ycheng@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Orabug: 2401010
Conflicts:
net/ipv4/tcp_input.c
Signed-off-by: Chuck Anderson <chuck.anderson@oracle.com>
Miklos Szeredi [Tue, 10 May 2016 23:16:37 +0000 (01:16 +0200)]
vfs: rename: check backing inode being equal
If a file is renamed to a hardlink of itself POSIX specifies that rename(2)
should do nothing and return success.
This condition is checked in vfs_rename(). However it won't detect hard
links on overlayfs where these are given separate inodes on the overlayfs
layer.
Overlayfs itself detects this condition and returns success without doing
anything, but then vfs_rename() will proceed as if this was a successful
rename (detach_mounts(), d_move()).
The correct thing to do is to detect this condition before even calling
into overlayfs. This patch does this by calling vfs_select_inode() to get
the underlying inodes.
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Cc: <stable@vger.kernel.org> # v4.2+
Orabug: 24363418
CVE: CVE-2016-6198, CVE-2016-6197
Same as mainline v4.6 commit 9409e22acdfc9153f88d9b1ed2bd2a5b34d2d3ca
Signed-off-by: Chuck Anderson <chuck.anderson@oracle.com>
Unlink and rename in overlayfs checked the upper dentry for staleness by
verifying upper->d_parent against upperdir. However the dentry can go
stale also by being unhashed, for example.
Expand the verification to actually look up the name again (under parent
lock) and check if it matches the upper dentry. This matches what the VFS
does before passing the dentry to the filesystem's unlink/rename methods,
which excludes any inconsistency caused by overlayfs.
Signed-off-by: Miklos Szeredi <mszeredi@redhat.com>
Orabug: 24363418
CVE: CVE-2016-6198, CVE-2016-6197
Based on mainline v4.6 commit 11f3710417d026ea2f4fcf362d866342c5274185
Conflicts:
fs/overlayfs/dir.c - code base
Signed-off-by: Chuck Anderson <chuck.anderson@oracle.com>
Rui Wang [Thu, 21 Jul 2016 20:21:39 +0000 (13:21 -0700)]
ovl: fix getcwd() failure after unsuccessful rmdir
ovl_remove_upper() should do d_drop() only after it successfully
removes the dir, otherwise a subsequent getcwd() system call will
fail, breaking userspace programs.
This is to fix: https://bugzilla.kernel.org/show_bug.cgi?id=110491
Signed-off-by: Rui Wang <rui.y.wang@intel.com>
Reviewed-by: Konstantin Khlebnikov <koct9i@gmail.com>
Signed-off-by: Miklos Szeredi <miklos@szeredi.hu>
Cc: <stable@vger.kernel.org>
Orabug: 24363418
CVE: CVE-2016-6198, CVE-2016-6197
Based on mainline v4.5 commit ce9113bbcbf45a57c082d6603b9a9f342be3ef74
Pre-req for mainline v4.6 commit 11f3710417d026ea2f4fcf362d866342c5274185
Conflicts:
fs/overlayfs/dir.c - code base
Signed-off-by: Chuck Anderson <chuck.anderson@oracle.com>
sif: pqp: Be less aggressive in invoking cond_resched()
This commit attempts to avoid unnecessary or potentially dangerous
calls to cond_resched() in the privileged QP completion polling code.
Privileged QP requests typically takes a few microseconds to
complete. Since we usually need the result of the operation
to be able to continue, user code usually calls poll_cq_waitfor()
to busy wait for the completion of the request. This commit
adds two measures to make this logic better:
1) Avoid rescheduling while interrupts have been turned off:
The driver was using the in_interrupt() test to avoid calling
cond_resched() from interrupt context, and leaving the rest of
the decision making of whether or not to reschedule to cond_resched().
Testing indicates that this could lead to deadlock prone calls to
schedule(), as cond_resched() would actually allow rescheduling if interrupts
have been disabled. Switch this logic to use irqs_disabled() instead, which will
cover both the interrupt case and the cases where interrupts have been disabled
by the caller.
2) Busywait for the completion for a few cycles before even trying to reschedule
or cpu_relax. Measurements indicate that 10 tries are enough to cover
a large fraction of cases on a lightly loaded system.
sif: xrc: Add handling for xrc_domain_violation & invalid_xrceth events
The XRC spec defines xrc_domain_violation and invalid_xrceth events as
affiliated asynchronous errors on the XRC TGT QP; the QP is marked in
ERROR. Map these to OFED's IB_EVENT_QP_FATAL type and QP event handler.
sif: During driver load, hold back events instead of ignoring them
The current semantics, when a queued event is received before the driver
is done loading, are to ignore it with a log warning. This was not
sufficient to implement the flush_retry_qp setup, which relies on lid
change events.
Unfortunately the solution to make an exception for lid events for the
flush_retry_qp is not valid because it defeats the purpose of the
check in the first place by allowing such an event to be handled before
the data structure needed to handle it is initialized.
This commit introduces a new kernel completion that the driver completes
when the whole driver load is finished. The first EPSC event queued on the
sif work queue will now block on this completion.
This covers all the remaining cases not handled by commit
"eq: Avoid enabling interrupts on TSU EQs until the initialization is complete"
and solves the original problem that introduced the need for a fix.
sif: Let sif_remove implement the shutdown entry point
Due to bugs in the FLR handling on PSIF 2.1, we need to make sure
that the driver gets to do a full unload whenever possible.
Simply let the shutdown entry point be sif_remove similar
to the remove entry point.
sif: pqp: Fix potential null pointer exception under high load
If a high number of invalidate requests are posted
without requesting completions, the PQP may run full
enough that it can no longer accept a posted request.
To handle this scenario, an additional attempt to send a
synchronous invalidate request was added. Unfortunately
that request ended up being posted with synchronous semantics
but without a handle to handle the completion.
This commit fixes this case by dynamically allocating/freeing
a handle in such situations.
sif: fmr: call sif_post_flush_tlb with ptw flush and in SR/IOV cases
Flush the ptw cache as part of the bulk tlb flush.
With the introduction of more dynamic page tables,
the code to distinguish between the two types of cases,
simple page table entries and entries with interior nodes
was simplified because we no longer always know if PTW entries
have been consumed. Unfortunately the code was unified to the
wrong case, which does not flush the ptw cache, possibly causing
cache entries to remain in the cache and theoretically having
sif look up pages that no longer exist or that have been reused
for other purposes.
Also, now that the EPSC handles the tlb flushing, it is just fine
to call that code even in virtualized setups.
Remove tests for whether VFs exist or whether we are running in a VF.
sif: eq: Avoid enabling interrupts on TSU EQs until the initialization is complete
During driver load we might have some rare conditions where external events
occur before the driver is ready to accept them. The hardware workarounds
to handle issues with QP flushing are particularly sensitive to this.
Delay enabling of the IRQs that can generate interrupts for all
event queues except the EPS event queue(s) until everything is set up and ready.
Note that this commit will also implicitly cause interrupts for EQs 1-3 for each EPSA
not to be enabled. This is no big deal as they are currently not used anyway.
Francisco Triviño [Wed, 13 Jul 2016 15:23:40 +0000 (17:23 +0200)]
sif: sif_eq: fix missing qp->refcnt decrement for COMM_EST events
qp->refcnt is increased by 1 when event_status_communication_established
is dispatched but later it is not decremented when handling the event
work for IB_EVENT_COMM_EST for UD & RAW QP types.
This commit decrements qp->refcnt for those cases too and fixes another
potential bug by moving the sif_log line up before the qp->refcnt
decrement.
PSIF supports 48 MSI-X interrupts. We associate one MSI-X interrupt per
event queue (EQ). Further, PSIF needs one EQ for the EPSC and one for
async events from the hardware. That leaves 46 for completion
notification events or completion vectors.
This commit also reduces the number of completion notification event
queues to the lesser of the number of cpus present and the default.
Triviño [Thu, 30 Jun 2016 14:30:07 +0000 (16:30 +0200)]
sif: sif_r3: implemented WA#4074 stats counters
This commit added both wa4074 and wa4059 statistics
to help to identify potential issues when the work-
around are applied.
The wa4074 stats implementation is based on:
a) pre_wa4074_cnt == post_wa4074_cnt. This means the
w/a is triggered from the modify_qp_hw.
b) pre_wa4074_cnt != post_wa4074_cnt. post_wa4074 is
triggered from other scenarios too.
c) post_wa4074_err_cnt != 0. It means that post_wa4074
fails.
d) wrs_csum_corr_wa4074_cnt indicates the number of
WRs that were csum corrupted.
e) rcv_snd_gen_wa4074_cnt shows the number of recv
and send cqe's that were generated.
The wa4059 stats indicate the number of keep-alive
events that have been sent.
This commit also improves the wa3714 stats implementation
by using atomic64 counters and enumeration values,
and makes other minor changes such as cleaning up and
fixing typos in comment messages.
The rq can be invalidated from reset_qp or flush_rq. Nevertheless,
the rq_sw data structure was being reset after the rq is invalidated in
reset_qp regardless of the completion of the flush_rq. Thus, move
the rq synchronization to reset_qp, and place the synchronization
between reset_qp and flush_rq. After invalidating and
resetting the rq, no rq flush is required as both head and tail
have been reset to 0.
This commit creates another atomic_t variable for the synchronization
between the rq reset and flush_rq.
Signed-off-by: Wei Lin Guay <wei.lin.guay@oracle.com>
Reviewed-by: Knut Omang <knut.omang@oracle.com>
(commit 5f009d3f8e6685fe8c6215082c1696a08b411220 of upstream)
The new queue limit is not used by the majority of block drivers, and
should be initialized to 0 for the driver's requested settings to be used.
Signed-off-by: Keith Busch <keith.busch@intel.com>
Acked-by: Martin K. Petersen <martin.petersen@oracle.com>
Reviewed-by: Sagi Grimberg <sagig@mellanox.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Jens Axboe <axboe@fb.com>
Signed-off-by: Joe Jin <joe.jin@oracle.com>
(commit 6b7e9cde49691e04314342b7dce90c67ad567fcc of upstream)
For historic reasons, io_opt is in bytes and max_sectors in block layer
sectors. This interface inconsistency is error prone and should be
fixed. But for 4.4--4.7 let's make the unit difference explicit via a
wrapper function.
Fixes: d0eb20a863ba ("sd: Optimal I/O size is in bytes, not sectors")
Cc: stable@vger.kernel.org # 4.4+
Reported-by: Fam Zheng <famz@redhat.com>
Reviewed-by: Bart Van Assche <bart.vanassche@sandisk.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: Andrew Patterson <andrew.patterson@hpe.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Joe Jin <joe.jin@oracle.com>
(commit f08bb1e0dbdd0297258d0b8cd4dbfcc057e57b2a of upstream)
During revalidate we check whether device capacity has changed before we
decide whether to output disk information or not.
The check for old capacity failed to take into account that we scaled
sdkp->capacity based on the reported logical block size. And therefore
the capacity test would always fail for devices with sectors bigger than
512 bytes and we would print several copies of the same discovery
information.
Avoid scaling sdkp->capacity and instead adjust the value on the fly
when setting the block device capacity and generating fake C/H/S
geometry.
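For reference, the two unit domains involved can both be checked from
userspace, since the block layer always exports capacity in 512-byte
units regardless of the device's logical block size (the device name is
a placeholder):
$ blockdev --getss /dev/sdX       # logical block size, e.g. 4096
$ blockdev --getsize64 /dev/sdX   # capacity in bytes
$ blockdev --getsz /dev/sdX       # capacity in 512-byte sectors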
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Cc: <stable@vger.kernel.org>
Reported-by: Hannes Reinecke <hare@suse.de>
Reviewed-by: Hannes Reinicke <hare@suse.de>
Reviewed-by: Ewan Milne <emilne@redhat.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Joe Jin <joe.jin@oracle.com>
(commit d0eb20a863ba7dc1d3f4b841639671f134560be2 of upstream)
Commit ca369d51b3e1 ("block/sd: Fix device-imposed transfer length
limits") accidentally switched optimal I/O size reporting from bytes to
block layer sectors.
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Reported-by: Christian Borntraeger <borntraeger@de.ibm.com>
Tested-by: Christian Borntraeger <borntraeger@de.ibm.com>
Fixes: ca369d51b3e1649be4a72addd6d6a168cfb3f537
Cc: stable@vger.kernel.org # 4.4+
Reviewed-by: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>
Reviewed-by: Ewan D. Milne <emilne@redhat.com>
Reviewed-by: Matthew R. Ochs <mrochs@linux.vnet.ibm.com>
Signed-off-by: Joe Jin <joe.jin@oracle.com>
(commit 9c1d9c207bb800498347a2716da298043ee280c5 of upstream)
Eryu Guan reported that loading scsi_debug would fail. This turned out
to be caused by scsi_debug reporting an optimal I/O size of 32KB which
is smaller than the 64KB page size on the PowerPC system in question.
Add a check to ensure that we only use the device-reported OPTIMAL
TRANSFER LENGTH if it is bigger than or equal to the page cache size.
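The value being validated is visible in sysfs, so the result of the new
check can be confirmed from userspace (the device name is a placeholder):
$ cat /sys/block/sdX/queue/optimal_io_size   # bytes; 0 if unreported
$ getconf PAGE_SIZE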
Reported-by: Eryu Guan <guaneryu@gmail.com>
Reported-by: Ming Lei <tom.leiming@gmail.com>
Reviewed-by: Douglas Gilbert <dgilbert@interlog.com>
Reviewed-by: Ewan Milne <emilne@redhat.com>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Joe Jin <joe.jin@oracle.com>
(commit ca369d51b3e1649be4a72addd6d6a168cfb3f537 of upstream)
Commit 4f258a46346c ("sd: Fix maximum I/O size for BLOCK_PC requests")
had the unfortunate side-effect of removing an implicit clamp to
BLK_DEF_MAX_SECTORS for REQ_TYPE_FS requests in the block layer
code. This caused problems for some SMR drives.
Debugging this issue revealed a few problems with the existing
infrastructure since the block layer didn't know how to deal with
device-imposed limits, only limits set by the I/O controller.
- Introduce a new queue limit, max_dev_sectors, which is used by the
ULD to signal the maximum sectors for a REQ_TYPE_FS request.
- Ensure that max_dev_sectors is correctly stacked and taken into
account when overriding max_sectors through sysfs.
- Rework sd_read_block_limits() so it saves the max_xfer and opt_xfer
values for later processing.
- In sd_revalidate() set the queue's max_dev_sectors based on the
MAXIMUM TRANSFER LENGTH value in the Block Limits VPD. If this value
is not reported, fall back to a cap based on the CDB TRANSFER LENGTH
field size.
- In sd_revalidate(), use OPTIMAL TRANSFER LENGTH from the Block Limits
VPD--if reported and sane--to signal the preferred device transfer
size for FS requests. Otherwise use BLK_DEF_MAX_SECTORS.
- blk_limits_max_hw_sectors() is no longer used and can be removed.
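The resulting limits can be inspected and overridden through the queue's
sysfs attributes (the device name is a placeholder; the soft limit is
clamped to the hardware limit):
$ cat /sys/block/sdX/queue/max_hw_sectors_kb   # controller/device cap
$ cat /sys/block/sdX/queue/max_sectors_kb      # current limit for FS requests
$ echo 1024 > /sys/block/sdX/queue/max_sectors_kb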
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=93581
Reviewed-by: Christoph Hellwig <hch@lst.de>
Tested-by: sweeneygj@gmx.com
Tested-by: Arzeets <anatol.pomozov@gmail.com>
Tested-by: David Eisner <david.eisner@oriel.oxon.org>
Tested-by: Mario Kicherer <dev@kicherer.org>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: Joe Jin <joe.jin@oracle.com>
Conflicts:
drivers/scsi/sd.h
include/linux/blkdev.h
Adding opt_xfer_blocks and max_dev_sectors breaks kABI.
They will be added using UEK_KABI_USE2() in a fix-up commit.
Signed-off-by: Chuck Anderson <chuck.anderson@oracle.com>
This commit made rolling upgrade fail. When one node is upgraded
to a new version with this commit, the remaining nodes will fail to
establish connections to it, and then VMs on the remaining nodes can't
be live migrated to that node. This will cause an outage.
Orabug: 24292852
Signed-off-by: Junxiao Bi <junxiao.bi@oracle.com>
When doing a direct IO write, __blockdev_direct_IO() can call the
btrfs_get_blocks_direct() callback one or more times before it calls the
btrfs_submit_direct() callback. However it can fail after calling the
first callback and before calling the second callback, which is a problem
because the first one creates ordered extents and the second one is the
one that submits bios that cover the ordered extents created by the first
one. That means the ordered extents will never complete nor have any of
the flags BTRFS_ORDERED_IO_DONE / BTRFS_ORDERED_IOERR set, resulting in
subsequent operations (such as other direct IO writes, buffered writes or
hole punching) that lock the same IO range and lookup for ordered extents
in the range to hang forever waiting for those ordered extents because
they can not complete ever, since no bio was submitted.
Fix this by tracking a range of created ordered extents that don't yet
have corresponding bios submitted and completing the ordered extents in
the range if __blockdev_direct_IO() fails with an error.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
(cherry picked from commit f28a492878170f39002660a26c329201cf678d74)
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
Commit 61de718fceb6 ("Btrfs: fix memory corruption on failure to submit
bio for direct IO") fixed problems with the error handling code after we
fail to submit a bio for direct IO. However there were 2 problems that it
did not address when the failure is due to memory allocation failures for
direct IO writes:
1) We considered that there could be only one ordered extent for the whole
IO range, which is not always true, as we can have multiple;
2) It did not set the bit BTRFS_ORDERED_IO_DONE in the ordered extent,
which can make other tasks running btrfs_wait_logged_extents() hang
forever, since they wait for that bit to be set. The general assumption
is that regardless of an error, the BTRFS_ORDERED_IO_DONE bit is always
set and it precedes setting the bit BTRFS_ORDERED_COMPLETE.
Fix these issues by moving part of the btrfs_endio_direct_write() handler
into a new helper function and having that new helper function called when
we fail to allocate memory to submit the bio (and its private object) for
a direct IO write.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Reviewed-by: Liu Bo <bo.li.liu@oracle.com>
(cherry picked from commit 14543774bd67a64f616431e5c9d1472f58979841)
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
If we fail to submit a bio for a direct IO request, we were grabbing the
corresponding ordered extent and decrementing its reference count twice,
once for our lookup reference and once for the ordered tree reference.
This was a problem because it caused the ordered extent to be freed
without removing it from the ordered tree and any lists it might be
attached to, leaving dangling pointers to the ordered extent around.
Example trace with CONFIG_DEBUG_PAGEALLOC=y:
For read requests we weren't doing any cleanup either (none of the work
done by btrfs_endio_direct_read()), so a failure submitting a bio for a
read request would leave a range in the inode's io_tree locked forever,
blocking any future operations (both reads and writes) against that range.
So fix this by making sure we do the same cleanup that we do for the case
where the bio submission succeeds.
Signed-off-by: Filipe Manana <fdmanana@suse.com>
Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit 61de718fceb6bc028dafe4d06a1f87a9e0998303)
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
When doing a write using direct IO we can end up not doing the whole write
operation using the direct IO path, in which case we fall back to a
buffered write for the remaining IO. This happens for example if the
range we are writing to contains a compressed extent.
When we do a partial write and fall back to buffered IO, due to the
existence of a compressed extent for example, we end up not adjusting the
outstanding extents counter of our inode, which ends up getting decremented
twice, once by the DIO ordered extent for the partial write and once again
by btrfs_direct_IO(), resulting in an arithmetic underflow at
extent-tree.c:drop_outstanding_extent(). For example if we have:
extents       [ prealloc extent ][ compressed extent ]
offsets     A           B        C          D         E
and at the moment our inode's outstanding extents counter is 0, if we do a
direct IO write against the range [B, D[ (which has a length smaller than
128Mb), we end up bumping our inode's outstanding extents counter to 1, we
create a DIO ordered extent for the range [B, C[ and then fallback to a
buffered write for the range [C, D[. The direct IO handler
(inode.c:btrfs_direct_IO()) decrements the outstanding extents counter by
1, leaving it with a value of 0, through a call to
btrfs_delalloc_release_space() and then shortly after the DIO ordered
extent finishes and calls btrfs_delalloc_release_metadata() which ends
up to attempt to decrement the inode's outstanding extents counter by 1,
resulting in an assertion failure at drop_outstanding_extent() because
the operation would result in an arithmetic underflow (0 - 1). This
produces the following trace:
So fix this by ensuring we adjust the outstanding extents counter when we
do the fallback just like we do for the case where the whole write can be
done through the direct IO path.
We were also adjusting the outstanding extents counter by a constant value
of 1, which is incorrect because we were ignoring that we account extents
in BTRFS_MAX_EXTENT_SIZE units, so fix that as well.
The following test case for fstests reproduces this issue:
seq=`basename $0`
seqres=$RESULT_DIR/$seq
echo "QA output created by $seq"
tmp=/tmp/$$
status=1 # failure is the default!
trap "_cleanup; exit \$status" 0 1 2 3 15
_cleanup()
{
rm -f $tmp.*
}
# get standard environment, filters and checks
. ./common/rc
. ./common/filter
# real QA test starts here
_need_to_be_root
_supported_fs btrfs
_supported_os Linux
_require_scratch
_require_xfs_io_command "falloc"
rm -f $seqres.full
_scratch_mkfs >>$seqres.full 2>&1
_scratch_mount "-o compress"
# Create a compressed extent covering the range [700K, 800K[.
$XFS_IO_PROG -f -s -c "pwrite -S 0xaa -b 100K 700K 100K" \
$SCRATCH_MNT/foo | _filter_xfs_io
# Create prealloc extent covering the range [600K, 700K[.
$XFS_IO_PROG -c "falloc 600K 100K" $SCRATCH_MNT/foo
# Write 80K of data to the range [640K, 720K[ using direct IO. This
# range covers both the prealloc extent and the compressed extent.
# Because there's a compressed extent in the range we are writing to,
# the DIO write code path ends up only writing the first 60K of data,
# which goes to the prealloc extent, and then falls back to buffered IO
# for writing the remaining 20K of data - because that remaining data
# maps to a file range containing a compressed extent.
# When falling back to buffered IO, we used to trigger an assertion when
# releasing reserved space due to bad accounting of the inode's
# outstanding extents counter, which was set to 1 but we ended up
# decrementing it by 1 twice, once through the ordered extent for the
# 60K of data we wrote using direct IO, and once through the main direct
# IO handler (inode.c:btrfs_direct_IO()) because the direct IO write
# wrote less than 80K of data (60K).
$XFS_IO_PROG -d -c "pwrite -S 0xbb -b 80K 640K 80K" \
$SCRATCH_MNT/foo | _filter_xfs_io
# Now similar test as above but for very large write operations. This
# triggers special cases for an inode's outstanding extents accounting,
# as internally btrfs logically splits extents into 128MB units.
$XFS_IO_PROG -f -s \
-c "pwrite -S 0xaa -b 128M 258M 128M" \
-c "falloc 0 258M" \
$SCRATCH_MNT/bar | _filter_xfs_io
$XFS_IO_PROG -d -c "pwrite -S 0xbb -b 256M 3M 256M" $SCRATCH_MNT/bar \
| _filter_xfs_io
# Now verify the file contents are correct and that they are the same
# even after unmounting and mounting the fs again (or evicting the page
# cache).
#
# For file foo, all bytes in the range [0, 640K[ must have a value of
# 0x00, all bytes in the range [640K, 720K[ must have a value of 0xbb
# and all bytes in the range [720K, 800K[ must have a value of 0xaa.
#
# For file bar, all bytes in the range [0, 3M[ must have a value of 0x00,
# all bytes in the range [3M, 259M[ must have a value of 0xbb and all
# bytes in the range [259M, 386M[ must have a value of 0xaa.
#
echo "File digests before remounting the file system:"
md5sum $SCRATCH_MNT/foo | _filter_scratch
md5sum $SCRATCH_MNT/bar | _filter_scratch
_scratch_remount
echo "File digests after remounting the file system:"
md5sum $SCRATCH_MNT/foo | _filter_scratch
md5sum $SCRATCH_MNT/bar | _filter_scratch
status=0
exit
Fixes: e1cbbfa5f5aa ("Btrfs: fix outstanding_extents accounting in DIO") Fixes: 3e05bde8c3c2 ("Btrfs: only adjust outstanding_extents when we do a short write") Signed-off-by: Filipe Manana <fdmanana@suse.com>
(cherry picked from commit 9c9464cc92668984ebed79e22b5063877a8d97db) Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
A warning from btrfs_destroy_inode() during eviction means that the inode
still had non-zero "outstanding extents". This occurs because, during
direct I/O, a task which successfully used up its reserved data space
sets the BTRFS_INODE_DIO_READY bit but does not clear it after finishing
the DIO write. A future DIO write could then fail, and the unused reserved
space won't be freed because of the previously set BTRFS_INODE_DIO_READY
bit.
Clearing the BTRFS_INODE_DIO_READY bit in btrfs_direct_IO() caused the
following issue:
|-----------------------------------+-------------------------------------|
| Task A | Task B |
|-----------------------------------+-------------------------------------|
| Start direct i/o write on inode X.| |
| reserve space | |
| Allocate ordered extent | |
| release reserved space | |
| Set BTRFS_INODE_DIO_READY bit. | |
| | splice() |
| | Transfer data from pipe buffer to |
| | destination file. |
| | - kmap(pipe buffer page) |
| | - Start direct i/o write on |
| | inode X. |
| | - reserve space |
| | - dio_refill_pages() |
| | - sdio->blocks_available == 0 |
| | - Since a kernel address is |
| | being passed instead of a |
| | user space address, |
| | iov_iter_get_pages() returns |
| | -EFAULT. |
| | - Since BTRFS_INODE_DIO_READY is |
| | set, we don't release reserved |
| | space. |
| | - Clear BTRFS_INODE_DIO_READY bit.|
| -EIOCBQUEUED is returned. | |
|-----------------------------------+-------------------------------------|
Hence this commit introduces "struct btrfs_dio_data" to track the usage of
reserved data space. The remaining unused "reserve space" can now be freed
reliably.
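In outline the approach is as follows (a sketch based on the commit
description; the exact call sites in btrfs_direct_IO() and the get_block
callback are abbreviated, and the reservation math may differ in detail):

struct btrfs_dio_data {
	u64 outstanding_extents;
	u64 reserve;	/* reserved data space not yet consumed */
};

/* In btrfs_direct_IO(), for a write (abbreviated): */
struct btrfs_dio_data dio_data = { 0 };

dio_data.outstanding_extents = div64_u64(count + BTRFS_MAX_EXTENT_SIZE - 1,
					 BTRFS_MAX_EXTENT_SIZE);
dio_data.reserve = count;		/* rounding to sector size omitted */
current->journal_info = &dio_data;	/* visible to the get_block callback */

/*
 * The get_block callback decrements dio_data.reserve as it maps blocks;
 * after submission, whatever remains in dio_data.reserve is released.
 * This per-task bookkeeping replaces the racy per-inode
 * BTRFS_INODE_DIO_READY bit.
 */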
Signed-off-by: Chandan Rajendra <chandan@linux.vnet.ibm.com> Reviewed-by: Liu Bo <bo.li.liu@oracle.com> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit 50745b0a7f46f68574cd2b9ae24566bf026e7ebd) Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
While running generic/019, dmesg showed several warnings from
btrfs_free_reserved_data_space().
Test generic/019 produces disk failures, so submitted dio bios get
errors; in that case btrfs_direct_IO() goes to its error handling and
frees bytes_may_use, but the problem is that bytes_may_use has already
been freed during get_block().
This adds a runtime flag recording whether we've gone through
get_block(); if we have, the cleanup work is skipped.
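The shape of the fix, as a hedged sketch (the flag name below is
hypothetical, not necessarily the identifier used by the patch):

struct btrfs_dio_data {
	u64 outstanding_extents;
	u64 reserve;
	bool seen_get_block;	/* hypothetical name for the runtime flag */
};

/* The get_block callback takes ownership of the reservation and
 * records that it ran: */
dio_data->seen_get_block = true;

/* btrfs_direct_IO()'s error path only releases the reservation if
 * get_block() never ran, so bytes_may_use is not freed twice
 * (the release helper's signature varies by kernel version): */
if (ret < 0 && ret != -EIOCBQUEUED && !dio_data.seen_get_block)
	btrfs_delalloc_release_space(inode, count);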
Signed-off-by: Liu Bo <bo.li.liu@oracle.com> Reviewed-by: Filipe Manana <fdmanana@suse.com> Tested-by: Filipe Manana <fdmanana@suse.com> Signed-off-by: Chris Mason <clm@fb.com>
(cherry picked from commit ddba1bfc2369cd0566bcfdab47599834a32d1c19) Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
Juergen Gross [Wed, 18 May 2016 14:44:54 +0000 (16:44 +0200)]
xen: use same main loop for counting and remapping pages
Instead of having two functions for cycling through the E820 map in
order to count the pages to be remapped and to remap them later, just
use one function with a caller-supplied sub-function called for each
region to be processed. This eliminates the possibility of a mismatch
between the two loops which showed up in certain configurations.
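Schematically the refactoring looks like this (a sketch: the walker and
per-region function names follow the upstream commit, but the E820
iteration body is omitted):

/*
 * Walk the E820 map once, invoking 'func' for every region that needs
 * processing; the counting pass and the remapping pass use the same
 * walker and therefore can never cover different sets of regions.
 */
static unsigned long xen_foreach_remap_area(unsigned long nr_pages,
	unsigned long (*func)(unsigned long start_pfn, unsigned long end_pfn,
			      unsigned long nr_pages, unsigned long last_val))
{
	unsigned long ret_val = 0;

	/* ... for each E820 region to process:
	 *	ret_val = func(start_pfn, end_pfn, nr_pages, ret_val);
	 */
	return ret_val;
}

/* Counting pass: */
max_pages += xen_foreach_remap_area(max_pfn, xen_count_remap_pages);
/* Remapping pass, same walker with a different per-region function: */
xen_foreach_remap_area(max_pfn, xen_set_identity_and_remap_chunk);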
Suggested-by: Ed Swierk <eswierk@skyportsystems.com> Signed-off-by: Juergen Gross <jgross@suse.com> Cc: <stable@vger.kernel.org> Signed-off-by: David Vrabel <david.vrabel@citrix.com>
(cherry picked from commit dd14be92fbf5bc1ef7343f34968440e44e21b46a) Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
OraBug: 24012238
Bob Liu [Fri, 1 Jul 2016 01:11:15 +0000 (21:11 -0400)]
xen-blkfront: dynamic configuration of per-vbd resources
The current VBD layer reserves buffer space for each attached device based on
three statically configured settings which are read at boot time.
* max_indirect_segs: Maximum amount of segments.
* max_ring_page_order: Maximum order of pages to be used for the shared ring.
* max_queues: Maximum of queues(rings) to be used.
But the storage backend, workload, and guest memory result in very
different tuning requirements. It's impossible to centrally predict
application characteristics, so it's best to allow these settings to be
dynamically adjusted based on the workload inside the guest.
Usage:
Show current values:
cat /sys/devices/vbd-xxx/max_indirect_segs
cat /sys/devices/vbd-xxx/max_ring_page_order
cat /sys/devices/vbd-xxx/max_queues
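Presumably the matching writes follow the usual sysfs convention (the
write side is an assumption here, inferred from the attributes being made
dynamic):
Write new values:
echo <new value> > /sys/devices/vbd-xxx/max_indirect_segs
echo <new value> > /sys/devices/vbd-xxx/max_ring_page_order
echo <new value> > /sys/devices/vbd-xxx/max_queues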
Bob Liu [Fri, 1 Jul 2016 21:43:39 +0000 (17:43 -0400)]
xen-blkfront: introduce blkif_set_queue_limits()
blk_mq_update_nr_hw_queues() resets all queue limits to their defaults,
which is not what xen-blkfront expects. Introduce blkif_set_queue_limits()
to restore the limits to their correct initial values.
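A sketch of what such a helper re-applies (abbreviated; the concrete
limits and values come from blkfront's negotiated parameters, so treat
the specifics as illustrative):

static void blkif_set_queue_limits(struct blkfront_info *info)
{
	struct request_queue *rq = info->rq;
	unsigned int segments = info->max_indirect_segments ?:
				BLKIF_MAX_SEGMENTS_PER_REQUEST;

	blk_queue_logical_block_size(rq, info->sector_size);
	blk_queue_physical_block_size(rq, info->physical_sector_size);
	blk_queue_max_hw_sectors(rq, (segments * PAGE_SIZE) / 512);

	/* Each segment is at most a page in size. */
	blk_queue_max_segment_size(rq, PAGE_SIZE);
	blk_queue_max_segments(rq, segments);

	/* Buffer addresses must be sector-aligned. */
	blk_queue_dma_alignment(rq, 511);
}

Calling this right after blk_mq_update_nr_hw_queues() restores the limits
that the block core reset.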
Orabug: 23720696 Signed-off-by: Bob Liu <bob.liu@oracle.com>