Kent Overstreet [Tue, 5 Dec 2023 20:22:25 +0000 (15:22 -0500)]
bcachefs: Improve error message when finding wrong btree node
single_device.merge_torture_flakey is, very rarely, finding a btree node
that doesn't match the key that points to it: this patch improves the
error message to print out more fields from the btree node header, so
that we can see what else does or does not match the key.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Brian Foster [Tue, 5 Dec 2023 13:24:39 +0000 (08:24 -0500)]
bcachefs: return from fsync on writeback error to avoid early shutdown
When investigating transient failures of generic/441 on bcachefs, it
was determined that the cause of the failure was a combination of
unconditional emergency shutdown and racing between background
journal activity and the test switchover from a working device
mapper table to an error injecting table.
Part of the reason for this sequence of events is that bcachefs
aggressively flushes as much as possible during fsync(), regardless
of errors. While this is reasonable behavior, it is technically
unnecessary because once an error is returned from fsync(), the
caller cannot make any assumptions about the resilience of data.
Tweak the bch2_fsync() logic to return an error on failure of any of
the steps involved in the flush. Note that this change alone does
not prevent generic/441 failure, but in combination with a test
tweak to avoid racing during the dm-error table switchover it avoids
the unnecessary shutdowns and allows the test to pass reliably on
bcachefs.
Signed-off-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
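Roughly, the reworked flow can be sketched like this (the helper names below are stand-ins, not the actual bcachefs functions): each step is checked and the first failure is returned, rather than pressing on with further flushing that could trigger an emergency shutdown.

    /* Sketch only: illustrative fsync control flow with hypothetical helpers. */
    static int example_fsync(struct file *file, loff_t start, loff_t end, int datasync)
    {
            int ret;

            ret = writeback_range_stub(file, start, end);   /* flush dirty pages */
            if (ret)
                    return ret;                             /* stop on writeback error */

            ret = sync_inode_metadata_stub(file);           /* flush inode metadata */
            if (ret)
                    return ret;

            return flush_journal_for_inode_stub(file);      /* make it durable */
    }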
Brian Foster [Thu, 30 Nov 2023 19:17:11 +0000 (14:17 -0500)]
bcachefs: remove sb lock and flags update on explicit shutdown
bcachefs grabs s_umount and sets SB_RDONLY when the fs is shut down
via the ioctl() interface. This has a couple of issues related to
interactions between shutdown and freeze:
1. The flags == FSOP_GOING_FLAGS_DEFAULT case is a deadlock vector
because freeze_bdev() calls into freeze_super(), which also
acquires s_umount.
2. If an explicit shutdown occurs while the sb is frozen, SB_RDONLY
alters the thaw path as if the sb was read-only at freeze time.
This effectively leaks the frozen state and leaves the sb frozen
indefinitely.
The usage of SB_RDONLY here goes back to the initial bcachefs commit
and AFAICT is simply historical behavior. This behavior is unique to
bcachefs relative to the handful of other filesystems that support
the shutdown ioctl(). Typically, SB_RDONLY is reserved for the
proper remount path, which itself is restricted from modifying
frozen superblocks in reconfigure_super(). Drop the unnecessary sb
lock and flags update in bch2_ioc_goingdown() to address both of these
issues.
Signed-off-by: Brian Foster <bfoster@redhat.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Thu, 30 Nov 2023 07:16:19 +0000 (02:16 -0500)]
bcachefs: Make backpointer fsck wb flush check more rigorous
backpointers fsck now always runs in rw mode - the btree is being
modified while it runs, by e.g. copygc, rebalance, the discard worker,
the invalidate worker.
We could find a missing backpointer, flush the btree write buffer, and
then on the next iteration find a new key at the exact same position -
which will most likely need another write buffer flush.
Hence, we have to check for an exact match on last_flushed, not just the
pos.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
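In sketch form (names are illustrative), the check becomes: remember the whole key we last flushed for, and only treat a missing backpointer as a real error once the exact same key shows up again after a flush.

    /* Sketch: k is the key whose backpointer appears to be missing. */
    if (!keys_identical_stub(k, *last_flushed)) {   /* exact match, not just same pos */
            *last_flushed = k;                      /* remember the full key */
            ret = flush_btree_write_buffer_stub();  /* flush, then re-check this key */
            if (!ret)
                    ret = RETRY_STUB;
    } else {
            /* same key seen again post-flush: the backpointer really is missing */
            ret = report_missing_backpointer_stub(k);
    }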
Daniel Hill [Mon, 27 Nov 2023 10:37:44 +0000 (23:37 +1300)]
bcachefs: copygc shouldn't try moving buckets on error
Co-developed-by: Kent Overstreet <kent.overstreet@linux.dev> Signed-off-by: Daniel Hill <daniel@gluo.nz> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Thu, 2 Nov 2023 23:37:15 +0000 (19:37 -0400)]
bcachefs: Clean up btree write buffer write ref handling
__bch2_btree_write_buffer_flush() now assumes a write ref is already
held (as called by the transaction commit path); and the wrappers
bch2_write_buffer_flush() and flush_sync() take an explicit write ref.
This means internally the write buffer code can always use
BTREE_INSERT_NOCHECK_RW, instead of in the previous code passing flags
around and hoping the NOCHECK_RW flag was always carried around
correctly.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
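The resulting layering, roughly (simplified names): the wrappers take the write ref, the inner flush assumes it is held and always commits with BTREE_INSERT_NOCHECK_RW.

    /* Sketch of the layering; names are simplified stand-ins. */
    static int __wb_flush(struct btree_trans *trans)
    {
            /* caller guarantees a write ref, so skip the rw check entirely */
            return wb_flush_keys_stub(trans, BTREE_INSERT_NOCHECK_RW);
    }

    static int wb_flush(struct btree_trans *trans)
    {
            int ret;

            if (!try_get_write_ref_stub(trans))
                    return -EROFS;          /* fs is (going) read-only */
            ret = __wb_flush(trans);
            put_write_ref_stub(trans);
            return ret;
    }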
Kent Overstreet [Sat, 25 Nov 2023 20:46:02 +0000 (15:46 -0500)]
bcachefs: Refactor bch2_check_alloc_to_lru_ref()
This code was somewhat convoluted - because originally bch2_lru_set()
could modify the LRU index if there was a collision.
That's no longer the case, so the "create LRU entry" path has no reason
to update the alloc key, so we can separate the handling of the two fsck
errors.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Fri, 24 Nov 2023 00:26:27 +0000 (19:26 -0500)]
bcachefs: BCH_IOCTL_DEV_USAGE_V2
BCH_IOCTL_DEV_USAGE mistakenly put the per-data-type array in struct
bch_ioctl_dev_usage; since ioctl numbers encode the size of the arg,
that means adding new data types breaks the ioctl.
This adds a new version that includes the number of data types as a
parameter: the old version is fixed at 10 so as to not break when adding
new types.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
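The shape of the new argument, roughly (illustrative field names only; the authoritative layout is the uapi struct): the caller says how many per-data-type slots it allocated, so new data types can be appended without changing the ioctl's argument size.

    /* Illustrative layout only, not the exact uapi definition. */
    struct dev_usage_v2_example {
            __u64   dev;
            __u32   flags;
            __u8    state;
            __u8    nr_data_types;          /* number of d[] entries the caller allocated */
            __u64   nr_buckets;
            struct {
                    __u64   buckets;
                    __u64   sectors;
                    __u64   fragmented;
            }       d[];                    /* flexible array, sized by nr_data_types */
    };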
Kent Overstreet [Tue, 31 Oct 2023 01:03:32 +0000 (21:03 -0400)]
bcachefs: six locks: Simplify optimistic spinning
osq lock maintainers don't want it to be used outside of kernel/locking/
- but, we can do better.
Since we have lock handoff signalled via waitlist entries, there's no
reason for optimistic spinning to have to look at the lock at all -
aside from checking lock-owner; we can just spin looking at our waitlist
entry.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Reported-by: Abaci Robot <abaci@linux.alibaba.com> Closes: https://bugzilla.openanolis.cn/show_bug.cgi?id=7585 Signed-off-by: Yang Li <yang.lee@linux.alibaba.com> Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
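The spin loop for the change above then looks roughly like this (simplified; field and helper names are illustrative): each waiter spins on its own waitlist entry and only peeks at the owner to decide whether spinning is still worthwhile.

    /* Simplified sketch of spinning on our own waitlist entry. */
    while (!READ_ONCE(wait->lock_acquired)) {       /* handoff flips this for us */
            struct task_struct *owner = READ_ONCE(lock->owner);

            if (owner && !task_is_running_stub(owner))
                    break;                  /* owner went to sleep: stop spinning */
            cpu_relax();                    /* spin on our own cacheline, not the lock */
    }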
Kent Overstreet [Fri, 10 Nov 2023 04:43:35 +0000 (23:43 -0500)]
bcachefs: Don't flush journal after replay
The flush_all_pins() after journal replay was unnecessary, and trying to
completely flush the journal while RW is not a great idea - it's not
guaranteed to terminate if other threads keep adding things to the
journal.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Mon, 13 Nov 2023 01:35:51 +0000 (20:35 -0500)]
bcachefs: Kill BTREE_ITER_ALL_LEVELS
As discussed in the previous patch, BTREE_ITER_ALL_LEVELS appears to be
racy with concurrent interior node updates - and perhaps it is fixable,
but it's tricky and unnecessary.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Mon, 13 Nov 2023 01:20:35 +0000 (20:20 -0500)]
bcachefs: backpointers fsck no longer uses BTREE_ITER_ALL_LEVELS
It appears that BTREE_ITER_ALL_LEVELS is racy with concurrent interior
node btree updates; unfortunate, but not terribly surprising - it's a
difficult problem, and that was the original reason for gc_lock.
BTREE_ITER_ALL_LEVELS will probably be deleted in a subsequent patch;
this changes backpointers fsck to instead walk keys at one level of the
btree at a time.
This fixes the tiering_drop_alloc test, which stopped working with the
patch to not flush the journal after journal replay.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
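Conceptually the new walk is an outer loop over btree levels with an ordinary single-level iterator inside; a rough sketch (iterator and helper names are illustrative):

    /* Sketch: check keys level by level instead of one mixed-level walk. */
    int ret = 0;

    for (unsigned level = 0; level < BTREE_MAX_DEPTH && !ret; level++) {
            /* an ordinary iterator pinned to this one level of the btree */
            for_each_key_at_level_stub(iter, btree_id, level, k) {
                    ret = check_key_to_backpointers_stub(iter, k);
                    if (ret)
                            break;
            }
    }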
Kent Overstreet [Fri, 10 Nov 2023 01:41:58 +0000 (20:41 -0500)]
bcachefs: Go rw before journal replay
This gets us slightly nicer log messages.
Also, this slightly clarifies synchronization of c->journal_keys; after
we go RW it's in use by multiple threads (so that the btree iterator
code can overlay keys from the journal), so it has to be prepped before
that point.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Thu, 9 Nov 2023 03:00:00 +0000 (22:00 -0500)]
bcachefs: BTREE_INSERT_JOURNAL_REPLAY now "don't init trans->journal_res"
This slightly changes how trans->journal_res works, in preparation for
changing the btree write buffer flush path to use it.
Now, BTREE_INSERT_JOURNAL_REPLAY means "don't take a journal
reservation; trans->journal_res.seq already refers to the journal
sequence number to pin".
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
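In sketch form the commit path now does (simplified; helper names are illustrative):

    /* Sketch of the new meaning of BTREE_INSERT_JOURNAL_REPLAY. */
    if (!(flags & BTREE_INSERT_JOURNAL_REPLAY)) {
            /* normal path: take a journal reservation for this commit */
            ret = journal_res_get_stub(trans);
    } else {
            /* replay path: the caller already filled in trans->journal_res.seq,
             * so don't reserve anything - just pin that sequence number */
            journal_pin_seq_stub(trans, trans->journal_res.seq);
    }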
Kent Overstreet [Tue, 7 Nov 2023 16:16:14 +0000 (11:16 -0500)]
bcachefs: Clear k->needs_whiteout earlier in commit path
The upcoming btree write buffer rework is going to use the journal
itself as the first stage of the write buffer; this is a cleanup to make
sure k->needs_whiteout is initialized before keys hit the journal.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Tue, 7 Nov 2023 17:32:50 +0000 (12:32 -0500)]
bcachefs: Add an assertion in bch2_journal_pin_set()
Previously, bch2_journal_pin_set() would silently ignore a request to
pin a journal sequence number that was no longer dirty, because it was
used internally by bch2_journal_pin_copy() which could race with the src
pin being flushed.
Split these apart so that we can properly assert that @seq is a
currently dirty journal sequence number - this is almost always a bug.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
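After the split, the two entry points can be sketched as follows (simplified, hypothetical names): the copy path tolerates losing the race with a flush, the set path asserts.

    /* Sketch: pin_set() insists @seq is still dirty; pin_copy() may race. */
    void journal_pin_set_sketch(struct journal *j, u64 seq, struct journal_entry_pin *pin)
    {
            BUG_ON(seq < journal_last_seq_stub(j)); /* pinning a flushed seq is a bug */
            __journal_pin_add_sketch(j, seq, pin);
    }

    void journal_pin_copy_sketch(struct journal *j, struct journal_entry_pin *dst,
                                 struct journal_entry_pin *src)
    {
            u64 seq = READ_ONCE(src->seq);

            if (seq < journal_last_seq_stub(j))
                    return;                         /* src already flushed: nothing to pin */
            __journal_pin_add_sketch(j, seq, dst);
    }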
Kent Overstreet [Thu, 16 Nov 2023 20:46:50 +0000 (15:46 -0500)]
bcachefs: kill INODE_LOCK, use lock_two_nondirectories()
In an ideal world, we'd have a common helper that could be used for
sorting a list of inodes into the correct lock order, and then the same
lock ordering could be used for any type of inode lock, not just
i_rwsem.
But the lock ordering rules for i_rwsem are a bit complicated, so -
abandon that dream for now and do it the more standard way.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
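Usage follows the standard VFS pattern; roughly (bcachefs-specific details omitted):

    /* lock_two_nondirectories() sorts the two inodes into the canonical
     * i_rwsem lock order internally, so callers don't need their own helper. */
    lock_two_nondirectories(src_vinode, dst_vinode);
    /* ... operate on both inodes ... */
    unlock_two_nondirectories(src_vinode, dst_vinode);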
Kent Overstreet [Fri, 29 Dec 2023 20:25:07 +0000 (15:25 -0500)]
bcachefs: bch_sb_field_downgrade
Add a new superblock section that contains a list of
{ minor version, recovery passes, errors_to_fix }
that is, a list of recovery passes that must be run when downgrading
past a given version, and a list of errors to silently fix.
The upcoming disk accounting rewrite is not going to be fully
compatible: we're going to have to regenerate accounting both when
upgrading to the new version, and also when downgrading from the new
version, since the new method of doing disk space accounting is a
completely different architecture based on deltas, and synchronizing
them for every journal entry write to maintain compatibility is going to
be too expensive and impractical.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
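One plausible shape for such an entry, purely as an illustration (the real on-disk layout lives in the superblock format headers):

    /* Illustrative sketch of a per-version downgrade table entry. */
    struct downgrade_entry_example {
            __le16  version;                /* downgrading past this version... */
            __le64  recovery_passes;        /* ...requires these recovery passes */
            __le16  nr_errors;
            __le16  errors[];               /* ...and silently fixes these errors */
    };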
Kent Overstreet [Fri, 29 Dec 2023 20:15:14 +0000 (15:15 -0500)]
bcachefs: bch_sb.recovery_passes_required
Add two new superblock fields. Since the main section of the superblock
is now full, we have to add a new variable length section for them -
bch_sb_field_ext.
- recovery_passes_required: recovery passes that must be run on the
next mount
- errors_silent: errors that will be silently fixed
These are to improve upgrading and downgrading: these fields won't be
cleared until after recovery successfully completes, so there won't be
any issues with crashing partway through an upgrade or a downgrade.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Fri, 29 Dec 2023 19:40:03 +0000 (14:40 -0500)]
bcachefs: Add persistent identifiers for recovery passes
The next patch will start to refer to recovery passes from the
superblock; naturally, we now need identifiers that don't change, since
the existing enum is in the order in which they are run and is not
fixed.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
Kent Overstreet [Sat, 30 Dec 2023 20:32:05 +0000 (15:32 -0500)]
bcachefs: fix buffer overflow in nocow write path
BCH_REPLICAS_MAX isn't the actual maximum number of pointers in an
extent, it's the maximum number of dirty pointers.
We don't have a real restriction on the number of cached pointers, and
we don't want a fixed size array here anyways - so switch to
DARRAY_PREALLOCATED().
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev> Reported-and-tested-by: Daniel J Blueman <daniel@quora.org>
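The idea behind a preallocated darray, in sketch form (simplified from the real macro): a small inline buffer covers the common case, and the array only falls back to a heap allocation when it grows past that.

    /* Simplified sketch of the DARRAY_PREALLOCATED() idea. */
    #define DARRAY_PREALLOCATED_EXAMPLE(_type, _nr)                           \
    struct {                                                                  \
            size_t  nr, size;                                                 \
            _type   *data;          /* points at preallocated[] until full */ \
            _type   preallocated[_nr];                                        \
    }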
Kent Overstreet [Fri, 17 Nov 2023 00:44:48 +0000 (19:44 -0500)]
bcachefs: Switch darray to kvmalloc()
We sometimes use darrays for quite large buffers - the btree write
buffer in particular needs large buffers, since it must be sized to hold
all the write buffer keys outstanding in the journal.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev>
peek_upto() checks against the end position and bails out before
FILTER_SNAPSHOTS checks; this is because if we end up at a different
inode number than the original search key, none of the keys we see might
be visible in the current snapshot - we might be looking at an inode in a
completely different subvolume.
But this is broken, because when we're iterating over extents we're
checking against the extent start position to decide when to bail out,
and the extent start position isn't monotonically increasing until after
we've run FILTER_SNAPSHOTS.
Fix this by adding a simple inode number check where the old bailout
check was, and moving the main check to the correct position.
Signed-off-by: Kent Overstreet <kent.overstreet@linux.dev> Reported-by: "Carl E. Thompson" <list-bcachefs@carlthompson.net>
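Schematically, the fixed ordering of checks looks like this (simplified; helper names are illustrative):

    /* Sketch: cheap inode-number bailout before snapshot filtering,
     * full end-position bailout only after filtering. */
    if (k.p.inode > end.inode)
            goto end_reached;               /* definitely past the search range */

    k = filter_snapshots_stub(iter, k);     /* may move within the same inode */
    if (!have_key_stub(k))
            goto next_key;

    if (key_past_end_stub(k, end))          /* positions are monotonic again here */
            goto end_reached;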
Linus Torvalds [Sat, 30 Dec 2023 19:37:35 +0000 (11:37 -0800)]
Merge tag 'trace-v6.7-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace
Pull tracing fixes from Steven Rostedt:
- Fix readers that are blocked on the ring buffer when buffer_percent
is 100%. They are supposed to wake up when the buffer is full, but
because the sub-buffer that the writer is on is never considered
"dirty" in the calculation, dirty pages will never equal nr_pages.
Add +1 to the dirty count in order to account for the sub-buffer that
the writer is on.
- When a reader is blocked on the "snapshot_raw" file, it is to be
woken up when a snapshot is done and be able to read the snapshot
buffer. But because the snapshot swaps the buffers (the main one with
the snapshot one), and the snapshot reader is waiting on the old
snapshot buffer, it was not woken up (because it is now on the main
buffer after the swap). Worse yet, when it reads the buffer after a
snapshot, it's not reading the snapshot buffer, it's reading the live
active main buffer.
Fix this by forcing a wakeup of all readers on the snapshot buffer
when a new snapshot happens, and then update the buffer that the
reader is reading to be back on the snapshot buffer.
- Fix the modification of the direct_function hash. There was a race
when new functions were added to the direct_function hash: as it
moved function entries from the old hash to the new one, a direct
function trace could be hit and not see its entry.
This is fixed by allocating the new hash, copying all the old entries
onto it as well as the new entries, and then using rcu_assign_pointer()
to update the direct_function hash with it.
This also fixes a memory leak in that code.
- Fix eventfs ownership
* tag 'trace-v6.7-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
ftrace: Fix modification of direct_function hash while in use
tracing: Fix blocked reader of snapshot buffer
ring-buffer: Fix wake ups when buffer_percent is set to 100
eventfs: Fix file and directory uid and gid ownership
osq_wait_next() is passed 'prev' from osq_lock() and NULL from
osq_unlock() but only needs the 'cpu' value to write to lock->tail.
Just pass prev->cpu or OSQ_UNLOCKED_VAL instead.
Should have no effect on the generated code since gcc manages to assume
that 'prev != NULL' due to an earlier dereference.
Signed-off-by: David Laight <david.laight@aculab.com>
[ Changed 'old' to 'old_cpu' by request from Waiman Long - Linus ] Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
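In sketch form, the two call sites become (simplified):

    /* Sketch: callers pass the cpu value directly instead of 'prev'. */
    /* in osq_lock()'s unqueue path: */
    next = osq_wait_next(lock, node, prev->cpu);
    /* in osq_unlock(): no predecessor, so pass the unlocked value */
    next = osq_wait_next(lock, node, OSQ_UNLOCKED_VAL);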