process_dinode_int() reports bad flags if dino->di_flags & ~XFS_DIFLAG_ANY
is nonzero, i.e. if any flags are set outside the known set.
But then, instead of clearing those bad flags, it applies that same
mask, flags &= ~XFS_DIFLAG_ANY, which keeps *only* the bad flags.
This leads to persistent, unrepairable errors of the form:
"Bad flags set in inode XXX"
Fix this.
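
To illustrate the inverted mask, here is a minimal standalone sketch
(not xfsprogs code; KNOWN_MASK and the flag values below are made up
for illustration, only the mask direction mirrors process_dinode_int()):

  #include <stdio.h>
  #include <stdint.h>

  #define KNOWN_MASK 0x00ffU  /* stands in for XFS_DIFLAG_ANY; value is made up */

  int main(void)
  {
          uint16_t flags  = 0x4b01;               /* one known flag plus bogus high bits */
          uint16_t broken = flags & ~KNOWN_MASK;  /* old code: keeps only the unknown bits */
          uint16_t fixed  = flags & KNOWN_MASK;   /* new code: keeps only the known bits */

          printf("old mask leaves 0x%04x - bad flags survive and get written back\n",
                 (unsigned)broken);
          printf("new mask leaves 0x%04x - bad flags are cleared\n",
                 (unsigned)fixed);
          return 0;
  }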
While we are at it, fix a couple of warning messages which look like
they used to be continuation lines, but no longer are.
Signed-off-by: Eric Sandeen <sandeen@redhat.com>
Reviewed-by: Brian Foster <bfoster@redhat.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Dave Chinner <david@fromorbit.com>
_("Bad flags set in inode %" PRIu64 "\n"),
lino);
}
- flags &= ~XFS_DIFLAG_ANY;
+ flags &= XFS_DIFLAG_ANY;
}
if (flags & (XFS_DIFLAG_REALTIME | XFS_DIFLAG_RTINHERIT)) {
}
if (!verify_mode && flags != be16_to_cpu(dino->di_flags)) {
if (!no_modify) {
- do_warn(_(", fixing bad flags.\n"));
+ do_warn(_("fixing bad flags.\n"));
dino->di_flags = cpu_to_be16(flags);
*dirty = 1;
} else
- do_warn(_(", would fix bad flags.\n"));
+ do_warn(_("would fix bad flags.\n"));
}
}