Linus Torvalds [Mon, 27 Jan 2025 02:36:23 +0000 (18:36 -0800)]
Merge tag 'mm-stable-2025-01-26-14-59' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Pull MM updates from Andrew Morton:
"The various patchsets are summarized below. Plus of course many
individual patches which are described in their changelogs.
- "Allocate and free frozen pages" from Matthew Wilcox reorganizes
the page allocator so we end up with the ability to allocate and
free zero-refcount pages, so that callers (i.e., slab) can avoid a
refcount increment and decrement
- "Support large folios for tmpfs" from Baolin Wang teaches tmpfs to
use large folios other than PMD-sized ones
- "Fix mm/rodata_test" from Petr Tesarik performs some maintenance
and fixes for this small built-in kernel selftest
- "mas_anode_descend() related cleanup" from Wei Yang tidies up part
of the mapletree code
- "mm: fix format issues and param types" from Keren Sun implements a
few minor code cleanups
- "simplify split calculation" from Wei Yang provides a few fixes and
a test for the mapletree code
- "mm/vma: make more mmap logic userland testable" from Lorenzo
Stoakes continues the work of moving vma-related code into the
(relatively) new mm/vma.c
- "mm/page_alloc: gfp flags cleanups for alloc_contig_*()" from David
Hildenbrand cleans up and rationalizes handling of gfp flags in the
page allocator
- "readahead: Reintroduce fix for improper RA window sizing" from Jan
Kara is a second attempt at fixing a readahead window sizing issue.
It should reduce the amount of unnecessary reading
- "synchronously scan and reclaim empty user PTE pages" from Qi Zheng
addresses an issue where "huge" amounts of pte pagetables are
accumulated:
Qi's series addresses this windup by synchronously freeing PTE
memory within the context of madvise(MADV_DONTNEED)
- "selftest/mm: Remove warnings found by adding compiler flags" from
Muhammad Usama Anjum fixes some build warnings in the selftests
code when optional compiler warnings are enabled
- "mm: don't use __GFP_HARDWALL when migrating remote pages" from
David Hildenbrand tightens the allocator's observance of
__GFP_HARDWALL
- "pkeys kselftests improvements" from Kevin Brodsky implements
various fixes and cleanups in the MM selftests code, mainly
pertaining to the pkeys tests
- "mm/damon: add sample modules" from SeongJae Park enhances DAMON to
estimate application working set size
- "memcg/hugetlb: Rework memcg hugetlb charging" from Joshua Hahn
provides some cleanups to memcg's hugetlb charging logic
- "mm/swap_cgroup: remove global swap cgroup lock" from Kairui Song
removes the global swap cgroup lock. A speedup of 10% for a
tmpfs-based kernel build was demonstrated
- "zram: split page type read/write handling" from Sergey Senozhatsky
has several fixes and cleanups for zram in the area of
zram_write_page(). A watchdog softlockup warning was eliminated
- "move pagetable_*_dtor() to __tlb_remove_table()" from Kevin
Brodsky cleans up the pagetable destructor implementations. A rare
use-after-free race is fixed
- "mm/debug: introduce and use VM_WARN_ON_VMG()" from Lorenzo Stoakes
simplifies and cleans up the debugging code in the VMA merging
logic
- "Account page tables at all levels" from Kevin Brodsky cleans up
and regularizes the pagetable ctor/dtor handling. This results in
improvements in accounting accuracy
- "mm/damon: replace most damon_callback usages in sysfs with new
core functions" from SeongJae Park cleans up and generalizes
DAMON's sysfs file interface logic
- "mm/damon: enable page level properties based monitoring" from
SeongJae Park increases the amount of information which is
presented in response to DAMOS actions
- "mm/damon: remove DAMON debugfs interface" from SeongJae Park
removes DAMON's long-deprecated debugfs interfaces. Thus the
migration to sysfs is completed
- "mm/hugetlb: Refactor hugetlb allocation resv accounting" from
Peter Xu cleans up and generalizes the hugetlb reservation
accounting
- "mm: alloc_pages_bulk: small API refactor" from Luiz Capitulino
removes a never-used feature of the alloc_pages_bulk() interface
- "mm/damon: extend DAMOS filters for inclusion" from SeongJae Park
extends DAMOS filters to support not only exclusion (rejecting),
but also inclusion (allowing) behavior
- "Add zpdesc memory descriptor for zswap.zpool" from Alex Shi
introduces a new memory descriptor for zswap.zpool that currently
overlaps with struct page. This is part of the effort to
reduce the size of struct page and to enable dynamic allocation of
memory descriptors
- "mm, swap: rework of swap allocator locks" from Kairui Song redoes
and simplifies the swap allocator locking. A speedup of 400% was
demonstrated for one workload. As was a 35% reduction for kernel
build time with swap-on-zram
- "mm: update mips to use do_mmap(), make mmap_region() internal"
from Lorenzo Stoakes reworks MIPS's use of mmap_region() so that
mmap_region() can be made MM-internal
- "mm/mglru: performance optimizations" from Yu Zhao fixes a few
MGLRU regressions and otherwise improves MGLRU performance
- "Docs/mm/damon: add tuning guide and misc updates" from SeongJae
Park updates DAMON documentation
- "Cleanup for memfd_create()" from Isaac Manjarres does that thing
- "mm: hugetlb+THP folio and migration cleanups" from David
Hildenbrand provides various cleanups in the areas of hugetlb
folios, THP folios and migration
- "Uncached buffered IO" from Jens Axboe implements the new
RWF_DONTCACHE flag which provides synchronous dropbehind for
pagecache reading and writing. This permits userspace to address
issues with massive buildup of useless pagecache when
reading/writing fast devices
- "selftests/mm: virtual_address_range: Reduce memory" from Thomas
Weißschuh fixes and optimizes some of the MM selftests"
* tag 'mm-stable-2025-01-26-14-59' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (321 commits)
mm/compaction: fix UBSAN shift-out-of-bounds warning
s390/mm: add missing ctor/dtor on page table upgrade
kasan: sw_tags: use str_on_off() helper in kasan_init_sw_tags()
tools: add VM_WARN_ON_VMG definition
mm/damon/core: use str_high_low() helper in damos_wmark_wait_us()
seqlock: add missing parameter documentation for raw_seqcount_try_begin()
mm/page-writeback: consolidate wb_thresh bumping logic into __wb_calc_thresh
mm/page_alloc: remove the incorrect and misleading comment
zram: remove zcomp_stream_put() from write_incompressible_page()
mm: separate move/undo parts from migrate_pages_batch()
mm/kfence: use str_write_read() helper in get_access_type()
selftests/mm/mkdirty: fix memory leak in test_uffdio_copy()
kasan: hw_tags: Use str_on_off() helper in kasan_init_hw_tags()
selftests/mm: virtual_address_range: avoid reading from VM_IO mappings
selftests/mm: vm_util: split up /proc/self/smaps parsing
selftests/mm: virtual_address_range: unmap chunks after validation
selftests/mm: virtual_address_range: mmap() without PROT_WRITE
selftests/memfd/memfd_test: fix possible NULL pointer dereference
mm: add FGP_DONTCACHE folio creation flag
mm: call filemap_fdatawrite_range_kick() after IOCB_DONTCACHE issue
...
Linus Torvalds [Mon, 27 Jan 2025 01:50:53 +0000 (17:50 -0800)]
Merge tag 'mm-nonmm-stable-2025-01-24-23-16' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
Pull non-MM updates from Andrew Morton:
"Mainly individually changelogged singleton patches. The patch series
in this pull are:
- "lib min_heap: Improve min_heap safety, testing, and documentation"
from Kuan-Wei Chiu provides various tightenings to the min_heap
library code
- "xarray: extract __xa_cmpxchg_raw" from Tamir Duberstein preforms
some cleanup and Rust preparation in the xarray library code
- "Update reference to include/asm-<arch>" from Geert Uytterhoeven
fixes pathnames in some code comments
- "Converge on using secs_to_jiffies()" from Easwar Hariharan uses
the new secs_to_jiffies() in various places where that is
appropriate
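As an illustration, the conversion is mechanical; a minimal sketch (the
foo_arm_timer() helper and FOO_TIMEOUT_SECS below are hypothetical, not
taken from the series):
  #include <linux/jiffies.h>
  #include <linux/timer.h>

  #define FOO_TIMEOUT_SECS 10    /* hypothetical 10-second timeout */

  static void foo_arm_timer(struct timer_list *t)
  {
      /* Before: mod_timer(t, jiffies + msecs_to_jiffies(FOO_TIMEOUT_SECS * 1000)); */
      mod_timer(t, jiffies + secs_to_jiffies(FOO_TIMEOUT_SECS));
  }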
- "ocfs2, dlmfs: convert to the new mount API" from Eric Sandeen
switches two filesystems to the new mount API
- "Convert ocfs2 to use folios" from Matthew Wilcox does that
- "Remove get_task_comm() and print task comm directly" from Yafang
Shao removes now-unneeded calls to get_task_comm() in various
places
- "squashfs: reduce memory usage and update docs" from Phillip
Lougher implements some memory savings in squashfs and performs
some maintainability work
- "lib: clarify comparison function requirements" from Kuan-Wei Chiu
tightens the sort code's behaviour and adds some maintenance work
- "nilfs2: protect busy buffer heads from being force-cleared" from
Ryusuke Konishi fixes an issue in nilfs2 when the fs is presented
with a corrupted image
- "nilfs2: fix kernel-doc comments for function return values" from
Ryusuke Konishi fixes some nilfs kerneldoc
- "nilfs2: fix issues with rename operations" from Ryusuke Konishi
addresses some nilfs BUG_ONs which syzbot was able to trigger
- "minmax.h: Cleanups and minor optimisations" from David Laight does
some maintenance work on the min/max library code
- "Fixes and cleanups to xarray" from Kemeng Shi does maintenance
work on the xarray library code"
* tag 'mm-nonmm-stable-2025-01-24-23-16' of git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm: (131 commits)
ocfs2: use str_yes_no() and str_no_yes() helper functions
include/linux/lz4.h: add some missing macros
Xarray: use xa_mark_t in xas_squash_marks() to keep code consistent
Xarray: remove repeat check in xas_squash_marks()
Xarray: distinguish large entries correctly in xas_split_alloc()
Xarray: move forward index correctly in xas_pause()
Xarray: do not return sibling entries from xas_find_marked()
ipc/util.c: complete the kernel-doc function descriptions
gcov: clang: use correct function param names
latencytop: use correct kernel-doc format for func params
minmax.h: remove some #defines that are only expanded once
minmax.h: simplify the variants of clamp()
minmax.h: move all the clamp() definitions after the min/max() ones
minmax.h: use BUILD_BUG_ON_MSG() for the lo < hi test in clamp()
minmax.h: reduce the #define expansion of min(), max() and clamp()
minmax.h: update some comments
minmax.h: add whitespace around operators and after commas
nilfs2: do not update mtime of renamed directory that is not moved
nilfs2: handle errors that nilfs_prepare_chunk() may return
CREDITS: fix spelling mistake
...
Linus Torvalds [Mon, 27 Jan 2025 00:12:44 +0000 (16:12 -0800)]
Merge tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi
Pull SCSI updates from James Bottomley:
"Updates to the usual drivers (ufs, lpfc, fnic, qla2xx, mpi3mr).
The major core change is the renaming of the slave_ methods plus a bit
of constification. The rest are minor updates and fixes"
* tag 'scsi-misc' of git://git.kernel.org/pub/scm/linux/kernel/git/jejb/scsi: (103 commits)
scsi: fnic: Propagate SCSI error code from fnic_scsi_drv_init()
scsi: fnic: Test for memory allocation failure and return error code
scsi: fnic: Return appropriate error code from failure of scsi drv init
scsi: fnic: Return appropriate error code for mem alloc failure
scsi: fnic: Remove always-true IS_FNIC_FCP_INITIATOR macro
scsi: fnic: Fix use of uninitialized value in debug message
scsi: fnic: Delete incorrect debugfs error handling
scsi: fnic: Remove unnecessary else to fix warning in FDLS FIP
scsi: fnic: Remove extern definition from .c files
scsi: fnic: Remove unnecessary else and unnecessary break in FDLS
scsi: mpi3mr: Fix possible crash when setting up bsg fails
scsi: ufs: bsg: Set bsg_queue to NULL after removal
scsi: ufs: bsg: Delete bsg_dev when setting up bsg fails
scsi: st: Don't set pos_unknown just after device recognition
scsi: aic7xxx: Fix build 'aicasm' warning
scsi: Revert "scsi: ufs: core: Probe for EXT_IID support"
scsi: storvsc: Ratelimit warning logs to prevent VM denial of service
scsi: scsi_debug: Constify sdebug_driver_template
scsi: documentation: Corrections for struct updates
scsi: driver-api: documentation: Change what is added to docbook
...
Linus Torvalds [Sun, 26 Jan 2025 23:59:47 +0000 (15:59 -0800)]
Merge tag 'firewire-updates-6.14' of git://git.kernel.org/pub/scm/linux/kernel/git/ieee1394/linux1394
Pull firewire updates from Takashi Sakamoto:
"Two changes for the 6.14 kernel.
The first change concerns the PCI driver for 1394 OHCI hardware.
Previously, it used legacy PCI suspend/resume callbacks, which have
now been replaced with callbacks defined in the Linux generic power
management framework. This original patch was posted in 2020 and has
been adapted with some modifications for the latest kernel. Note that
the driver still includes platform-specific operations for PowerPC,
and these operations have not been tested in the new implementation
yet. It would be helpful to share the results of suspending/resuming
on the platform.
The other one is a minor fix for the memory allocation in some KUnit
tests"
* tag 'firewire-updates-6.14' of git://git.kernel.org/pub/scm/linux/kernel/git/ieee1394/linux1394:
firewire: test: Fix potential null dereference in firewire kunit test
firewire: ohci: use generic power management
Linus Torvalds [Sun, 26 Jan 2025 22:30:43 +0000 (14:30 -0800)]
Merge tag 'modules-6.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/modules/linux
Pull modules updates from Petr Pavlu:
- Sign modules with sha512 instead of sha1 by default
- Don't fail module loading when failing to set the
ro_after_init section read-only
- Constify 'struct module_attribute'
- Cleanups and preparation for const struct bin_attribute
- Put known GPL offenders in an array
- Extend the preempt disabled section in
dereference_symbol_descriptor()
* tag 'modules-6.14-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/modules/linux:
module: sign with sha512 instead of sha1 by default
module: Don't fail module loading when setting ro_after_init section RO failed
module: Split module_enable_rodata_ro()
module: sysfs: Use const 'struct bin_attribute'
module: sysfs: Add notes attributes through attribute_group
module: sysfs: Simplify section attribute allocation
module: sysfs: Drop 'struct module_sect_attr'
module: sysfs: Drop member 'module_sect_attr::address'
module: sysfs: Drop member 'module_sect_attrs::nsections'
module: Constify 'struct module_attribute'
module: Handle 'struct module_version_attribute' as const
params: Prepare for 'const struct module_attribute *'
module: Put known GPL offenders in an array
module: Extend the preempt disabled section in dereference_symbol_descriptor().
Linus Torvalds [Sun, 26 Jan 2025 22:25:58 +0000 (14:25 -0800)]
Merge tag 'trace-tools-v6.14' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace
Pull rv and tools/rtla updates from Steven Rostedt:
- Add a test suite to test the tool
Add a small test suite that can be used to test rtla's basic features
to at least have something to test when applying changes.
- Automate manual steps in monitor creation
While creating a new monitor in RV, besides generating code from
dot2k, there are a few manual steps which can be tedious and error
prone, like adding the tracepoints, makefile lines and kconfig, or
selecting events that start the monitor in the initial state.
Updates were made to try and automate as much as possible among those
steps to make creating a new RV monitor much quicker. Selecting the
proper tracepoints is still required; this step is harder to
automate in a general way and, in several cases, would still need
user intervention.
- Have rtla timerlat hist and top set OSNOISE_WORKLOAD flag
Have both rtla-timerlat-hist and rtla-timerlat-top set
OSNOISE_WORKLOAD to the proper value ("on" when running with -k,
"off" when running with -u) every time the option is available
instead of setting it only when running with -u.
This prevents rtla timerlat -k from giving no results when
NO_OSNOISE_WORKLOAD is set, either manually or by an abnormally
exited earlier run of rtla timerlat -u.
- Stop rtla timerlat on signal properly when overloaded
There is an issue where if rtla is run on machines with a high number
of CPUs (100+), timerlat can generate more samples than rtla is able
to process via tracefs_iterate_raw_events. This is especially common
when the interval is set to 100us (rteval and cyclictest default) as
opposed to the rtla default of 1000us, but also happens with the rtla
default.
Currently, this leads to rtla hanging and having to be terminated
with SIGTERM. SIGINT setting stop_tracing is not enough, since more
and more events are coming and tracefs_iterate_raw_events never
exits.
To fix this: Stop the timerlat tracer on SIGINT/SIGALRM to ensure no
more events are generated when rtla is supposed to exit.
Also on receiving SIGINT/SIGALRM twice, abort iteration immediately
with tracefs_iterate_stop, making rtla exit right away instead of
waiting for all events to be processed.
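A rough sketch of the two-stage shutdown using libtracefs (hedged:
rtla's real handlers and state live in its tool structures; the global
'inst' and the handler below are illustrative):
  #include <signal.h>
  #include <tracefs.h>

  static struct tracefs_instance *inst;   /* hypothetical global handle */
  static volatile sig_atomic_t stop_count;

  static void stop_handler(int sig)
  {
      if (++stop_count == 1)
          tracefs_trace_off(inst);        /* stop tracer: no new events */
      else
          tracefs_iterate_stop(inst);     /* abort iteration right away */
  }

  static void install_stop_handlers(void)
  {
      signal(SIGINT, stop_handler);
      signal(SIGALRM, stop_handler);
  }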
- Account for missed events
Due to tracefs buffer overflow, it can happen that rtla misses
events, making the tracing results inaccurate.
Count both the number of missed events and the total number of
processed events, and display missed events as well as their
percentage. The numbers are displayed for both osnoise and timerlat,
even though for the former, missed events are generally not
expected.
For hist, the number is displayed at the end of the run; for top, it
is displayed on each printing of the top table.
- Changes to make osnoise more robust
There was a dependency in the code that the first field of the
osnoise_tool structure was the trace field. If that ever changed,
the code would break. Change the code to encapsulate this dependency
so that the code that uses the structure does not depend on it.
* tag 'trace-tools-v6.14' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace: (22 commits)
rtla: Report missed event count
rtla: Add function to report missed events
rtla: Count all processed events
rtla: Count missed trace events
tools/rtla: Add osnoise_trace_is_off()
rtla/timerlat_top: Set OSNOISE_WORKLOAD for kernel threads
rtla/timerlat_hist: Set OSNOISE_WORKLOAD for kernel threads
rtla/osnoise: Distinguish missing workload option
rtla/timerlat_top: Abort event processing on second signal
rtla/timerlat_hist: Abort event processing on second signal
rtla/timerlat_top: Stop timerlat tracer on signal
rtla/timerlat_hist: Stop timerlat tracer on signal
rtla: Add trace_instance_stop
tools/rtla: Add basic test suite
verification/dot2k: Implement event type detection
verification/dot2k: Auto patch current kernel source
verification/dot2k: Simplify manual steps in monitor creation
rv: Simplify manual steps in monitor creation
verification/dot2k: Add support for name and description options
verification/dot2k: More robust template variables
...
Linus Torvalds [Sun, 26 Jan 2025 22:19:45 +0000 (14:19 -0800)]
Merge tag 'trace-rv-v6.14' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace
Pull runtime verifier and osnoise fixes from Steven Rostedt:
- Reset idle tasks on reset for runtime verifier
When the runtime verifier is reset, it resets the task's data that is
being monitored. But it only iterates for_each_process() which does
not include the idle tasks. As the idle tasks can be monitored, they
need to be reset as well.
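The shape of the fix, as a hedged sketch (rv_reset_task_monitor()
stands in for the monitor's actual per-task reset helper):
  static void rv_reset_all_tasks(void)
  {
      struct task_struct *p;
      int cpu;

      read_lock(&tasklist_lock);
      for_each_process(p)
          rv_reset_task_monitor(p);       /* hypothetical reset helper */
      read_unlock(&tasklist_lock);

      /* for_each_process() skips the per-CPU idle tasks; reset them too. */
      for_each_possible_cpu(cpu)
          rv_reset_task_monitor(idle_task(cpu));
  }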
- Fix the enabling and disabling of tracepoints in osnoise
If timerlat is enabled and the WORKLOAD flag is not set, then the
osnoise tracer will enable the migrate task tracepoint to monitor it
for its own workload. The test to enable the tracepoint is done
against user space modifiable parameters. On disabling of the tracer,
those same parameters are used to determine if the tracepoint should
be disabled. The problem is if user space were to modify the
parameters after it enables the tracer then it may not disable the
tracepoint.
Instead, a static variable is used to keep track if the tracepoint
was enabled or not. Then when the tracer shuts down, it will use this
variable to decide to disable the tracepoint or not, instead of
looking at the user space parameters.
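In outline the fix looks like this (a sketch; the helpers around the
static flag are illustrative, not the tracer's actual symbols):
  static void probe_migrate(void *data, struct task_struct *p, int dest_cpu);

  static bool migrate_tp_registered;      /* what we actually enabled */

  static void osnoise_register_tracepoints(void)
  {
      if (timerlat_on() && !workload_on()) {  /* hypothetical checks */
          register_trace_sched_migrate_task(probe_migrate, NULL);
          migrate_tp_registered = true;
      }
  }

  static void osnoise_unregister_tracepoints(void)
  {
      /* Decide from what was registered, not from user-modifiable
       * parameters that may have changed since enable time. */
      if (migrate_tp_registered) {
          unregister_trace_sched_migrate_task(probe_migrate, NULL);
          migrate_tp_registered = false;
      }
  }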
* tag 'trace-rv-v6.14' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace:
tracing/osnoise: Fix resetting of tracepoints
rv: Reset per-task monitors also for idle tasks
Linus Torvalds [Sun, 26 Jan 2025 22:03:44 +0000 (14:03 -0800)]
Merge tag 'bitmap-for-6.14' of https://github.com:/norov/linux
Pull bitmap updates from Yury Norov:
"This includes const_true() series from Vincent Mailhol, another
__always_inline rework from Nathan Chancellor for RISCV, and a couple
of random fixes from Dr. David Alan Gilbert and I Hsin Cheng"
* tag 'bitmap-for-6.14' of https://github.com:/norov/linux:
cpumask: Rephrase comments for cpumask_any*() APIs
cpu: Remove unused init_cpu_online
riscv: Always inline bitops
linux/bits.h: simplify GENMASK_INPUT_CHECK()
compiler.h: add const_true()
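For reference, the new helper is essentially a constant-folding guard;
an assumed sketch of its shape (check compiler.h for the exact
definition):
  /*
   * Assumed shape: evaluate to x when x is an integer constant
   * expression, and to false otherwise, so the result itself stays
   * usable in compile-time contexts.
   */
  #define const_true(x) __builtin_choose_expr(__is_constexpr(x), x, false)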
Thorsten Leemhuis [Wed, 16 Oct 2024 14:18:41 +0000 (16:18 +0200)]
module: sign with sha512 instead of sha1 by default
Switch away from using sha1 for module signing by default and use the
more modern sha512 instead, which is what among others Arch, Fedora,
RHEL, and Ubuntu are currently using for their kernels.
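In .config terms, the new default corresponds to something like the
following fragment (a sketch; the hash is selected via the
MODULE_SIG_HASH choice in kernel/module/Kconfig):
  CONFIG_MODULE_SIG=y
  CONFIG_MODULE_SIG_SHA512=y
  CONFIG_MODULE_SIG_HASH="sha512"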
Sha1 has not been considered secure against well-funded opponents since
2005[1]; since 2011 the NIST and other organizations furthermore
recommended its replacement[2]. This is why OpenSSL on RHEL9, Fedora
Linux 41+[3], and likely some other current and future distributions
reject the creation of sha1 signatures, which leads to a build error of
allmodconfig configurations:
module: Don't fail module loading when setting ro_after_init section RO failed
Once module init has succeeded it is too late to cancel loading.
If setting the ro_after_init data section to read-only fails, all we
can do is to inform the user through a warning.
Reported-by: Thomas Gleixner <tglx@linutronix.de>
Closes: https://lore.kernel.org/all/20230915082126.4187913-1-ruanjinjie@huawei.com/
Fixes: d1909c022173 ("module: Don't ignore errors from set_memory_XX()")
Signed-off-by: Christophe Leroy <christophe.leroy@csgroup.eu>
Reviewed-by: Luis Chamberlain <mcgrof@kernel.org>
Link: https://lore.kernel.org/r/d6c81f38da76092de8aacc8c93c4c65cb0fe48b8.1733427536.git.christophe.leroy@csgroup.eu
Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
Christophe Leroy [Thu, 5 Dec 2024 19:46:15 +0000 (20:46 +0100)]
module: Split module_enable_rodata_ro()
module_enable_rodata_ro() is called twice, once before module init
to set rodata sections readonly and once after module init to set
rodata_after_init section readonly.
The second time, only the rodata_after_init section needs to be
set to read-only, no need to re-apply it to already set rodata.
Thomas Weißschuh [Fri, 27 Dec 2024 13:23:24 +0000 (14:23 +0100)]
module: sysfs: Add notes attributes through attribute_group
A kobject is meant to manage the lifecycle of some resource.
However the module sysfs code only creates a kobject to get a
"notes" subdirectory in sysfs.
This can be achieved easier and cheaper by using a sysfs group.
Switch the notes attribute code to such a group, similar to how the
section allocation in the same file already works.
The existing allocation logic manually stuffs two allocations into one.
This is hard to understand and of limited value, given that all the
section names are allocated on their own anyway.
Use one allocation per data structure.
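A minimal sketch of the named-group pattern (illustrative names; the
real code builds the bin_attribute array from the module's ELF note
sections):
  static struct bin_attribute note_attr;  /* one per ELF note section */
  static struct bin_attribute *note_bin_attrs[] = {
      &note_attr,
      NULL,
  };

  static const struct attribute_group notes_group = {
      .name      = "notes",               /* creates the subdirectory */
      .bin_attrs = note_bin_attrs,
  };

  /* sysfs_create_group(&mod->mkobj.kobj, &notes_group); */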
Thomas Weißschuh [Fri, 27 Dec 2024 13:23:21 +0000 (14:23 +0100)]
module: sysfs: Drop member 'module_sect_attr::address'
'struct bin_attribute' already contains the member 'private' to pass
custom data to the attribute handlers.
Use that instead of the custom 'address' member.
Thomas Weißschuh [Fri, 27 Dec 2024 13:23:20 +0000 (14:23 +0100)]
module: sysfs: Drop member 'module_sect_attrs::nsections'
The member is only used to iterate over all attributes in
free_sect_attrs(). However the attribute group can already be used for
that. Use the group and drop 'nsections'.
Thomas Weißschuh [Mon, 16 Dec 2024 17:25:10 +0000 (18:25 +0100)]
module: Constify 'struct module_attribute'
These structs are never modified, move them to read-only memory.
This makes the API clearer and also prepares for the constification of
'struct attribute' itself.
Thomas Weißschuh [Mon, 16 Dec 2024 17:25:09 +0000 (18:25 +0100)]
module: Handle 'struct module_version_attribute' as const
The structure is always read-only due to its placement in the read-only
section __modver. Reflect this at its usage sites.
Also prepare for the const handling of 'struct module_attribute' itself.
Thomas Weißschuh [Mon, 16 Dec 2024 17:25:08 +0000 (18:25 +0100)]
params: Prepare for 'const struct module_attribute *'
The 'struct module_attribute' sysfs callbacks are about to change to
receive a 'const struct module_attribute *' parameter.
Prepare for that by avoiding casting away the constness through
container_of() and using const pointers to 'struct param_attribute'.
Uwe Kleine-König [Fri, 15 Nov 2024 18:50:30 +0000 (19:50 +0100)]
module: Put known GPL offenders in an array
Instead of repeating the add_taint_module() call for each offender, create
an array and loop over that one. This simplifies adding new entries
considerably.
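The resulting shape, sketched (the entries shown are examples, and
add_taint_module() is a module-core internal helper; see the commit for
the real list):
  static const char *const gpl_offenders[] = {
      "driverloader",     /* example entries */
      "lve",
  };

  static void check_gpl_offenders(struct module *mod)
  {
      unsigned int i;

      for (i = 0; i < ARRAY_SIZE(gpl_offenders); i++)
          if (!strcmp(mod->name, gpl_offenders[i]))
              add_taint_module(mod, TAINT_PROPRIETARY_MODULE,
                               LOCKDEP_NOW_UNRELIABLE);
  }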
Signed-off-by: Uwe Kleine-König <ukleinek@kernel.org>
Reviewed-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Werner Sembach <wse@tuxedocomputers.com>
Link: https://lore.kernel.org/r/20241115185253.1299264-2-wse@tuxedocomputers.com
[ppavlu: make the array const]
Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
Sebastian Andrzej Siewior [Wed, 8 Jan 2025 09:04:30 +0000 (10:04 +0100)]
module: Extend the preempt disabled section in dereference_symbol_descriptor().
dereference_symbol_descriptor() needs to obtain the module pointer
belonging to the pointer in order to resolve that pointer.
The returned mod pointer is obtained under RCU-sched/preempt_disable()
guarantees and needs to be used within this section to ensure that the
module is not removed in the meantime.
Extend the preempt_disable() section to also cover
dereference_module_function_descriptor().
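The fix amounts to keeping both lookups under one preempt-disabled
section; a simplified sketch:
  static void *deref_descriptor(void *ptr)
  {
      struct module *mod;

      preempt_disable();
      mod = __module_address((unsigned long)ptr);
      /* Must happen before preempt_enable(), while the module is
       * guaranteed not to be freed underneath us. */
      if (mod)
          ptr = dereference_module_function_descriptor(mod, ptr);
      preempt_enable();

      return ptr;
  }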
Fixes: 04b8eb7a4ccd9 ("symbol lookup: introduce dereference_symbol_descriptor()")
Cc: James E.J. Bottomley <James.Bottomley@HansenPartnership.com>
Cc: Christophe Leroy <christophe.leroy@csgroup.eu>
Cc: Helge Deller <deller@gmx.de>
Cc: Madhavan Srinivasan <maddy@linux.ibm.com>
Cc: Michael Ellerman <mpe@ellerman.id.au>
Cc: Naveen N Rao <naveen@kernel.org>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Sergey Senozhatsky <sergey.senozhatsky.work@gmail.com>
Cc: linux-parisc@vger.kernel.org
Cc: linuxppc-dev@lists.ozlabs.org
Reviewed-by: Sergey Senozhatsky <senozhatsky@chromium.org>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Link: https://lore.kernel.org/r/20250108090457.512198-2-bigeasy@linutronix.de
Signed-off-by: Petr Pavlu <petr.pavlu@suse.com>
mm/compaction: fix UBSAN shift-out-of-bounds warning
syzkaller reported a UBSAN shift-out-of-bounds warning of (1UL << order)
in isolate_freepages_block(). The bogus compound_order can be any value
because it is in a union with flags. Add back the MAX_PAGE_ORDER check to fix
the warning.
Link: https://lkml.kernel.org/r/20250123021029.2826736-1-liushixin2@huawei.com
Fixes: 3da0272a4c7d ("mm/compaction: correctly return failure with bogus compound_order in strict mode")
Signed-off-by: Liu Shixin <liushixin2@huawei.com>
Reviewed-by: Kemeng Shi <shikemeng@huaweicloud.com>
Acked-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: David Hildenbrand <david@redhat.com>
Cc: Kemeng Shi <shikemeng@huaweicloud.com>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Nanyong Sun <sunnanyong@huawei.com>
Cc: <stable@vger.kernel.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Alexander Gordeev [Thu, 23 Jan 2025 16:03:49 +0000 (17:03 +0100)]
s390/mm: add missing ctor/dtor on page table upgrade
Commit 78966b550289 ("s390: pgtable: add statistics for PUD and P4D level
page table") misses the call to pagetable_p4d_ctor() against a newly
allocated P4D table in crst_table_upgrade().
Commit 68c601de75d8 ("mm: introduce ctor/dtor at PGD level") misses the
call to pagetable_pgd_ctor() against a newly allocated PGD and the call to
pagetable_dtor() against a newly allocated P4D that is about to be freed
on the crst_table_upgrade() PGD upgrade fail path.
The missed constructors and destructor break (at least) the page table
accounting when a process memory space is upgraded.
Link: https://lkml.kernel.org/r/20250123160349.200154-1-agordeev@linux.ibm.com
Fixes: 78966b550289 ("s390: pgtable: add statistics for PUD and P4D level page table")
Fixes: 68c601de75d8 ("mm: introduce ctor/dtor at PGD level")
Signed-off-by: Alexander Gordeev <agordeev@linux.ibm.com>
Reported-by: Heiko Carstens <hca@linux.ibm.com>
Closes: https://lore.kernel.org/all/20250122074954.8685-A-hca@linux.ibm.com/
Suggested-by: Heiko Carstens <hca@linux.ibm.com>
Reviewed-by: Gerald Schaefer <gerald.schaefer@linux.ibm.com>
Acked-by: Qi Zheng <zhengqi.arch@bytedance.com>
Reviewed-by: Kevin Brodsky <kevin.brodsky@arm.com>
Cc: Heiko Carstens <hca@linux.ibm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Jim Zhao [Thu, 21 Nov 2024 10:05:39 +0000 (18:05 +0800)]
mm/page-writeback: consolidate wb_thresh bumping logic into __wb_calc_thresh
Address the feedback from 39ac99852fca ("mm/page-writeback: raise
wb_thresh to prevent write blocking with strictlimit"). The wb_thresh
bumping logic is scattered across wb_position_ratio, __wb_calc_thresh, and
wb_update_dirty_ratelimit. For consistency, consolidate all wb_thresh
bumping logic into __wb_calc_thresh.
Link: https://lkml.kernel.org/r/20241121100539.605818-1-jimzhao.ai@gmail.com
Signed-off-by: Jim Zhao <jimzhao.ai@gmail.com>
Reviewed-by: Jan Kara <jack@suse.cz>
Cc: Matthew Wilcox <willy@infradead.org>
Cc: Kemeng Shi <shikemeng@huaweicloud.com>
Cc: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Yuntao Wang [Wed, 15 Jan 2025 04:16:34 +0000 (12:16 +0800)]
mm/page_alloc: remove the incorrect and misleading comment
The comment removed in this patch originally belonged to the
build_zonelists_in_zone_order() function, which was introduced by commit f0c0b2b808f2 ("change zonelist order: zonelist order selection logic").
Later, commit c9bff3eebc09 ("mm, page_alloc: rip out ZONELIST_ORDER_ZONE")
removed build_zonelists_in_zone_order() but left its comment behind.
Subsequently, commit 9d3be21bf9c0 ("mm, page_alloc: simplify zonelist
initialization") moved the node_order variable into build_zonelists(),
making the comment that originally belonged to build_zonelists_in_zone_order()
appear as if it were part of build_zonelists().
Sergey Senozhatsky [Wed, 15 Jan 2025 07:19:16 +0000 (16:19 +0900)]
zram: remove zcomp_stream_put() from write_incompressible_page()
We cannot and should not put per-CPU compression stream in
write_incompressible_page() because that function never gets any
per-CPU streams in the first place. It's zram_write_page() that
puts the stream before it calls write_incompressible_page().
Byungchul Park [Thu, 8 Aug 2024 06:53:58 +0000 (15:53 +0900)]
mm: separate move/undo parts from migrate_pages_batch()
Functionally, no change. This is preparation for the luf mechanism,
which requires separate folio lists for its own handling during
migration. Refactor migrate_pages_batch() so as to separate its
move/undo parts.
Thomas Weißschuh [Tue, 14 Jan 2025 16:06:48 +0000 (17:06 +0100)]
selftests/mm: virtual_address_range: avoid reading from VM_IO mappings
The virtual_address_range selftest reads from the start of each mapping
listed in /proc/self/maps. However not all mappings are valid to be
arbitrarily accessed.
For example the vvar data used for virtual clocks on x86 [vvar_vclock] can
only be accessed if 1) the kernel configuration enables virtual clocks and
2) the hypervisor provided the data for it. Only the VDSO itself has the
necessary information to know this. Since commit e93d2521b27f ("x86/vdso:
Split virtual clock pages into dedicated mapping") the virtual clock data
was split out into its own mapping, leading to EFAULT from read() during
the validation.
Check for the VM_IO flag as a proxy. It is present for the VVAR mappings
and MMIO ranges can be dangerous to access arbitrarily.
Link: https://lkml.kernel.org/r/20250114-virtual_address_range-tests-v4-4-6fd7269934a5@linutronix.de
Reported-by: kernel test robot <oliver.sang@intel.com>
Closes: https://lore.kernel.org/oe-lkp/202412271148.2656e485-lkp@intel.com
Fixes: e93d2521b27f ("x86/vdso: Split virtual clock pages into dedicated mapping")
Fixes: 010409649885 ("selftests/mm: confirm VA exhaustion without reliance on correctness of mmap()")
Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de>
Suggested-by: David Hildenbrand <david@redhat.com>
Link: https://lore.kernel.org/lkml/e97c2a5d-c815-4936-a767-ac42a3220a90@redhat.com/
Acked-by: David Hildenbrand <david@redhat.com>
Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Cc: Dev Jain <dev.jain@arm.com>
Cc: Shuah Khan (Samsung OSG) <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Thomas Weißschuh [Tue, 14 Jan 2025 16:06:46 +0000 (17:06 +0100)]
selftests/mm: virtual_address_range: unmap chunks after validation
For each accessed chunk a PTE is created. More than 1GiB of PTEs is used
in this way. Remove each PTE after validating a chunk to reduce peak
memory usage.
It is important to only unmap memory that was previously mmap()ed, as
unmapping other mappings like the stack, heap or executable mappings will
crash the process.
The mappings read from /proc/self/maps and the return values from mmap()
don't allow a simple correlation due to merging and no guaranteed order.
To correlate the pointers and mappings use prctl(PR_SET_VMA_ANON_NAME).
While it introduces a test dependency, other alternatives would introduce
runtime or development overhead.
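The naming trick in userspace terms; a small sketch (the mapping name
"va_range_test" is illustrative, and PR_SET_VMA_ANON_NAME needs a
kernel built with CONFIG_ANON_VMA_NAME):
  #include <sys/mman.h>
  #include <sys/prctl.h>
  #include <linux/prctl.h>    /* PR_SET_VMA, PR_SET_VMA_ANON_NAME */

  static void *map_named_chunk(size_t len)
  {
      void *p = mmap(NULL, len, PROT_READ,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

      if (p == MAP_FAILED)
          return NULL;
      /* Shows up in /proc/self/maps as "[anon:va_range_test]". */
      prctl(PR_SET_VMA, PR_SET_VMA_ANON_NAME,
            (unsigned long)p, len, "va_range_test");
      return p;
  }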
Link: https://lkml.kernel.org/r/20250114-virtual_address_range-tests-v4-2-6fd7269934a5@linutronix.de
Fixes: 010409649885 ("selftests/mm: confirm VA exhaustion without reliance on correctness of mmap()")
Signed-off-by: Thomas Weißschuh <thomas.weissschuh@linutronix.de>
Acked-by: David Hildenbrand <david@redhat.com>
Cc: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Cc: Dev Jain <dev.jain@arm.com>
Cc: kernel test robot <oliver.sang@intel.com>
Cc: Shuah Khan (Samsung OSG) <shuah@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Thomas Weißschuh [Tue, 14 Jan 2025 16:06:45 +0000 (17:06 +0100)]
selftests/mm: virtual_address_range: mmap() without PROT_WRITE
Patch series "selftests/mm: virtual_address_range: Reduce memory", v4.
The selftest started failing since commit e93d2521b27f ("x86/vdso: Split
virtual clock pages into dedicated mapping") was merged. While debugging
I stumbled upon some memory usage optimizations.
With these optimizations the test now runs on a VM with only 60MiB of memory.
This patch (of 4):
When mapping a larger chunk than physical memory is available with
PROT_WRITE and overcommit is disabled, the mapping will fail. This will
prevent the test from running on systems with less than ~1GiB of memory
and triggering an inscrutable test failure. As the mappings are never
written to anyway, the flag can be removed.
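A one-call illustration of the change (sketch):
  #include <sys/mman.h>

  static void *map_chunk(size_t len)
  {
      /* PROT_READ only: read-only anonymous memory is not charged
       * under overcommit accounting, so a huge mapping succeeds even
       * with overcommit disabled. */
      return mmap(NULL, len, PROT_READ,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  }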
Jens Axboe [Fri, 20 Dec 2024 15:47:50 +0000 (08:47 -0700)]
mm: add FGP_DONTCACHE folio creation flag
Callers can pass this in for uncached folio creation, in which case if a
folio is newly created it gets marked as uncached. If a folio exists for
this index and lookup succeeds, then it will not get marked as uncached.
If an !uncached lookup finds a cached folio, clear the flag. For that
case, there are competing uncached and cached users of the folio, and it
should not get pruned.
Link: https://lkml.kernel.org/r/20241220154831.1086649-13-axboe@kernel.dk
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Cc: Brian Foster <bfoster@redhat.com>
Cc: Chris Mason <clm@meta.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Jens Axboe [Fri, 20 Dec 2024 15:47:49 +0000 (08:47 -0700)]
mm: call filemap_fdatawrite_range_kick() after IOCB_DONTCACHE issue
When a buffered write submitted with IOCB_DONTCACHE has been successfully
submitted, call filemap_fdatawrite_range_kick() to kick off the IO. File
systems call generic_write_sync() for any successful buffered write
submission, hence add the logic here rather than needing to modify the
file system.
Link: https://lkml.kernel.org/r/20241220154831.1086649-12-axboe@kernel.dk
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Cc: Brian Foster <bfoster@redhat.com>
Cc: Chris Mason <clm@meta.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
mm: add filemap_fdatawrite_range_kick() helper
Works like filemap_fdatawrite_range(), except it's a non-integrity data
writeback and hence only starts writeback on the specified range. Will
help facilitate generically starting uncached writeback from
generic_write_sync(), as header dependencies preclude doing this inline
from fs.h.
Link: https://lkml.kernel.org/r/20241220154831.1086649-11-axboe@kernel.dk
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Cc: Brian Foster <bfoster@redhat.com>
Cc: Chris Mason <clm@meta.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Jens Axboe [Fri, 20 Dec 2024 15:47:47 +0000 (08:47 -0700)]
mm/filemap: drop streaming/uncached pages when writeback completes
If the folio is marked as streaming, drop pages when writeback completes.
Intended to be used with RWF_DONTCACHE, to avoid needing sync writes for
uncached IO.
Link: https://lkml.kernel.org/r/20241220154831.1086649-10-axboe@kernel.dk
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Cc: Brian Foster <bfoster@redhat.com>
Cc: Chris Mason <clm@meta.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Jens Axboe [Fri, 20 Dec 2024 15:47:46 +0000 (08:47 -0700)]
mm/filemap: add read support for RWF_DONTCACHE
Add RWF_DONTCACHE as a read operation flag, which means that any data read
will be removed from the page cache upon completion. Uses the page cache
to synchronize, and simply prunes folios that were instantiated when the
operation completes. While it would be possible to use private pages for
this, using the page cache as synchronization is handy for a variety of
reasons:
1) No special truncate magic is needed
2) Async buffered reads need some place to serialize, using the page
cache is a lot easier than writing extra code for this
3) The pruning cost is pretty reasonable
and the code to support this is much simpler as a result.
You can think of uncached buffered IO as being the much more attractive
cousin of O_DIRECT - it has none of the restrictions of O_DIRECT. Yes, it
will copy the data, but unlike regular buffered IO, it doesn't run into
the unpredictability of the page cache in terms of reclaim. As an
example, on a test box with 32 drives, reading them with buffered IO looks
as follows:
where it's quite easy to see where the page cache filled up, and
performance went from good to erratic, and finally settled at a much
lower rate. Looking at top while this is ongoing, we see:
which is just chugging along at ~155GB/sec of read performance. Looking
at top, we see:
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
7961 root 20 0 267004 0 0 S 3180 0.0 5:37.95 uncached
8024 axboe 20 0 14292 4096 0 R 1.0 0.0 0:00.13 top
where just the test app is using CPU, no reclaim is taking place outside
of the main thread. Not only is performance 65% better, it's also using
half the CPU to do it.
Link: https://lkml.kernel.org/r/20241220154831.1086649-9-axboe@kernel.dk
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Cc: Brian Foster <bfoster@redhat.com>
Cc: Chris Mason <clm@meta.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Jens Axboe [Fri, 20 Dec 2024 15:47:45 +0000 (08:47 -0700)]
fs: add RWF_DONTCACHE iocb and FOP_DONTCACHE file_operations flag
If a file system supports uncached buffered IO, it may set FOP_DONTCACHE
and enable support for RWF_DONTCACHE. If RWF_DONTCACHE is attempted
without the file system supporting it, it'll get errored with -EOPNOTSUPP.
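Opting in is a matter of the new fop_flags bit; a sketch for a
hypothetical file system using the generic helpers:
  static const struct file_operations foo_file_operations = {
      .read_iter  = generic_file_read_iter,
      .write_iter = generic_file_write_iter,
      .fop_flags  = FOP_DONTCACHE,    /* advertise RWF_DONTCACHE support */
  };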
Link: https://lkml.kernel.org/r/20241220154831.1086649-8-axboe@kernel.dk
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Cc: Brian Foster <bfoster@redhat.com>
Cc: Chris Mason <clm@meta.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Jens Axboe [Fri, 20 Dec 2024 15:47:44 +0000 (08:47 -0700)]
mm/truncate: add folio_unmap_invalidate() helper
Add a folio_unmap_invalidate() helper, which unmaps and invalidates a
given folio. The caller must already have locked the folio. Embed the
old invalidate_complete_folio2() helper in there as well, as nobody else
calls it.
Use this new helper in invalidate_inode_pages2_range(), rather than
duplicate the code there.
In preparation for using this elsewhere as well, have it take a gfp_t mask
rather than assume GFP_KERNEL is the right choice. This bubbles back to
invalidate_complete_folio2() as well.
Link: https://lkml.kernel.org/r/20241220154831.1086649-7-axboe@kernel.dk
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Cc: Brian Foster <bfoster@redhat.com>
Cc: Chris Mason <clm@meta.com>
Cc: Christoph Hellwig <hch@lst.de>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Jens Axboe [Fri, 20 Dec 2024 15:47:40 +0000 (08:47 -0700)]
mm/filemap: use page_cache_sync_ra() to kick off read-ahead
Rather than use the page_cache_sync_readahead() helper, define our own
ractl and use page_cache_sync_ra() directly. In preparation for needing
to modify ractl inside filemap_get_pages().
No functional changes in this patch.
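The new pattern, roughly (a simplified sketch of what
filemap_get_pages() does; DEFINE_READAHEAD and page_cache_sync_ra come
from pagemap.h):
  static void kick_sync_ra(struct file *filp, struct address_space *mapping,
                           pgoff_t index, unsigned long nr_to_read)
  {
      DEFINE_READAHEAD(ractl, filp, &filp->f_ra, mapping, index);

      page_cache_sync_ra(&ractl, nr_to_read);
  }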
Link: https://lkml.kernel.org/r/20241220154831.1086649-3-axboe@kernel.dk
Signed-off-by: Jens Axboe <axboe@kernel.dk>
Reviewed-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Brian Foster <bfoster@redhat.com>
Cc: Chris Mason <clm@meta.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Jens Axboe [Fri, 20 Dec 2024 15:47:39 +0000 (08:47 -0700)]
mm/filemap: change filemap_create_folio() to take a struct kiocb
Patch series "Uncached buffered IO", v8.
5 years ago I posted patches adding support for RWF_UNCACHED, as a way to
do buffered IO that isn't page cache persistent. The approach back then
was to have private pages for IO, and then get rid of them once IO was
done. But that then runs into all the issues that O_DIRECT has, in terms
of synchronizing with the page cache.
So here's a new approach to the same concept, but using the page cache as
synchronization. Due to excessive bike shedding on the naming, this is
now named RWF_DONTCACHE, and is less special in that it's just page cache
IO, except it prunes the ranges once IO is completed.
Why do this, you may ask? The tldr is that device speeds are only getting
faster, while reclaim is not. Doing normal buffered IO can be very
unpredictable, and suck up a lot of resources on the reclaim side. This
leads people to use O_DIRECT as a work-around, which has its own set of
restrictions in terms of size, offset, and length of IO. It's also
inherently synchronous, and now you need async IO as well. While the
latter isn't necessarily a big problem as we have good options available
there, it also should not be a requirement when all you want to do is read
or write some data without caching.
Even on desktop type systems, a normal NVMe device can fill the entire
page cache in seconds. On the big system I used for testing, there's a
lot more RAM, but also a lot more devices. As can be seen in some of the
results in the following patches, you can still fill RAM in seconds even
when there's 1TB of it. Hence this problem isn't solely a "big
hyperscaler system" issue, it's common across the board.
Common for both reads and writes with RWF_DONTCACHE is that they use the
page cache for IO. Reads work just like a normal buffered read would,
with the only exception being that the touched ranges will get pruned
after data has been copied. For writes, the ranges will get writeback
kicked off before the syscall returns, and then writeback completion will
prune the range. Hence writes aren't synchronous, and it's easy to
pipeline writes using RWF_DONTCACHE. Folios that aren't instantiated by
RWF_DONTCACHE IO are left untouched. This means that uncached IO will
take advantage of the page cache for uptodate data, but not leave anything
it instantiated/created in cache.
File systems need to support this. This patchset adds support for the
generic read path, which covers file systems like ext4. Patches exist to
add support for iomap/XFS and btrfs as well, which sit on top of this
series. If RWF_DONTCACHE IO is attempted on a file system that doesn't
support it, -EOPNOTSUPP is returned. Hence the user can rely on it either
working as designed, or flagging an error if that's not the case. The
intent here is to give the application a sensible fallback path - eg, it
may fall back to O_DIRECT if appropriate, or just live with the fact that
uncached IO isn't available and do normal buffered IO.
Adding "support" to other file systems should be trivial, most of the time
just a one-liner adding FOP_DONTCACHE to the fop_flags in the
file_operations struct, if the file system is using either iomap or the
generic filemap helpers for reading and writing.
Performance results are in patch 8 for reads, and you can find the write
side results in the XFS patch adding support for DONTCACHE writes for XFS:
with the tldr being that I see about a 65% improvement in performance for
both, with fully predictable IO times. CPU reduction is substantial as
well, with no kswapd activity at all for reclaim when using uncached IO.
Using it from applications is trivial - just set RWF_DONTCACHE for the
read or write, using pwritev2(2) or preadv2(2). For io_uring, same thing,
just set RWF_DONTCACHE in sqe->rw_flags for a buffered read/write
operation. And that's it.
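For example (a sketch; RWF_DONTCACHE is defined in the 6.14 uapi
headers):
  #define _GNU_SOURCE
  #include <sys/uio.h>
  #include <linux/fs.h>   /* RWF_DONTCACHE, uapi >= 6.14 */

  static ssize_t write_dontcache(int fd, void *buf, size_t len, off_t off)
  {
      struct iovec iov = { .iov_base = buf, .iov_len = len };

      /* Same pattern with preadv2() for uncached reads. */
      return pwritev2(fd, &iov, 1, off, RWF_DONTCACHE);
  }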
Patches 1..7 are just prep patches, and should have no functional changes
at all. Patch 8 adds support for the filemap path for RWF_DONTCACHE
reads, and patches 9..12 are just prep patches for supporting the write
side of uncached writes. In the below mentioned branch, there are then
patches to adopt uncached reads and writes for xfs, btrfs, and ext4. The
latter currently relies on a bit of a hack for passing whether this is an
uncached write or not through ->write_begin(), which can hopefully go away
once ext4 adopts iomap for buffered writes. I say this is a hack as it's
not the prettiest way to do it, however it is fully solid and will work
just fine.
Passes full xfstests and fsx overnight runs, no issues observed. That
includes the vm running the testing also using RWF_DONTCACHE on the host.
I'll post fsstress and fsx patches for RWF_DONTCACHE separately. As far
as I'm concerned, no further work needs doing here.
Rather than pass in both the file and position directly from the kiocb,
just take a struct kiocb instead. With the kiocb being passed in, skip
passing in the address_space separately as well. While doing so, move the
ki_flags checking into filemap_create_folio() as well. In preparation for
actually needing the kiocb in the function.
David Hildenbrand [Mon, 13 Jan 2025 13:16:11 +0000 (14:16 +0100)]
mm/hugetlb: use folio->lru in demote_free_hugetlb_folios()
We are demoting hugetlb folios to smaller hugetlb folios; let's avoid
messing with pages where avoidable and handle it more similar to
__split_huge_page_tail().
Link: https://lkml.kernel.org/r/20250113131611.2554758-7-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Cc: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Muchun Song <muchun.song@linux.dev>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
David Hildenbrand [Mon, 13 Jan 2025 13:16:09 +0000 (14:16 +0100)]
mm/hugetlb: rename folio_putback_active_hugetlb() to folio_putback_hugetlb()
Now that folio_putback_hugetlb() is only called on folios that were
previously isolated through folio_isolate_hugetlb(), let's rename it to
match folio_putback_lru().
Add some kernel doc to clarify how this function is supposed to be used.
Link: https://lkml.kernel.org/r/20250113131611.2554758-5-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
David Hildenbrand [Mon, 13 Jan 2025 13:16:08 +0000 (14:16 +0100)]
mm/migrate: don't call folio_putback_active_hugetlb() on dst hugetlb folio
We replaced a simple put_page() by a putback_active_hugepage() call in
commit 3aaa76e125c1 ("mm: migrate: hugetlb: putback destination hugepage
to active list"), to set the "active" flag on the dst hugetlb folio.
Nowadays, we decoupled the "active" list from the flag, by calling the
flag "migratable".
Calling "putback" on something that wasn't allocated is weird and not
future proof, especially if we might reach that path when migration failed
and we just want to free the freshly allocated hugetlb folio.
Let's simply handle the migratable flag and the active list flag in
move_hugetlb_state(), where we know that allocation succeeded and already
handle the temporary flag; use a simple folio_put() to return our
reference.
Link: https://lkml.kernel.org/r/20250113131611.2554758-4-david@redhat.com
Signed-off-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Baolin Wang <baolin.wang@linux.alibaba.com>
Cc: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Muchun Song <muchun.song@linux.dev>
Cc: Sidhartha Kumar <sidhartha.kumar@oracle.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Usama Arif [Mon, 13 Jan 2025 19:07:38 +0000 (19:07 +0000)]
mm/damon/paddr: increment pa_stat damon address range by folio size
This is to avoid going through all the pages in a folio. For folio_size >
PAGE_SIZE, damon_get_folio will return NULL for tail pages, so the for
loop in those instances will be a nop. Have a more efficient loop by just
incrementing the address by folio_size.
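The loop shape after the change, sketched (simplified from
mm/damon/paddr.c; the stats accounting is elided):
  static void damon_pa_stat_region(struct damon_region *r)
  {
      unsigned long addr, sz;

      for (addr = r->ar.start; addr < r->ar.end; addr += sz) {
          struct folio *folio = damon_get_folio(PHYS_PFN(addr));

          sz = PAGE_SIZE;
          if (!folio)
              continue;       /* e.g. a tail page; step one page */
          sz = folio_size(folio);
          /* ... account this folio's access stats ... */
          folio_put(folio);
      }
  }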
Randy Dunlap [Sat, 11 Jan 2025 06:32:49 +0000 (22:32 -0800)]
kasan: use correct kernel-doc format
Use the correct kernel-doc character following function parameters or
struct members (':' instead of '-') to eliminate kernel-doc warnings.
kasan.h:509: warning: Function parameter or struct member 'addr' not described in 'kasan_poison'
kasan.h:509: warning: Function parameter or struct member 'size' not described in 'kasan_poison'
kasan.h:509: warning: Function parameter or struct member 'value' not described in 'kasan_poison'
kasan.h:509: warning: Function parameter or struct member 'init' not described in 'kasan_poison'
kasan.h:522: warning: Function parameter or struct member 'addr' not described in 'kasan_unpoison'
kasan.h:522: warning: Function parameter or struct member 'size' not described in 'kasan_unpoison'
kasan.h:522: warning: Function parameter or struct member 'init' not described in 'kasan_unpoison'
kasan.h:539: warning: Function parameter or struct member 'address' not described in 'kasan_poison_last_granule'
kasan.h:539: warning: Function parameter or struct member 'size' not described in 'kasan_poison_last_granule'
Link: https://lkml.kernel.org/r/20250111063249.910975-1-rdunlap@infradead.org
Signed-off-by: Randy Dunlap <rdunlap@infradead.org>
Reviewed-by: Andrey Konovalov <andreyknvl@gmail.com>
Cc: Andrey Ryabinin <ryabinin.a.a@gmail.com>
Cc: Alexander Potapenko <glider@google.com>
Cc: Dmitry Vyukov <dvyukov@google.com>
Cc: Vincenzo Frascino <vincenzo.frascino@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Zi Yan [Fri, 10 Jan 2025 23:50:27 +0000 (18:50 -0500)]
selftests/mm: use selftests framework to print test result
Otherwise the number of tests does not match the reality.
Link: https://lkml.kernel.org/r/20250110235028.96824-1-ziy@nvidia.com
Fixes: 391e86971161 ("mm: selftest to verify zero-filled pages are mapped to zeropage")
Signed-off-by: Zi Yan <ziy@nvidia.com>
Cc: Alexander Zhu <alexlzhu@fb.com>
Cc: Rik van Riel <riel@surriel.com>
Cc: Shuah Khan <shuah@kernel.org>
Cc: Usama Arif <usamaarif642@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Andrew Morton [Sat, 11 Jan 2025 00:38:41 +0000 (16:38 -0800)]
Documentation/filesystems/proc.rst: fix possessive form of "process"
The possessive form of "process" is "process's". Fix up various
misdirected attempts at this. Also reflow some paragraphs.
Cc: David Hildenbrand <david@redhat.com>
Cc: Wang Yaxin <wang.yaxin@zte.com.cn>
Cc: xu xin <xu.xin16@zte.com.cn>
Cc: Yang Yang <yang.yang29@zte.com.cn>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
xu xin [Fri, 10 Jan 2025 09:40:34 +0000 (17:40 +0800)]
ksm: add ksm involvement information for each process
In /proc/<pid>/ksm_stat, add two extra ksm involvement items including
KSM_mergeable and KSM_merge_any. It helps administrators to better know
the system's KSM behavior at process level.
ksm_merge_any: yes/no
whether the process's mm is added by prctl() into the candidate list
of KSM or not, and fully enabled at process level.
ksm_mergeable: yes/no
whether any VMAs of the process's mm are currently applicable to KSM.
Purpose
=======
These two items are just to improve the observability of KSM at process
level, so that users can know if a certain process has enabled KSM.
For example, if without these two items, when we look at
/proc/<pid>/ksm_stat and no merged pages are found, we are not sure
whether it is because KSM was not enabled or because KSM did not
successfully merge any pages.
Although "mg" in /proc/<pid>/smaps indicate VM_MERGEABLE, it's opaque
and not very obvious for non professionals.
[akpm@linux-foundation.org: wording tweaks, per David and akpm]
Link: https://lkml.kernel.org/r/20250110174034304QOb8eDoqtFkp3_t8mqnqc@zte.com.cn
Signed-off-by: xu xin <xu.xin16@zte.com.cn>
Acked-by: David Hildenbrand <david@redhat.com>
Tested-by: Mario Casquero <mcasquer@redhat.com>
Cc: Wang Yaxin <wang.yaxin@zte.com.cn>
Cc: Yang Yang <yang.yang29@zte.com.cn>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Isaac J. Manjarres [Fri, 10 Jan 2025 16:59:00 +0000 (08:59 -0800)]
mm/memfd: use strncpy_from_user() to read memfd name
The existing logic uses strnlen_user() to calculate the length of the
memfd name from userspace and then copies the string into a buffer using
copy_from_user(). This is error-prone, as the string length could have
changed between the time when it was calculated and when the string was
copied. The existing logic handles this by ensuring that the last byte in
the buffer is the terminating zero.
This handling is contrived and can better be handled by using
strncpy_from_user(), which gets the length of the string and copies it in
one shot. Therefore, simplify the logic for copying the memfd name by
using strncpy_from_user().
No functional change.
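The simplified copy then looks roughly like this (a sketch;
MFD_NAME_MAX_LEN is the existing limit in mm/memfd.c, error handling
condensed):
  static char *copy_memfd_name(const char __user *uname)
  {
      char *name = kmalloc(MFD_NAME_MAX_LEN + 1, GFP_KERNEL);
      long len;

      if (!name)
          return ERR_PTR(-ENOMEM);

      /* Length check and copy in one shot. */
      len = strncpy_from_user(name, uname, MFD_NAME_MAX_LEN + 1);
      if (len < 0 || len > MFD_NAME_MAX_LEN) {
          kfree(name);
          return ERR_PTR(len < 0 ? len : -EINVAL);
      }
      return name;
  }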
Link: https://lkml.kernel.org/r/20250110165904.3437374-3-isaacmanjarres@google.com
Signed-off-by: Isaac J. Manjarres <isaacmanjarres@google.com>
Reviewed-by: Alice Ryhl <aliceryhl@google.com>
Reviewed-by: Lorenzo Stoakes <lorenzo.stoakes@oracle.com>
Cc: John Stultz <jstultz@google.com>
Cc: Kalesh Singh <kaleshsingh@google.com>
Cc: Suren Baghdasaryan <surenb@google.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Isaac J. Manjarres [Fri, 10 Jan 2025 16:58:59 +0000 (08:58 -0800)]
mm/memfd: refactor and cleanup the logic in memfd_create()
Patch series "Cleanup for memfd_create()", v4.
memfd_create() handles all of its logic in a single function. Some of the
logic in the function is also somewhat contrived (e.g. copying the memfd
name from userspace).
This series aims to clean up memfd_create() by splitting out the logic
into helper functions and simplifying the memfd name copying to make the
code easier to follow.
This has no intended functional changes.
Thank you Alice and Lorenzo for reviewing v3 of this series and for your
feedback!
This patch (of 2):
memfd_create() is a pretty busy function that could be easier to read if
some of the logic was split out into helper functions.
Therefore, split the flags sanitization, name allocation, and file
structure allocation into their own helper functions.
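A hedged sketch of the resulting shape; the helper names below are
illustrative stand-ins for the three splits described, not necessarily
the names used in the patch:

    SYSCALL_DEFINE2(memfd_create, const char __user *, uname, unsigned int, flags)
    {
            struct file *file;
            char *name;
            int fd, error;

            error = sanitize_memfd_flags(&flags);           /* hypothetical */
            if (error)
                    return error;

            name = alloc_memfd_name(uname);                 /* hypothetical */
            if (IS_ERR(name))
                    return PTR_ERR(name);

            fd = get_unused_fd_flags((flags & MFD_CLOEXEC) ? O_CLOEXEC : 0);
            if (fd < 0) {
                    error = fd;
                    goto err_name;
            }

            file = alloc_memfd_file(name, flags);           /* hypothetical */
            if (IS_ERR(file)) {
                    error = PTR_ERR(file);
                    goto err_fd;
            }

            fd_install(fd, file);
            kfree(name);
            return fd;

    err_fd:
            put_unused_fd(fd);
    err_name:
            kfree(name);
            return error;
    }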
SeongJae Park [Fri, 10 Jan 2025 18:52:32 +0000 (10:52 -0800)]
mm/damon: explain "effective quota" on kernel-doc comment
The kernel-doc comment for 'struct damos_quota' describes how the
"effective quota" is calculated, but does not explain what it is. There
has actually been feedback[1] about this. Add the explanation to the
comment.
Also, fix a trivial typo on the comment block: s/empt/empty/
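The gist of the added explanation, paraphrased here as a hedged
kernel-doc sketch (illustrative wording, not the patch's exact text):

    /**
     * (excerpt) struct damos_quota - Controls the aggressiveness of the
     * given scheme.
     *
     * The "effective quota" is the size quota that DAMOS actually
     * applies in each charging interval: the smaller of the
     * user-specified size quota and the size that the time quota
     * translates into, further adjusted toward the quota tuning goals
     * if any are set.
     */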
Link: https://lkml.kernel.org/r/20250110185232.54907-6-sj@kernel.org Signed-off-by: SeongJae Park <sj@kernel.org> Suggested-by: Honggyu Kim <honggyu.kim@sk.com> Cc: Yunjeong Mun <yunjeong.mun@sk.com> Cc: Honggyu Kim <honggyu.kim@sk.com> Cc: Jonathan Corbet <corbet@lwn.net> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
SeongJae Park [Fri, 10 Jan 2025 18:52:31 +0000 (10:52 -0800)]
Docs/admin-guide/mm/damon/start: update snapshot example
Two of the DAMON user-space tool (damo) commands that are used for
examples in the DAMON getting-started document, namely 'damo show' and
'damo report heats', are deprecated[1,2] and replaced by new commands
that provide the same functions with unified and simplified user
interfaces. Also, the example output of 'damo show' is outdated. The
'damo schemes' command is not deprecated, but users are recommended to
use 'damo start' or 'damo tune' instead.
Update the examples to use the replacements, recommendations, and
up-to-date output formats.
Link: https://lkml.kernel.org/r/20250110185232.54907-5-sj@kernel.org Signed-off-by: SeongJae Park <sj@kernel.org> Cc: Honggyu Kim <honggyu.kim@sk.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Yunjeong Mun <yunjeong.mun@sk.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
SeongJae Park [Fri, 10 Jan 2025 18:52:30 +0000 (10:52 -0800)]
Docs/admin-guide/mm/damon/usage: fix and add missing DAMOS filter sysfs files on files hierarchy
The DAMOS filter directory part of the DAMON sysfs file hierarchy in the
usage document is wrong. The 'memcg_path' file under the directory is
wrongly written as 'memcg_id'. Also the directory has 'addr_start',
'addr_end', and 'target_idx' files, but the list is missing them. Fix
the wrong name and add the missing files.
Link: https://lkml.kernel.org/r/20250110185232.54907-4-sj@kernel.org Signed-off-by: SeongJae Park <sj@kernel.org> Cc: Honggyu Kim <honggyu.kim@sk.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Yunjeong Mun <yunjeong.mun@sk.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
SeongJae Park [Fri, 10 Jan 2025 18:52:29 +0000 (10:52 -0800)]
Docs/mm/damon: add an example monitoring intervals tuning
Add a DAMON monitoring intervals tuning example that contains output from
a demonstration of the guide on a real server workload system. The
example with real-world numbers will help users better understand the
guide's instructions and what outputs they can expect and verify. That
will in turn help find room for improvement in the guide.
Link: https://lkml.kernel.org/r/20250110185232.54907-3-sj@kernel.org Signed-off-by: SeongJae Park <sj@kernel.org> Cc: Honggyu Kim <honggyu.kim@sk.com> Cc: Jonathan Corbet <corbet@lwn.net> Cc: Yunjeong Mun <yunjeong.mun@sk.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Patch series "Docs/mm/damon: add tuning guide and misc updates".
Add DAMON monitoring parameters tuning guide (patches 1 and 2), with misc
documentation fixes (patch 3), updates (patch 4) and clarifications (patch
5).
This patch (of 5):
DAMON monitoring parameters including the sampling and aggregation
intervals should be tuned for given workloads. However, this fact is not
explicitly documented, and there is no official guide to help with the
tuning. This apparently confused a number of people[1] at best, or made
people give up on DAMON without tuning. Add a guide to the design
document.
Yu Zhao [Tue, 31 Dec 2024 04:35:38 +0000 (21:35 -0700)]
mm/mglru: fix PTE-mapped large folios
Count the accessed bits from PTEs mapping the same large folio as one
access rather than multiple accesses.
The last patch changed how folios accessed through page tables are
promoted: rather than getting promoted after the accessed bit is cleared
for the first time, a folio only gets promoted thereafter. Counting the
accessed bits from the same large folio as multiple accesses can cause
that folio to be promoted prematurely, which in turn can cause
overprotection of single-use large folios.
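A hedged sketch of the counting change; the structure and the
folio_count_one_access() helper are illustrative, not the patch's code:

    static void scan_folio_ptes(struct folio *folio, pte_t *pte, int nr_ptes)
    {
            int i, young = 0;

            /* Tally young PTEs mapping this large folio... */
            for (i = 0; i < nr_ptes; i++) {
                    if (pte_young(ptep_get(pte + i)))
                            young++;
            }

            /* ...but advance the folio's access count only once. */
            if (young)
                    folio_count_one_access(folio);  /* hypothetical helper */
    }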
This patch reduced the sys time of a kernel compilation by [2, 5]%
(95% CI) on an Altra M128-30 with 3GB DRAM, 12GB zram, 16KB THPs and -j32.
Link: https://lkml.kernel.org/r/20241231043538.4075764-8-yuzhao@google.com Signed-off-by: Yu Zhao <yuzhao@google.com> Reported-by: Barry Song <v-songbaohua@oppo.com> Tested-by: Kalesh Singh <kaleshsingh@google.com> Cc: Bharata B Rao <bharata@amd.com> Cc: David Stevens <stevensd@chromium.org> Cc: Kairui Song <kasong@tencent.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Yu Zhao [Tue, 31 Dec 2024 04:35:37 +0000 (21:35 -0700)]
mm/mglru: rework workingset protection
With the aging feedback no longer considering the distribution of folios
in each generation, rework workingset protection to better distribute
folios across MAX_NR_GENS. This is achieved by reusing PG_workingset and
PG_referenced/LRU_REFS_FLAGS in a slightly different way.
For folios accessed multiple times through file descriptors, make
lru_gen_inc_refs() set additional bits of LRU_REFS_WIDTH in folio->flags
after PG_referenced, then PG_workingset after LRU_REFS_WIDTH. After all
its bits are set, i.e., LRU_REFS_FLAGS|BIT(PG_workingset), a folio is
lazily promoted into the second oldest generation in the eviction path.
And when folio_inc_gen() does that, it clears LRU_REFS_FLAGS so that
lru_gen_inc_refs() can start over. For this case, LRU_REFS_MASK is only
valid when PG_referenced is set.
For folios accessed multiple times through page tables, folio_update_gen()
from a page table walk or lru_gen_set_refs() from a rmap walk sets
PG_referenced after the accessed bit is cleared for the first time.
Thereafter, those two paths set PG_workingset and promote folios to the
youngest generation. Like folio_inc_gen(), when folio_update_gen() does
that, it also clears PG_referenced. For this case, LRU_REFS_MASK is not
used.
For both of the cases, after PG_workingset is set on a folio, it remains
until this folio is either reclaimed, or "deactivated" by
lru_gen_clear_refs(). It can be set again if lru_gen_test_recent()
returns true upon a refault.
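A hedged sketch of the file-descriptor path just described; this is a
condensed reading of the text, not the exact code (the real
lru_gen_inc_refs() steps folio->flags with a cmpxchg loop):

    static void lru_gen_inc_refs_sketch(struct folio *folio)
    {
            unsigned long refs = folio->flags & LRU_REFS_MASK;

            if (!folio_test_referenced(folio)) {
                    folio_set_referenced(folio);    /* first access */
                    return;
            }

            if (refs != LRU_REFS_MASK) {
                    /* step the LRU_REFS_WIDTH counter by one; atomic
                     * via cmpxchg in the real code */
                    folio->flags += BIT(LRU_REFS_PGOFF);
                    return;
            }

            /* all ref bits set: lazily promoted in the eviction path */
            folio_set_workingset(folio);
    }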
When adding folios to the LRU lists, lru_gen_folio_seq() distributes
them as follows:
+---------------------------------+---------------------------------+
|    Accessed thru page tables    |  Accessed thru file descriptors |
+---------------------------------+---------------------------------+
| PG_active (set while isolated)  |                                 |
+----------------+----------------+----------------+----------------+
| PG_workingset  | PG_referenced  | PG_workingset  | LRU_REFS_FLAGS |
+---------------------------------+---------------------------------+
|<--------- MIN_NR_GENS --------->|                                 |
|<-------------------------- MAX_NR_GENS -------------------------->|
After this patch, some typical client and server workloads showed
improvements under heavy memory pressure. For example, Python TPC-C,
which was used to benchmark a different approach [1] to better detect
refault distances, showed a significant decrease in total refaults:
                             Before        After        Change
  Time (seconds)             10801         10801        0%
  Executed (transactions)    41472         43663        +5%
  workingset_nodes           109070        120244       +10%
  workingset_refault_anon    5019627       7281831      +45%
  workingset_refault_file    1294678786    554855564    -57%
  workingset_refault_total   1299698413    562137395    -57%
Yu Zhao [Tue, 31 Dec 2024 04:35:36 +0000 (21:35 -0700)]
mm/mglru: rework refault detection
With anon and file min_seq being able to move independently, rework
workingset protection as well so that the comparison of refaults between
anon and file is always on an equal footing.
Specifically, make lru_gen_test_recent() return true for refaults
happening within the distance of MAX_NR_GENS. For example, if min_seq of
a type is max_seq-MIN_NR_GENS, refaults from min_seq-1, i.e.,
max_seq-MIN_NR_GENS-1, are also considered recent, since the distance
max_seq-(max_seq-MIN_NR_GENS-1), i.e., MIN_NR_GENS+1 is less than
MAX_NR_GENS.
As an intermediate step to the final optimization, this change by itself
should not have userspace-visible effects beyond performance.
Link: https://lkml.kernel.org/r/20241231043538.4075764-6-yuzhao@google.com Signed-off-by: Yu Zhao <yuzhao@google.com> Reported-by: Kairui Song <kasong@tencent.com> Closes: https://lore.kernel.org/CAOUHufahuWcKf5f1Sg3emnqX+cODuR=2TQo7T4Gr-QYLujn4RA@mail.gmail.com/ Tested-by: Kalesh Singh <kaleshsingh@google.com> Cc: Barry Song <v-songbaohua@oppo.com> Cc: Bharata B Rao <bharata@amd.com> Cc: David Stevens <stevensd@chromium.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Yu Zhao [Tue, 31 Dec 2024 04:35:35 +0000 (21:35 -0700)]
mm/mglru: rework type selection
With anon and file min_seq being able to move independently, rework type
selection so that it is based on the total refaults from all tiers of each
type. Also allow a type to be selected until that type reaches
MIN_NR_GENS, regardless of whether that type has a larger min_seq or not,
to accommodate extreme swappiness.
Since some tiers of a selected type can have higher refaults than the
first tier of the other type, use a smaller gain factor of 2:3 instead of
1:2, in order for those tiers in the selected type to be better protected.
As an intermediate step to the final optimization, this change by itself
should not have userspace-visible effects beyond performance.
Link: https://lkml.kernel.org/r/20241231043538.4075764-5-yuzhao@google.com Signed-off-by: Yu Zhao <yuzhao@google.com> Reported-by: David Stevens <stevensd@chromium.org> Tested-by: Kalesh Singh <kaleshsingh@google.com> Cc: Barry Song <v-songbaohua@oppo.com> Cc: Bharata B Rao <bharata@amd.com> Cc: Kairui Song <kasong@tencent.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Yu Zhao [Tue, 31 Dec 2024 04:35:34 +0000 (21:35 -0700)]
mm/mglru: rework aging feedback
The aging feedback is based on both the number of generations and the
distribution of folios in each generation. The number of generations is
currently the distance between max_seq and anon min_seq. This is because
anon min_seq is not allowed to move past file min_seq. The rationale for
that is that file is always evictable whereas anon is not. However, for
use cases where anon is a lot cheaper than file:
1. Anon in the second oldest generation can be a better choice than
file in the oldest generation.
2. A large amount of file in the oldest generation can skew the
distribution, making should_run_aging() return a false negative.
Allow anon and file min_seq to move independently, and use solely the
number of generations as the feedback for aging. Specifically, when both
anon and file are evictable, anon min_seq can now be greater than file
min_seq, and therefore the number of generations becomes the distance
between max_seq and min(min_seq[0],min_seq[1]). And should_run_aging()
returns true if and only if the number of generations is less than
MAX_NR_GENS.
As the first step to the final optimization, this change by itself should
not have userspace-visible effects beyond performance. The next two
patches will take advantage of this change; the last patch in this series
will better distribute folios across MAX_NR_GENS.
[yuzhao@google.com: restore behaviour for systems with swappiness == 200] Link: https://lkml.kernel.org/r/Z4S3-aJy5dj9tBTk@google.com Link: https://lkml.kernel.org/r/20241231043538.4075764-4-yuzhao@google.com Signed-off-by: Yu Zhao <yuzhao@google.com> Reported-by: David Stevens <stevensd@chromium.org> Tested-by: Kalesh Singh <kaleshsingh@google.com> Cc: Barry Song <v-songbaohua@oppo.com> Cc: Bharata B Rao <bharata@amd.com> Cc: Kairui Song <kasong@tencent.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
After this patch, deactivate_file_folio() bails out early without taking
the LRU lock.
A side effect is that a folio can be left at the head of the oldest
generation, rather than the tail. If reclaim happens at the same time, it
cannot reclaim this folio immediately. Since there is no known
correlation between truncation and reclaim, this side effect is considered
insignificant.
Link: https://lkml.kernel.org/r/20241231043538.4075764-3-yuzhao@google.com Reported-by: Bharata B Rao <bharata@amd.com> Closes: https://lore.kernel.org/CAOUHufawNerxqLm7L9Yywp3HJFiYVrYO26ePUb1jH-qxNGWzyA@mail.gmail.com/ Signed-off-by: Yu Zhao <yuzhao@google.com> Tested-by: Kalesh Singh <kaleshsingh@google.com> Cc: Barry Song <v-songbaohua@oppo.com> Cc: David Stevens <stevensd@chromium.org> Cc: Kairui Song <kasong@tencent.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Yu Zhao [Tue, 31 Dec 2024 04:35:32 +0000 (21:35 -0700)]
mm/mglru: clean up workingset
Patch series "mm/mglru: performance optimizations", v4.
This series improves performance for some previously reported test cases.
Most of the code changes gathered here have been floating on the mailing
list [1][2]. They are now properly organized and have gone through
various benchmarks on client and server devices, including Android, FIO,
memcached, multiple VMs and MongoDB.
In addition to the syzbot regressions fixed in v2 [3] and v3 [4], this
version fixes two more regressions: one reported by Oliver Sang [5] and
the other by Barry Song.
Move VM_BUG_ON_FOLIO() to cover both the default and MGLRU paths. Also
use a pair of rcu_read_lock() and rcu_read_unlock() within each path, to
improve readability.
Before SLUB initialization, various subsystems used memblock_alloc to
allocate memory. In most cases, when memory allocation fails, an
immediate panic is required. To simplify this behavior and reduce
repetitive checks, introduce `memblock_alloc_or_panic`. This function
ensures that memory allocation failures result in a panic automatically,
improving code readability and consistency across subsystems that require
this behavior.
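The shape of such a wrapper, sketched under the assumption that it
mirrors memblock_alloc()'s signature:

    #include <linux/memblock.h>

    static inline void *memblock_alloc_or_panic(phys_addr_t size,
                                                phys_addr_t align)
    {
            void *addr = memblock_alloc(size, align);

            /* Boot-time allocation failure is unrecoverable here, so
             * panic instead of making every caller check for NULL. */
            if (unlikely(!addr))
                    panic("%s: Failed to allocate %pap bytes\n",
                          __func__, &size);

            return addr;
    }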
Lorenzo Stoakes [Thu, 2 Jan 2025 12:10:52 +0000 (12:10 +0000)]
mm: make mmap_region() internal
Now that we have removed the one user of mmap_region() outside of mm, make
it internal and add it to vma.c so it can be userland tested.
This ensures that all external memory mappings are performed using the
appropriate interfaces and allows us to modify memory mapping logic as we
see fit.
Additionally expand test stubs to allow for the mmap_region() code to
compile and be userland testable.
Lorenzo Stoakes [Thu, 2 Jan 2025 12:10:51 +0000 (12:10 +0000)]
mips: vdso: prefer do_mmap() to mmap_region()
Patch series "mm: update mips to use do_mmap(), make mmap_region()
internal".
Currently the only user of mmap_region() outside of the memory management
code is the MIPS VDSO implementation.
This uses mmap_region() to map a 'delay slot emulation page' at the top of
the stack which is read-only and executable.
This mapping requires that an already-acquired mmap write lock is utilised
and that uffd and populate logic is ignored. This rules out vm_mmap(),
however do_mmap() fits the bill.
Adapt this code to use do_mmap() and then once done, make mmap_region()
internal and userland testable, and avoid any other uses of mmap_region(),
which is absolutely and strictly an internal mm function which bypasses a
great number of checks and logic.
This patch (of 2):
mmap_region() is an internal memory management implementation detail that
is not intended to be used outside of the memory management subsystem.
Map the delay slot emulation page using do_mmap() which makes use of the
already-held mmap write lock and bypasses unneeded populate and
userfaultfd logic.
This should have precisely the same behaviour as the existing logic.
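A hedged sketch of the conversion, with the argument list assumed from
the current do_mmap() prototype (the actual patch may differ in
details):

    unsigned long populate = 0;
    unsigned long base;

    /* The caller already holds the mmap write lock, which do_mmap()
     * expects; vm_mmap() would try to take it itself. */
    base = do_mmap(NULL, STACK_TOP, PAGE_SIZE,
                   PROT_READ | PROT_EXEC,
                   MAP_PRIVATE | MAP_ANONYMOUS,
                   0 /* vm_flags */, 0 /* pgoff */,
                   &populate, NULL /* uf: no userfaultfd handling */);
    if (IS_ERR_VALUE(base))
            return base;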
Kairui Song [Mon, 13 Jan 2025 17:57:32 +0000 (01:57 +0800)]
mm, swap_slots: remove slot cache for freeing path
The slot cache for the freeing path is mostly for reducing the overhead
of si->lock. As we have basically eliminated si->lock usage on the
freeing path, it can be removed.
This helps simplify the code and avoids swap entries being held in the
cache upon freeing. The delayed freeing of entries has been causing
trouble for further zswap optimizations [1] and in theory will also
cause more fragmentation and extra overhead.
Tests with a Linux kernel build showed that both performance and
fragmentation are better without the cache:
time make -j96 / 768M memcg, 4K pages, 10G ZRAM, avg of 4 test runs:
Before:
Sys time: 36047.78, Real time: 472.43
After: (-7.6% sys time, -7.3% real time)
Sys time: 33314.76, Real time: 437.67
time make -j96 / 1152M memcg, 64K mTHP, 10G ZRAM, avg of 4 test runs:
Before:
Sys time: 46859.04, Real time: 562.63
hugepages-64kB/stats/swpout: 1783392
hugepages-64kB/stats/swpout_fallback: 240875
After: (-23.3% sys time, -21.3% real time)
Sys time: 35958.87, Real time: 442.69
hugepages-64kB/stats/swpout: 1866267
hugepages-64kB/stats/swpout_fallback: 158330
Sequential SWAP should also be slightly faster; tests didn't show a
measurable difference though, and at least no regression:
Link: https://lore.kernel.org/all/CAMgjq7ACohT_uerSz8E_994ZZCv709Zor+43hdmesW_59W1BWw@mail.gmail.com/[1] Link: https://lkml.kernel.org/r/20250113175732.48099-14-ryncsn@gmail.com Signed-off-by: Kairui Song <kasong@tencent.com> Suggested-by: Chris Li <chrisl@kernel.org> Cc: Baoquan He <bhe@redhat.com> Cc: Barry Song <v-songbaohua@oppo.com> Cc: "Huang, Ying" <ying.huang@linux.alibaba.com> Cc: Hugh Dickens <hughd@google.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Kalesh Singh <kaleshsingh@google.com> Cc: Nhat Pham <nphamcs@gmail.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Yosry Ahmed <yosryahmed@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Kairui Song [Mon, 13 Jan 2025 17:57:31 +0000 (01:57 +0800)]
mm, swap: use a global swap cluster for non-rotation devices
Non-rotational devices (SSD / ZRAM) can tolerate fragmentation, so the
goal of the SWAP allocator is to avoid contention for clusters. It uses a
per-CPU cluster design, and each CPU will use a different cluster as much
as possible.
However, HDDs are very sensitive to fragmentation; contention is trivial
in comparison. Therefore, we use one global cluster instead. This
ensures that each order will be written to the same cluster as much as
possible, which helps make the I/O more continuous.
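A hedged sketch of the device-type split; the global_cluster field and
its lock are assumptions based on the description above:

    struct percpu_cluster *cluster;

    if (si->flags & SWP_SOLIDSTATE) {
            /* SSD/ZRAM: per-CPU clusters avoid contention */
            cluster = this_cpu_ptr(si->percpu_cluster);
    } else {
            /* HDD: one shared cluster keeps each order's writes
             * as contiguous as possible */
            spin_lock(&si->global_cluster_lock);    /* assumed lock */
            cluster = &si->global_cluster;          /* assumed field */
    }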
This ensures that the performance of the cluster allocator is as good as
that of the old allocator. Tests after this commit compared to those
before this series:
Tested using 'make -j32' with tinyconfig, a 1G memcg limit, and HDD swap:
Before this series:
114.44user 29.11system 39:42.90elapsed 6%CPU (0avgtext+0avgdata 157284maxresident)k
2901232inputs+0outputs (238877major+4227640minor)pagefaults
After this commit:
113.90user 23.81system 38:11.77elapsed 6%CPU (0avgtext+0avgdata 157260maxresident)k
2548728inputs+0outputs (235471major+4238110minor)pagefaults
[ryncsn@gmail.com: check kmalloc() return in setup_clusters] Link: https://lkml.kernel.org/r/CAMgjq7Au+o04ckHyT=iU-wVx9az=t0B-ZiC5E0bDqNrAtNOP-g@mail.gmail.com Link: https://lkml.kernel.org/r/20250113175732.48099-13-ryncsn@gmail.com Signed-off-by: Kairui Song <kasong@tencent.com> Suggested-by: Chris Li <chrisl@kernel.org> Cc: Baoquan He <bhe@redhat.com> Cc: Barry Song <v-songbaohua@oppo.com> Cc: "Huang, Ying" <ying.huang@linux.alibaba.com> Cc: Hugh Dickens <hughd@google.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Kalesh Singh <kaleshsingh@google.com> Cc: Nhat Pham <nphamcs@gmail.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Yosry Ahmed <yosryahmed@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Kairui Song [Mon, 13 Jan 2025 17:57:30 +0000 (01:57 +0800)]
mm, swap: introduce a helper for retrieving cluster from offset
Retrieving the cluster info from an offset is a common operation;
introduce a helper for it.
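A hedged sketch of such a helper; the name and the
SWAPFILE_CLUSTER-based layout are assumptions:

    static inline struct swap_cluster_info *
    offset_to_cluster(struct swap_info_struct *si, unsigned long offset)
    {
            return &si->cluster_info[offset / SWAPFILE_CLUSTER];
    }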
Link: https://lkml.kernel.org/r/20250113175732.48099-12-ryncsn@gmail.com Signed-off-by: Kairui Song <kasong@tencent.com> Suggested-by: Chris Li <chrisl@kernel.org> Cc: Baoquan He <bhe@redhat.com> Cc: Barry Song <v-songbaohua@oppo.com> Cc: "Huang, Ying" <ying.huang@linux.alibaba.com> Cc: Hugh Dickens <hughd@google.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Kalesh Singh <kaleshsingh@google.com> Cc: Nhat Pham <nphamcs@gmail.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Yosry Ahmed <yosryahmed@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Kairui Song [Mon, 13 Jan 2025 17:57:29 +0000 (01:57 +0800)]
mm, swap: simplify percpu cluster updating
Instead of using a return argument, we can simply store the next
cluster offset in the fixed percpu location, which reduces the stack
usage and simplifies the function:
Object size:
./scripts/bloat-o-meter mm/swapfile.o mm/swapfile.o.new
add/remove: 0/0 grow/shrink: 0/2 up/down: 0/-271 (-271)
Function old new delta
get_swap_pages 2847 2733 -114
alloc_swap_scan_cluster 894 737 -157
Total: Before=30833, After=30562, chg -0.88%
Link: https://lkml.kernel.org/r/20250113175732.48099-11-ryncsn@gmail.com Signed-off-by: Kairui Song <kasong@tencent.com> Cc: Baoquan He <bhe@redhat.com> Cc: Barry Song <v-songbaohua@oppo.com> Cc: Chis Li <chrisl@kernel.org> Cc: "Huang, Ying" <ying.huang@linux.alibaba.com> Cc: Hugh Dickens <hughd@google.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Kalesh Singh <kaleshsingh@google.com> Cc: Nhat Pham <nphamcs@gmail.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Yosry Ahmed <yosryahmed@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Kairui Song [Mon, 13 Jan 2025 17:57:28 +0000 (01:57 +0800)]
mm, swap: reduce contention on device lock
Currently, swap locking is mainly composed of two locks: the cluster lock
(ci->lock) and the device lock (si->lock).
The cluster lock is much more fine-grained, so it is best to use ci->lock
instead of si->lock as much as possible.
We have cleaned up other hard dependencies on si->lock. Following the new
cluster allocator design, most operations don't need to touch si->lock at
all. In practice, we only need to take si->lock when moving clusters
between lists.
To achieve this, this commit reworks the locking pattern of all si->lock
and ci->lock users, eliminates all usage of ci->lock inside si->lock, and
introduces a new design to avoid touching si->lock unless needed.
For minimal contention and easier understanding of the system, two ideas
are introduced with the corresponding helpers: isolation and relocation
(a sketch of the isolation helper follows this list).
- Clusters will be `isolated` from the list when iterating the list
to search for an allocatable cluster.
This ensures other CPUs won't walk into the same cluster easily,
and it releases si->lock after acquiring ci->lock, providing the
only place that handles the inversion of two locks, and avoids
contention.
Iterating the cluster list almost always moves the cluster
(free -> nonfull, nonfull -> frag, frag -> frag tail), but it
doesn't know where the cluster should be moved to until scanning
is done. So keeping the cluster off-list is a good option with
low overhead.
The off-list time window of a cluster is also minimal. In the worst
case, one CPU will return the cluster after scanning the 512 entries
on it, a case where we previously busy-waited with a spin lock.
This is done with the new helper `isolate_lock_cluster`.
- Clusters will be `relocated` after allocation or freeing, according
to their usage count and status.
Allocations no longer hold si->lock now, and may drop ci->lock for
reclaim, so the cluster could be moved to any location while no lock
is held. Besides, isolation clears all flags when it takes the
cluster off the list (the flags must be in sync with the list status,
so cluster users don't need to touch si->lock for checking its list
status). So the cluster has to be relocated to the right list
according to its usage after allocation or freeing.
Relocation is optional; if the cluster flags indicate it's already
on the right list, it will skip touching the list or si->lock.
This is done with `relocate_cluster` after allocation or with
`[partial_]free_cluster` after freeing.
This handles the usage of all kinds of clusters in a clean way.
Scanning and allocation by iterating the cluster list is handled by
"isolate - <scan / allocate> - relocate".
Scanning and allocation of per-CPU clusters will only involve
"<scan / allocate> - relocate", as it knows which cluster to lock
and use.
Freeing will only involve "relocate".
Each CPU will keep using its per-CPU cluster until the 512 entries
are all consumed. Freeing also has to free 512 entries to trigger
cluster movement in the best case, so si->lock is rarely touched.
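A hedged sketch of the isolation helper described above, condensed from
the text rather than taken from the patch:

    static struct swap_cluster_info *
    isolate_lock_cluster(struct swap_info_struct *si, struct list_head *list)
    {
            struct swap_cluster_info *ci;

            spin_lock(&si->lock);
            ci = list_first_entry_or_null(list, struct swap_cluster_info, list);
            if (ci) {
                    /* Off-list: other CPUs won't walk into this cluster */
                    list_del_init(&ci->list);
                    /* The one place that nests ci->lock inside si->lock */
                    spin_lock(&ci->lock);
            }
            spin_unlock(&si->lock);

            return ci;
    }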
Testing with a defconfig Linux kernel build showed a huge improvement:
time make -j96 / 768M memcg, 4K pages, 10G ZRAM, on Intel 8255C:
Before:
Sys time: 73578.30, Real time: 864.05
After: (-50.7% sys time, -44.8% real time)
Sys time: 36227.49, Real time: 476.66
time make -j96 / 1152M memcg, 64K mTHP, 10G ZRAM, on Intel 8255C:
(avg of 4 test runs)
Before:
Sys time: 74044.85, Real time: 846.51
hugepages-64kB/stats/swpout: 1735216
hugepages-64kB/stats/swpout_fallback: 430333
After: (-40.4% sys time, -37.1% real time)
Sys time: 44160.56, Real time: 532.07
hugepages-64kB/stats/swpout: 1786288
hugepages-64kB/stats/swpout_fallback: 243384
time make -j32 / 512M memcg, 4K pages, 5G ZRAM, on AMD 7K62:
Before:
Sys time: 8098.21, Real time: 401.3
After: (-22.6% sys time, -12.8% real time)
Sys time: 6265.02, Real time: 349.83
The allocation success rate also slightly improved as we sanitized
cluster usage with the newly defined helpers; previously, dropping
si->lock or ci->lock during a scan could shuffle the cluster order.
Link: https://lkml.kernel.org/r/20250113175732.48099-10-ryncsn@gmail.com Signed-off-by: Kairui Song <kasong@tencent.com> Suggested-by: Chris Li <chrisl@kernel.org> Cc: Baoquan He <bhe@redhat.com> Cc: Barry Song <v-songbaohua@oppo.com> Cc: "Huang, Ying" <ying.huang@linux.alibaba.com> Cc: Hugh Dickens <hughd@google.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Kalesh Singh <kaleshsingh@google.com> Cc: Nhat Pham <nphamcs@gmail.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Yosry Ahmed <yosryahmed@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Kairui Song [Mon, 13 Jan 2025 17:57:27 +0000 (01:57 +0800)]
mm, swap: use an enum to define all cluster flags and wrap flags changes
Currently, we are only using flags to indicate which list the cluster is
on. Using one bit for each list type might be a waste; as the number of
list types grows, we will consume too many bits. Additionally, the
current mixed usage of '&' and '==' is a bit confusing.
Make it clean by using an enum to define all possible cluster statuses.
Only an off-list cluster will have the NONE (0) flag. And use a wrapper
to annotate and sanitize all flag settings and list movements.
Link: https://lkml.kernel.org/r/20250113175732.48099-9-ryncsn@gmail.com Signed-off-by: Kairui Song <kasong@tencent.com> Suggested-by: Chris Li <chrisl@kernel.org> Cc: Baoquan He <bhe@redhat.com> Cc: Barry Song <v-songbaohua@oppo.com> Cc: "Huang, Ying" <ying.huang@linux.alibaba.com> Cc: Hugh Dickens <hughd@google.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Kalesh Singh <kaleshsingh@google.com> Cc: Nhat Pham <nphamcs@gmail.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Yosry Ahmed <yosryahmed@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Kairui Song [Mon, 13 Jan 2025 17:57:26 +0000 (01:57 +0800)]
mm, swap: hold a reference during scan and cleanup flag usage
The flag SWP_SCANNING was used as an indicator of whether a device is
being scanned for allocation, and prevents swapoff. Combined with
SWP_WRITEOK, they work as a set of barriers for a clean swapoff:
1. Swapoff clears SWP_WRITEOK, allocation requests will see
~SWP_WRITEOK and abort as it's serialized by si->lock.
2. Swapoff unuses all allocated entries.
3. Swapoff waits for SWP_SCANNING flag to be cleared, so ongoing
allocations will stop, preventing UAF.
4. Now swapoff can free everything safely.
This makes the allocation path have a hard dependency on si->lock:
allocation always has to acquire si->lock first, for setting SWP_SCANNING
and checking SWP_WRITEOK.
This commit removes this flag, and just uses the existing per-CPU refcount
instead to prevent UAF in step 3, which serves well for such usage without
dependency on si->lock, and scales very well too. Just hold a reference
during the whole scan and allocation process. Swapoff will kill and wait
for the counter.
To prevent any allocation from happening after step 1, so that the unuse
in step 2 can ensure all slots are free, swapoff acquires the ci->lock
of each cluster one by one to ensure all allocations see ~SWP_WRITEOK and
abort.
This way, these dependencies on si->lock are gone. It is worth noting
that we can't kill the refcount as the first step of swapoff, as the
unuse process has to acquire the refcount.
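A hedged sketch of the scan-side pattern, assuming the existing
percpu_ref in si->users; the surrounding structure is illustrative:

    /* Pin the device for the whole scan + allocation instead of
     * setting SWP_SCANNING under si->lock. */
    if (!percpu_ref_tryget_live(&si->users))
            return false;           /* swapoff already killed the ref */

    /* ... scan clusters and allocate entries ... */

    percpu_ref_put(&si->users);     /* swapoff waits for this to drain */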
Link: https://lkml.kernel.org/r/20250113175732.48099-8-ryncsn@gmail.com Signed-off-by: Kairui Song <kasong@tencent.com> Cc: Baoquan He <bhe@redhat.com> Cc: Barry Song <v-songbaohua@oppo.com> Cc: Chis Li <chrisl@kernel.org> Cc: "Huang, Ying" <ying.huang@linux.alibaba.com> Cc: Hugh Dickens <hughd@google.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Kalesh Singh <kaleshsingh@google.com> Cc: Nhat Pham <nphamcs@gmail.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Yosry Ahmed <yosryahmed@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Kairui Song [Mon, 13 Jan 2025 17:57:25 +0000 (01:57 +0800)]
mm, swap: clean up plist removal and adding
When the swap device is full (inuse_pages == pages), it should be removed
from the allocation available plist. If any slot is freed, the swap
device should be added back to the plist. Additionally, during swapon or
swapoff, the swap device is forcefully added or removed.
Currently, the condition (inuse_pages == pages) is checked after every
counter update, and the device is then removed or added accordingly.
This is serialized by si->lock.
This commit decouples it from the protection of si->lock and reworks
plist removal and adding, making it possible to get rid of the hard
dependency on si->lock in the allocation path in later commits.
To achieve this, simply using another lock is not an optimal approach, as
the overhead is observable for a hot counter, and may cause complex
locking issues. Thus, this commit manages to make it a lock-free atomic
operation, by embedding the plist state into the second highest bit of the
atomic counter.
Simply making the counter atomic will not work; if the update and the
plist status check are not performed atomically, we may miss an addition
or removal. With the embedded info we can update the counter and check
the plist status with single atomic operations, and avoid any extra
overheads:
If the counter is full (inuse_pages == pages) and the off-list bit is
unset, we attempt to remove it from the plist. If the counter is not full
(inuse_pages != pages) and the off-list bit is set, we attempt to add it
to the plist. Removing, adding and bit update is serialized with a lock,
which is a cold path. Ordinary counter updates will be lock-free.
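A hedged sketch of the embedded-bit scheme, assuming inuse_pages has
been made an atomic_long_t; the bit position and names are illustrative:

    #define SWAP_USAGE_OFFLIST_BIT  (1L << (BITS_PER_LONG - 2))

    static void swap_usage_add(struct swap_info_struct *si, long nr_entries)
    {
            long val = atomic_long_add_return_relaxed(nr_entries,
                                                      &si->inuse_pages);

            /* Just became full while the off-list bit is clear: take
             * the locked cold path that removes the device from the
             * plist and sets the bit. */
            if (unlikely(val == si->pages))
                    swap_del_from_avail_list(si);   /* hypothetical */
    }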
Link: https://lkml.kernel.org/r/20250113175732.48099-7-ryncsn@gmail.com Signed-off-by: Kairui Song <kasong@tencent.com> Cc: Baoquan He <bhe@redhat.com> Cc: Barry Song <v-songbaohua@oppo.com> Cc: Chis Li <chrisl@kernel.org> Cc: "Huang, Ying" <ying.huang@linux.alibaba.com> Cc: Hugh Dickens <hughd@google.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Kalesh Singh <kaleshsingh@google.com> Cc: Nhat Pham <nphamcs@gmail.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Yosry Ahmed <yosryahmed@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Kairui Song [Mon, 13 Jan 2025 17:57:24 +0000 (01:57 +0800)]
mm, swap: clean up device availability check
Remove highest_bit and lowest_bit. After the HDD allocation path has been
removed, the only purpose of these two fields is to determine whether the
device is full or not, which can instead be determined by checking the
inuse_pages.
Link: https://lkml.kernel.org/r/20250113175732.48099-6-ryncsn@gmail.com Signed-off-by: Kairui Song <kasong@tencent.com> Reviewed-by: Baoquan He <bhe@redhat.com> Cc: Barry Song <v-songbaohua@oppo.com> Cc: Chis Li <chrisl@kernel.org> Cc: "Huang, Ying" <ying.huang@linux.alibaba.com> Cc: Hugh Dickens <hughd@google.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Kalesh Singh <kaleshsingh@google.com> Cc: Nhat Pham <nphamcs@gmail.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Yosry Ahmed <yosryahmed@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Kairui Song [Mon, 13 Jan 2025 17:57:23 +0000 (01:57 +0800)]
mm, swap: use cluster lock for HDD
Cluster lock (ci->lock) was introduced to reduce contention for certain
operations. Using the cluster lock for HDD is not helpful as HDDs have
poor performance, so locking isn't the bottleneck. But having a
different set of locks for HDD / non-HDD prevents further rework of the
device lock (si->lock).
This commit just changes all lock_cluster_or_swap_info to lock_cluster,
which is a safe and straightforward conversion since cluster info is
always allocated now, and also removes all cluster_info-related checks.
Link: https://lkml.kernel.org/r/20250113175732.48099-5-ryncsn@gmail.com Signed-off-by: Kairui Song <kasong@tencent.com> Suggested-by: Chris Li <chrisl@kernel.org> Reviewed-by: Baoquan He <bhe@redhat.com> Cc: Barry Song <v-songbaohua@oppo.com> Cc: "Huang, Ying" <ying.huang@linux.alibaba.com> Cc: Hugh Dickens <hughd@google.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Kalesh Singh <kaleshsingh@google.com> Cc: Nhat Pham <nphamcs@gmail.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Yosry Ahmed <yosryahmed@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Kairui Song [Mon, 13 Jan 2025 17:57:22 +0000 (01:57 +0800)]
mm, swap: remove old allocation path for HDD
We are currently using different swap allocation algorithms for HDD and
non-HDD. This leads to the existence of different sets of locks, and the
code path is heavily bloated, causing difficulties for further
optimization and maintenance.
This commit removes all HDD swap allocation and related dead code, and
uses the cluster allocation algorithm instead.
The performance may drop temporarily, but this should be negligible: the
main advantage of the legacy HDD allocation algorithm is that it tends to
use contiguous slots, but the swap device gets fragmented quickly anyway,
and attempts to use contiguous slots will fail easily.
This commit also enables mTHP swap on HDD, which is expected to be
beneficial, and following commits will adapt and optimize the cluster
allocator for HDD.
Link: https://lkml.kernel.org/r/20250113175732.48099-4-ryncsn@gmail.com Signed-off-by: Kairui Song <kasong@tencent.com> Suggested-by: Chris Li <chrisl@kernel.org> Suggested-by: "Huang, Ying" <ying.huang@linux.alibaba.com> Reviewed-by: Baoquan He <bhe@redhat.com> Cc: Barry Song <v-songbaohua@oppo.com> Cc: Hugh Dickens <hughd@google.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Kalesh Singh <kaleshsingh@google.com> Cc: Nhat Pham <nphamcs@gmail.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Yosry Ahmed <yosryahmed@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Kairui Song [Mon, 13 Jan 2025 17:57:21 +0000 (01:57 +0800)]
mm, swap: fold swap_info_get_cont in the only caller
The name of the function is confusing, and the code is much easier to
follow after folding. Also rename the confusingly named "p" to the more
meaningful "si".
Link: https://lkml.kernel.org/r/20250113175732.48099-3-ryncsn@gmail.com Signed-off-by: Kairui Song <kasong@tencent.com> Reviewed-by: Baoquan He <bhe@redhat.com> Cc: Barry Song <v-songbaohua@oppo.com> Cc: Chis Li <chrisl@kernel.org> Cc: "Huang, Ying" <ying.huang@linux.alibaba.com> Cc: Hugh Dickens <hughd@google.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Kalesh Singh <kaleshsingh@google.com> Cc: Nhat Pham <nphamcs@gmail.com> Cc: Ryan Roberts <ryan.roberts@arm.com> Cc: Yosry Ahmed <yosryahmed@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Kairui Song [Mon, 13 Jan 2025 17:57:20 +0000 (01:57 +0800)]
mm, swap: minor clean up for swap entry allocation
Patch series "mm, swap: rework of swap allocator locks", v4.
This series greatly improves swap performance by reworking the locking
design and simplifying a lot of code paths. Tests showed up to a 400%
vm-scalability improvement with pmem as SWAP, and up to a 37% reduction
in kernel compile real time with ZRAM as SWAP (up to a 60% improvement
in system time).
This is part of the new swap allocator discussed during the "Swap
Abstraction" discussion at LSF/MM 2024, and "mTHP and swap allocator"
discussion at LPC 2024.
This is a follow up of previous swap cluster allocator series:
https://lore.kernel.org/linux-mm/20240730-swap-allocator-v5-0-cb9c148b9297@kernel.org/
It also enables further optimizations, which will come later.
The previous series introduced a fully cluster-based allocator; this
series completely gets rid of the old allocator and makes the new
allocator avoid touching si->lock unless needed. This brings a huge
performance gain and gets rid of the slot cache for the freeing path.
Currently, swap locking is mainly composed of two locks: the cluster
lock (ci->lock) and the device lock (si->lock). The device lock is
widely used to protect many things, causing it to be the main bottleneck
for SWAP.
The cluster lock is much more fine-grained, so it is best to use
ci->lock instead of si->lock as much as possible.
`perf lock' indicates this issue clearly. Doing a Linux kernel build
using tmpfs and ZRAM with limited memory (make -j64 with a 1G memcg and
4K pages), the result of "perf lock contention -ab sleep 3" shows:
contended total wait max wait avg wait type caller
34948 53.63 s 7.11 ms 1.53 ms spinlock free_swap_and_cache_nr+0x350
16569 40.05 s 6.45 ms 2.42 ms spinlock get_swap_pages+0x231
11191 28.41 s 7.03 ms 2.54 ms spinlock swapcache_free_entries+0x59
4147 22.78 s 122.66 ms 5.49 ms spinlock page_vma_mapped_walk+0x6f3
4595 7.17 s 6.79 ms 1.56 ms spinlock swapcache_free_entries+0x59
406027 2.74 s 2.59 ms 6.74 us spinlock list_lru_add+0x39
...snip...
The top 5 callers are all users of si->lock; the total wait time sums to
several minutes in the 3-second time window.
Following the new allocator design, many operations don't need to touch
si->lock at all. We only need to take si->lock when doing operations
across multiple clusters (changing the cluster list). So ideally the
allocator should always take ci->lock first, then take si->lock only if
needed. But due to historical reasons, ci->lock is used inside si->lock
critical sections, causing lock inversion if we simply try to acquire
si->lock after acquiring ci->lock.
This series audits all si->lock usage, cleans up legacy code, and
eliminates usage of si->lock as much as possible by introducing new
designs based on the new cluster allocator.
The old HDD allocation code is removed, the cluster allocator is adapted
with small changes for HDD usage, and testing looks OK.
This also removes the slot cache for the freeing path. The performance
is even better without it now, and this enables other cleanups and
optimizations as discussed before:
After this series, lock contention on si->lock is nearly unobservable
with `perf lock` with the same test above:
contended total wait max wait avg wait type caller
... snip ...
52 127.12 us 3.82 us 2.44 us spinlock move_cluster+0x2c
56 120.77 us 12.41 us 2.16 us spinlock move_cluster+0x2c
... snip ...
10 21.96 us 2.78 us 2.20 us spinlock isolate_lock_cluster+0x20
... snip ...
9 19.27 us 2.70 us 2.14 us spinlock move_cluster+0x2c
... snip ...
5 11.07 us 2.70 us 2.21 us spinlock isolate_lock_cluster+0x20
`move_cluster' and `isolate_lock_cluster' (two newly introduced helpers)
are basically the only users of si->lock now; the performance gain is
huge, and LOC is reduced.
Tests Results:
vm-scalability
==============
Running `usemem --init-time -O -y -x -R -31 1G` from vm-scalability in a
12G memory cgroup using simulated pmem as SWAP backend (32G pmem, 32
CPUs).
Using 4K folios by default; 64K mTHP and sequential access (!-R) results
are also provided. 6 test runs for each case, Total Throughput:
Test Before (KB/s) (stdev) After (KB/s) (stdev) Delta
---------------------------------------------------------------------------
Random (4K): 69937.11 (16449.77) 369816.17 (24476.68) +428.78%
Random (64k): 123442.83 (13207.51) 216379.00 (25024.83) +75.28%
Sequential (4K): 6313909.83 (148856.12) 6419860.66 (183563.38) +1.7%
Sequential access will cause lower stress for the allocator so the gain is
limited, but with random access (which is much closer to real workloads)
the performance gain is huge.
Build kernel with defconfig on tmpfs with ZRAM
==============================================
The results below show a test matrix using different memory cgroup
limits and job numbers, scaled up progressively for an intuitive result.
Done on a 48c96t system.
6 test runs for each case. It can be seen clearly that as the concurrent
job number goes higher, the performance gain is higher, but even -j6
shows a slight improvement.
Fragmentation is reduced too:
With: make -j96 / 1152M memcg, 64K mTHP:
(avg of 4 test runs)
Before:
hugepages-64kB/stats/swpout: 1696184
hugepages-64kB/stats/swpout_fallback: 414318
After: (-63.2% mTHP swapout failure)
hugepages-64kB/stats/swpout: 1866267
hugepages-64kB/stats/swpout_fallback: 158330
There is an up to 65.1% improvement in sys time for the kernel build
test, and a lower fragmentation rate.
Build kernel with tinyconfig on tmpfs with HDD as swap:
=======================================================
This test is similar to the above, but the HDD test is very noisy and
slow, and the deviation is huge, so just use tinyconfig instead and take
the median result of 3 test runs, which looks OK:
Before this series:
114.44user 29.11system 39:42.90elapsed 6%CPU
2901232inputs+0outputs (238877major+4227640minor)pagefaults
After this commit:
113.90user 23.81system 38:11.77elapsed 6%CPU
2548728inputs+0outputs (235471major+4238110minor)pagefaults
Single thread SWAP:
===================
Sequential SWAP should also be slightly faster as we removed a lot of
unnecessary parts. Tested using a micro benchmark swapping out/in 4G of
zero-filled memory using ZRAM, 10 test runs:
Suren Baghdasaryan [Thu, 26 Dec 2024 21:16:38 +0000 (13:16 -0800)]
alloc_tag: avoid current->alloc_tag manipulations when profiling is disabled
When memory allocation profiling is disabled there is no need to update
current->alloc_tag and these manipulations add unnecessary overhead. Fix
the overhead by skipping these extra updates.
I ran comprehensive testing on Pixel 6 on Big, Medium and Little cores:
           Overhead before fixes        Overhead after fixes
           slab alloc   page alloc      slab alloc   page alloc
Big        6.21%        5.32%           3.31%        4.93%
Medium     4.51%        5.05%           3.79%        4.39%
Little     7.62%        1.82%           6.68%        1.02%
This is an allocation microbenchmark doing allocations in a tight loop.
It is not a very realistic scenario, and is useful only for performance
comparisons.
Link: https://lkml.kernel.org/r/20241226211639.1357704-1-surenb@google.com Fixes: b951aaff5035 ("mm: enable page allocation tagging") Signed-off-by: Suren Baghdasaryan <surenb@google.com> Cc: David Wang <00107082@163.com> Cc: Kent Overstreet <kent.overstreet@linux.dev> Cc: Yu Zhao <yuzhao@google.com> Cc: Zhenhua Huang <quic_zhenhuah@quicinc.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Chen Ridong [Tue, 24 Dec 2024 02:52:38 +0000 (02:52 +0000)]
memcg: fix soft lockup in the OOM process
A soft lockup issue was found in production, with about 56,000 tasks in
the OOM cgroup; the kernel was traversing them when the soft lockup was
triggered.
Because thousands of processes are in the OOM cgroup, it takes a long
time to traverse all of them, which leads to the soft lockup in the OOM
process.
To fix this issue, call 'cond_resched' in the 'mem_cgroup_scan_tasks'
function every 1000 iterations. For global OOM, call
'touch_softlockup_watchdog' every 1000 iterations to avoid this issue.
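A hedged sketch of the fix's shape inside the task iteration; the exact
stride and placement may differ from the patch, and the global_oom flag
is an assumption:

    int i = 0;

    while (!ret && (task = css_task_iter_next(&it))) {
            /* Avoid a soft lockup when the cgroup holds tens of
             * thousands of tasks. */
            if (++i % 1000 == 0) {
                    if (global_oom)         /* assumed: cannot sleep here */
                            touch_softlockup_watchdog();
                    else
                            cond_resched();
            }
            ret = fn(task, arg);
    }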
Link: https://lkml.kernel.org/r/20241224025238.3768787-1-chenridong@huaweicloud.com Fixes: 9cbb78bb3143 ("mm, memcg: introduce own oom handler to iterate only over its own threads") Signed-off-by: Chen Ridong <chenridong@huawei.com> Acked-by: Michal Hocko <mhocko@suse.com> Cc: Roman Gushchin <roman.gushchin@linux.dev> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Shakeel Butt <shakeelb@google.com> Cc: Muchun Song <songmuchun@bytedance.com> Cc: Michal Koutný <mkoutny@suse.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>