sh_def@163.com [Thu, 31 Dec 2020 22:04:09 +0000 (22:04 +0000)]
mm/page_reporting: use list_entry_is_head() in page_reporting_cycle()
Replace '&next->lru != list' with list_entry_is_head(). No functional
change.
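For reference, a minimal before/after sketch of the condition being changed (surrounding code elided; list_entry_is_head(pos, head, member) simply compares the entry's list member against the list head):

    /* before: open-coded check that 'next' is not the list head */
    if (&next->lru != list)
        /* ... */;

    /* after: the same check via the helper, no functional change */
    if (!list_entry_is_head(next, list, lru))
        /* ... */;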
Link: https://lkml.kernel.org/r/20201222182735.GA1257912@ubuntu-A520I-AC Signed-off-by: sh <sh_def@163.com> Reviewed-by: Andrew Morton <akpm@linux-foundation.org> Cc: Alexander Duyck <alexander.h.duyck@linux.intel.com> Reviewed-by: David Hildenbrand <david@redhat.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
jianhong chen [Thu, 31 Dec 2020 22:04:09 +0000 (22:04 +0000)]
mm/mmap.c: fix the adjusted length error
In Linux 4.4, a 32-bit process may fail to allocate a 64M hugepage mapping via shmat() even though a 64M gap exists in the process address space.
The problem is caused by the adjusted search length introduced by db4fbfb9523c935 ("mm: vm_unmapped_area() lookup function"). To account for the worst-case alignment overhead, unmapped_area() and unmapped_area_topdown() enlarge the search length before searching for an available VMA gap. This is an estimated length, the sum of the desired length and the longest alignment offset, which can cause a misjudgement if the system has very little virtual memory left. For example, if the longest memory gap available is 64M, we cannot get it from the system by allocating 64M of hugepage memory via shmat(), because the search requires a longer length: the sum of the desired length (64M) and the longest alignment offset.
To fix this, calculate the alignment offset of gap_start or gap_end to get the desired gap_start or gap_end value before searching for an available gap. That way, the search length does not need to be adjusted.
Problem reproduction procedure:
1. allocate a lot of virtual memory segments via shmat and malloc
2. release one of the biggest memory segment via shmdt
3. attach the biggest memory segment via shmat
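A hedged sketch of what step 3 might look like (this is not the reporter's actual program; it only illustrates attaching a 64M SysV hugepage segment, and SHM_HUGETLB availability depends on the libc headers):

    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void)
    {
        size_t size = 64UL << 20;   /* 64M */
        int id;
        void *p;

        /* hugepage-backed SysV segment, as in step 3 */
        id = shmget(IPC_PRIVATE, size, IPC_CREAT | SHM_HUGETLB | 0600);
        if (id < 0) {
            perror("shmget");
            return 1;
        }

        /* with the old over-estimated search length this attach can fail
         * even though a 64M gap is still available */
        p = shmat(id, NULL, 0);
        if (p == (void *)-1)
            perror("shmat");

        shmctl(id, IPC_RMID, NULL);
        return 0;
    }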
Adrian Huang [Thu, 31 Dec 2020 22:04:08 +0000 (22:04 +0000)]
mm/mmap.c: remove unnecessary local variable
The local variable 'retval' is assigned only once in __do_sys_brk(), and the function returns its value right after the assignment. Remove the unnecessary assignment and local variable declaration.
Alex Shi [Thu, 31 Dec 2020 22:04:08 +0000 (22:04 +0000)]
mm/memcg: remove rcu locking for lock_page_lruvec function series
lock_page_lruvec() and its variants used rcu_read_lock() with the
intention of safeguarding against the mem_cgroup being destroyed
concurrently; but so long as they are called under the specified
conditions (as they are), there is no way for the page's mem_cgroup to be
destroyed. Delete the unnecessary rcu_read_lock() and _unlock().
Hugh Dickins polished the commit log. Thanks a lot!
Link: https://lkml.kernel.org/r/1608614453-10739-2-git-send-email-alex.shi@linux.alibaba.com Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com> Acked-by: Hugh Dickins <hughd@google.com> Cc: Hugh Dickins <hughd@google.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@kernel.org> Cc: Vladimir Davydov <vdavydov.dev@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Alex Shi [Thu, 31 Dec 2020 22:04:07 +0000 (22:04 +0000)]
mm/memcg: revise the using condition of lock_page_lruvec function series
lock_page_lruvec() and its variants are safe to use under the same
conditions as commit_charge(): add lock_page_memcg() to the comment.
Polished with Hugh Dickins' suggestions, thanks!
Link: https://lkml.kernel.org/r/1608614453-10739-1-git-send-email-alex.shi@linux.alibaba.com Signed-off-by: Alex Shi <alex.shi@linux.alibaba.com> Acked-by: Hugh Dickins <hughd@google.com> Cc: Hugh Dickins <hughd@google.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@kernel.org> Cc: Vladimir Davydov <vdavydov.dev@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Muchun Song [Thu, 31 Dec 2020 22:04:07 +0000 (22:04 +0000)]
mm: memcontrol: make the slab calculation consistent
Although the ratio for slab is one, we should still read the ratio from the related memory_stats entry instead of hard-coding it. Also, the local variable 'size' already holds the slab_unreclaimable value, so there is no need to read it again.
It requires a series of VM_BUG_ONs or comments to ensure these two items
are actually adjacent and in the right order. So it would probably be
easier to implement this using a wrapper that has a big switch() for unit
conversion.
Muchun Song [Thu, 31 Dec 2020 22:04:06 +0000 (22:04 +0000)]
mm: memcontrol: convert NR_FILE_PMDMAPPED account to pages
Currently we use struct per_cpu_nodestat to cache the vmstat counters, which leads to inaccurate statistics, especially for the THP vmstat counters. On systems with hundreds of processors the error can amount to GBs of memory. For example, on a 96-CPU system the threshold is the maximum value of 125, and the per-cpu counters can cache 23.4375 GB in total.
A THP page is already a form of batched addition (it adds 512 pages' worth of memory in one go), so skipping the batching seems sensible. Every THP stats update then overflows the per-cpu counter, resorting to atomic global updates, but this makes the statistics more accurate for the THP vmstat counters.
So we convert the NR_FILE_PMDMAPPED accounting to pages. This patch is consistent with 8f182270dfec ("mm/swap.c: flush lru pvecs on compound page arrival"). Doing this also makes the units of the vmstat counters more uniform: in the end, the vmstat counters are in pages, kB or bytes. The B/KB suffix tells us that the unit is bytes or kB; counters without a suffix are in pages.
Link: https://lkml.kernel.org/r/20201228164110.2838-7-songmuchun@bytedance.com Signed-off-by: Muchun Song <songmuchun@bytedance.com> Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: Feng Tang <feng.tang@intel.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Hugh Dickins <hughd@google.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Michal Hocko <mhocko@suse.com> Cc: NeilBrown <neilb@suse.de> Cc: Pankaj Gupta <pankaj.gupta@cloud.ionos.com> Cc: Rafael. J. Wysocki <rafael@kernel.org> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Roman Gushchin <guro@fb.com> Cc: Sami Tolvanen <samitolvanen@google.com> Cc: Shakeel Butt <shakeelb@google.com> Cc: Vladimir Davydov <vdavydov.dev@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Muchun Song [Thu, 31 Dec 2020 22:04:06 +0000 (22:04 +0000)]
mm: memcontrol: convert NR_SHMEM_PMDMAPPED account to pages
Currently we use struct per_cpu_nodestat to cache the vmstat counters, which leads to inaccurate statistics, especially for the THP vmstat counters. On systems with hundreds of processors the error can amount to GBs of memory. For example, on a 96-CPU system the threshold is the maximum value of 125, and the per-cpu counters can cache 23.4375 GB in total.
A THP page is already a form of batched addition (it adds 512 pages' worth of memory in one go), so skipping the batching seems sensible. Every THP stats update then overflows the per-cpu counter, resorting to atomic global updates, but this makes the statistics more accurate for the THP vmstat counters.
So we convert the NR_SHMEM_PMDMAPPED accounting to pages. This patch is consistent with 8f182270dfec ("mm/swap.c: flush lru pvecs on compound page arrival"). Doing this also makes the units of the vmstat counters more uniform: in the end, the vmstat counters are in pages, kB or bytes. The B/KB suffix tells us that the unit is bytes or kB; counters without a suffix are in pages.
Link: https://lkml.kernel.org/r/20201228164110.2838-6-songmuchun@bytedance.com Signed-off-by: Muchun Song <songmuchun@bytedance.com> Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: Feng Tang <feng.tang@intel.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Hugh Dickins <hughd@google.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Michal Hocko <mhocko@suse.com> Cc: NeilBrown <neilb@suse.de> Cc: Pankaj Gupta <pankaj.gupta@cloud.ionos.com> Cc: Rafael. J. Wysocki <rafael@kernel.org> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Roman Gushchin <guro@fb.com> Cc: Sami Tolvanen <samitolvanen@google.com> Cc: Shakeel Butt <shakeelb@google.com> Cc: Vladimir Davydov <vdavydov.dev@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Muchun Song [Thu, 31 Dec 2020 22:04:05 +0000 (22:04 +0000)]
mm: memcontrol: convert NR_SHMEM_THPS account to pages
Currently we use struct per_cpu_nodestat to cache the vmstat counters, which leads to inaccurate statistics, especially for the THP vmstat counters. On systems with hundreds of processors the error can amount to GBs of memory. For example, on a 96-CPU system the threshold is the maximum value of 125, and the per-cpu counters can cache 23.4375 GB in total.
A THP page is already a form of batched addition (it adds 512 pages' worth of memory in one go), so skipping the batching seems sensible. Every THP stats update then overflows the per-cpu counter, resorting to atomic global updates, but this makes the statistics more accurate for the THP vmstat counters.
So we convert the NR_SHMEM_THPS accounting to pages. This patch is consistent with 8f182270dfec ("mm/swap.c: flush lru pvecs on compound page arrival"). Doing this also makes the units of the vmstat counters more uniform: in the end, the vmstat counters are in pages, kB or bytes. The B/KB suffix tells us that the unit is bytes or kB; counters without a suffix are in pages.
Link: https://lkml.kernel.org/r/20201228164110.2838-5-songmuchun@bytedance.com Signed-off-by: Muchun Song <songmuchun@bytedance.com> Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: Feng Tang <feng.tang@intel.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Hugh Dickins <hughd@google.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Michal Hocko <mhocko@suse.com> Cc: NeilBrown <neilb@suse.de> Cc: Pankaj Gupta <pankaj.gupta@cloud.ionos.com> Cc: Rafael. J. Wysocki <rafael@kernel.org> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Roman Gushchin <guro@fb.com> Cc: Sami Tolvanen <samitolvanen@google.com> Cc: Shakeel Butt <shakeelb@google.com> Cc: Vladimir Davydov <vdavydov.dev@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Muchun Song [Thu, 31 Dec 2020 22:04:04 +0000 (22:04 +0000)]
mm: memcontrol: convert NR_FILE_THPS account to pages
Currently we use struct per_cpu_nodestat to cache the vmstat counters, which leads to inaccurate statistics, especially for the THP vmstat counters. On systems with hundreds of processors the error can amount to GBs of memory. For example, on a 96-CPU system the threshold is the maximum value of 125, and the per-cpu counters can cache 23.4375 GB in total.
A THP page is already a form of batched addition (it adds 512 pages' worth of memory in one go), so skipping the batching seems sensible. Every THP stats update then overflows the per-cpu counter, resorting to atomic global updates, but this makes the statistics more accurate for the THP vmstat counters.
So we convert the NR_FILE_THPS accounting to pages. This patch is consistent with 8f182270dfec ("mm/swap.c: flush lru pvecs on compound page arrival"). Doing this also makes the units of the vmstat counters more uniform: in the end, the vmstat counters are in pages, kB or bytes. The B/KB suffix tells us that the unit is bytes or kB; counters without a suffix are in pages.
Link: https://lkml.kernel.org/r/20201228164110.2838-4-songmuchun@bytedance.com Signed-off-by: Muchun Song <songmuchun@bytedance.com> Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: Feng Tang <feng.tang@intel.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Hugh Dickins <hughd@google.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Michal Hocko <mhocko@suse.com> Cc: NeilBrown <neilb@suse.de> Cc: Pankaj Gupta <pankaj.gupta@cloud.ionos.com> Cc: Rafael. J. Wysocki <rafael@kernel.org> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Roman Gushchin <guro@fb.com> Cc: Sami Tolvanen <samitolvanen@google.com> Cc: Shakeel Butt <shakeelb@google.com> Cc: Vladimir Davydov <vdavydov.dev@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Muchun Song [Thu, 31 Dec 2020 22:04:03 +0000 (22:04 +0000)]
mm: memcontrol: convert NR_ANON_THPS account to pages
Currently we use struct per_cpu_nodestat to cache the vmstat counters, which leads to inaccurate statistics, especially for the THP vmstat counters. On systems with hundreds of processors the error can amount to GBs of memory. For example, on a 96-CPU system the threshold is the maximum value of 125, and the per-cpu counters can cache 23.4375 GB in total.
A THP page is already a form of batched addition (it adds 512 pages' worth of memory in one go), so skipping the batching seems sensible. Every THP stats update then overflows the per-cpu counter, resorting to atomic global updates, but this makes the statistics more accurate for the THP vmstat counters.
So we convert the NR_ANON_THPS accounting to pages. This patch is consistent with 8f182270dfec ("mm/swap.c: flush lru pvecs on compound page arrival"). Doing this also makes the units of the vmstat counters more uniform: in the end, the vmstat counters are in pages, kB or bytes. The B/KB suffix tells us that the unit is bytes or kB; counters without a suffix are in pages.
Link: https://lkml.kernel.org/r/20201228164110.2838-3-songmuchun@bytedance.com Signed-off-by: Muchun Song <songmuchun@bytedance.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Rafael. J. Wysocki <rafael@kernel.org> Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Vladimir Davydov <vdavydov.dev@gmail.com> Cc: Hugh Dickins <hughd@google.com> Cc: Shakeel Butt <shakeelb@google.com> Cc: Roman Gushchin <guro@fb.com> Cc: Sami Tolvanen <samitolvanen@google.com> Cc: Feng Tang <feng.tang@intel.com> Cc: NeilBrown <neilb@suse.de> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Michal Hocko <mhocko@suse.com> Cc: Pankaj Gupta <pankaj.gupta@cloud.ionos.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Muchun Song [Thu, 31 Dec 2020 22:04:02 +0000 (22:04 +0000)]
mm: memcontrol: fix NR_ANON_THPS accounting in charge moving
Patch series "Convert all THP vmstat counters to pages", v6.
This patch series aims to convert all THP vmstat counters to pages.
The units of the vmstat counters vary: some are pages, some are bytes, some are HPAGE_PMD_NR, and some are KiB. When we expose these counters to userspace, we have to know which unit each one uses. When the unit is bytes or kB, it is clearly distinguishable by the B/KB suffix, but for the THP vmstat counters it is easy to make mistakes.
For example, below are some bug fixes for the THP vmstat counters:
- 7de2e9f195b9 ("mm: memcontrol: correct the NR_ANON_THPS counter of hierarchical memcg")
- The first commit in this series ("fix NR_ANON_THPS accounting in charge moving")
This patch series makes the code clearer and converts all THP vmstat counters to units of pages. In the end, the vmstat counters are in pages, kB or bytes; the B/KB suffix tells us that the unit is bytes or kB, and counters without a suffix are in pages.
In this series, I changed the unit of the following vmstat counters from HPAGE_PMD_NR to pages. However, there is no change to the print format of the output to user space.
Doing this also makes the statistics more accurate for the THP vmstat counters. This series is consistent with 8f182270dfec ("mm/swap.c: flush lru pvecs on compound page arrival").
We use struct per_cpu_nodestat to cache the vmstat counters, which leads to inaccurate statistics, especially for the THP vmstat counters. On systems with hundreds of processors the error can amount to GBs of memory. For example, on a 96-CPU system the threshold is the maximum value of 125, and the per-cpu counters can cache 23.4375 GB in total.
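For reference, the 23.4375 GB figure appears to follow from the per-counter drift bound, assuming 4 KiB pages and HPAGE_PMD_NR = 512:

    125 (threshold) * 512 pages per THP * 4 KiB per page = 250 MiB per CPU
    250 MiB * 96 CPUs = 24000 MiB = 23.4375 GiB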
A THP page is already a form of batched addition (it adds 512 pages' worth of memory in one go), so skipping the batching seems sensible. Every THP stats update then overflows the per-cpu counter, resorting to atomic global updates, but this makes the statistics more accurate for the THP vmstat counters. From this point of view, I think doing this conversion is reasonable.
Thanks to Hugh for mentioning this. This was inspired by Johannes and Roman; thanks to them.
This patch (of 7):
The unit of NR_ANON_THPS is already HPAGE_PMD_NR, so charge moving should increment/decrement it by one rather than by nr_pages.
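The shape of the fix, sketched rather than quoted from the patch (the charge-moving path updates the counter per THP, not per base page):

    /* mem_cgroup_move_account(), sketched: NR_ANON_THPS is kept in units
     * of HPAGE_PMD_NR here, so move it by one rather than by nr_pages */
    if (PageTransHuge(page)) {
        __mod_lruvec_state(from_vec, NR_ANON_THPS, -1);
        __mod_lruvec_state(to_vec, NR_ANON_THPS, 1);
    }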
Link: https://lkml.kernel.org/r/20201228164110.2838-1-songmuchun@bytedance.com Link: https://lkml.kernel.org/r/20201228164110.2838-2-songmuchun@bytedance.com Fixes: 468c398233da ("mm: memcontrol: switch to native NR_ANON_THPS counter") Signed-off-by: Muchun Song <songmuchun@bytedance.com> Acked-by: Michal Hocko <mhocko@suse.com> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Acked-by: Pankaj Gupta <pankaj.gupta@cloud.ionos.com> Reviewed-by: Roman Gushchin <guro@fb.com> Reviewed-by: Shakeel Butt <shakeelb@google.com> Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: Feng Tang <feng.tang@intel.com> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Hugh Dickins <hughd@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: NeilBrown <neilb@suse.de> Cc: Rafael. J. Wysocki <rafael@kernel.org> Cc: Randy Dunlap <rdunlap@infradead.org> Cc: Sami Tolvanen <samitolvanen@google.com> Cc: Vladimir Davydov <vdavydov.dev@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@kernel.org> Cc: Muchun Song <songmuchun@bytedance.com> Cc: Shakeel Butt <shakeelb@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
The vmstat threshold is 32 (MEMCG_CHARGE_BATCH); in fact the cached value can be as big as MEMCG_CHARGE_BATCH * PAGE_SIZE, which still fits into an s32. So introduce struct batched_lruvec_stat to optimize memory usage.
The size of struct lruvec_stat is 304 bytes on 64-bit systems, and it is a per-cpu structure, so with this patch we can save 304 / 2 * ncpu bytes per memcg per node, where ncpu is the number of possible CPUs. If there are c memory cgroups (including dying cgroups) and n NUMA nodes in the system, we save (152 * ncpu * c * n) bytes in total.
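A sketch of the idea, assuming the 304-byte figure comes from NR_VM_NODE_STAT_ITEMS 8-byte entries:

    /* existing layout: one long per node stat item, 304 bytes on 64-bit */
    struct lruvec_stat {
        long count[NR_VM_NODE_STAT_ITEMS];
    };

    /* batched per-cpu deltas never exceed MEMCG_CHARGE_BATCH * PAGE_SIZE,
     * which fits in an s32, halving the per-cpu footprint to 152 bytes */
    struct batched_lruvec_stat {
        s32 count[NR_VM_NODE_STAT_ITEMS];
    };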
Link: https://lkml.kernel.org/r/20201210042121.39665-1-songmuchun@bytedance.com Signed-off-by: Muchun Song <songmuchun@bytedance.com> Reviewed-by: Shakeel Butt <shakeelb@google.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@kernel.org> Cc: Vladimir Davydov <vdavydov.dev@gmail.com> Cc: Shakeel Butt <shakeelb@google.com> Cc: Roman Gushchin <guro@fb.com> Cc: Stephen Rothwell <sfr@canb.auug.org.au> Cc: Chris Down <chris@chrisdown.name> Cc: Yafang Shao <laoar.shao@gmail.com> Cc: Wei Yang <richard.weiyang@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
fix it for
mm-slub-call-account_slab_page-after-slab-page-initialization-fix.patch
Cc: Christoph Lameter <cl@linux.com> Cc: Johannes Weiner <hannes@cmpxchg.org> Cc: Michal Hocko <mhocko@kernel.org> Cc: Roman Gushchin <guro@fb.com> Cc: Shakeel Butt <shakeelb@google.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Roman Gushchin [Thu, 31 Dec 2020 22:04:00 +0000 (22:04 +0000)]
mm: memcg/slab: pre-allocate obj_cgroups for slab caches with SLAB_ACCOUNT
In general it's unknown in advance whether a slab page will contain accounted objects or not. In order to avoid memory waste, an obj_cgroup vector is allocated dynamically when the need to account a new object arises. Such an approach is memory efficient, but requires an expensive cmpxchg() to set up the memcg/objcgs pointer, because an allocation can race with a different allocation on another CPU.
But in some common cases it's known for sure that a slab page will contain accounted objects: when the page belongs to a slab cache with the SLAB_ACCOUNT flag set. This includes such popular objects as vm_area_struct, anon_vma, task_struct, etc.
In such cases we can pre-allocate the objcgs vector and simply assign it to the page without any atomic operations, because at this early stage the page is not visible to anyone else.
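Roughly, the fast path looks like this (function and parameter names are approximate, not quoted from the patch):

    /* slab page allocation path, sketched: for SLAB_ACCOUNT caches the
     * objcgs vector is allocated up front and assigned while the new page
     * is still invisible to other CPUs, so no cmpxchg() is needed */
    if (memcg_kmem_enabled() && (s->flags & SLAB_ACCOUNT))
        memcg_alloc_page_obj_cgroups(page, s, gfp, true);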
A very simplistic benchmark (allocating 10000000 64-byte objects in a row) shows a ~15% win. In real life it seems that most workloads are not very sensitive to the speed of (accounted) slab allocations.
Yu Zhao [Thu, 31 Dec 2020 22:03:59 +0000 (22:03 +0000)]
mm/swap: don't SetPageWorkingset unconditionally during swapin
We are capable of SetPageWorkingset based on refault distances after
commit aae466b0052e ("mm/swap: implement workingset detection for
anonymous LRU"). This is done by workingset_refault(), which is right
above the unconditional SetPageWorkingset deleted by this patch.
The unconditional SetPageWorkingset miscategorizes pages that are read
ahead or never belonged to the working set (e.g., tmpfs pages accessed
only once by fd). When those pages are swapped in (after they were
swapped out) for the first time, they skew PSI (when using async swap).
When this happens again, depending on their refault distances, they might skew the workingset_restore_anon counter in addition to PSI, because their shadows indicate they were part of the working set.
Historically, SetPageWorkingset was added as part of the PSI series, and
Johannes said:
"It was meant to mark incoming pages under IO with SetPageWorkingset
when waiting for them constituted a memory stall.
On the page cache side, because we HAVE workingset detection, this was
specific to recently evicted pages that had been active in their
previous life. On the anon side, the aging algorithm had no
distinction between workingset and sporadically used pages. Given the
choice between a) no swapin stalls are pressure and b) all swapin
stalls are pressure, I went with the latter in order to detect swap
storms. The false positive case - high rate of swapin without severe
memory pressure - was relatively unlikely, because we tried to avoid
swapping until everything was completely on fire in the first place."
Nikita Ermakov [Thu, 31 Dec 2020 22:03:58 +0000 (22:03 +0000)]
mm/msync: exit early when the flags is an MS_ASYNC and start < vm_start
If an unmapped region was found and the flag is MS_ASYNC (without MS_INVALIDATE), there is nothing to do and the result would always be -ENOMEM, so return immediately.
We can use it to understand what the RCU core is going to free. For example, some users may be interested in when the RCU core starts freeing reclaimable slabs like dentries to reduce memory pressure.
Link: https://lkml.kernel.org/r/20201216072804.8838-1-jian.w.wen@oracle.com Signed-off-by: Jacob Wen <jian.w.wen@oracle.com> Cc: Christoph Lameter <cl@linux.com> Cc: Pekka Enberg <penberg@kernel.org> Cc: David Rientjes <rientjes@google.com> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Konstantin Khlebnikov [Thu, 31 Dec 2020 22:03:56 +0000 (22:03 +0000)]
kernel/watchdog: flush all printk nmi buffers when hardlockup detected
In NMI context printk() could save messages into per-cpu buffers and schedule a flush by irq_work once IRQs are unblocked. This means the message about a hardlockup appears in the kernel log only when/if the lockup is gone.
The comment in irq_work_queue_on() states that remote IPIs aren't NMI safe, thus printk() cannot schedule the flush work on another CPU.
This patch adds a simple atomic counter of detected hardlockups and flushes all per-cpu printk buffers from the softlockup watchdog context on any other CPU when it sees this counter change.
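A sketch of the approach (identifiers are approximate, not the exact patch; printk_safe_flush() is the per-cpu printk buffer flush helper of that era):

    static atomic_t hardlockup_detected = ATOMIC_INIT(0);

    /* hardlockup detector, NMI context: only bump the counter */
    static void note_hardlockup(void)
    {
        atomic_inc(&hardlockup_detected);
    }

    /* softlockup watchdog, normal context on any other CPU: flush all
     * per-cpu printk buffers once the counter is seen to change */
    static void maybe_flush_hardlockup_messages(void)
    {
        static int seen;
        int now = atomic_read(&hardlockup_detected);

        if (now != seen) {
            seen = now;
            printk_safe_flush();
        }
    }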
Link: http://lkml.kernel.org/r/158132813726.1980.17382047082627699898.stgit@buzz Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru> Cc: Petr Mladek <pmladek@suse.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: Sergey Senozhatsky <sergey.senozhatsky@gmail.com> Cc: Dmitry Monakhov <dmtrmonakhov@yandex-team.ru> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Randy Dunlap [Thu, 31 Dec 2020 22:03:55 +0000 (22:03 +0000)]
fs: delete repeated words in comments
Delete duplicate words in fs/*.c.
The doubled words that are being dropped are:
that, be, the, in, and, for
Link: https://lkml.kernel.org/r/20201224052810.25315-1-rdunlap@infradead.org Signed-off-by: Randy Dunlap <rdunlap@infradead.org> Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org> Cc: Alexander Viro <viro@zeniv.linux.org.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Wangyan [Thu, 31 Dec 2020 22:03:54 +0000 (22:03 +0000)]
ocfs2: fix ocfs2 corrupt when iputting an inode
In the following scenario, a BUG is triggered on the error path.
ocfs2_mkdir()
->ocfs2_mknod()
->ocfs2_mknod_locked()
->__ocfs2_mknod_locked()
//Assume inode->i_generation is genN.
->inode->i_generation = osb->s_next_generation++;
// The inode lockres has been initialized.
->ocfs2_populate_inode()
->ocfs2_create_new_inode_locks()
->An error happened, returned value is non-zero
// free the start_bit x in bg_blkno
->ocfs2_free_suballoc_bits()
->... /* Another process execute mkdir success in this place,
and it occupied the start_bit x in bg_blkno
which has been freed before. Its inode->i_generation
is genN + 1 */
->iput(inode)
->evict()
->ocfs2_evict_inode()
->ocfs2_delete_inode()
->ocfs2_inode_lock()
->ocfs2_inode_lock_update()
/* Bug on here, genN != genN + 1 */
->mlog_bug_on_msg(inode->i_generation !=
le32_to_cpu(fe->i_generation))
So we need not reclaim the inode when inode->ip_inode_lockres has been initialized; it will be freed in iput().
Link: http://lkml.kernel.org/r/ef080ca3-5d74-e276-17a1-d9e7c7e662c9@huawei.com Fixes: b1529a41f777 ("ocfs2: should reclaim the inode if '__ocfs2_mknod_locked' returns an error") Signed-off-by: Yan Wang <wangyan122@huawei.com> Reviewed-by: Jun Piao <piaojun@huawei.com> Cc: Mark Fasheh <mark@fasheh.com> Cc: Joel Becker <jlbec@evilplan.org> Cc: Junxiao Bi <junxiao.bi@oracle.com> Cc: Joseph Qi <jiangqi903@gmail.com> Cc: Changwei Ge <gechangwei@live.cn> Cc: Gang He <ghe@suse.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Wangyan [Thu, 31 Dec 2020 22:03:54 +0000 (22:03 +0000)]
ocfs2: clear links count in ocfs2_mknod() if an error occurs
In the following scenario, the inode cannot be wiped when an error happens.
ocfs2_mkdir()
->ocfs2_mknod()
->ocfs2_mknod_locked()
->__ocfs2_mknod_locked()
->ocfs2_set_links_count() // i_links_count is 2
-> ... // an error occurs, goto roll_back or leave.
->ocfs2_commit_trans()
->iput(inode)
->evict()
->ocfs2_evict_inode()
->ocfs2_delete_inode()
->ocfs2_inode_lock()
->ocfs2_inode_lock_update()
->ocfs2_refresh_inode()
->set_nlink(); // inode->i_nlink is 2 now.
/* if wipe is 0, it will goto bail_unlock_inode */
->ocfs2_query_inode_wipe()
->if (inode->i_nlink) return; // wipe is 0.
/* inode can not be wiped */
->ocfs2_wipe_inode()
So we need to clear the links count before the transaction is committed.
Link: http://lkml.kernel.org/r/d8147c41-fb2b-bdf7-b660-1f3c8448c33f@huawei.com Signed-off-by: Yan Wang <wangyan122@huawei.com> Reviewed-by: Jun Piao <piaojun@huawei.com> Cc: Mark Fasheh <mark@fasheh.com> Cc: Joel Becker <jlbec@evilplan.org> Cc: Junxiao Bi <junxiao.bi@oracle.com> Cc: Joseph Qi <jiangqi903@gmail.com> Cc: Changwei Ge <gechangwei@live.cn> Cc: Gang He <ghe@suse.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
The reason for the panic is that stable_page_flags(), which parses the page flags, uses uninitialized struct pages reserved by the ZONE_DEVICE driver.
An earlier approach to fix this was discussed here:
https://marc.info/?l=linux-mm&m=152964770000672&w=2
This is another approach. To avoid using the uninitialized struct page, immediately return with KPF_RESERVED at the beginning of stable_page_flags() if the page is reserved by a ZONE_DEVICE driver.
Dan said:
: The nvdimm implementation uses vmem_altmap to arrange for the 'struct
: page' array to be allocated from a reservation of a pmem namespace. A
: namespace in this mode contains an info-block that consumes the first
: 8K of the namespace capacity, capacity designated for page mapping,
: capacity for padding the start of data to optionally 4K, 2MB, or 1GB
: (on x86), and then the namespace data itself. The implementation
: specifies a section aligned (now sub-section aligned) address to
: arch_add_memory() to establish the linear mapping to map the metadata,
: and then vmem_altmap indicates to memmap_init_zone() which pfns
: represent data. The implementation only specifies enough 'struct page'
: capacity for pfn_to_page() to operate on the data space, not the
: namespace metadata space.
:
: The proposal to validate ZONE_DEVICE pfns against the altmap seems the
: right approach to me.
Link: http://lkml.kernel.org/r/20190725023100.31141-3-t-fukasawa@vx.jp.nec.com Signed-off-by: Toshiki Fukasawa <t-fukasawa@vx.jp.nec.com> Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Junichi Nomura <j-nomura@ce.jp.nec.com> Cc: Michal Hocko <mhocko@kernel.org> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Toshiki Fukasawa [Thu, 31 Dec 2020 22:03:53 +0000 (22:03 +0000)]
/proc/kpageflags: prevent an integer overflow in stable_page_flags()
stable_page_flags() returns kpageflags info as a u64, but internally it uses "1 << KPF_*", which is evaluated as int. This type mismatch causes no visible problem now, but it will if you set bit 32 or higher, as done in a subsequent patch. So use BIT_ULL in order to avoid future overflow issues.
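Illustration of the difference (KPF_NOPAGE stands in for any flag; the problem only bites once a flag index reaches 32):

    u64 k;

    k = 1 << KPF_NOPAGE;      /* int arithmetic: breaks once the bit is >= 32 */
    k = BIT_ULL(KPF_NOPAGE);  /* unsigned long long arithmetic: safe for bits 32-63 */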
Link: http://lkml.kernel.org/r/20190725023100.31141-2-t-fukasawa@vx.jp.nec.com Signed-off-by: Toshiki Fukasawa <t-fukasawa@vx.jp.nec.com> Cc: Michal Hocko <mhocko@kernel.org> Cc: Dan Williams <dan.j.williams@intel.com> Cc: Alexey Dobriyan <adobriyan@gmail.com> Cc: Christoph Hellwig <hch@lst.de> Cc: Naoya Horiguchi <n-horiguchi@ah.jp.nec.com> Cc: Junichi Nomura <j-nomura@ce.jp.nec.com> Cc: <stable@vger.kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Hailong liu [Thu, 31 Dec 2020 22:03:52 +0000 (22:03 +0000)]
mm/page_alloc: add a missing mm_page_alloc_zone_locked() tracepoint
The tracepoint trace_mm_page_alloc_zone_locked() in __rmqueue() does not currently cover all branches. Add the missing tracepoint and check the page before doing so.
Jann Horn [Thu, 31 Dec 2020 22:03:51 +0000 (22:03 +0000)]
mm, slub: consider rest of partial list if acquire_slab() fails
acquire_slab() fails if there is contention on the freelist of the page
(probably because some other CPU is concurrently freeing an object from
the page). In that case, it might make sense to look for a different page
(since there might be more remote frees to the page from other CPUs, and
we don't want contention on struct page).
However, the current code accidentally stops looking at the partial list
completely in that case. Especially on kernels without CONFIG_NUMA set,
this means that get_partial() fails and new_slab_objects() falls back to
new_slab(), allocating new pages. This could lead to an unnecessary
increase in memory fragmentation.
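The shape of the fix, sketched rather than quoted (inside get_partial_node()'s walk of the n->partial list):

    /* on freelist contention, move on to the next partial page
     * instead of abandoning the whole partial list */
    t = acquire_slab(s, n, page, object == NULL, &objects);
    if (!t)
        continue;    /* was: break; */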
Link: https://lkml.kernel.org/r/20201228130853.1871516-1-jannh@google.com Fixes: 7ced37197196 ("slub: Acquire_slab() avoid loop") Signed-off-by: Jann Horn <jannh@google.com> Acked-by: David Rientjes <rientjes@google.com> Acked-by: Joonsoo Kim <iamjoonsoo.kim@lge.com> Cc: Christoph Lameter <cl@linux.com> Cc: Pekka Enberg <penberg@kernel.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Linus Torvalds [Sun, 27 Dec 2020 18:56:33 +0000 (10:56 -0800)]
proc mountinfo: make splice available again
Since commit 36e2c7421f02 ("fs: don't allow splice read/write without
explicit ops") we've required that file operation structures explicitly
enable splice support, rather than falling back to the default handlers.
Most /proc files use the indirect 'struct proc_ops' to describe their
file operations, and were fixed up to support splice earlier in commits 40be821d627c..b24c30c67863, but the mountinfo files interact with the
VFS directly using their own 'struct file_operations' and got missed as
a result.
This adds the necessary support for splice to work for /proc/*/mountinfo
and friends.
Reported-by: Joan Bruguera Micó <joanbrugueram@gmail.com> Reported-by: Jussi Kivilinna <jussi.kivilinna@iki.fi> Link: https://bugzilla.kernel.org/show_bug.cgi?id=209971 Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org> Cc: Christoph Hellwig <hch@lst.de> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Sun, 27 Dec 2020 17:03:41 +0000 (09:03 -0800)]
Merge tag 'timers-urgent-2020-12-27' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull timer fixes from Ingo Molnar:
"Update/fix two CPU sanity checks in the hotplug and the boot code, and
fix a typo in the Kconfig help text.
[ Context: the first two commits are the result of an ongoing
annotation+review work of (intentional) tick_do_timer_cpu() data
races reported by KCSAN, but the annotations aren't fully cooked
yet ]"
* tag 'timers-urgent-2020-12-27' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
timekeeping: Fix spelling mistake in Kconfig "fullfill" -> "fulfill"
tick/sched: Remove bogus boot "safety" check
tick: Remove pointless cpu valid check in hotplug code
Linus Torvalds [Sat, 26 Dec 2020 17:19:49 +0000 (09:19 -0800)]
mfd: ab8500-debugfs: Remove extraneous seq_putc
Commit c9a3c4e637ac ("mfd: ab8500-debugfs: Remove extraneous curly
brace") removed a left-over curly brace that caused build failures, but
Joe Perches points out that the subsequent 'seq_putc()' should also be
removed, because the commit that caused all these problems already added
the final '\n' to the seq_printf() above it.
Reported-by: Joe Perches <joe@perches.com> Fixes: 886c8121659d ("mfd: ab8500-debugfs: Remove the racy fiddling with irq_desc") Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Nathan Chancellor <natechancellor@gmail.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Alexander Lobakin [Tue, 22 Dec 2020 15:07:43 +0000 (15:07 +0000)]
PCI: dwc: Fix inverted condition of DMA mask setup warning
Commit 660c486590aa ("PCI: dwc: Set 32-bit DMA mask for MSI target address allocation") added a dma_set_mask() call to explicitly set a 32-bit DMA mask for MSI message mapping, but it currently throws the warning when ret == 0, even though dma_set_mask() returns 0 on success.
Fix this by inverting the condition.
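Sketch of the corrected check (the warning text here is illustrative, not the driver's actual message):

    ret = dma_set_mask(dev, DMA_BIT_MASK(32));
    if (ret)    /* dma_set_mask() returns 0 on success, non-zero on failure */
        dev_warn(dev, "failed to set 32-bit DMA mask for MSI target address\n");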
[bhelgaas: join string to make it greppable] Fixes: 660c486590aa ("PCI: dwc: Set 32-bit DMA mask for MSI target address allocation") Link: https://lore.kernel.org/r/20201222150708.67983-1-alobakin@pm.me Signed-off-by: Alexander Lobakin <alobakin@pm.me> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
0001:00:00.0 PCI bridge: Molex Incorporated Device 1ad2 (rev a1)
0001:01:00.0 SATA controller: Marvell Technology Group Ltd. Device 9171 (rev 13)
0005:00:00.0 PCI bridge: Molex Incorporated Device 1ad0 (rev a1)
0005:01:00.0 PCI bridge: PLX Technology, Inc. Device 3380 (rev ab)
0005:02:02.0 PCI bridge: PLX Technology, Inc. Device 3380 (rev ab)
0005:03:00.0 USB controller: PLX Technology, Inc. Device 3380 (rev ab)
The problem seems to be that dw_pcie_setup_rc() is now called twice, before and after the link-up handling. The fix is to move Tegra's link-up handling to the .start_link() function like other DWC drivers. Tegra is a bit more complicated than the others as it re-inits the whole DWC controller to retry the link. With this, the initialization ordering is restored to match the prior sequence.
Fixes: b9ac0f9dc8ea ("PCI: dwc: Move dw_pcie_setup_rc() to DWC common code") Link: https://lore.kernel.org/r/20201218143905.1614098-1-robh@kernel.org Reported-by: Mian Yousaf Kaukab <ykaukab@suse.de> Tested-by: Mian Yousaf Kaukab <ykaukab@suse.de> Signed-off-by: Rob Herring <robh@kernel.org> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com> Cc: Thierry Reding <thierry.reding@gmail.com> Cc: Jonathan Hunter <jonathanh@nvidia.com> Cc: Vidya Sagar <vidyas@nvidia.com>
clang (quite rightly) complains fairly loudly about the newly added
mpc1_get_mpc_out_mux() function returning an uninitialized value if the
'opp_id' checks don't pass.
This may not happen in practice, but the code really shouldn't return
garbage if the sanity checks don't pass.
So just initialize 'val' to zero to avoid the issue.
Fixes: 110b055b2827 ("drm/amd/display: add getter routine to retrieve mpcc mux") Cc: Josip Pavic <Josip.Pavic@amd.com> Cc: Bindu Ramamurthy <bindu.r@amd.com> Cc: Alex Deucher <alexander.deucher@amd.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Linus Torvalds [Fri, 25 Dec 2020 19:07:34 +0000 (11:07 -0800)]
Merge tag 'perf-tools-2020-12-24' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux
Pull more perf tools updates from Arnaldo Carvalho de Melo:
- Refactor 'perf stat' per CPU/socket/die/thread aggregation fixing use
cases in ARM machines.
- Fix memory leak when synthesizing SDT probes in 'perf probe'.
- Update kernel header copies related to KVM, epoll_pwait, msr-index, and the powerpc and s390 syscall tables.
* tag 'perf-tools-2020-12-24' of git://git.kernel.org/pub/scm/linux/kernel/git/acme/linux: (24 commits)
perf probe: Fix memory leak when synthesizing SDT probes
perf stat aggregation: Add separate thread member
perf stat aggregation: Add separate core member
perf stat aggregation: Add separate die member
perf stat aggregation: Add separate socket member
perf stat aggregation: Add separate node member
perf stat aggregation: Start using cpu_aggr_id in map
perf cpumap: Drop in cpu_aggr_map struct
perf cpumap: Add new map type for aggregation
perf stat: Replace aggregation ID with a struct
perf cpumap: Add new struct for cpu aggregation
perf cpumap: Use existing allocator to avoid using malloc
perf tests: Improve topology test to check all aggregation types
perf tools: Update s390's syscall.tbl copy from the kernel sources
perf tools: Update powerpc's syscall.tbl copy from the kernel sources
perf s390: Move syscall.tbl check into check-headers.sh
perf powerpc: Move syscall.tbl check to check-headers.sh
tools headers UAPI: Synch KVM's svm.h header with the kernel
tools kvm headers: Update KVM headers from the kernel sources
tools headers UAPI: Sync KVM's vmx.h header with the kernel sources
...
Linus Torvalds [Fri, 25 Dec 2020 19:05:32 +0000 (11:05 -0800)]
Merge branch 'for-5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/jlawall/linux
Pull coccinelle updates from Julia Lawall.
* 'for-5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/jlawall/linux:
scripts: coccicheck: Correct usage of make coccicheck
coccinelle: update expiring email addresses
coccinnelle: Remove ptr_ret script
kbuild: do not use scripts/ld-version.sh for checking spatch version
remove boolinit.cocci
Michael Ellerman [Fri, 25 Dec 2020 11:30:58 +0000 (22:30 +1100)]
genirq: Fix export of irq_to_desc() for powerpc KVM
Commit 64a1b95bb9fe ("genirq: Restrict export of irq_to_desc()") removed
the export of irq_to_desc() unless powerpc KVM is being built, because
there is still a use of irq_to_desc() in modular code there.
However it used:
#ifdef CONFIG_KVM_BOOK3S_64_HV
Which doesn't work when that symbol is =m, leading to a build failure:
Linus Torvalds [Fri, 25 Dec 2020 18:54:29 +0000 (10:54 -0800)]
Merge branch 'work.misc' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
Pull misc vfs updates from Al Viro:
"Assorted patches from previous cycle(s)..."
* 'work.misc' of git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs:
fix hostfs_open() use of ->f_path.dentry
Make sure that make_create_in_sticky() never sees uninitialized value of dir_mode
fs: Kill DCACHE_DONTCACHE dentry even if DCACHE_REFERENCED is set
fs: Handle I_DONTCACHE in iput_final() instead of generic_drop_inode()
fs/namespace.c: WARN if mnt_count has become negative
Linus Torvalds [Thu, 24 Dec 2020 22:20:33 +0000 (14:20 -0800)]
Merge tag 'docs-5.11-2' of git://git.lwn.net/linux
Pull documentation fixes from Jonathan Corbet:
"A small set of late-arriving, small documentation fixes"
* tag 'docs-5.11-2' of git://git.lwn.net/linux:
docs: admin-guide: Fix default value of max_map_count in sysctl/vm.rst
Documentation/submitting-patches: Document the SoB chain
Documentation: process: Correct numbering
docs: submitting-patches: Trivial - fix grammatical error
Linus Torvalds [Thu, 24 Dec 2020 22:16:02 +0000 (14:16 -0800)]
Merge tag 'ext4_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4
Pull ext4 updates from Ted Ts'o:
"Various bug fixes and cleanups for ext4; no new features this cycle"
* tag 'ext4_for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tytso/ext4: (29 commits)
ext4: remove unnecessary wbc parameter from ext4_bio_write_page
ext4: avoid s_mb_prefetch to be zero in individual scenarios
ext4: defer saving error info from atomic context
ext4: simplify ext4 error translation
ext4: move functions in super.c
ext4: make ext4_abort() use __ext4_error()
ext4: standardize error message in ext4_protect_reserved_inode()
ext4: remove redundant sb checksum recomputation
ext4: don't remount read-only with errors=continue on reboot
ext4: fix deadlock with fs freezing and EA inodes
jbd2: add a helper to find out number of fast commit blocks
ext4: make fast_commit.h byte identical with e2fsprogs/fast_commit.h
ext4: fix fall-through warnings for Clang
ext4: add docs about fast commit idempotence
ext4: remove the unused EXT4_CURRENT_REV macro
ext4: fix an IS_ERR() vs NULL check
ext4: check for invalid block size early when mounting a file system
ext4: fix a memory leak of ext4_free_data
ext4: delete nonsensical (commented-out) code inside ext4_xattr_block_set()
ext4: update ext4_data_block_valid related comments
...
Linus Torvalds [Thu, 24 Dec 2020 22:08:43 +0000 (14:08 -0800)]
Merge tag 'Smack-for-5.11-io_uring-fix' of git://github.com/cschaufler/smack-next
Pull smack fix from Casey Schaufler:
"Provide a fix for the incorrect handling of privilege in the face of
io_uring's use of kernel threads. That invalidated a long-standing assumption regarding the privilege of kernel threads.
The fix is simple and safe. It was provided by Jens Axboe and has been
tested"
* tag 'Smack-for-5.11-io_uring-fix' of git://github.com/cschaufler/smack-next:
Smack: Handle io_uring kernel thread privileges
* tag 'powerpc-5.11-2' of git://git.kernel.org/pub/scm/linux/kernel/git/powerpc/linux:
powerpc/32: Fix vmap stack - Properly set r1 before activating MMU on syscall too
powerpc/vdso: Fix DOTSYM for 32-bit LE VDSO
powerpc/vdso: Don't pass 64-bit ABI cflags to 32-bit VDSO
powerpc/vdso: Block R_PPC_REL24 relocations
powerpc/smp: Add __init to init_big_cores()
powerpc/time: Force inlining of get_tb()
powerpc/boot: Fix build of dts/fsl
Linus Torvalds [Thu, 24 Dec 2020 21:50:23 +0000 (13:50 -0800)]
Merge tag 'irq-core-2020-12-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull irq updates from Thomas Gleixner:
"This is the second attempt after the first one failed miserably and
got zapped to unblock the rest of the interrupt related patches.
A treewide cleanup of interrupt descriptor (ab)use with all sorts of
racy accesses, inefficient and dysfunctional code. The goal is to
remove the export of irq_to_desc() to prevent these things from
creeping up again"
* tag 'irq-core-2020-12-23' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: (30 commits)
genirq: Restrict export of irq_to_desc()
xen/events: Implement irq distribution
xen/events: Reduce irq_info:: Spurious_cnt storage size
xen/events: Only force affinity mask for percpu interrupts
xen/events: Use immediate affinity setting
xen/events: Remove disfunct affinity spreading
xen/events: Remove unused bind_evtchn_to_irq_lateeoi()
net/mlx5: Use effective interrupt affinity
net/mlx5: Replace irq_to_desc() abuse
net/mlx4: Use effective interrupt affinity
net/mlx4: Replace irq_to_desc() abuse
PCI: mobiveil: Use irq_data_get_irq_chip_data()
PCI: xilinx-nwl: Use irq_data_get_irq_chip_data()
NTB/msi: Use irq_has_action()
mfd: ab8500-debugfs: Remove the racy fiddling with irq_desc
pinctrl: nomadik: Use irq_has_action()
drm/i915/pmu: Replace open coded kstat_irqs() copy
drm/i915/lpe_audio: Remove pointless irq_to_desc() usage
s390/irq: Use irq_desc_kstat_cpu() in show_msi_interrupt()
parisc/irq: Use irq_desc_kstat_cpu() in show_interrupts()
...
Linus Torvalds [Thu, 24 Dec 2020 20:40:07 +0000 (12:40 -0800)]
Merge tag 'efi_updates_for_v5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull EFI updates from Borislav Petkov:
"These got delayed due to a last minute ia64 build issue which got
fixed in the meantime.
EFI updates collected by Ard Biesheuvel:
- Don't move BSS section around pointlessly in the x86 decompressor
- Refactor helper for discovering the EFI secure boot mode
- Wire up EFI secure boot to IMA for arm64
- Some fixes for the capsule loader
- Expose the RT_PROP table via the EFI test module
- Relax DT and kernel placement restrictions on ARM
with a few followup fixes:
- fix the build breakage on IA64 caused by recent capsule loader
changes
- suppress a type mismatch build warning in the expansion of
EFI_PHYS_ALIGN on ARM"
* tag 'efi_updates_for_v5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
efi: arm: force use of unsigned type for EFI_PHYS_ALIGN
efi: ia64: disable the capsule loader
efi: stub: get rid of efi_get_max_fdt_addr()
efi/efi_test: read RuntimeServicesSupported
efi: arm: reduce minimum alignment of uncompressed kernel
efi: capsule: clean scatter-gather entries from the D-cache
efi: capsule: use atomic kmap for transient sglist mappings
efi: x86/xen: switch to efi_get_secureboot_mode helper
arm64/ima: add ima_arch support
ima: generalize x86/EFI arch glue for other EFI architectures
efi: generalize efi_get_secureboot
efi/libstub: EFI_GENERIC_STUB_INITRD_CMDLINE_LOADER should not default to yes
efi/x86: Only copy the compressed kernel image in efi_relocate_kernel()
efi/libstub/x86: simplify efi_is_native()
Linus Torvalds [Thu, 24 Dec 2020 20:35:00 +0000 (12:35 -0800)]
Merge tag 'io_uring-5.11-2020-12-23' of git://git.kernel.dk/linux-block
Pull io_uring fixes from Jens Axboe:
"All straight fixes, or a prep patch for a fix, either bound for stable
or fixing issues from this merge window. In particular:
- Fix new shutdown op not breaking links on failure
- Hold mm->mmap_sem for mm->locked_vm manipulation
* tag 'io_uring-5.11-2020-12-23' of git://git.kernel.dk/linux-block:
io_uring: hold uring_lock while completing failed polled io in io_wq_submit_work()
io_uring: fix double io_uring free
io_uring: fix ignoring xa_store errors
io_uring: end waiting before task cancel attempts
io_uring: always progress task_work on task cancel
io-wq: kill now unused io_wq_cancel_all()
io_uring: make ctx cancel on exit targeted to actual ctx
io_uring: fix 0-iov read buffer select
io_uring: close a small race gap for files cancel
io_uring: fix io_wqe->work_list corruption
io_uring: limit {io|sq}poll submit locking scope
io_uring: inline io_cqring_mark_overflow()
io_uring: consolidate CQ nr events calculation
io_uring: remove racy overflow list fast checks
io_uring: cancel reqs shouldn't kill overflow list
io_uring: hold mmap_sem for mm->locked_vm manipulation
io_uring: break links on shutdown failure
Linus Torvalds [Thu, 24 Dec 2020 20:28:35 +0000 (12:28 -0800)]
Merge tag 'block-5.11-2020-12-23' of git://git.kernel.dk/linux-block
Pull block fixes from Jens Axboe:
"A few stragglers in here, but mostly just straight fixes. In
particular:
- Set of rnbd fixes for issues around changes for the merge window
(Gioh, Jack, Md Haris Iqbal)
- iocost tracepoint addition (Baolin)
- Copyright/maintainers update (Christoph)
- Remove old blk-mq fast path CPU warning (Daniel)
- loop max_part fix (Josh)
- Remote IPI threaded IRQ fix (Sebastian)
- dasd stable fixes (Stefan)
- bcache merge window fixup and style fixup (Yi, Zheng)"
* tag 'block-5.11-2020-12-23' of git://git.kernel.dk/linux-block:
md/bcache: convert comma to semicolon
bcache:remove a superfluous check in register_bcache
block: update some copyrights
block: remove a pointless self-reference in block_dev.c
MAINTAINERS: add fs/block_dev.c to the block section
blk-mq: Don't complete on a remote CPU in force threaded mode
s390/dasd: fix list corruption of lcu list
s390/dasd: fix list corruption of pavgroup group list
s390/dasd: prevent inconsistent LCU device data
s390/dasd: fix hanging device offline processing
blk-iocost: Add iocg idle state tracepoint
nbd: Respect max_part for all partition scans
block/rnbd-clt: Does not request pdu to rtrs-clt
block/rnbd-clt: Dynamically allocate sglist for rnbd_iu
block/rnbd: Set write-back cache and fua same to the target device
block/rnbd: Fix typos
block/rnbd-srv: Protect dev session sysfs removal
block/rnbd-clt: Fix possible memleak
block/rnbd-clt: Get rid of warning regarding size argument in strlcpy
blk-mq: Remove 'running from the wrong CPU' warning
Linus Torvalds [Thu, 24 Dec 2020 20:18:11 +0000 (12:18 -0800)]
Merge tag 'libnvdimm-for-5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm
Pull libnvdimm updates from Dan Williams:
"Twas the day before Christmas and the only thing stirring in libnvdimm
/ device-dax land is a pile of miscellaneous fixups and cleanups.
The bulk of it has appeared in -next save the last two patches to
device-dax that have passed my build and unit tests.
- Fix a long standing block-window-namespace issue surfaced by the
ndctl change to attempt to preserve the kernel device name over
a 'reconfigure'
- Fix a few error path memory leaks in nfit and device-dax
- Silence a smatch warning in the ioctl path
- Miscellaneous cleanups"
* tag 'libnvdimm-for-5.11' of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm:
device-dax: Avoid an unnecessary check in alloc_dev_dax_range()
device-dax: Fix range release
device-dax: delete a redundancy check in dev_dax_validate_align()
libnvdimm/label: Return -ENXIO for no slot in __blk_label_update
device-dax/core: Fix memory leak when rmmod dax.ko
device-dax/pmem: Convert comma to semicolon
libnvdimm: Cleanup include of badblocks.h
ACPI: NFIT: Fix input validation of bus-family
libnvdimm/namespace: Fix reaping of invalidated block-window-namespace labels
ACPI/nfit: avoid accessing uninitialized memory in acpi_nfit_ctl()
amdkfd:
- Properly require pcie atomics for gfx10"
* tag 'drm-next-2020-12-24' of git://anongit.freedesktop.org/drm/drm: (31 commits)
drm/amd/display: Fix memory leaks in S3 resume
drm/amdgpu: Fix a copy-pasta comment
drm/amdgpu: only set DP subconnector type on DP and eDP connectors
drm/amd/pm: bump Sienna Cichlid smu_driver_if version to match latest pmfw
drm/amd/display: add getter routine to retrieve mpcc mux
drm/amd/display: always program DPPDTO unless not safe to lower
drm/amd/display: [FW Promotion] Release 0.0.47
drm/amd/display: updated wm table for Renoir
drm/amd/display: Acquire DSC during split stream for ODM only if top_pipe
drm/amd/display: Multi-display underflow observed
drm/amd/display: Remove unnecessary NULL check
drm/amd/display: Update RN/VGH active display count workaround
drm/amd/display: change SMU repsonse timeout to 2s.
drm/amd/display: gradually ramp ABM intensity
drm/amd/display: To modify the condition in indicating branch device
drm/amd/display: Modify the hdcp device count check condition
drm/amd/display: Interfaces for hubp blank and soft reset
drm/amd/display: handler not correctly checked at remove_irq_handler
drm/amdgpu: check gfx pipe availability before toggling its interrupts
drm/amdgpu: remove unnecessary asic type check
...
Linus Torvalds [Thu, 24 Dec 2020 20:06:46 +0000 (12:06 -0800)]
Merge tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost
Pull virtio updates from Michael Tsirkin:
- vdpa sim refactoring
- virtio mem: Big Block Mode support
- misc cleanus, fixes
* tag 'for_linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mst/vhost: (61 commits)
vdpa: Use simpler version of ida allocation
vdpa: Add missing comment for virtqueue count
uapi: virtio_ids: add missing device type IDs from OASIS spec
uapi: virtio_ids.h: consistent indentions
vhost scsi: fix error return code in vhost_scsi_set_endpoint()
virtio_ring: Fix two use after free bugs
virtio_net: Fix error code in probe()
virtio_ring: Cut and paste bugs in vring_create_virtqueue_packed()
tools/virtio: add barrier for aarch64
tools/virtio: add krealloc_array
tools/virtio: include asm/bug.h
vdpa/mlx5: Use write memory barrier after updating CQ index
vdpa: split vdpasim to core and net modules
vdpa_sim: split vdpasim_virtqueue's iov field in out_iov and in_iov
vdpa_sim: make vdpasim->buffer size configurable
vdpa_sim: use kvmalloc to allocate vdpasim->buffer
vdpa_sim: set vringh notify callback
vdpa_sim: add set_config callback in vdpasim_dev_attr
vdpa_sim: add get_config callback in vdpasim_dev_attr
vdpa_sim: make 'config' generic and usable for any device type
...
Zhen Lei [Sat, 19 Dec 2020 08:18:40 +0000 (16:18 +0800)]
device-dax: Avoid an unnecessary check in alloc_dev_dax_range()
Swap the calling sequence of krealloc() and __request_region(), calling the latter first. This way, the value of dev_dax->nr_range does not need to be considered when __request_region() fails.
Dan Williams [Sat, 19 Dec 2020 02:41:41 +0000 (18:41 -0800)]
device-dax: Fix range release
There are multiple locations that open-code the release of the last
range in a device-dax instance. Consolidate this into a new
dev_dax_trim_range() helper.
Arnaldo Carvalho de Melo [Thu, 24 Dec 2020 13:52:10 +0000 (10:52 -0300)]
perf probe: Fix memory leak when synthesizing SDT probes
The argv_split() function must be paired with argv_free(), else we must keep a reference to the argv array received or do the freeing ourselves; in synthesize_sdt_probe_command() we were simply leaking that argv[] array.
Fixes: 3b1f8311f6963cd1 ("perf probe: Add sdt probes arguments into the uprobe cmd string") Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Alexandre Truong <alexandre.truong@arm.com> Cc: Alexis Berlemont <alexis.berlemont@gmail.com> Cc: He Zhe <zhe.he@windriver.com> Cc: Ian Rogers <irogers@google.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: John Garry <john.garry@huawei.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Masami Hiramatsu <mhiramat@kernel.org> Cc: Mathieu Poirier <mathieu.poirier@linaro.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Sumanth Korikkar <sumanthk@linux.ibm.com> Cc: Thomas Richter <tmricht@linux.ibm.com> Cc: Will Deacon <will@kernel.org> Link: https://lore.kernel.org/r/20201224135139.GF477817@kernel.org Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
James Clark [Thu, 26 Nov 2020 14:13:28 +0000 (16:13 +0200)]
perf stat aggregation: Add separate thread member
A separate field isn't strictly required. The core field could be
re-used for thread IDs as a single field was used previously.
But separating them will avoid confusion and catch potential errors
where core IDs are read as thread IDs and vice versa.
Also remove the placeholder id field which is now no longer used.
Signed-off-by: James Clark <james.clark@arm.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Acked-by: Jiri Olsa <jolsa@redhat.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Tested-by: John Garry <john.garry@huawei.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Richter <tmricht@linux.ibm.com> Link: https://lore.kernel.org/r/20201126141328.6509-13-james.clark@arm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
James Clark [Thu, 26 Nov 2020 14:13:27 +0000 (16:13 +0200)]
perf stat aggregation: Add separate core member
Add core as a separate member so that it doesn't have to be packed into
the int value.
Signed-off-by: James Clark <james.clark@arm.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Acked-by: Jiri Olsa <jolsa@redhat.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Tested-by: John Garry <john.garry@huawei.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Richter <tmricht@linux.ibm.com> Link: https://lore.kernel.org/r/20201126141328.6509-12-james.clark@arm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
James Clark [Thu, 26 Nov 2020 14:13:26 +0000 (16:13 +0200)]
perf stat aggregation: Add separate die member
Add die as a separate member so that it doesn't have to be packed into
the int value.
Signed-off-by: James Clark <james.clark@arm.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Acked-by: Jiri Olsa <jolsa@redhat.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Tested-by: John Garry <john.garry@huawei.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Richter <tmricht@linux.ibm.com> Link: https://lore.kernel.org/r/20201126141328.6509-11-james.clark@arm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: James Clark <james.clark@arm.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Acked-by: Jiri Olsa <jolsa@redhat.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Tested-by: John Garry <john.garry@huawei.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Richter <tmricht@linux.ibm.com> Link: https://lore.kernel.org/r/20201126141328.6509-10-james.clark@arm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
James Clark [Thu, 26 Nov 2020 14:13:24 +0000 (16:13 +0200)]
perf stat aggregation: Add separate node member
Add node as a separate member so that it doesn't have to be packed into
the int value.
Signed-off-by: James Clark <james.clark@arm.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Acked-by: Jiri Olsa <jolsa@redhat.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Tested-by: John Garry <john.garry@huawei.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Richter <tmricht@linux.ibm.com> Link: https://lore.kernel.org/r/20201126141328.6509-9-james.clark@arm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
James Clark [Thu, 26 Nov 2020 14:13:23 +0000 (16:13 +0200)]
perf stat aggregation: Start using cpu_aggr_id in map
Use the new cpu_aggr_id struct in the cpu map instead of int so that it
can store more data.
No functional changes.
Signed-off-by: James Clark <james.clark@arm.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Acked-by: Jiri Olsa <jolsa@redhat.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Tested-by: John Garry <john.garry@huawei.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Thomas Richter <tmricht@linux.ibm.com> Link: https://lore.kernel.org/r/20201126141328.6509-8-james.clark@arm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
James Clark [Thu, 26 Nov 2020 14:13:22 +0000 (16:13 +0200)]
perf cpumap: Drop in cpu_aggr_map struct
Replace usages of perf_cpu_map with cpu_aggr_map in places that are
involved with 'perf stat' aggregation.
This will later be changed to be a map of cpu_aggr_id rather than of int
so that more data can be stored.
No functional changes.
Signed-off-by: James Clark <james.clark@arm.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Acked-by: Jiri Olsa <jolsa@redhat.com> Tested-by: John Garry <john.garry@huawei.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Ingo Molnar <mingo@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Thomas Richter <tmricht@linux.ibm.com> Link: https://lore.kernel.org/r/20201126141328.6509-7-james.clark@arm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
James Clark [Thu, 26 Nov 2020 14:13:21 +0000 (16:13 +0200)]
perf cpumap: Add new map type for aggregation
Currently this is a duplicate of perf_cpu_map so that it can be used as
a drop-in replacement.
In a later commit it will be changed from a map of ints to use the new
cpu_aggr_id struct.
No functional changes.
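A sketch of what such a drop-in map type can look like, reusing the
demo_aggr_id layout sketched after the thread-member entry above; the layout
and helper here are assumptions for illustration, not the exact perf
definition:
#include <stdlib.h>
struct demo_cpu_aggr_map {
	int nr;				/* number of entries */
	struct demo_aggr_id map[];	/* one aggregation ID per CPU */
};
static struct demo_cpu_aggr_map *demo_cpu_aggr_map__new(int nr)
{
	struct demo_cpu_aggr_map *m;
	m = malloc(sizeof(*m) + (size_t)nr * sizeof(m->map[0]));
	if (m)
		m->nr = nr;
	return m;
}
A raw malloc like this is exactly what the "Use existing allocator" commit
below avoids for perf_cpu_map, since the element size changes later in the
series.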
Signed-off-by: James Clark <james.clark@arm.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Acked-by: Jiri Olsa <jolsa@redhat.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Tested-by: John Garry <john.garry@huawei.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Richter <tmricht@linux.ibm.com> Link: https://lore.kernel.org/r/20201126141328.6509-6-james.clark@arm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
James Clark [Thu, 26 Nov 2020 14:13:20 +0000 (16:13 +0200)]
perf stat: Replace aggregation ID with a struct
Replace all occurrences of the use of int with the new struct
cpu_aggr_id.
No functional changes.
Signed-off-by: James Clark <james.clark@arm.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Acked-by: Jiri Olsa <jolsa@redhat.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Tested-by: John Garry <john.garry@huawei.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Richter <tmricht@linux.ibm.com> Link: https://lore.kernel.org/r/20201126141328.6509-5-james.clark@arm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
James Clark [Thu, 26 Nov 2020 14:13:19 +0000 (16:13 +0200)]
perf cpumap: Add new struct for cpu aggregation
This struct currently has only a single int member so that it can be
used as a drop-in replacement for the existing behaviour.
Comparison and constructor functions have also been added that will
replace usages of '==' and '= -1'.
No functional changes.
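A minimal sketch of that starting point, assuming nothing beyond what the
commit text describes (the real struct is called cpu_aggr_id and its helpers
in perf use different names):
#include <stdbool.h>
struct demo_id {
	int id;
};
/* Constructor replacing open-coded 'id = -1'. */
static struct demo_id demo_id__empty(void)
{
	struct demo_id id = { .id = -1 };
	return id;
}
/* Comparison replacing open-coded 'a == b'. */
static bool demo_id__equal(struct demo_id a, struct demo_id b)
{
	return a.id == b.id;
}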
Signed-off-by: James Clark <james.clark@arm.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Acked-by: Jiri Olsa <jolsa@redhat.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Tested-by: John Garry <john.garry@huawei.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Richter <tmricht@linux.ibm.com> Link: https://lore.kernel.org/r/20201126141328.6509-4-james.clark@arm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
James Clark [Thu, 26 Nov 2020 14:13:18 +0000 (16:13 +0200)]
perf cpumap: Use existing allocator to avoid using malloc
Use the existing allocator for perf_cpu_map to avoid the use of raw
malloc. Raw malloc could cause an issue in later commits where the size
of perf_cpu_map is changed.
No functional changes.
Signed-off-by: James Clark <james.clark@arm.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Acked-by: Jiri Olsa <jolsa@redhat.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Tested-by: John Garry <john.garry@huawei.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Richter <tmricht@linux.ibm.com> Link: https://lore.kernel.org/r/20201126141328.6509-3-james.clark@arm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
James Clark [Thu, 26 Nov 2020 14:13:17 +0000 (16:13 +0200)]
perf tests: Improve topology test to check all aggregation types
Improve the topology test to check all aggregation types. This is to
lock down the behaviour before 'id' is changed into a struct in later
commits.
Committer testing:
$ perf test topology
41: Session topology: Ok
$
$ perf test -v topology
41: Session topology:
--- start ---
test child forked, pid 965552
templ file: /tmp/perf-test-mO7NtI
Problems creating module maps, continuing anyway...
CPU 0, core 0, socket 0
CPU 1, core 1, socket 0
CPU 2, core 2, socket 0
CPU 3, core 4, socket 0
CPU 4, core 5, socket 0
CPU 5, core 6, socket 0
CPU 6, core 8, socket 0
CPU 7, core 9, socket 0
CPU 8, core 10, socket 0
CPU 9, core 12, socket 0
CPU 10, core 13, socket 0
CPU 11, core 14, socket 0
CPU 12, core 0, socket 0
CPU 13, core 1, socket 0
CPU 14, core 2, socket 0
CPU 15, core 4, socket 0
CPU 16, core 5, socket 0
CPU 17, core 6, socket 0
CPU 18, core 8, socket 0
CPU 19, core 9, socket 0
CPU 20, core 10, socket 0
CPU 21, core 12, socket 0
CPU 22, core 13, socket 0
CPU 23, core 14, socket 0
test child finished with 0
---- end ----
Session topology: Ok
$
Signed-off-by: James Clark <james.clark@arm.com> Acked-by: Namhyung Kim <namhyung@kernel.org> Acked-by: Jiri Olsa <jolsa@redhat.com> Tested-by: Arnaldo Carvalho de Melo <acme@redhat.com> Tested-by: John Garry <john.garry@huawei.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Thomas Richter <tmricht@linux.ibm.com> Link: https://lore.kernel.org/r/20201126141328.6509-2-james.clark@arm.com Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn> Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Xuefeng Li <lixuefeng@loongson.cn> Link: http://lore.kernel.org/lkml/1608278364-6733-5-git-send-email-yangtiezhu@loongson.cn
[ There were updates after Tiezhu's post, so I just updated the copy ] Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Tiezhu Yang [Wed, 23 Dec 2020 16:13:16 +0000 (13:13 -0300)]
perf tools: Update powerpc's syscall.tbl copy from the kernel sources
This silences the following tools/perf/ build warning:
Warning: Kernel ABI header at 'tools/perf/arch/powerpc/entry/syscalls/syscall.tbl' differs from latest version at 'arch/powerpc/kernel/syscalls/syscall.tbl'
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn> Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Ingo Molnar <mingo@redhat.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Xuefeng Li <lixuefeng@loongson.cn> Link: http://lore.kernel.org/lkml/1608278364-6733-4-git-send-email-yangtiezhu@loongson.cn
[ There were updates after Tiezhu's post, so I just updated the copy ] Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Tiezhu Yang [Fri, 18 Dec 2020 07:59:22 +0000 (15:59 +0800)]
perf s390: Move syscall.tbl check into check-headers.sh
It is better to check syscall.tbl for s390 in check-headers.sh; this is
similar to commit c9b51a017065 ("perf tools: Move syscall_64.tbl check
into check-headers.sh").
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn> Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Xuefeng Li <lixuefeng@loongson.cn> Link: http://lore.kernel.org/lkml/1608278364-6733-3-git-send-email-yangtiezhu@loongson.cn Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Tiezhu Yang [Fri, 18 Dec 2020 07:59:21 +0000 (15:59 +0800)]
perf powerpc: Move syscall.tbl check to check-headers.sh
It is better to check syscall.tbl for powerpc in check-headers.sh; this is
similar to commit c9b51a017065 ("perf tools: Move syscall_64.tbl check
into check-headers.sh").
Signed-off-by: Tiezhu Yang <yangtiezhu@loongson.cn> Reviewed-by: Naveen N. Rao <naveen.n.rao@linux.vnet.ibm.com> Cc: Alexander Shishkin <alexander.shishkin@linux.intel.com> Cc: Jiri Olsa <jolsa@redhat.com> Cc: Mark Rutland <mark.rutland@arm.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Xuefeng Li <lixuefeng@loongson.cn> Link: http://lore.kernel.org/lkml/1608278364-6733-2-git-send-email-yangtiezhu@loongson.cn Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Arnaldo Carvalho de Melo [Mon, 21 Dec 2020 23:04:45 +0000 (20:04 -0300)]
tools headers UAPI: Synch KVM's svm.h header with the kernel
To pick up the changes from:
d1949b93c60504b3 ("KVM: SVM: Add support for CR8 write traps for an SEV-ES guest") 5b51cb13160ae0ba ("KVM: SVM: Add support for CR4 write traps for an SEV-ES guest") f27ad38aac23263c ("KVM: SVM: Add support for CR0 write traps for an SEV-ES guest") 2985afbcdbb1957a ("KVM: SVM: Add support for EFER write traps for an SEV-ES guest") 291bd20d5d88814a ("KVM: SVM: Add initial support for a VMGEXIT VMEXIT")
Warning: Kernel ABI header at 'tools/arch/x86/include/uapi/asm/svm.h' differs from latest version at 'arch/x86/include/uapi/asm/svm.h'
diff -u tools/arch/x86/include/uapi/asm/svm.h arch/x86/include/uapi/asm/svm.h
Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Arnaldo Carvalho de Melo [Mon, 21 Dec 2020 15:53:44 +0000 (12:53 -0300)]
tools kvm headers: Update KVM headers from the kernel sources
To pick the changes from:
8d14797b53f044fd ("KVM: arm64: Move 'struct kvm_arch_memory_slot' out of uapi/")
That doesn't cause any changes in tooling; it only addresses this perf build
warning:
Warning: Kernel ABI header at 'tools/arch/arm64/include/uapi/asm/kvm.h' differs from latest version at 'arch/arm64/include/uapi/asm/kvm.h'
diff -u tools/arch/arm64/include/uapi/asm/kvm.h arch/arm64/include/uapi/asm/kvm.h
Cc: Marc Zyngier <maz@kernel.org> Cc: Will Deacon <will@kernel.org> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Arnaldo Carvalho de Melo [Mon, 21 Dec 2020 15:51:03 +0000 (12:51 -0300)]
tools headers UAPI: Sync KVM's vmx.h header with the kernel sources
To pick the changes in:
bf0cd88ce363a2de ("KVM: x86: emulate wait-for-SIPI and SIPI-VMExit")
That makes 'perf kvm-stat' aware of this new SIPI_SIGNAL exit reason,
thus addressing the following perf build warning:
Warning: Kernel ABI header at 'tools/arch/x86/include/uapi/asm/vmx.h' differs from latest version at 'arch/x86/include/uapi/asm/vmx.h'
diff -u tools/arch/x86/include/uapi/asm/vmx.h arch/x86/include/uapi/asm/vmx.h
Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Yadong Qi <yadong.qi@intel.com> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
$ tools/perf/trace/beauty/kvm_ioctl.sh > before
$ cp include/uapi/linux/kvm.h tools/include/uapi/linux/kvm.h
$ cp arch/x86/include/uapi/asm/kvm.h tools/arch/x86/include/uapi/asm/kvm.h
$ tools/perf/trace/beauty/kvm_ioctl.sh > after
$ diff -u before after
--- before 2020-12-21 11:55:45.229737066 -0300
+++ after 2020-12-21 11:55:56.379983393 -0300
@@ -90,6 +90,7 @@
[0xc0] = "CLEAR_DIRTY_LOG",
[0xc1] = "GET_SUPPORTED_HV_CPUID",
[0xc6] = "X86_SET_MSR_FILTER",
+ [0xc7] = "RESET_DIRTY_RINGS",
[0xe0] = "CREATE_DEVICE",
[0xe1] = "SET_DEVICE_ATTR",
[0xe2] = "GET_DEVICE_ATTR",
$
Now one can use that string in filters when tracing ioctls, etc.
It also silences this perf build warning:
Warning: Kernel ABI header at 'tools/include/uapi/linux/kvm.h' differs from latest version at 'include/uapi/linux/kvm.h'
diff -u tools/include/uapi/linux/kvm.h include/uapi/linux/kvm.h
Warning: Kernel ABI header at 'tools/arch/x86/include/uapi/asm/kvm.h' differs from latest version at 'arch/x86/include/uapi/asm/kvm.h'
diff -u tools/arch/x86/include/uapi/asm/kvm.h arch/x86/include/uapi/asm/kvm.h
Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Peter Xu <peterx@redhat.com> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
The new MSR has a pattern that wasn't being matched, in order to avoid a
clash with IA32_UCODE_REV; change the regex to prefer the more relevant
AMD_-prefixed ones so that this new AMD64_VM_PAGE_FLUSH MSR is caught.
Which causes these parts of tools/perf/ to be rebuilt:
CC /tmp/build/perf/trace/beauty/tracepoints/x86_msr.o
LD /tmp/build/perf/trace/beauty/tracepoints/perf-in.o
LD /tmp/build/perf/trace/beauty/perf-in.o
LD /tmp/build/perf/perf-in.o
LINK /tmp/build/perf/perf
This addresses this perf tools build warning:
diff -u tools/arch/x86/include/asm/msr-index.h arch/x86/include/asm/msr-index.h
Warning: Kernel ABI header at 'tools/arch/x86/include/asm/msr-index.h' differs from latest version at 'arch/x86/include/asm/msr-index.h'
Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Arnaldo Carvalho de Melo [Mon, 21 Dec 2020 12:04:54 +0000 (09:04 -0300)]
tools headers cpufeatures: Sync with the kernel sources
To pick the changes in:
69372cf01290b958 ("x86/cpu: Add VM page flush MSR availablility as a CPUID feature") e1b35da5e624f8b0 ("x86: Enumerate AVX512 FP16 CPUID feature flag")
That causes only these 'perf bench' objects to rebuild:
CC /tmp/build/perf/bench/mem-memcpy-x86-64-asm.o
CC /tmp/build/perf/bench/mem-memset-x86-64-asm.o
And addresses these perf build warnings:
Warning: Kernel ABI header at 'tools/arch/x86/include/asm/cpufeatures.h' differs from latest version at 'arch/x86/include/asm/cpufeatures.h'
diff -u tools/arch/x86/include/asm/cpufeatures.h arch/x86/include/asm/cpufeatures.h
Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Kyung Min Park <kyung.min.park@intel.com> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Paolo Bonzini <pbonzini@redhat.com> Cc: Tom Lendacky <thomas.lendacky@amd.com> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Warning: Kernel ABI header at 'tools/include/uapi/asm-generic/unistd.h' differs from latest version at 'include/uapi/asm-generic/unistd.h'
diff -u tools/include/uapi/asm-generic/unistd.h include/uapi/asm-generic/unistd.h
Warning: Kernel ABI header at 'tools/perf/arch/x86/entry/syscalls/syscall_64.tbl' differs from latest version at 'arch/x86/entry/syscalls/syscall_64.tbl'
diff -u tools/perf/arch/x86/entry/syscalls/syscall_64.tbl arch/x86/entry/syscalls/syscall_64.tbl
Cc: Adrian Hunter <adrian.hunter@intel.com> Cc: Ian Rogers <irogers@google.com> Cc: Jiri Olsa <jolsa@kernel.org> Cc: Namhyung Kim <namhyung@kernel.org> Cc: Willem de Bruijn <willemb@google.com> Signed-off-by: Arnaldo Carvalho de Melo <acme@redhat.com>
Sumera Priyadarsini [Tue, 24 Nov 2020 20:32:12 +0000 (02:02 +0530)]
scripts: coccicheck: Correct usage of make coccicheck
The command "make coccicheck C=1 CHECK=scripts/coccicheck" results in the
error:
./scripts/coccicheck: line 65: -1: shift count out of range
This happens because every time the C variable is specified,
the shell arguments need to be "shifted" in order to take only
the last argument, which is the C file to test. These shell arguments
mostly comprise flags that have been set in the Makefile. However,
when coccicheck is specified in the make command as a rule, the
number of shell arguments is zero, thus passing the invalid value -1
to the shift command, resulting in an error.
Modify coccicheck to print the correct usage of 'make coccicheck' so as to
avoid the error.
Signed-off-by: Sumera Priyadarsini <sylphrenadin@gmail.com> Signed-off-by: Julia Lawall <Julia.Lawall@inria.fr>
Dave Airlie [Thu, 24 Dec 2020 00:08:10 +0000 (10:08 +1000)]
Merge tag 'drm-misc-next-fixes-2020-12-22' of git://anongit.freedesktop.org/drm/drm-misc into drm-next
Short summary of fixes pull:
* dma-buf: Include <linux/vmalloc.h> for building on MIPS
* komeda: Fix order of operation in commit tail; Fix NULL-pointer and
out-of-bounds access; Cleanups
* ttm: Fix an unused-function warning
Linus Torvalds [Wed, 23 Dec 2020 23:11:08 +0000 (15:11 -0800)]
Merge tag 'sound-fix-5.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound
Pull sound fixes from Takashi Iwai:
"A collection of small fixes that came up recently for 5.11.
The majority of fixes are usual HD-audio and USB-audio quirks, with a
few PCM core fixes for addressing the information leak and yet more
UBSAN fixes in the core side"
* tag 'sound-fix-5.11-rc1' of git://git.kernel.org/pub/scm/linux/kernel/git/tiwai/sound:
ALSA/hda: apply jack fixup for the Acer Veriton N4640G/N6640G/N2510G
ALSA: hda/realtek: Apply jack fixup for Quanta NL3
ALSA: usb-audio: Add implicit feeback support for the BOSS GT-1
ALSA: usb-audio: Add alias entry for ASUS PRIME TRX40 PRO-S
ALSA: core: Remove redundant comments
ALSA: hda/realtek: Add quirk for MSI-GP73
ALSA: pcm: oss: Fix a few more UBSAN fixes
ALSA: pcm: Clear the full allocated memory at hw_params
ALSA: memalloc: Align buffer allocations in page size
ALSA: usb-audio: Disable sample read check if firmware doesn't give back
ALSA: pcm: Remove snd_pcm_lib_preallocate_dma_free()
ALSA: usb-audio: Add VID to support native DSD reproduction on FiiO devices
ALSA: core: memalloc: add page alignment for iram
ALSA: hda/realtek - Supported Dell fixed type headset
ALSA: hda/realtek: Remove dummy lineout on Acer TravelMate P648/P658
Linus Torvalds [Wed, 23 Dec 2020 23:01:49 +0000 (15:01 -0800)]
Merge tag 'linux-watchdog-5.11-rc1' of git://www.linux-watchdog.org/linux-watchdog
Pull watchdog updates from Wim Van Sebroeck:
- Removal of the pnx83xx driver
- Add a binding for A100's watchdog controller
- Add Rockchip compatibles to snps,dw-wdt.yaml
- hpwdt: Disable NMI in Crash Kernel
- Fix potential dereferencing of null pointer in watchdog_core
- Several other small fixes and improvements
* tag 'linux-watchdog-5.11-rc1' of git://www.linux-watchdog.org/linux-watchdog: (23 commits)
watchdog: convert comma to semicolon
watchdog: iTCO_wdt: use dev_*() instead of pr_*() for logging
dt-binding: watchdog: add Rockchip compatibles to snps,dw-wdt.yaml
watchdog: coh901327: add COMMON_CLK dependency
dt-bindings: watchdog: sun4i: Add A100 compatible
watchdog: qcom: Avoid context switch in restart handler
watchdog: iTCO_wdt: use module_platform_device() macro
watchdog: Fix potential dereferencing of null pointer
watchdog: wdat_wdt: Fix missing kerneldoc reported by W=1
watchdog/hpwdt: Reflect changes
watchdog/hpwdt: Disable NMI in Crash Kernel
wdt: sp805: add watchdog_stop on reboot
watchdog: sbc_fitpc2_wdt: add __user annotations
watchdog: geodewdt: remove unneeded break
watchdog: rti-wdt: fix reference leak in rti_wdt_probe
watchdog: qcom_wdt: set WDOG_HW_RUNNING bit when appropriate
watchdog: remove pnx83xx driver
watchdog: stm32_iwdg: don't print an error on probe deferral
watchdog: sprd: change to use usleep_range() instead of busy loop
watchdog: sprd: check busy bit before new loading rather than after that
...
Stylon Wang [Tue, 10 Nov 2020 07:40:06 +0000 (15:40 +0800)]
drm/amd/display: Fix memory leaks in S3 resume
EDID parsing in S3 resume pushes new display modes onto the probed_modes
list but doesn't consolidate them into the actual mode list. This creates
a race condition when amdgpu_dm_connector_ddc_get_modes() re-initializes
the list head without walking the list, and results in a memory leak.
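The underlying failure mode is generic: re-initialising a list head that still
has entries on it orphans those allocations. An illustrative sketch with
made-up types (the real code deals with struct drm_display_mode entries on the
connector's probed_modes list):
#include <linux/list.h>
#include <linux/slab.h>
struct demo_mode {
	struct list_head head;
	/* ... mode data ... */
};
static void demo_reset_probed_modes(struct list_head *probed_modes)
{
	struct demo_mode *mode, *tmp;
	/*
	 * Walk and free first; calling INIT_LIST_HEAD() alone would drop
	 * the entries while they are still allocated -- the leak above.
	 */
	list_for_each_entry_safe(mode, tmp, probed_modes, head) {
		list_del(&mode->head);
		kfree(mode);
	}
	INIT_LIST_HEAD(probed_modes);
}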
Bug: https://bugzilla.kernel.org/show_bug.cgi?id=209987 Acked-by: Harry Wentland <harry.wentland@amd.com> Acked-by: Alex Deucher <alexander.deucher@amd.com> Reviewed-by: Nicholas Kazlauskas <nicholas.kazlauskas@amd.com> Signed-off-by: Stylon Wang <stylon.wang@amd.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org
Alex Deucher [Thu, 17 Dec 2020 17:11:36 +0000 (12:11 -0500)]
drm/amdgpu: only set DP subconnector type on DP and eDP connectors
Fixes a crash in drm_object_property_set_value() because the property
is not set for internal DP ports that connect to bridge chips
(e.g., DP to VGA or DP to LVDS).
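A sketch of the kind of guard described above; the connector type constants
are from DRM, the helper name and placement are illustrative:
#include <linux/types.h>
#include <drm/drm_connector.h>
static bool demo_wants_dp_subconnector_prop(const struct drm_connector *connector)
{
	/* Only DP and eDP connectors carry the DP subconnector property. */
	switch (connector->connector_type) {
	case DRM_MODE_CONNECTOR_DisplayPort:
	case DRM_MODE_CONNECTOR_eDP:
		return true;
	default:
		return false;
	}
}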
Bug: https://bugzilla.kernel.org/show_bug.cgi?id=210739 Fixes: 65bf2cf95d3ade ("drm/amdgpu: utilize subconnector property for DP through atombios") Tested-By: Kris Karas <bugs-a17@moonlit-rail.com> Cc: Oleg Vasilev <oleg.vasilev@intel.com> Signed-off-by: Alex Deucher <alexander.deucher@amd.com> Cc: stable@vger.kernel.org # 5.10.x