www.infradead.org Git - users/jedix/linux-maple.git/log
Merge branch 'for-next/sve' into for-next/core
Will Deacon [Wed, 2 Jun 2021 18:10:44 +0000 (19:10 +0100)]
Merge branch 'for-next/sve' into for-next/core

* for-next/sve:
  arm64/sve: Skip flushing Z registers with 128 bit vectors
  arm64/sve: Use the sve_flush macros in sve_load_from_fpsimd_state()
  arm64/sve: Split _sve_flush macro into separate Z and predicate flushes

Merge branch 'for-next/smccc' into for-next/core
Will Deacon [Wed, 2 Jun 2021 18:10:37 +0000 (19:10 +0100)]
Merge branch 'for-next/smccc' into for-next/core

* for-next/smccc:

Merge branch 'for-next/selftests' into for-next/core
Will Deacon [Wed, 2 Jun 2021 18:10:32 +0000 (19:10 +0100)]
Merge branch 'for-next/selftests' into for-next/core

* for-next/selftests:
  kselftest/arm64: Add missing newline to SVE test skipping output

Merge branch 'for-next/perf' into for-next/core
Will Deacon [Wed, 2 Jun 2021 18:10:28 +0000 (19:10 +0100)]
Merge branch 'for-next/perf' into for-next/core

* for-next/perf: (22 commits)
  perf: qcom_l2_pmu: move to use request_irq by IRQF_NO_AUTOEN flag
  arm_pmu: move to use request_irq by IRQF_NO_AUTOEN flag
  perf: arm_spe: use DEVICE_ATTR_RO macro
  perf: xgene_pmu: use DEVICE_ATTR_RO macro
  perf: qcom: use DEVICE_ATTR_RO macro
  perf: arm_pmu: use DEVICE_ATTR_RO macro
  drivers/perf: hisi: use the correct HiSilicon copyright
  arm64: perf: Convert snprintf to sysfs_emit
  arm_pmu: Fix write counter incorrect in ARMv7 big-endian mode
  drivers/perf: arm-cci: Fix checkpatch spacing error
  drivers/perf: arm-cmn: Add space after ','
  drivers/perf: arm_pmu: Fix some coding style issues
  drivers/perf: arm_spe_pmu: Fix some coding style issues
  drivers/perf: Remove redundant dev_err call in tx2_uncore_pmu_init_dev()
  perf/hisi: Use irq_set_affinity()
  perf/imx_ddr: Use irq_set_affinity()
  perf/arm-smmuv3: Use irq_set_affinity()
  perf/arm-dsu: Use irq_set_affinity()
  perf/arm-dmc620: Use irq_set_affinity()
  perf/arm-cmn: Use irq_set_affinity()
  ...

Merge branch 'for-next/mte' into for-next/core
Will Deacon [Wed, 2 Jun 2021 18:10:24 +0000 (19:10 +0100)]
Merge branch 'for-next/mte' into for-next/core

* for-next/mte:
  kasan: speed up mte_set_mem_tag_range

Merge branch 'for-next/mm' into for-next/core
Will Deacon [Wed, 2 Jun 2021 18:10:21 +0000 (19:10 +0100)]
Merge branch 'for-next/mm' into for-next/core

* for-next/mm:
  arm64: cache: Lower ARCH_DMA_MINALIGN to 64 (L1_CACHE_BYTES)
  arm64: mm: Remove unused support for Normal-WT memory type
  arm64: acpi: Map EFI_MEMORY_WT memory as Normal-NC
  arm64: mm: Remove unused support for Device-GRE memory type
  arm64: mm: Use better bitmap_zalloc()
  arm64/mm: Make vmemmap_free() available only with CONFIG_MEMORY_HOTPLUG
  arm64/mm: Remove [PUD|PMD]_TABLE_BIT from [pud|pmd]_bad()
  arm64/mm: Validate CONFIG_PGTABLE_LEVELS

Merge branch 'for-next/kasan' into for-next/core
Will Deacon [Wed, 2 Jun 2021 18:10:16 +0000 (19:10 +0100)]
Merge branch 'for-next/kasan' into for-next/core

* for-next/kasan:
  kasan: arm64: support specialized outlined tag mismatch checks

Merge branch 'for-next/insn' into for-next/core
Will Deacon [Wed, 2 Jun 2021 18:07:26 +0000 (19:07 +0100)]
Merge branch 'for-next/insn' into for-next/core

* for-next/insn:
  arm64: insn: Add load/store decoding helpers
  arm64: insn: Add some opcodes to instruction decoder
  arm64: insn: Add barrier encodings
  arm64: insn: Add SVE instruction class
  arm64: Move instruction encoder/decoder under lib/
  arm64: Move aarch32 condition check functions
  arm64: Move patching utilities out of instruction encoding/decoding

Merge branch 'for-next/ffa' into for-next/core
Will Deacon [Wed, 2 Jun 2021 17:00:11 +0000 (18:00 +0100)]
Merge branch 'for-next/ffa' into for-next/core

* for-next/ffa:
  arm64: smccc: Add support for SMCCCv1.2 extended input/output registers

Merge branch 'for-next/docs' into for-next/core
Will Deacon [Wed, 2 Jun 2021 17:00:07 +0000 (18:00 +0100)]
Merge branch 'for-next/docs' into for-next/core

* for-next/docs:
  arm64: Document requirement for access to FEAT_HCX

Merge branch 'for-next/cpufeature' into for-next/core
Will Deacon [Wed, 2 Jun 2021 17:00:02 +0000 (18:00 +0100)]
Merge branch 'for-next/cpufeature' into for-next/core

* for-next/cpufeature:
  arm64: Check if GMID_EL1.BS is the same on all CPUs
  arm64: Change the cpuinfo_arm64 member type for some sysregs to u64

Merge branch 'for-next/cortex-strings' into for-next/core
Will Deacon [Wed, 2 Jun 2021 16:59:55 +0000 (17:59 +0100)]
Merge branch 'for-next/cortex-strings' into for-next/core

* for-next/cortex-strings:
  arm64: update string routine copyrights and URLs
  arm64: Rewrite __arch_clear_user()
  arm64: Better optimised memchr()
  arm64: Import latest memcpy()/memmove() implementation
  arm64: Add assembly annotations for weak-PI-alias madness
  arm64: Import latest version of Cortex Strings' strncmp
  arm64: Import updated version of Cortex Strings' strlen
  arm64: Import latest version of Cortex Strings' strcmp
  arm64: Import latest version of Cortex Strings' memcmp

arm64: update string routine copyrights and URLs
Mark Rutland [Wed, 2 Jun 2021 15:13:58 +0000 (16:13 +0100)]
arm64: update string routine copyrights and URLs

To make future archaeology easier, let's have the string routine comment
blocks encode the specific upstream commit ID they were imported from.
These are the same commit IDs as listed in the commits importing the
code, expanded to 16 characters. Note that the routines have different
commit IDs, each representing the latest upstream commit which changed
the particular routine.

At the same time, let's consistently include 2021 in the copyright
dates.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Robin Murphy <robin.murphy@arm.com>
Cc: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/r/20210602151358.35571-1-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Merge branch 'for-next/caches' into for-next/core
Will Deacon [Wed, 2 Jun 2021 16:57:43 +0000 (17:57 +0100)]
Merge branch 'for-next/caches' into for-next/core

* for-next/caches:
  arm64: Rename arm64-internal cache maintenance functions
  arm64: Fix cache maintenance function comments
  arm64: sync_icache_aliases to take end parameter instead of size
  arm64: __clean_dcache_area_pou to take end parameter instead of size
  arm64: __clean_dcache_area_pop to take end parameter instead of size
  arm64: __clean_dcache_area_poc to take end parameter instead of size
  arm64: __flush_dcache_area to take end parameter instead of size
  arm64: dcache_by_line_op to take end parameter instead of size
  arm64: __inval_dcache_area to take end parameter instead of size
  arm64: Fix comments to refer to correct function __flush_icache_range
  arm64: Move documentation of dcache_by_line_op
  arm64: assembler: remove user_alt
  arm64: Downgrade flush_icache_range to invalidate
  arm64: Do not enable uaccess for invalidate_icache_range
  arm64: Do not enable uaccess for flush_icache_range
  arm64: Apply errata to swsusp_arch_suspend_exit
  arm64: assembler: add conditional cache fixups
  arm64: assembler: replace `kaddr` with `addr`

Merge branch 'for-next/boot' into for-next/core
Will Deacon [Wed, 2 Jun 2021 16:57:29 +0000 (17:57 +0100)]
Merge branch 'for-next/boot' into for-next/core

* for-next/boot:
  arm64: scs: Drop unused 'tmp' argument to scs_{load, save} asm macros
  arm64: smp: initialize cpu offset earlier
  arm64: smp: unify task and sp setup
  arm64: smp: remove stack from secondary_data
  arm64: smp: remove pointless secondary_data maintenance
  arm64: assembler: add set_this_cpu_offset
  arm64: stacktrace: Relax frame record alignment requirement to 8 bytes
  arm64: Change the on_*stack functions to take a size argument
  arm64: Implement stack trace termination record

perf: qcom_l2_pmu: move to use request_irq by IRQF_NO_AUTOEN flag
Tian Tao [Wed, 2 Jun 2021 01:00:42 +0000 (09:00 +0800)]
perf: qcom_l2_pmu: move to use request_irq by IRQF_NO_AUTOEN flag

request_irq() after setting IRQ_NOAUTOEN, as below:

  irq_set_status_flags(irq, IRQ_NOAUTOEN);
  request_irq(dev, irq...);

can be replaced by request_irq() with the IRQF_NO_AUTOEN flag.

This patch is based on "add IRQF_NO_AUTOEN for request_irq", which
is being merged: https://lore.kernel.org/patchwork/patch/1388765/
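
As an illustrative sketch of the conversion (the handler and IRQ name
below are placeholders, not taken from the patch):

  #include <linux/interrupt.h>
  #include <linux/irq.h>

  /* Before: two steps to request an IRQ that must start disabled. */
  static int probe_irq_old(unsigned int irq, irq_handler_t handler, void *dev)
  {
          irq_set_status_flags(irq, IRQ_NOAUTOEN);
          return request_irq(irq, handler, 0, "example-pmu", dev);
  }

  /* After: one call; the IRQ stays disabled until enable_irq(). */
  static int probe_irq_new(unsigned int irq, irq_handler_t handler, void *dev)
  {
          return request_irq(irq, handler, IRQF_NO_AUTOEN, "example-pmu", dev);
  }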

Signed-off-by: Tian Tao <tiantao6@hisilicon.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/1622595642-61678-3-git-send-email-tiantao6@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>
arm_pmu: move to use request_irq by IRQF_NO_AUTOEN flag
Tian Tao [Wed, 2 Jun 2021 01:00:41 +0000 (09:00 +0800)]
arm_pmu: move to use request_irq by IRQF_NO_AUTOEN flag

request_irq() after setting IRQ_NOAUTOEN, as below:

  irq_set_status_flags(irq, IRQ_NOAUTOEN);
  request_irq(dev, irq...);

can be replaced by request_irq() with the IRQF_NO_AUTOEN flag.

This patch is based on "add IRQF_NO_AUTOEN for request_irq", which
is being merged: https://lore.kernel.org/patchwork/patch/1388765/

Signed-off-by: Tian Tao <tiantao6@hisilicon.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/1622595642-61678-2-git-send-email-tiantao6@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: cache: Lower ARCH_DMA_MINALIGN to 64 (L1_CACHE_BYTES)
Will Deacon [Thu, 27 May 2021 12:43:56 +0000 (13:43 +0100)]
arm64: cache: Lower ARCH_DMA_MINALIGN to 64 (L1_CACHE_BYTES)

Back in 97303480753e ("arm64: Increase the max granular size"),
ARCH_DMA_MINALIGN was effectively increased to 128 bytes thanks to an
increase in L1_CACHE_BYTES due to an unsubstantiated performance claim
on the now obsolete ThunderX-1. Although this was reverted in
d93277b9839b, ARCH_DMA_MINALIGN was kept at 128 bytes by ebc7e21e0fa2
("arm64: Increase ARCH_DMA_MINALIGN to 128").

During discussion of the original patch, it was reported that the change
also prevented a warning during boot on (again, now obsolete) Qualcomm
server hardware where the cache writeback granule was larger than 64
bytes. The reason for this warning was because non-coherent DMA could
lead to data corruption due to unexpected writeback from the CPU where a
cacheline is shared with other allocations.

Since then, systems have appeared with larger cachelines still, and so
commit 8f5c9037a55b ("arm64/mm: Correct the cache line size warning with
non coherent device") reworked the warning so that it only appears on
systems where non-coherent DMA is actually required and taints the
kernel with TAINT_CPU_OUT_OF_SPEC. We are not aware of any systems, even
including the aforementioned obsolete machines, which have a CWG larger
than 64 bytes and require non-coherent DMA.

More recently, it has been reported that a ARCH_DMA_MINALIGN of 128
bytes wastes considerable memory (~6% immediately after boot on one
system).

Reduce ARCH_DMA_MINALIGN to 64 bytes and allow the warning/taint to
indicate if there are machines that unknowingly rely on this.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Vincent Whitchurch <vincent.whitchurch@axis.com>
Link: https://lore.kernel.org/linux-arm-kernel/1442944788-17254-1-git-send-email-rric@kernel.org/
Link: https://lore.kernel.org/linux-arm-kernel/CAOZdJXUiRMAguDV+HEJqPg57MyBNqEcTyaH+ya=U93NHb-pdJA@mail.gmail.com/
Link: https://lore.kernel.org/linux-arm-kernel/20190614131141.4428-1-msys.mizuma@gmail.com/
Link: https://lore.kernel.org/r/20210517074332.28280-1-vincent.whitchurch@axis.com
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210527124356.22367-1-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: mm: Remove unused support for Normal-WT memory type
Will Deacon [Thu, 27 May 2021 11:03:19 +0000 (12:03 +0100)]
arm64: mm: Remove unused support for Normal-WT memory type

The Normal-WT memory type is unused, so remove it and reclaim a MAIR.

Cc: Christoph Hellwig <hch@lst.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210527110319.22157-4-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: acpi: Map EFI_MEMORY_WT memory as Normal-NC
Will Deacon [Thu, 27 May 2021 11:03:18 +0000 (12:03 +0100)]
arm64: acpi: Map EFI_MEMORY_WT memory as Normal-NC

The only user we have of Normal Write-Through memory is in the ACPI code
when mapping memory regions advertised as EFI_MEMORY_WT. Since most (all?)
CPUs treat write-through as non-cacheable under the hood, don't bother
with the extra memory type here and just treat EFI_MEMORY_WT the same way
as EFI_MEMORY_WC by mapping it to the Normal-NC memory type instead and
emitting a warning if we have failed to find an alternative EFI memory
type.

Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
Cc: Christoph Hellwig <hch@lst.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210527110319.22157-3-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: mm: Remove unused support for Device-GRE memory type
Will Deacon [Thu, 27 May 2021 11:03:17 +0000 (12:03 +0100)]
arm64: mm: Remove unused support for Device-GRE memory type

The Device-GRE memory type is unused, so remove it and reclaim a MAIR.

Cc: Christoph Hellwig <hch@lst.de>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210505180228.GA3874@arm.com
Link: https://lore.kernel.org/r/20210527110319.22157-2-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: mm: Use better bitmap_zalloc()
Kefeng Wang [Sat, 29 May 2021 11:15:10 +0000 (19:15 +0800)]
arm64: mm: Use better bitmap_zalloc()

Use bitmap_zalloc() to allocate the bitmap rather than open-coding the
allocation.
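
A minimal sketch of the pattern (the helper name is invented):

  #include <linux/bitmap.h>
  #include <linux/gfp.h>

  static unsigned long *alloc_map(unsigned int nbits)
  {
          /*
           * Open-coded equivalent before the patch:
           *   kzalloc(BITS_TO_LONGS(nbits) * sizeof(unsigned long), GFP_KERNEL);
           * bitmap_zalloc() hides the BITS_TO_LONGS() arithmetic; free
           * the result with bitmap_free() rather than kfree().
           */
          return bitmap_zalloc(nbits, GFP_KERNEL);
  }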

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: linux-arm-kernel@lists.infradead.org
Signed-off-by: Kefeng Wang <wangkefeng.wang@huawei.com>
Link: https://lore.kernel.org/r/20210529111510.186355-1-wangkefeng.wang@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Rewrite __arch_clear_user()
Robin Murphy [Thu, 27 May 2021 15:34:48 +0000 (16:34 +0100)]
arm64: Rewrite __arch_clear_user()

Now that we're always using STTR variants rather than abstracting two
different addressing modes, the user_ldst macro here is frankly more
obfuscating than helpful. Rewrite __arch_clear_user() with regular
USER() annotations so that it's clearer what's going on, and take the
opportunity to minimise the branchiness in the most common paths, while
also allowing the exception fixup to return an accurate result.

Apparently some folks examine large reads from /dev/zero closely enough
to notice the loop being hot, so align it per the other critical loops
(presumably around a typical instruction fetch granularity).

Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/1cbd78b12c076a8ad4656a345811cfb9425df0b3.1622128527.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Better optimised memchr()
Robin Murphy [Thu, 27 May 2021 15:34:47 +0000 (16:34 +0100)]
arm64: Better optimised memchr()

Although we implement our own assembly version of memchr(), it turns
out to be barely any better than what GCC can generate for the generic
C version (and would go wrong if the size_t argument were ever large
enough to be interpreted as negative). Unfortunately we can't import the
tuned implementation from the Arm optimized-routines library, since that
has some Advanced SIMD parts which are not really viable for general
kernel library code. What we can do, however, is pep things up with some
relatively straightforward word-at-a-time logic for larger calls.
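
As a rough illustration of the word-at-a-time idea in plain C (this is
not the arm64 assembly itself):

  #include <stdbool.h>
  #include <stdint.h>

  /*
   * True if any byte of w equals c: XOR with a broadcast of c turns
   * matching bytes into 0x00, then the classic zero-byte test flags
   * them, eight bytes examined per load.
   */
  static bool word_has_byte(uint64_t w, uint8_t c)
  {
          uint64_t x = w ^ (0x0101010101010101ULL * c);

          return ((x - 0x0101010101010101ULL) & ~x &
                  0x8080808080808080ULL) != 0;
  }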

Adding some timing to optimized-routines' memchr() test for a simple
benchmark, overall this version comes in around half as fast as the SIMD
code, but still nearly 4x faster than our existing implementation.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/58471b42f9287e039dafa9e5e7035077152438fd.1622128527.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Import latest memcpy()/memmove() implementation
Robin Murphy [Thu, 27 May 2021 15:34:46 +0000 (16:34 +0100)]
arm64: Import latest memcpy()/memmove() implementation

Import the latest implementation of memcpy(), based on the
upstream code of string/aarch64/memcpy.S at commit afd6244 from
https://github.com/ARM-software/optimized-routines, and subsuming
memmove() in the process.

Note that for simplicity Arm have chosen to contribute this code
to Linux under GPLv2 rather than the original MIT license.

Note also that the needs of the usercopy routines vs. regular memcpy()
have now diverged so far that we abandon the shared template idea
and the damage which that incurred to the tuning of LDP/STP loops.
We'll be back to tackle those routines separately in future.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/3c953af43506581b2422f61952261e76949ba711.1622128527.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Add assembly annotations for weak-PI-alias madness
Robin Murphy [Thu, 27 May 2021 15:34:45 +0000 (16:34 +0100)]
arm64: Add assembly annotations for weak-PI-alias madness

Add yet another set of assembly symbol annotations, this time for the
borderline-absurd situation of a function aliasing to a weak symbol
which itself also wants a position-independent alias.

Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/75545b3c4129b20b887474bb58a9cf302bf2132b.1622128527.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Import latest version of Cortex Strings' strncmp
Sam Tebbs [Thu, 27 May 2021 15:34:44 +0000 (16:34 +0100)]
arm64: Import latest version of Cortex Strings' strncmp

Import the latest version of the former Cortex Strings - now
Arm Optimized Routines - strncmp function based on the upstream
code of string/aarch64/strncmp.S at commit e823e3a from
https://github.com/ARM-software/optimized-routines

Note that for simplicity Arm have chosen to contribute this code
to Linux under GPLv2 rather than the original MIT license.

Signed-off-by: Sam Tebbs <sam.tebbs@arm.com>
[ rm: update attribution and commit message ]
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/26110bee02ad360596c9a7536af7eaaf6890d0e8.1622128527.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Import updated version of Cortex Strings' strlen
Sam Tebbs [Thu, 27 May 2021 15:34:43 +0000 (16:34 +0100)]
arm64: Import updated version of Cortex Strings' strlen

Import an updated version of the former Cortex Strings - now Arm
Optimized Routines - strlen function. The latest version introduces
Advanced SIMD usage which rules it out for our purposes, but we can
still pick an intermediate improvement from the previous version,
namely string/aarch64/strlen.S at commit 98e4d6a from
https://github.com/ARM-software/optimized-routines

Note that for simplicity Arm have chosen to contribute this code
to Linux under GPLv2 rather than the original MIT license.

Signed-off-by: Sam Tebbs <sam.tebbs@arm.com>
[ rm: update attribution and commit message ]
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/32e3489398a24b23ae6e996935ac4818f8fd9dfd.1622128527.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Import latest version of Cortex Strings' strcmp
Sam Tebbs [Thu, 27 May 2021 15:34:42 +0000 (16:34 +0100)]
arm64: Import latest version of Cortex Strings' strcmp

Import the latest version of the former Cortex Strings - now
Arm Optimized Routines - strcmp function based on the upstream
code of string/aarch64/strcmp.S at commit afd6244 from
https://github.com/ARM-software/optimized-routines

Note that for simplicity Arm have chosen to contribute this code
to Linux under GPLv2 rather than the original MIT license.

Signed-off-by: Sam Tebbs <sam.tebbs@arm.com>
[ rm: update attribution and commit message ]
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Link: https://lore.kernel.org/r/0fe90c90b96b569fbdfd46e47bd1298abb02079e.1622128527.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Import latest version of Cortex Strings' memcmp
Sam Tebbs [Thu, 27 May 2021 15:34:41 +0000 (16:34 +0100)]
arm64: Import latest version of Cortex Strings' memcmp

Import the latest version of the former Cortex Strings - now
Arm Optimized Routines - memcmp function based on the upstream
code of string/aarch64/memcmp.S at commit e823e3a from
https://github.com/ARM-software/optimized-routines

Note that for simplicity Arm have chosen to contribute this code
to Linux under GPLv2 rather than the original MIT license.

Signed-off-by: Sam Tebbs <sam.tebbs@arm.com>
[ rm: update attribution and commit message ]
Signed-off-by: Robin Murphy <robin.murphy@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/2889de2d41054f3f508fb3addad784a3606ef383.1622128527.git.robin.murphy@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
perf: arm_spe: use DEVICE_ATTR_RO macro
YueHaibing [Fri, 28 May 2021 06:17:38 +0000 (14:17 +0800)]
perf: arm_spe: use DEVICE_ATTR_RO macro

Use DEVICE_ATTR_RO() helper instead of plain DEVICE_ATTR(),
which makes the code a bit shorter and easier to read.
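
For illustration, a hedged sketch of the pattern (the attribute name
"foo" is a placeholder):

  #include <linux/device.h>
  #include <linux/sysfs.h>

  static ssize_t foo_show(struct device *dev,
                          struct device_attribute *attr, char *buf)
  {
          return sysfs_emit(buf, "example\n");
  }

  /*
   * Before: static DEVICE_ATTR(foo, 0444, foo_show, NULL);
   * After:  DEVICE_ATTR_RO() derives the 0444 mode and the foo_show()
   *         callback from the attribute name.
   */
  static DEVICE_ATTR_RO(foo);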

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Link: https://lore.kernel.org/r/20210528061738.23392-1-yuehaibing@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
perf: xgene_pmu: use DEVICE_ATTR_RO macro
YueHaibing [Fri, 28 May 2021 01:49:40 +0000 (09:49 +0800)]
perf: xgene_pmu: use DEVICE_ATTR_RO macro

Use DEVICE_ATTR_RO() helper instead of plain DEVICE_ATTR(),
which makes the code a bit shorter and easier to read.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Link: https://lore.kernel.org/r/20210528014940.4184-1-yuehaibing@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
perf: qcom: use DEVICE_ATTR_RO macro
YueHaibing [Fri, 28 May 2021 01:47:49 +0000 (09:47 +0800)]
perf: qcom: use DEVICE_ATTR_RO macro

Use DEVICE_ATTR_RO() helper instead of plain DEVICE_ATTR(),
which makes the code a bit shorter and easier to read.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Link: https://lore.kernel.org/r/20210528014749.24068-1-yuehaibing@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
perf: arm_pmu: use DEVICE_ATTR_RO macro
YueHaibing [Fri, 28 May 2021 01:41:30 +0000 (09:41 +0800)]
perf: arm_pmu: use DEVICE_ATTR_RO macro

Use DEVICE_ATTR_RO helper instead of plain DEVICE_ATTR,
which makes the code a bit shorter and easier to read.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
Link: https://lore.kernel.org/r/20210528014130.7708-1-yuehaibing@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
drivers/perf: hisi: use the correct HiSilicon copyright
Hao Fang [Sat, 22 May 2021 10:23:57 +0000 (18:23 +0800)]
drivers/perf: hisi: use the correct HiSilicon copyright

s/Hisilicon/HiSilicon/.
It should use capital S, according to the official website
https://www.hisilicon.com/en.

Signed-off-by: Hao Fang <fanghao11@huawei.com>
Link: https://lore.kernel.org/r/1621679037-15323-1-git-send-email-fanghao11@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: perf: Convert snprintf to sysfs_emit
Tian Tao [Thu, 20 May 2021 07:59:45 +0000 (15:59 +0800)]
arm64: perf: Convert snprintf to sysfs_emit

Use sysfs_emit() instead of snprintf() to avoid buffer overruns:
sysfs_emit() strictly checks that buf is non-NULL and page aligned,
and returns an error otherwise.
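
A sketch of a typical conversion, assuming a generic show() callback
rather than a specific one from this patch:

  #include <linux/device.h>
  #include <linux/sysfs.h>

  static ssize_t value_show(struct device *dev,
                            struct device_attribute *attr, char *buf)
  {
          int value = 42;        /* placeholder */

          /* Before: return snprintf(buf, PAGE_SIZE, "%d\n", value); */
          return sysfs_emit(buf, "%d\n", value);
  }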

Signed-off-by: Tian Tao <tiantao6@hisilicon.com>
Link: https://lore.kernel.org/r/1621497585-30887-1-git-send-email-tiantao6@hisilicon.com
Signed-off-by: Will Deacon <will@kernel.org>
arm_pmu: Fix write counter incorrect in ARMv7 big-endian mode
Yang Jihong [Fri, 30 Apr 2021 01:26:59 +0000 (09:26 +0800)]
arm_pmu: Fix write counter incorrect in ARMv7 big-endian mode

Commit 3a95200d3f89 ("arm_pmu: Change API to support 64bit counter values")
changed the input "value" type from 32-bit to 64-bit, which introduces the
following problem: ARMv7 PMU counters are 32 bits wide, so in big-endian
mode a counter write takes the high 32 bits and writes an incorrect value.

Before:

 Performance counter stats for 'ls':

              2.22 msec task-clock                #    0.675 CPUs utilized
                 0      context-switches          #    0.000 K/sec
                 0      cpu-migrations            #    0.000 K/sec
                49      page-faults               #    0.022 M/sec
        2150476593      cycles                    #  966.663 GHz
        2148588788      instructions              #    1.00  insn per cycle
        2147745484      branches                  # 965435.074 M/sec
        2147508540      branch-misses             #   99.99% of all branches

None of the above hw event counters are correct.

Solution:

"value" is forcibly truncated to a 32-bit type before being written to
the PMU register.

After:

 Performance counter stats for 'ls':

              2.09 msec task-clock                #    0.681 CPUs utilized
                 0      context-switches          #    0.000 K/sec
                 0      cpu-migrations            #    0.000 K/sec
                46      page-faults               #    0.022 M/sec
           2807301      cycles                    #    1.344 GHz
           1060159      instructions              #    0.38  insn per cycle
            250496      branches                  #  119.914 M/sec
             23192      branch-misses             #    9.26% of all branches
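
A minimal sketch of the idea behind the fix (the helper name is
invented; the actual change is in the ARMv7 PMU accessors):

  /* Write the PMXEVCNTR event counter, selected via PMSELR beforehand. */
  static inline void pmu_write_evcntr(u64 value)
  {
          /*
           * The hardware counter is 32 bits wide: truncate explicitly
           * so the low 32 bits are written on both little- and
           * big-endian kernels.
           */
          asm volatile("mcr p15, 0, %0, c9, c13, 2" : : "r" ((u32)value));
  }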

Fixes: 3a95200d3f89 ("arm_pmu: Change API to support 64bit counter values")
Cc: <stable@vger.kernel.org>
Signed-off-by: Yang Jihong <yangjihong1@huawei.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210430012659.232110-1-yangjihong1@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: scs: Drop unused 'tmp' argument to scs_{load, save} asm macros
Will Deacon [Thu, 27 May 2021 10:55:29 +0000 (11:55 +0100)]
arm64: scs: Drop unused 'tmp' argument to scs_{load, save} asm macros

The scs_load and scs_save asm macros don't make use of the mandatory
'tmp' register argument, so drop it and fix up the callers.

Cc: Sami Tolvanen <samitolvanen@google.com>
Cc: Mark Rutland <mark.rutland@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Sami Tolvanen <samitolvanen@google.com>
Link: https://lore.kernel.org/r/20210527105529.21967-1-will@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: insn: Add load/store decoding helpers
Julien Thierry [Wed, 3 Mar 2021 17:05:36 +0000 (18:05 +0100)]
arm64: insn: Add load/store decoding helpers

Provide some functions to group different load/store instructions.

Signed-off-by: Julien Thierry <jthierry@redhat.com>
Link: https://lore.kernel.org/r/20210303170536.1838032-9-jthierry@redhat.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: insn: Add some opcodes to instruction decoder
Julien Thierry [Wed, 3 Mar 2021 17:05:35 +0000 (18:05 +0100)]
arm64: insn: Add some opcodes to instruction decoder

Add decoding capability for some instructions that objtool will need
to decode.

Signed-off-by: Julien Thierry <jthierry@redhat.com>
Link: https://lore.kernel.org/r/20210303170536.1838032-8-jthierry@redhat.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: insn: Add barrier encodings
Julien Thierry [Wed, 3 Mar 2021 17:05:34 +0000 (18:05 +0100)]
arm64: insn: Add barrier encodings

Create necessary functions to encode/decode aarch64 barrier
instructions.

DSB needs special case handling as it has multiple encodings.

Signed-off-by: Julien Thierry <jthierry@redhat.com>
Link: https://lore.kernel.org/r/20210303170536.1838032-7-jthierry@redhat.com
[will: Don't reject DSB #4]
Signed-off-by: Will Deacon <will@kernel.org>
arm64: insn: Add SVE instruction class
Julien Thierry [Wed, 3 Mar 2021 17:05:33 +0000 (18:05 +0100)]
arm64: insn: Add SVE instruction class

SVE has been public for some time now. Let the decoder acknowledge
its existence.

Signed-off-by: Julien Thierry <jthierry@redhat.com>
Link: https://lore.kernel.org/r/20210303170536.1838032-6-jthierry@redhat.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Move instruction encoder/decoder under lib/
Julien Thierry [Wed, 3 Mar 2021 17:05:32 +0000 (18:05 +0100)]
arm64: Move instruction encoder/decoder under lib/

AArch64 instruction set encoding and decoding logic can prove useful
for some features/tools, both inside and outside the kernel.

Isolate the functions dealing only with encoding/decoding instructions,
with minimal dependency on kernel utilities, in order to be able to
reuse that code.

Code was only moved; no code should have been added, removed nor
modified.

Signed-off-by: Julien Thierry <jthierry@redhat.com>
Link: https://lore.kernel.org/r/20210303170536.1838032-5-jthierry@redhat.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Move aarch32 condition check functions
Julien Thierry [Wed, 3 Mar 2021 17:05:30 +0000 (18:05 +0100)]
arm64: Move aarch32 condition check functions

The functions to check condition flags for aarch32 execution are only
used to emulate aarch32 instructions. Move them from the instruction
encoding/decoding code to the trap handling files.

Signed-off-by: Julien Thierry <jthierry@redhat.com>
Link: https://lore.kernel.org/r/20210303170536.1838032-3-jthierry@redhat.com
[will: leave aarch32_opcode_cond_checks where it is]
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Move patching utilities out of instruction encoding/decoding
Julien Thierry [Wed, 3 Mar 2021 17:05:29 +0000 (18:05 +0100)]
arm64: Move patching utilities out of instruction encoding/decoding

Files insn.[c|h] contain some functions used for instruction patching.
In order to reuse the instruction encoder/decoder, move the patching
utilities to their own file.

Signed-off-by: Julien Thierry <jthierry@redhat.com>
Link: https://lore.kernel.org/r/20210303170536.1838032-2-jthierry@redhat.com
[will: Include patching.h in insn.h to fix header mess; add __ASSEMBLY__ guards]
Signed-off-by: Will Deacon <will@kernel.org>
Signed-off-by: Will Deacon <will@kernel.org>
kasan: arm64: support specialized outlined tag mismatch checks
Peter Collingbourne [Wed, 26 May 2021 17:49:27 +0000 (10:49 -0700)]
kasan: arm64: support specialized outlined tag mismatch checks

By using outlined checks we can achieve a significant code size
improvement by moving the tag-based ASAN checks into separate
functions. Unlike the existing CONFIG_KASAN_OUTLINE mode these
functions have a custom calling convention that preserves most
registers and is specialized to the register containing the address
and the type of access, and as a result we can eliminate the code
size and performance overhead of a standard calling convention such
as AAPCS for these functions.

This change depends on a separate series of changes to Clang [1] to
support outlined checks in the kernel, although the change works fine
without them (we just don't get outlined checks). This is because the
flag -mllvm -hwasan-inline-all-checks=0 has no effect until the Clang
changes land. The flag was introduced in the Clang 9.0 timeframe as
part of the support for outlined checks in userspace and because our
minimum Clang version is 10.0 we can pass it unconditionally.

Outlined checks require a new runtime function with a custom calling
convention. Add this function to arch/arm64/lib.

I measured the code size of defconfig + tag-based KASAN, as well
as boot time (i.e. time to init launch) on a DragonBoard 845c with
an Android arm64 GKI kernel. The results are below:

                               code size    boot time
CONFIG_KASAN_INLINE=y before    92824064      6.18s
CONFIG_KASAN_INLINE=y after     38822400      6.65s
CONFIG_KASAN_OUTLINE=y          39215616     11.48s

We can see straight away that specialized outlined checks beat the
existing CONFIG_KASAN_OUTLINE=y on both code size and boot time
for tag-based ASAN.

As for the comparison between CONFIG_KASAN_INLINE=y before and after
we saw similar performance numbers in userspace [2] and decided
that since the performance overhead is minimal compared to the
overhead of tag-based ASAN itself as well as compared to the code
size improvements we would just replace the inlined checks with the
specialized outlined checks without the option to select between them,
and that is what I have implemented in this patch.

Signed-off-by: Peter Collingbourne <pcc@google.com>
Acked-by: Andrey Konovalov <andreyknvl@gmail.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Link: https://linux-review.googlesource.com/id/I1a30036c70ab3c3ee78d75ed9b87ef7cdc3fdb76
Link: [1] https://reviews.llvm.org/D90426
Link: [2] https://reviews.llvm.org/D56954
Link: https://lore.kernel.org/r/20210526174927.2477847-3-pcc@google.com
Signed-off-by: Will Deacon <will@kernel.org>
Merge branch 'for-next/stacktrace' into for-next/kasan
Will Deacon [Wed, 26 May 2021 22:27:42 +0000 (23:27 +0100)]
Merge branch 'for-next/stacktrace' into for-next/kasan

Merge in stack unwinding work to cater for 8-byte aligned stack frames
which may be generated following optimisations to CONFIG_KASAN_OUTLINE.

* for-next/stacktrace:
  arm64: stacktrace: Relax frame record alignment requirement to 8 bytes
  arm64: Change the on_*stack functions to take a size argument
  arm64: Implement stack trace termination record

arm64: smp: initialize cpu offset earlier
Mark Rutland [Thu, 20 May 2021 11:50:31 +0000 (12:50 +0100)]
arm64: smp: initialize cpu offset earlier

Now that we have a consistent place to initialize CPU context registers
early in the boot path, let's also initialize the per-cpu offset here.
This makes the primary and secondary boot paths more consistent, and
allows for the use of per-cpu operations earlier, which will be
necessary for instrumentation with KCSAN.

Note that smp_prepare_boot_cpu() still needs to re-initialize CPU0's
offset as immediately prior to this the per-cpu areas may be
reallocated, and hence the boot-time offset may be stale. A comment is
added to make this clear.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210520115031.18509-7-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: smp: unify task and sp setup
Mark Rutland [Thu, 20 May 2021 11:50:30 +0000 (12:50 +0100)]
arm64: smp: unify task and sp setup

Once we enable the MMU, we have to initialize:

* SP_EL0 to point at the active task
* SP to point at the active task's stack
* SCS_SP to point at the active task's shadow stack

For all tasks (including init_task), this information can be derived
from the task's task_struct.

Let's unify __primary_switched and __secondary_switched to consistently
acquire this information from the relevant task_struct. At the same
time, let's fold this together with initializing a task's final frame.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210520115031.18509-6-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: smp: remove stack from secondary_data
Mark Rutland [Thu, 20 May 2021 11:50:29 +0000 (12:50 +0100)]
arm64: smp: remove stack from secondary_data

When we boot a secondary CPU, we pass it a task and a stack to use. As
the stack is always the task's stack, which can be derived from the
task, let's have the secondary CPU derive this itself and avoid passing
redundant information.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210520115031.18509-5-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: smp: remove pointless secondary_data maintenance
Mark Rutland [Thu, 20 May 2021 11:50:28 +0000 (12:50 +0100)]
arm64: smp: remove pointless secondary_data maintenance

All reads and writes of secondary_data occur with the MMU on, using
coherent attributes, so there's no need to perform any cache maintenance
for this.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210520115031.18509-4-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: assembler: add set_this_cpu_offset
Mark Rutland [Thu, 20 May 2021 11:50:27 +0000 (12:50 +0100)]
arm64: assembler: add set_this_cpu_offset

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Suzuki Poulose <suzuki.poulose@arm.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210520115031.18509-3-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
Merge branch 'for-next/stacktrace' into for-next/boot
Will Deacon [Wed, 26 May 2021 21:42:51 +0000 (22:42 +0100)]
Merge branch 'for-next/stacktrace' into for-next/boot

Merge in stack unwinding work to minimise conflicts in head.S.

* for-next/stacktrace:
  arm64: stacktrace: Relax frame record alignment requirement to 8 bytes
  arm64: Change the on_*stack functions to take a size argument
  arm64: Implement stack trace termination record

arm64: Check if GMID_EL1.BS is the same on all CPUs
Catalin Marinas [Wed, 26 May 2021 19:36:21 +0000 (20:36 +0100)]
arm64: Check if GMID_EL1.BS is the same on all CPUs

The GMID_EL1.BS field determines the number of tags accessed by the
LDGM/STGM instructions (EL1 and up), used by the kernel for copying or
zeroing page tags.

Taint the kernel if GMID_EL1.BS differs between CPUs, but only if
CONFIG_ARM64_MTE is enabled.
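
For reference, a hedged sketch of reading the field (the helper name is
invented; BS is bits [3:0] of GMID_EL1):

  #include <asm/sysreg.h>

  static inline u64 gmid_bs(void)
  {
          /* BS determines the number of tags accessed by LDGM/STGM. */
          return read_sysreg_s(SYS_GMID_EL1) & 0xf;
  }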

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Suzuki K Poulose <Suzuki.Poulose@arm.com>
Link: https://lore.kernel.org/r/20210526193621.21559-3-catalin.marinas@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Change the cpuinfo_arm64 member type for some sysregs to u64
Catalin Marinas [Wed, 26 May 2021 19:36:20 +0000 (20:36 +0100)]
arm64: Change the cpuinfo_arm64 member type for some sysregs to u64

The architecture has been updated and the CTR_EL0, CNTFRQ_EL0,
DCZID_EL0, MIDR_EL1, REVIDR_EL1 registers are all 64-bit, even if most
of them have the top 32 bits RES0.

Change their type to u64 in struct cpuinfo_arm64.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Suzuki K Poulose <Suzuki.Poulose@arm.com>
Link: https://lore.kernel.org/r/20210526193621.21559-2-catalin.marinas@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64/sve: Skip flushing Z registers with 128 bit vectors
Mark Brown [Wed, 12 May 2021 15:11:31 +0000 (16:11 +0100)]
arm64/sve: Skip flushing Z registers with 128 bit vectors

When the SVE vector length is 128 bits there are no bits in the Z
registers which are not shared with the V registers, so we can skip
them when zeroing state not shared with FPSIMD. This results in a
minor performance improvement.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210512151131.27877-4-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64/sve: Use the sve_flush macros in sve_load_from_fpsimd_state()
Mark Brown [Wed, 12 May 2021 15:11:30 +0000 (16:11 +0100)]
arm64/sve: Use the sve_flush macros in sve_load_from_fpsimd_state()

This makes the code a bit clearer and, as a result, we can also make
the indentation more normal; there is no change to the generated code.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210512151131.27877-3-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64/sve: Split _sve_flush macro into separate Z and predicate flushes
Mark Brown [Wed, 12 May 2021 15:11:29 +0000 (16:11 +0100)]
arm64/sve: Split _sve_flush macro into separate Z and predicate flushes

Trivial refactoring to support further work, no change to generated code.

Signed-off-by: Mark Brown <broonie@kernel.org>
Reviewed-by: Dave Martin <Dave.Martin@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210512151131.27877-2-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
arm64: stacktrace: Relax frame record alignment requirement to 8 bytes
Peter Collingbourne [Wed, 26 May 2021 17:49:26 +0000 (10:49 -0700)]
arm64: stacktrace: Relax frame record alignment requirement to 8 bytes

The AAPCS places no requirements on the alignment of the frame
record. In theory it could be placed anywhere, although it seems
sensible to require it to be aligned to 8 bytes. With an upcoming
enhancement to tag-based KASAN Clang will begin creating frame records
located at an address that is only aligned to 8 bytes. Accommodate
such frame records in the stack unwinding code.
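
Schematically, the relaxed sanity check looks like this (simplified;
not the exact unwinder code):

  static int check_frame(unsigned long fp)
  {
          /*
           * The unwinder previously insisted on 16-byte alignment
           * (fp & 0xf); 8-byte alignment is all the AAPCS guarantees.
           */
          if (fp & 0x7)
                  return -EINVAL;
          return 0;
  }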

As pointed out by Mark Rutland, the userspace stack unwinding code
has the same problem, so fix it there as well.

Signed-off-by: Peter Collingbourne <pcc@google.com>
Link: https://linux-review.googlesource.com/id/Ia22c375230e67ca055e9e4bb639383567f7ad268
Acked-by: Andrey Konovalov <andreyknvl@gmail.com>
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210526174927.2477847-2-pcc@google.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Change the on_*stack functions to take a size argument
Peter Collingbourne [Wed, 26 May 2021 17:49:25 +0000 (10:49 -0700)]
arm64: Change the on_*stack functions to take a size argument

unwind_frame() was previously implicitly checking that the frame
record is in bounds of the stack by enforcing that FP is both aligned
to 16 and in bounds of the stack. Once the FP alignment requirement
is relaxed to 8 this will not be sufficient because it does not
account for the case where FP points to 8 bytes before the end of the
stack.

Make the check explicit by changing the on_*stack functions to take a
size argument and adjusting the callers to pass the appropriate sizes.
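
A schematic sketch of the tightened check (names simplified from the
actual stack-info helpers):

  static bool on_stack(unsigned long sp, unsigned long size,
                       unsigned long low, unsigned long high)
  {
          if (!low)
                  return false;
          /* The whole object [sp, sp + size) must fit on the stack. */
          if (sp < low || sp + size < sp || sp + size > high)
                  return false;
          return true;
  }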

Signed-off-by: Peter Collingbourne <pcc@google.com>
Link: https://linux-review.googlesource.com/id/Ib7a3eb3eea41b0687ffaba045ceb2012d077d8b4
Reviewed-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210526174927.2477847-1-pcc@google.com
Signed-off-by: Will Deacon <will@kernel.org>
Merge branch 'for-next/ffa' into for-next/smccc
Will Deacon [Wed, 26 May 2021 16:14:43 +0000 (17:14 +0100)]
Merge branch 'for-next/ffa' into for-next/smccc

Merge in SMCCC update from Sudeep, which is a branch shared with arm-soc
for the FF-A driver work.

* for-next/ffa:
  arm64: smccc: Add support for SMCCCv1.2 extended input/output registers

arm64: smccc: Add support for SMCCCv1.2 extended input/output registers
Sudeep Holla [Tue, 18 May 2021 16:36:18 +0000 (17:36 +0100)]
arm64: smccc: Add support for SMCCCv1.2 extended input/output registers

SMCCC v1.2 allows x8-x17 to be used as parameter registers and x4-x17
to be used as result registers in SMC64/HVC64. The Arm Firmware
Framework for Armv8-A specification makes use of x0-x7 as parameter
and result registers. There are other users, like Hyper-V, who intend
to use beyond x0-x7 as well.

The current SMCCC interface in the kernel uses just x0-x7 as parameter
and x0-x3 as result registers, as required by SMCCC v1.0. Let us add a
new interface to support this extended set of input/output registers,
namely x0-x17 as both parameter and result registers.
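
A usage sketch of the extended interface this patch adds (the function
ID and argument values are placeholders):

  #include <linux/arm-smccc.h>

  static void example_smccc_1_2_call(void)
  {
          struct arm_smccc_1_2_regs args = {
                  .a0 = 0xc4000042,       /* placeholder SMC64 function ID */
                  .a8 = 0x1234,           /* registers beyond x7 now usable */
          };
          struct arm_smccc_1_2_regs res;

          arm_smccc_1_2_smc(&args, &res);
          /* res.a0 through res.a17 carry the extended result set. */
  }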

Acked-by: Mark Rutland <mark.rutland@arm.com>
Tested-by: Michael Kelley <mikelley@microsoft.com>
Reviewed-by: Michael Kelley <mikelley@microsoft.com>
Cc: Will Deacon <will@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Sudeep Holla <sudeep.holla@arm.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210518163618.43950-1-sudeep.holla@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Rename arm64-internal cache maintenance functions
Fuad Tabba [Mon, 24 May 2021 08:30:01 +0000 (09:30 +0100)]
arm64: Rename arm64-internal cache maintenance functions

Although naming across the codebase isn't that consistent, it
tends to follow certain patterns. Moreover, the term "flush"
isn't defined in the Arm Architecture reference manual, and might
be interpreted to mean clean, invalidate, or both for a cache.

Rename arm64-internal functions to make the naming internally
consistent, as well as consistent with the Arm ARM, by specifying
whether an operation applies to the instruction cache, the data
cache, or both, and whether it is a clean, an invalidate, or both.
Also specify the point to which the operation applies, i.e., the
point of unification (PoU), coherency (PoC), or persistence (PoP).

This commit applies the following sed transformation to all files
under arch/arm64:

"s/\b__flush_cache_range\b/caches_clean_inval_pou_macro/g;"\
"s/\b__flush_icache_range\b/caches_clean_inval_pou/g;"\
"s/\binvalidate_icache_range\b/icache_inval_pou/g;"\
"s/\b__flush_dcache_area\b/dcache_clean_inval_poc/g;"\
"s/\b__inval_dcache_area\b/dcache_inval_poc/g;"\
"s/__clean_dcache_area_poc\b/dcache_clean_poc/g;"\
"s/\b__clean_dcache_area_pop\b/dcache_clean_pop/g;"\
"s/\b__clean_dcache_area_pou\b/dcache_clean_pou/g;"\
"s/\b__flush_cache_user_range\b/caches_clean_inval_user_pou/g;"\
"s/\b__flush_icache_all\b/icache_inval_all_pou/g;"

Note that __clean_dcache_area_poc is deliberately missing a word
boundary check at the beginning in order to match the efistub
symbols in image-vars.h.

Also note that, despite its name, __flush_icache_range operates
on both instruction and data caches. The name change here
reflects that.

No functional change intended.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-19-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Fix cache maintenance function comments
Fuad Tabba [Mon, 24 May 2021 08:30:00 +0000 (09:30 +0100)]
arm64: Fix cache maintenance function comments

Fix and expand comments for the cache maintenance functions in
cacheflush.h. Adds comments to functions that weren't described
before. Explains what the functions do using Arm Architecture
Reference Manual terminology.

No functional change intended.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-18-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: sync_icache_aliases to take end parameter instead of size
Fuad Tabba [Mon, 24 May 2021 08:29:59 +0000 (09:29 +0100)]
arm64: sync_icache_aliases to take end parameter instead of size

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
to specify the range in terms of start and end, as opposed to
start and size.
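
An illustrative call-site change (types simplified; not a specific
caller from the patch):

  static void example(unsigned long start, unsigned long len)
  {
          /* Before: sync_icache_aliases(start, len); */
          sync_icache_aliases(start, start + len);        /* exclusive end */
  }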

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-17-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: __clean_dcache_area_pou to take end parameter instead of size
Fuad Tabba [Mon, 24 May 2021 08:29:58 +0000 (09:29 +0100)]
arm64: __clean_dcache_area_pou to take end parameter instead of size

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
to specify the range in terms of start and end, as opposed to
start and size.

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-16-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: __clean_dcache_area_pop to take end parameter instead of size
Fuad Tabba [Mon, 24 May 2021 08:29:57 +0000 (09:29 +0100)]
arm64: __clean_dcache_area_pop to take end parameter instead of size

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
to specify the range in terms of start and end, as opposed to
start and size.

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-15-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: __clean_dcache_area_poc to take end parameter instead of size
Fuad Tabba [Mon, 24 May 2021 08:29:56 +0000 (09:29 +0100)]
arm64: __clean_dcache_area_poc to take end parameter instead of size

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
to specify the range in terms of start and end, as opposed to
start and size.

Because the code is shared with __dma_clean_area, it changes the
parameters for that as well. However, __dma_clean_area is local to
cache.S, so no other users are affected.

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-14-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: __flush_dcache_area to take end parameter instead of size
Fuad Tabba [Mon, 24 May 2021 08:29:55 +0000 (09:29 +0100)]
arm64: __flush_dcache_area to take end parameter instead of size

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
to specify the range in terms of start and end, as opposed to
start and size.

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-13-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: dcache_by_line_op to take end parameter instead of size
Fuad Tabba [Mon, 24 May 2021 08:29:54 +0000 (09:29 +0100)]
arm64: dcache_by_line_op to take end parameter instead of size

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
to specify the range in terms of start and end, as opposed to
start and size.

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-12-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: __inval_dcache_area to take end parameter instead of size
Fuad Tabba [Mon, 24 May 2021 08:29:53 +0000 (09:29 +0100)]
arm64: __inval_dcache_area to take end parameter instead of size

To be consistent with other functions with similar names and
functionality in cacheflush.h, cache.S, and cachetlb.rst, change
to specify the range in terms of start and end, as opposed to
start and size.

Because the code is shared with __dma_inv_area, it changes the
parameters for that as well. However, __dma_inv_area is local to
cache.S, so no other users are affected.

No functional change intended.

Reported-by: Will Deacon <will@kernel.org>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-11-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Fix comments to refer to correct function __flush_icache_range
Fuad Tabba [Mon, 24 May 2021 08:29:52 +0000 (09:29 +0100)]
arm64: Fix comments to refer to correct function __flush_icache_range

Many comments refer to the function flush_icache_range, where the
intent is in fact __flush_icache_range. Fix these comments to
refer to the intended function.

That's probably due to commit 3b8c9f1cdfc506e9 ("arm64: IPI each
CPU after invalidating the I-cache for kernel mappings"), which
renamed flush_icache_range() to __flush_icache_range() and added
a wrapper.

No functional change intended.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-10-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: Move documentation of dcache_by_line_op
Fuad Tabba [Mon, 24 May 2021 08:29:51 +0000 (09:29 +0100)]
arm64: Move documentation of dcache_by_line_op

The comment describing the dcache_by_line_op macro is placed right
before the macro preceding the one it describes, which is a bit
confusing. Move it to the macro it describes (dcache_by_line_op).

No functional change intended.

Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-9-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
arm64: assembler: remove user_alt
Fuad Tabba [Mon, 24 May 2021 08:29:50 +0000 (09:29 +0100)]
arm64: assembler: remove user_alt

user_alt isn't being used anymore. It's also simpler and clearer
to directly use alternative_insn and _cond_extable in-line when
needed.

Reported-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/linux-arm-kernel/20210520125735.GF17233@C02TD0UTHF1T.local/
Signed-off-by: Fuad Tabba <tabba@google.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-8-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: Downgrade flush_icache_range to invalidate
Fuad Tabba [Mon, 24 May 2021 08:29:49 +0000 (09:29 +0100)]
arm64: Downgrade flush_icache_range to invalidate

Since __flush_dcache_area is called immediately beforehand,
invalidate_icache_range is sufficient in this case.

Rewrite the comment to better explain the rationale behind the
cache maintenance operations used here.

No functional change intended.
Possible performance impact due to invalidating only the icache
rather than invalidating and cleaning both caches.

Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-7-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
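
A sketch of the resulting maintenance sequence (the caller is hypothetical;
the point is that a D-cache clean followed by an I-cache invalidate is enough
to make newly written instructions visible):

    extern void __flush_dcache_area(void *addr, unsigned long len);
    extern void invalidate_icache_range(unsigned long start, unsigned long end);

    static void sync_icache_sketch(void *addr, unsigned long len)
    {
            /* write back dirty data so instruction fetches see it */
            __flush_dcache_area(addr, len);

            /* discarding stale I-cache lines is then sufficient; a
             * second clean-and-invalidate would be redundant */
            invalidate_icache_range((unsigned long)addr,
                                    (unsigned long)addr + len);
    }
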
4 years agoarm64: Do not enable uaccess for invalidate_icache_range
Fuad Tabba [Mon, 24 May 2021 08:29:48 +0000 (09:29 +0100)]
arm64: Do not enable uaccess for invalidate_icache_range

invalidate_icache_range() works on kernel addresses, and doesn't
need uaccess. Remove the code that toggles uaccess_ttbr0_enable,
as well as the code that emits an entry into the exception table
(via the macro invalidate_icache_by_line).

Change the return type of invalidate_icache_range() from int (which
used to indicate a fault) to void, since it doesn't need uaccess
and won't fault. Note that the return value was never checked by
any of the callers.

No functional change intended.
Possible performance impact due to the reduced number of
instructions.

Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Will Deacon <will@kernel.org>
Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-6-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
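
Sketched as C prototypes (the suffixes are hypothetical, for illustration;
the body is assembly), the externally visible change is:

    /* before: returned a fault indication that no caller checked */
    int invalidate_icache_range_old(unsigned long start, unsigned long end);

    /* after: kernel addresses cannot fault here, so nothing to report */
    void invalidate_icache_range_new(unsigned long start, unsigned long end);
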
4 years agoarm64: Do not enable uaccess for flush_icache_range
Fuad Tabba [Mon, 24 May 2021 08:29:47 +0000 (09:29 +0100)]
arm64: Do not enable uaccess for flush_icache_range

__flush_icache_range works on kernel addresses, and doesn't need
uaccess. The existing code is a side-effect of its current
implementation with __flush_cache_user_range fallthrough.

Instead of falling through to share the code, use a common macro for
the two where the caller specifies an optional fixup label if
user access is needed. If provided, this label would be used to
generate an extable entry.

Simplify the code to use dcache_by_line_op, instead of
replicating much of its functionality.

No functional change intended.
Possible performance impact due to the reduced number of
instructions.

Reported-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Will Deacon <will@kernel.org>
Reported-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/linux-arch/20200511110014.lb9PEahJ4hVOYrbwIb_qUHXyNy9KQzNFdb_I3YlzY6A@z/
Link: https://lore.kernel.org/linux-arm-kernel/20210521121846.GB1040@C02TD0UTHF1T.local/
Signed-off-by: Fuad Tabba <tabba@google.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-5-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: Apply errata to swsusp_arch_suspend_exit
Fuad Tabba [Mon, 24 May 2021 08:29:46 +0000 (09:29 +0100)]
arm64: Apply errata to swsusp_arch_suspend_exit

The Arm errata covered by ARM64_WORKAROUND_CLEAN_CACHE require
that "dc cvau" instructions get promoted to "dc civac".

Reported-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-4-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
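
The promotion happens via boot-time instruction patching; a minimal sketch
using the arm64 ALTERNATIVE() inline-asm macro (simplified from the real
cache maintenance paths):

    #include <asm/alternative.h>
    #include <asm/cpucaps.h>

    static inline void clean_line_sketch(unsigned long addr)
    {
            /* "dc cvau" (clean to PoU) is patched into "dc civac"
             * (clean+invalidate to PoC) on CPUs with the erratum */
            asm volatile(ALTERNATIVE("dc cvau, %0", "dc civac, %0",
                                     ARM64_WORKAROUND_CLEAN_CACHE)
                         : : "r" (addr) : "memory");
    }
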
4 years agoarm64: assembler: add conditional cache fixups
Mark Rutland [Mon, 24 May 2021 08:29:45 +0000 (09:29 +0100)]
arm64: assembler: add conditional cache fixups

It would be helpful if we could use both `dcache_by_line_op` and
`invalidate_icache_by_line` for user memory without accidentally fixing
up unexpected faults when performing maintenance on kernel addresses.

Let's make this possible by having both macros take an optional fixup
label, and only generating an extable entry if a label is provided.

At the same time, let's make the labels used globally unique,
using \@ as we do for other macros.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Fuad Tabba <tabba@google.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-3-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: assembler: replace `kaddr` with `addr`
Mark Rutland [Mon, 24 May 2021 08:29:44 +0000 (09:29 +0100)]
arm64: assembler: replace `kaddr` with `addr`

The `__dcache_op_workaround_clean_cache` and `dcache_by_line_op` macros
are only expected to be used on kernel memory, without a user fault
fixup, and so we named their address variables `kaddr` to make this
clear.

Subsequent patches will modify these to also work on user memory with an
(optional) user fault fixup, where `kaddr` won't make as much sense. To
aid the legibility of patches, this patch (only) replaces `kaddr` with
`addr` as a preparatory step.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Signed-off-by: Fuad Tabba <tabba@google.com>
Cc: Ard Biesheuvel <ardb@kernel.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Fuad Tabba <tabba@google.com>
Cc: Will Deacon <will@kernel.org>
Reviewed-by: Ard Biesheuvel <ardb@kernel.org>
Link: https://lore.kernel.org/r/20210524083001.2586635-2-tabba@google.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64/mm: Make vmemmap_free() available only with CONFIG_MEMORY_HOTPLUG
Anshuman Khandual [Mon, 24 May 2021 07:40:30 +0000 (13:10 +0530)]
arm64/mm: Make vmemmap_free() available only with CONFIG_MEMORY_HOTPLUG

vmemmap_free() callsites (mm/sparse.c) and declaration (include/linux/mm.h)
are protected with CONFIG_MEMORY_HOTPLUG. This function is not required if
CONFIG_MEMORY_HOTPLUG is not enabled. Hence move the config wrapper outside
the function definition.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/1621842030-23256-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
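
The shape of the change, sketched with the body elided:

    #ifdef CONFIG_MEMORY_HOTPLUG
    void vmemmap_free(unsigned long start, unsigned long end,
                      struct vmem_altmap *altmap)
    {
            /* ... tear down the vmemmap backing for [start, end) ... */
    }
    #endif
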
4 years agokasan: speed up mte_set_mem_tag_range
Evgenii Stepanov [Fri, 21 May 2021 01:00:23 +0000 (18:00 -0700)]
kasan: speed up mte_set_mem_tag_range

Use DC GVA / DC GZVA to speed up KASan memory tagging in HW tags mode.

The first cacheline is always tagged using STG/STZG even if the address is
cacheline-aligned, as benchmarks show it is faster than a conditional
branch.

Signed-off-by: Evgenii Stepanov <eugenis@google.com>
Co-developed-by: Peter Collingbourne <pcc@google.com>
Signed-off-by: Peter Collingbourne <pcc@google.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210521010023.3244784-1-eugenis@google.com
Signed-off-by: Will Deacon <will@kernel.org>
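
A hedged C sketch of the strategy (the real implementation is arm64
assembly; a 64-byte line is assumed, and the address is assumed to already
carry the allocation tag):

    static void mte_tag_range_sketch(unsigned long addr, unsigned long size)
    {
            const unsigned long line = 64;
            unsigned long end = addr + size;
            unsigned long first = (addr & ~(line - 1)) + line;

            if (first > end)
                    first = end;

            /* first cache line: 16-byte STG granules, even when addr
             * is line-aligned (faster than a conditional branch) */
            for (; addr < first; addr += 16)
                    asm volatile("stg %0, [%0]" : : "r" (addr) : "memory");

            /* whole lines: DC GVA tags an entire line per instruction
             * (DC GZVA would additionally zero the data, like STZG) */
            for (; addr + line <= end; addr += line)
                    asm volatile("dc gva, %0" : : "r" (addr) : "memory");

            /* trailing partial line, back to STG granules */
            for (; addr < end; addr += 16)
                    asm volatile("stg %0, [%0]" : : "r" (addr) : "memory");
    }
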
4 years agokselftest/arm64: Add missing newline to SVE test skipping output
Mark Brown [Tue, 18 May 2021 16:33:31 +0000 (17:33 +0100)]
kselftest/arm64: Add missing newline to SVE test skipping output

The newline is expected to come from the caller but got missed for this
test.

Signed-off-by: Mark Brown <broonie@kernel.org>
Link: https://lore.kernel.org/r/20210518163331.38268-1-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64: Document requirement for access to FEAT_HCX
Mark Brown [Wed, 12 May 2021 16:23:50 +0000 (17:23 +0100)]
arm64: Document requirement for access to FEAT_HCX

v8.7 of the architecture introduced FEAT_HCX, which adds an additional
hypervisor configuration register, HCRX_EL2. Even though Linux does not
currently make use of this feature, let's document that the EL3 trap for
access to the register should be disabled so that we are able to make
use of it in the future.

Signed-off-by: Mark Brown <broonie@kernel.org>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/20210512162350.20349-1-broonie@kernel.org
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoarm64/mm: Remove [PUD|PMD]_TABLE_BIT from [pud|pmd]_bad()
Anshuman Khandual [Mon, 10 May 2021 11:07:51 +0000 (16:37 +0530)]
arm64/mm: Remove [PUD|PMD]_TABLE_BIT from [pud|pmd]_bad()

Semantics-wise, [pud|pmd]_bad() have always implied that a given [PUD|PMD]
entry does not have a pointer to the next-level page table. This was made
clear in commit a1c76574f345 ("arm64: mm: use *_sect to check for
section maps"). Hence explicitly check for a table entry rather than just
testing a single bit. This basically redefines [pud|pmd]_bad() in terms of
[pud|pmd]_table(), making the semantics clear.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Acked-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Catalin Marinas <catalin.marinas@arm.com>
Link: https://lore.kernel.org/r/1620644871-26280-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
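
The redefinition, sketched (simplified from arch/arm64/include/asm/pgtable.h):

    /* "bad" now means exactly "not a pointer to a next-level table",
     * rather than testing the TABLE_BIT alone */
    #define pmd_bad(pmd)    (!pmd_table(pmd))
    #define pud_bad(pud)    (!pud_table(pud))
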
4 years agodrivers/perf: arm-cci: Fix checkpatch spacing error
Junhao He [Tue, 11 May 2021 12:27:34 +0000 (20:27 +0800)]
drivers/perf: arm-cci: Fix checkpatch spacing error

Fix some coding style issues reported by checkpatch.pl, including
the following types:

ERROR: need consistent spacing around '-' (ctx:WxV)
ERROR: space required before the open parenthesis '('

Signed-off-by: Junhao He <hejunhao2@hisilicon.com>
Signed-off-by: Jay Fang <f.fangjian@huawei.com>
Link: https://lore.kernel.org/r/1620736054-58412-5-git-send-email-f.fangjian@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
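
The kind of fix involved, illustrated on hypothetical code:

    extern void use(int a);

    void example(int b, int c, int x)
    {
            int a = b - c;  /* was "b-c": consistent spacing around '-' */

            if (x)          /* was "if(x)": space required before '(' */
                    use(a);
    }
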
4 years agodrivers/perf: arm-cmn: Add space after ','
Junhao He [Tue, 11 May 2021 12:27:33 +0000 (20:27 +0800)]
drivers/perf: arm-cmn: Add space after ','

Fix a warning from checkpatch.pl.

ERROR: space required after that ',' (ctx:VxV)

Signed-off-by: Junhao He <hejunhao2@hisilicon.com>
Signed-off-by: Jay Fang <f.fangjian@huawei.com>
Link: https://lore.kernel.org/r/1620736054-58412-4-git-send-email-f.fangjian@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agodrivers/perf: arm_pmu: Fix some coding style issues
Junhao He [Tue, 11 May 2021 12:27:32 +0000 (20:27 +0800)]
drivers/perf: arm_pmu: Fix some coding style issues

Fix some coding style issues reported by checkpatch.pl, including
the following types:

ERROR: spaces required around that '=' (ctx:VxW)
WARNING: Possible unnecessary 'out of memory' message

Signed-off-by: Junhao He <hejunhao2@hisilicon.com>
Signed-off-by: Jay Fang <f.fangjian@huawei.com>
Link: https://lore.kernel.org/r/1620736054-58412-3-git-send-email-f.fangjian@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agodrivers/perf: arm_spe_pmu: Fix some coding style issues
Junhao He [Tue, 11 May 2021 12:27:31 +0000 (20:27 +0800)]
drivers/perf: arm_spe_pmu: Fix some coding style issues

Fix some coding style issues reported by checkpatch.pl, including
the following types:

WARNING: void function return statements are not generally useful
WARNING: Possible unnecessary 'out of memory' message

Signed-off-by: Junhao He <hejunhao2@hisilicon.com>
Signed-off-by: Jay Fang <f.fangjian@huawei.com>
Link: https://lore.kernel.org/r/1620736054-58412-2-git-send-email-f.fangjian@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
4 years agodrivers/perf: Remove redundant dev_err call in tx2_uncore_pmu_init_dev()
Zou Wei [Tue, 11 May 2021 06:42:44 +0000 (14:42 +0800)]
drivers/perf: Remove redundant dev_err call in tx2_uncore_pmu_init_dev()

There is already an error message within devm_ioremap_resource,
so remove the dev_err call to avoid a redundant error message.

Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Zou Wei <zou_wei@huawei.com>
Link: https://lore.kernel.org/r/1620715364-107460-1-git-send-email-zou_wei@huawei.com
Signed-off-by: Will Deacon <will@kernel.org>
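
The resulting pattern, sketched (the function name is hypothetical):

    #include <linux/platform_device.h>
    #include <linux/io.h>
    #include <linux/err.h>

    static int pmu_init_dev_sketch(struct platform_device *pdev,
                                   struct resource *res)
    {
            void __iomem *base = devm_ioremap_resource(&pdev->dev, res);

            /* devm_ioremap_resource() already logs on failure, so the
             * caller only propagates the error code */
            if (IS_ERR(base))
                    return PTR_ERR(base);

            return 0;
    }
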
4 years agoarm64/mm: Validate CONFIG_PGTABLE_LEVELS
Anshuman Khandual [Mon, 10 May 2021 12:22:06 +0000 (17:52 +0530)]
arm64/mm: Validate CONFIG_PGTABLE_LEVELS

CONFIG_PGTABLE_LEVELS is statically defined (in arch/arm64/Kconfig)
depending on the page size and the requested virtual address range. In
order to validate this page table level selection, add a BUILD_BUG_ON()
based on the existing formula ARM64_HW_PGTABLE_LEVELS(). This helps
protect against inadvertent changes to the CONFIG_PGTABLE_LEVELS
selection.

Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Cc: linux-kernel@vger.kernel.org
Signed-off-by: Anshuman Khandual <anshuman.khandual@arm.com>
Link: https://lore.kernel.org/r/1620649326-24115-1-git-send-email-anshuman.khandual@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
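
A sketch of the check (its placement here is hypothetical;
ARM64_HW_PGTABLE_LEVELS() computes the number of levels from the configured
VA bits):

    static void __init validate_pgtable_levels_sketch(void)
    {
            BUILD_BUG_ON(CONFIG_PGTABLE_LEVELS !=
                         ARM64_HW_PGTABLE_LEVELS(CONFIG_ARM64_VA_BITS));
    }
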
4 years agoarm64: Implement stack trace termination record
Madhavan T. Venkataraman [Mon, 10 May 2021 11:00:26 +0000 (12:00 +0100)]
arm64: Implement stack trace termination record

Reliable stacktracing requires that we identify when a stacktrace is
terminated early. We can do this by ensuring all tasks have a final
frame record at a known location on their task stack, and checking
that this is the final frame record in the chain.

We'd like to use task_pt_regs(task)->stackframe as the final frame
record, as this is already set up upon exception entry from EL0. For
kernel tasks we need to consistently reserve the pt_regs and point x29
at this, which we can do with small changes to __primary_switched,
__secondary_switched, and copy_process().

Since the final frame record must be at a specific location, we must
create the final frame record in __primary_switched and
__secondary_switched rather than leaving this to start_kernel and
secondary_start_kernel. Thus, __primary_switched and
__secondary_switched will now show up in stacktraces for the idle tasks.

Since the final frame record is now identified by its location rather
than by its contents, we identify it at the start of unwind_frame(),
before we read any values from it.

External debuggers may terminate the stack trace when FP == 0. In the
pt_regs->stackframe, the PC is 0 as well. So, stack traces taken in the
debugger may print an extra record 0x0 at the end. While this is not
pretty, this does not do any harm. This is a small price to pay for
having reliable stack trace termination in the kernel. That said, gdb
does not show the extra record probably because it uses DWARF and not
frame pointers for stack traces.

Signed-off-by: Madhavan T. Venkataraman <madvenka@linux.microsoft.com>
Reviewed-by: Mark Brown <broonie@kernel.org>
[Mark: rebase, use ASM_BUG(), update comments, update commit message]
Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210510110026.18061-1-mark.rutland@arm.com
Signed-off-by: Will Deacon <will@kernel.org>
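
A hedged sketch of the location-based check in the unwinder (simplified;
error handling and the actual record loads are elided):

    static int unwind_frame_sketch(struct task_struct *tsk,
                                   struct stackframe *frame)
    {
            unsigned long fp = frame->fp;

            /* the terminal record is identified by where it lives,
             * before any of its (zeroed) contents are read */
            if (fp == (unsigned long)task_pt_regs(tsk)->stackframe)
                    return -ENOENT;

            /* ... otherwise load the next fp/pc pair from the record ... */
            return 0;
    }
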
4 years agoperf/hisi: Use irq_set_affinity()
Thomas Gleixner [Tue, 18 May 2021 09:17:33 +0000 (11:17 +0200)]
perf/hisi: Use irq_set_affinity()

These drivers use irq_set_affinity_hint() to set the affinity for the PMU
interrupts, which relies on the undocumented side effect that this function
actually sets the affinity under the hood.

Setting a hint is clearly not a guarantee, and for these PMU interrupts an
affinity hint, which is supposed to guide userspace for setting affinity,
is beyond pointless, because the affinity of these interrupts cannot be
modified from user space.

Aside from that, the error checks are bogus because the only error which is
returned from irq_set_affinity_hint() is when there is no irq descriptor
for the interrupt number, but not when the affinity set fails. That's on
purpose because the hint can point to an offline CPU.

Replace the mindless abuse with irq_set_affinity().

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Shaokun Zhang <zhangshaokun@hisilicon.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210518093118.813375875@linutronix.de
Signed-off-by: Will Deacon <will@kernel.org>
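
The conversion itself is mechanical, and the same change applies in the
sibling commits below; sketched for one interrupt (the caller is
hypothetical):

    #include <linux/interrupt.h>
    #include <linux/cpumask.h>

    static int pmu_bind_irq_sketch(unsigned int irq, unsigned int cpu)
    {
            /* was: irq_set_affinity_hint(irq, cpumask_of(cpu)), which
             * moved the interrupt only as an undocumented side effect */
            return irq_set_affinity(irq, cpumask_of(cpu));
    }
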
4 years agoperf/imx_ddr: Use irq_set_affinity()
Thomas Gleixner [Tue, 18 May 2021 09:17:32 +0000 (11:17 +0200)]
perf/imx_ddr: Use irq_set_affinity()

The driver uses irq_set_affinity_hint() to set the affinity for the PMU
interrupts, which relies on the undocumented side effect that this function
actually sets the affinity under the hood.

Setting a hint is clearly not a guarantee, and for these PMU interrupts an
affinity hint, which is supposed to guide userspace for setting affinity,
is beyond pointless, because the affinity of these interrupts cannot be
modified from user space.

Aside from that, the error checks are bogus because the only error which is
returned from irq_set_affinity_hint() is when there is no irq descriptor
for the interrupt number, but not when the affinity set fails. That's on
purpose because the hint can point to an offline CPU.

Replace the mindless abuse with irq_set_affinity().

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Frank Li <Frank.li@nxp.com>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: Shawn Guo <shawnguo@kernel.org>
Cc: Sascha Hauer <s.hauer@pengutronix.de>
Cc: Pengutronix Kernel Team <kernel@pengutronix.de>
Cc: Fabio Estevam <festevam@gmail.com>
Cc: NXP Linux Team <linux-imx@nxp.com>
Cc: linux-arm-kernel@lists.infradead.org
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210518093118.699566062@linutronix.de
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoperf/arm-smmuv3: Use irq_set_affinity()
Thomas Gleixner [Tue, 18 May 2021 09:17:31 +0000 (11:17 +0200)]
perf/arm-smmuv3: Use irq_set_affinity()

The driver uses irq_set_affinity_hint() to set the affinity for the PMU
interrupts, which relies on the undocumented side effect that this function
actually sets the affinity under the hood.

Setting a hint is clearly not a guarantee, and for these PMU interrupts an
affinity hint, which is supposed to guide userspace for setting affinity,
is beyond pointless, because the affinity of these interrupts cannot be
modified from user space.

Aside from that, the error checks are bogus because the only error which is
returned from irq_set_affinity_hint() is when there is no irq descriptor
for the interrupt number, but not when the affinity set fails. That's on
purpose because the hint can point to an offline CPU.

Replace the mindless abuse with irq_set_affinity().

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210518093118.603636289@linutronix.de
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoperf/arm-dsu: Use irq_set_affinity()
Thomas Gleixner [Tue, 18 May 2021 09:17:30 +0000 (11:17 +0200)]
perf/arm-dsu: Use irq_set_affinity()

The driver uses irq_set_affinity_hint() to set the affinity for the PMU
interrupts, which relies on the undocumented side effect that this function
actually sets the affinity under the hood.

Setting a hint is clearly not a guarantee, and for these PMU interrupts an
affinity hint, which is supposed to guide userspace for setting affinity,
is beyond pointless, because the affinity of these interrupts cannot be
modified from user space.

Aside from that, the error checks are bogus because the only error which is
returned from irq_set_affinity_hint() is when there is no irq descriptor
for the interrupt number, but not when the affinity set fails. That's on
purpose because the hint can point to an offline CPU.

Replace the mindless abuse with irq_set_affinity().

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210518093118.505110632@linutronix.de
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoperf/arm-dmc620: Use irq_set_affinity()
Thomas Gleixner [Tue, 18 May 2021 09:17:29 +0000 (11:17 +0200)]
perf/arm-dmc620: Use irq_set_affinity()

The driver uses irq_set_affinity_hint() to set the affinity for the PMU
interrupts, which relies on the undocumented side effect that this function
actually sets the affinity under the hood.

Setting a hint is clearly not a guarantee, and for these PMU interrupts an
affinity hint, which is supposed to guide userspace for setting affinity,
is beyond pointless, because the affinity of these interrupts cannot be
modified from user space.

Aside from that, the error checks are bogus because the only error which is
returned from irq_set_affinity_hint() is when there is no irq descriptor
for the interrupt number, but not when the affinity set fails. That's on
purpose because the hint can point to an offline CPU.

Replace the mindless abuse with irq_set_affinity().

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210518093118.395086573@linutronix.de
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoperf/arm-cmn: Use irq_set_affinity()
Thomas Gleixner [Tue, 18 May 2021 09:17:28 +0000 (11:17 +0200)]
perf/arm-cmn: Use irq_set_affinity()

The driver uses irq_set_affinity_hint() to set the affinity for the PMU
interrupts, which relies on the undocumented side effect that this function
actually sets the affinity under the hood.

Setting a hint is clearly not a guarantee, and for these PMU interrupts an
affinity hint, which is supposed to guide userspace for setting affinity,
is beyond pointless, because the affinity of these interrupts cannot be
modified from user space.

Aside from that, the error checks are bogus because the only error which is
returned from irq_set_affinity_hint() is when there is no irq descriptor
for the interrupt number, but not when the affinity set fails. That's on
purpose because the hint can point to an offline CPU.

Replace the mindless abuse with irq_set_affinity().

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210518093118.277228577@linutronix.de
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoperf/arm-ccn: Use irq_set_affinity()
Thomas Gleixner [Tue, 18 May 2021 09:17:27 +0000 (11:17 +0200)]
perf/arm-ccn: Use irq_set_affinity()

The driver uses irq_set_affinity_hint() to set the affinity for the PMU
interrupts, which relies on the undocumented side effect that this function
actually sets the affinity under the hood.

Setting a hint is clearly not a guarantee, and for these PMU interrupts an
affinity hint, which is supposed to guide userspace for setting affinity,
is beyond pointless, because the affinity of these interrupts cannot be
modified from user space.

Aside from that, the error checks are bogus because the only error which is
returned from irq_set_affinity_hint() is when there is no irq descriptor
for the interrupt number, but not when the affinity set fails. That's on
purpose because the hint can point to an offline CPU.

Replace the mindless abuse with irq_set_affinity().

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will@kernel.org>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-arm-kernel@lists.infradead.org
Acked-by: Mark Rutland <mark.rutland@arm.com>
Link: https://lore.kernel.org/r/20210518093118.128250213@linutronix.de
Signed-off-by: Will Deacon <will@kernel.org>
4 years agoMerge tag 'irq-export-set-affinity' of git://git.kernel.org/pub/scm/linux/kernel...
Will Deacon [Mon, 24 May 2021 09:58:14 +0000 (10:58 +0100)]
Merge tag 'irq-export-set-affinity' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip into for-next/perf

Export irq_set_affinity() for cleaning up drivers/perf

Pull export of irq_set_affinity() from Thomas Gleixner, so we can convert
all new and existing Arm PMU drivers to the new interface.

* tag 'irq-export-set-affinity' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
  genirq: Export affinity setter for modules