iommu: arm-smmu: Fix Tegra workaround for PAGE_SIZE mappings
PAGE_SIZE can be 16KB for Tegra, which is not supported by the MMU-500 on
either Tegra194 or Tegra234. Retain only the valid granularities from
pgsize_bitmap, which would be either 4KB or 64KB.
Will Deacon [Fri, 12 Jul 2024 15:57:47 +0000 (16:57 +0100)]
Merge branch 'iommu/iommufd/paging-domain-alloc' into iommu/next
* iommu/iommufd/paging-domain-alloc:
RDMA/usnic: Use iommu_paging_domain_alloc()
wifi: ath11k: Use iommu_paging_domain_alloc()
wifi: ath10k: Use iommu_paging_domain_alloc()
drm/msm: Use iommu_paging_domain_alloc()
vhost-vdpa: Use iommu_paging_domain_alloc()
vfio/type1: Use iommu_paging_domain_alloc()
iommufd: Use iommu_paging_domain_alloc()
iommu: Add iommu_paging_domain_alloc() interface
Will Deacon [Fri, 12 Jul 2024 15:57:42 +0000 (16:57 +0100)]
Merge branch 'iommu/iommufd/attach-handles' into iommu/next
* iommu/iommufd/attach-handles:
iommu: Extend domain attach group with handle support
iommu: Add attach handle to struct iopf_group
iommu: Remove sva handle list
iommu: Introduce domain attachment handle
Will Deacon [Fri, 12 Jul 2024 15:53:58 +0000 (16:53 +0100)]
Merge branch 'iommu/intel/vt-d' into iommu/next
* iommu/intel/vt-d:
iommu/vt-d: Fix identity map bounds in si_domain_init()
iommu/vt-d: Fix aligned pages in calculate_psi_aligned_address()
iommu/vt-d: Limit max address mask to MAX_AGAW_PFN_WIDTH
iommu/vt-d: Refactor PCI PRI enabling/disabling callbacks
iommu/vt-d: Add helper to flush caches for context change
iommu/vt-d: Add helper to allocate paging domain
iommu/vt-d: Downgrade warning for pre-enabled IR
iommu/vt-d: Remove control over Execute-Requested requests
iommu/vt-d: Remove comment for def_domain_type
iommu/vt-d: Handle volatile descriptor status read
iommu/vt-d: Use try_cmpxchg64() in intel_pasid_get_entry()
Will Deacon [Fri, 12 Jul 2024 15:53:45 +0000 (16:53 +0100)]
Merge branch 'iommu/arm/smmu' into iommu/next
* iommu/arm/smmu: (32 commits)
iommu: Move IOMMU_DIRTY_NO_CLEAR define
iommu/arm-smmu-qcom: Register the TBU driver in qcom_smmu_impl_init
iommu/arm-smmu-v3: Enable HTTU for stage1 with io-pgtable mapping
iommu/arm-smmu-v3: Add support for dirty tracking in domain alloc
iommu/io-pgtable-arm: Add read_and_clear_dirty() support
iommu/arm-smmu-v3: Add feature detection for HTTU
iommu/arm-smmu-v3: Add support for domain_alloc_user fn
iommu/arm-smmu-qcom: record reason for deferring probe
iommu/arm-smmu: Pretty-print context fault related regs
iommu/arm-smmu-qcom-debug: Do not print for handled faults
iommu/arm-smmu: Add CB prefix to register bitfields
dt-bindings: arm-smmu: Add X1E80100 GPU SMMU
iommu/arm-smmu-v3: add missing MODULE_DESCRIPTION() macro
iommu/arm-smmu-v3: Shrink the strtab l1_desc array
iommu/arm-smmu-v3: Do not zero the strtab twice
iommu/arm-smmu-v3: Allow setting a S1 domain to a PASID
iommu/arm-smmu-v3: Allow a PASID to be set when RID is IDENTITY/BLOCKED
iommu/arm-smmu-v3: Test the STE S1DSS functionality
iommu/arm-smmu-v3: Allow IDENTITY/BLOCKED to be set while PASID is used
iommu/arm-smmu-v3: Put the SVA mmu notifier in the smmu_domain
...
Jon Pan-Doh [Tue, 9 Jul 2024 23:49:13 +0000 (16:49 -0700)]
iommu/vt-d: Fix identity map bounds in si_domain_init()
Intel IOMMU operates on inclusive bounds (both generally as well as in
iommu_domain_identity_map()). Meanwhile, for_each_mem_pfn_range() uses an
exclusive bound for end_pfn. This creates an off-by-one error when
switching between the two.
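A hedged sketch of the fix's shape (helper names follow the surrounding
text and should be treated as assumptions): the exclusive end_pfn is
converted to an inclusive bound before the call.

	/*
	 * for_each_mem_pfn_range() yields [start_pfn, end_pfn) while the
	 * identity-map helper expects an inclusive last pfn, hence the -1.
	 */
	for_each_mem_pfn_range(i, nid, &start_pfn, &end_pfn, NULL) {
		ret = iommu_domain_identity_map(si_domain,
				mm_to_dma_pfn_start(start_pfn),
				mm_to_dma_pfn_end(end_pfn - 1));
		if (ret)
			return ret;
	}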
Fixes: c5395d5c4a82 ("intel-iommu: Clean up iommu_domain_identity_map()") Signed-off-by: Jon Pan-Doh <pandoh@google.com> Tested-by: Sudheer Dantuluri <dantuluris@google.com> Suggested-by: Gary Zibrat <gzibrat@google.com> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20240709234913.2749386-1-pandoh@google.com Signed-off-by: Will Deacon <will@kernel.org>
Lu Baolu [Tue, 9 Jul 2024 15:26:43 +0000 (23:26 +0800)]
iommu/vt-d: Fix aligned pages in calculate_psi_aligned_address()
The helper calculate_psi_aligned_address() is used to convert an arbitrary
range into a size-aligned one.
The aligned_pages variable is calculated from the input start and end, but
it is not adjusted when the start pfn is unaligned and the mask is widened,
which results in an incorrect number of pages being returned.
The number of pages is used by qi_flush_piotlb() to flush caches for the
first-stage translation. With the wrong number of pages, the cache is not
synchronized, leading to inconsistencies in some cases.
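A hedged sketch of the shape of the fix (variable names are assumptions,
and widen_mask() is a hypothetical stand-in for the driver's mask
computation):

	unsigned long aligned_pages = __roundup_pow_of_two(pages);
	unsigned long mask = ilog2(aligned_pages);

	if (start_pfn & (aligned_pages - 1)) {
		/* unaligned start: widen the mask so the flush covers the range */
		mask = widen_mask(start_pfn, pages, mask);
		/* the fix: recompute the count; a stale count under-flushes */
		aligned_pages = 1UL << mask;
	}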
Fixes: c4d27ffaa8eb ("iommu/vt-d: Add cache tag invalidation helpers") Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20240709152643.28109-3-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
Lu Baolu [Tue, 9 Jul 2024 15:26:42 +0000 (23:26 +0800)]
iommu/vt-d: Limit max address mask to MAX_AGAW_PFN_WIDTH
Address mask specifies the number of low order bits of the address field
that must be masked for the invalidation operation.
Since the masked address bits start at bit 12, the max address mask should
be MAX_AGAW_PFN_WIDTH, as defined in Table 19 ("Invalidate Descriptor
Address Mask Encodings") of the spec.
Limit the max address mask returned from calculate_psi_aligned_address()
to MAX_AGAW_PFN_WIDTH to prevent potential integer overflow in the code
that consumes the mask.
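A hedged illustration of the kind of arithmetic that can overflow (not
necessarily the driver's exact snippet):

	/* encode the invalidation size in the low address bits */
	addr |= (1ULL << (VTD_PAGE_SHIFT + mask - 1)) - 1;
	/*
	 * With VTD_PAGE_SHIFT == 12, a mask wider than MAX_AGAW_PFN_WIDTH
	 * pushes the shift count toward 64, which is undefined for a u64.
	 */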
The Documentation/userspace-api/iommu.rst file has become outdated due
to the removal of associated structures and APIs.
Specifically, structures such as iommu_cache_invalidate_info and the guest
pasid related uapi were removed in commit 0c9f17877891 ("iommu:
Remove guest pasid related interfaces and definitions").
And the corresponding uapi/linux/iommu.h file was removed in
commit 00a9bc607043 ("iommu: Move iommu fault data to
linux/iommu.h").
Will Deacon [Thu, 4 Jul 2024 14:13:53 +0000 (15:13 +0100)]
Merge branch 'for-joerg/arm-smmu/updates' into for-joerg/arm-smmu/next
* for-joerg/arm-smmu/updates: (29 commits)
iommu/arm-smmu-qcom: Register the TBU driver in qcom_smmu_impl_init
iommu/arm-smmu-v3: Enable HTTU for stage1 with io-pgtable mapping
iommu/arm-smmu-v3: Add support for dirty tracking in domain alloc
iommu/io-pgtable-arm: Add read_and_clear_dirty() support
iommu/arm-smmu-v3: Add feature detection for HTTU
iommu/arm-smmu-v3: Add support for domain_alloc_user fn
iommu/arm-smmu-qcom: record reason for deferring probe
iommu/arm-smmu: Pretty-print context fault related regs
iommu/arm-smmu-qcom-debug: Do not print for handled faults
iommu/arm-smmu: Add CB prefix to register bitfields
iommu/arm-smmu-v3: add missing MODULE_DESCRIPTION() macro
iommu/arm-smmu-v3: Shrink the strtab l1_desc array
iommu/arm-smmu-v3: Do not zero the strtab twice
iommu/arm-smmu-v3: Allow setting a S1 domain to a PASID
iommu/arm-smmu-v3: Allow a PASID to be set when RID is IDENTITY/BLOCKED
iommu/arm-smmu-v3: Test the STE S1DSS functionality
iommu/arm-smmu-v3: Allow IDENTITY/BLOCKED to be set while PASID is used
iommu/arm-smmu-v3: Put the SVA mmu notifier in the smmu_domain
iommu/arm-smmu-v3: Keep track of arm_smmu_master_domain for SVA
iommu/arm-smmu-v3: Make SVA allocate a normal arm_smmu_domain
...
Jean-Philippe Brucker [Fri, 7 Jun 2024 10:54:15 +0000 (11:54 +0100)]
iommu/of: Support ats-supported device-tree property
Device-tree declares whether a PCI root-complex supports ATS by setting
the "ats-supported" property. Copy this flag into device fwspec to let
IOMMU drivers quickly check if they can enable ATS for a device.
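A minimal sketch of the idea, assuming the pre-existing
IOMMU_FWSPEC_PCI_RC_ATS flag is the one being set (the exact parsing site
is an assumption):

	struct iommu_fwspec *fwspec = dev_iommu_fwspec_get(dev);

	/* copy the firmware-declared capability into the fwspec so IOMMU
	 * drivers can test it cheaply when deciding to enable ATS */
	if (fwspec && of_property_read_bool(rc_node, "ats-supported"))
		fwspec->flags |= IOMMU_FWSPEC_PCI_RC_ATS;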
Tested-by: Ketan Patil <ketanp@nvidia.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Liviu Dudau <liviu.dudau@arm.com> Reviewed-by: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Link: https://lore.kernel.org/r/20240607105415.2501934-4-jean-philippe@linaro.org Signed-off-by: Will Deacon <will@kernel.org>
Add a way for firmware to tell the OS that ATS is supported by the PCI
root complex. An endpoint with ATS enabled may send Translation Requests
and Translated Memory Requests, which look just like Normal Memory
Requests with a non-zero AT field. So a root controller that ignores the
AT field may simply forward the request to the IOMMU as a Normal Memory
Request, which could end badly. In any case, the endpoint will be
unusable.
The ats-supported property allows the OS to only enable ATS in endpoints
if the root controller can handle ATS requests. Only add the property to
pcie-host-ecam-generic for the moment. For non-generic root controllers,
availability of ATS can be inferred from the compatible string.
Robin Murphy [Tue, 2 Jul 2024 11:40:51 +0000 (12:40 +0100)]
iommu: Remove iommu_fwspec ops
The ops in iommu_fwspec are only needed for the early configuration and
probe process, and by now are easy enough to derive on-demand in those
couple of places which need them, so remove the redundant stored copy.
Robin Murphy [Tue, 2 Jul 2024 11:40:50 +0000 (12:40 +0100)]
OF: Simplify of_iommu_configure()
We no longer have a notion of partially-initialised fwspecs existing,
and we also no longer need to use an iommu_ops pointer to return status
to of_dma_configure(). Clean up the remains of those, which lends itself
to clarifying the logic around the dma_range_map allocation as well.
Robin Murphy [Tue, 2 Jul 2024 11:40:48 +0000 (12:40 +0100)]
iommu: Resolve fwspec ops automatically
There's no real need for callers to resolve ops from a fwnode in order
to then pass both to iommu_fwspec_init() - it's simpler and more sensible
for that to resolve the ops itself. This in turn means we can centralise
the notion of checking for a present driver, and enforce that fwspecs
aren't allocated unless and until we know they will be usable.
Also use this opportunity to modernise with some "new" helpers that
arrived shortly after this code was first written; the generic
fwnode_handle_get() clears up that ugly get/put mismatch, while
of_fwnode_handle() can now abstract those open-coded dereferences.
Robin Murphy [Tue, 2 Jul 2024 11:40:47 +0000 (12:40 +0100)]
iommu/mediatek-v1: Clean up redundant fwspec checks
The driver explicitly clears any existing fwspec before calling
mtk_iommu_v1_create_mapping(), but even if it didn't, the checks it's
doing there duplicate what iommu_fwspec_init() would do anyway. Clean
them up.
Lu Baolu [Mon, 10 Jun 2024 08:55:48 +0000 (16:55 +0800)]
RDMA/usnic: Use iommu_paging_domain_alloc()
usnic_uiom_alloc_pd() allocates a paging domain for a given device.
In this case, iommu_domain_alloc(dev->bus) is equivalent to
iommu_paging_domain_alloc(dev). Replace it, as iommu_domain_alloc() has
been deprecated.
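The replacement is mechanical; a hedged before/after sketch:

	/* before: bus-based allocation, now deprecated */
	pd->domain = iommu_domain_alloc(dev->bus);

	/* after: device-based allocation; the ERR_PTR() return convention
	 * (rather than NULL) is an assumption worth checking */
	pd->domain = iommu_paging_domain_alloc(dev);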
Lu Baolu [Mon, 10 Jun 2024 08:55:44 +0000 (16:55 +0800)]
wifi: ath10k: Use iommu_paging_domain_alloc()
An iommu domain is allocated in ath10k_fw_init() and is attached to
ar_snoc->fw.dev in the same function. Use iommu_paging_domain_alloc() to
make it explicit.
Lu Baolu [Mon, 10 Jun 2024 08:55:36 +0000 (16:55 +0800)]
iommufd: Use iommu_paging_domain_alloc()
If the iommu driver doesn't implement its domain_alloc_user callback,
iommufd_hwpt_paging_alloc() falls back to allocating an iommu paging
domain. Replace iommu_domain_alloc() with iommu_paging_domain_alloc() to
pass the device pointer along the path.
Lu Baolu [Mon, 10 Jun 2024 08:55:35 +0000 (16:55 +0800)]
iommu: Add iommu_paging_domain_alloc() interface
Commit 17de3f5fdd35 ("iommu: Retire bus ops") removed iommu ops from the
bus. The iommu subsystem no longer relies on the bus for operations, so
the bus parameter in iommu_domain_alloc() is no longer relevant.
Add a new interface named iommu_paging_domain_alloc(), which explicitly
indicates the allocation of a paging domain for DMA managed by a kernel
driver. The new interface takes a device pointer as its parameter, which
aligns better with the current iommu subsystem.
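A sketch of the interface as described; the ERR_PTR() return convention
is an assumption:

	/* allocate a paging domain for kernel-driver-managed DMA */
	struct iommu_domain *iommu_paging_domain_alloc(struct device *dev);

	/* typical call site */
	domain = iommu_paging_domain_alloc(dev);
	if (IS_ERR(domain))
		return PTR_ERR(domain);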
Lu Baolu [Tue, 2 Jul 2024 06:34:38 +0000 (14:34 +0800)]
iommu: Extend domain attach group with handle support
In the SVA case, each PASID of a device has an SVA domain attached to it,
and the I/O page faults are handled by the fault handler of that SVA
domain. By contrast, the I/O page faults for a user page table might be
handled by the domain attached to the RID or by the domain attached to
the PASID, depending on whether the PASID table is managed by user space
or by the kernel. As a result, the domain attach group interfaces need
attach handle support. The attach handle will be forwarded to the fault
handler of the user domain.
Add variants of the domain attaching group interfaces so that they can
support the attach handle, and export them for use in IOMMUFD.
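Hedged sketches of how such variants could be shaped (names and
signatures are assumptions, not the exported API verbatim):

	int iommu_attach_group_handle(struct iommu_domain *domain,
				      struct iommu_group *group,
				      struct iommu_attach_handle *handle);
	void iommu_detach_group_handle(struct iommu_domain *domain,
				       struct iommu_group *group);
	int iommu_replace_group_handle(struct iommu_group *group,
				       struct iommu_domain *new_domain,
				       struct iommu_attach_handle *handle);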
Lu Baolu [Tue, 2 Jul 2024 06:34:37 +0000 (14:34 +0800)]
iommu: Add attach handle to struct iopf_group
Previously, the domain that a page fault targets is stored in an
iopf_group, which represents a minimal set of page faults. With the
introduction of attach handle, replace the domain with the handle
so that the fault handler can obtain more information as needed
when handling the faults.
iommu_report_device_fault() is currently used for SVA page faults, which
handles the page fault in an internal cycle. The domain is retrieved with
iommu_get_domain_for_dev_pasid() if the pasid in the fault message is
valid. This doesn't work in the IOMMUFD case, where, if the pasid table of
a device is wholly managed by user space, there is no domain attached to
the PASID of the device, and all page faults are forwarded through a
NESTING domain attached to the RID.
Add a static flag to the iommu ops that indicates whether the IOMMU driver
supports user-managed PASID tables. In the iopf delivery path, if no
attach handle is found for the iopf PASID, fall back to the RID domain
when the IOMMU driver supports this capability.
iommu_get_domain_for_dev_pasid() is no longer used and can be removed.
Lu Baolu [Tue, 2 Jul 2024 06:34:36 +0000 (14:34 +0800)]
iommu: Remove sva handle list
The struct iommu_sva represents an association between an SVA domain and
a PASID of a device. It's stored in the iommu group's pasid array and also
tracked by a list in the per-mm data structure. Remove the duplicate
tracking of iommu_sva by eliminating the list.
Lu Baolu [Tue, 2 Jul 2024 06:34:35 +0000 (14:34 +0800)]
iommu: Introduce domain attachment handle
Currently, when attaching a domain to a device or its PASID, the domain is
stored within the iommu group. It can be retrieved for use during the
window between attachment and detachment.
With new features introduced, there's a need to store more information
than just a domain pointer. This information essentially represents the
association between a domain and a device. For example, the SVA code
already has a custom struct iommu_sva which represents a bond between
sva domain and a PASID of a device. Looking forward, the IOMMUFD needs
a place to store the iommufd_device pointer in the core, so that the
device object ID could be quickly retrieved in the critical fault handling
path.
Introduce domain attachment handle that explicitly represents the
attachment relationship between a domain and a device or its PASID.
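A minimal sketch of the concept (field layout assumed): the handle
carries the domain pointer, and a user such as IOMMUFD can embed it to
recover its own state via container_of().

	struct iommu_attach_handle {
		struct iommu_domain	*domain;
	};

	/* hypothetical embedding by IOMMUFD */
	struct iommufd_attach_handle {
		struct iommu_attach_handle	handle;
		struct iommufd_device		*idev;	/* fast fault-path lookup */
	};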
Co-developed-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Link: https://lore.kernel.org/r/20240702063444.105814-2-baolu.lu@linux.intel.com Signed-off-by: Will Deacon <will@kernel.org>
Georgi Djakov [Thu, 4 Jul 2024 01:07:59 +0000 (18:07 -0700)]
iommu/arm-smmu-qcom: Register the TBU driver in qcom_smmu_impl_init
Currently the TBU driver will only probe when CONFIG_ARM_SMMU_QCOM_DEBUG
is enabled. The driver not probing would prevent the platform from
reaching sync_state, and the system would remain in a sub-optimal
power-consumption mode while waiting for all consumer drivers to probe.
To address this, register the TBU driver in qcom_smmu_impl_init() so that
it can probe, but still enable its functionality only when the debug
option in Kconfig is enabled.
Reported-by: Dmitry Baryshkov <dmitry.baryshkov@linaro.org> Closes: https://lore.kernel.org/r/CAA8EJppcXVu72OSo+OiYEiC1HQjP3qCwKMumOsUhcn6Czj0URg@mail.gmail.com Fixes: 414ecb030870 ("iommu/arm-smmu-qcom-debug: Add support for TBUs") Signed-off-by: Georgi Djakov <quic_c_gdjako@quicinc.com> Link: https://lore.kernel.org/r/20240704010759.507798-1-quic_c_gdjako@quicinc.com Signed-off-by: Will Deacon <will@kernel.org>
Lu Baolu [Tue, 2 Jul 2024 13:08:39 +0000 (21:08 +0800)]
iommu/vt-d: Refactor PCI PRI enabling/disabling callbacks
Commit 0095bf83554f8 ("iommu: Improve iopf_queue_remove_device()")
specified the flow for disabling PRI on a device. Refactor the PRI
callbacks in the Intel IOMMU driver to better manage PRI enabling and
disabling, and align them with the device queue interfaces in the iommu
core.
Lu Baolu [Tue, 2 Jul 2024 13:08:38 +0000 (21:08 +0800)]
iommu/vt-d: Add helper to flush caches for context change
This helper is used to flush the related caches following a change in a
context table entry that was previously present. The VT-d specification
provides guidance for such invalidations in section 6.5.3.3.
This helper replaces the existing open code in the code paths where a
present context entry is being torn down.
Lu Baolu [Tue, 2 Jul 2024 13:08:37 +0000 (21:08 +0800)]
iommu/vt-d: Add helper to allocate paging domain
The domain_alloc_user operation is currently implemented by allocating a
paging domain using iommu_domain_alloc(), because the domain needs to be
fully initialized before being returned. Add a helper to do this, to
avoid using iommu_domain_alloc().
Lu Baolu [Tue, 2 Jul 2024 13:08:36 +0000 (21:08 +0800)]
iommu/vt-d: Downgrade warning for pre-enabled IR
Emitting a warning is overkill in intel_setup_irq_remapping() since the
interrupt remapping is pre-enabled. For example, there's no guarantee
that kexec will explicitly disable interrupt remapping before booting a
new kernel. As a result, users are seeing warning messages like below
when they kexec boot a kernel, though there is nothing wrong:
DMAR-IR: IRQ remapping was enabled on dmar18 but we are not in kdump mode
DMAR-IR: IRQ remapping was enabled on dmar17 but we are not in kdump mode
DMAR-IR: IRQ remapping was enabled on dmar16 but we are not in kdump mode
... ...
Downgrade the severity of this message to avoid user confusion.
Lu Baolu [Tue, 2 Jul 2024 13:08:35 +0000 (21:08 +0800)]
iommu/vt-d: Remove control over Execute-Requested requests
The VT-d specification has removed architectural support for
requests-with-PASID with a value of 1 in the Execute-Requested (ER) field,
and the NXE bit in the PASID table entry and the XD bit in the first-stage
paging entries are deprecated accordingly.
Remove the programming of these bits to make it consistent with the spec.
Lu Baolu [Tue, 2 Jul 2024 13:08:34 +0000 (21:08 +0800)]
iommu/vt-d: Remove comment for def_domain_type
The comment for def_domain_type is outdated. Part of it is irrelevant.
Furthermore, it could just be deleted since the iommu_ops::def_domain_type
callback is properly documented in iommu.h, so individual implementations
shouldn't need to repeat that. Remove it to avoid confusion.
Jacob Pan [Tue, 2 Jul 2024 13:08:33 +0000 (21:08 +0800)]
iommu/vt-d: Handle volatile descriptor status read
Queued invalidation wait descriptor status is volatile in that the IOMMU
hardware writes the data upon completion.
Use READ_ONCE() to prevent compiler optimizations and ensure the memory is
read every time. As a side effect, READ_ONCE() also enforces strict typing
and may add an extra instruction. But it should not have a negative
performance impact, since we use cpu_relax() anyway, and the extra time
(from the added instruction) may make it easier for the IOMMU hardware to
acquire cacheline ownership.
e.g. gcc 12.3
BEFORE:
	81 38 ad de 00 00	cmpl   $0x2,(%rax)
AFTER (with READ_ONCE()):
	772f:	8b 00		mov    (%rax),%eax
	7731:	3d ad de 00 00	cmp    $0x2,%eax
// status data is 32 bit
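A hedged sketch of the pattern (field and flag names assumed): poll the
hardware-written status with READ_ONCE() so the compiler cannot cache the
load in a register.

	while (READ_ONCE(qi->desc_status[wait_index]) != QI_DONE)
		cpu_relax();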
iommu/arm-smmu-v3: Enable HTTU for stage1 with io-pgtable mapping
If the io-pgtable quirk flag indicates support for hardware update of the
dirty state, enable the HA/HD bits in the SMMU CD and also set the DBM bit
in the page descriptor.
Now report the dirty page tracking capability of SMMUv3 and
select IOMMUFD_DRIVER for ARM_SMMU_V3 if IOMMUFD is enabled.
Co-developed-by: Keqian Zhu <zhukeqian1@huawei.com> Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com> Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com> Signed-off-by: Joao Martins <joao.m.martins@oracle.com> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Link: https://lore.kernel.org/r/20240703101604.2576-6-shameerali.kolothum.thodi@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
iommu/io-pgtable-arm: Add read_and_clear_dirty() support
The .read_and_clear_dirty() IOMMU domain op takes care of reading the
dirty bits (i.e. the PTE has DBM set and AP[2] clear) and marshalling them
into a bitmap of a given page size.
While reading the dirty bits, we also set the PTE AP[2] bit to mark it as
writeable-clean, depending on the read_and_clear_dirty() flags.
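A hedged sketch of the dirty test described above (the bit positions
follow the ARM long-descriptor format but should be treated as
assumptions here):

	#define ARM_LPAE_PTE_DBM	(1ULL << 51)	/* dirty-bit modifier */
	#define ARM_LPAE_PTE_AP2	(1ULL << 7)	/* AP[2]: 1 = read-only */

	/* dirty means hardware cleared AP[2] on a write while DBM is set */
	static bool pte_is_dirty(u64 pte)
	{
		return (pte & ARM_LPAE_PTE_DBM) && !(pte & ARM_LPAE_PTE_AP2);
	}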
Reviewed-by: Ryan Roberts <ryan.roberts@arm.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Link: https://lore.kernel.org/r/20240703101604.2576-4-shameerali.kolothum.thodi@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
Probe support for Hardware Translation Table Update (HTTU), which
essentially enables hardware update of the access and dirty flags, when
the SMMU supports it and the kernel was built with HTTU support.
Probe and set the smmu::features bits for Hardware Dirty and Hardware
Access. This is in preparation for enabling it in the stage 1 format
context descriptors.
Signed-off-by: Jean-Philippe Brucker <jean-philippe@linaro.org> Signed-off-by: Joao Martins <joao.m.martins@oracle.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Reviewed-by: Ryan Roberts <ryan.roberts@arm.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Signed-off-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Link: https://lore.kernel.org/r/20240703101604.2576-3-shameerali.kolothum.thodi@huawei.com Signed-off-by: Will Deacon <will@kernel.org>
iommu/arm-smmu-qcom: record reason for deferring probe
To avoid the SMMU driver silently deferring probe, record the reason for
it. It can be checked through debugfs as well:
/sys/kernel/debug# cat devices_deferred
15000000.iommu	arm-smmu: qcom_scm not ready
With ARCH=arm64, make allmodconfig && make W=1 C=1 reports:
WARNING: modpost: missing MODULE_DESCRIPTION() in drivers/iommu/arm/arm-smmu-v3/arm-smmu-v3-test.o
Add the missing invocation of the MODULE_DESCRIPTION() macro.
Jason Gunthorpe [Tue, 11 Jun 2024 00:31:11 +0000 (21:31 -0300)]
iommu/arm-smmu-v3: Shrink the strtab l1_desc array
The top of the 2-level stream table is (at most) 128k entries big, and two
high-order allocations are required: one of __le64, which is programmed
into the HW (1M), and one of struct arm_smmu_strtab_l1_desc, which holds
the CPU pointer (3M).
There is no reason to store the l2ptr_dma as nothing reads it. devm stores
a copy of it and the DMA memory will be freed via devm mechanisms. span is
a constant of 8+1. Remove both.
This removes 16 bytes from each arm_smmu_strtab_l1_desc and saves up to 2M
of memory per iommu instance.
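A hedged before/after sketch of the descriptor, reconstructed from the
text above (exact field types are assumptions):

	/* before: two of the three fields are never read back */
	struct arm_smmu_strtab_l1_desc {
		u8			span;		/* constant 8+1, removed */
		struct arm_smmu_ste	*l2ptr;
		dma_addr_t		l2ptr_dma;	/* devm keeps a copy, removed */
	};

	/* after: only the CPU pointer remains */
	struct arm_smmu_strtab_l1_desc {
		struct arm_smmu_ste	*l2ptr;
	};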
Jason Gunthorpe [Tue, 11 Jun 2024 00:31:10 +0000 (21:31 -0300)]
iommu/arm-smmu-v3: Do not zero the strtab twice
dmam_alloc_coherent() already returns zero'd memory so cfg->strtab.l1_desc
(the list of DMA addresses for the L2 entries) is already zero'd.
arm_smmu_init_l1_strtab() goes through and calls
arm_smmu_write_strtab_l1_desc() on the newly allocated (and zero'd) struct
arm_smmu_strtab_l1_desc, which ends up computing 'val = 0' and zeroing it
again.
Remove arm_smmu_init_l1_strtab() and just call devm_kcalloc() from
arm_smmu_init_strtab_2lvl() to allocate the companion struct.
Jason Gunthorpe [Tue, 25 Jun 2024 12:37:44 +0000 (09:37 -0300)]
iommu/arm-smmu-v3: Allow a PASID to be set when RID is IDENTITY/BLOCKED
If the STE doesn't point to the CD table, we can upgrade it by
reprogramming the STE with the appropriate S1DSS. We may also need to turn
on ATS at the same time.
Keep track of whether the installed STE points at the cd_table, and of the
ATS state, to trigger this path.
Tested-by: Nicolin Chen <nicolinc@nvidia.com> Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/13-v9-5cd718286059+79186-smmuv3_newapi_p2b_jgg@nvidia.com Signed-off-by: Will Deacon <will@kernel.org>
Jason Gunthorpe [Tue, 25 Jun 2024 12:37:42 +0000 (09:37 -0300)]
iommu/arm-smmu-v3: Allow IDENTITY/BLOCKED to be set while PASID is used
The HW supports this; use the S1DSS bits to configure the behavior of
SSID=0, which is the RID's translation.
If SSIDs are currently being used in the CD table, then just update the
S1DSS bits in the STE, remove the master_domain, and leave ATS alone.
For iommufd the driver design has a small problem that all the unused CD
table entries are set with V=0 which will generate an event if VFIO
userspace tries to use the CD entry. This patch extends this problem to
include the RID as well if PASID is being used.
For BLOCKED with used PASIDs, the
F_STREAM_DISABLED (STRTAB_STE_1_S1DSS_TERMINATE) event is generated on
untagged traffic, and a substream CD table entry with V=0 (removed pasid)
will generate C_BAD_CD. Arguably there is no advantage to using S1DSS over
the CD entry 0 with V=0.
As we don't yet support PASID in iommufd this is a problem to resolve
later, possibly by using EPD0 for unused CD table entries instead of V=0,
and not using S1DSS for BLOCKED.
Tested-by: Nicolin Chen <nicolinc@nvidia.com> Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/11-v9-5cd718286059+79186-smmuv3_newapi_p2b_jgg@nvidia.com Signed-off-by: Will Deacon <will@kernel.org>
Jason Gunthorpe [Tue, 25 Jun 2024 12:37:41 +0000 (09:37 -0300)]
iommu/arm-smmu-v3: Put the SVA mmu notifier in the smmu_domain
This removes all the notifier de-duplication logic in the driver and
relies on the core code to de-duplicate and allocate only one SVA domain
per mm per smmu instance. This naturally gives a 1:1 relationship between
SVA domain and mmu notifier.
It is a significant simplification of the flow, as we end up with a single
struct arm_smmu_domain for each MM, and the invalidation can then be
shifted to properly use the masters list like S1/S2 do.
Remove all of the previous mmu_notifier, bond, shared cd, and cd refcount
logic entirely.
The logic here is tightly wound together with the unused BTM support.
Since the BTM logic requires holding all the iommu_domains in a global
ASID xarray, it conflicts with the design of having a single SVA domain
per PASID, as multiple SMMU instances will need to have different domains.
Following patches resolve this by making the ASID xarray per-instance
instead of global. However, converting the BTM code over to this
methodology requires many changes.
Thus, since ARM_SMMU_FEAT_BTM is never enabled, remove the parts of the
BTM support for ASID sharing that interact with SVA as well.
A follow-up series is already working on fully enabling the BTM support;
it requires iommufd's VIOMMU feature to bring in the KVM VMID as well. It
will come with an already-written patch to bring back the ASID sharing
using a per-instance ASID xarray.
Jason Gunthorpe [Tue, 25 Jun 2024 12:37:40 +0000 (09:37 -0300)]
iommu/arm-smmu-v3: Keep track of arm_smmu_master_domain for SVA
Fill in the smmu_domain->devices list in the new struct arm_smmu_domain
that SVA allocates. Keep track of every SSID and master that is using the
domain reusing the logic for the RID attach.
This is the first step to making the SVA invalidation follow the same
design as S1/S2 invalidation. At present nothing will read this list.
Tested-by: Nicolin Chen <nicolinc@nvidia.com> Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/9-v9-5cd718286059+79186-smmuv3_newapi_p2b_jgg@nvidia.com Signed-off-by: Will Deacon <will@kernel.org>
Jason Gunthorpe [Tue, 25 Jun 2024 12:37:38 +0000 (09:37 -0300)]
iommu/arm-smmu-v3: Thread SSID through the arm_smmu_attach_*() interface
Allow creating and managing arm_smmu_master_domain's with a non-zero SSID
through the arm_smmu_attach_*() family of functions. This triggers ATC
invalidation for the correct SSID in PASID cases and tracks the
per-attachment SSID in the struct arm_smmu_master_domain.
Generalize arm_smmu_attach_remove() to be able to remove SSIDs as well by
ensuring the ATC for the PASID is flushed properly.
Tested-by: Nicolin Chen <nicolinc@nvidia.com> Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/7-v9-5cd718286059+79186-smmuv3_newapi_p2b_jgg@nvidia.com Signed-off-by: Will Deacon <will@kernel.org>
Jason Gunthorpe [Tue, 25 Jun 2024 12:37:37 +0000 (09:37 -0300)]
iommu/arm-smmu-v3: Do not use master->sva_enable to restrict attaches
We no longer need master->sva_enable to control which attaches are
allowed. Instead we can tell whether the attach is legal based on the
current configuration of the master.
Keep track of the number of valid CD entries for SSID's in the cd_table
and if the cd_table has been installed in the STE directly so we know what
the configuration is.
The attach logic is then made into:
- SVA bind, check if the CD is installed
- RID attach of S2, block if SSIDs are used
- RID attach of IDENTITY/BLOCKING, block if SSIDs are used
arm_smmu_set_pasid() is already checking whether it is possible to set up
a CD entry; as of this patch, that means the RID path has already set an
STE pointing at the CD table.
Jason Gunthorpe [Tue, 25 Jun 2024 12:37:36 +0000 (09:37 -0300)]
iommu/arm-smmu-v3: Add ssid to struct arm_smmu_master_domain
Prepare to allow a S1 domain to be attached to a PASID as well. Keep track
of the SSID the domain is using on each master in the
arm_smmu_master_domain.
Tested-by: Nicolin Chen <nicolinc@nvidia.com> Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Reviewed-by: Michael Shavit <mshavit@google.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/5-v9-5cd718286059+79186-smmuv3_newapi_p2b_jgg@nvidia.com Signed-off-by: Will Deacon <will@kernel.org>
Jason Gunthorpe [Tue, 25 Jun 2024 12:37:35 +0000 (09:37 -0300)]
iommu/arm-smmu-v3: Make changing domains be hitless for ATS
The core code allows the domain to be changed on the fly without a forced
stop in BLOCKED/IDENTITY. In this flow the driver should just continually
maintain the ATS with no change while the STE is updated.
ATS relies on a linked list, smmu_domain->devices, to keep track of which
masters have the domain programmed, but this list is also used by
arm_smmu_share_asid(), unrelated to ATS.
Create two new functions to encapsulate this combined logic:
arm_smmu_attach_prepare()
<caller generates and sets the STE>
arm_smmu_attach_commit()
The two functions can sequence both enabling ATS and disabling across
the STE store. Have every update of the STE use this sequence.
Installing a S1/S2 domain always enables the ATS if the PCIe device
supports it.
The enable flow is now ordered differently to allow it to be hitless:
1) Add the master to the new smmu_domain->devices list
2) Program the STE
3) Enable ATS at PCIe
4) Remove the master from the old smmu_domain
This flow ensures that invalidations to either domain will generate an ATC
invalidation to the device while the STE is being switched. Thus we don't
need to turn off the ATS anymore for correctness.
The disable flow is the reverse:
1) Disable ATS at PCIe
2) Program the STE
3) Invalidate the ATC
4) Remove the master from the old smmu_domain
Move the nr_ats_masters adjustments to be close to the list
manipulations. It is a count of the number of ATS-enabled masters
currently in the list. It is adjusted strictly before and after the
STE/CD are revised, and done under the list's spin_lock.
This is part of the bigger picture to allow changing the RID domain while
a PASID is in use. If a SVA PASID is relying on ATS to function then
changing the RID domain cannot just temporarily toggle ATS off without
also wrecking the SVA PASID. The new infrastructure here is organized so
that the PASID attach/detach flows will make use of it as well in
following patches.
Tested-by: Nicolin Chen <nicolinc@nvidia.com> Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Michael Shavit <mshavit@google.com> Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/4-v9-5cd718286059+79186-smmuv3_newapi_p2b_jgg@nvidia.com Signed-off-by: Will Deacon <will@kernel.org>
Jason Gunthorpe [Tue, 25 Jun 2024 12:37:33 +0000 (09:37 -0300)]
iommu/arm-smmu-v3: Start building a generic PASID layer
Add arm_smmu_set_pasid()/arm_smmu_remove_pasid() which are to be used by
callers that already constructed the arm_smmu_cd they wish to program.
These functions will encapsulate the shared logic to setup a CD entry that
will be shared by SVA and S1 domain cases.
Prior fixes had already moved most of this logic up into
__arm_smmu_sva_bind(); move it to its final home.
Following patches will relieve some of the remaining SVA restrictions:
- The RID domain is a S1 domain and has already setup the STE to point to
the CD table
- The programmed PASID is the mm_get_enqcmd_pasid()
- Nothing changes while SVA is running (sva_enable)
SVA invalidation will still iterate over the S1 domain's master list,
later patches will resolve that.
Tested-by: Nicolin Chen <nicolinc@nvidia.com> Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/2-v9-5cd718286059+79186-smmuv3_newapi_p2b_jgg@nvidia.com Signed-off-by: Will Deacon <will@kernel.org>
Jason Gunthorpe [Tue, 25 Jun 2024 12:37:32 +0000 (09:37 -0300)]
iommu/arm-smmu-v3: Convert to domain_alloc_sva()
This allows the driver to receive the mm and, in all cases, a device
during allocation. Later patches need this to properly set up the notifier
when the domain is first allocated.
Remove ops->domain_alloc() as SVA was the only remaining purpose.
Tested-by: Nicolin Chen <nicolinc@nvidia.com> Tested-by: Shameer Kolothum <shameerali.kolothum.thodi@huawei.com> Reviewed-by: Michael Shavit <mshavit@google.com> Reviewed-by: Nicolin Chen <nicolinc@nvidia.com> Reviewed-by: Jerry Snitselaar <jsnitsel@redhat.com> Signed-off-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/1-v9-5cd718286059+79186-smmuv3_newapi_p2b_jgg@nvidia.com Signed-off-by: Will Deacon <will@kernel.org>
Uros Bizjak [Wed, 22 May 2024 08:26:49 +0000 (10:26 +0200)]
iommufd: Use atomic_long_try_cmpxchg() in incr_user_locked_vm()
Use atomic_long_try_cmpxchg() instead of
atomic_long_cmpxchg(*ptr, old, new) != old in incr_user_locked_vm().
cmpxchg returns success in the ZF flag, so this change saves a compare
after cmpxchg (and the related move instruction in front of cmpxchg).
Also, atomic_long_try_cmpxchg() implicitly assigns the old *ptr value to
"old" when cmpxchg fails, so there is no need to re-read the value in the
loop.
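A hedged sketch of the transformation (surrounding variable names
assumed):

	/* before: a failed cmpxchg forces a re-read plus an extra compare */
	do {
		old = atomic_long_read(counter);
		new = old + npages;
	} while (atomic_long_cmpxchg(counter, old, new) != old);

	/* after: on failure the helper refreshes 'old' from memory, so the
	 * loop needs no explicit re-read or compare */
	old = atomic_long_read(counter);
	do {
		new = old + npages;
	} while (!atomic_long_try_cmpxchg(counter, &old, new));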
Signed-off-by: Uros Bizjak <ubizjak@gmail.com> Cc: Jason Gunthorpe <jgg@ziepe.ca> Cc: Kevin Tian <kevin.tian@intel.com> Cc: Joerg Roedel <joro@8bytes.org> Cc: Will Deacon <will@kernel.org> Cc: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/20240522082729.971123-3-ubizjak@gmail.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
Uros Bizjak [Wed, 22 May 2024 08:26:48 +0000 (10:26 +0200)]
iommu/vt-d: Use try_cmpxchg64() in intel_pasid_get_entry()
Use try_cmpxchg64() instead of cmpxchg64 (*ptr, old, new) != old in
intel_pasid_get_entry(). cmpxchg returns success in ZF flag, so
this change saves a compare after cmpxchg (and related move
instruction in front of cmpxchg).
Signed-off-by: Uros Bizjak <ubizjak@gmail.com> Cc: David Woodhouse <dwmw2@infradead.org> Cc: Lu Baolu <baolu.lu@linux.intel.com> Cc: Joerg Roedel <joro@8bytes.org> Cc: Will Deacon <will@kernel.org> Cc: Robin Murphy <robin.murphy@arm.com> Reviewed-by: Lu Baolu <baolu.lu@linux.intel.com> Link: https://lore.kernel.org/r/20240522082729.971123-2-ubizjak@gmail.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
Uros Bizjak [Wed, 22 May 2024 08:26:47 +0000 (10:26 +0200)]
iommu/amd: Use try_cmpxchg64() in v2_alloc_pte()
Use try_cmpxchg64() instead of cmpxchg64 (*ptr, old, new) != old in
v2_alloc_pte(). cmpxchg returns success in ZF flag, so this change
saves a compare after cmpxchg (and related move instruction
in front of cmpxchg).
This is the same improvement as implemented for alloc_pte() in:
commit 0d10fe759117 ("iommu/amd: Use try_cmpxchg64 in alloc_pte and free_clear_pte")
Neil Armstrong [Thu, 6 Jun 2024 13:15:12 +0000 (15:15 +0200)]
dt-bindings: iommu: qcom,iommu: Add MSM8953 GPU IOMMU to SMMUv2 compatibles
Add an MSM8953 compatible string with "qcom,msm-iommu-v2" as fallback
for the MSM8953 GPU IOMMU, which is compatible with Qualcomm's secure
fw "SMMU v2" implementation.
Andre Przywara [Sun, 16 Jun 2024 22:40:55 +0000 (23:40 +0100)]
iommu: sun50i: Add H616 compatible string
The IOMMU IP in the Allwinner H616 SoC is *almost* compatible to the H6,
but uses a different reset value for the bypass register, and adds some
more registers.
While a driver *can* be written to support both variants (which we in
fact do), the hardware itself is not fully compatible, so we require a
separate compatible string.
Add the new compatible string to the list, but without changing the
behaviour, since the driver already supports both variants.
Andre Przywara [Sun, 16 Jun 2024 22:40:54 +0000 (23:40 +0100)]
dt-bindings: iommu: add new compatible strings
The Allwinner H616 and A523 contain IOMMU IP very similar to the H6, but
use a different reset value for the bypass register, which makes them
strictly speaking incompatible.
Add a new compatible string for the H616, and a version for the A523,
falling back to the H616.
Andre Przywara [Sun, 16 Jun 2024 22:40:53 +0000 (23:40 +0100)]
iommu: sun50i: allocate page tables from below 4 GiB
The Allwinner IOMMU is a strict 32-bit device, with its input addresses,
the page table root pointer as well as both level's page tables and also
the target addresses all required to be below 4GB.
The Allwinner H6 SoC only supports 32-bit worth of physical addresses
anyway, so this isn't a problem so far, but the H616 and later SoCs extend
the PA space beyond 32 bit to accommodate more DRAM.
To make sure we stay within the 32-bit PA range required by the IOMMU,
force the memory for the page tables to come from below 4GB by using
allocations with the DMA32 flag.
Also reject any attempt to map target addresses beyond 4GB, and print a
warning to give users a hint why this fails.
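A hedged sketch, assuming a page-sized table and the generic page
allocator (the driver's actual allocation helper may differ):

	/* GFP_DMA32 keeps the table below 4 GiB, matching the IOMMU's
	 * 32-bit-only table walker */
	u32 *table = (u32 *)__get_free_page(GFP_KERNEL | GFP_DMA32 |
					    __GFP_ZERO);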
Jernej Skrabec [Sun, 16 Jun 2024 22:40:52 +0000 (23:40 +0100)]
iommu: sun50i: clear bypass register
The Allwinner H6 IOMMU has a bypass register, which allows circumventing
the page tables for each possible master. The reset value for this
register is 0, which disables the bypass.
The Allwinner H616 IOMMU resets this register to 0x7f, which activates
the bypass for all masters, which is not what we want.
Always clear this register to 0, to enforce the usage of page tables,
and make this driver compatible with the H616 in this respect.
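A one-line sketch of the change, with the register accessor and name as
assumptions:

	/* force translation through the page tables for every master,
	 * regardless of the SoC's reset default (0 on H6, 0x7f on H616) */
	iommu_write(iommu, IOMMU_BYPASS_REG, 0);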
Robin Murphy [Tue, 4 Jun 2024 12:39:09 +0000 (13:39 +0100)]
iommu/dma: Prune redundant pgprot arguments
Somewhere amongst previous refactorings, the pgprot value in
__iommu_dma_alloc_noncontiguous() became entirely unused, and the one
used in iommu_dma_alloc_remap() can be computed locally rather than by
its one remaining caller. Clean 'em up.
Lu Baolu [Tue, 28 May 2024 04:54:58 +0000 (12:54 +0800)]
iommu: Make iommu_sva_domain_alloc() static
iommu_sva_domain_alloc() is only called in iommu-sva.c, hence make it
static.
On the other hand, iommu_sva_domain_alloc() should not return NULL anymore
after commit 80af5a452024 ("iommu: Add ops->domain_alloc_sva()"); removing
the inline code avoids potential confusion.
Fixes: 80af5a452024 ("iommu: Add ops->domain_alloc_sva()") Signed-off-by: Lu Baolu <baolu.lu@linux.intel.com> Reviewed-by: Kevin Tian <kevin.tian@intel.com> Reviewed-by: Jason Gunthorpe <jgg@nvidia.com> Link: https://lore.kernel.org/r/20240528045458.81458-1-baolu.lu@linux.intel.com Signed-off-by: Joerg Roedel <jroedel@suse.de>
Linus Torvalds [Sun, 9 Jun 2024 16:04:51 +0000 (09:04 -0700)]
Merge tag 'perf-tools-fixes-for-v6.10-2-2024-06-09' of git://git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools
Pull perf tools fixes from Arnaldo Carvalho de Melo:
- Update copies of kernel headers, which resulted in support for the
new 'mseal' syscall, SUBVOL statx return mask bit, RISC-V and PPC
prctls, fcntl's DUPFD_QUERY, POSTED_MSI_NOTIFICATION IRQ vector,
'map_shadow_stack' syscall for x86-32.
- Revert perf.data record memory allocation optimization that ended up
causing a regression, work is being done to re-introduce it in the
next merge window.
- Fix handling of minimal vmlinux.h file used with BPF's CO-RE when
interrupting the build.
* tag 'perf-tools-fixes-for-v6.10-2-2024-06-09' of git://git.kernel.org/pub/scm/linux/kernel/git/perf/perf-tools:
perf bpf: Fix handling of minimal vmlinux.h file when interrupting the build
Revert "perf record: Reduce memory for recording PERF_RECORD_LOST_SAMPLES event"
tools headers arm64: Sync arm64's cputype.h with the kernel sources
tools headers uapi: Sync linux/stat.h with the kernel sources to pick STATX_SUBVOL
tools headers UAPI: Update i915_drm.h with the kernel sources
tools headers UAPI: Sync kvm headers with the kernel sources
tools arch x86: Sync the msr-index.h copy with the kernel sources
tools headers: Update the syscall tables and unistd.h, mostly to support the new 'mseal' syscall
perf trace beauty: Update the arch/x86/include/asm/irq_vectors.h copy with the kernel sources to pick POSTED_MSI_NOTIFICATION
perf beauty: Update copy of linux/socket.h with the kernel sources
tools headers UAPI: Sync fcntl.h with the kernel sources to pick F_DUPFD_QUERY
tools headers UAPI: Sync linux/prctl.h with the kernel sources
tools include UAPI: Sync linux/stat.h with the kernel sources
Linus Torvalds [Sun, 9 Jun 2024 15:49:13 +0000 (08:49 -0700)]
Merge tag 'edac_urgent_for_v6.10_rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras
Pull EDAC fixes from Borislav Petkov:
- Convert PCI core error codes to proper error numbers, since the latter
get propagated all the way up to the module-loading functions
* tag 'edac_urgent_for_v6.10_rc3' of git://git.kernel.org/pub/scm/linux/kernel/git/ras/ras:
EDAC/igen6: Convert PCIBIOS_* return codes to errnos
EDAC/amd64: Convert PCIBIOS_* return codes to errnos
Linus Torvalds [Sun, 9 Jun 2024 02:14:02 +0000 (19:14 -0700)]
Merge tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux
Pull clk fix from Stephen Boyd:
"One fix for the SiFive PRCI clocks so that the device boots again.
This driver was registering clkdev lookups that were always going to
be useless. This wasn't a problem until clkdev started returning an
error in these cases, causing this driver to fail probe, and thus boot
to fail because clks are essential for most drivers. The fix is
simple, don't use clkdev because this is a DT based system where
clkdev isn't used"
* tag 'clk-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/clk/linux:
clk: sifive: Do not register clkdevs for PRCI clocks
Linus Torvalds [Sun, 9 Jun 2024 02:07:18 +0000 (19:07 -0700)]
Merge tag '6.10-rc2-smb3-client-fixes' of git://git.samba.org/sfrench/cifs-2.6
Pull smb client fixes from Steve French:
"Two small smb3 client fixes:
- fix deadlock in umount
- minor cleanup due to netfs change"
* tag '6.10-rc2-smb3-client-fixes' of git://git.samba.org/sfrench/cifs-2.6:
cifs: Don't advance the I/O iterator before terminating subrequest
smb: client: fix deadlock in smb2_find_smb_tcon()
Linus Torvalds [Sat, 8 Jun 2024 17:12:33 +0000 (10:12 -0700)]
Merge tag 'kbuild-fixes-v6.10-2' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild
Pull Kbuild fixes from Masahiro Yamada:
- Fix the initial state of the save button in 'make gconfig'
- Improve the Kconfig documentation
- Fix a Kconfig bug regarding property visibility
- Fix build breakage for systems where 'sed' is not installed in /bin
- Fix a false warning about missing MODULE_DESCRIPTION()
* tag 'kbuild-fixes-v6.10-2' of git://git.kernel.org/pub/scm/linux/kernel/git/masahiroy/linux-kbuild:
modpost: do not warn about missing MODULE_DESCRIPTION() for vmlinux.o
kbuild: explicitly run mksysmap as sed script from link-vmlinux.sh
kconfig: remove wrong expr_trans_bool()
kconfig: doc: document behavior of 'select' and 'imply' followed by 'if'
kconfig: doc: fix a typo in the note about 'imply'
kconfig: gconf: give a proper initial state to the Save button
kconfig: remove unneeded code for user-supplied values being out of range
Linus Torvalds [Sat, 8 Jun 2024 16:57:09 +0000 (09:57 -0700)]
Merge tag 'media/v6.10-2' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media
Pull media fixes from Mauro Carvalho Chehab:
- fixes for the new ipu6 driver (and related fixes to mei csi driver)
- fix a double debugfs remove logic at mgb4 driver
- a documentation fix
* tag 'media/v6.10-2' of git://git.kernel.org/pub/scm/linux/kernel/git/mchehab/linux-media:
media: intel/ipu6: add csi2 port sanity check in notifier bound
media: intel/ipu6: update the maximum supported csi2 port number to 6
media: mei: csi: Warn less verbosely of a missing device fwnode
media: mei: csi: Put the IPU device reference
media: intel/ipu6: fix the buffer flags caused by wrong parentheses
media: intel/ipu6: Fix an error handling path in isys_probe()
media: intel/ipu6: Move isys_remove() close to isys_probe()
media: intel/ipu6: Fix some redundant resources freeing in ipu6_pci_remove()
media: Documentation: v4l: Fix ACTIVE route flag
media: mgb4: Fix double debugfs remove
Linus Torvalds [Sat, 8 Jun 2024 16:44:50 +0000 (09:44 -0700)]
Merge tag 'irq-urgent-2024-06-08' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull irq fixes from Ingo Molnar:
- Fix a possible memory leak on riscv-intc irqchip driver load failures
- Fix boot crash in the sifive-plic irqchip driver caused by recently
changed boot initialization order
- Fix race condition in the gic-v3-its irqchip driver
* tag 'irq-urgent-2024-06-08' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
irqchip/gic-v3-its: Fix potential race condition in its_vlpi_prop_update()
irqchip/sifive-plic: Chain to parent IRQ after handlers are ready
irqchip/riscv-intc: Prevent memory leak when riscv_intc_init_common() fails
Linus Torvalds [Sat, 8 Jun 2024 16:36:08 +0000 (09:36 -0700)]
Merge tag 'x86-urgent-2024-06-08' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull x86 fixes from Ingo Molnar:
"Miscellaneous fixes:
- Fix kexec() crash if call depth tracking is enabled
- Fix SMN reads on inaccessible registers on certain AMD systems"
* tag 'x86-urgent-2024-06-08' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
x86/amd_nb: Check for invalid SMN reads
x86/kexec: Fix bug with call depth tracking
Linus Torvalds [Sat, 8 Jun 2024 16:26:59 +0000 (09:26 -0700)]
Merge tag 'perf-urgent-2024-06-08' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip
Pull perf event fix from Ingo Molnar:
"Fix race between perf_event_free_task() and perf_event_release_kernel()
that can result in missed wakeups and hung tasks"
* tag 'perf-urgent-2024-06-08' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip:
perf/core: Fix missing wakeup when waiting for context reference