Matt Roper [Fri, 11 Aug 2023 16:06:15 +0000 (09:06 -0700)]
drm/xe/xe2: Program GuC's MOCS on Xe2 and beyond
As with PVC, Xe2 platforms require that the index of an uncached MOCS
entry be programmed into the GUC_SHIM_CONTROL register. This will
likely be needed on future platforms as well.
Xe2 also extends the size of the MOCS index register field from two bits
to four bits. Since these extra bits were unused on PVC, it should be
safe to just increase the size of the mask.
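Roughly sketched, widening the field only means widening the mask; the encoding itself is unchanged. The field name suffixes and bit positions below are placeholders for illustration, not the driver's actual definitions:

    #include <linux/bitfield.h>

    /* Hypothetical layout: 2 bits on PVC, widened to 4 bits on Xe2 */
    #define GUC_SHIM_CONTROL_MOCS_INDEX_PVC    GENMASK(25, 24)
    #define GUC_SHIM_CONTROL_MOCS_INDEX        GENMASK(27, 24)

    /* Encode the uncached MOCS index into the GUC_SHIM_CONTROL value */
    u32 shim_bits = FIELD_PREP(GUC_SHIM_CONTROL_MOCS_INDEX, uc_mocs_index);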
Bspec: 60592 Cc: Haridhar Kalvala <haridhar.kalvala@intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Balasubramani Vivekanandan <balasubramani.vivekanandan@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Matt Roper [Fri, 11 Aug 2023 16:06:13 +0000 (09:06 -0700)]
drm/xe/xe2: Track VA bits independently of max page table level
Starting with Xe2, a 5-level page table is always used, regardless of
the actual virtual address range supported by the platform. The two
values need to be tracked separately in the device descriptor since Xe2
platforms only have a 48-bit virtual address range.
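A minimal sketch of the idea (the struct and field names are assumptions for illustration, not the exact driver layout): the descriptor carries the VA width and the page-table depth as separate values:

    /* Illustrative descriptor fields only */
    struct example_device_desc {
        u8 va_bits;      /* virtual address width, e.g. 48 on Xe2 */
        u8 vm_max_level; /* deepest page-table level, e.g. 4 => 5 levels */
    };

    static const struct example_device_desc xe2_example = {
        .va_bits = 48,
        .vm_max_level = 4,
    };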
Bspec: 59505, 65637, 70817 Cc: Balasubramani Vivekanandan <balasubramani.vivekanandan@intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Balasubramani Vivekanandan <balasubramani.vivekanandan@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Matt Roper [Fri, 11 Aug 2023 16:06:11 +0000 (09:06 -0700)]
drm/xe/xe2: Define Xe2_LPG IP features
Define a common set of Xe2 graphics feature flags and definitions that
will be used for all platforms in this family.
Several of the feature flags are inherited unchanged from Xe_HP and/or
Xe_HPC platforms:
- dma_mask_size remains 46 (Bspec 70817)
- supports_usm=1 (Bspec 59651)
- has_flatccs=1 (Bspec 58797)
- has_asid=1 (Bspec 59654, 59265, 60288)
- has_range_tlb_invalidate=1 (Bspec 71126)
However, some of them still need proper implementation in the driver before
they can be used, so they are disabled for now.
Notable Xe2-specific changes:
- All Xe2 platforms use a five-level page table, regardless of the
virtual address space for the platform. (Bspec 59505)
The graphics engine mask represents the Xe2 architecture engines (Bspec
60149), but individual platforms may have a reduced set of usable
engines, as reflected by their fusing.
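For illustration, the combined feature set above could be expressed as a shared initializer along these lines (the macro and field names are approximations, not the driver's exact definitions):

    /* Hypothetical shared initializer for the Xe2 graphics IP */
    #define XE2_GFX_FEATURES_EXAMPLE \
        .dma_mask_size = 46, \
        .has_asid = 1, \
        .has_flat_ccs = 1, \
        .has_range_tlb_invalidation = 1, \
        .supports_usm = 1, \
        .va_bits = 48, \
        .vm_max_level = 4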
Cc: Balasubramani Vivekanandan <balasubramani.vivekanandan@intel.com> Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Balasubramani Vivekanandan <balasubramani.vivekanandan@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Matt Roper [Fri, 11 Aug 2023 16:06:10 +0000 (09:06 -0700)]
drm/xe/xe2: AuxCCS is no longer used
Starting with Xe2, all platforms (including igpu platforms) use FlatCCS
compression rather than AuxCCS. Similar to PVC, any future platforms
that don't support FlatCCS should not attempt to fall back to AuxCCS
programming.
Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Balasubramani Vivekanandan <balasubramani.vivekanandan@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Matt Roper [Fri, 11 Aug 2023 16:06:07 +0000 (09:06 -0700)]
drm/xe/xe2: Add MCR register steering for media GT
Xe2 media has a few types of MCR registers, but all except for "GPMXMT"
can safely steer to instance 0,0. GPMXMT follows the same rules that
MTL's OADDRM ranges did, so it can re-use the same enum value.
Bspec: 71186 Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Matt Atwood <matthew.s.atwood@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Matt Roper [Fri, 11 Aug 2023 16:06:06 +0000 (09:06 -0700)]
drm/xe/xe2: Add MCR register steering for primary GT
Xe2 uses the same steering control register and steering semaphore
register as MTL. As with recent platforms, group/instance 0,0 is
sufficient to target a non-terminated instance for most classes of MCR
registers; the only types of ranges that need to consider platform
fusing to find a non-terminated instance are SLICE/DSS ranges and a new
SQIDI_PSMI type of range.
Note that the range of valid bits in XE2_NODE_ENABLE_MASK may be reduced
for some Xe2 SKUs. However, the lowest bits are always valid and only
the lowest instance is obtained via __ffs(), so there's no need to
complicate the masking with extra platform/subplatform checks.
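A minimal sketch of that instance selection (the fuse variable name is assumed): even if the upper mask bits are invalid on a given SKU, __ffs() only looks at the lowest set bit:

    /* Pick the first non-terminated DSS instance from the fuse value */
    u32 valid = fuse_val & XE2_NODE_ENABLE_MASK;

    *group = 0;
    *instance = __ffs(valid); /* lowest valid instance is sufficient */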
Also note that Wa_14017387313 suggests skipping MCR lock acquisition
around GAM and GAMWKR registers to prevent MCR register accesses in an
interrupt handler from deadlocking when the steering semaphore is
already held outside the interrupt context. At this time Xe never
issues MCR accesses from within an interrupt handler so the workaround
is not currently needed.
v2:
- [0x008700-0x0087FF] range to extend up to 0x887F (Matt Atwood)
- [0x00EF00-0x00F4FF] -> [0x00F000, 0xFFFF] to follow latest
bspec version (Bala)
Bspec: 71185 Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Balasubramani Vivekanandan <balasubramani.vivekanandan@intel.com> Reviewed-by: Matt Atwood <matthew.s.atwood@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Matt Roper [Thu, 17 Aug 2023 23:04:12 +0000 (16:04 -0700)]
drm/xe: Stop tracking 4-tile support
The choice of Y-major tiling format (either the legacy "TileY" or the
newer "Tile4") is based on graphics IP version (12.50 and beyond have
Tile4, earlier platforms have TileY). The tracking in xe was originally
added to allow reusing display code from i915. However, as of i915 commit 4ebf43d0488f ("drm/i915: Eliminate has_4tile feature flag"), the display
code determines TileY vs Tile4 itself, so this can be removed from xe.
drm/xe: enable idle msg and set hysteresis for GSCCS
On MTL (and only on MTL) the GSCCS has idle messaging disabled by default.
This means that, once awoken, the GSCCS will never signal its idleness to
the GT. To allow the GT to enter the proper low-power state, we therefore
need to turn idle messaging on. As part of this, we also need to set a
proper hysteresis value for the engine.
v2: use MEDIA_VERSION() and CLR() for the RTP rule and action, add reg
bit define in descending order (Matt)
Like the BCS, the GSCCS doesn't have any special HW that needs handling
when emitting commands, so we can re-use the same emit_job code. To make
it clear that this is now a shared low-level function, it has been
renamed to use the "simple" postfix, instead of "copy", to indicate that
it applies to all engines that don't need any additional engine-specific
handling.
The queue name assignment is identical in both GuC and execlists
backends, so we can move it to a common function. This will make adding
a new entry in the next patch slightly cleaner.
Oak Zeng [Fri, 14 Jul 2023 14:42:07 +0000 (10:42 -0400)]
drm/xe: Improve vram info debug printing
Print both the device physical address range and the CPU IO range
of vram. Also print vram's actual size, the usable size excluding
stolen memory, and the CPU-accessible IO size.
V1:
- Add back small BAR device info (Matt)
Signed-off-by: Oak Zeng <oak.zeng@intel.com> Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Oak Zeng [Tue, 11 Jul 2023 21:46:09 +0000 (17:46 -0400)]
drm/xe: Make xe_mem_region struct
Introduce an xe_mem_region structure which will be used in the
coming patches. The new structure is used at both the xe device
level (xe->mem.vram) and the xe_tile level (tile->vram).
Define xe_mem_region.dpa_base to be the DPA base of this memory
region and update the code according to this new definition.
v1:
- rename xe_mem_region.base to dpa_base per conversation with Mike
Ruhl
Signed-off-by: Oak Zeng <oak.zeng@intel.com> Reviewed-by: Michael J. Ruhl <michael.j.ruhl@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Matthew Brost [Fri, 11 Aug 2023 13:27:34 +0000 (06:27 -0700)]
drm/xe: Call __guc_exec_queue_fini_async direct for KERNEL exec_queues
Usually we call __guc_exec_queue_fini_async via a worker as the
exec_queue fini can be done from within the GPU scheduler, which creates
a circular dependency without a worker. Kernel exec_queues are fini'd at
driver unload (not from within the GPU scheduler) so it is safe to
directly call __guc_exec_queue_fini_async.
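Sketched out, the call site looks roughly like the following (the flag and member names are assumptions based on the description above, not a verbatim copy of the driver code):

    static void guc_exec_queue_fini_async(struct xe_exec_queue *q)
    {
        if (q->flags & EXEC_QUEUE_FLAG_KERNEL)
            /* Driver unload path, never reached from the GPU scheduler,
             * so there is no circular dependency: call it directly. */
            __guc_exec_queue_fini_async(&q->guc->fini_async);
        else
            queue_work(system_wq, &q->guc->fini_async);
    }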
Matthew Auld [Tue, 8 Aug 2023 09:12:09 +0000 (10:12 +0100)]
drm/xe: skip rebind_list if vma destroyed
If we are closing a vm, mark each vma as XE_VMA_DESTROYED and skip
touching the rebind_list if this is seen on the eviction path. That way
we can safely drop the vm dma-resv lock on the close path, without needing
to worry about the eviction path racing with us and adding entries to the
rebind_list, which could corrupt our contended list since the destroy and
rebind links are the same list entry underneath.
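The eviction-path check amounts to something like this sketch (the flag, field and list names are assumptions for illustration):

    /* Skip vmas the close path already marked as destroyed, since the
     * destroy and rebind links share the same list entry. */
    if (!(vma->flags & XE_VMA_DESTROYED))
        list_move_tail(&vma->rebind_link, &vm->rebind_list);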
References: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/514 Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Matthew Auld [Wed, 9 Aug 2023 08:16:18 +0000 (09:16 +0100)]
drm/xe/guc_submit: fixup deregister in job timeout
Rather, check if the engine is still registered before proceeding with the
deregister steps. Also, the engine being marked as disabled doesn't mean
the engine has been disabled or deregistered from the GuC's point of view,
and since we are signalling fences here we need to be sure the GuC is not
still using this context.
v2:
- Drop the read_stopped() for this path. Since we are signalling
fences on error here, best play it safe and wait for the GT reset to
mark the engine as disabled, rather than it just being queued.
v3 (Matt Brost):
- Keep the read_stopped() on the wait event, since there is no need to
wait for an already scheduled GT reset. If it is set we can then just
bail without signalling anything.
Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Anshuman Gupta [Wed, 2 Aug 2023 07:04:49 +0000 (12:34 +0530)]
drm/xe/pm: Add vram_d3cold_threshold for d3cold capable device
Do not register the vram_d3cold_threshold device sysfs entry universally
for every gfx device; only register the sysfs entry and set the threshold
value for d3cold-capable devices.
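A sketch of the registration path under that rule (the capability flag and attribute names are assumptions, not the exact driver symbols):

    /* Only d3cold-capable devices expose the threshold knob */
    if (!xe->d3cold.capable)
        return;

    if (sysfs_create_file(&dev->kobj, &dev_attr_vram_d3cold_threshold.attr))
        drm_warn(&xe->drm, "vram_d3cold_threshold sysfs creation failed\n");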
Matt Roper [Thu, 27 Jul 2023 22:09:21 +0000 (15:09 -0700)]
drm/xe: Add Wa_14015150844 for DG2 and Xe_LPG
The workaround database tells us to set this bit, even though the bspec
indicates the bit doesn't exist on these platforms. Since this is a
write-only register, we also can't read back its value to verify whether
it's actually working or not. For now we'll trust that the workaround
database knows what it's talking about; if not, the hardware will just
ignore the attempt to write to a non-existent bit and it shouldn't cause
any problems.
To work around a HW bug on DG2, the driver is required to map the whole
ppgtt virtual address space before GPU workload submission. Thus
set the XE_VM_FLAG_SCRATCH_PAGE flag during vm create so the whole
address space is mapped to point to the scratch page.
v1:
- Move the workaround implementation from xe_vm_create to
xe_vm_create_ioctl - Brian
- Reorder error checking in xe_vm_create_ioctl - Jose
- Implement WA only for DG2-G10 and DG2-G12
Signed-off-by: Oak Zeng <oak.zeng@intel.com> Reviewed-by: Brian Welty <brian.welty@intel.com> Reviewed-by: José Roberto de Souza <jose.souza@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Matthew Brost [Thu, 3 Aug 2023 03:15:38 +0000 (20:15 -0700)]
drm/xe: Set max pte size when skipping rebinds
When a rebind is skipped, we must set the max pte size of the newly
created vma to the value of the old vma, as we do not do a PTE walk for
the new vma. Without this, future rebinds may be incorrectly skipped due
to the wrong max pte size. NULL binds are more likely to expose this bug
as larger ptes are more frequently used compared to normal bindings.
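In code form the fix boils down to something like this (the condition and member names are illustrative, not the driver's exact ones):

    /* Rebind skipped: no PTE walk is done for the new vma, so it must
     * inherit the old vma's max pte size to keep later skip decisions
     * correct. */
    if (rebind_skipped)
        new_vma->max_pte_size = old_vma->max_pte_size;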
Matthew Auld [Thu, 3 Aug 2023 17:38:50 +0000 (18:38 +0100)]
drm/xe/guc_submit: prevent repeated unregister
It seems that various things can trigger the lr cleanup worker,
including a CAT error, an engine reset and destroying the actual engine,
so it seems plausible to end up triggering the worker more than once in
some cases. If that does happen, we can race with an ongoing engine
deregister before it has completed, thus triggering it again and also
changing the state back into pending_disable. Checking if the engine has
been marked as destroyed looks like it should prevent this.
Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Tejas Upadhyay [Fri, 4 Aug 2023 06:24:23 +0000 (11:54 +0530)]
drm/xe: Add min/max cap for engine scheduler properties
Add sysfs entries for the min, max, and default values of each
engine scheduler control for every hardware engine class.
IOCTLs from non-elevated users that set these controls must stay within
the min-max ranges exposed in sysfs, while elevated users can set these
controls to any value. However, compile-time CONFIG min/max values are
introduced which restrict even elevated users to the compile-time
min-max range if the sysfs min/max are ever violated.
V12:
- Rebase
V11:
- Make engine_get_prop_minmax and enforce_sched_limit static - Matt
- use enum in place of string in engine_get_prop_minmax - Matt
- no need to use enforce_sched_limit or no need to filter min/max
per user type in sysfs - Matt
V10:
- Add kernel doc for non-static func
- Make helper to get min/max for range validation - Matt
- Filter min/max per user type
V9 :
- Rebase to use s/xe_engine/xe_hw_engine/ - Matt
V8 :
- fix enforce_sched_limit and avoid code duplication - Niranjana
- Make sure min < max - Niranjana
V7 :
- Rebase to replace hw engine with eclass interface
- return EINVAL in place of EPERM
- Use some APIs to avoid code duplication
V6 :
- Rebase changes to reflect per engine class props interface - MattB
- Use #if ENABLED - MattB
- Remove MAX_SCHED_TIMEOUT check as range validation is enough
V5 :
- Rebase to resolve conflicts - CI
V4 :
- Rebase
- Update commit to reflect tile addition
- Use XE_HW macro directly as they are already filtered
for CONFIG checks - Niranjana
- Add CONFIG for enable/disable min/max limitation
on elevated user. Default is enable - Matt/Joonas
V3 :
- Resolve CI hooks warning for kernel-doc
V2 :
- Restrict min/max setting to #define default min/max for
elevated user - Himal
- Remove unrelated changes from patch - Niranjana
V7:
- Rebase
V6:
- Rebase to use s/xe_engine/xe_hw_engine/ - Matt
V5:
- Remove timeout validation, not relevant - Niranjana
V4:
- Rebase to replace hw engine with eclass interface
V3:
- Rebase to per class engine props interface
V2:
- Rebase
- Update commit message to add tile
V8:
- Rebase
V7:
- Rebase to use s/xe_engine/xe_hw_engine/ - Matt
V6:
- Remove duration validation, not relevant - Niranjana
V5:
- Rebase to replace hw engine with eclass interface
V4:
- Rebase to per class engine props interface
V3:
- Rebase
- Update commit message to add tile
V2:
- Rebase
V8:
- Rebase
V7:
- Rebase to use s/xe_engine/xe_hw_engine/ - Matt
V6:
- Remove timeout validation, not relevant - Niranjana
- Rebase to use common error path
V5:
- Rebase to use engine class interface instead of hw engine
V4:
- Rebase to per class engine props interface
V3:
- Rebase
- Update commit message to reflect tile update
V2:
- Use sysfs_create_files as part of this patch
Tejas Upadhyay [Fri, 4 Aug 2023 12:06:25 +0000 (17:36 +0530)]
drm/xe: Add sysfs for default engine scheduler properties
For each HW engine under a GT we are adding a defaults sysfs
entry listing all engine scheduler properties and their
default values, so that it is easier for the user to fetch
the default values of these properties at any time and go
back to the defaults.
For example,
DUT# ls /sys/class/drm/card1/device/tileN/gtN/engines/bcs/.defaults/
job_timeout_ms preempt_timeout_us timeslice_duration_us
where,
@job_timeout_ms: The time after which a job is removed from the scheduler.
@preempt_timeout_us: How long to wait (in microseconds) for a preemption
event to occur when submitting a new context.
@timeslice_duration_us: Each context is scheduled for execution for the
timeslice duration, before switching to the next
context.
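A sketch of how such read-only defaults could be exposed via sysfs (the struct, helper, and callback names below are illustrative, not the driver's exact ones):

    struct example_eclass {
        struct {
            u32 job_timeout_ms;
        } defaults;
    };

    /* kobj_to_example_eclass() stands in for a container_of() helper */
    static ssize_t job_timeout_ms_default_show(struct kobject *kobj,
                                               struct kobj_attribute *attr,
                                               char *buf)
    {
        struct example_eclass *eclass = kobj_to_example_eclass(kobj);

        return sysfs_emit(buf, "%u\n", eclass->defaults.job_timeout_ms);
    }

    static struct kobj_attribute job_timeout_ms_def =
        __ATTR(job_timeout_ms, 0444, job_timeout_ms_default_show, NULL);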
V12:
- Add missing drmm_add_action_or_reset and remove sysfs files
V11:
- Rebase
V10:
- Remove xe_gt.h inclusion from .h - Matt
V9 :
- Remove jiffies for job_timeout_ms - Matt
V8 :
- replace xe_engine with xe_hw_engine - Matt
V7 :
- Push all errors to one error path at every places - Niranjana
- Describe struct member to resolve kernel doc err - CI hooks
V6 :
- Use engine class interface instead of hw engine
in sysfs for better interfacing readability - Niranjana
V5 :
- Scheduling props should apply per class engine not per hardware engine - Matt
- Do not record value of job_timeout_ms if changed based on dma_fence - Matt
V4 :
- Resolve merge conflicts - CI
V3 :
- Rearrange code in its own file
- Rebase
- Update commit message to reflect tile addition
V2 :
- Use sysfs_create_files in this patch - Niranjana
- Handle prototype error for xe_add_engine_defaults - CI hooks
- Remove unused member sysfs_hwe - Niranjana
Tejas Upadhyay [Fri, 4 Aug 2023 11:47:56 +0000 (17:17 +0530)]
drm/xe: Add sysfs entries for engines under its GT
Add an engines sysfs directory under each GT and
create a sub-directory for every engine class
(note it's per class, not per instance) present on the GT.
For example,
DUT# ls /sys/class/drm/cardX/device/tileN/gtN/engines/
bcs/ ccs/
V9 :
- Add missing drmm_add_action_or_reset
V8 :
- Rebase
V7 :
- Remove xe_gt.h from .h and include in .c - Matt
V6 :
- Add kernel doc and arrange the file alphabetically in the Makefile - Matt
V5 :
- replace xe_engine with xe_hw_engine - Matt
V4 :
- Rebase to resolve conflicts - CI
V3 :
- Move code in its own file
- Rename API name
V2 :
- Correct class mask logic - Himal
- Remove extra parenthesis
"Engine" was inappropriately used to refer to execution queues, which
also created some confusion with hardware engines. Where it applies,
the exec_queue variable name is changed to q and comments are updated
as well.
drm/xe: Prefer WARN() over BUG() to avoid crashing the kernel
Replace calls to XE_BUG_ON() with calls to XE_WARN_ON(), which in turn calls
WARN() instead of BUG(). BUG() crashes the kernel and should only be
used when it is absolutely unavoidable in case of catastrophic and
unrecoverable failures, which is not the case here.
Matthew Brost [Fri, 28 Jul 2023 02:00:14 +0000 (19:00 -0700)]
drm/xe: Remove XE_GUC_CT_SELFTEST
XE_GUC_CT_SELFTEST enabled a debugfs entry which ran a very simple
selftest ensuring the GuC CT code worked. This was added before the
kunit framework was available and before submissions were working too.
This test isn't worth porting over to the kunit framework: if the GuC CT
didn't work, almost nothing else would work, so just remove it.
Suggested-by: Oded Gabbay <ogabbay@kernel.org> Signed-off-by: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Matt Roper [Fri, 28 Jul 2023 17:56:02 +0000 (10:56 -0700)]
drm/xe/mtl: Reduce Wa_14018575942 scope to the CCS engine
The MTL version of Wa_14018575942 has been updated to suggest only
applying the register change on the CCS engine.
Note that DG2 and PVC have a functionally equivalent workaround with
Wa_18018781329; for now that one is still applying to all engines,
although we'll keep an eye on it in case it changes to be CCS-specific
too.
drm/xe: Fix the runtime_idle call and d3cold.allowed decision.
According to Documentation/power/runtime_pm.txt:
int pm_runtime_put(struct device *dev);
- decrement the device's usage counter; if the result is 0 then run
pm_request_idle(dev) and return its result
int pm_runtime_put_autosuspend(struct device *dev);
- decrement the device's usage counter; if the result is 0 then run
pm_request_autosuspend(dev) and return its result
We need to ensure that the idle function is called before suspending
so that we take the right d3cold.allowed decision and respect the values
set in the vram_d3cold_threshold sysfs entry. So we need pm_runtime_put()
instead of pm_runtime_put_autosuspend().
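In other words, the change in the runtime-pm put path is essentially the following sketch (dev stands for the device whose usage count is being dropped):

    /* Use pm_runtime_put() so ->runtime_idle() runs before suspend and
     * the d3cold.allowed decision respects vram_d3cold_threshold. */
    pm_runtime_put(dev); /* rather than pm_runtime_put_autosuspend(dev) */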
Send a uevent in case of GT reset failure. This notification can be used by
a userspace monitoring tool to perform a device-level reset/reboot
when the GT reset fails. udevadm can be used to monitor the uevents.
v2:
- Support only gt failure notification (Rodrigo)
v3:
- Rectify the comments in the header file.
v4:
- Use pci kobj instead of drm kobj for notification. (Rodrigo)
- Cleanup (Badal)
v5:
- Add tile id and gt id as additional info provided by uevent.
- Provide code documentation for the uevent. (Rodrigo)
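A minimal sketch of the notification (the environment key names and the pre-formatted id strings are assumptions for illustration):

    /* Report a failed GT reset to userspace via a uevent on the PCI device */
    char *envp[] = {
        "DEVICE_STATUS=NEEDS_RESET",
        tile_id_str, /* e.g. "TILE_ID=0", formatted beforehand */
        gt_id_str,   /* e.g. "GT_ID=1" */
        NULL,
    };

    kobject_uevent_env(&pdev->dev.kobj, KOBJ_CHANGE, envp);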
The order 'offset, mask, val' is more common in other
drivers, and in particular in i915, where any dev could copy
a sequence and end up with unexpected behavior.
We cannot have spin locks around xe_irq_reset, since it will
call the intel_display_power_is_enabled() function, and
that needs a mutex lock, hence causing the undesired
"[ BUG: Invalid wait context ]" warning.
We cannot convert i915's power domain lock to a spin lock
due to the nested dependency of non-atomic context waits.
So, let's move the xe_irq_reset functions from the
critical area, while still ensuring that we are protecting
the irq.enabled and ensuring the right serialization
in the irq handlers.
v2: On the first version, I had missed the fact that
irq.enabled is checked on the xe/display glue layer,
and that i915 display code is actually using the irq
spin lock properly. So, this got changed to a version
suggested by Matthew Auld.
v3: do not use lockdep_assert for display glue.
do not save/restore irq from inside the IRQ handler or we can
get bogus irq restore warnings
Closes: https://gitlab.freedesktop.org/drm/xe/kernel/-/issues/463 Suggested-by: Matthew Auld <matthew.auld@intel.com> Reviewed-by: Matthew Auld <matthew.auld@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Lucas De Marchi [Wed, 26 Jul 2023 16:07:07 +0000 (09:07 -0700)]
drm/xe: Carve out top of DSM as reserved
The top of DSM contains the WOPCM, which the kernel driver shouldn't access
as it contains data from other HW agents. Carve it out from the stolen
memory. On an MTL system, the output now matches the expected values:
Lucas De Marchi [Wed, 26 Jul 2023 16:07:06 +0000 (09:07 -0700)]
drm/xe: Fix MTL+ stolen memory mapping
Based on commit 8d8d062be6b9 ("drm/i915/mtl: Fix MTL stolen memory GGTT
mapping"). For stolen on MTL and beyond, the address in the PTE is the
offset from DSM base. While at it, update the comments explaining each
part of the calculation.
Lucas De Marchi [Wed, 26 Jul 2023 16:07:04 +0000 (09:07 -0700)]
drm/xe: Set PTE_DM bit for stolen on MTL
Integrated graphics 1270 and beyond should set the PTE_LM bit in the PTE
when it's stolen memory. Add a new function, xe_bo_is_stolen_devmem(),
and use it when encoding the PTE.
In some places in the spec the PTE bit is called "Local Memory",
abbreviated as LM, and in others it's called "Device Memory" (DM). Since
we moved away from "Local Memory" and preferred the "vram" terminology,
also rename the macros to DM to follow the name of the new function.
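As a sketch, the PTE encoding change looks roughly like the following (the macro name follows the rename described above; the condition is simplified for illustration):

    /* Set the device-memory bit for vram, and for stolen memory on
     * platforms (like MTL) where stolen lives in device memory. */
    if (xe_bo_is_vram(bo) || xe_bo_is_stolen_devmem(bo))
        pte |= XE_PPGTT_PTE_DM;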
Lucas De Marchi [Wed, 26 Jul 2023 16:07:03 +0000 (09:07 -0700)]
drm/xe: Decouple vram check from xe_bo_addr()
The output arg is_vram in xe_bo_addr() is unused by several callers.
It's also not what the function is mainly doing. Remove the argument and
let the interested callers call xe_bo_is_vram().
In commit 81593af6c88d ("drm/xe: Convert xe_mmio_wait32 to us so we can
stop using wait_for_us.") the mcr semaphore register read was
accidentally switched from waiting for the register to go to 1 to
waiting for the register to go to 0, so we need to flip it back.
Lucas De Marchi [Wed, 26 Jul 2023 16:07:01 +0000 (09:07 -0700)]
drm/xe: Fix checking for unset value
Commit 37430402618d ("drm/xe: NULL binding implementation") introduced
the NULL binding implementation, but left a case in which the out value
is_vram is not set and the caller will use whatever was on the stack.
Eventually the is_vram out parameter could be removed, but this should at least
fix the current bug.
Fixes: 37430402618d ("drm/xe: NULL binding implementation") Reviewed-by: Matt Roper <matthew.d.roper@intel.com> Link: https://lore.kernel.org/r/20230726160708.3967790-4-lucas.demarchi@intel.com Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Matthew Auld [Wed, 26 Jul 2023 09:23:49 +0000 (10:23 +0100)]
drm/xe/engine: add missing rpm for bind engines
Bind engines need to use the migration vm; however, we don't hold any rpm
for such a vm, otherwise the kernel would prevent rpm suspend-resume.
There are two issues here: the first is the actual engine create, which
needs to touch the lrc, but since that is in VRAM we trigger loads of
missing mem_access asserts. The second issue is when destroying the actual
engine, which requires the GuC CT to deregister the context.
v2 (Rodrigo):
- Just use ENGINE_FLAG_VM as the indicator that we need to hold an rpm
ref. This also handles the case in xe_vm_create() where we create
default bind engines.
This config is the only real one. If execlists support remains in the
code it will forever be experimental, and we shouldn't maintain
a uapi like that for an experimental piece of code that
should never be used by real users.
Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com> Reviewed-by: José Roberto de Souza <jose.souza@intel.com>
Matthew Auld [Fri, 31 Mar 2023 08:46:28 +0000 (09:46 +0100)]
drm/xe: fully turn on small-bar support
This allows vram_size > io_size, instead of just clamping the vram size
to the BAR size, now that the driver supports it.
Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com> Cc: Lucas De Marchi <lucas.demarchi@intel.com> Cc: Michael J. Ruhl <michael.j.ruhl@intel.com> Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Matthew Auld [Fri, 31 Mar 2023 08:46:27 +0000 (09:46 +0100)]
drm/xe/uapi: add the userspace bits for small-bar
Mostly the same as i915. We add a new hint for userspace to force an
object into the mappable part of vram.
We also need to tell userspace how large the mappable part is. In Vulkan
for example, there will be two vram heaps for small-bar systems. And
here the size of each heap needs to be known. Likewise the used/avail
tracking needs to account for the mappable part.
We also limit the available tracking going forward to privileged users
only, since these values are system-wide and are technically considered
an info leak.
v2 (Maarten):
- s/NEEDS_CPU_ACCESS/NEEDS_VISIBLE_VRAM/ in the uapi. We also no
longer require smem as an extra placement. This is more flexible,
and lets us use this for clear-color surfaces, since we need CPU access
there but we don't want to attach smem, since that effectively disables
CCS from kernel pov.
- Reject clear-color CCS buffers where NEEDS_VISIBLE_VRAM is not set,
instead of migrating it behind the scenes.
v3 (José):
- Split the changes that limit the accounting for perfmon_capable()
into a separate patch.
- Use XE_BO_CREATE_VRAM_MASK.
v4 (Gwan-gyeong Mun):
- Add some kernel-doc for the query bits.
v5:
- One small kernel-doc correction. The cpu_visible_size and
corresponding used tracking are always zero for non
XE_MEM_REGION_CLASS_VRAM.
v6:
- Without perfmon_capable() it likely makes more sense to report as
zero, instead of reporting as used == total size. This should give
similar behaviour to i915, which rather tracks free instead of used.
- Only enforce NEEDS_VISIBLE_VRAM on rc_ccs_cc_plane surfaces when the
device is actually small-bar.
Testcase: igt/tests/xe_query
Testcase: igt/tests/xe_mmap@small-bar Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com> Cc: Lucas De Marchi <lucas.demarchi@intel.com> Cc: José Roberto de Souza <jose.souza@intel.com> Cc: Filip Hazubski <filip.hazubski@intel.com> Cc: Carl Zhang <carl.zhang@intel.com> Cc: Effie Yu <effie.yu@intel.com> Reviewed-by: José Roberto de Souza <jose.souza@intel.com> Reviewed-by: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Matthew Auld [Fri, 31 Mar 2023 08:46:26 +0000 (09:46 +0100)]
drm/xe/bo: support tiered vram allocation for small-bar
Add the new flag XE_BO_NEEDS_CPU_ACCESS, to force allocating in the
mappable part of vram. If no flag is specified we do a top-down
allocation, to limit the chances of stealing the precious mappable part
when we don't need it. If this is a full-bar system, then this all gets
no-oped.
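A sketch of the placement logic (the TTM place fields and TTM_PL_FLAG_TOPDOWN are real; the flag, placement id, and size variables are taken from the commit text or assumed):

    struct ttm_place place = { .mem_type = XE_PL_VRAM0 };

    if (bo_flags & XE_BO_NEEDS_CPU_ACCESS) {
        /* Must be CPU-mappable: restrict to the io/mappable range */
        place.fpfn = 0;
        place.lpfn = io_size >> PAGE_SHIFT;
    } else {
        /* Prefer the non-mappable portion, allocating top-down */
        place.flags |= TTM_PL_FLAG_TOPDOWN;
    }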
For kernel users, it looks like xe_bo_create_pin_map() is the central
place which users should call if they want CPU access to the object, so
add the flag there.
We still need to plumb this through for userspace allocations. Also it
looks like page-tables are using pin_map(), which is less than ideal. If
we can already use the GPU to do page-table management, then maybe we
should just force that for small-bar.
Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com> Cc: Thomas Hellström <thomas.hellstrom@linux.intel.com> Cc: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Maarten Lankhorst <maarten.lankhorst@linux.intel.com> Reviewed-by: Gwan-gyeong Mun <gwan-gyeong.mun@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
v2:
- Rename to XE_BO_PAGETABLE to make it more clear that this BO is the
pagetable itself, rather than just being bound in the PPGTT. (Lucas)
Cc: Lucas De Marchi <lucas.demarchi@intel.com> Reviewed-by: Lucas De Marchi <lucas.demarchi@intel.com> Acked-by: Nirmoy Das <nirmoy.das@intel.com> Link: https://lore.kernel.org/r/20230725003433.1992137-3-matthew.d.roper@intel.com Signed-off-by: Matt Roper <matthew.d.roper@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Matthew Auld [Mon, 24 Jul 2023 10:47:44 +0000 (11:47 +0100)]
drm/xe: add lockdep annotation for xe_device_mem_access_put()
The main motivation is d3cold, which will make the suspend and
resume callbacks even more scary, but this is useful regardless. We already
have the needed annotation on the acquire side with
xe_device_mem_access_get(), and by adding the annotation on the release
side we should have a lot more confidence that our locking hierarchy is
correct.
v2:
- Move the annotation into both callbacks for better symmetry. Also
don't hold over the entire mem_access_get(); we only need lockdep
to understand what is being held upon entering mem_access_get(), and
how that matches up with locks in the callbacks.
Matthew Brost [Fri, 21 Jul 2023 19:16:13 +0000 (12:16 -0700)]
drm/xe: Use migrate engine for page fault binds
We must use the migrate engine for page fault binds in order to avoid a
deadlock, as the migrate engine has a reserved BCS instance which cannot
be stuck on a fault. To use the migrate engine, the engine argument to
xe_migrate_update_pgtables must be NULL; this was incorrectly wired up
so vm->eng[tile_id] was always being used. Fix this.
Matthew Brost [Thu, 20 Jul 2023 03:44:25 +0000 (20:44 -0700)]
drm/xe: Reduce the number list links in xe_vma
Combine the userptr, rebind, and destroy links into a union as
the lists these links belong to are mutually exclusive.
v2: Adjust which lists are combined (Thomas H)
v3: Add kernel doc why this is safe (Thomas H), remove related change
of list_del_init -> list_del (Rodrigo)
Matthew Brost [Wed, 19 Jul 2023 21:46:01 +0000 (14:46 -0700)]
drm/xe: Avoid doing rebinds
If we don't change page sizes we can avoid doing rebinds and rather just
do a partial unbind. The algorithm to determine the page size is greedy,
as we assume all pages in the removed VMA are the largest page used in
the VMA.
v2: Don't exceed 100 lines
v3: struct xe_vma_op_unmap remove in different patch, remove XXX comment
Matthew Brost [Wed, 19 Jul 2023 21:10:11 +0000 (14:10 -0700)]
drm/xe: Make bind engines safe
We currently have a race between bind engines which can result in
corrupted page tables leading to faults.
A simple example:
bind A 0x0000-0x1000, engine A, has unsatisfied in-fence
bind B 0x1000-0x2000, engine B, no in-fences
exec A uses 0x1000-0x2000
Bind B will pass bind A and exec A will fault. This occurs as bind A
programs the root of the page table in a bind job which is held up by an
in-fence. Bind B in this case just programs a leaf entry of the
structure.
To fix this, use the range-fence utility to track cross bind engine
conflicts. In the above example bind A would insert a dependency into the
range-fence tree with a key of 0x0-0x7fffffffff; bind B would find that
dependency and its bind job would be scheduled behind the unsatisfied
in-fence and bind A's job.
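Conceptually, each bind records its VA range in the range-fence tree before submitting, along the lines of this sketch (the function signature, ops name, and tree field are approximations of the utility described in the next entry, not guaranteed to match the final API):

    /* Record that this bind touches [start, last] until 'fence' signals;
     * overlapping binds from other engines will wait on it. */
    err = xe_range_fence_insert(&vm->rftree[tile_id], rfence,
                                &xe_range_fence_kfree_ops,
                                start, last, fence);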
Reviewed-by: Maarten Lankhorst<maarten.lankhorst@linux.intel.com> Co-developed-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Signed-off-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Thomas Hellström [Sun, 9 Jul 2023 16:54:59 +0000 (09:54 -0700)]
drm/xe: Introduce a range-fence utility
Add a generic utility to track range conflicts signaled by a dma-fence.
Tracking is implemented via an interval tree. An example use case is
tracking conflicts for pending (un)binds from multiple bind engines. By
being generic, the idea is that this could be moved to the DRM level and
used in multiple drivers for similar problems.
v2: Make interval tree functions static (CI)
v3: Remove non-static cleanup function (CI)
Reviewed-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Matthew Brost <matthew.brost@intel.com> Signed-off-by: Thomas Hellström <thomas.hellstrom@linux.intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Those messages are unnecessary because a generic message is already
produced in case of allocation failure. Besides, this also removes a
misuse of the XE_IOCTL_DBG macro.
Lucas De Marchi [Tue, 18 Jul 2023 19:39:24 +0000 (12:39 -0700)]
drm/xe: Use FIELD_PREP/FIELD_GET for tile id encoding
Use FIELD_PREP()/FIELD_GET() to encode the tile id into flags. Besides
protecting against eventual overflow, it also makes it easier to see that
a new flag can't be added as BIT(7).
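For example, something along these lines (the mask below is illustrative, not the driver's actual bit layout):

    #include <linux/bitfield.h>

    /* Tile id packed into the top bits of the bo creation flags */
    #define XE_BO_TILE_ID_EXAMPLE    GENMASK(7, 5)

    u8 flags = FIELD_PREP(XE_BO_TILE_ID_EXAMPLE, tile_id);
    u8 tile  = FIELD_GET(XE_BO_TILE_ID_EXAMPLE, flags);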
Lucas De Marchi [Tue, 18 Jul 2023 19:39:23 +0000 (12:39 -0700)]
drm/xe: Normalize XE_VM_FLAG* names
Rename XE_VM_FLAGS_64K to XE_VM_FLAG_64K to follow the other names and
s/GT/TILE/ that got missed in commit 08dea7674533 ("drm/xe: Move
migration from GT to tile").
Matthew Auld [Thu, 13 Jul 2023 09:00:49 +0000 (10:00 +0100)]
drm/xe: add missing bulk_move reset
It looks like bulk_move is set during object construction, but is only
removed on object close. However, in various places we might not yet have
an actual fd to close, like on the error paths for the gem_create ioctl,
and also for one internal user in the evict_test_run_gt() selftest. Try to
handle those cases by manually resetting the bulk_move. This should
prevent triggering:
WARNING: CPU: 7 PID: 8252 at drivers/gpu/drm/ttm/ttm_bo.c:327
ttm_bo_release+0x25e/0x2a0 [ttm]
v2 (Nirmoy):
- It should be safe to just unconditionally call
__xe_bo_unset_bulk_move() in most places.
Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Matthew Brost <matthew.brost@intel.com> Reviewed-by: Nirmoy Das <nirmoy.das@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Matthew Auld [Thu, 13 Jul 2023 09:13:33 +0000 (10:13 +0100)]
drm/xe/selftests: restart GT after xe_bo_restore_kernel()
The test seems to be failing badly after calling xe_bo_restore_kernel().
Taking a snapshot of the CTB and copying back a potentially old version
seems risky, depending on what might have been inflight. Also it seems
snapshotting the ADS object and copying it back results in serious
breakage. Normally when calling xe_bo_restore_kernel() we always fully
restart the GT, which re-initializes such things. We could potentially
skip saving and restoring such objects in xe_bo_evict_all(); however, it
seems quite fragile not to also restart the GT. Try to do that here by
triggering a GT reset.
Signed-off-by: Matthew Auld <matthew.auld@intel.com> Cc: Matthew Brost <matthew.brost@intel.com> Acked-by: Nirmoy Das <nirmoy.das@intel.com> Signed-off-by: Rodrigo Vivi <rodrigo.vivi@intel.com>