drm/xe/guc: Check GuC running state before deregistering exec queue
author     Shuicheng Lin <shuicheng.lin@intel.com>
           Fri, 10 Oct 2025 17:25:29 +0000 (17:25 +0000)
committer  Lucas De Marchi <lucas.demarchi@intel.com>
           Mon, 13 Oct 2025 20:03:26 +0000 (13:03 -0700)
In normal operation, a registered exec queue is disabled and
deregistered through the GuC, and freed only after the GuC confirms
completion. However, if the driver is forced to unbind while the exec
queue is still running, the user may call exec_destroy() after the GuC
has already been stopped and CT communication disabled.

In this case, the driver cannot receive a response from the GuC,
preventing proper cleanup of the exec queue resources. Fix this by
releasing the resources directly when the GuC is not running.
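
The fix reduces to one guarded decision in the cleanup path (shown in
full in the diff below). A minimal sketch of that logic, with the
locking and message handling of xe_guc_submit.c omitted:

	if (exec_queue_registered(q) && xe_uc_fw_is_running(&guc->fw))
		/* GuC is alive: disable scheduling and deregister through
		 * the GuC; the queue is freed once the G2H response lands.
		 */
		disable_scheduling_deregister(guc, q);
	else
		/* GuC already stopped (unbind/reset): no response will come,
		 * so release the driver-side resources immediately.
		 */
		__guc_exec_queue_destroy(guc, q);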

Here is the failure dmesg log:
"
[  468.089581] ---[ end trace 0000000000000000 ]---
[  468.089608] pci 0000:03:00.0: [drm] *ERROR* GT0: GUC ID manager unclean (1/65535)
[  468.090558] pci 0000:03:00.0: [drm] GT0:     total 65535
[  468.090562] pci 0000:03:00.0: [drm] GT0:     used 1
[  468.090564] pci 0000:03:00.0: [drm] GT0:     range 1..1 (1)
[  468.092716] ------------[ cut here ]------------
[  468.092719] WARNING: CPU: 14 PID: 4775 at drivers/gpu/drm/xe/xe_ttm_vram_mgr.c:298 ttm_vram_mgr_fini+0xf8/0x130 [xe]
"

v2: use xe_uc_fw_is_running() instead of xe_guc_ct_enabled(), since the
    CT channel may go down and come back up during VF migration.
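
For illustration, the two candidate conditions side by side (both
helpers exist in the xe driver; the v1 form is reconstructed from this
note and may not match the exact earlier posting):

	/* v1: gate on the CT channel, which can temporarily go down and
	 * come back up during VF migration.
	 */
	if (exec_queue_registered(q) && xe_guc_ct_enabled(&guc->ct))
		disable_scheduling_deregister(guc, q);

	/* v2: gate on the GuC firmware state, which stays stable across
	 * CT down/up cycles.
	 */
	if (exec_queue_registered(q) && xe_uc_fw_is_running(&guc->fw))
		disable_scheduling_deregister(guc, q);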

Fixes: dd08ebf6c352 ("drm/xe: Introduce a new DRM driver for Intel GPUs")
Cc: stable@vger.kernel.org
Cc: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Shuicheng Lin <shuicheng.lin@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Link: https://lore.kernel.org/r/20251010172529.2967639-2-shuicheng.lin@intel.com
(cherry picked from commit 9b42321a02c50a12b2beb6ae9469606257fbecea)
Signed-off-by: Lucas De Marchi <lucas.demarchi@intel.com>
diff --git a/drivers/gpu/drm/xe/xe_guc_submit.c b/drivers/gpu/drm/xe/xe_guc_submit.c
index 53024eb5670b707b781649015fbf2db004a9e17f..94ed8159496f10f83d7f46a0b6f048b9927bd39e 100644
--- a/drivers/gpu/drm/xe/xe_guc_submit.c
+++ b/drivers/gpu/drm/xe/xe_guc_submit.c
@@ -44,6 +44,7 @@
 #include "xe_ring_ops_types.h"
 #include "xe_sched_job.h"
 #include "xe_trace.h"
+#include "xe_uc_fw.h"
 #include "xe_vm.h"
 
 static struct xe_guc *
@@ -1489,7 +1490,17 @@ static void __guc_exec_queue_process_msg_cleanup(struct xe_sched_msg *msg)
        xe_gt_assert(guc_to_gt(guc), !(q->flags & EXEC_QUEUE_FLAG_PERMANENT));
        trace_xe_exec_queue_cleanup_entity(q);
 
-       if (exec_queue_registered(q))
+       /*
+        * Expected state transitions for cleanup:
+        * - If the exec queue is registered and GuC firmware is running, we must first
+        *   disable scheduling and deregister the queue to ensure proper teardown and
+        *   resource release in the GuC, then destroy the exec queue on driver side.
+        * - If the GuC is already stopped (e.g., during driver unload or GPU reset),
+        *   we cannot expect a response for the deregister request. In this case,
+        *   it is safe to directly destroy the exec queue on driver side, as the GuC
+        *   will not process further requests and all resources must be cleaned up locally.
+        */
+       if (exec_queue_registered(q) && xe_uc_fw_is_running(&guc->fw))
                disable_scheduling_deregister(guc, q);
        else
                __guc_exec_queue_destroy(guc, q);