Do not hold the xef->exec_queue.lock mutex while iterating the xarray
xef->exec_queue.xa in xe_file_close(); it is not needed there and it
creates an unwanted dependency between this lock and the vm->lock.
The lock protects exec queue lookup and reference taking, which does
not apply to this code path: when the FD is closing, IOCTLs presumably
can't be modifying the xarray.
v2: Update commit text (Matt Brost)
v3: Add more code comment (Rodrigo Vivi)
v4: Further expand code comment (Rodrigo Vivi)
Signed-off-by: Niranjana Vishwanathapura <niranjana.vishwanathapura@intel.com>
Reviewed-by: Matthew Brost <matthew.brost@intel.com>
Reviewed-by: Jagmeet Randhawa <jagmeet.randhawa@intel.com>
Acked-by: Rodrigo Vivi <rodrigo.vivi@intel.com>
Signed-off-by: Matthew Brost <matthew.brost@intel.com>
Link: https://patchwork.freedesktop.org/patch/msgid/20240529221639.23117-1-niranjana.vishwanathapura@intel.com
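For illustration, a minimal sketch of how the teardown loop reads after
this change. This is not a drop-in copy of xe_file_close(); the function
name and the surrounding xe_file layout are simplified assumptions based
on the hunk below. xa_for_each() performs its lookups under the RCU read
lock internally, so no external lock is required once nothing can be
adding or removing entries, as is the case while the FD is closing:

static void xe_file_close_sketch(struct xe_file *xef)
{
	struct xe_exec_queue *q;
	unsigned long idx;

	/*
	 * No exec_queue.lock: the FD is closing, so no IOCTL can insert
	 * into or erase from the xarray, and xa_for_each() handles its
	 * own synchronization. Holding the mutex here would nest it
	 * around vm->lock, which is taken inside xe_exec_queue_kill().
	 */
	xa_for_each(&xef->exec_queue.xa, idx, q) {
		xe_exec_queue_kill(q);	/* takes vm->lock internally */
		xe_exec_queue_put(q);	/* drop this file's reference (assumed) */
	}

	xa_destroy(&xef->exec_queue.xa);
	mutex_destroy(&xef->exec_queue.lock);
}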
struct xe_exec_queue *q;
unsigned long idx;
- mutex_lock(&xef->exec_queue.lock);
+ /*
+ * No need for exec_queue.lock here as there is no contention for it
+ * when FD is closing as IOCTLs presumably can't be modifying the
+ * xarray. Taking exec_queue.lock here causes undue dependency on
+ * vm->lock taken during xe_exec_queue_kill().
+ */
xa_for_each(&xef->exec_queue.xa, idx, q) {
xe_exec_queue_kill(q);
xe_exec_queue_put(q);
}
- mutex_unlock(&xef->exec_queue.lock);
xa_destroy(&xef->exec_queue.xa);
mutex_destroy(&xef->exec_queue.lock);
mutex_lock(&xef->vm.lock);