nvmet: make nvmet_wq visible in sysfs
author    Guixin Liu <kanie@linux.alibaba.com>
          Thu, 31 Oct 2024 02:27:20 +0000 (10:27 +0800)
committer Keith Busch <kbusch@kernel.org>
          Tue, 5 Nov 2024 16:36:18 +0000 (08:36 -0800)
commit    c74649b6e400edae67eba56e5285a92619dfb647
tree      e76d2073bd349cd9eca04ccb057eccdc56f552a4
parent    63a5c7a4b4c49ad86c362e9f555e6f343804ee1d
nvmet: make nvmet_wq visible in sysfs

In some complex scenarios, we deploy multiple tasks on a single machine
(hybrid deployment), such as Docker containers for function computation
(background processing), real-time tasks, monitoring, event handling,
and management, along with an NVMe target server.

Each of these components is restricted to its own CPU cores to prevent
mutual interference and ensure strict isolation. To achieve this level
of isolation for nvmet_wq, we need sysfs tunables such as cpumask,
which are currently not accessible.

Add the WQ_SYSFS flag to alloc_workqueue() when creating nvmet_wq so
that the workqueue tunables are exported to userspace via sysfs.
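The change amounts to OR-ing WQ_SYSFS into the flags passed at workqueue
allocation. A sketch of the relevant call in drivers/nvme/target/core.c
(assuming WQ_MEM_RECLAIM was the pre-existing flag; not the verbatim hunk):

```c
/*
 * drivers/nvme/target/core.c (illustrative sketch)
 *
 * WQ_SYSFS makes the workqueue appear under
 * /sys/devices/virtual/workqueue/<name>/ so attributes like
 * cpumask, nice and affinity_scope become tunable from userspace.
 */
nvmet_wq = alloc_workqueue("nvmet-wq",
			   WQ_MEM_RECLAIM | WQ_SYSFS, 0);
if (!nvmet_wq)
	goto out_free_buffered_work_queue; /* error path label is illustrative */
```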

With this patch:

  nvme (nvme-6.13) # ls /sys/devices/virtual/workqueue/nvmet-wq/
  affinity_scope  affinity_strict  cpumask  max_active  nice per_cpu
  power  subsystem  uevent
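With the tunables exposed, the workqueue's workers can be pinned to the
cores reserved for the NVMe target, e.g. (requires root on a kernel with
this patch; the mask value 0f, i.e. CPUs 0-3, is illustrative):

```
# confine nvmet-wq workers to CPUs 0-3
echo 0f > /sys/devices/virtual/workqueue/nvmet-wq/cpumask
cat /sys/devices/virtual/workqueue/nvmet-wq/cpumask
```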

Signed-off-by: Guixin Liu <kanie@linux.alibaba.com>
Reviewed-by: Chaitanya Kulkarni <kch@nvidia.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Keith Busch <kbusch@kernel.org>
drivers/nvme/target/core.c