(running nvmetcli without these packages may produce
hard-to-decipher errors).
-Walk-Through Example Usage
----------------------------
-Make sure to run nvmetcli as root, that the nvmet module is loaded,
-your devices are available, and that configfs is mounted on
-/sys/kernel/config using:
-
- mount -t configfs none /sys/kernel/config
-
-To get started with the interactive mode and the nvmetcli command prompt,
-type (as root):
-
-./nvmetcli
-...>
-
-The following walks through an example using interactive mode.
-
-#
-# Create a subsystem. If you do not specify a name, an NQN will be
-# generated, which is usually the best choice; we don't do that here
-# because the generated name would be random.
-#
-
-> cd /subsystems
-...> create testnqn
-
-#
-# Add access for a specific NVMe Host by its NQN:
-#
-
-...> cd /hosts
-...> create hostnqn
-...> cd /subsystems/testnqn/allowed_hosts/
-...> create hostnqn
-
-#
-# Remove a host's access to the subsystem by deleting its Host NQN:
-#
-
-...> cd /subsystems/testnqn/allowed_hosts/
-...> delete hostnqn
-
-#
-# Alternatively, this allows any host to connect to the subsystem. Only
-# use this in tightly controlled environments:
-#
-
-...> cd /subsystems/testnqn/
-...> set attr allow_any_host=1
-
-#
-# Create a new namespace. If you do not specify a namespace ID, the first
-# unused one will be used.
-#
-
-...> cd /subsystems/testnqn/namespaces
-...> create 1
-...> cd 1
-...> set device path=/dev/nvme0n1
-...> enable
-
-# Note that in the above setup the 'device_nguid' attribute
-# does not have to be set for correct NVMe Target functionality.
-
-#
-# Create a port through which access is allowed, and enable access to
-# a subsystem through it.
-#
-# This creates a trivial loopback port that can be used with nvme-loop on
-# the same machine:
-#
-
-...> cd /ports/
-...> create 1
-...> cd 1/
-...> set addr trtype=loop
-...> cd subsystems/
-...> create testnqn
-
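-#
-# On the same machine, a host could then connect to this subsystem over
-# the loopback transport with nvme-cli, for example (illustrative only;
-# requires nvme-cli and the nvme-loop host module):
-#
-#   nvme connect -t loop -n testnqn
-#
-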
-#
-# Or create an RDMA (IB, RoCE, iWARP) port using IPv4 addressing; 4420 is
-# the IANA-assigned port for NVMe over Fabrics using RDMA:
-#
-
-...> cd /ports/
-...> create 2
-...> cd 2/
-...> set addr trtype=rdma
-...> set addr adrfam=ipv4
-...> set addr traddr=192.168.6.68
-...> set addr trsvcid=4420
-...> cd subsystems/
-...> create testnqn
-
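-#
-# A remote host could then connect to this subsystem with nvme-cli, for
-# example (illustrative only; the address and port are those configured
-# above, and the nvme-rdma host module is required):
-#
-#   nvme connect -t rdma -a 192.168.6.68 -s 4420 -n testnqn
-#
-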
-Discovery
-----------
-Each NVMe Target has a discovery controller mechanism that an NVMe
-Host can use to determine the NVM subsystems it has access to.
-nvmetcli adds a record to the discovery controller for each new
-subsystem entry and for each port entry that the newly created
-subsystem entry is bound to (see the '/ports/' and '/subsystems/'
-nvmetcli walk-through earlier in this README). Each NVMe Host only
-gets to see the discovery entries defined in
-/subsystems/[NQN NAME]/allowed_hosts, on the IP port through which it
-is connected to the NVMe Target.
-
-A Host can retrieve these discovery logs via the nvme-cli tool
-(https://github.com/linux-nvme/nvme-cli).
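-
-For example, using the RDMA port configured above, a host could retrieve
-the discovery log page with a command along these lines (the address and
-port are the illustrative values from the walk-through):
-
- nvme discover -t rdma -a 192.168.6.68 -s 4420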
-
-Referrals
----------
-TBD
-
-Saving and restoring the configuration
---------------------------------------
-The saveconfig command in the interactive
-nvmetcli mode saves the current configuration. For example,
-
-./nvmetcli
-...> saveconfig test.json
-
-You can also invoke the 'save', 'restore', and 'clear' commands directly
-from the Linux command line. For example,
-
-(to load an NVMe Target configuration):
- ./nvmetcli restore test.json
-
-(to clear a current NVMe Target configuration):
- ./nvmetcli clear test.json
-
-Without an additional file name these operate on /etc/nvmet/config.json.
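-
-For example, to save the current configuration to the default
-/etc/nvmet/config.json:
-
- ./nvmetcli save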
+Usage
+-----
+Look at Documentation/nvmetcli.txt for details.
Example NVMe Target .json files
--------------------------------------
Development
-----------------
-Please send to linux-nvme@lists.infradead.org for review and acceptance.
+Please send patches and bug reports to linux-nvme@lists.infradead.org for
+review and acceptance.