From 3ecfb281cefb09d119e12d8824f43d99ba6b05b3 Mon Sep 17 00:00:00 2001
From: Orit Wasserman
Date: Mon, 12 Feb 2024 14:39:38 +0200
Subject: [PATCH] doc: Add NVMe-oF gateway documentation

- Add nvmeof-initiator-esx.rst
- Add nvmeof-initiator-linux.rst
- Add nvmeof-initiators.rst
- Add nvmeof-overview.rst
- Add nvmeof-requirements.rst
- Add nvmeof-target-configure.rst
- Add links to rbd-integrations.rst

Co-authored-by: Ilya Dryomov
Co-authored-by: Zac Dover
Signed-off-by: Orit Wasserman
(cherry picked from commit 9f86c35a0d308c6ff24d3a033f5314ec86bf896b)
---
 doc/rbd/nvmeof-initiator-esx.rst    |  70 ++++++++++++++++
 doc/rbd/nvmeof-initiator-linux.rst  |  83 +++++++++++++++++++
 doc/rbd/nvmeof-initiators.rst       |  16 ++++
 doc/rbd/nvmeof-overview.rst         |  48 +++++++++++
 doc/rbd/nvmeof-requirements.rst     |  14 ++++
 doc/rbd/nvmeof-target-configure.rst | 122 ++++++++++++++++++++++++++++
 doc/rbd/rbd-integrations.rst        |   1 +
 7 files changed, 354 insertions(+)
 create mode 100644 doc/rbd/nvmeof-initiator-esx.rst
 create mode 100644 doc/rbd/nvmeof-initiator-linux.rst
 create mode 100644 doc/rbd/nvmeof-initiators.rst
 create mode 100644 doc/rbd/nvmeof-overview.rst
 create mode 100644 doc/rbd/nvmeof-requirements.rst
 create mode 100644 doc/rbd/nvmeof-target-configure.rst

diff --git a/doc/rbd/nvmeof-initiator-esx.rst b/doc/rbd/nvmeof-initiator-esx.rst
new file mode 100644
index 0000000000000..6afa29f1e9f97
--- /dev/null
+++ b/doc/rbd/nvmeof-initiator-esx.rst
@@ -0,0 +1,70 @@
+---------------------------------
+NVMe/TCP Initiator for VMware ESX
+---------------------------------
+
+Prerequisites
+=============
+
+- A VMware ESXi host running VMware vSphere Hypervisor (ESXi) version 7.0U3 or later.
+- A deployed Ceph NVMe-oF gateway.
+- A Ceph cluster configured for NVMe-oF.
+- A subsystem defined in the gateway.
+
+Configuration
+=============
+
+The following instructions use the default vSphere web client and ``esxcli``.
+
+1. Enable NVMe/TCP on a NIC:
+
+   .. prompt:: bash #
+
+      esxcli nvme fabric enable --protocol TCP --device vmnicN
+
+   Replace ``N`` with the number of the NIC.
+
+2. Tag a VMkernel NIC to permit NVMe/TCP traffic:
+
+   .. prompt:: bash #
+
+      esxcli network ip interface tag add --interface-name vmkN --tagname NVMeTCP
+
+   Replace ``N`` with the ID of the VMkernel NIC.
+
+3. Configure the VMware ESXi host for NVMe/TCP:
+
+   #. List the NVMe-oF adapters:
+
+      .. prompt:: bash #
+
+         esxcli nvme adapter list
+
+   #. Discover the NVMe-oF subsystems:
+
+      .. prompt:: bash #
+
+         esxcli nvme fabric discover -a NVME_TCP_ADAPTER -i GATEWAY_IP -p 4420
+
+   #. Connect to the NVMe-oF gateway subsystem:
+
+      .. prompt:: bash #
+
+         esxcli nvme connect -a NVME_TCP_ADAPTER -i GATEWAY_IP -p 4420 -s SUBSYSTEM_NQN
+
+   #. List the NVMe/TCP controllers:
+
+      .. prompt:: bash #
+
+         esxcli nvme controller list
+
+   #. List the NVMe-oF namespaces in the subsystem:
+
+      .. prompt:: bash #
+
+         esxcli nvme namespace list
+
+4. Verify that the initiator has been set up correctly:
+
+   #. From the vSphere client, go to the ESXi host.
+   #. On the Storage page, go to the Devices tab.
+   #. Verify that the NVMe/TCP disks are listed in the table.
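+
+Example
+=======
+
+The following sequence shows what the discovery and connect steps might look
+like on a host whose NVMe/TCP adapter enumerates as ``vmhba64``, with a
+gateway listening at ``10.0.0.10:4420`` and a subsystem named
+``nqn.2016-06.io.spdk:cnode1``. All of these values are illustrative
+placeholders; substitute the adapter, NIC, VMkernel port, IP address, and NQN
+from your own environment:
+
+.. prompt:: bash #
+
+   esxcli nvme fabric enable --protocol TCP --device vmnic1
+   esxcli network ip interface tag add --interface-name vmk1 --tagname NVMeTCP
+   esxcli nvme fabric discover -a vmhba64 -i 10.0.0.10 -p 4420
+   esxcli nvme connect -a vmhba64 -i 10.0.0.10 -p 4420 -s nqn.2016-06.io.spdk:cnode1
+   esxcli nvme controller list   # the new controller should be listed
+   esxcli nvme namespace list    # namespaces correspond to the RBD images exported by the gateway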
diff --git a/doc/rbd/nvmeof-initiator-linux.rst b/doc/rbd/nvmeof-initiator-linux.rst
new file mode 100644
index 0000000000000..4889e4132c1e8
--- /dev/null
+++ b/doc/rbd/nvmeof-initiator-linux.rst
@@ -0,0 +1,83 @@
+==============================
+ NVMe/TCP Initiator for Linux
+==============================
+
+Prerequisites
+=============
+
+- Kernel 5.0 or later
+- RHEL 9.2 or later
+- Ubuntu 24.04 or later
+- SLES 15 SP3 or later
+
+Installation
+============
+
+1. Install ``nvme-cli``:
+
+   .. prompt:: bash #
+
+      yum install nvme-cli
+
+2. Load the NVMe-oF module:
+
+   .. prompt:: bash #
+
+      modprobe nvme-fabrics
+
+3. Verify that the NVMe/TCP target is reachable:
+
+   .. prompt:: bash #
+
+      nvme discover -t tcp -a GATEWAY_IP -s 4420
+
+4. Connect to the NVMe/TCP target:
+
+   .. prompt:: bash #
+
+      nvme connect -t tcp -a GATEWAY_IP -n SUBSYSTEM_NQN
+
+Next steps
+==========
+
+Verify that the initiator is set up correctly:
+
+1. List the NVMe block devices:
+
+   .. prompt:: bash #
+
+      nvme list
+
+2. Create a filesystem on the desired device:
+
+   .. prompt:: bash #
+
+      mkfs.ext4 NVME_NODE_PATH
+
+3. Mount the filesystem:
+
+   .. prompt:: bash #
+
+      mkdir /mnt/nvmeof
+
+   .. prompt:: bash #
+
+      mount NVME_NODE_PATH /mnt/nvmeof
+
+4. List the files in the ``/mnt/nvmeof`` directory:
+
+   .. prompt:: bash #
+
+      ls /mnt/nvmeof
+
+5. Create a text file in the ``/mnt/nvmeof`` directory:
+
+   .. prompt:: bash #
+
+      echo "Hello NVMe-oF" > /mnt/nvmeof/hello.text
+
+6. Verify that the file can be accessed:
+
+   .. prompt:: bash #
+
+      cat /mnt/nvmeof/hello.text
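+
+Example
+=======
+
+The following sequence shows the same steps with concrete values. It assumes
+a gateway listening at ``10.0.0.10:4420``, a subsystem named
+``nqn.2016-06.io.spdk:cnode1``, and a namespace that appears as
+``/dev/nvme1n1``; all three values are illustrative placeholders, so check
+the output of ``nvme discover`` and ``nvme list`` for the values on your
+system before running the destructive ``mkfs.ext4`` step:
+
+.. prompt:: bash #
+
+   modprobe nvme-fabrics
+   nvme discover -t tcp -a 10.0.0.10 -s 4420   # the gateway subsystem should appear in the output
+   nvme connect -t tcp -a 10.0.0.10 -n nqn.2016-06.io.spdk:cnode1
+   nvme list                                   # note the device node of the new namespace
+   mkfs.ext4 /dev/nvme1n1
+   mkdir -p /mnt/nvmeof
+   mount /dev/nvme1n1 /mnt/nvmeof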
diff --git a/doc/rbd/nvmeof-initiators.rst b/doc/rbd/nvmeof-initiators.rst
new file mode 100644
index 0000000000000..8fa4a5b9d89c8
--- /dev/null
+++ b/doc/rbd/nvmeof-initiators.rst
@@ -0,0 +1,16 @@
+.. _configuring-the-nvmeof-initiators:
+
+====================================
+ Configuring the NVMe-oF Initiators
+====================================
+
+- `NVMe/TCP Initiator for Linux <../nvmeof-initiator-linux>`_
+
+- `NVMe/TCP Initiator for VMware ESX <../nvmeof-initiator-esx>`_
+
+.. toctree::
+  :maxdepth: 1
+  :hidden:
+
+  Linux <nvmeof-initiator-linux>
+  VMware ESX <nvmeof-initiator-esx>
diff --git a/doc/rbd/nvmeof-overview.rst b/doc/rbd/nvmeof-overview.rst
new file mode 100644
index 0000000000000..070024a3abfa8
--- /dev/null
+++ b/doc/rbd/nvmeof-overview.rst
@@ -0,0 +1,48 @@
+.. _ceph-nvmeof:
+
+======================
+ Ceph NVMe-oF Gateway
+======================
+
+The NVMe-oF Gateway presents an NVMe-oF target that exports
+RADOS Block Device (RBD) images as NVMe namespaces. The NVMe-oF protocol
+allows clients (initiators) to send NVMe commands to storage devices
+(targets) over a TCP/IP network, enabling clients without native Ceph client
+support to access Ceph block storage.
+
+Each NVMe-oF gateway consists of an `SPDK <https://spdk.io/>`_ NVMe-oF target
+with ``bdev_rbd`` and a control daemon. Ceph's NVMe-oF gateway can be used to
+provision a fully integrated block-storage infrastructure with all the
+features and benefits of a conventional Storage Area Network (SAN).
+
+.. ditaa::
+
+                  Cluster Network (optional)
+                 +-------------------------------------------+
+                 |             |               |             |
+             +-------+     +-------+       +-------+     +-------+
+             |       |     |       |       |       |     |       |
+             | OSD 1 |     | OSD 2 |       | OSD 3 |     | OSD N |
+             |    {s}|     |    {s}|       |    {s}|     |    {s}|
+             +-------+     +-------+       +-------+     +-------+
+                 |             |               |             |
+      +--------->|             |  +---------+  |             |<----------+
+      :          |             |  |   RBD   |  |             |           :
+      |          +----------------|  Image  |----------------+           |
+      |           Public Network  | {d}     |                            |
+      |                           +---------+                            |
+      |                                                                  |
+      |                      +--------------------+                      |
+      |   +--------------+   | NVMeoF Initiators  |   +--------------+   |
+      |   | NVMe-oF GW   |   |   +-----------+    |   | NVMe-oF GW   |   |
+      +-->| RBD Module   |<--+   | Various   |    +-->| RBD Module   |<--+
+          |              |   |   | Operating |    |   |              |
+          +--------------+   |   | Systems   |    |   +--------------+
+                             |   +-----------+    |
+                             +--------------------+
+
+.. toctree::
+  :maxdepth: 1
+
+  Requirements <nvmeof-requirements>
+  Configuring the NVMe-oF Target <nvmeof-target-configure>
+  Configuring the NVMe-oF Initiators <nvmeof-initiators>
diff --git a/doc/rbd/nvmeof-requirements.rst b/doc/rbd/nvmeof-requirements.rst
new file mode 100644
index 0000000000000..a53d1c2d76ce1
--- /dev/null
+++ b/doc/rbd/nvmeof-requirements.rst
@@ -0,0 +1,14 @@
+============================
+NVMe-oF Gateway Requirements
+============================
+
+We recommend that you provision at least two NVMe/TCP gateways on different
+nodes to implement a highly available Ceph NVMe/TCP solution.
+
+We recommend at least a single 10 Gb Ethernet link in the Ceph public network
+for the gateway. For hardware recommendations, see
+:ref:`hardware-recommendations`.
+
+.. note:: On the NVMe-oF gateway, the memory footprint is a function of the
+   number of mapped RBD images and can grow to be large. Plan memory
+   requirements accordingly, based on the number of RBD images to be mapped.
diff --git a/doc/rbd/nvmeof-target-configure.rst b/doc/rbd/nvmeof-target-configure.rst
new file mode 100644
index 0000000000000..13c22397d9c0d
--- /dev/null
+++ b/doc/rbd/nvmeof-target-configure.rst
@@ -0,0 +1,122 @@
+==========================================
+Installing and Configuring NVMe-oF Targets
+==========================================
+
+Traditionally, block-level access to a Ceph storage cluster has been limited
+to (1) QEMU and ``librbd`` (which is a key enabler for adoption within
+OpenStack environments), and (2) the Linux kernel client. Starting with the
+Ceph Reef release, block-level access has been expanded to offer standard
+NVMe/TCP support, allowing wider platform usage and potentially opening new
+use cases.
+
+Prerequisites
+=============
+
+- Red Hat Enterprise Linux/CentOS 8.0 (or newer); Linux kernel v4.16 (or newer)
+
+- A working Ceph Reef or later storage cluster, deployed with ``cephadm``
+
+- NVMe-oF gateways, which can either be colocated with OSD nodes or run on
+  dedicated nodes
+
+- Separate network subnets for NVMe-oF front-end traffic and Ceph back-end
+  traffic
+
+Explanation
+===========
+
+The Ceph NVMe-oF gateway is both an NVMe-oF target and a Ceph client. Think of
+it as a "translator" between Ceph's RBD interface and the NVMe-oF protocol.
+The Ceph NVMe-oF gateway can run on a standalone node or be colocated with
+other daemons, for example on a Ceph Object Storage Daemon (OSD) node. When
+colocating the Ceph NVMe-oF gateway with other daemons, ensure that sufficient
+CPU and memory are available. The steps below explain how to install and
+configure the Ceph NVMe/TCP gateway for basic operation.
+
+Installation
+============
+
+Complete the following steps to install the Ceph NVMe-oF gateway:
+
+#. Create a pool in which the gateway's configuration can be managed:
+
+   .. prompt:: bash #
+
+      ceph osd pool create NVME-OF_POOL_NAME
+
+#. Enable RBD on the NVMe-oF pool:
+
+   .. prompt:: bash #
+
+      rbd pool init NVME-OF_POOL_NAME
+
+#. Deploy the NVMe-oF gateway daemons on a specific set of nodes:
+
+   .. prompt:: bash #
+
+      ceph orch apply nvmeof NVME-OF_POOL_NAME --placement="host01, host02"
+
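+For example, with a pool named ``nvmeof_pool`` and the gateway daemons placed
+on ``host01`` and ``host02``, the full sequence looks like this (the pool
+name is illustrative; use the pool and host names from your own cluster):
+
+.. prompt:: bash #
+
+   ceph osd pool create nvmeof_pool
+   rbd pool init nvmeof_pool
+   ceph orch apply nvmeof nvmeof_pool --placement="host01, host02"
+   ceph orch ps | grep nvmeof    # the gateway daemons should be shown as running
+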
+Configuration
+=============
+
+Download the ``nvmeof-cli`` container image before first use. To download it,
+use the following command:
+
+.. prompt:: bash #
+
+   podman pull quay.io/ceph/nvmeof-cli:latest
+
+#. Create an NVMe subsystem:
+
+   .. prompt:: bash #
+
+      podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port 5500 subsystem add --subsystem SUBSYSTEM_NQN
+
+   The subsystem NQN is a user-defined string, for example
+   ``nqn.2016-06.io.spdk:cnode1``.
+
+#. Define the IP port on the gateway that will process NVMe/TCP commands and
+   I/O:
+
+   a. On the install node, get the NVMe-oF gateway name:
+
+      .. prompt:: bash #
+
+         ceph orch ps | grep nvme
+
+   b. Define the IP port for the gateway:
+
+      .. prompt:: bash #
+
+         podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port 5500 listener add --subsystem SUBSYSTEM_NQN --gateway-name GATEWAY_NAME --traddr GATEWAY_IP --trsvcid 4420
+
+#. Get the host NQN (NVMe Qualified Name) of each host that will connect.
+
+   On Linux:
+
+   .. prompt:: bash #
+
+      cat /etc/nvme/hostnqn
+
+   On VMware ESXi:
+
+   .. prompt:: bash #
+
+      esxcli nvme info get
+
+#. Allow the initiator host to connect to the newly created NVMe subsystem:
+
+   .. prompt:: bash #
+
+      podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port 5500 host add --subsystem SUBSYSTEM_NQN --host "HOST_NQN1, HOST_NQN2"
+
+#. List all subsystems configured in the gateway:
+
+   .. prompt:: bash #
+
+      podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port 5500 subsystem list
+
+#. Create a new NVMe namespace:
+
+   .. prompt:: bash #
+
+      podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port 5500 namespace add --subsystem SUBSYSTEM_NQN --rbd-pool POOL_NAME --rbd-image IMAGE_NAME
+
+#. List all namespaces in the subsystem:
+
+   .. prompt:: bash #
+
+      podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port 5500 namespace list --subsystem SUBSYSTEM_NQN
+
diff --git a/doc/rbd/rbd-integrations.rst b/doc/rbd/rbd-integrations.rst
index f55604a6fcf7c..3c4afe38f3d45 100644
--- a/doc/rbd/rbd-integrations.rst
+++ b/doc/rbd/rbd-integrations.rst
@@ -14,3 +14,4 @@
     CloudStack <rbd-cloudstack>
     LIO iSCSI Gateway <iscsi-overview>
     Windows <rbd-windows>
+    NVMe-oF Gateway <nvmeof-overview>
-- 
2.39.5