--- /dev/null
+---------------------------------
+NVMe/TCP Initiator for VMware ESX
+---------------------------------
+
+Prerequisites
+=============
+
+- A VMware ESXi host running VMware vSphere Hypervisor (ESXi) 7.0U3 or later.
+- Deployed Ceph NVMe-oF gateway.
+- Ceph cluster with NVMe-oF configuration.
+- Subsystem defined in the gateway.
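+
+To confirm that the host meets the ESXi version requirement, the installed
+version can be queried from the ESXi shell (an optional check):
+
+.. prompt:: bash #
+
+   esxcli system version get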
+
+Configuration
+=============
+
+The following instructions use the vSphere Client and the ``esxcli`` command-line interface.
+
+1. Enable NVMe/TCP on a NIC:
+
+ .. prompt:: bash #
+
+      esxcli nvme fabrics enable --protocol TCP --device vmnicN
+
+ Replace ``N`` with the number of the NIC.
+
+2. Tag a VMkernel NIC to permit NVMe/TCP traffic:
+
+ .. prompt:: bash #
+
+      esxcli network ip interface tag add --interface-name vmkN --tagname NVMeTCP
+
+   Replace ``N`` with the ID of the VMkernel NIC.
+
+3. Configure the VMware ESXi host for NVMe/TCP:
+
+ #. List the NVMe-oF adapter:
+
+ .. prompt:: bash #
+
+ esxcli nvme adapter list
+
+ #. Discover NVMe-oF subsystems:
+
+ .. prompt:: bash #
+
+         esxcli nvme fabrics discover -a NVME_TCP_ADAPTER -i GATEWAY_IP -p 4420
+
+   #. Connect to the NVMe-oF gateway subsystem:
+
+ .. prompt:: bash #
+
+         esxcli nvme fabrics connect -a NVME_TCP_ADAPTER -i GATEWAY_IP -p 4420 -s SUBSYSTEM_NQN
+
+ #. List the NVMe/TCP controllers:
+
+ .. prompt:: bash #
+
+ esxcli nvme controller list
+
+ #. List the NVMe-oF namespaces in the subsystem:
+
+ .. prompt:: bash #
+
+ esxcli nvme namespace list
+
+4. Verify that the initiator has been set up correctly:
+
+ #. From the vSphere client go to the ESXi host.
+ #. On the Storage page go to the Devices tab.
+   #. Verify that the NVMe/TCP disks are listed in the table.
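+
+As an alternative to the vSphere client, the same verification can be done from
+the ESXi shell by filtering the storage device list; a minimal sketch (device
+display names vary by environment):
+
+.. prompt:: bash #
+
+   esxcli storage core device list | grep -i nvme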
--- /dev/null
+==============================
+ NVMe/TCP Initiator for Linux
+==============================
+
+Prerequisites
+=============
+
+- Kernel 5.0 or later
+- RHEL 9.2 or later
+- Ubuntu 24.04 or later
+- SLES 15 SP3 or later
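+
+The installation step below uses ``yum``, which applies to RHEL and other
+RPM-based distributions. On the Ubuntu releases listed above, the same package
+is installed with ``apt`` instead:
+
+.. prompt:: bash #
+
+   apt install nvme-cli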
+
+Installation
+============
+
+1. Install the ``nvme-cli`` package:
+
+ .. prompt:: bash #
+
+ yum install nvme-cli
+
+2. Load the NVMe-oF module:
+
+ .. prompt:: bash #
+
+ modprobe nvme-fabrics
+
+3. Verify the NVMe/TCP target is reachable:
+
+ .. prompt:: bash #
+
+ nvme discover -t tcp -a GATEWAY_IP -s 4420
+
+4. Connect to the NVMe/TCP target:
+
+ .. prompt:: bash #
+
+ nvme connect -t tcp -a GATEWAY_IP -n SUBSYSTEM_NQN
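+
+After the connect command returns, the subsystem and its controller should be
+visible to the host. As an optional check, list the attached NVMe-oF
+subsystems:
+
+.. prompt:: bash #
+
+   nvme list-subsys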
+
+Next steps
+==========
+
+Verify that the initiator is set up correctly:
+
+1. List the NVMe block devices:
+
+ .. prompt:: bash #
+
+ nvme list
+
+2. Create a filesystem on the desired device:
+
+ .. prompt:: bash #
+
+ mkfs.ext4 NVME_NODE_PATH
+
+3. Mount the filesystem:
+
+ .. prompt:: bash #
+
+ mkdir /mnt/nvmeof
+
+ .. prompt:: bash #
+
+ mount NVME_NODE_PATH /mnt/nvmeof
+
+4. List the contents of the NVMe-oF filesystem:
+
+ .. prompt:: bash #
+
+ ls /mnt/nvmeof
+
+5. Create a text file in the ``/mnt/nvmeof`` directory:
+
+ .. prompt:: bash #
+
+      echo "Hello NVMe-oF" > /mnt/nvmeof/hello.text
+
+6. Verify that the file can be accessed:
+
+ .. prompt:: bash #
+
+ cat /mnt/nvmeof/hello.text
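+
+When the namespace is no longer needed, it can be detached again. The following
+is a minimal cleanup sketch, assuming the mount point and subsystem NQN used
+above:
+
+.. prompt:: bash #
+
+   umount /mnt/nvmeof
+
+.. prompt:: bash #
+
+   nvme disconnect -n SUBSYSTEM_NQN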
--- /dev/null
+.. _configuring-the-nvmeof-initiators:
+
+====================================
+ Configuring the NVMe-oF Initiators
+====================================
+
+- `NVMe/TCP Initiator for Linux <../nvmeof-initiator-linux>`_
+
+- `NVMe/TCP Initiator for VMware ESX <../nvmeof-initiator-esx>`_
+
+.. toctree::
+ :maxdepth: 1
+ :hidden:
+
+ Linux <nvmeof-initiator-linux>
+ VMware ESX <nvmeof-initiator-esx>
--- /dev/null
+.. _ceph-nvmeof:
+
+======================
+ Ceph NVMe-oF Gateway
+======================
+
+The NVMe-oF Gateway presents an NVMe-oF target that exports
+RADOS Block Device (RBD) images as NVMe namespaces. The NVMe-oF protocol allows
+clients (initiators) to send NVMe commands to storage devices (targets) over a
+TCP/IP network, enabling clients without native Ceph client support to access
+Ceph block storage.
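+
+For example, a Linux initiator discovers and uses the gateway like any other
+NVMe/TCP target (the full procedure is covered in the initiator documentation
+linked below):
+
+.. prompt:: bash #
+
+   nvme discover -t tcp -a GATEWAY_IP -s 4420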
+
+Each NVMe-oF gateway consists of an `SPDK <https://spdk.io/>`_ NVMe-oF target
+with ``bdev_rbd`` and a control daemon. Ceph’s NVMe-oF gateway can be used to
+provision a fully integrated block-storage infrastructure with all the features
+and benefits of a conventional Storage Area Network (SAN).
+
+.. ditaa::
+ Cluster Network (optional)
+ +-------------------------------------------+
+ | | | |
+ +-------+ +-------+ +-------+ +-------+
+ | | | | | | | |
+ | OSD 1 | | OSD 2 | | OSD 3 | | OSD N |
+ | {s}| | {s}| | {s}| | {s}|
+ +-------+ +-------+ +-------+ +-------+
+ | | | |
+ +--------->| | +---------+ | |<----------+
+ : | | | RBD | | | :
+ | +----------------| Image |----------------+ |
+ | Public Network | {d} | |
+ | +---------+ |
+ | |
+ | +--------------------+ |
+ | +--------------+ | NVMeoF Initiators | +--------------+ |
+ | | NVMe‐oF GW | | +-----------+ | | NVMe‐oF GW | |
+ +-->| RBD Module |<--+ | Various | +-->| RBD Module |<--+
+ | | | | Operating | | | |
+ +--------------+ | | Systems | | +--------------+
+ | +-----------+ |
+ +--------------------+
+
+.. toctree::
+ :maxdepth: 1
+
+ Requirements <nvmeof-requirements>
+   Configuring the NVMe-oF Target <nvmeof-target-configure>
+ Configuring the NVMe-oF Initiators <nvmeof-initiators>
--- /dev/null
+============================
+NVMe-oF Gateway Requirements
+============================
+
+We recommend that you provision at least two NVMe/TCP gateways on different
+nodes to implement a highly-available Ceph NVMe/TCP solution.
+
+We recommend at a minimum a single 10Gb Ethernet link in the Ceph public
+network for the gateway. For hardware recommendations, see
+:ref:`hardware-recommendations`.
+
+.. note:: On the NVMe-oF gateway, the memory footprint is a function of the
+ number of mapped RBD images and can grow to be large. Plan memory
+   requirements accordingly, based on the number of RBD images to be mapped.
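+
+Once the gateways are deployed, per-daemon memory consumption can be observed
+through the orchestrator; a sketch that assumes a ``cephadm`` deployment (the
+``MEM USE`` column reports current usage):
+
+.. prompt:: bash #
+
+   ceph orch ps --daemon-type nvmeof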
--- /dev/null
+==========================================
+Installing and Configuring NVMe-oF Targets
+==========================================
+
+Traditionally, block-level access to a Ceph storage cluster has been limited to
+(1) QEMU and ``librbd`` (which is a key enabler for adoption within OpenStack
+environments), and (2) the Linux kernel client. Starting with the Ceph Reef
+release, block-level access has been expanded to offer standard NVMe/TCP
+support, allowing wider platform usage and potentially opening new use cases.
+
+Prerequisites
+=============
+
+- Red Hat Enterprise Linux/CentOS 8.0 (or newer); Linux kernel v4.16 (or newer)
+
+- A working Ceph Reef or later storage cluster, deployed with ``cephadm``
+
+- NVMe-oF gateways, which can either be colocated with OSD nodes or on dedicated nodes
+
+- Separate network subnets for NVMe-oF front-end traffic and Ceph back-end traffic
+
+Explanation
+===========
+
+The Ceph NVMe-oF gateway is both an NVMe-oF target and a Ceph client. Think of
+it as a "translator" between Ceph's RBD interface and the NVMe-oF protocol. The
+Ceph NVMe-oF gateway can run on a standalone node or be colocated with other
+daemons, for example on a Ceph Object Storage Daemon (OSD) node. When colocating
+the Ceph NVMe-oF gateway with other daemons, ensure that sufficient CPU and
+memory are available. The steps below explain how to install and configure the
+Ceph NVMe/TCP gateway for basic operation.
+
+
+Installation
+============
+
+Complete the following steps to install the Ceph NVMe-oF gateway:
+
+#. Create a pool in which the gateway configuration can be managed:
+
+ .. prompt:: bash #
+
+ ceph osd pool create NVME-OF_POOL_NAME
+
+#. Enable RBD on the NVMe-oF pool:
+
+ .. prompt:: bash #
+
+ rbd pool init NVME-OF_POOL_NAME
+
+#. Deploy the NVMe-oF gateway daemons on a specific set of nodes:
+
+ .. prompt:: bash #
+
+      ceph orch apply nvmeof NVME-OF_POOL_NAME --placement="host01,host02"
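+
+The orchestrator deploys the gateway daemons asynchronously. As an optional
+check that the service has been scheduled on the intended hosts, list it with:
+
+.. prompt:: bash #
+
+   ceph orch ls nvmeof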
+
+Configuration
+=============
+
+Download the ``nvmeof-cli`` container image before first use. In the commands
+that follow, ``GATEWAY_PORT`` is the port of the gateway's control daemon
+(5500 by default). To download the container image, run:
+
+.. prompt:: bash #
+
+ podman pull quay.io/ceph/nvmeof-cli:latest
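+
+Every ``nvmeof-cli`` invocation below repeats the same ``podman run`` prefix.
+As an optional convenience, that prefix can be wrapped in a shell alias; a
+sketch that assumes the control daemon is reachable at ``GATEWAY_IP`` on
+``GATEWAY_PORT``:
+
+.. prompt:: bash #
+
+   alias nvmeof-cli='podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port GATEWAY_PORT'
+
+With the alias in place, the commands below shorten to, for example,
+``nvmeof-cli subsystem list``.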
+
+#. Create an NVMe subsystem:
+
+   .. prompt:: bash #
+
+      podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port GATEWAY_PORT subsystem add --subsystem SUBSYSTEM_NQN
+
+   The subsystem NQN is a user-defined string, for example ``nqn.2016-06.io.spdk:cnode1``.
+
+#. Define the IP port on the gateway that will process the NVMe/TCP commands and I/O:
+
+   a. On the install node, get the NVMe-oF gateway name:
+
+ .. prompt:: bash #
+
+ ceph orch ps | grep nvme
+
+ b. Define the IP port for the gateway:
+
+ .. prompt:: bash #
+
+         podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port GATEWAY_PORT listener add --subsystem SUBSYSTEM_NQN --gateway-name GATEWAY_NAME --traddr GATEWAY_IP --trsvcid 4420
+
+#. Get the host NQN (NVMe Qualified Name) of each host that will connect to the
+   subsystem. On a Linux host, run:
+
+   .. prompt:: bash #
+
+      cat /etc/nvme/hostnqn
+
+   On a VMware ESXi host, run:
+
+   .. prompt:: bash #
+
+      esxcli nvme info get
+
+#. Allow the initiator hosts to connect to the newly created NVMe subsystem:
+
+ .. prompt:: bash #
+
+      podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port GATEWAY_PORT host add --subsystem SUBSYSTEM_NQN --host "HOST_NQN1, HOST_NQN2"
+
+#. List all subsystems configured in the gateway:
+
+ .. prompt:: bash #
+
+      podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port GATEWAY_PORT subsystem list
+
+#. Create a new NVMe namespace:
+
+ .. prompt:: bash #
+
+      podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port GATEWAY_PORT namespace add --subsystem SUBSYSTEM_NQN --rbd-pool POOL_NAME --rbd-image IMAGE_NAME
+
+#. List all namespaces in the subsystem:
+
+ .. prompt:: bash #
+
+      podman run -it quay.io/ceph/nvmeof-cli:latest --server-address GATEWAY_IP --server-port GATEWAY_PORT namespace list --subsystem SUBSYSTEM_NQN
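+
+Because each namespace is backed by an ordinary RBD image, the mapping can also
+be confirmed from the Ceph side; a sketch that assumes the pool and image names
+used in the ``namespace add`` step above:
+
+.. prompt:: bash #
+
+   rbd info POOL_NAME/IMAGE_NAME
+
+The initiator hosts that were added to the subsystem can now discover and
+connect to it; see :ref:`configuring-the-nvmeof-initiators`.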
+
CloudStack <rbd-cloudstack>
LIO iSCSI Gateway <iscsi-overview>
Windows <rbd-windows>
+ NVMe-oF Gateway <nvmeof-overview>