**iSCSI Discovery and Multipath Device Setup:**
-1. From vSphere, open the Storage Adapters, on the Configuration tab. Right click
+#. From vSphere, on the Configuration tab, open Storage Adapters. Right-click
on the iSCSI Software Adapter and select Properties.
-2. If CHAP was setup on the iSCSI gateway, in the General tab click the "CHAP…"
+#. If CHAP was set up on the iSCSI gateway, in the General tab click the "CHAP…"
button. If CHAP is not being used, skip to step 4.
-3. On the CHAP Credentials windows, select “Do not use CHAP unless required by target”,
+#. On the CHAP Credentials window, select “Do not use CHAP unless required by target”,
and enter the "Name" and "Secret" values used on the initial setup for the iSCSI
gateway, then click on the "OK" button.
-4. On the Dynamic Discovery tab, click the "Add…" button, and enter the IP address
+#. On the Dynamic Discovery tab, click the "Add…" button, and enter the IP address
and port of one of the iSCSI target portals. Click on the "OK" button.
-5. Close the iSCSI Initiator Properties window. A prompt will ask to rescan the
+#. Close the iSCSI Initiator Properties window. A prompt will ask whether to rescan the
iSCSI software adapter. Select Yes.
-6. In the Details pane, the LUN on the iSCSI target will be displayed. Right click
+#. In the Details pane, the LUN on the iSCSI target will be displayed. Right-click
on a device and select "Manage Paths":
-7. On the Manage Paths window, select “Most Recently Used (VMware)” for the policy
+#. On the Manage Paths window, select “Most Recently Used (VMware)” as the path
selection policy. Close the window and repeat for the other disks:
Now the disks can be used for datastores.
**Installing:**
-1. Install the iSCSI initiator and multipath tools:
+Install the iSCSI initiator and multipath tools:
::
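# yum install iscsi-initiator-utils
# yum install device-mapper-multipath

The package names above are the usual RHEL/CentOS ones and are an assumption
here; they may differ on other distributions.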
**Configuring:**
-1. Create the default ``/etc/multipath.conf`` file and enable the
+#. Create the default ``/etc/multipath.conf`` file and enable the
``multipathd`` service:
::
# mpathconf --enable --with_multipathd y
-2. Add the following to ``/etc/multipath.conf`` file:
+#. Add the following to the ``/etc/multipath.conf`` file:
::

devices {
        device {
                vendor                 "LIO-ORG"
                hardware_handler       "1 alua"
                path_grouping_policy   "failover"
                path_selector          "queue-length 0"
                failback               60
                path_checker           tur
                prio                   alua
                prio_args              exclusive_pref_bit
                fast_io_fail_tmo       25
                no_path_retry          queue
        }
}

The ``LIO-ORG`` device entry above is the typical multipath configuration for
a Ceph iSCSI gateway; the exact values are an assumption and may need tuning
for the environment.
-3. Restart the ``multipathd`` service:
+#. Restart the ``multipathd`` service:
::
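# systemctl restart multipathd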
**iSCSI Discovery and Setup:**
-1. Discover the target portals:
+#. Discover the target portals:
::
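# iscsiadm -m discovery -t st -p 192.168.56.101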
192.168.56.101:3260,1 iqn.2003-01.org.linux-iscsi.rheln1
192.168.56.102:3260,2 iqn.2003-01.org.linux-iscsi.rheln1
-2. Login to target:
+#. Log in to the target:
::
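# iscsiadm -m node -T iqn.2003-01.org.linux-iscsi.rheln1 -l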
**iSCSI Initiator, Discovery and Setup:**
-1. Install the iSCSI initiator driver and MPIO tools.
+#. Install the iSCSI initiator driver and MPIO tools.
-2. Launch the MPIO program, click on the “Discover Multi-Paths” tab select “Add
+#. Launch the MPIO program, click on the “Discover Multi-Paths” tab, and select “Add
support for iSCSI devices”.
-3. On the iSCSI Initiator Properties window, on the "Discovery" tab, add a target
+#. On the iSCSI Initiator Properties window, on the "Discovery" tab, add a target
portal. Enter the IP address or DNS name and Port of the Ceph iSCSI gateway.
-4. On the “Targets” tab, select the target and click on “Connect”.
+#. On the “Targets” tab, select the target and click on “Connect”.
-5. On the “Connect To Target” window, select the “Enable multi-path” option, and
+#. On the “Connect To Target” window, select the “Enable multi-path” option, and
click the “Advanced” button.
-6. Under the "Connet using" section, select a “Target portal IP” . Select the
+#. Under the "Connet using" section, select a “Target portal IP” . Select the
“Enable CHAP login on” and enter the "Name" and "Target secret" values from the
Ceph iSCSI Ansible client credentials section, and click OK.
-7. Repeat steps 5 and 6 for each target portal defined when setting up
+#. Repeat steps 5 and 6 for each target portal defined when setting up
the iSCSI gateway.
**Multipath IO Setup:**
::

mpclaim -s -m
MSDSM-wide Load Balance Policy: Fail Over Only
-1. Using the MPIO tool, from the “Targets” tab, click on the
+#. Using the MPIO tool, from the “Targets” tab, click on the
“Devices…” button:
-2. From the Devices window, select a disk and click the
+#. From the Devices window, select a disk and click the
“MPIO…” button:
-3. On the "Device Details" window the paths to each target portal is
+#. On the "Device Details" window the paths to each target portal is
displayed. If using the ``ceph-iscsi-ansible`` setup method, the
iSCSI gateway will use ALUA to tell the iSCSI initiator which path
and iSCSI gateway should be used as the primary path. The Load
**Installing:**
-1. As ``root``, install the ``ceph-iscsi-tools`` package on each iSCSI
+#. As ``root``, install the ``ceph-iscsi-tools`` package on each iSCSI
gateway node:
::
# yum install ceph-iscsi-tools
-2. As ``root``, install the performance co-pilot package on each iSCSI
+#. As ``root``, install the Performance Co-Pilot package on each iSCSI
gateway node:
::
# yum install pcp
-3. As ``root``, install the LIO PMDA package on each iSCSI gateway node:
+#. As ``root``, install the LIO PMDA package on each iSCSI gateway node:
::
# yum install pcp-pmda-lio
-4. As ``root``, enable and start the performance co-pilot service on
+#. As ``root``, enable and start the Performance Co-Pilot service on
each iSCSI gateway node:
::
# systemctl enable pmcd
# systemctl start pmcd
-5. As ``root``, register the ``pcp-pmda-lio`` agent:
+#. As ``root``, register the ``pcp-pmda-lio`` agent:
::
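# cd /var/lib/pcp/pmdas/lio
# ./Install

The directory path assumes the default PCP PMDA layout used by the
``pcp-pmda-lio`` package.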
**Installing:**
-1. On the Ansible installer node, which could be either the administration node
+#. On the Ansible installer node, which could be either the administration node
or a dedicated deployment node, perform the following steps:
- a. As ``root``, install the ``ceph-iscsi-ansible`` package:
+ #. As ``root``, install the ``ceph-iscsi-ansible`` package:
::
# yum install ceph-iscsi-ansible
- b. Add an entry in ``/etc/ansible/hosts`` file for the gateway group:
+ #. Add an entry in ``/etc/ansible/hosts`` file for the gateway group:
::
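[ceph-iscsi-gw]
<iscsi_gw_hostname_1>
<iscsi_gw_hostname_2>

The ``ceph-iscsi-gw`` group name follows the ``ceph-iscsi-ansible``
convention; replace the hostname placeholders with the actual gateway nodes.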
| | connection defines an ``image_list`` |
| | which is a comma separated list of |
| | the form |
-| | ``pool.rbd_image[,pool.rbd_image,…] |
-| | ``. |
+| | ``pool.rbd_image[,pool.rbd_image]``. |
| | RBD images can be added and removed |
| | from this list, to change the client |
| | masking. Note that there are no |
On the Ansible installer node, perform the following steps.
-1. As ``root``, execute the Ansible playbook:
+#. As ``root``, execute the Ansible playbook:
::
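# cd /usr/share/ceph-iscsi-ansible
# ansible-playbook ceph-iscsi-gw.yml

The playbook path and name shown assume the defaults shipped with the
``ceph-iscsi-ansible`` package.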
The Ansible playbook will handle RPM dependencies, RBD creation
and Linux IO configuration.
-2. Verify the configuration from an iSCSI gateway node:
+#. Verify the configuration from an iSCSI gateway node:
::
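# targetcli ls

Listing the LIO configuration with ``targetcli ls`` is one way to verify the
target, gateways, and disks created by the playbook; the exact output depends
on the configuration.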
Do the following steps on the Ceph iSCSI gateway node before proceeding
to the *Installing* section:
-1. If the Ceph iSCSI gateway is not colocated on an OSD node, then copy
+#. If the Ceph iSCSI gateway is not colocated on an OSD node, then copy
the Ceph configuration files, located in ``/etc/ceph/``, from a
running Ceph node in the storage cluster to the iSCSI Gateway node.
The Ceph configuration files must exist on the iSCSI gateway node
under ``/etc/ceph/``.
-2. Install and configure the `Ceph Command-line
+#. Install and configure the `Ceph Command-line
Interface <http://docs.ceph.com/docs/master/start/quick-rbd/#install-ceph>`_.
-3. If needed, open TCP ports 3260 and 5000 on the firewall.
+#. If needed, open TCP ports 3260 and 5000 on the firewall.
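For example, with ``firewall-cmd``:

::

# firewall-cmd --permanent --add-port=3260/tcp
# firewall-cmd --permanent --add-port=5000/tcp
# firewall-cmd --reload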
-4. Create a new or use an existing RADOS Block Device (RBD).
+#. Create a new RADOS Block Device (RBD), or use an existing one.
**Installing:**
-1. As ``root``, on all iSCSI gateway nodes, install the
+#. As ``root``, on all iSCSI gateway nodes, install the
``ceph-iscsi-cli`` package:
::
# yum install ceph-iscsi-cli
-2. As ``root``, on all iSCSI gateway nodes, install the ``tcmu-runner``
+#. As ``root``, on all iSCSI gateway nodes, install the ``tcmu-runner``
package:
::
# yum install tcmu-runner
-3. As ``root``, on a iSCSI gateway node, create a file named
+#. As ``root``, on an iSCSI gateway node, create a file named
``iscsi-gateway.cfg`` in the ``/etc/ceph/`` directory:
::
# touch /etc/ceph/iscsi-gateway.cfg
- a. Edit the ``iscsi-gateway.cfg`` file and add the following lines:
+ #. Edit the ``iscsi-gateway.cfg`` file and add the following lines:
::
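[config]
cluster_name = ceph
gateway_keyring = ceph.client.admin.keyring
api_secure = false
trusted_ip_list = <IP_addr_of_gw1>,<IP_addr_of_gw2>

The lines above are a minimal example: they assume the default cluster name
and the ``admin`` keyring, and ``trusted_ip_list`` must contain the IP
address of every iSCSI gateway node.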
.. IMPORTANT::
The ``iscsi-gateway.cfg`` file must be identical on all iSCSI gateway nodes.
- b. As ``root``, copy the ``iscsi-gateway.cfg`` file to all iSCSI
+ #. As ``root``, copy the ``iscsi-gateway.cfg`` file to all iSCSI
gateway nodes.
-4. As ``root``, on all iSCSI gateway nodes, enable and start the API
+#. As ``root``, on all iSCSI gateway nodes, enable and start the API
service:
::
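# systemctl daemon-reload
# systemctl enable rbd-target-api
# systemctl start rbd-target-api

``rbd-target-api`` is the API service provided by the ``ceph-iscsi-cli``
package.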
**Configuring:**
-1. As ``root``, on a iSCSI gateway node, start the iSCSI gateway
+#. As ``root``, on an iSCSI gateway node, start the iSCSI gateway
command-line interface:
::
# gwcli
-2. Creating the iSCSI gateways:
+#. Creating the iSCSI gateways:
::
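> cd /iscsi-target
> create iqn.2003-01.com.redhat.iscsi-gw:<target_name>
> cd /iscsi-target/iqn.2003-01.com.redhat.iscsi-gw:<target_name>/gateways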
> create <iscsi_gw_name> <IP_addr_of_gw>
> create <iscsi_gw_name> <IP_addr_of_gw>
-3. Adding a RADOS Block Device (RBD):
+#. Adding a RADOS Block Device (RBD):
::
> cd /iscsi-target/iqn.2003-01.com.redhat.iscsi-gw:<target_name>/disks/
>/disks/ create pool=<pool_name> image=<image_name> size=<image_size>m|g|t
-4. Creating a client:
+#. Creating a client:
::
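> cd /iscsi-target/iqn.2003-01.com.redhat.iscsi-gw:<target_name>/hosts
> create iqn.1994-05.com.redhat:<client_name>
> auth chap=<user_name>/<password>

The client IQN shown is an example; use the initiator name configured on the
client.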
CHAP must always be configured. Without CHAP, the target will
reject any login requests.
-5. Adding disks to a client:
+#. Adding disks to a client:
::
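> cd /iscsi-target/iqn.2003-01.com.redhat.iscsi-gw:<target_name>/hosts/iqn.1994-05.com.redhat:<client_name>
> disk add <pool_name>.<image_name>

Disks are referenced as ``<pool_name>.<image_name>``, matching the pool and
image used when the RBD image was created.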