#. From vSphere, open the Storage Adapters on the Configuration tab. Right click
on the iSCSI Software Adapter and select Properties.
+#. In the General tab, click the "Advanced" button and, in the "Advanced Settings",
+ set RecoveryTimeout to 25.
+
#. If CHAP was set up on the iSCSI gateway, in the General tab click the "CHAP…"
button. If CHAP is not being used, skip to step 4.
iSCSI software adapter. Select Yes.
#. In the Details pane, the LUN on the iSCSI target will be displayed. Right click
- on a device and select "Manage Paths":
+ on a device and select "Manage Paths".
#. On the Manage Paths window, select “Most Recently Used (VMware)” as the path
- selection policy. Close and repeat for the other disks:
+ selection policy. Close and repeat for the other disks.
Now the disks can be used for datastores.
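
If shell access to the ESXi host is enabled, the resulting devices and their path
selection policy can also be reviewed from the command line. A minimal sketch,
assuming SSH access to the host (the exact ``esxcli`` namespaces can vary between
ESXi releases)::

    esxcli storage nmp device list
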
path_checker tur
prio alua
prio_args exclusive_pref_bit
- no_path_retry 120
+ fast_io_fail_tmo 25
+ no_path_retry queue
}
}
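
After editing ``/etc/multipath.conf``, the multipath daemon must pick up the new
settings before they take effect. A minimal sketch, assuming a systemd based
distribution such as RHEL/CentOS 7 (other init systems will differ)::

    # systemctl restart multipathd
    # multipath -ll
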
retry options are set using PowerShell with the ``mpclaim`` command. The
rest is done in the MPIO tool.
-.. NOTE::
+.. note::
It is recommended to increase the ``PDORemovePeriod`` option to 120
seconds from PowerShell. This value might need to be adjusted based
on the application. When all paths are down, and 120 seconds
MSDSM-wide Load Balance Policy: Fail Over Only
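
Both of these settings can be applied from an elevated prompt. A minimal sketch,
assuming the MPIO feature and its PowerShell module are installed (policy ``1``
corresponds to "Fail Over Only")::

    # raise PDORemovePeriod to 120 seconds
    Set-MPIOSetting -NewPDORemovePeriod 120

    # set and confirm the MSDSM-wide load balance policy (1 = Fail Over Only)
    mpclaim -l -m 1
    mpclaim -s -m
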
#. Using the MPIO tool, from the “Targets” tab, click on the
- “Devices…” button:
+ “Devices...” button.
#. From the Devices window, select a disk and click the
- “MPIO…” button:
+ “MPIO...” button.
#. On the "Device Details" window the paths to each target portal is
- displayed. If using the ``ceph-iscsi-ansible`` setup method, the
+ displayed. If using the ``ceph-ansible`` setup method, the
iSCSI gateway will use ALUA to tell the iSCSI initiator which path
and iSCSI gateway should be used as the primary path. The Load
Balancing Policy “Fail Over Only” must be selected.
mpclaim -s -d $MPIO_DISK_ID
-.. NOTE::
- For the ``ceph-iscsi-ansible`` setup method, there will be one
+.. note::
+ For the ``ceph-ansible`` setup method, there will be one
Active/Optimized path which is the path to the iSCSI gateway node
that owns the LUN, and there will be an Active/Unoptimized path for
each other iSCSI gateway node.
::
- DiskTimeout = 25
+ TimeOutValue = 65
- Microsoft iSCSI Initiator Driver
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\<Instance_Number>\Parameters
::
-
- SRBTimeoutDelta = 5
+ LinkDownTime = 25
+ SRBTimeoutDelta = 15
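
These registry values can be set with ``regedit`` or from an elevated command
prompt. A minimal sketch for the initiator driver parameters above, assuming
``<Instance_Number>`` resolves to ``0000`` on the system in question::

    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\0000\Parameters" /v LinkDownTime /t REG_DWORD /d 25 /f
    reg add "HKLM\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\0000\Parameters" /v SRBTimeoutDelta /t REG_DWORD /d 15 /f
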
clients, such as Microsoft Windows, to access the Ceph Storage cluster.
Each iSCSI gateway runs the Linux IO target kernel subsystem (LIO) to provide the
-iSCSI protocol support and utilizes the Ceph’s RBD kernel module to expose RBD
-images to iSCSI clients. With Ceph’s iSCSI gateway you can effectively run a fully
-integrated block-storage infrastructure with all the features and benefits of a
-conventional Storage Area Network (SAN).
+iSCSI protocol support. LIO utilizes a userspace passthrough (TCMU) to interact
+with Ceph's librbd library and expose RBD images to iSCSI clients. With Ceph’s
+iSCSI gateway you can effectively run a fully integrated block-storage
+infrastructure with all the features and benefits of a conventional Storage Area
+Network (SAN).
.. ditaa::
Cluster Network
For hardware recommendations, see the `Hardware Recommendation page <http://docs.ceph.com/docs/master/start/hardware-recommendations/>`_
for more details.
-.. WARNING::
+.. note::
On the iSCSI gateway nodes, the memory footprint of the RBD images
can grow to a large size. Plan memory requirements accordingly based
- on the number of RBD images mapped. Each RBD image roughly uses 90 MB
- of RAM.
+ on the number of RBD images mapped.
There are no specific iSCSI gateway options for the Ceph Monitors or
-OSDs, but changing the ``osd_client_watch_timeout`` value to ``15`` is
-required for each OSD node in the storage cluster.
+OSDs, but it is important to lower the default timers for detecting
+down OSDs to reduce the possibility of initiator timeouts. The following
+configuration options are suggested for each OSD node in the storage
+cluster::
+
+ [osd]
+ osd heartbeat grace = 20
+ osd heartbeat interval = 5
- Online Updating Using the Ceph Monitor
::
- # ceph tell osd.0 injectargs '--osd_client_watch_timeout 15'
+ ceph tell osd.0 injectargs '--osd_heartbeat_grace 20'
+ ceph tell osd.0 injectargs '--osd_heartbeat_interval 5'
- Online Updating on the OSD Node
::
- # ceph daemon osd.0 config set osd_client_watch_timeout 15
-
- .. NOTE::
- Update the Ceph configuration file and copy it to all nodes in the
- Ceph storage cluster. For example, the default configuration file is
- ``/etc/ceph/ceph.conf``. Add the following lines to the Ceph
- configuration file:
-
- ::
-
- [osd]
- osd_client_watch_timeout = 15
+ ceph daemon osd.0 config set osd_heartbeat_grace 20
+ ceph daemon osd.0 config set osd_heartbeat_interval 5
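
The running values can then be verified on the OSD node through the admin
socket, for example::

    ceph daemon osd.0 config get osd_heartbeat_grace
    ceph daemon osd.0 config get osd_heartbeat_interval
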
For more details on setting Ceph's configuration options, see the `Configuration page <http://docs.ceph.com/docs/master/rados/configuration/>`_.
- A running Ceph Luminous (12.2.x) cluster or newer
-- The ``ceph-iscsi-config`` package installed on all the iSCSI gateway nodes
+- RHEL/CentOS 7.4; or Linux kernel v4.14 or newer
-.. NOTE::
- The ``device-mapper-multipath-0.4.9-99`` or newer package is only required for
- older kernel RBD based implementations. This package is not required for newer
- ``librbd`` based implementations.
+- The ``ceph-iscsi-config`` package installed on all the iSCSI gateway nodes
**Installing:**
#. On the Ansible installer node, which could be either the administration node
or a dedicated deployment node, perform the following steps:
- #. As ``root``, install the ``ceph-iscsi-ansible`` package:
+ #. As ``root``, install the ``ceph-ansible`` package:
::
- # yum install ceph-iscsi-ansible
+ # yum install ceph-ansible
#. Add an entry in ``/etc/ansible/hosts`` file for the gateway group:
ceph-igw-1
ceph-igw-2
-.. NOTE::
+.. note::
If co-locating the iSCSI gateway with an OSD node, then add the OSD node to the
``[ceph-iscsi-gw]`` section.
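
Putting this together, a minimal ``/etc/ansible/hosts`` entry could look like
the following (the host names are examples only)::

    [ceph-iscsi-gw]
    ceph-igw-1
    ceph-igw-2
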
**Configuring:**
-The ``ceph-iscsi-ansible`` package places a file in the ``/usr/share/ceph-ansible/group_vars/``
+The ``ceph-ansible`` package places a file in the ``/usr/share/ceph-ansible/group_vars/``
directory called ``ceph-iscsi-gw.sample``. Create a copy of this sample file named
``ceph-iscsi-gw.yml``. Review the following Ansible variables and descriptions,
and update accordingly.
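
For example, assuming the default installation paths::

    # cd /usr/share/ceph-ansible/group_vars
    # cp ceph-iscsi-gw.sample ceph-iscsi-gw.yml
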
| | across client connections. |
+--------------------------------------+--------------------------------------+
-.. NOTE::
+.. note::
When using the ``gateway_iqn`` variable, and for Red Hat Enterprise Linux
clients, installing the ``iscsi-initiator-utils`` package is required for
retrieving the gateway’s IQN name. The iSCSI initiator name is located in the
# cd /usr/share/ceph-ansible
# ansible-playbook ceph-iscsi-gw.yml
- .. NOTE::
+ .. note::
The Ansible playbook will handle RPM dependencies, RBD creation
and Linux IO configuration.
# gwcli ls
- .. NOTE::
+ .. note::
For more information on using the ``gwcli`` command to install and configure
a Ceph iSCSI gateway, see the `Configuring the iSCSI Target using the Command Line Interface`_
section.
- .. IMPORTANT::
+ .. important::
Attempting to use the ``targetcli`` tool to change the configuration will
result in issues such as ALUA misconfiguration and path failover
problems. There is the potential to corrupt data, to have mismatched
there are a number of operational workflows that the Ansible playbook
supports.
-.. WARNING::
+.. warning::
Before removing RBD images from the iSCSI gateway configuration,
follow the standard procedures for removing a storage device from
the operating system.
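
On a Linux initiator, for example, this typically means unmounting any filesystems
on the device and flushing the corresponding multipath map before the image is
removed from the gateway. A minimal sketch, with example mount point and device
names::

    # umount /mnt/iscsi-lun
    # multipath -f mpatha
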
**Removing the Configuration:**
-The ``ceph-iscsi-ansible`` package provides an Ansible playbook to
+The ``ceph-ansible`` package provides an Ansible playbook to
remove the iSCSI gateway configuration and related RBD images. The
Ansible playbook is ``/usr/share/ceph-ansible/purge_gateways.yml``. When
this Ansible playbook is run, a prompt will ask for the type of purge to
environment, other unrelated RBD images will not be removed. Ensure the
correct mode is chosen; this operation will delete data.
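
A minimal invocation, run from the same directory as the other playbooks::

    # cd /usr/share/ceph-ansible
    # ansible-playbook purge_gateways.yml
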
-.. WARNING::
+.. warning::
A purge operation is a destructive action against your iSCSI gateway
environment.
-.. WARNING::
+.. warning::
A purge operation will fail if RBD images have snapshots or clones
and are exported through the Ceph iSCSI gateway.
- A running Ceph Luminous or later storage cluster
-- CentOS 7.4, Linux kernel v4.12 or newer
+- RHEL/CentOS 7.4; or Linux kernel v4.14 or newer
- The following packages must be installed from your Linux distribution's software repository:
- ``python-rtslib-2.1.fb64`` or newer package
- - ``tcmu-runner-1.2.1`` or newer package
+ - ``tcmu-runner-1.3.0`` or newer package
- .. IMPORTANT::
+ - ``ceph-iscsi-config-2.3`` or newer package
+
+ - ``ceph-iscsi-cli-2.5`` or newer package
+
+ .. important::
If previous versions of these packages exist, then they must
be removed first before installing the newer versions.
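
A sketch of checking for and removing older builds before installing the newer
versions (exact package names can differ between distributions)::

    # rpm -qa | egrep 'tcmu-runner|ceph-iscsi|rtslib'
    # yum remove tcmu-runner ceph-iscsi-config ceph-iscsi-cli
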
#. Edit the ``iscsi-gateway.cfg`` file and add the following lines:
- ::
-
- [config]
- cluster_name = <ceph_cluster_name>
- gateway_keyring = <ceph_client_keyring>
- api_secure = false
-
::
[config]
# api_port = 5001
# trusted_ip_list = 192.168.0.10,192.168.0.11
- .. IMPORTANT::
+ .. important::
The ``iscsi-gateway.cfg`` file must be identical on all iSCSI gateway nodes.
#. As ``root``, copy the ``iscsi-gateway.cfg`` file to all iSCSI
> auth chap=<user_name>/<password> | nochap
- .. WARNING::
+ .. warning::
CHAP must always be configured. Without CHAP, the target will
reject any login requests.
block-level access is expanding to offer standard iSCSI support allowing
wider platform usage, and potentially opening new use cases.
-- CentOS 7.4 or newer
+- RHEL/CentOS 7.4; or Linux kernel v4.14 or newer
- A working Ceph Storage cluster, deployed with ``ceph-ansible`` or using the command-line interface