From: Jason Dillaman
Date: Tue, 8 Aug 2017 23:14:57 +0000 (-0400)
Subject: doc: tweaks for the RBD iSCSI configuration documentation
X-Git-Tag: v13.0.1~1104^2
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=refs%2Fpull%2F17376%2Fhead;p=ceph.git

doc: tweaks for the RBD iSCSI configuration documentation

Signed-off-by: Jason Dillaman
---

diff --git a/doc/images/win2016_mpclaim_output.png b/doc/images/win2016_mpclaim_output.png
old mode 100755
new mode 100644
index e3ff0c83cfe..73e1e5ed2ea
Binary files a/doc/images/win2016_mpclaim_output.png and b/doc/images/win2016_mpclaim_output.png differ
diff --git a/doc/rbd/iscsi-initiator-esx.rst b/doc/rbd/iscsi-initiator-esx.rst
index 2a0371e4d8c..18dd5832289 100644
--- a/doc/rbd/iscsi-initiator-esx.rst
+++ b/doc/rbd/iscsi-initiator-esx.rst
@@ -11,6 +11,9 @@ The iSCSI Initiator for VMware ESX
 #. From vSphere, open the Storage Adapters, on the Configuration tab. Right
    click on the iSCSI Software Adapter and select Properties.
 
+#. In the General tab click the "Advanced" button and in the "Advanced Settings"
+   set RecoveryTimeout to 25.
+
 #. If CHAP was setup on the iSCSI gateway, in the General tab click the
    "CHAP…" button. If CHAP is not being used, skip to step 4.
 
@@ -25,9 +28,9 @@ The iSCSI Initiator for VMware ESX
    iSCSI software adapter. Select Yes.
 
 #. In the Details pane, the LUN on the iSCSI target will be displayed. Right click
-   on a device and select "Manage Paths":
+   on a device and select "Manage Paths".
 
 #. On the Manage Paths window, select “Most Recently Used (VMware)” for the policy
-   path selection. Close and repeat for the other disks:
+   path selection. Close and repeat for the other disks.
 
 Now the disks can be used for datastores.
diff --git a/doc/rbd/iscsi-initiator-rhel.rst b/doc/rbd/iscsi-initiator-rhel.rst
index 27f0b08781a..51248e46f79 100644
--- a/doc/rbd/iscsi-initiator-rhel.rst
+++ b/doc/rbd/iscsi-initiator-rhel.rst
@@ -42,7 +42,8 @@ Install the iSCSI initiator and multipath tools:
                 path_checker           tur
                 prio                   alua
                 prio_args              exclusive_pref_bit
-                no_path_retry          120
+                fast_io_fail_tmo       25
+                no_path_retry          queue
         }
 }
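For orientation, the hunk above shows only the tail of the ``device`` stanza being
changed. A complete stanza consistent with these settings might look like the sketch
below; the ``path_checker``, ``prio``, ``prio_args``, ``fast_io_fail_tmo`` and
``no_path_retry`` values come from the patch context, while the ``LIO-ORG`` vendor
string and the remaining attributes are assumptions about a typical Ceph iSCSI
gateway setup, not part of this commit:

    devices {
            device {
                    # "LIO-ORG" is the vendor string normally reported by the
                    # LIO target on the gateway nodes (assumed here).
                    vendor                 "LIO-ORG"
                    hardware_handler       "1 alua"
                    path_grouping_policy   "failover"
                    path_selector          "queue-length 0"
                    failback               60
                    path_checker           tur
                    prio                   alua
                    prio_args              exclusive_pref_bit
                    fast_io_fail_tmo       25
                    no_path_retry          queue
            }
    }

With ``no_path_retry queue`` the initiator keeps queueing I/O while all paths are
down instead of failing it after a fixed number of retries, which is why the patch
drops the old value of ``120``.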
diff --git a/doc/rbd/iscsi-initiator-win.rst b/doc/rbd/iscsi-initiator-win.rst
index 0f63ffe889e..08a1cfbde6f 100644
--- a/doc/rbd/iscsi-initiator-win.rst
+++ b/doc/rbd/iscsi-initiator-win.rst
@@ -34,7 +34,7 @@ Configuring the MPIO load balancing policy, setting the timeout and
 retry options are using PowerShell with the ``mpclaim`` command. The
 reset is done in the MPIO tool.
 
-.. NOTE::
+.. note::
     It is recommended to increase the ``PDORemovePeriod`` option to 120
     seconds from PowerShell. This value might need to be adjusted based
     on the application. When all paths are down, and 120 seconds
@@ -54,13 +54,13 @@ reset is done in the MPIO tool.
     MSDSM-wide Load Balance Policy: Fail Over Only
 
 #. Using the MPIO tool, from the “Targets” tab, click on the
-   “Devices…” button:
+   “Devices...” button.
 
 #. From the Devices window, select a disk and click the
-   “MPIO…” button:
+   “MPIO...” button.
 
 #. On the "Device Details" window the paths to each target portal is
-   displayed. If using the ``ceph-iscsi-ansible`` setup method, the
+   displayed. If using the ``ceph-ansible`` setup method, the
    iSCSI gateway will use ALUA to tell the iSCSI initiator which path
    and iSCSI gateway should be used as the primary path. The Load
    Balancing Policy “Fail Over Only” must be selected
@@ -69,8 +69,8 @@ reset is done in the MPIO tool.
 
     mpclaim -s -d $MPIO_DISK_ID
 
-.. NOTE::
-    For the ``ceph-iscsi-ansible`` setup method, there will be one
+.. note::
+    For the ``ceph-ansible`` setup method, there will be one
     Active/Optimized path which is the path to the iSCSI gateway node
     that owns the LUN, and there will be an Active/Unoptimized path for
     each other iSCSI gateway node.
@@ -87,7 +87,7 @@ Consider using the following registry settings:
 
     ::
 
-        DiskTimeout = 25
+        TimeOutValue = 65
 
 - Microsoft iSCSI Initiator Driver
 
@@ -96,5 +96,5 @@ Consider using the following registry settings:
     HKEY_LOCAL_MACHINE\\SYSTEM\CurrentControlSet\Control\Class\{4D36E97B-E325-11CE-BFC1-08002BE10318}\\Parameters
 
     ::
-
-        SRBTimeoutDelta = 5
+        LinkDownTime = 25
+        SRBTimeoutDelta = 15
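The Windows notes above raise ``PDORemovePeriod`` from PowerShell and apply the
"Fail Over Only" policy with ``mpclaim``. A minimal sketch of those two steps is
shown below; the cmdlet name and the policy number are assumptions based on the
stock Windows MPIO tooling rather than commands taken from this patch:

    # Raise the PDO removal period to 120 seconds (MPIO PowerShell module).
    Set-MPIOSetting -NewPDORemovePeriod 120

    # Set the MSDSM-wide load balance policy to Fail Over Only (policy 1),
    # then display it, matching the sample output above.
    mpclaim -l -m 1
    mpclaim -s -m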
diff --git a/doc/rbd/iscsi-overview.rst b/doc/rbd/iscsi-overview.rst
index 63b7f7c2187..a8c64e2069d 100644
--- a/doc/rbd/iscsi-overview.rst
+++ b/doc/rbd/iscsi-overview.rst
@@ -9,10 +9,11 @@ to SCSI storage devices (targets) over a TCP/IP network. This allows for heterog
 clients, such as Microsoft Windows, to access the Ceph Storage cluster.
 
 Each iSCSI gateway runs the Linux IO target kernel subsystem (LIO) to provide the
-iSCSI protocol support and utilizes the Ceph’s RBD kernel module to expose RBD
-images to iSCSI clients. With Ceph’s iSCSI gateway you can effectively run a fully
-integrated block-storage infrastructure with all the features and benefits of a
-conventional Storage Area Network (SAN).
+iSCSI protocol support. LIO utilizes a userspace passthrough (TCMU) to interact
+with Ceph's librbd library and expose RBD images to iSCSI clients. With Ceph’s
+iSCSI gateway you can effectively run a fully integrated block-storage
+infrastructure with all the features and benefits of a conventional Storage Area
+Network (SAN).
 
 .. ditaa::
               Cluster Network
diff --git a/doc/rbd/iscsi-requirements.rst b/doc/rbd/iscsi-requirements.rst
index a3c0dd9023f..1ae19e06001 100644
--- a/doc/rbd/iscsi-requirements.rst
+++ b/doc/rbd/iscsi-requirements.rst
@@ -9,15 +9,20 @@ solution. For hardware recommendations, see the `Hardware Recommendation page
 `_ for more details.
 
-.. WARNING::
+.. note::
     On the iSCSI gateway nodes, the memory footprint of the RBD images
     can grow to a large size. Plan memory requirements accordingly based
-    off the number RBD images mapped. Each RBD image roughly uses 90 MB
-    of RAM.
+    off the number of RBD images mapped.
 
 There are no specific iSCSI gateway options for the Ceph Monitors or
-OSDs, but changing the ``osd_client_watch_timeout`` value to ``15`` is
-required for each OSD node in the storage cluster.
+OSDs, but it is important to lower the default timers for detecting
+down OSDs to reduce the possibility of initiator timeouts. The following
+configuration options are suggested for each OSD node in the storage
+cluster::
+
+    [osd]
+    osd heartbeat grace = 20
+    osd heartbeat interval = 5
 
 - Online Updating Using the Ceph Monitor
 
@@ -27,7 +32,8 @@ required for each OSD node in the storage cluster.
 
     ::
 
-        # ceph tell osd.0 injectargs '--osd_client_watch_timeout 15'
+        ceph tell osd.0 injectargs '--osd_heartbeat_grace 20'
+        ceph tell osd.0 injectargs '--osd_heartbeat_interval 5'
 
 - Online Updating on the OSD Node
 
@@ -37,17 +43,7 @@ required for each OSD node in the storage cluster.
 
     ::
 
-        # ceph daemon osd.0 config set osd_client_watch_timeout 15
-
-    .. NOTE::
-      Update the Ceph configuration file and copy it to all nodes in the
-      Ceph storage cluster. For example, the default configuration file is
-      ``/etc/ceph/ceph.conf``. Add the following lines to the Ceph
-      configuration file:
-
-      ::
-
-          [osd]
-          osd_client_watch_timeout = 15
+        ceph daemon osd.0 config set osd_heartbeat_grace 20
+        ceph daemon osd.0 config set osd_heartbeat_interval 5
 
 For more details on setting Ceph's configuration options, see the
 `Configuration page `_.
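If every OSD should pick up the new heartbeat values in one step, the same
injection can usually be broadcast and then read back through the admin socket;
this is a hedged sketch using standard ``ceph`` commands, not text from the patch:

    # Inject both settings into all running OSDs at once.
    ceph tell osd.* injectargs '--osd_heartbeat_grace 20 --osd_heartbeat_interval 5'

    # Confirm the values on one OSD via its admin socket.
    ceph daemon osd.0 config get osd_heartbeat_grace
    ceph daemon osd.0 config get osd_heartbeat_interval

As the removed note pointed out, the values should also be added to the ``[osd]``
section of ``/etc/ceph/ceph.conf`` on each node so they survive a daemon restart.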
diff --git a/doc/rbd/iscsi-target-ansible.rst b/doc/rbd/iscsi-target-ansible.rst
index 9a585f73bc8..4169a9f4244 100644
--- a/doc/rbd/iscsi-target-ansible.rst
+++ b/doc/rbd/iscsi-target-ansible.rst
@@ -11,23 +11,20 @@ install, and configure the Ceph iSCSI gateway for basic operation.
 
 - A running Ceph Luminous (12.2.x) cluster or newer
 
-- The ``ceph-iscsi-config`` package installed on all the iSCSI gateway nodes
+- RHEL/CentOS 7.4; or Linux kernel v4.14 or newer
 
-.. NOTE::
-    The ``device-mapper-multipath-0.4.9-99`` or newer package is only required for
-    older kernel RBD based implementations. This package is not required for newer
-    ``librbd`` based implementations.
+- The ``ceph-iscsi-config`` package installed on all the iSCSI gateway nodes
 
 **Installing:**
 
 #. On the Ansible installer node, which could be either the administration node
    or a dedicated deployment node, perform the following steps:
 
-   #. As ``root``, install the ``ceph-iscsi-ansible`` package:
+   #. As ``root``, install the ``ceph-ansible`` package:
 
       ::
 
-          # yum install ceph-iscsi-ansible
+          # yum install ceph-ansible
 
    #. Add an entry in ``/etc/ansible/hosts`` file for the gateway group:
 
      ::
 
          ceph-igw-1
          ceph-igw-2
 
-.. NOTE::
+.. note::
    If co-locating the iSCSI gateway with an OSD node, then add the OSD node to the
    ``[ceph-iscsi-gw]`` section.
 
 **Configuring:**
 
-The ``ceph-iscsi-ansible`` package places a file in the ``/usr/share/ceph-ansible/group_vars/``
+The ``ceph-ansible`` package places a file in the ``/usr/share/ceph-ansible/group_vars/``
 directory called ``ceph-iscsi-gw.sample``. Create a copy of this sample file named
 ``ceph-iscsi-gw.yml``. Review the following Ansible variables and descriptions,
 and update accordingly.
 
@@ -134,7 +131,7 @@ and update accordingly.
 |                                      | across client connections.           |
 +--------------------------------------+--------------------------------------+
 
-.. NOTE::
+.. note::
     When using the ``gateway_iqn`` variable, and for Red Hat Enterprise Linux
     clients, installing the ``iscsi-initiator-utils`` package is required for
     retrieving the gateway’s IQN name. The iSCSI initiator name is located in the
@@ -151,7 +148,7 @@ On the Ansible installer node, perform the following steps.
 
        # cd /usr/share/ceph-ansible
        # ansible-playbook ceph-iscsi-gw.yml
 
-   .. NOTE::
+   .. note::
       The Ansible playbook will handle RPM dependencies, RBD creation and Linux IO
       configuration.
 
@@ -161,12 +158,12 @@ On the Ansible installer node, perform the following steps.
 
       # gwcli ls
 
-   .. NOTE::
+   .. note::
      For more information on using the ``gwcli`` command to install and configure
      a Ceph iSCSI gateway, see the `Configuring the iSCSI Target using the
      Command Line Interface`_ section.
 
-   .. IMPORTANT::
+   .. important::
      Attempting to use the ``targetcli`` tool to change the configuration will
      result in the following issues, such as ALUA misconfiguration and path failover
      problems. There is the potential to corrupt data, to have mismatched
 
@@ -207,7 +204,7 @@ Within the ``/usr/share/ceph-ansible/group_vars/ceph-iscsi-gw`` file there are
 a number of operational workflows that the Ansible playbook
 supports.
 
-.. WARNING::
+.. warning::
     Before removing RBD images from the iSCSI gateway configuration,
     follow the standard procedures for removing a storage device from
     the operating system.
 
@@ -271,7 +268,7 @@ change across the iSCSI gateway nodes.
 
 **Removing the Configuration:**
 
-The ``ceph-iscsi-ansible`` package provides an Ansible playbook to
+The ``ceph-ansible`` package provides an Ansible playbook to
 remove the iSCSI gateway configuration and related RBD images. The
 Ansible playbook is ``/usr/share/ceph-ansible/purge_gateways.yml``. When
 this Ansible playbook is ran a prompted for the type of purge to
 
@@ -290,11 +287,11 @@ When ``all`` is chosen, the LIO configuration is removed together with
 environment, other unrelated RBD images will not be removed. Ensure the
 correct mode is chosen, this operation will delete data.
 
-.. WARNING::
+.. warning::
     A purge operation is destructive action against your iSCSI gateway
     environment.
 
-.. WARNING::
+.. warning::
     A purge operation will fail, if RBD images have snapshots or clones
     and are exported through the Ceph iSCSI gateway.
diff --git a/doc/rbd/iscsi-target-cli.rst b/doc/rbd/iscsi-target-cli.rst
index 5a542596a17..6da6f10e2ef 100644
--- a/doc/rbd/iscsi-target-cli.rst
+++ b/doc/rbd/iscsi-target-cli.rst
@@ -11,7 +11,7 @@ install, and configure the Ceph iSCSI gateway for basic operation.
 
 - A running Ceph Luminous or later storage cluster
 
-- CentOS 7.4, Linux kernel v4.12 or newer
+- RHEL/CentOS 7.4; or Linux kernel v4.14 or newer
 
 - The following packages must be installed from your Linux distribution's
   software repository:
@@ -19,9 +19,13 @@ install, and configure the Ceph iSCSI gateway for basic operation.
 
   - ``python-rtslib-2.1.fb64`` or newer package
 
-  - ``tcmu-runner-1.2.1`` or newer package
+  - ``tcmu-runner-1.3.0`` or newer package
 
-  .. IMPORTANT::
+  - ``ceph-iscsi-config-2.3`` or newer package
+
+  - ``ceph-iscsi-cli-2.5`` or newer package
+
+  .. important::
      If previous versions of these packages exist, then they must be removed
      first before installing the newer versions.
 
@@ -66,13 +70,6 @@ to the *Installing* section:
 
 #. Edit the ``iscsi-gateway.cfg`` file and add the following lines:
 
-    ::
-
-    [config]
-    cluster_name = 
-    gateway_keyring = 
-    api_secure = false
-
     ::
 
         [config]
@@ -104,7 +101,7 @@ to the *Installing* section:
     # api_port = 5001
     # trusted_ip_list = 192.168.0.10,192.168.0.11
 
-    .. IMPORTANT::
+    .. important::
        The ``iscsi-gateway.cfg`` file must be identical on all iSCSI gateway nodes.
 
 #. As ``root``, copy the ``iscsi-gateway.cfg`` file to all iSCSI
@@ -152,7 +149,7 @@ to the *Installing* section:
 
     > auth chap=/ | nochap
 
-    .. WARNING::
+    .. warning::
        CHAP must always be configured. Without CHAP, the target will
        reject any login requests.
diff --git a/doc/rbd/iscsi-targets.rst b/doc/rbd/iscsi-targets.rst
index 498dcac1c7d..b7dcac79f06 100644
--- a/doc/rbd/iscsi-targets.rst
+++ b/doc/rbd/iscsi-targets.rst
@@ -8,7 +8,7 @@ within OpenStack environments. Starting with the Ceph Luminous release,
 block-level access is expanding to offer standard iSCSI support allowing wider
 platform usage, and potentially opening new use cases.
 
-- CentOS 7.4 or newer
+- RHEL/CentOS 7.4; or Linux kernel v4.14 or newer
 
 - A working Ceph Storage cluster, deployed with ``ceph-ansible`` or using the
   command-line interface
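For readers following the command-line setup in ``iscsi-target-cli.rst`` above, a
filled-in ``iscsi-gateway.cfg`` might look like the sketch below. Every value is
illustrative: the cluster name and keyring are assumptions, and the trusted IP list
simply mirrors the commented sample shown in the file's context, so adjust all of
them for the actual deployment:

    [config]
    cluster_name = ceph
    gateway_keyring = ceph.client.admin.keyring
    api_secure = false
    # Gateway node IPs allowed to reach the REST API, as in the commented
    # trusted_ip_list example above.
    trusted_ip_list = 192.168.0.10,192.168.0.11

The file must then be copied to every iSCSI gateway node so that it stays identical
everywhere, as the ``important`` note in the same section requires.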