From f240bfd5736366f480d158aabe4c08c361df12a2 Mon Sep 17 00:00:00 2001
From: Patrick Donnelly
Date: Fri, 7 Jul 2023 08:42:58 -0400
Subject: [PATCH] doc/cephfs: add note to isolate metadata pool osds

Signed-off-by: Patrick Donnelly
(cherry picked from commit 4e2e61f16438dcd5eb35854091114d18b6fe9a9e)
---
 doc/cephfs/createfs.rst            | 4 ++++
 doc/rados/operations/crush-map.rst | 2 ++
 2 files changed, 6 insertions(+)

diff --git a/doc/cephfs/createfs.rst b/doc/cephfs/createfs.rst
index 59706d1d2dc..4a282e562fe 100644
--- a/doc/cephfs/createfs.rst
+++ b/doc/cephfs/createfs.rst
@@ -15,6 +15,10 @@ There are important considerations when planning these pools:
 - We recommend the fastest feasible low-latency storage devices (NVMe, Optane,
   or at the very least SAS/SATA SSD) for the metadata pool, as this will
   directly affect the latency of client file system operations.
+- We strongly suggest that the CephFS metadata pool be provisioned on dedicated
+  SSD / NVMe OSDs. This ensures that high client workload does not adversely
+  impact metadata operations. See :ref:`device_classes` to configure pools this
+  way.
 - The data pool used to create the file system is the "default" data pool and
   the location for storing all inode backtrace information, which is used for
   hard link management and disaster recovery. For this reason, all CephFS inodes
diff --git a/doc/rados/operations/crush-map.rst b/doc/rados/operations/crush-map.rst
index ac4f1cb12a1..54ad63130cb 100644
--- a/doc/rados/operations/crush-map.rst
+++ b/doc/rados/operations/crush-map.rst
@@ -221,6 +221,8 @@ To view the contents of the rules, run the following command:
 
    ceph osd crush rule dump
 
+.. _device_classes:
+
 Device classes
 --------------
 
-- 
2.39.5
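
For context on the :ref:`device_classes` reference added above, here is a minimal
sketch of how a metadata pool can be pinned to SSD-class OSDs with a device-class
CRUSH rule. The rule name "ssd_rule" and pool name "cephfs_metadata" are
illustrative placeholders and are not part of this patch:

    # Create a replicated CRUSH rule that selects only OSDs with device class "ssd",
    # rooted at "default" with "host" as the failure domain.
    ceph osd crush rule create-replicated ssd_rule default host ssd

    # Assign the rule to the (hypothetical) CephFS metadata pool so its PGs are
    # placed only on SSD/NVMe OSDs.
    ceph osd pool set cephfs_metadata crush_rule ssd_rule

Data for the pool is rebalanced onto the matching OSDs once the new rule is applied.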