From c89163904dc8d8fae7317dcaa4467643126acebb Mon Sep 17 00:00:00 2001
From: =?utf8?q?Niklas=20Hamb=C3=BCchen?=
Date: Tue, 14 Oct 2025 03:58:08 +0200
Subject: [PATCH] doc/cephfs/createfs: Recommend default data pool on SSDs for
 non-EC
MIME-Version: 1.0
Content-Type: text/plain; charset=utf8
Content-Transfer-Encoding: 8bit

Signed-off-by: Niklas Hambüchen
---
 doc/cephfs/createfs.rst | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/doc/cephfs/createfs.rst b/doc/cephfs/createfs.rst
index b61a6222784..f8cea360405 100644
--- a/doc/cephfs/createfs.rst
+++ b/doc/cephfs/createfs.rst
@@ -22,12 +22,28 @@ There are important considerations when planning these pools:
 - The data pool used to create the file system is the "default" data pool and
   the location for storing all inode backtrace information, which is used for hard
   link management and disaster recovery. For this reason, all CephFS inodes
-  have at least one object in the default data pool. If erasure-coded
+  have at least one RADOS object in the default data pool. If erasure-coded
   pools are planned for file system data, it is best to configure the default
   as a replicated pool to improve small-object write and read performance when
   updating backtraces. Separately, another erasure-coded data pool can be added
   (see also :ref:`ecpool`) that can be used on an entire hierarchy of
   directories and files (see also :ref:`file-layouts`).
+- For the same reason, even if you are not using erasure coding
+  and plan to store all or most of your file data on HDDs,
+  it is recommended to use an SSD-backed pool as the default data pool
+  and to set a file layout on the top-level directories that points to an
+  HDD-backed pool. This keeps the option open to later move small files and
+  inode backtrace objects entirely off HDDs using file layouts, without
+  re-creating the pool from scratch. Doing so reduces scrub and recovery
+  times when the file system holds many small files, because those
+  operations incur at least one HDD seek per RADOS object.
+  This optimization cannot be retrofitted in place: once a CephFS file
+  system is deployed with an HDD default data pool, that pool cannot be
+  replaced or removed without creating an entirely new file system and
+  migrating all files into it. When file layouts direct file data to an
+  HDD pool, the SSD default data pool needs only modest capacity, yet it
+  accelerates backtrace updates and sets the file system up for future
+  flexibility.
 
 Refer to :doc:`/rados/operations/pools` to learn more about managing pools.  For
 example, to create two pools with default settings for use with a file system, you
-- 
2.39.5
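For reviewers: the layout this patch recommends could be set up roughly as follows. This is a sketch only; the pool names, CRUSH rule names, file system name, and mount point are hypothetical, and it assumes a cluster with both `ssd` and `hdd` device classes.

```shell
# Sketch: names below (ssd_rule, cephfs_data_ssd, mycephfs, /mnt/mycephfs)
# are hypothetical. Assumes OSDs with both ssd and hdd device classes.

# CRUSH rules selecting devices by class.
ceph osd crush rule create-replicated ssd_rule default host ssd
ceph osd crush rule create-replicated hdd_rule default host hdd

# Replicated SSD-backed pool to serve as the default data pool,
# plus an HDD-backed pool for bulk file data.
ceph osd pool create cephfs_metadata
ceph osd pool create cephfs_data_ssd
ceph osd pool set cephfs_data_ssd crush_rule ssd_rule
ceph osd pool create cephfs_data_hdd
ceph osd pool set cephfs_data_hdd crush_rule hdd_rule

# Create the file system with the SSD pool as the default data pool,
# then attach the HDD pool as an additional data pool.
ceph fs new mycephfs cephfs_metadata cephfs_data_ssd
ceph fs add_data_pool mycephfs cephfs_data_hdd

# After mounting, point top-level directories at the HDD pool via a
# file layout; new files beneath them store their data there, while
# backtraces stay in the SSD default data pool.
setfattr -n ceph.dir.layout.pool -v cephfs_data_hdd /mnt/mycephfs/data
```

With this arrangement, the layout on `/mnt/mycephfs/data` can later be changed (for new files) without touching the default data pool, which is the flexibility the patch text describes.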