From: Jos Collin
Date: Tue, 27 Feb 2024 08:45:26 +0000 (+0530)
Subject: doc: update 'journal reset' command with --yes-i-really-really-mean-it
X-Git-Tag: v19.1.1~259^2
X-Git-Url: http://git.apps.os.sepia.ceph.com/?a=commitdiff_plain;h=f65137d9446e645f5fdcddea39cd4291aca11a12;p=ceph.git

doc: update 'journal reset' command with --yes-i-really-really-mean-it

Fixes: https://tracker.ceph.com/issues/62925
Signed-off-by: Jos Collin
(cherry picked from commit 42953ece97b1b082400443b44bab46b1118fb1f8)
---

diff --git a/PendingReleaseNotes b/PendingReleaseNotes
index 263c64ed4e025..f845608071c35 100644
--- a/PendingReleaseNotes
+++ b/PendingReleaseNotes
@@ -186,6 +186,10 @@
 * CephFS: Disallow delegating preallocated inode ranges to clients. Config
   and the new feature bit for more information.
 * cls_cxx_gather is marked as deprecated.
+* CephFS: cephfs-journal-tool is guarded against running on an online file system.
+  The 'cephfs-journal-tool --rank <fs_name>:<mds_rank> journal reset' and
+  'cephfs-journal-tool --rank <fs_name>:<mds_rank> journal reset --force'
+  commands require '--yes-i-really-really-mean-it'.
 * Dashboard: Rearranged Navigation Layout: The navigation layout has been
   reorganized for improved usability and easier access to key features.
diff --git a/doc/cephfs/cephfs-journal-tool.rst b/doc/cephfs/cephfs-journal-tool.rst
index 64a1130911828..4ad7304481f7f 100644
--- a/doc/cephfs/cephfs-journal-tool.rst
+++ b/doc/cephfs/cephfs-journal-tool.rst
@@ -15,7 +15,8 @@ examining, modifying, and extracting data from journals.
 
     This tool is **dangerous** because it directly modifies internal data
     structures of the file system. Make backups, be careful, and
-    seek expert advice. If you are unsure, do not run this tool.
+    seek expert advice. If you are unsure, do not run this tool. As a
+    precaution, cephfs-journal-tool doesn't work on an active filesystem.
 
 Syntax
 ------
diff --git a/doc/cephfs/disaster-recovery-experts.rst b/doc/cephfs/disaster-recovery-experts.rst
index 9a196c88e2340..7677b42f47e12 100644
--- a/doc/cephfs/disaster-recovery-experts.rst
+++ b/doc/cephfs/disaster-recovery-experts.rst
@@ -68,9 +68,9 @@
 truncate it like so:
 
 ::
 
-    cephfs-journal-tool [--rank=N] journal reset
+    cephfs-journal-tool [--rank=<fs_name>:{mds-rank|all}] journal reset --yes-i-really-really-mean-it
 
-Specify the MDS rank using the ``--rank`` option when the file system has/had
+Specify the filesystem and the MDS rank using the ``--rank`` option when the file system has/had
 multiple active MDS.
 
 .. warning::
@@ -135,7 +135,7 @@ objects.
 
     # InoTable
     cephfs-table-tool 0 reset inode
     # Journal
-    cephfs-journal-tool --rank=0 journal reset
+    cephfs-journal-tool --rank=<fs_name>:0 journal reset --yes-i-really-really-mean-it
     # Root inodes ("/" and MDS directory)
     cephfs-data-scan init
@@ -253,7 +253,7 @@ Next, we will create the intial metadata for the fs:
 
     cephfs-table-tool cephfs_recovery:0 reset session
     cephfs-table-tool cephfs_recovery:0 reset snap
     cephfs-table-tool cephfs_recovery:0 reset inode
-    cephfs-journal-tool --rank cephfs_recovery:0 journal reset --force
+    cephfs-journal-tool --rank cephfs_recovery:0 journal reset --force --yes-i-really-really-mean-it
 
 Now perform the recovery of the metadata pool from the data pool:
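For context, the hunks above could be exercised roughly as in the sketch below. The commands need a live Ceph cluster and a failed (offline) file system, so this script only assembles and prints the invocations rather than executing them; `FS_NAME` and `RANK` are placeholder values, not part of the patch.

```shell
# Placeholder file system name and MDS rank -- substitute your own.
FS_NAME="cephfs_recovery"
RANK=0

# The tool is now guarded against running on an online file system,
# so the file system would have to be failed first:
printf '%s\n' "ceph fs fail ${FS_NAME}"

# 'journal reset' (with or without --force) now also requires the
# extra confirmation flag added by this patch:
CMD="cephfs-journal-tool --rank ${FS_NAME}:${RANK} journal reset --force --yes-i-really-really-mean-it"
printf '%s\n' "${CMD}"
```

After the recovery steps, the file system would be brought back with something like `ceph fs set <fs_name> joinable true`, per the existing disaster-recovery documentation.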