From ba19cb00ecb68455eafed03c8b188eb8fc7e99d7 Mon Sep 17 00:00:00 2001
From: Jos Collin
Date: Tue, 27 Feb 2024 14:15:26 +0530
Subject: [PATCH] doc: update 'journal reset' command with
 --yes-i-really-really-mean-it

Fixes: https://tracker.ceph.com/issues/62925
Signed-off-by: Jos Collin
(cherry picked from commit 42953ece97b1b082400443b44bab46b1118fb1f8)
---
 PendingReleaseNotes                      | 5 +++++
 doc/cephfs/cephfs-journal-tool.rst       | 3 ++-
 doc/cephfs/disaster-recovery-experts.rst | 8 ++++----
 3 files changed, 11 insertions(+), 5 deletions(-)

diff --git a/PendingReleaseNotes b/PendingReleaseNotes
index 3335b16793d4..33101674c565 100644
--- a/PendingReleaseNotes
+++ b/PendingReleaseNotes
@@ -133,6 +133,11 @@
   cluster, MDS_CLIENTS_BROKEN_ROOTSQUASH. See the documentation on this
   warning and the new feature bit for more information.
+* cls_cxx_gather is marked as deprecated.
+* CephFS: cephfs-journal-tool is guarded against running on an online file system.
+  The 'cephfs-journal-tool --rank <fs_name>:<mds_rank> journal reset' and
+  'cephfs-journal-tool --rank <fs_name>:<mds_rank> journal reset --force'
+  commands require '--yes-i-really-really-mean-it'.
 * CephFS: Command "ceph mds fail" and "ceph fs fail" now requires a
   confirmation flag when some MDSs exhibit health warning MDS_TRIM or

diff --git a/doc/cephfs/cephfs-journal-tool.rst b/doc/cephfs/cephfs-journal-tool.rst
index 64a113091182..4ad7304481f7 100644
--- a/doc/cephfs/cephfs-journal-tool.rst
+++ b/doc/cephfs/cephfs-journal-tool.rst
@@ -15,7 +15,8 @@ examining, modifying, and extracting data from journals.
 
     This tool is **dangerous** because it directly modifies internal data
     structures of the file system. Make backups, be careful, and
-    seek expert advice. If you are unsure, do not run this tool.
+    seek expert advice. If you are unsure, do not run this tool. As a
+    precaution, cephfs-journal-tool doesn't work on an active filesystem.
 
 Syntax
 ------

diff --git a/doc/cephfs/disaster-recovery-experts.rst b/doc/cephfs/disaster-recovery-experts.rst
index 9a196c88e234..7677b42f47e1 100644
--- a/doc/cephfs/disaster-recovery-experts.rst
+++ b/doc/cephfs/disaster-recovery-experts.rst
@@ -68,9 +68,9 @@ truncate it like so:
 
 ::
 
-    cephfs-journal-tool [--rank=N] journal reset
+    cephfs-journal-tool [--rank=<fs_name>:{mds-rank|all}] journal reset --yes-i-really-really-mean-it
 
-Specify the MDS rank using the ``--rank`` option when the file system has/had
+Specify the file system and the MDS rank using the ``--rank`` option when the
 file system has/had multiple active MDS.
 
 .. warning::

@@ -135,7 +135,7 @@ objects.
 
     # InoTable
     cephfs-table-tool 0 reset inode
     # Journal
-    cephfs-journal-tool --rank=0 journal reset
+    cephfs-journal-tool --rank=<fs_name>:0 journal reset --yes-i-really-really-mean-it
     # Root inodes ("/" and MDS directory)
     cephfs-data-scan init

@@ -253,7 +253,7 @@ Next, we will create the initial metadata for the fs:
 
     cephfs-table-tool cephfs_recovery:0 reset session
     cephfs-table-tool cephfs_recovery:0 reset snap
     cephfs-table-tool cephfs_recovery:0 reset inode
-    cephfs-journal-tool --rank cephfs_recovery:0 journal reset --force
+    cephfs-journal-tool --rank cephfs_recovery:0 journal reset --force --yes-i-really-really-mean-it
 
 Now perform the recovery of the metadata pool from the data pool:

-- 
2.47.3