From 1ef6aec7e4b32f4df3fe53cc1cb8a3c2ad27463f Mon Sep 17 00:00:00 2001
From: "Yan, Zheng"
Date: Fri, 7 Jul 2017 20:06:18 +0800
Subject: [PATCH] doc: add some docs about 'cephfs-data-scan scan_links'

Signed-off-by: "Yan, Zheng"
---
 doc/cephfs/disaster-recovery.rst | 25 ++++++++++++++-----------
 1 file changed, 14 insertions(+), 11 deletions(-)

diff --git a/doc/cephfs/disaster-recovery.rst b/doc/cephfs/disaster-recovery.rst
index 88bc4dd84dd..f47bd79e1da 100644
--- a/doc/cephfs/disaster-recovery.rst
+++ b/doc/cephfs/disaster-recovery.rst
@@ -130,18 +130,20 @@ objects.
 
 Finally, you can regenerate metadata objects for missing files
 and directories based on the contents of a data pool.  This is
-a two-phase process.  First, scanning *all* objects to calculate
+a three-phase process.  First, scanning *all* objects to calculate
 size and mtime metadata for inodes.  Second, scanning the first
-object from every file to collect this metadata and inject
-it into the metadata pool.
+object from every file to collect this metadata and inject it into
+the metadata pool. Third, checking inode linkages and fixing any
+errors found.
 
 ::
 
     cephfs-data-scan scan_extents <data pool>
     cephfs-data-scan scan_inodes <data pool>
+    cephfs-data-scan scan_links
 
-This command may take a *very long* time if there are many
-files or very large files in the data pool.
+The 'scan_extents' and 'scan_inodes' commands may take a *very long* time
+if there are many files or very large files in the data pool.
 
 To accelerate the process, run multiple instances of the tool.
 
@@ -246,7 +248,7 @@ it with empty file system data structures:
     ceph osd pool create recovery replicated
     ceph fs new recovery-fs recovery --allow-dangerous-metadata-overlay
     cephfs-data-scan init --force-init --filesystem recovery-fs --alternate-pool recovery
-    ceph fs reset recovery-fs --yes-i-realy-mean-it
+    ceph fs reset recovery-fs --yes-i-really-mean-it
    cephfs-table-tool recovery-fs:all reset session
     cephfs-table-tool recovery-fs:all reset snap
     cephfs-table-tool recovery-fs:all reset inode
@@ -256,8 +258,9 @@ results to the alternate pool:
 
 ::
 
-    cephfs-data-scan scan_extents --alternate-pool recovery --filesystem <original filesystem name> <original data pool name>
+    cephfs-data-scan scan_extents --alternate-pool recovery --filesystem <original filesystem name> <original data pool name>
     cephfs-data-scan scan_inodes --alternate-pool recovery --filesystem <original filesystem name> --force-corrupt --force-init <original data pool name>
+    cephfs-data-scan scan_links --filesystem recovery-fs
 
 If the damaged filesystem contains dirty journal data, it may be recovered next
 with:
@@ -267,10 +270,10 @@ with:
     cephfs-journal-tool --rank=<fs_name>:0 event recover_dentries list --alternate-pool recovery
     cephfs-journal-tool --rank recovery-fs:0 journal reset --force
 
-After recovery, some recovered directories will have incorrect link counts.
-Ensure the parameter mds_debug_scatterstat is set to false (the default) to
-prevent the MDS from checking the link counts, then run a forward scrub to
-repair them. Ensure you have an MDS running and issue:
+After recovery, some recovered directories will have incorrect statistics.
+Ensure the parameters mds_verify_scatter and mds_debug_scatterstat are set
+to false (the default) to prevent the MDS from checking the statistics, then
+run a forward scrub to repair them. Ensure you have an MDS running and issue:
 
 ::
-- 
2.39.5
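
A note on the final hunk: the forward scrub command itself sits just beyond the
context shown, so it does not appear in this patch. As an illustration only (it
is not part of the patch; 'mds.a' is a placeholder daemon name, and the
admin-socket form shown here is the one used by Ceph releases of that era), a
recursive repair scrub is issued roughly like this:

::

    ceph daemon mds.a scrub_path / recursive repair

Later releases expose the same operation through the 'ceph tell mds.<fs_name>:0
scrub start' command instead of the admin socket.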