From: Piotr Dałek
Date: Wed, 1 Mar 2017 11:07:14 +0000 (+0100)
Subject: doc: document new force-recovery/force-backfill commands
X-Git-Tag: v12.1.2~145^2~2
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=3fea25e1f5db8dbf7d8ab7aa76f95510a5f00559;p=ceph.git

doc: document new force-recovery/force-backfill commands

Documentation for the new pg force-recovery, pg force-backfill,
pg cancel-force-recovery and pg cancel-force-backfill commands.

Signed-off-by: Piotr Dałek
---

diff --git a/doc/rados/operations/pg-states.rst b/doc/rados/operations/pg-states.rst
index 233042945046..0fbd3dcf0b04 100644
--- a/doc/rados/operations/pg-states.rst
+++ b/doc/rados/operations/pg-states.rst
@@ -38,11 +38,17 @@ map is ``active + clean``.
 *Recovering*
   Ceph is migrating/synchronizing objects and their replicas.
 
+*Forced-Recovery*
+  The user has enforced a high recovery priority for this PG.
+
 *Backfill*
   Ceph is scanning and synchronizing the entire contents of a placement group
   instead of inferring what contents need to be synchronized from the logs of
   recent operations. *Backfill* is a special case of recovery.
 
+*Forced-Backfill*
+  The user has enforced a high backfill priority for this PG.
+
 *Wait-backfill*
   The placement group is waiting in line to start backfill.
 
diff --git a/doc/rados/operations/placement-groups.rst b/doc/rados/operations/placement-groups.rst
index d822373dd168..10979a9c4d75 100644
--- a/doc/rados/operations/placement-groups.rst
+++ b/doc/rados/operations/placement-groups.rst
@@ -403,6 +403,36 @@
 or mismatched, and their contents are consistent. Assuming the replicas
 all match, a final semantic sweep ensures that all of the snapshot-related
 object metadata is consistent. Errors are reported via logs.
+Prioritize backfill/recovery of Placement Group(s)
+==================================================
+
+You may run into a situation where a number of placement groups require
+recovery and/or backfill while some of them hold more important data
+than others (for example, some PGs may hold data for images used by
+running machines while others serve inactive machines or less relevant
+data). In that case you may want to prioritize recovery of those groups
+so that the performance and/or availability of the data stored on them
+is restored earlier. To mark particular placement group(s) as prioritized
+during backfill or recovery, execute the following::
+
+    ceph pg force-recovery {pg-id} [{pg-id #2}] [{pg-id #3} ...]
+    ceph pg force-backfill {pg-id} [{pg-id #2}] [{pg-id #3} ...]
+
+This will cause Ceph to perform recovery or backfill on the specified
+placement groups first. It does not interrupt ongoing backfills or
+recovery, but causes the specified PGs to be processed as soon as
+possible. If you prioritized the wrong groups or change your mind, use::
+
+    ceph pg cancel-force-recovery {pg-id} [{pg-id #2}] [{pg-id #3} ...]
+    ceph pg cancel-force-backfill {pg-id} [{pg-id #2}] [{pg-id #3} ...]
+
+This removes the "force" flag from those PGs, so that they are processed
+in the default order. Again, this does not affect placement groups that
+are currently being processed, only those that are still queued.
+
+The "force" flag is cleared automatically once recovery or backfill of
+the group is complete.
+
 Revert Lost
 ===========
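As the documented commands accept several pg-ids in one invocation, a small shell sketch of composing such a call may help. The pool id `2` and the PG ids below are made up for illustration; on a real cluster you would take them from `ceph pg ls` or `ceph pg dump` output, and the first component of a PG id is the pool id:

```shell
# Hypothetical PG ids; substitute real ones from "ceph pg ls" output.
all_pgs="1.0 1.1 2.3 2.7 3.0"

# Select only the PGs belonging to pool 2 (ids matching 2.*).
pool2_pgs=$(for pg in $all_pgs; do
    case "$pg" in 2.*) printf '%s ' "$pg" ;; esac
done)

# Print the command rather than running it, since this is only a sketch.
echo ceph pg force-backfill $pool2_pgs
```

This prints `ceph pg force-backfill 2.3 2.7`, i.e. one call covering every PG of the pool, which matches the multi-pg-id syntax documented above.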