From cf366fc3b21ff6f98530dbadb75a430c25672d56 Mon Sep 17 00:00:00 2001 From: Nilamdyuti Goswami Date: Thu, 18 Dec 2014 17:11:22 +0530 Subject: [PATCH] doc: Changes format style in ceph to improve readability as html. Signed-off-by: Nilamdyuti Goswami (cherry picked from commit 8b796173063ac9af8c21364521fc5ee23d901196) --- doc/man/8/ceph.rst | 1019 ++++++++++++++++++++++++++++++-------------- 1 file changed, 695 insertions(+), 324 deletions(-) diff --git a/doc/man/8/ceph.rst b/doc/man/8/ceph.rst index 2b52df3e68108..73cb10f5aa869 100644 --- a/doc/man/8/ceph.rst +++ b/doc/man/8/ceph.rst @@ -35,7 +35,7 @@ Synopsis Description =========== -**ceph** is a control utility which is used for manual deployment and maintenance +:program:`ceph` is a control utility which is used for manual deployment and maintenance of a Ceph cluster. It provides a diverse set of commands that allows deployment of monitors, OSDs, placement groups, MDS and overall maintenance, administration of the cluster. @@ -43,674 +43,1045 @@ of the cluster. Commands ======== -**auth**: Manage authentication keys. It is used for adding, removing, exporting +auth +---- + +Manage authentication keys. It is used for adding, removing, exporting or updating of authentication keys for a particular entity such as a monitor or OSD. It uses some additional subcommands. -Subcommand **add** adds authentication info for a particular entity from input +Subcommand ``add`` adds authentication info for a particular entity from input file, or random key if no input given and/or any caps specified in the command. -Usage: ceph auth add { [...]} +Usage:: + + ceph auth add { [...]} + +Subcommand ``caps`` updates caps for **name** from caps specified in the command. -Subcommand **caps** updates caps for **name** from caps specified in the command. +Usage:: -Usage: ceph auth caps [...] + ceph auth caps [...] -Subcommand **del** deletes all caps for **name**. +Subcommand ``del`` deletes all caps for ``name``. -Usage: ceph auth del +Usage:: -Subcommand **export** writes keyring for requested entity, or master keyring if + ceph auth del + +Subcommand ``export`` writes keyring for requested entity, or master keyring if none given. -Usage: ceph auth export {} +Usage:: + + ceph auth export {} + +Subcommand ``get`` writes keyring file with requested key. -Subcommand **get** writes keyring file with requested key. +Usage:: -Usage: ceph auth get + ceph auth get -Subcommand **get-key** displays requested key. +Subcommand ``get-key`` displays requested key. -Usage: ceph auth get-key +Usage:: -Subcommand **get-or-create** adds authentication info for a particular entity + ceph auth get-key + +Subcommand ``get-or-create`` adds authentication info for a particular entity from input file, or random key if no input given and/or any caps specified in the command. -Usage: ceph auth get-or-create { [...]} +Usage:: + + ceph auth get-or-create { [...]} -Subcommand **get-or-create-key** gets or adds key for **name** from system/caps +Subcommand ``get-or-create-key`` gets or adds key for ``name`` from system/caps pairs specified in the command. If key already exists, any given caps must match the existing caps for that key. -Subcommand **import** reads keyring from input file. +Usage:: + + ceph auth get-or-create-key { [...]} + +Subcommand ``import`` reads keyring from input file. + +Usage:: + + ceph auth import + +Subcommand ``list`` lists authentication state. + +Usage:: + + ceph auth list + +Subcommand ``print-key`` displays requested key. 
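For illustration, a typical ``auth`` round trip might look as follows; the entity name ``client.backup``, the pool name and the capability strings are placeholders invented for this sketch, not values required by the commands::

    # create (or fetch) a key for a hypothetical client.backup entity with example caps
    ceph auth get-or-create client.backup mon 'allow r' osd 'allow rw pool=backups' \
        -o /etc/ceph/ceph.client.backup.keyring
    ceph auth get client.backup
    # later, tighten the caps and finally remove the entity again
    ceph auth caps client.backup mon 'allow r' osd 'allow r pool=backups'
    ceph auth del client.backup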
+ +Usage:: + + ceph auth print-key + +Subcommand ``print_key`` displays requested key. + +Usage:: + + ceph auth print_key + + +compact +------- + +Causes compaction of monitor's leveldb storage. -Usage: ceph auth import +Usage:: -Subcommand **list** lists authentication state. + ceph compact -Usage: ceph auth list -Subcommand **print-key** displays requested key. +config-key +---------- -Usage: ceph auth print-key +Manage configuration key. It uses some additional subcommands. -Subcommand **print_key** displays requested key. +Subcommand ``get`` gets the configuration key. -Usage: ceph auth print_key +Usage:: -**compact**: Causes compaction of monitor's leveldb storage. + ceph config-key get -Usage: ceph compact +Subcommand ``put`` puts configuration key and values. -**config-key**: Manage configuration key. It uses some additional subcommands. +Usage:: -Subcommand **get** gets the configuration key. + ceph config-key put {} -Usage: ceph config-key get +Subcommand ``exists`` checks for configuration keys existence. -Subcommand **put** puts configuration key and values. +Usage:: -Usage: ceph config-key put {} + ceph config-key exists -Subcommand **exists** checks for configuration keys existence. +Subcommand ``list`` lists configuration keys. -Usage: ceph config-key exists +Usage:: -Subcommand **list** lists configuration keys. + ceph config-key list -Usage: ceph config-key list +Subcommand ``del`` deletes configuration key. -Subcommand **del** deletes configuration key. +Usage:: -Usage: ceph config-key del + ceph config-key del -**df**: Show cluster's free space status. -Usage: ceph df +df +-- -**fsid**: Show cluster's FSID/UUID. +Show cluster's free space status. -Usage: ceph fsid +Usage:: -**health**: Show cluster's health. + ceph df -Usage: ceph health -**heap**: Show heap usage info (available only if compiled with tcmalloc) +fsid +---- -Usage: ceph heap dump|start_profiler|stop_profiler|release|stats +Show cluster's FSID/UUID. -**injectargs**: Inject configuration arguments into monitor. +Usage:: -Usage: ceph injectargs [...] + ceph fsid -**log**: Log supplied text to the monitor log. -Usage: ceph log [...] +health +------ -**mds**: Manage metadata server configuration and administration. It uses some +Show cluster's health. + +Usage:: + + ceph health + + +heap +---- + +Show heap usage info (available only if compiled with tcmalloc) + +Usage:: + + ceph heap dump|start_profiler|stop_profiler|release|stats + + +injectargs +---------- + +Inject configuration arguments into monitor. + +Usage:: + + ceph injectargs [...] + + +log +--- + +Log supplied text to the monitor log. + +Usage:: + + ceph log [...] + + +mds +--- + +Manage metadata server configuration and administration. It uses some additional subcommands. -Subcommand **add_data_pool** adds data pool. +Subcommand ``add_data_pool`` adds data pool. + +Usage:: + + ceph mds add_data_pool + +Subcommand ``cluster_down`` takes mds cluster down. -Usage: ceph mds add_data_pool +Usage:: -Subcommand **cluster_down** takes mds cluster down. + ceph mds cluster_down -Usage: ceph mds cluster_down +Subcommand ``cluster_up`` brings mds cluster up. -Subcommand **cluster_up** brings mds cluster up. +Usage:: -Usage: ceph mds cluster_up + ceph mds cluster_up -Subcommand **compat** manages compatible features. It uses some additional +Subcommand ``compat`` manages compatible features. It uses some additional subcommands. -Subcommand **rm_compat** removes compatible feature. +Subcommand ``rm_compat`` removes compatible feature. 
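A short example of the inspection and ``config-key`` commands above; the key name and value are made-up placeholders::

    ceph health
    ceph df
    # store and check an arbitrary example key/value pair
    ceph config-key put example/backup-enabled true
    ceph config-key exists example/backup-enabled
    ceph config-key list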
-Usage: ceph mds compat rm_compat +Usage:: -Subcommand **rm_incompat** removes incompatible feature. + ceph mds compat rm_compat -Usage: ceph mds compat rm_incompat +Subcommand ``rm_incompat`` removes incompatible feature. -Subcommand **show** shows mds compatibility settings. +Usage:: -Usage: ceph mds compat show + ceph mds compat rm_incompat -Subcommand **deactivate** stops mds. +Subcommand ``show`` shows mds compatibility settings. -Usage: ceph mds deactivate +Usage:: -Subcommand **dump** dumps information, optionally from epoch. + ceph mds compat show -Usage: ceph mds dump {} +Subcommand ``deactivate`` stops mds. -Subcommand **fail** forces mds to status fail. +Usage:: -Usage: ceph mds fail + ceph mds deactivate -Subcommand **getmap** gets MDS map, optionally from epoch. +Subcommand ``dump`` dumps information, optionally from epoch. -Usage: ceph mds getmap {} +Usage:: -Subcommand **newfs** makes new filesystem using pools and . + ceph mds dump {} -Usage: ceph mds newfs {--yes-i-really-mean-it} +Subcommand ``fail`` forces mds to status fail. -Subcommand **remove_data_pool** removes data pool. +Usage:: -Usage: ceph mds remove_data_pool + ceph mds fail -Subcommand **rm** removes inactive mds. +Subcommand ``getmap`` gets MDS map, optionally from epoch. -Usage: ceph mds rm (type.id)> +Usage:: -Subcommand **rmfailed** removes failed mds. + ceph mds getmap {} -Usage: ceph mds rmfailed +Subcommand ``newfs`` makes new filesystem using pools and . -Subcommand **set_max_mds** sets max MDS index. +Usage:: -Usage: ceph mds set_max_mds + ceph mds newfs {--yes-i-really-mean-it} -Subcommand **set_state** sets mds state of to . +Subcommand ``remove_data_pool`` removes data pool. -Usage: ceph mds set_state +Usage:: -Subcommand **setmap** sets mds map; must supply correct epoch number. + ceph mds remove_data_pool -Usage: ceph mds setmap +Subcommand ``rm`` removes inactive mds. -Subcommand **stat** shows MDS status. +Usage:: -Usage: ceph mds stat + ceph mds rm (type.id)> -Subcommand **stop** stops mds. +Subcommand ``rmfailed`` removes failed mds. -Usage: ceph mds stop +Usage:: -Subcommand **tell** sends command to particular mds. + ceph mds rmfailed -Usage: ceph mds tell [...] +Subcommand ``set_max_mds`` sets max MDS index. -**mon**: Manage monitor configuration and administration. It uses some -additional subcommands. +Usage:: + + ceph mds set_max_mds + +Subcommand ``set_state`` sets mds state of to . + +Usage:: + + ceph mds set_state + +Subcommand ``setmap`` sets mds map; must supply correct epoch number. + +Usage:: + + ceph mds setmap + +Subcommand ``stat`` shows MDS status. + +Usage:: + + ceph mds stat + +Subcommand ``stop`` stops mds. + +Usage:: + + ceph mds stop + +Subcommand ``tell`` sends command to particular mds. + +Usage:: + + ceph mds tell [...] + +mon +--- + +Manage monitor configuration and administration. It uses some additional +subcommands. + +Subcommand ``add`` adds new monitor named at . + +Usage:: + + ceph mon add -Subcommand **add** adds new monitor named at . +Subcommand ``dump`` dumps formatted monmap (optionally from epoch) -Usage: ceph mon add +Usage:: -Subcommand **dump** dumps formatted monmap (optionally from epoch) + ceph mon dump {} -Usage: ceph mon dump {} +Subcommand ``getmap`` gets monmap. -Subcommand **getmap** gets monmap. +Usage:: -Usage: ceph mon getmap {} + ceph mon getmap {} -Subcommand **remove** removes monitor named . +Subcommand ``remove`` removes monitor named . -Usage: ceph mon remove +Usage:: -Subcommand **stat** summarizes monitor status. 
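As a sketch of the ``mds`` and ``mon`` subcommands above, assuming a hypothetical third monitor named ``c`` at an example address::

    ceph mds stat
    # mark MDS rank 0 failed so a standby can take over (rank chosen for illustration)
    ceph mds fail 0
    # add a new monitor "c" at an example IP address, then inspect the monmap
    ceph mon add c 192.0.2.3:6789
    ceph mon dump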
+ ceph mon remove -Usage: ceph mon stat +Subcommand ``stat`` summarizes monitor status. -Subcommand **mon_status** reports status of monitors. +Usage:: -Usage: ceph mon_status + ceph mon stat -**osd**: Manage OSD configuration and administration. It uses some additional +Subcommand ``mon_status`` reports status of monitors. + +Usage:: + + ceph mon_status + +osd +--- + +Manage OSD configuration and administration. It uses some additional subcommands. -Subcommand **create** creates new osd (with optional UUID). +Subcommand ``create`` creates new osd (with optional UUID). + +Usage:: -Usage: ceph osd create {} + ceph osd create {} -Subcommand **crush** is used for CRUSH management. It uses some additional +Subcommand ``crush`` is used for CRUSH management. It uses some additional subcommands. -Subcommand **add** adds or updates crushmap position and weight for with +Subcommand ``add`` adds or updates crushmap position and weight for with and location . -Usage: ceph osd crush add [...] +Usage:: + + ceph osd crush add [...] -Subcommand **add-bucket** adds no-parent (probably root) crush bucket of +Subcommand ``add-bucket`` adds no-parent (probably root) crush bucket of type . -Usage: ceph osd crush add-bucket +Usage:: -Subcommand **create-or-move** creates entry or moves existing entry for + ceph osd crush add-bucket + +Subcommand ``create-or-move`` creates entry or moves existing entry for at/to location . -Usage: ceph osd crush create-or-move +Usage:: + + ceph osd crush create-or-move [...] -Subcommand **dump** dumps crush map. +Subcommand ``dump`` dumps crush map. + +Usage:: -Usage: ceph osd crush dump + ceph osd crush dump -Subcommand **link** links existing entry for under location . +Subcommand ``link`` links existing entry for under location . -Usage: ceph osd crush link [...] +Usage:: -Subcommand **move** moves existing entry for to location . + ceph osd crush link [...] -Usage: ceph osd crush move [...] +Subcommand ``move`` moves existing entry for to location . -Subcommand **remove** removes from crush map (everywhere, or just at +Usage:: + + ceph osd crush move [...] + +Subcommand ``remove`` removes from crush map (everywhere, or just at ). -Usage: ceph osd crush remove {} +Usage:: + + ceph osd crush remove {} -Subcommand **reweight** change 's weight to in crush map. +Subcommand ``reweight`` change 's weight to in crush map. -Usage: ceph osd crush reweight +Usage:: -Subcommand **rm** removes from crush map (everywhere, or just at + ceph osd crush reweight + +Subcommand ``rm`` removes from crush map (everywhere, or just at ). -Usage: ceph osd crush rm {} +Usage:: + + ceph osd crush rm {} -Subcommand **rule** is used for creating crush rules. It uses some additional +Subcommand ``rule`` is used for creating crush rules. It uses some additional subcommands. -Subcommand **create-erasure** creates crush rule for erasure coded pool +Subcommand ``create-erasure`` creates crush rule for erasure coded pool created with (default default). -Usage: ceph osd crush rule create-erasure {} +Usage:: + + ceph osd crush rule create-erasure {} -Subcommand **create-simple** creates crush rule to start from , +Subcommand ``create-simple`` creates crush rule to start from , replicate across buckets of type , using a choose mode of (default firstn; indep best for erasure pools). -Usage: ceph osd crush rule create-simple {firstn|indep} +Usage:: + + ceph osd crush rule create-simple {firstn|indep} + +Subcommand ``dump`` dumps crush rule (default all). 
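For example, a new OSD could be placed in the CRUSH hierarchy roughly as follows; the bucket names, the OSD id and the weights are assumptions made for this sketch::

    ceph osd create
    # create a hypothetical rack bucket and hang it under the default root
    ceph osd crush add-bucket rack1 rack
    ceph osd crush move rack1 root=default
    # add osd.12 with weight 1.0 at an example location, then adjust its weight
    ceph osd crush add osd.12 1.0 root=default rack=rack1 host=node4
    ceph osd crush reweight osd.12 0.5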
+ +Usage:: -Subcommand **dump** dumps crush rule (default all). + ceph osd crush rule dump {} -Usage: ceph osd crush rule dump {} +Subcommand ``list`` lists crush rules. -Subcommand **list** lists crush rules. +Usage:: -Usage: ceph osd crush rule list + ceph osd crush rule list -Subcommand **ls** lists crush rules. +Subcommand ``ls`` lists crush rules. -Usage: ceph osd crush rule ls +Usage:: -Subcommand **rm** removes crush rule . + ceph osd crush rule ls -Usage: ceph osd crush rule rm +Subcommand ``rm`` removes crush rule . -Subcommand **set** sets crush map from input file. +Usage:: -Usage: ceph osd crush set + ceph osd crush rule rm -Subcommand **set** with osdname/osd.id update crushmap position and weight +Subcommand ``set`` sets crush map from input file. + +Usage:: + + ceph osd crush set + +Subcommand ``set`` with osdname/osd.id update crushmap position and weight for to with location . -Usage: ceph osd crush set [...] +Usage:: + + ceph osd crush set [...] + +Subcommand ``show-tunables`` shows current crush tunables. -Subcommand **show-tunables** shows current crush tunables. +Usage:: -Usage: ceph osd crush show-tunables + ceph osd crush show-tunables -Subcommand **tunables** sets crush tunables values to . +Subcommand ``tunables`` sets crush tunables values to . -Usage: ceph osd crush tunables legacy|argonaut|bobtail|firefly|optimal|default +Usage:: -Subcommand **unlink** unlinks from crush map (everywhere, or just at + ceph osd crush tunables legacy|argonaut|bobtail|firefly|optimal|default + +Subcommand ``unlink`` unlinks from crush map (everywhere, or just at ). -Usage: ceph osd crush unlink {} +Usage:: + + ceph osd crush unlink {} + +Subcommand ``deep-scrub`` initiates deep scrub on specified osd. -Subcommand **deep-scrub** initiates deep scrub on specified osd. +Usage:: -Usage: ceph osd deep-scrub + ceph osd deep-scrub -Subcommand **down** sets osd(s) [...] down. +Subcommand ``down`` sets osd(s) [...] down. -Usage: ceph osd down [...] +Usage:: -Subcommand **dump** prints summary of OSD map. + ceph osd down [...] -Usage: ceph osd dump {} +Subcommand ``dump`` prints summary of OSD map. -Subcommand **erasure-code-profile** is used for managing the erasure code +Usage:: + + ceph osd dump {} + +Subcommand ``erasure-code-profile`` is used for managing the erasure code profiles. It uses some additional subcommands. -Subcommand **get** gets erasure code profile . +Subcommand ``get`` gets erasure code profile . + +Usage:: -Usage: ceph osd erasure-code-profile get + ceph osd erasure-code-profile get -Subcommand **ls** lists all erasure code profiles. +Subcommand ``ls`` lists all erasure code profiles. -Usage: ceph osd erasure-code-profile ls +Usage:: -Subcommand **rm** removes erasure code profile . + ceph osd erasure-code-profile ls -Usage: ceph osd erasure-code-profile rm +Subcommand ``rm`` removes erasure code profile . -Subcommand **set** creates erasure code profile with [ ...] +Usage:: + + ceph osd erasure-code-profile rm + +Subcommand ``set`` creates erasure code profile with [ ...] pairs. Add a --force at the end to override an existing profile (IT IS RISKY). -Usage: ceph osd erasure-code-profile set { [...]} +Usage:: + + ceph osd erasure-code-profile set { [...]} + +Subcommand ``find`` find osd in the CRUSH map and shows its location. + +Usage:: + + ceph osd find -Subcommand **find** find osd in the CRUSH map and shows its location. +Subcommand ``getcrushmap`` gets CRUSH map. -Usage: ceph osd find +Usage:: -Subcommand **getcrushmap** gets CRUSH map. 
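A sketch of combining ``erasure-code-profile`` and ``crush rule create-erasure``; the profile name, rule name and k/m values are illustrative, and the exact profile keys depend on the erasure plugin and release in use::

    # define a hypothetical 4+2 profile with host as the failure domain
    ceph osd erasure-code-profile set myprofile k=4 m=2 ruleset-failure-domain=host
    ceph osd erasure-code-profile get myprofile
    # create a matching rule and confirm it is listed
    ceph osd crush rule create-erasure ecrule myprofile
    ceph osd crush rule ls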
+ ceph osd getcrushmap {} -Usage: ceph osd getcrushmap {} +Subcommand ``getmap`` gets OSD map. -Subcommand **getmap** gets OSD map. +Usage:: -Usage: ceph osd getmap {} + ceph osd getmap {} -Subcommand **getmaxosd** shows largest OSD id. +Subcommand ``getmaxosd`` shows largest OSD id. -Usage: ceph osd getmaxosd +Usage:: -Subcommand **in** sets osd(s) [...] in. + ceph osd getmaxosd -Usage: ceph osd in [...] +Subcommand ``in`` sets osd(s) [...] in. -Subcommand **lost** marks osd as permanently lost. THIS DESTROYS DATA IF NO +Usage:: + + ceph osd in [...] + +Subcommand ``lost`` marks osd as permanently lost. THIS DESTROYS DATA IF NO MORE REPLICAS EXIST, BE CAREFUL. -Usage: ceph osd lost {--yes-i-really-mean-it} +Usage:: + + ceph osd lost {--yes-i-really-mean-it} + +Subcommand ``ls`` shows all OSD ids. + +Usage:: + + ceph osd ls {} -Subcommand **ls** shows all OSD ids. +Subcommand ``lspools`` lists pools. -Usage: ceph osd ls {} +Usage:: -Subcommand **lspools** lists pools. + ceph osd lspools {} -Usage: ceph osd lspools {} +Subcommand ``map`` finds pg for in . -Subcommand **map** finds pg for in . +Usage:: -Usage: ceph osd map + ceph osd map -Subcommand **metadata** fetches metadata for osd . +Subcommand ``metadata`` fetches metadata for osd . -Usage: ceph osd metadata +Usage:: -Subcommand **out** sets osd(s) [...] out. + ceph osd metadata -Usage: ceph osd out [...] +Subcommand ``out`` sets osd(s) [...] out. -Subcommand **pause** pauses osd. +Usage:: -Usage: ceph osd pause + ceph osd out [...] -Subcommand **perf** prints dump of OSD perf summary stats. +Subcommand ``pause`` pauses osd. -Usage: ceph osd perf +Usage:: -Subcommand **pg-temp** set pg_temp mapping pgid:[ [...]] (developers + ceph osd pause + +Subcommand ``perf`` prints dump of OSD perf summary stats. + +Usage:: + + ceph osd perf + +Subcommand ``pg-temp`` set pg_temp mapping pgid:[ [...]] (developers only). -Usage: ceph osd pg-temp { [...]} +Usage:: + + ceph osd pg-temp { [...]} -Subcommand **pool** is used for managing data pools. It uses some additional +Subcommand ``pool`` is used for managing data pools. It uses some additional subcommands. -Subcommand **create** creates pool. +Subcommand ``create`` creates pool. + +Usage:: -Usage: ceph osd pool create {} {replicated|erasure} -{} {} + ceph osd pool create {} {replicated|erasure} + {} {} -Subcommand **delete** deletes pool. +Subcommand ``delete`` deletes pool. -Usage: ceph osd pool delete {} {--yes-i-really-really-mean-it} +Usage:: -Subcommand **get** gets pool parameter . + ceph osd pool delete {} {--yes-i-really-really-mean-it} -Usage: ceph osd pool get size|min_size|crash_replay_interval|pg_num| -pgp_num|crush_ruleset|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp| +Subcommand ``get`` gets pool parameter . -ceph osd pool get auid|target_max_objects|target_max_bytes +Usage:: -ceph osd pool get cache_target_dirty_ratio|cache_target_full_ratio + ceph osd pool get size|min_size|crash_replay_interval|pg_num| + pgp_num|crush_ruleset|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp| -ceph osd pool get cache_min_flush_age|cache_min_evict_age| -erasure_code_profile + ceph osd pool get auid|target_max_objects|target_max_bytes -Subcommand **get-quota** obtains object or byte limits for pool. + ceph osd pool get cache_target_dirty_ratio|cache_target_full_ratio -Usage: ceph osd pool get-quota + ceph osd pool get cache_min_flush_age|cache_min_evict_age| + erasure_code_profile -Subcommand **mksnap** makes snapshot in . 
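For instance, creating and inspecting a replicated pool might look like this; the pool name, placement-group counts and object name are placeholders::

    ceph osd pool create testpool 128 128 replicated
    ceph osd pool get testpool size
    ceph osd pool get-quota testpool
    # show which pg and osds an example object would map to
    ceph osd map testpool someobject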
+Subcommand ``get-quota`` obtains object or byte limits for pool. -Usage: ceph osd pool mksnap +Usage:: -Subcommand **rename** renames to . + ceph osd pool get-quota -Usage: ceph osd pool rename +Subcommand ``mksnap`` makes snapshot in . -Subcommand **rmsnap** removes snapshot from . +Usage:: -Usage: ceph osd pool rmsnap + ceph osd pool mksnap -Subcommand **set** sets pool parameter to . +Subcommand ``rename`` renames to . -Usage: ceph osd pool set size|min_size|crash_replay_interval|pg_num| -pgp_num|crush_ruleset|hashpspool|hit_set_type|hit_set_period| +Usage:: -ceph osd pool set hit_set_count|hit_set_fpp|debug_fake_ec_pool + ceph osd pool rename -ceph osd pool set target_max_bytes|target_max_objects +Subcommand ``rmsnap`` removes snapshot from . -ceph osd pool set cache_target_dirty_ratio|cache_target_full_ratio +Usage:: -ceph osd pool set cache_min_flush_age + ceph osd pool rmsnap -ceph osd pool set cache_min_evict_age|auid {--yes-i-really-mean-it} +Subcommand ``set`` sets pool parameter to . -Subcommand **set-quota** sets object or byte limit on pool. +Usage:: -Usage: ceph osd pool set-quota max_objects|max_bytes + ceph osd pool set size|min_size|crash_replay_interval|pg_num| + pgp_num|crush_ruleset|hashpspool|hit_set_type|hit_set_period| -Subcommand **stats** obtain stats from all pools, or from specified pool. + ceph osd pool set hit_set_count|hit_set_fpp|debug_fake_ec_pool -Usage: ceph osd pool stats {} + ceph osd pool set target_max_bytes|target_max_objects -Subcommand **primary-affinity** adjust osd primary-affinity from 0.0 <= + ceph osd pool set cache_target_dirty_ratio|cache_target_full_ratio + + ceph osd pool set cache_min_flush_age + + ceph osd pool set cache_min_evict_age|auid + {--yes-i-really-mean-it} + +Subcommand ``set-quota`` sets object or byte limit on pool. + +Usage:: + + ceph osd pool set-quota max_objects|max_bytes + +Subcommand ``stats`` obtain stats from all pools, or from specified pool. + +Usage:: + + ceph osd pool stats {} + +Subcommand ``primary-affinity`` adjust osd primary-affinity from 0.0 <= <= 1.0 -Usage: ceph osd primary-affinity +Usage:: -Subcommand **primary-temp** sets primary_temp mapping pgid:|-1 (developers + ceph osd primary-affinity + +Subcommand ``primary-temp`` sets primary_temp mapping pgid:|-1 (developers only). -Usage: ceph osd primary-temp +Usage:: + + ceph osd primary-temp + +Subcommand ``repair`` initiates repair on a specified osd. -Subcommand **repair** initiates repair on a specified osd. +Usage:: -Usage: ceph osd repair + ceph osd repair -Subcommand **reweight** reweights osd to 0.0 < < 1.0. +Subcommand ``reweight`` reweights osd to 0.0 < < 1.0. -Usage: osd reweight +Usage:: -Subcommand **reweight-by-utilization** reweight OSDs by utilization + osd reweight + +Subcommand ``reweight-by-utilization`` reweight OSDs by utilization [overload-percentage-for-consideration, default 120]. -Usage: ceph osd reweight-by-utilization {} +Usage:: + + ceph osd reweight-by-utilization {} + +Subcommand ``rm`` removes osd(s) [...] in the cluster. + +Usage:: + + ceph osd rm [...] + +Subcommand ``scrub`` initiates scrub on specified osd. -Subcommand **rm** removes osd(s) [...] in the cluster. +Usage:: -Usage: ceph osd rm [...] + ceph osd scrub -Subcommand **scrub** initiates scrub on specified osd. +Subcommand ``set`` sets . -Usage: ceph osd scrub +Usage:: -Subcommand **set** sets . 
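An illustrative sequence for the pool tuning and OSD weighting commands above; the pool name, quota, OSD id and weight are example values only::

    ceph osd pool set testpool pg_num 256
    # cap the pool at roughly 100 GB (example figure)
    ceph osd pool set-quota testpool max_bytes 107374182400
    # reduce the weight of a hypothetical osd.12, then mark it out entirely
    ceph osd reweight 12 0.8
    ceph osd out 12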
+ ceph osd set pause|noup|nodown|noout|noin|nobackfill|norecover|noscrub| + nodeep-scrub|notieragent -Usage: ceph osd set pause|noup|nodown|noout|noin|nobackfill|norecover|noscrub| -nodeep-scrub|notieragent +Subcommand ``setcrushmap`` sets crush map from input file. -Subcommand **setcrushmap** sets crush map from input file. +Usage:: -Usage: ceph osd setcrushmap + ceph osd setcrushmap -Subcommand **setmaxosd** sets new maximum osd value. +Subcommand ``setmaxosd`` sets new maximum osd value. -Usage: ceph osd setmaxosd +Usage:: -Subcommand **stat** prints summary of OSD map. + ceph osd setmaxosd -Usage: ceph osd stat +Subcommand ``stat`` prints summary of OSD map. -Subcommand **thrash** thrashes OSDs for . +Usage:: -Usage: ceph osd thrash + ceph osd stat -Subcommand **tier** is used for managing tiers. It uses some additional +Subcommand ``thrash`` thrashes OSDs for . + +Usage:: + + ceph osd thrash + +Subcommand ``tier`` is used for managing tiers. It uses some additional subcommands. -Subcommand **add** adds the tier (the second one) to base pool +Subcommand ``add`` adds the tier (the second one) to base pool (the first one). -Usage: ceph osd tier add {--force-nonempty} +Usage:: -Subcommand **add-cache** adds a cache (the second one) of size + ceph osd tier add {--force-nonempty} + +Subcommand ``add-cache`` adds a cache (the second one) of size to existing pool (the first one). -Usage: ceph osd tier add-cache +Usage:: + + ceph osd tier add-cache -Subcommand **cache-mode** specifies the caching mode for cache tier . +Subcommand ``cache-mode`` specifies the caching mode for cache tier . -Usage: ceph osd tier cache-mode none|writeback|forward|readonly +Usage:: -Subcommand **remove** removes the tier (the second one) from base pool + ceph osd tier cache-mode none|writeback|forward|readonly + +Subcommand ``remove`` removes the tier (the second one) from base pool (the first one). -Usage: ceph osd tier remove +Usage:: + + ceph osd tier remove + +Subcommand ``remove-overlay`` removes the overlay pool for base pool . -Subcommand **remove-overlay** removes the overlay pool for base pool . +Usage:: -Usage: ceph osd tier remove-overlay + ceph osd tier remove-overlay -Subcommand **set-overlay** set the overlay pool for base pool to be +Subcommand ``set-overlay`` set the overlay pool for base pool to be . -Usage: ceph osd tier set-overlay +Usage:: + + ceph osd tier set-overlay + +Subcommand ``tree`` prints OSD tree. + +Usage:: -Subcommand **tree** prints OSD tree. + ceph osd tree {} -Usage: ceph osd tree {} +Subcommand ``unpause`` unpauses osd. -Subcommand **unpause** unpauses osd. +Usage:: -Usage: ceph osd unpause + ceph osd unpause -Subcommand **unset** unsets . +Subcommand ``unset`` unsets . -Usage: osd unset pause|noup|nodown|noout|noin|nobackfill|norecover|noscrub| -nodeep-scrub|notieragent +Usage:: -**pg**: It is used for managing the placement groups in OSDs. It uses some + osd unset pause|noup|nodown|noout|noin|nobackfill|norecover|noscrub| + nodeep-scrub|notieragent + + +pg +-- + +It is used for managing the placement groups in OSDs. It uses some additional subcommands. -Subcommand **debug** shows debug info about pgs. +Subcommand ``debug`` shows debug info about pgs. + +Usage:: + + ceph pg debug unfound_objects_exist|degraded_pgs_exist + +Subcommand ``deep-scrub`` starts deep-scrub on . -Usage: ceph pg debug unfound_objects_exist|degraded_pgs_exist +Usage:: -Subcommand **deep-scrub** starts deep-scrub on . 
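A minimal cache-tier sketch based on the ``tier`` subcommands above, assuming two hypothetical pools ``cold`` (base) and ``hot`` (cache)::

    ceph osd tier add cold hot
    ceph osd tier cache-mode hot writeback
    ceph osd tier set-overlay cold hot
    # tearing the tier down again
    ceph osd tier remove-overlay cold
    ceph osd tier remove cold hot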
+ ceph pg deep-scrub -Usage: ceph pg deep-scrub +Subcommand ``dump`` shows human-readable versions of pg map (only 'all' valid +with plain). -Subcommand **dump** shows human-readable versions of pg map (only 'all' valid with -plain). +Usage:: -Usage: ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief} + ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief} -ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief...} + ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief...} -Subcommand **dump_json** shows human-readable version of pg map in json only. +Subcommand ``dump_json`` shows human-readable version of pg map in json only. -Usage: ceph pg dump_json {all|summary|sum|pools|osds|pgs[all|summary|sum|pools| -osds|pgs...]} +Usage:: -Subcommand **dump_pools_json** shows pg pools info in json only. + ceph pg dump_json {all|summary|sum|pools|osds|pgs[all|summary|sum|pools| + osds|pgs...]} -Usage: ceph pg dump_pools_json +Subcommand ``dump_pools_json`` shows pg pools info in json only. -Subcommand **dump_stuck** shows information about stuck pgs. +Usage:: -Usage: ceph pg dump_stuck {inactive|unclean|stale[inactive|unclean|stale...]} -{} + ceph pg dump_pools_json -Subcommand **force_create_pg** forces creation of pg . +Subcommand ``dump_stuck`` shows information about stuck pgs. -Usage: ceph pg force_create_pg +Usage:: -Subcommand **getmap** gets binary pg map to -o/stdout. + ceph pg dump_stuck {inactive|unclean|stale[inactive|unclean|stale...]} + {} -Usage: ceph pg getmap +Subcommand ``force_create_pg`` forces creation of pg . -Subcommand **map** shows mapping of pg to osds. +Usage:: -Usage: ceph pg map + ceph pg force_create_pg -Subcommand **repair** starts repair on . +Subcommand ``getmap`` gets binary pg map to -o/stdout. -Usage: ceph pg repair +Usage:: -Subcommand **scrub** starts scrub on . + ceph pg getmap -Usage: ceph pg scrub +Subcommand ``map`` shows mapping of pg to osds. -Subcommand **send_pg_creates** triggers pg creates to be issued. +Usage:: -Usage: ceph pg send_pg_creates + ceph pg map -Subcommand **set_full_ratio** sets ratio at which pgs are considered full. +Subcommand ``repair`` starts repair on . -Usage: ceph pg set_full_ratio +Usage:: -Subcommand **set_nearfull_ratio** sets ratio at which pgs are considered nearly + ceph pg repair + +Subcommand ``scrub`` starts scrub on . + +Usage:: + + ceph pg scrub + +Subcommand ``send_pg_creates`` triggers pg creates to be issued. + +Usage:: + + ceph pg send_pg_creates + +Subcommand ``set_full_ratio`` sets ratio at which pgs are considered full. + +Usage:: + + ceph pg set_full_ratio + +Subcommand ``set_nearfull_ratio`` sets ratio at which pgs are considered nearly full. -Usage: ceph pg set_nearfull_ratio +Usage:: + + ceph pg set_nearfull_ratio + +Subcommand ``stat`` shows placement group status. + +Usage:: + + ceph pg stat + + +quorum +------ + +Enter or exit quorum. + +Usage:: + + ceph quorum enter|exit + + +quorum_status +------------- + +Reports status of monitor quorum. + +Usage:: + + ceph quorum_status + + +report +------ + +Reports full status of cluster, optional title tag strings. + +Usage:: + + ceph report { [...]} + + +scrub +----- + +Scrubs the monitor stores. + +Usage:: + + ceph scrub -Subcommand **stat** shows placement group status. -Usage: ceph pg stat +status +------ -**quorum**: Enter or exit quorum. +Shows cluster status. -Usage: ceph quorum enter|exit +Usage:: -**quorum_status**: Reports status of monitor quorum. 
+ ceph status -Usage: ceph quorum_status -**report**: Reports full status of cluster, optional title tag strings. +sync force +---------- -Usage: ceph report { [...]} +Forces sync of and clear monitor store. -**scrub**: Scrubs the monitor stores. +Usage:: -Usage: ceph scrub + ceph sync force {--yes-i-really-mean-it} {--i-know-what-i-am-doing} -**status**: Shows cluster status. -Usage: ceph status +tell +---- -**sync force**: Forces sync of and clear monitor store. +Sends a command to a specific daemon. -Usage: ceph sync force {--yes-i-really-mean-it} {--i-know-what-i-am-doing} +Usage:: -**tell**: Sends a command to a specific daemon. + ceph tell [...] -Usage: ceph tell [...] Options ======= @@ -730,7 +1101,7 @@ Options .. option:: -c ceph.conf, --conf=ceph.conf Use ceph.conf configuration file instead of the default - /etc/ceph/ceph.conf to determine monitor addresses during startup. + ``/etc/ceph/ceph.conf`` to determine monitor addresses during startup. .. option:: --id CLIENT_ID, --user CLIENT_ID @@ -804,8 +1175,8 @@ Options Availability ============ -**ceph** is part of the Ceph distributed storage system. Please refer to the Ceph documentation at -http://ceph.com/docs for more information. +:program:`ceph` is a part of the Ceph distributed storage system. Please refer to +the Ceph documentation at http://ceph.com/docs for more information. See also -- 2.39.5