doc: Changes format style in ceph to improve readability as HTML.
author Nilamdyuti Goswami <ngoswami@redhat.com>
Thu, 18 Dec 2014 11:41:22 +0000 (17:11 +0530)
committer David Zafman <dzafman@redhat.com>
Thu, 12 Mar 2015 17:48:13 +0000 (10:48 -0700)
Signed-off-by: Nilamdyuti Goswami <ngoswami@redhat.com>
(cherry picked from commit 8b796173063ac9af8c21364521fc5ee23d901196)

doc/man/8/ceph.rst

index 2b52df3e68108075b430c722e80d545a20f1cdbd..73cb10f5aa86931d6e55d4f47db061323812b4cc 100644 (file)
@@ -35,7 +35,7 @@ Synopsis
 Description
 ===========
 
-**ceph** is a control utility which is used for manual deployment and maintenance
+:program:`ceph` is a control utility which is used for manual deployment and maintenance
 of a Ceph cluster. It provides a diverse set of commands that allow deployment of
 monitors, OSDs, placement groups, and MDSs, as well as overall maintenance and
 administration of the cluster.
@@ -43,674 +43,1045 @@ of the cluster.
 Commands
 ========
 
-**auth**: Manage authentication keys. It is used for adding, removing, exporting
+auth
+----
+
+Manage authentication keys. It is used for adding, removing, exporting,
 or updating authentication keys for a particular entity such as a monitor or
 OSD. It uses some additional subcommands.
 
-Subcommand **add** adds authentication info for a particular entity from input
+Subcommand ``add`` adds authentication info for a particular entity from an
 input file, or generates a random key if no input is given, and/or applies any
 caps specified in the command.
 
-Usage: ceph auth add <entity> {<caps> [<caps>...]}
+Usage::
+
+       ceph auth add <entity> {<caps> [<caps>...]}
+
+Subcommand ``caps`` updates caps for ``name`` from caps specified in the command.
 
-Subcommand **caps** updates caps for **name** from caps specified in the command.
+Usage::
 
-Usage: ceph auth caps <entity> <caps> [<caps>...]
+       ceph auth caps <entity> <caps> [<caps>...]
 
-Subcommand **del** deletes all caps for **name**.
+Subcommand ``del`` deletes all caps for ``name``.
 
-Usage: ceph auth del <entity>
+Usage::
 
-Subcommand **export** writes keyring for requested entity, or master keyring if
+       ceph auth del <entity>
+
+Subcommand ``export`` writes keyring for requested entity, or master keyring if
 none given.
 
-Usage: ceph auth export {<entity>}
+Usage::
+
+       ceph auth export {<entity>}
+
+Subcommand ``get`` writes keyring file with requested key.
 
-Subcommand **get** writes keyring file with requested key.
+Usage::
 
-Usage: ceph auth get <entity>
+       ceph auth get <entity>
 
-Subcommand **get-key** displays requested key.
+Subcommand ``get-key`` displays requested key.
 
-Usage: ceph auth get-key <entity>
+Usage::
 
-Subcommand **get-or-create** adds authentication info for a particular entity
+       ceph auth get-key <entity>
+
+Subcommand ``get-or-create`` adds authentication info for a particular entity
 from an input file, or generates a random key if no input is given, and/or
 applies any caps specified in the command.
 
-Usage: ceph auth get-or-create <entity> {<caps> [<caps>...]}
+Usage::
+
+       ceph auth get-or-create <entity> {<caps> [<caps>...]}
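+
+For illustration only, a concrete invocation might look like the following (the
+client name, pool, and caps are hypothetical)::
+
+       ceph auth get-or-create client.john mon 'allow r' osd 'allow rw pool=liverpool'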
 
-Subcommand **get-or-create-key** gets or adds key for **name** from system/caps
+Subcommand ``get-or-create-key`` gets or adds key for ``name`` from system/caps
 pairs specified in the command.  If key already exists, any given caps must match
 the existing caps for that key.
 
-Subcommand **import** reads keyring from input file.
+Usage::
+
+       ceph auth get-or-create-key <entity> {<caps> [<caps>...]}
+
+Subcommand ``import`` reads keyring from input file.
+
+Usage::
+
+       ceph auth import
+
+Subcommand ``list`` lists authentication state.
+
+Usage::
+
+       ceph auth list
+
+Subcommand ``print-key`` displays requested key.
+
+Usage::
+
+       ceph auth print-key <entity>
+
+Subcommand ``print_key`` displays requested key.
+
+Usage::
+
+       ceph auth print_key <entity>
+
+
+compact
+-------
+
+Causes compaction of monitor's leveldb storage.
 
-Usage: ceph auth import
+Usage::
 
-Subcommand **list** lists authentication state.
+       ceph compact
 
-Usage: ceph auth list
 
-Subcommand **print-key** displays requested key.
+config-key
+----------
 
-Usage: ceph auth print-key <entity>
+Manage configuration key. It uses some additional subcommands.
 
-Subcommand **print_key** displays requested key.
+Subcommand ``get`` gets the configuration key.
 
-Usage: ceph auth print_key <entity>
+Usage::
 
-**compact**: Causes compaction of monitor's leveldb storage.
+       ceph config-key get <key>
 
-Usage: ceph compact
+Subcommand ``put`` puts configuration key and values.
 
-**config-key**: Manage configuration key. It uses some additional subcommands.
+Usage::
 
-Subcommand **get** gets the configuration key.
+       ceph config-key put <key> {<val>}
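+
+As a hypothetical example, storing and reading back an arbitrary key might
+look like::
+
+       ceph config-key put mykey myvalue
+       ceph config-key get mykey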
 
-Usage: ceph config-key get <key>
+Subcommand ``exists`` checks for a configuration key's existence.
 
-Subcommand **put** puts configuration key and values.
+Usage::
 
-Usage: ceph config-key put <key> {<val>}
+       ceph config-key exists <key>
 
-Subcommand **exists** checks for configuration keys existence.
+Subcommand ``list`` lists configuration keys.
 
-Usage: ceph config-key exists <key>
+Usage::
 
-Subcommand **list** lists configuration keys.
+       ceph config-key list
 
-Usage: ceph config-key list
+Subcommand ``del`` deletes configuration key.
 
-Subcommand **del** deletes configuration key.
+Usage::
 
-Usage: ceph config-key del <key>
+       ceph config-key del <key>
 
-**df**: Show cluster's free space status.
 
-Usage: ceph df
+df
+--
 
-**fsid**: Show cluster's FSID/UUID.
+Show cluster's free space status.
 
-Usage: ceph fsid
+Usage::
 
-**health**: Show cluster's health.
+       ceph df
 
-Usage: ceph health
 
-**heap**: Show heap usage info (available only if compiled with tcmalloc)
+fsid
+----
 
-Usage: ceph heap dump|start_profiler|stop_profiler|release|stats
+Show cluster's FSID/UUID.
 
-**injectargs**: Inject configuration arguments into monitor.
+Usage::
 
-Usage: ceph injectargs <injected_args> [<injected_args>...]
+       ceph fsid
 
-**log**: Log supplied text to the monitor log.
 
-Usage: ceph log <logtext> [<logtext>...]
+health
+------
 
-**mds**: Manage metadata server configuration and administration. It uses some
+Show cluster's health.
+
+Usage::
+
+       ceph health
+
+
+heap
+----
+
+Show heap usage info (available only if compiled with tcmalloc).
+
+Usage::
+
+       ceph heap dump|start_profiler|stop_profiler|release|stats
+
+
+injectargs
+----------
+
+Inject configuration arguments into monitor.
+
+Usage::
+
+       ceph injectargs <injected_args> [<injected_args>...]
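+
+For example, a hypothetical invocation that raises the monitor debug level
+might look like this (the option and value are illustrative only)::
+
+       ceph injectargs '--debug-mon 10/10'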
+
+
+log
+---
+
+Log supplied text to the monitor log.
+
+Usage::
+
+       ceph log <logtext> [<logtext>...]
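+
+For instance, a short illustrative message could be logged with::
+
+       ceph log "starting maintenance on rack A"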
+
+
+mds
+---
+
+Manage metadata server configuration and administration. It uses some
 additional subcommands.
 
-Subcommand **add_data_pool** adds data pool.
+Subcommand ``add_data_pool`` adds data pool.
+
+Usage::
+
+       ceph mds add_data_pool <pool>
+
+Subcommand ``cluster_down`` takes mds cluster down.
 
-Usage: ceph mds add_data_pool <pool>
+Usage::
 
-Subcommand **cluster_down** takes mds cluster down.
+       ceph mds cluster_down
 
-Usage: ceph mds cluster_down
+Subcommand ``cluster_up`` brings mds cluster up.
 
-Subcommand **cluster_up** brings mds cluster up.
+Usage::
 
-Usage: ceph mds cluster_up
+       ceph mds cluster_up
 
-Subcommand **compat** manages compatible features. It uses some additional
+Subcommand ``compat`` manages compatible features. It uses some additional
 subcommands.
 
-Subcommand **rm_compat** removes compatible feature.
+Subcommand ``rm_compat`` removes compatible feature.
 
-Usage: ceph mds compat rm_compat <int[0-]>
+Usage::
 
-Subcommand **rm_incompat** removes incompatible feature.
+       ceph mds compat rm_compat <int[0-]>
 
-Usage: ceph mds compat rm_incompat <int[0-]>
+Subcommand ``rm_incompat`` removes incompatible feature.
 
-Subcommand **show** shows mds compatibility settings.
+Usage::
 
-Usage: ceph mds compat show
+       ceph mds compat rm_incompat <int[0-]>
 
-Subcommand **deactivate** stops mds.
+Subcommand ``show`` shows mds compatibility settings.
 
-Usage: ceph mds deactivate <who>
+Usage::
 
-Subcommand **dump** dumps information, optionally from epoch.
+       ceph mds compat show
 
-Usage: ceph mds dump {<int[0-]>}
+Subcommand ``deactivate`` stops mds.
 
-Subcommand **fail** forces mds to status fail.
+Usage::
 
-Usage: ceph mds fail <who>
+       ceph mds deactivate <who>
 
-Subcommand **getmap** gets MDS map, optionally from epoch.
+Subcommand ``dump`` dumps information, optionally from epoch.
 
-Usage: ceph mds getmap {<int[0-]>}
+Usage::
 
-Subcommand **newfs** makes new filesystem using pools <metadata> and <data>.
+       ceph mds dump {<int[0-]>}
 
-Usage: ceph mds newfs <int[0-]> <int[0-]> {--yes-i-really-mean-it}
+Subcommand ``fail`` forces mds to status fail.
 
-Subcommand **remove_data_pool** removes data pool.
+Usage::
 
-Usage: ceph mds remove_data_pool <pool>
+       ceph mds fail <who>
 
-Subcommand **rm** removes inactive mds.
+Subcommand ``getmap`` gets MDS map, optionally from epoch.
 
-Usage: ceph mds rm <int[0-]> <name> (type.id)>
+Usage::
 
-Subcommand **rmfailed** removes failed mds.
+       ceph mds getmap {<int[0-]>}
 
-Usage: ceph mds rmfailed <int[0-]>
+Subcommand ``newfs`` makes new filesystem using pools <metadata> and <data>.
 
-Subcommand **set_max_mds** sets max MDS index.
+Usage::
 
-Usage: ceph mds set_max_mds <int[0-]>
+       ceph mds newfs <int[0-]> <int[0-]> {--yes-i-really-mean-it}
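+
+As an illustrative sketch, assuming pool id 1 is the metadata pool and pool id
+2 is the data pool (both hypothetical), the invocation would be::
+
+       ceph mds newfs 1 2 --yes-i-really-mean-it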
 
-Subcommand **set_state** sets mds state of <gid> to <numeric-state>.
+Subcommand ``remove_data_pool`` removes data pool.
 
-Usage: ceph mds set_state <int[0-]> <int[0-20]>
+Usage::
 
-Subcommand **setmap** sets mds map; must supply correct epoch number.
+       ceph mds remove_data_pool <pool>
 
-Usage: ceph mds setmap <int[0-]>
+Subcommand ``rm`` removes inactive mds.
 
-Subcommand **stat** shows MDS status.
+Usage::
 
-Usage: ceph mds stat
+       ceph mds rm <int[0-]> <name (type.id)>
 
-Subcommand **stop** stops mds.
+Subcommand ``rmfailed`` removes failed mds.
 
-Usage: ceph mds stop <who>
+Usage::
 
-Subcommand **tell** sends command to particular mds.
+       ceph mds rmfailed <int[0-]>
 
-Usage: ceph mds tell <who> <args> [<args>...]
+Subcommand ``set_max_mds`` sets max MDS index.
 
-**mon**: Manage monitor configuration and administration. It uses some
-additional subcommands.
+Usage::
+
+       ceph mds set_max_mds <int[0-]>
+
+Subcommand ``set_state`` sets mds state of <gid> to <numeric-state>.
+
+Usage::
+
+       ceph mds set_state <int[0-]> <int[0-20]>
+
+Subcommand ``setmap`` sets mds map; must supply correct epoch number.
+
+Usage::
+
+       ceph mds setmap <int[0-]>
+
+Subcommand ``stat`` shows MDS status.
+
+Usage::
+
+       ceph mds stat
+
+Subcommand ``stop`` stops mds.
+
+Usage::
+
+       ceph mds stop <who>
+
+Subcommand ``tell`` sends command to particular mds.
+
+Usage::
+
+       ceph mds tell <who> <args> [<args>...]
+
+mon
+---
+
+Manage monitor configuration and administration. It uses some additional
+subcommands.
+
+Subcommand ``add`` adds new monitor named <name> at <addr>.
+
+Usage::
+
+       ceph mon add <name> <IPaddr[:port]>
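+
+For example, adding a hypothetical monitor named ``c`` at a made-up address::
+
+       ceph mon add c 192.168.0.3:6789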
 
-Subcommand **add** adds new monitor named <name> at <addr>.
+Subcommand ``dump`` dumps formatted monmap (optionally from epoch).
 
-Usage: ceph mon add <name> <IPaddr[:port]>
+Usage::
 
-Subcommand **dump** dumps formatted monmap (optionally from epoch)
+       ceph mon dump {<int[0-]>}
 
-Usage: ceph mon dump {<int[0-]>}
+Subcommand ``getmap`` gets monmap.
 
-Subcommand **getmap** gets monmap.
+Usage::
 
-Usage: ceph mon getmap {<int[0-]>}
+       ceph mon getmap {<int[0-]>}
 
-Subcommand **remove** removes monitor named <name>.
+Subcommand ``remove`` removes monitor named <name>.
 
-Usage: ceph mon remove <name>
+Usage::
 
-Subcommand **stat** summarizes monitor status.
+       ceph mon remove <name>
 
-Usage: ceph mon stat
+Subcommand ``stat`` summarizes monitor status.
 
-Subcommand **mon_status** reports status of monitors.
+Usage::
 
-Usage: ceph mon_status
+       ceph mon stat
 
-**osd**: Manage OSD configuration and administration. It uses some additional
+Subcommand ``mon_status`` reports status of monitors.
+
+Usage::
+
+       ceph mon_status
+
+osd
+---
+
+Manage OSD configuration and administration. It uses some additional
 subcommands.
 
-Subcommand **create** creates new osd (with optional UUID).
+Subcommand ``create`` creates new osd (with optional UUID).
+
+Usage::
 
-Usage: ceph osd create {<uuid>}
+       ceph osd create {<uuid>}
 
-Subcommand **crush** is used for CRUSH management. It uses some additional
+Subcommand ``crush`` is used for CRUSH management. It uses some additional
 subcommands.
 
-Subcommand **add** adds or updates crushmap position and weight for <name> with
+Subcommand ``add`` adds or updates crushmap position and weight for <name> with
 <weight> and location <args>.
 
-Usage: ceph osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]
+Usage::
+
+       ceph osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]
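+
+As an illustration, placing a hypothetical ``osd.5`` with weight 1.0 under a
+made-up host and the default root might look like::
+
+       ceph osd crush add osd.5 1.0 host=node1 root=default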
 
-Subcommand **add-bucket** adds no-parent (probably root) crush bucket <name> of
+Subcommand ``add-bucket`` adds no-parent (probably root) crush bucket <name> of
 type <type>.
 
-Usage: ceph osd crush add-bucket <name> <type>
+Usage::
 
-Subcommand **create-or-move** creates entry or moves existing entry for <name>
+       ceph osd crush add-bucket <name> <type>
+
+Subcommand ``create-or-move`` creates entry or moves existing entry for <name>
 <weight> at/to location <args>.
 
-Usage: ceph osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args>
+Usage::
+
+       ceph osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args>
+       [<args>...]
 
-Subcommand **dump** dumps crush map.
+Subcommand ``dump`` dumps crush map.
+
+Usage::
 
-Usage: ceph osd crush dump
+       ceph osd crush dump
 
-Subcommand **link** links existing entry for <name> under location <args>.
+Subcommand ``link`` links existing entry for <name> under location <args>.
 
-Usage: ceph osd crush link <name> <args> [<args>...]
+Usage::
 
-Subcommand **move** moves existing entry for <name> to location <args>.
+       ceph osd crush link <name> <args> [<args>...]
 
-Usage: ceph osd crush move <name> <args> [<args>...]
+Subcommand ``move`` moves existing entry for <name> to location <args>.
 
-Subcommand **remove** removes <name> from crush map (everywhere, or just at
+Usage::
+
+       ceph osd crush move <name> <args> [<args>...]
+
+Subcommand ``remove`` removes <name> from crush map (everywhere, or just at
 <ancestor>).
 
-Usage: ceph osd crush remove <name> {<ancestor>}
+Usage::
+
+       ceph osd crush remove <name> {<ancestor>}
 
-Subcommand **reweight** change <name>'s weight to <weight> in crush map.
+Subcommand ``reweight`` changes <name>'s weight to <weight> in crush map.
 
-Usage: ceph osd crush reweight <name> <float[0.0-]>
+Usage::
 
-Subcommand **rm** removes <name> from crush map (everywhere, or just at
+       ceph osd crush reweight <name> <float[0.0-]>
+
+Subcommand ``rm`` removes <name> from crush map (everywhere, or just at
 <ancestor>).
 
-Usage: ceph osd crush rm <name> {<ancestor>}
+Usage::
+
+       ceph osd crush rm <name> {<ancestor>}
 
-Subcommand **rule** is used for creating crush rules. It uses some additional
+Subcommand ``rule`` is used for creating crush rules. It uses some additional
 subcommands.
 
-Subcommand **create-erasure** creates crush rule <name> for erasure coded pool
+Subcommand ``create-erasure`` creates crush rule <name> for erasure coded pool
 created with <profile> (default: default).
 
-Usage: ceph osd crush rule create-erasure <name> {<profile>}
+Usage::
+
+       ceph osd crush rule create-erasure <name> {<profile>}
 
-Subcommand **create-simple** creates crush rule <name> to start from <root>,
+Subcommand ``create-simple`` creates crush rule <name> to start from <root>,
 replicate across buckets of type <type>, using a choose mode of <firstn|indep>
 (default firstn; indep best for erasure pools).
 
-Usage: ceph osd crush rule create-simple <name> <root> <type> {firstn|indep}
+Usage::
+
+       ceph osd crush rule create-simple <name> <root> <type> {firstn|indep}
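+
+For example, a hypothetical rule that starts from the ``default`` root and
+replicates across hosts could be created with::
+
+       ceph osd crush rule create-simple replicated_host default host firstn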
+
+Subcommand ``dump`` dumps crush rule <name> (default all).
+
+Usage::
 
-Subcommand **dump** dumps crush rule <name> (default all).
+       ceph osd crush rule dump {<name>}
 
-Usage: ceph osd crush rule dump {<name>}
+Subcommand ``list`` lists crush rules.
 
-Subcommand **list** lists crush rules.
+Usage::
 
-Usage: ceph osd crush rule list
+       ceph osd crush rule list
 
-Subcommand **ls** lists crush rules.
+Subcommand ``ls`` lists crush rules.
 
-Usage: ceph osd crush rule ls
+Usage::
 
-Subcommand **rm** removes crush rule <name>.
+       ceph osd crush rule ls
 
-Usage: ceph osd crush rule rm <name>
+Subcommand ``rm`` removes crush rule <name>.
 
-Subcommand **set** sets crush map from input file.
+Usage::
 
-Usage: ceph osd crush set
+       ceph osd crush rule rm <name>
 
-Subcommand **set** with osdname/osd.id update crushmap position and weight
+Subcommand ``set`` sets crush map from input file.
+
+Usage::
+
+       ceph osd crush set
+
+Subcommand ``set`` with osdname/osd.id updates crushmap position and weight
 for <name> to <weight> with location <args>.
 
-Usage: ceph osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]
+Usage::
+
+       ceph osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]
+
+Subcommand ``show-tunables`` shows current crush tunables.
 
-Subcommand **show-tunables** shows current crush tunables.
+Usage::
 
-Usage: ceph osd crush show-tunables
+       ceph osd crush show-tunables
 
-Subcommand **tunables** sets crush tunables values to <profile>.
+Subcommand ``tunables`` sets crush tunables values to <profile>.
 
-Usage: ceph osd crush tunables legacy|argonaut|bobtail|firefly|optimal|default
+Usage::
 
-Subcommand **unlink** unlinks <name> from crush map (everywhere, or just at
+       ceph osd crush tunables legacy|argonaut|bobtail|firefly|optimal|default
+
+Subcommand ``unlink`` unlinks <name> from crush map (everywhere, or just at
 <ancestor>).
 
-Usage: ceph osd crush unlink <name> {<ancestor>}
+Usage::
+
+       ceph osd crush unlink <name> {<ancestor>}
+
+Subcommand ``deep-scrub`` initiates deep scrub on specified osd.
 
-Subcommand **deep-scrub** initiates deep scrub on specified osd.
+Usage::
 
-Usage: ceph osd deep-scrub <who>
+       ceph osd deep-scrub <who>
 
-Subcommand **down** sets osd(s) <id> [<id>...] down.
+Subcommand ``down`` sets osd(s) <id> [<id>...] down.
 
-Usage: ceph osd down <ids> [<ids>...]
+Usage::
 
-Subcommand **dump** prints summary of OSD map.
+       ceph osd down <ids> [<ids>...]
 
-Usage: ceph osd dump {<int[0-]>}
+Subcommand ``dump`` prints summary of OSD map.
 
-Subcommand **erasure-code-profile** is used for managing the erasure code
+Usage::
+
+       ceph osd dump {<int[0-]>}
+
+Subcommand ``erasure-code-profile`` is used for managing the erasure code
 profiles. It uses some additional subcommands.
 
-Subcommand **get** gets erasure code profile <name>.
+Subcommand ``get`` gets erasure code profile <name>.
+
+Usage::
 
-Usage: ceph osd erasure-code-profile get <name>
+       ceph osd erasure-code-profile get <name>
 
-Subcommand **ls** lists all erasure code profiles.
+Subcommand ``ls`` lists all erasure code profiles.
 
-Usage: ceph osd erasure-code-profile ls
+Usage::
 
-Subcommand **rm** removes erasure code profile <name>.
+       ceph osd erasure-code-profile ls
 
-Usage: ceph osd erasure-code-profile rm <name>
+Subcommand ``rm`` removes erasure code profile <name>.
 
-Subcommand **set** creates erasure code profile <name> with [<key[=value]> ...]
+Usage::
+
+       ceph osd erasure-code-profile rm <name>
+
+Subcommand ``set`` creates erasure code profile <name> with [<key[=value]> ...]
 pairs. Add a --force at the end to override an existing profile (IT IS RISKY).
 
-Usage: ceph osd erasure-code-profile set <name> {<profile> [<profile>...]}
+Usage::
+
+       ceph osd erasure-code-profile set <name> {<profile> [<profile>...]}
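+
+As an illustrative sketch (the profile name and key/value pairs below are
+hypothetical)::
+
+       ceph osd erasure-code-profile set myprofile k=3 m=2 ruleset-failure-domain=rack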
+
+Subcommand ``find`` finds osd <id> in the CRUSH map and shows its location.
+
+Usage::
+
+       ceph osd find <int[0-]>
 
-Subcommand **find** find osd <id> in the CRUSH map and shows its location.
+Subcommand ``getcrushmap`` gets CRUSH map.
 
-Usage: ceph osd find <int[0-]>
+Usage::
 
-Subcommand **getcrushmap** gets CRUSH map.
+       ceph osd getcrushmap {<int[0-]>}
 
-Usage: ceph osd getcrushmap {<int[0-]>}
+Subcommand ``getmap`` gets OSD map.
 
-Subcommand **getmap** gets OSD map.
+Usage::
 
-Usage: ceph osd getmap {<int[0-]>}
+       ceph osd getmap {<int[0-]>}
 
-Subcommand **getmaxosd** shows largest OSD id.
+Subcommand ``getmaxosd`` shows largest OSD id.
 
-Usage: ceph osd getmaxosd
+Usage::
 
-Subcommand **in** sets osd(s) <id> [<id>...] in.
+       ceph osd getmaxosd
 
-Usage: ceph osd in <ids> [<ids>...]
+Subcommand ``in`` sets osd(s) <id> [<id>...] in.
 
-Subcommand **lost** marks osd as permanently lost. THIS DESTROYS DATA IF NO
+Usage::
+
+       ceph osd in <ids> [<ids>...]
+
+Subcommand ``lost`` marks osd as permanently lost. THIS DESTROYS DATA IF NO
 MORE REPLICAS EXIST, BE CAREFUL.
 
-Usage: ceph osd lost <int[0-]> {--yes-i-really-mean-it}
+Usage::
+
+       ceph osd lost <int[0-]> {--yes-i-really-mean-it}
+
+Subcommand ``ls`` shows all OSD ids.
+
+Usage::
+
+       ceph osd ls {<int[0-]>}
 
-Subcommand **ls** shows all OSD ids.
+Subcommand ``lspools`` lists pools.
 
-Usage: ceph osd ls {<int[0-]>}
+Usage::
 
-Subcommand **lspools** lists pools.
+       ceph osd lspools {<int>}
 
-Usage: ceph osd lspools {<int>}
+Subcommand ``map`` finds pg for <object> in <pool>.
 
-Subcommand **map** finds pg for <object> in <pool>.
+Usage::
 
-Usage: ceph osd map <poolname> <objectname>
+       ceph osd map <poolname> <objectname>
 
-Subcommand **metadata** fetches metadata for osd <id>.
+Subcommand ``metadata`` fetches metadata for osd <id>.
 
-Usage: ceph osd metadata <int[0-]>
+Usage::
 
-Subcommand **out** sets osd(s) <id> [<id>...] out.
+       ceph osd metadata <int[0-]>
 
-Usage: ceph osd out <ids> [<ids>...]
+Subcommand ``out`` sets osd(s) <id> [<id>...] out.
 
-Subcommand **pause** pauses osd.
+Usage::
 
-Usage: ceph osd pause
+       ceph osd out <ids> [<ids>...]
 
-Subcommand **perf** prints dump of OSD perf summary stats.
+Subcommand ``pause`` pauses osd.
 
-Usage: ceph osd perf
+Usage::
 
-Subcommand **pg-temp** set pg_temp mapping pgid:[<id> [<id>...]] (developers
+       ceph osd pause
+
+Subcommand ``perf`` prints dump of OSD perf summary stats.
+
+Usage::
+
+       ceph osd perf
+
+Subcommand ``pg-temp`` sets pg_temp mapping pgid:[<id> [<id>...]] (developers
 only).
 
-Usage: ceph osd pg-temp <pgid> {<id> [<id>...]}
+Usage::
+
+       ceph osd pg-temp <pgid> {<id> [<id>...]}
 
-Subcommand **pool** is used for managing data pools. It uses some additional
+Subcommand ``pool`` is used for managing data pools. It uses some additional
 subcommands.
 
-Subcommand **create** creates pool.
+Subcommand ``create`` creates pool.
+
+Usage::
 
-Usage: ceph osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure}
-{<erasure_code_profile>} {<ruleset>}
+       ceph osd pool create <poolname> <int[0-]> {<int[0-]>} {replicated|erasure}
+       {<erasure_code_profile>} {<ruleset>}
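+
+For example, creating a hypothetical replicated pool with 128 placement groups
+might look like::
+
+       ceph osd pool create mypool 128 128 replicated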
 
-Subcommand **delete** deletes pool.
+Subcommand ``delete`` deletes pool.
 
-Usage: ceph osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it}
+Usage::
 
-Subcommand **get** gets pool parameter <var>.
+       ceph osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it}
 
-Usage: ceph osd pool get <poolname> size|min_size|crash_replay_interval|pg_num|
-pgp_num|crush_ruleset|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|
+Subcommand ``get`` gets pool parameter <var>.
 
-ceph osd pool get <poolname> auid|target_max_objects|target_max_bytes
+Usage::
 
-ceph osd pool get <poolname> cache_target_dirty_ratio|cache_target_full_ratio
+       ceph osd pool get <poolname> size|min_size|crash_replay_interval|pg_num|
+       pgp_num|crush_ruleset|hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|
 
-ceph osd pool get <poolname> cache_min_flush_age|cache_min_evict_age|
-erasure_code_profile
+       ceph osd pool get <poolname> auid|target_max_objects|target_max_bytes
 
-Subcommand **get-quota** obtains object or byte limits for pool.
+       ceph osd pool get <poolname> cache_target_dirty_ratio|cache_target_full_ratio
 
-Usage: ceph osd pool get-quota <poolname>
+       ceph osd pool get <poolname> cache_min_flush_age|cache_min_evict_age|
+       erasure_code_profile
 
-Subcommand **mksnap** makes snapshot <snap> in <pool>.
+Subcommand ``get-quota`` obtains object or byte limits for pool.
 
-Usage: ceph osd pool mksnap <poolname> <snap>
+Usage::
 
-Subcommand **rename** renames <srcpool> to <destpool>.
+       ceph osd pool get-quota <poolname>
 
-Usage: ceph osd pool rename <poolname> <poolname>
+Subcommand ``mksnap`` makes snapshot <snap> in <pool>.
 
-Subcommand **rmsnap** removes snapshot <snap> from <pool>.
+Usage::
 
-Usage: ceph osd pool rmsnap <poolname> <snap>
+       ceph osd pool mksnap <poolname> <snap>
 
-Subcommand **set** sets pool parameter <var> to <val>.
+Subcommand ``rename`` renames <srcpool> to <destpool>.
 
-Usage: ceph osd pool set <poolname> size|min_size|crash_replay_interval|pg_num|
-pgp_num|crush_ruleset|hashpspool|hit_set_type|hit_set_period|
+Usage::
 
-ceph osd pool set <poolname> hit_set_count|hit_set_fpp|debug_fake_ec_pool
+       ceph osd pool rename <poolname> <poolname>
 
-ceph osd pool set <poolname> target_max_bytes|target_max_objects
+Subcommand ``rmsnap`` removes snapshot <snap> from <pool>.
 
-ceph osd pool set <poolname> cache_target_dirty_ratio|cache_target_full_ratio
+Usage::
 
-ceph osd pool set <poolname> cache_min_flush_age
+       ceph osd pool rmsnap <poolname> <snap>
 
-ceph osd pool set <poolname> cache_min_evict_age|auid <val> {--yes-i-really-mean-it}
+Subcommand ``set`` sets pool parameter <var> to <val>.
 
-Subcommand **set-quota** sets object or byte limit on pool.
+Usage::
 
-Usage: ceph osd pool set-quota <poolname> max_objects|max_bytes <val>
+       ceph osd pool set <poolname> size|min_size|crash_replay_interval|pg_num|
+       pgp_num|crush_ruleset|hashpspool|hit_set_type|hit_set_period|
 
-Subcommand **stats** obtain stats from all pools, or from specified pool.
+       ceph osd pool set <poolname> hit_set_count|hit_set_fpp|debug_fake_ec_pool
 
-Usage: ceph osd pool stats {<name>}
+       ceph osd pool set <poolname> target_max_bytes|target_max_objects
 
-Subcommand **primary-affinity** adjust osd primary-affinity from 0.0 <=<weight>
+       ceph osd pool set <poolname> cache_target_dirty_ratio|cache_target_full_ratio
+
+       ceph osd pool set <poolname> cache_min_flush_age
+
+       ceph osd pool set <poolname> cache_min_evict_age|auid <val>
+       {--yes-i-really-mean-it}
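+
+As an illustration, setting the replica count of a hypothetical pool to 3::
+
+       ceph osd pool set mypool size 3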
+
+Subcommand ``set-quota`` sets object or byte limit on pool.
+
+Usage::
+
+       ceph osd pool set-quota <poolname> max_objects|max_bytes <val>
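+
+For example, capping a hypothetical pool at 10 GiB::
+
+       ceph osd pool set-quota mypool max_bytes 10737418240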
+
+Subcommand ``stats`` obtains stats from all pools, or from a specified pool.
+
+Usage::
+
+       ceph osd pool stats {<name>}
+
+Subcommand ``primary-affinity`` adjusts osd primary-affinity from 0.0 <= <weight>
 <= 1.0
 
-Usage: ceph osd primary-affinity <osdname (id|osd.id)> <float[0.0-1.0]>
+Usage::
 
-Subcommand **primary-temp** sets primary_temp mapping pgid:<id>|-1 (developers
+       ceph osd primary-affinity <osdname (id|osd.id)> <float[0.0-1.0]>
+
+Subcommand ``primary-temp`` sets primary_temp mapping pgid:<id>|-1 (developers
 only).
 
-Usage: ceph osd primary-temp <pgid> <id>
+Usage::
+
+       ceph osd primary-temp <pgid> <id>
+
+Subcommand ``repair`` initiates repair on a specified osd.
 
-Subcommand **repair** initiates repair on a specified osd.
+Usage::
 
-Usage: ceph osd repair <who>
+       ceph osd repair <who>
 
-Subcommand **reweight** reweights osd to 0.0 < <weight> < 1.0.
+Subcommand ``reweight`` reweights osd to 0.0 < <weight> < 1.0.
 
-Usage: osd reweight <int[0-]> <float[0.0-1.0]>
+Usage::
 
-Subcommand **reweight-by-utilization** reweight OSDs by utilization
+       ceph osd reweight <int[0-]> <float[0.0-1.0]>
+
+Subcommand ``reweight-by-utilization`` reweights OSDs by utilization
 [overload-percentage-for-consideration, default 120].
 
-Usage: ceph osd reweight-by-utilization {<int[100-]>}
+Usage::
+
+       ceph osd reweight-by-utilization {<int[100-]>}
+
+Subcommand ``rm`` removes osd(s) <id> [<id>...] in the cluster.
+
+Usage::
+
+       ceph osd rm <ids> [<ids>...]
+
+Subcommand ``scrub`` initiates scrub on specified osd.
 
-Subcommand **rm** removes osd(s) <id> [<id>...] in the cluster.
+Usage::
 
-Usage: ceph osd rm <ids> [<ids>...]
+       ceph osd scrub <who>
 
-Subcommand **scrub** initiates scrub on specified osd.
+Subcommand ``set`` sets <key>.
 
-Usage: ceph osd scrub <who>
+Usage::
 
-Subcommand **set** sets <key>.
+       ceph osd set pause|noup|nodown|noout|noin|nobackfill|norecover|noscrub|
+       nodeep-scrub|notieragent
 
-Usage: ceph osd set pause|noup|nodown|noout|noin|nobackfill|norecover|noscrub|
-nodeep-scrub|notieragent
+Subcommand ``setcrushmap`` sets crush map from input file.
 
-Subcommand **setcrushmap** sets crush map from input file.
+Usage::
 
-Usage: ceph osd setcrushmap
+       ceph osd setcrushmap
 
-Subcommand **setmaxosd** sets new maximum osd value.
+Subcommand ``setmaxosd`` sets new maximum osd value.
 
-Usage: ceph osd setmaxosd <int[0-]>
+Usage::
 
-Subcommand **stat** prints summary of OSD map.
+       ceph osd setmaxosd <int[0-]>
 
-Usage: ceph osd stat
+Subcommand ``stat`` prints summary of OSD map.
 
-Subcommand **thrash** thrashes OSDs for <num_epochs>.
+Usage::
 
-Usage: ceph osd thrash <int[0-]>
+       ceph osd stat
 
-Subcommand **tier** is used for managing tiers. It uses some additional
+Subcommand ``thrash`` thrashes OSDs for <num_epochs>.
+
+Usage::
+
+       ceph osd thrash <int[0-]>
+
+Subcommand ``tier`` is used for managing tiers. It uses some additional
 subcommands.
 
-Subcommand **add** adds the tier <tierpool> (the second one) to base pool <pool>
+Subcommand ``add`` adds the tier <tierpool> (the second one) to base pool <pool>
 (the first one).
 
-Usage: ceph osd tier add <poolname> <poolname> {--force-nonempty}
+Usage::
 
-Subcommand **add-cache** adds a cache <tierpool> (the second one) of size <size>
+       ceph osd tier add <poolname> <poolname> {--force-nonempty}
+
+Subcommand ``add-cache`` adds a cache <tierpool> (the second one) of size <size>
 to existing pool <pool> (the first one).
 
-Usage: ceph osd tier add-cache <poolname> <poolname> <int[0-]>
+Usage::
+
+       ceph osd tier add-cache <poolname> <poolname> <int[0-]>
 
-Subcommand **cache-mode** specifies the caching mode for cache tier <pool>.
+Subcommand ``cache-mode`` specifies the caching mode for cache tier <pool>.
 
-Usage: ceph osd tier cache-mode <poolname> none|writeback|forward|readonly
+Usage::
 
-Subcommand **remove** removes the tier <tierpool> (the second one) from base pool
+       ceph osd tier cache-mode <poolname> none|writeback|forward|readonly
+
+Subcommand ``remove`` removes the tier <tierpool> (the second one) from base pool
 <pool> (the first one).
 
-Usage: ceph osd tier remove <poolname> <poolname>
+Usage::
+
+       ceph osd tier remove <poolname> <poolname>
+
+Subcommand ``remove-overlay`` removes the overlay pool for base pool <pool>.
 
-Subcommand **remove-overlay** removes the overlay pool for base pool <pool>.
+Usage::
 
-Usage: ceph osd tier remove-overlay <poolname>
+       ceph osd tier remove-overlay <poolname>
 
-Subcommand **set-overlay** set the overlay pool for base pool <pool> to be
+Subcommand ``set-overlay`` sets the overlay pool for base pool <pool> to be
 <overlaypool>.
 
-Usage: ceph osd tier set-overlay <poolname> <poolname>
+Usage::
+
+       ceph osd tier set-overlay <poolname> <poolname>
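+
+As an illustrative sketch, a writeback cache tier for a hypothetical base pool
+could be wired up with the tier subcommands above::
+
+       ceph osd tier add cold-storage hot-storage
+       ceph osd tier cache-mode hot-storage writeback
+       ceph osd tier set-overlay cold-storage hot-storage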
+
+Subcommand ``tree`` prints OSD tree.
+
+Usage::
 
-Subcommand **tree** prints OSD tree.
+       ceph osd tree {<int[0-]>}
 
-Usage: ceph osd tree {<int[0-]>}
+Subcommand ``unpause`` unpauses osd.
 
-Subcommand **unpause** unpauses osd.
+Usage::
 
-Usage: ceph osd unpause
+       ceph osd unpause
 
-Subcommand **unset** unsets <key>.
+Subcommand ``unset`` unsets <key>.
 
-Usage: osd unset pause|noup|nodown|noout|noin|nobackfill|norecover|noscrub|
-nodeep-scrub|notieragent
+Usage::
 
-**pg**: It is used for managing the placement groups in OSDs. It uses some
+       ceph osd unset pause|noup|nodown|noout|noin|nobackfill|norecover|noscrub|
+       nodeep-scrub|notieragent
+
+
+pg
+--
+
+Manage the placement groups in OSDs. It uses some
 additional subcommands.
 
-Subcommand **debug** shows debug info about pgs.
+Subcommand ``debug`` shows debug info about pgs.
+
+Usage::
+
+       ceph pg debug unfound_objects_exist|degraded_pgs_exist
+
+Subcommand ``deep-scrub`` starts deep-scrub on <pgid>.
 
-Usage: ceph pg debug unfound_objects_exist|degraded_pgs_exist
+Usage::
 
-Subcommand **deep-scrub** starts deep-scrub on <pgid>.
+       ceph pg deep-scrub <pgid>
 
-Usage: ceph pg deep-scrub <pgid>
+Subcommand ``dump`` shows human-readable versions of pg map (only 'all' valid
+with plain).
 
-Subcommand **dump** shows human-readable versions of pg map (only 'all' valid with
-plain).
+Usage::
 
-Usage: ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief}
+       ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief}
 
-ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief...}
+       ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief...}
 
-Subcommand **dump_json** shows human-readable version of pg map in json only.
+Subcommand ``dump_json`` shows human-readable version of pg map in json only.
 
-Usage: ceph pg dump_json {all|summary|sum|pools|osds|pgs[all|summary|sum|pools|
-osds|pgs...]}
+Usage::
 
-Subcommand **dump_pools_json** shows pg pools info in json only.
+       ceph pg dump_json {all|summary|sum|pools|osds|pgs[all|summary|sum|pools|
+       osds|pgs...]}
 
-Usage: ceph pg dump_pools_json
+Subcommand ``dump_pools_json`` shows pg pools info in json only.
 
-Subcommand **dump_stuck** shows information about stuck pgs.
+Usage::
 
-Usage: ceph pg dump_stuck {inactive|unclean|stale[inactive|unclean|stale...]}
-{<int>}
+       ceph pg dump_pools_json
 
-Subcommand **force_create_pg** forces creation of pg <pgid>.
+Subcommand ``dump_stuck`` shows information about stuck pgs.
 
-Usage: ceph pg force_create_pg <pgid>
+Usage::
 
-Subcommand **getmap** gets binary pg map to -o/stdout.
+       ceph pg dump_stuck {inactive|unclean|stale[inactive|unclean|stale...]}
+       {<int>}
 
-Usage: ceph pg getmap
+Subcommand ``force_create_pg`` forces creation of pg <pgid>.
 
-Subcommand **map** shows mapping of pg to osds.
+Usage::
 
-Usage: ceph pg map <pgid>
+       ceph pg force_create_pg <pgid>
 
-Subcommand **repair** starts repair on <pgid>.
+Subcommand ``getmap`` gets binary pg map to -o/stdout.
 
-Usage: ceph pg repair <pgid>
+Usage::
 
-Subcommand **scrub** starts scrub on <pgid>.
+       ceph pg getmap
 
-Usage: ceph pg scrub <pgid>
+Subcommand ``map`` shows mapping of pg to osds.
 
-Subcommand **send_pg_creates** triggers pg creates to be issued.
+Usage::
 
-Usage: ceph pg send_pg_creates
+       ceph pg map <pgid>
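+
+For example, looking up a hypothetical placement group::
+
+       ceph pg map 1.6c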
 
-Subcommand **set_full_ratio** sets ratio at which pgs are considered full.
+Subcommand ``repair`` starts repair on <pgid>.
 
-Usage: ceph pg set_full_ratio <float[0.0-1.0]>
+Usage::
 
-Subcommand **set_nearfull_ratio** sets ratio at which pgs are considered nearly
+       ceph pg repair <pgid>
+
+Subcommand ``scrub`` starts scrub on <pgid>.
+
+Usage::
+
+       ceph pg scrub <pgid>
+
+Subcommand ``send_pg_creates`` triggers pg creates to be issued.
+
+Usage::
+
+       ceph pg send_pg_creates
+
+Subcommand ``set_full_ratio`` sets ratio at which pgs are considered full.
+
+Usage::
+
+       ceph pg set_full_ratio <float[0.0-1.0]>
+
+Subcommand ``set_nearfull_ratio`` sets ratio at which pgs are considered nearly
 full.
 
-Usage: ceph pg set_nearfull_ratio <float[0.0-1.0]>
+Usage::
+
+       ceph pg set_nearfull_ratio <float[0.0-1.0]>
+
+Subcommand ``stat`` shows placement group status.
+
+Usage::
+
+       ceph pg stat
+
+
+quorum
+------
+
+Enter or exit quorum.
+
+Usage::
+
+       ceph quorum enter|exit
+
+
+quorum_status
+-------------
+
+Reports status of monitor quorum.
+
+Usage::
+
+       ceph quorum_status
+
+
+report
+------
+
+Reports full status of the cluster; optional title tag strings may be supplied.
+
+Usage::
+
+       ceph report {<tags> [<tags>...]}
+
+
+scrub
+-----
+
+Scrubs the monitor stores.
+
+Usage::
+
+       ceph scrub
 
-Subcommand **stat** shows placement group status.
 
-Usage: ceph pg stat
+status
+------
 
-**quorum**: Enter or exit quorum.
+Shows cluster status.
 
-Usage: ceph quorum enter|exit
+Usage::
 
-**quorum_status**: Reports status of monitor quorum.
+       ceph status
 
-Usage: ceph quorum_status
 
-**report**: Reports full status of cluster, optional title tag strings.
+sync force
+----------
 
-Usage: ceph report {<tags> [<tags>...]}
+Forces a sync of and clears the monitor store.
 
-**scrub**: Scrubs the monitor stores.
+Usage::
 
-Usage: ceph scrub
+       ceph sync force {--yes-i-really-mean-it} {--i-know-what-i-am-doing}
 
-**status**: Shows cluster status.
 
-Usage: ceph status
+tell
+----
 
-**sync force**: Forces sync of and clear monitor store.
+Sends a command to a specific daemon.
 
-Usage: ceph sync force {--yes-i-really-mean-it} {--i-know-what-i-am-doing}
+Usage::
 
-**tell**: Sends a command to a specific daemon.
+       ceph tell <name (type.id)> <args> [<args>...]
 
-Usage: ceph tell <name (type.id)> <args> [<args>...]
 
 Options
 =======
@@ -730,7 +1101,7 @@ Options
 .. option:: -c ceph.conf, --conf=ceph.conf
 
    Use ceph.conf configuration file instead of the default
-   /etc/ceph/ceph.conf to determine monitor addresses during startup.
+   ``/etc/ceph/ceph.conf`` to determine monitor addresses during startup.
 
 .. option:: --id CLIENT_ID, --user CLIENT_ID
 
@@ -804,8 +1175,8 @@ Options
 Availability
 ============
 
-**ceph** is part of the Ceph distributed storage system. Please refer to the Ceph documentation at
-http://ceph.com/docs for more information.
+:program:`ceph` is part of the Ceph distributed storage system. Please refer to
+the Ceph documentation at http://ceph.com/docs for more information.
 
 
 See also