| **ceph** **auth** [ *add* \| *caps* \| *del* \| *export* \| *get* \| *get-key* \| *get-or-create* \| *get-or-create-key* \| *import* \| *list* \| *print-key* \| *print_key* ] ...
| **ceph** **config** [ *dump* | *ls* | *help* | *get* | *show* | *show-with-defaults* | *set* | *rm* | *log* | *reset* | *assimilate-conf* | *generate-minimal-conf* ] ...
| **ceph** **config-key** [ *rm* | *exists* | *get* | *ls* | *dump* | *set* ] ...
Description
===========
:program:`ceph` is a control utility which is used for manual deployment and maintenance
of a Ceph cluster. It provides a diverse set of commands that allow deployment of
Monitors, OSDs, placement groups, MDS and overall maintenance, administration
of the cluster.
Commands
========

auth
----
Manage authentication keys. It is used for adding, removing, exporting
or updating of authentication keys for a particular entity such as a Monitor or
OSD. It accepts the following additional subcommands.
Subcommand ``add`` adds authentication info for a particular entity from an input
file, or generates a random key if no input is given, along with any caps specified in the command.
Usage::
    ceph auth add <entity> {<caps> [<caps>...]}
Subcommand ``caps`` updates caps for ``name`` from caps specified in the command.
Usage::
    ceph auth caps <entity> <caps> [<caps>...]
Subcommand ``del`` deletes all caps for ``name``.
Usage::
    ceph auth del <entity>
Subcommand ``export`` writes the keyring for the requested entity, or the master
keyring if none is given.
Usage::
    ceph auth export {<entity>}
Subcommand ``get`` writes keyring file with requested key.
Usage::
    ceph auth get <entity>
Subcommand ``get-key`` displays requested key.
Usage::
    ceph auth get-key <entity>
Subcommand ``get-or-create`` adds authentication info for a particular entity
from an input file, or generates a random key if no input is given, along with any
caps specified in the command.
Usage::
    ceph auth get-or-create <entity> {<caps> [<caps>...]}
Subcommand ``get-or-create-key`` gets or adds a key for ``name`` from the system/caps
pairs specified in the command. If the key already exists, any given caps must match
the existing caps for that key.
Usage::
    ceph auth get-or-create-key <entity> {<caps> [<caps>...]}
Subcommand ``import`` reads keyring from input file.
Usage::
    ceph auth import
Subcommand ``ls`` lists authentication state.
Usage::
    ceph auth ls
Subcommand ``print-key`` displays requested key.
Usage::
    ceph auth print-key <entity>
Subcommand ``print_key`` displays requested key.
Usage::
    ceph auth print_key <entity>
config
------
Configure the cluster. By default, Ceph daemons and clients retrieve their
configuration options from the Monitors when they start, and are updated when
tracked options are changed at runtime. It accepts the following additional
subcommands.
Subcommand ``dump`` to dump all options for the cluster
Usage::
    ceph config dump
Subcommand ``ls`` to list all option names for the cluster
Usage::
    ceph config ls
Subcommand ``help`` to describe the specified configuration option
Subcommand ``show`` to display the running configuration of the specified
entity. Please note, unlike ``get``, which only shows the options managed
by the Monitors, ``show`` displays all the configuration options being actively used.
These options are pulled from several sources: for instance, the compiled-in
default value, the Monitors' configuration database, and the ``ceph.conf`` file on
the host. The options can even be overridden at runtime, so there is a chance
that the configuration options in the output of ``show`` could be different
from those in the output of ``get``.
Usage::
    ceph config show {<who>}
Subcommand ``show-with-defaults`` to display the running configuration along
with the compiled-in defaults of the specified entity

Usage::

    ceph config show-with-defaults {<who>}
Subcommand ``set`` to set an option for one or more specified entities
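For example, an option can be set cluster-wide for all OSD daemons and then read
back (``osd_max_backfills`` is just an illustrative option name)::

    ceph config set osd osd_max_backfills 2
    ceph config get osd osd_max_backfills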
config-key
----------

Manage configuration keys. Config-key is a general-purpose key/value service
provided by the Monitors. This service is mainly used by Ceph tools and daemons
for persisting various settings; among other things, Manager modules use it for
storing their options. It accepts the following additional subcommands.
Subcommand ``rm`` deletes configuration key.
Usage::
    ceph config-key rm <key>
Subcommand ``exists`` checks for a configuration key's existence.
Usage::
    ceph config-key exists <key>
Subcommand ``get`` gets the configuration key.
Usage::
    ceph config-key get <key>
Subcommand ``ls`` lists configuration keys.
Usage::
    ceph config-key ls
Subcommand ``dump`` dumps configuration keys and values.
Usage::
    ceph config-key dump
Subcommand ``set`` puts configuration key and value.
Usage::
    ceph config-key set <key> {<val>}
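A simple round trip through the service might look like this (``mykey`` and
``myvalue`` are illustrative names)::

    ceph config-key set mykey myvalue
    ceph config-key get mykey
    ceph config-key rm mykey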
.. _man-ceph-daemon:

daemon
------

Submit admin-socket commands.
Usage::
    ceph daemon {<daemon_name>|<socket_path>} {command} ...
Example::
    ceph daemon osd.0 help
daemonperf
----------
Usage::
    ceph daemonperf {<daemon_name>|<socket_path>} [{interval} [{count}]]
df
--
Usage::
    ceph df {detail}
.. _ceph features:
fs
--
Manage CephFS file systems. It accepts the following additional subcommands.
Subcommand ``add_data_pool`` adds a new data pool to the file system. This pool can
be used for file layouts as an alternate location to store the file data.
Usage::
ceph fs add_data_pool <fs-name> <pool name/id>
Subcommand ``authorize`` creates a new client (if the client doesn't exist
on the cluster) that will be authorized for the given path in ``<fs_name>``.
Pass ``/`` to authorize for the entire file system. ``<perms>`` below can be ``r``,
``rw`` or ``rwp``.
Running it for an existing client can grant the client a new capability
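For example, a client could be granted read/write access to the root of a file
system (``cephfs`` and ``client.foo`` are illustrative names)::

    ceph fs authorize cephfs client.foo / rw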
Subcommand ``flag set`` sets a global CephFS flag (i.e. not specific to a
particular file system).

Usage::

    ceph fs flag set <flag-name> <flag-val> --yes-i-really-mean-it
Subcommand ``get`` displays information about the file system, including settings and
ranks. The information printed here is a subset of the information reported by the
``fs dump`` command.
Usage::
    ceph fs get <fs_name>

Subcommand ``ls`` to list file systems.

Usage::

    ceph fs ls
Subcommand ``lsflags`` displays all the flags set on the given file system.
Usage::

    ceph fs lsflags <fs_name>

Subcommand ``new`` to make a new file system using named pools <metadata> and <data>.

Usage::
    ceph fs new <fs_name> <metadata> <data>
Subcommand ``rename`` assigns a new name to the file system and also updates the
application tags on the pools of this file system.
Usage::

    ceph fs rename <fs_name> <new_fs_name> {--yes-i-really-mean-it}

Subcommand ``reset`` is used for disaster recovery only: it resets the file system
to a single-MDS map.

Usage::

    ceph fs reset <fs_name> {--yes-i-really-mean-it}
Subcommand ``rm`` to disable the named file system
Usage::
    ceph fs rm <fs_name> {--yes-i-really-mean-it}
Subcommand ``rm_data_pool`` removes the specified pool from the file system's list of
data pools. File data on this pool will become unavailable. The default data pool
cannot be removed.
Usage::

    ceph fs rm_data_pool <fs-name> <pool name/id>
Subcommand ``set`` sets or updates a file system setting value for the given file system name.
Usage::

    ceph fs set <fs_name> <var> <val>

fsid
----

Show cluster FSID/UUID.

Usage::

    ceph fsid
health
------
Usage::
    ceph health {detail}
heap
----
Show heap usage info (available only if compiled with TCMalloc).
Usage::
    ceph tell <name (type.id)> heap dump|start_profiler|stop_profiler|stats
Subcommand ``release`` makes TCMalloc release no-longer-used memory back
to the kernel at once.
Usage::
    ceph tell <name (type.id)> heap release
Subcommand ``(get|set)_release_rate`` gets or sets the TCMalloc memory release rate. TCMalloc releases
no-longer-used memory back to the kernel gradually. The rate controls how quickly this happens.
Increase this setting to make TCMalloc return unused memory more frequently. 0 means never return
memory to the system, 1 means wait for 1000 pages after releasing a page to the system. It is ``1.0`` by default.
Usage::
    ceph tell <name (type.id)> heap get_release_rate|set_release_rate {<val>}
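For instance, the release rate of a single OSD daemon can be inspected and then
raised (``osd.0`` is an illustrative daemon name)::

    ceph tell osd.0 heap get_release_rate
    ceph tell osd.0 heap set_release_rate 2.0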
injectargs
----------
Inject configuration arguments into the Monitor.
Usage::
    ceph injectargs <injected_args> [<injected_args>...]
log
---
Log supplied text to the Monitor log.
Usage::
    ceph log <logtext> [<logtext>...]
mds
---
Perform metadata server configuration and administration. It accepts the following
additional subcommands.
Subcommand ``compat`` manages compatible features. It accepts the following additional
subcommands.
Subcommand ``rm_compat`` removes a compatible feature.
Usage::
    ceph mds compat rm_compat <int[0-]>
Subcommand ``rm_incompat`` removes an incompatible feature.
Usage::
    ceph mds compat rm_incompat <int[0-]>
Subcommand ``show`` shows MDS compatibility settings.
Usage::
    ceph mds compat show
Subcommand ``fail`` forces an MDS to the failed state.
Usage::
    ceph mds fail <role|gid>
Subcommand ``rm`` removes an inactive MDS.
Usage::
    ceph mds rm <int[0-]> <name (type.id)>
Subcommand ``rmfailed`` removes a failed MDS.
Usage::
    ceph mds rmfailed <int[0-]>
Subcommand ``set_state`` sets MDS state of <gid> to <numeric-state>.
Usage::
    ceph mds set_state <int[0-]> <int[0-20]>
Subcommand ``stat`` shows MDS status.
Usage::
    ceph mds stat
Subcommand ``repaired`` marks a damaged MDS rank as no longer damaged.
Usage::
    ceph mds repaired <role>
mon
---
Perform Monitor configuration and administration. It accepts the following additional
subcommands.
Subcommand ``add`` adds a new Monitor named <name> at <addr>.
Usage::
    ceph mon add <name> <IPaddr[:port]>
Subcommand ``dump`` dumps formatted monmap (optionally from epoch)
Usage::
    ceph mon dump {<int[0-]>}
Subcommand ``getmap`` gets monmap.
Usage::
    ceph mon getmap {<int[0-]>}
Subcommand ``enable_stretch_mode`` enables stretch mode, changing the peering
rules and failure handling on all pools. For a given PG to successfully peer
and be marked active, ``min_size`` replicas will now need to be active under all
(currently two) CRUSH buckets of type <dividing_bucket>.
<tiebreaker_mon> is the tiebreaker Monitor to use if a network split happens.
<dividing_bucket> is the bucket type across which to stretch.
This will typically be ``datacenter`` or other CRUSH hierarchy bucket type that
Usage::
    ceph mon enable_stretch_mode <tiebreaker_mon> <new_crush_rule> <dividing_bucket>
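As an illustration, a cluster stretched across two datacenters with a dedicated
tiebreaker Monitor could enable the mode like this (``e`` and ``stretch_rule``
are hypothetical Monitor and rule names)::

    ceph mon enable_stretch_mode e stretch_rule datacenter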
Subcommand ``remove`` removes the Monitor named <name>.
Usage::
    ceph mon remove <name>
Subcommand ``stat`` summarizes Monitor status.
Usage::
    ceph mon stat
mgr
---
Ceph Manager daemon configuration and management.
Subcommand ``dump`` dumps the latest MgrMap, which describes the active
and standby Manager daemons.
Usage::
    ceph mgr dump
Subcommand ``fail`` will mark a Manager daemon as failed, removing it
from the Manager map. If it is the active Manager daemon, a standby
will take its place.
Usage::
    ceph mgr fail <name>
Subcommand ``module ls`` will list currently enabled Manager modules (plugins).
Usage::
    ceph mgr module ls
Subcommand ``module enable`` will enable a Manager module. Available modules
are included in MgrMap and visible via ``mgr dump``.
Usage::
    ceph mgr module enable <module>
Subcommand ``module disable`` will disable an active Manager module.
Usage::
    ceph mgr module disable <module>
Subcommand ``metadata`` will report metadata about all Manager daemons or, if
the name is specified, a single Manager daemon.
Usage::
    ceph mgr metadata [name]
Subcommand ``versions`` will report a count of running daemon versions.
Usage::
    ceph mgr versions
Subcommand ``count-metadata`` will report a count of any daemon metadata field.
Usage::
    ceph mgr count-metadata <field>
.. _ceph-admin-osd:
osd
---
Perform OSD configuration and administration. It accepts the following additional
subcommands.
Subcommand ``blocklist`` manages blocklisted clients. It accepts the following additional
subcommands.
Subcommand ``add`` adds <addr> to the blocklist (optionally until <expire> seconds
from now).
Usage::
    ceph osd blocklist add <EntityAddr> {<float[0.0-]>}
Subcommand ``ls`` shows blocklisted clients.
Usage::
    ceph osd blocklist ls
Subcommand ``rm`` removes <addr> from the blocklist.
Usage::
    ceph osd blocklist rm <EntityAddr>
Subcommand ``blocked-by`` prints a histogram of which OSDs are blocking their peers
Usage::
    ceph osd blocked-by
Subcommand ``create`` creates a new OSD (with optional UUID and ID).
This command is DEPRECATED as of the Luminous release, and will be removed in
a future release.
Usage::
    ceph osd create {<uuid>} {<id>}
Subcommand ``new`` can be used to create a new OSD or to recreate a previously
destroyed OSD with a specific *id*. The new OSD will have the specified *uuid*,
and the command expects a JSON file containing the base64 cephx key for auth
entity *client.osd.<id>*, as well as an optional base64 cephx key for dm-crypt
lockbox access and a dm-crypt key. Specifying a dm-crypt key requires specifying
the accompanying lockbox cephx key.
{
"cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
- "crush_device_class": "myclass"
+ "crush_device_class": "myclass"
}
Or::
"cephx_secret": "AQBWtwhZdBO5ExAAIDyjK2Bh16ZXylmzgYYEjg==",
"cephx_lockbox_secret": "AQDNCglZuaeVCRAAYr76PzR1Anh7A0jswkODIQ==",
"dmcrypt_key": "<dm-crypt key>",
- "crush_device_class": "myclass"
+ "crush_device_class": "myclass"
}
Or::
{
- "crush_device_class": "myclass"
+ "crush_device_class": "myclass"
}
The "crush_device_class" property is optional. If specified, it will set the
initial CRUSH device class for the new OSD.
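Putting this together, a new OSD could be created from such a JSON file
(``secrets.json`` is an illustrative file name; the UUID is freshly generated)::

    ceph osd new $(uuidgen) -i secrets.json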
Subcommand ``crush`` is used for CRUSH management. It accepts the following additional
subcommands.
Subcommand ``add`` adds or updates the CRUSH map position and weight for <name> with
<weight> and location <args>.
Usage::
    ceph osd crush add <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]
Subcommand ``add-bucket`` adds a no-parent (probably root) CRUSH bucket <name> of
type <type>.
Usage::
    ceph osd crush add-bucket <name> <type>
Subcommand ``create-or-move`` creates entry or moves existing entry for <name>
<weight> at/to location <args>.
Usage::
    ceph osd crush create-or-move <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]
Subcommand ``dump`` dumps the CRUSH map.
Usage::
    ceph osd crush dump
Subcommand ``get-tunable`` gets the CRUSH tunable straw_calc_version.
Usage::
    ceph osd crush get-tunable straw_calc_version
Subcommand ``link`` links existing entry for <name> under location <args>.
Usage::
    ceph osd crush link <name> <args> [<args>...]
Subcommand ``move`` moves existing entry for <name> to location <args>.
Usage::
    ceph osd crush move <name> <args> [<args>...]
Subcommand ``remove`` removes <name> from the CRUSH map (everywhere, or just at
<ancestor>).
Usage::
    ceph osd crush remove <name> {<ancestor>}
Subcommand ``rename-bucket`` renames bucket <srcname> to <dstname>
Usage::
    ceph osd crush rename-bucket <srcname> <dstname>
Subcommand ``reweight`` changes <name>'s weight to <weight> in the CRUSH map.
Usage::
    ceph osd crush reweight <name> <float[0.0-]>
Subcommand ``reweight-all`` recalculates the weights for the tree to
ensure they sum correctly.
Usage::
    ceph osd crush reweight-all
Subcommand ``reweight-subtree`` changes all leaf items beneath <name>
to <weight> in the CRUSH map.
Usage::
    ceph osd crush reweight-subtree <name> <weight>
Subcommand ``rm`` removes <name> from the CRUSH map (everywhere, or just at
<ancestor>).
Usage::
    ceph osd crush rm <name> {<ancestor>}
Subcommand ``rule`` is used for creating CRUSH rules. It accepts the following additional
subcommands.
Subcommand ``create-erasure`` creates CRUSH rule <name> for an erasure-coded pool
created with <profile> (default default).
Usage::
    ceph osd crush rule create-erasure <name> {<profile>}
Subcommand ``create-simple`` creates CRUSH rule <name> to start from <root>,
replicate across buckets of type <type>, using a choose mode of <firstn|indep>
(default firstn; indep is best for erasure-coded pools).
Usage::
    ceph osd crush rule create-simple <name> <root> <type> {firstn|indep}
Subcommand ``dump`` dumps CRUSH rule <name> (default all).
Usage::
    ceph osd crush rule dump {<name>}
Subcommand ``ls`` lists CRUSH rules.
Usage::
    ceph osd crush rule ls
Subcommand ``rm`` removes CRUSH rule <name>.
Usage::
    ceph osd crush rule rm <name>
Subcommand ``set``, used alone, sets the CRUSH map from the input file.
Usage::
    ceph osd crush set
Subcommand ``set`` with an osdname/osd.id updates the CRUSH map position and weight
for <name> to <weight> with location <args>.
Usage::
    ceph osd crush set <osdname (id|osd.id)> <float[0.0-]> <args> [<args>...]
Subcommand ``set-tunable`` sets CRUSH tunable <tunable> to <value>. The only
tunable that can be set is straw_calc_version.
Usage::
    ceph osd crush set-tunable straw_calc_version <value>
Subcommand ``show-tunables`` shows current CRUSH tunables.
Usage::
    ceph osd crush show-tunables
Subcommand ``tree`` shows the CRUSH buckets and items in a tree view.
Usage::
    ceph osd crush tree
Subcommand ``tunables`` sets CRUSH tunable values to <profile>.
Usage::
    ceph osd crush tunables legacy|argonaut|bobtail|firefly|hammer|optimal|default
Subcommand ``unlink`` unlinks <name> from the CRUSH map (everywhere, or just at
<ancestor>).
Usage::
    ceph osd crush unlink <name> {<ancestor>}
Subcommand ``df`` shows OSD utilization
Usage::
    ceph osd df {plain|tree}
Subcommand ``deep-scrub`` initiates a deep scrub on the specified OSD.
Usage::
    ceph osd deep-scrub <who>
Subcommand ``down`` sets OSD(s) <id> [<id>...] down.
Usage::
    ceph osd down <ids> [<ids>...]
Subcommand ``dump`` prints summary of OSD map.
Usage::
    ceph osd dump {<int[0-]>}
Subcommand ``erasure-code-profile`` is used for managing the erasure code
profiles. It accepts the following additional subcommands.
Subcommand ``get`` gets erasure code profile <name>.
Usage::
    ceph osd erasure-code-profile get <name>
Subcommand ``ls`` lists all erasure code profiles.
Usage::
    ceph osd erasure-code-profile ls
Subcommand ``rm`` removes erasure code profile <name>.
Usage::
    ceph osd erasure-code-profile rm <name>
Subcommand ``set`` creates erasure code profile <name> with [<key[=value]> ...]
pairs. Add a ``--force`` at the end to override an existing profile (IT IS RISKY).
Usage::
    ceph osd erasure-code-profile set <name> {<profile> [<profile>...]}
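For instance, a profile for a 4+2 erasure-coded layout could be created and then
inspected (``myprofile`` is an illustrative profile name)::

    ceph osd erasure-code-profile set myprofile k=4 m=2 crush-failure-domain=host
    ceph osd erasure-code-profile get myprofile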
Subcommand ``find`` finds OSD <id> in the CRUSH map and shows its location.
Usage::
    ceph osd find <int[0-]>
Subcommand ``getcrushmap`` gets CRUSH map.
Usage::
    ceph osd getcrushmap {<int[0-]>}
Subcommand ``getmap`` gets OSD map.
Usage::
    ceph osd getmap {<int[0-]>}
Subcommand ``getmaxosd`` shows the largest OSD ID.
Usage::
    ceph osd getmaxosd
Subcommand ``in`` sets OSD(s) <id> [<id>...] in.
Usage::
    ceph osd in <ids> [<ids>...]
Subcommand ``lost`` marks an OSD as permanently lost. THIS DESTROYS DATA IF NO
MORE REPLICAS EXIST, BE CAREFUL.
Usage::
    ceph osd lost <int[0-]> {--yes-i-really-mean-it}
Subcommand ``ls`` shows all OSD IDs.
Usage::
    ceph osd ls {<int[0-]>}
Subcommand ``lspools`` lists pools.
Usage::
    ceph osd lspools {<int>}
Subcommand ``map`` finds the PG for <object> in <pool>.
Usage::
    ceph osd map <poolname> <objectname>
Subcommand ``metadata`` fetches metadata for OSD <id> (default all).

Usage::

    ceph osd metadata {<int[0-]>}
Subcommand ``out`` sets OSD(s) <id> [<id>...] out.
Usage::
    ceph osd out <ids> [<ids>...]
Subcommand ``ok-to-stop`` checks whether the listed OSD(s) can be
stopped without immediately making data unavailable. That is, all
data should remain readable and writeable, although data redundancy
may be reduced.
Usage::
    ceph osd ok-to-stop <id> [<ids>...] [--max <num>]
Subcommand ``ok-to-upgrade`` determines a safe set of OSDs found within the
specified CRUSH bucket to upgrade simultaneously without impacting cluster
availability.
Usage::
    ceph osd ok-to-upgrade <crush_bucket_name> <new_ceph_version_short> [--max <num>]
Subcommand ``pause`` pauses the OSDs.
Usage::
    ceph osd pause
Subcommand ``perf`` prints dump of OSD perf summary stats.
Usage::
    ceph osd perf
Subcommand ``pg-temp`` set pg_temp mapping pgid:[<id> [<id>...]] (developers
only).
Usage::
    ceph osd pg-temp <pgid> {<id> [<id>...]}
Subcommand ``force-create-pg`` forces creation of PG <pgid>.
Usage::
    ceph osd force-create-pg <pgid>
Subcommand ``pool`` is used for managing data pools. It accepts the following additional
subcommands.
Subcommand ``create`` creates a pool.
Usage::
    ceph osd pool create <poolname> {<int[0-]>} {<int[0-]>} {replicated|erasure}
        {<erasure_code_profile>} {<rule>} {<int>} {--autoscale-mode=<on,off,warn>}
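For example, a replicated pool could be created and its replica count adjusted
(``mypool`` is an illustrative pool name)::

    ceph osd pool create mypool 64 64 replicated
    ceph osd pool set mypool size 3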
Subcommand ``delete`` deletes pool.
Usage::
    ceph osd pool delete <poolname> {<poolname>} {--yes-i-really-really-mean-it}
Subcommand ``get`` gets pool parameter <var>.
Usage::
    ceph osd pool get <poolname> size|min_size|pg_num|pgp_num|crush_rule|write_fadvise_dontneed
Only for tiered pools::
    ceph osd pool get <poolname> hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|
        target_max_objects|target_max_bytes|cache_target_dirty_ratio|cache_target_dirty_high_ratio|
        cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
        min_read_recency_for_promote|hit_set_grade_decay_rate|hit_set_search_last_n
Only for erasure-coded pools::
    ceph osd pool get <poolname> erasure_code_profile
Use ``all`` to get all pool parameters that apply to the pool's type::
- ceph osd pool get <poolname> all
+ ceph osd pool get <poolname> all
-Subcommand ``get-quota`` obtains object or byte limits for pool.
+Subcommand ``get-quota`` obtains the object or byte limits for a pool.
Usage::
- ceph osd pool get-quota <poolname>
+ ceph osd pool get-quota <poolname>
-Subcommand ``ls`` list pools
+Subcommand ``ls`` lists pools.
Usage::
- ceph osd pool ls {detail}
+ ceph osd pool ls {detail}
Subcommand ``mksnap`` makes snapshot <snap> in <pool>.
Usage::
- ceph osd pool mksnap <poolname> <snap>
+ ceph osd pool mksnap <poolname> <snap>
Subcommand ``rename`` renames <srcpool> to <destpool>.
Usage::
- ceph osd pool rename <poolname> <poolname>
+ ceph osd pool rename <poolname> <poolname>
Subcommand ``rmsnap`` removes snapshot <snap> from <pool>.
Usage::
- ceph osd pool rmsnap <poolname> <snap>
+ ceph osd pool rmsnap <poolname> <snap>
Subcommand ``set`` sets pool parameter <var> to <val>.
Usage::
- ceph osd pool set <poolname> size|min_size|pg_num|
- pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|
- hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|
- target_max_bytes|target_max_objects|cache_target_dirty_ratio|
- cache_target_dirty_high_ratio|
- cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
- min_read_recency_for_promote|write_fadvise_dontneed|hit_set_grade_decay_rate|
- hit_set_search_last_n
- <val> {--yes-i-really-mean-it}
+ ceph osd pool set <poolname> size|min_size|pg_num|
+ pgp_num|crush_rule|hashpspool|nodelete|nopgchange|nosizechange|
+ hit_set_type|hit_set_period|hit_set_count|hit_set_fpp|debug_fake_ec_pool|
+ target_max_bytes|target_max_objects|cache_target_dirty_ratio|
+ cache_target_dirty_high_ratio|
+ cache_target_full_ratio|cache_min_flush_age|cache_min_evict_age|
+ min_read_recency_for_promote|write_fadvise_dontneed|hit_set_grade_decay_rate|
+ hit_set_search_last_n
+ <val> {--yes-i-really-mean-it}
Subcommand ``set-quota`` sets object or byte limit on pool.
Usage::
- ceph osd pool set-quota <poolname> max_objects|max_bytes <val>
+ ceph osd pool set-quota <poolname> max_objects|max_bytes <val>
-Subcommand ``stats`` obtain stats from all pools, or from specified pool.
+Subcommand ``stats`` obtains stats from all pools, or from a specified pool.
Usage::
- ceph osd pool stats {<name>}
+ ceph osd pool stats {<name>}
-Subcommand ``application`` is used for adding an annotation to the given
-pool. By default, the possible applications are object, block, and file
+Subcommand ``application`` is used for managing the application annotations on
+the given pool. By default, the possible applications are object, block, and
+file storage. It accepts the following additional subcommands.
+Subcommand ``disable`` disables the given application on the given pool.
Usage::
- ceph osd pool application disable <pool-name> <app> {--yes-i-really-mean-it}
+ ceph osd pool application disable <pool-name> <app> {--yes-i-really-mean-it}
Subcommand ``enable`` adds an annotation to the given pool for the mentioned
application.
Usage::
- ceph osd pool application enable <pool-name> <app> {--yes-i-really-mean-it}
+ ceph osd pool application enable <pool-name> <app> {--yes-i-really-mean-it}
Subcommand ``get`` displays the value for the given key that is associated
-with the given application of the given pool. Not passing the optional
+with the given application of the given pool. Not passing the optional
+arguments displays all key-value pairs.
Usage::
- ceph osd pool application get {<pool-name>} {<app>} {<key>}
+ ceph osd pool application get {<pool-name>} {<app>} {<key>}
Subcommand ``rm`` removes the key-value pair for the given key in the given
application of the given pool.
Usage::
- ceph osd pool application rm <pool-name> <app> <key>
+ ceph osd pool application rm <pool-name> <app> <key>
-Subcommand ``set`` associates or updates, if it already exists, a key-value
-pair with the given application for the given pool.
+Subcommand ``set`` associates a key-value pair with the given application of
+the given pool, updating the value if the key already exists.
Usage::
- ceph osd pool application set <pool-name> <app> <key> <value>
+ ceph osd pool application set <pool-name> <app> <key> <value>
-Subcommand ``primary-affinity`` adjust osd primary-affinity from 0.0 <=<weight>
-<= 1.0
+Subcommand ``primary-affinity`` adjusts the OSD primary-affinity within the
+range 0.0 <= <weight> <= 1.0.
Usage::
- ceph osd primary-affinity <osdname (id|osd.id)> <float[0.0-1.0]>
+ ceph osd primary-affinity <osdname (id|osd.id)> <float[0.0-1.0]>
Subcommand ``primary-temp`` sets primary_temp mapping pgid:<id>|-1 (developers
only).
Usage::
- ceph osd primary-temp <pgid> <id>
+ ceph osd primary-temp <pgid> <id>
-Subcommand ``repair`` initiates repair on a specified osd.
+Subcommand ``repair`` initiates repair on a specified OSD.
Usage::
- ceph osd repair <who>
+ ceph osd repair <who>
-Subcommand ``reweight`` reweights osd to 0.0 < <weight> < 1.0.
+Subcommand ``reweight`` reweights OSD to 0.0 < <weight> < 1.0.
Usage::
- osd reweight <int[0-]> <float[0.0-1.0]>
+ osd reweight <int[0-]> <float[0.0-1.0]>
-Subcommand ``reweight-by-pg`` reweight OSDs by PG distribution
-[overload-percentage-for-consideration, default 120].
+Subcommand ``reweight-by-pg`` reweights OSDs by PG distribution
+[overload-percentage-for-consideration, default 120].
Usage::
- ceph osd reweight-by-pg {<int[100-]>} {<poolname> [<poolname...]}
- {--no-increasing}
+ ceph osd reweight-by-pg {<int[100-]>} {<poolname> [<poolname...]} {--no-increasing}
-Subcommand ``reweight-by-utilization`` reweights OSDs by utilization. It only reweights
-outlier OSDs whose utilization exceeds the average, eg. the default 120%
-limits reweight to those OSDs that are more than 20% over the average.
+Subcommand ``reweight-by-utilization`` reweights OSDs by utilization. It only
+reweights outlier OSDs whose utilization exceeds the average; e.g., the default
+120% limits reweighting to those OSDs that are more than 20% over the average.
-[overload-threshold, default 120 [max_weight_change, default 0.05 [max_osds_to_adjust, default 4]]]
+[overload-threshold, default 120 [max_weight_change, default 0.05 [max_osds_to_adjust, default 4]]]
Usage::
- ceph osd reweight-by-utilization {<int[100-]> {<float[0.0-]> {<int[0-]>}}}
- {--no-increasing}
+ ceph osd reweight-by-utilization {<int[100-]> {<float[0.0-]> {<int[0-]>}}} {--no-increasing}
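The outlier criterion above can be sketched in a few lines. This is a simplified illustration of the documented rule (utilization above the overload percentage of the average), not Ceph's actual implementation; the function name is invented for illustration:

```python
# Simplified sketch of the reweight-by-utilization outlier rule:
# an OSD qualifies when its utilization exceeds
# average * (overload_threshold / 100). Illustration only, not Ceph code.

def overloaded_osds(utilizations, overload_threshold=120):
    """Return IDs of OSDs whose utilization exceeds the cutoff.

    utilizations: mapping of OSD id -> utilization fraction (0.0-1.0)
    overload_threshold: percentage of the average, default 120
    """
    if not utilizations:
        return []
    average = sum(utilizations.values()) / len(utilizations)
    cutoff = average * overload_threshold / 100.0
    return sorted(osd for osd, used in utilizations.items() if used > cutoff)

# With an average utilization of 0.50, the default 120% threshold
# selects only OSDs above 0.60.
print(overloaded_osds({0: 0.45, 1: 0.50, 2: 0.70, 3: 0.35}))  # [2]
```

Only such outliers are then eligible for a weight change, bounded by max_weight_change and max_osds_to_adjust.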
-Subcommand ``rm`` removes osd(s) <id> [<id>...] from the OSD map.
+Subcommand ``rm`` removes OSD(s) <id> [<id>...] from the OSD map.
Usage::
- ceph osd rm <ids> [<ids>...]
+ ceph osd rm <ids> [<ids>...]
Subcommand ``destroy`` marks OSD *id* as *destroyed*, removing its cephx
entity's keys and all of its dm-crypt and daemon-private config key
entries.
-This command will not remove the OSD from crush, nor will it remove the
+This command will not remove the OSD from the CRUSH map, nor will it remove the
OSD from the OSD map. Instead, once the command successfully completes,
the OSD will show marked as *destroyed*.
+Subcommand ``safe-to-destroy`` checks whether it is safe to remove or destroy
+the given OSD(s) without reducing overall data redundancy or durability.
Usage::
- ceph osd safe-to-destroy <id> [<ids>...]
+ ceph osd safe-to-destroy <id> [<ids>...]
-Subcommand ``scrub`` initiates scrub on specified osd.
+Subcommand ``scrub`` initiates scrub on specified OSD.
Usage::
- ceph osd scrub <who>
+ ceph osd scrub <who>
Subcommand ``set`` sets cluster-wide <flag> by updating OSD map.
-The ``full`` flag is not honored anymore since the Mimic release, and
+The ``full`` flag has not been honored since the Mimic release, and setting it
+has no effect.
Usage::
- ceph osd set pause|noup|nodown|noout|noin|nobackfill|
- norebalance|norecover|noscrub|nodeep-scrub|notieragent
+ ceph osd set pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent
-Subcommand ``setcrushmap`` sets crush map from input file.
+Subcommand ``setcrushmap`` sets CRUSH map from input file.
Usage::
- ceph osd setcrushmap
+ ceph osd setcrushmap
-Subcommand ``setmaxosd`` sets new maximum osd value.
+Subcommand ``setmaxosd`` sets new maximum OSD value.
Usage::
- ceph osd setmaxosd <int[0-]>
+ ceph osd setmaxosd <int[0-]>
Subcommand ``set-require-min-compat-client`` enforces the cluster to be backward
compatible with the specified client version. This subcommand prevents you from
-making any changes (e.g., crush tunables, or using new features) that
+making any changes (e.g., CRUSH tunables, or using new features) that
-would violate the current setting. Please note, This subcommand will fail if
-any connected daemon or client is not compatible with the features offered by
-the given <version>. To see the features and releases of all clients connected
+would violate the current setting. Please note that this subcommand will fail
+if any connected daemon or client is not compatible with the features offered
+by the given <version>. To see the features and releases of all clients
+connected to the cluster, run ``ceph features``.
Usage::
+ ceph osd set-require-min-compat-client <version>
+Subcommand ``stat`` prints a summary of the OSD map.
+Usage::
- ceph osd stat
+ ceph osd stat
-Subcommand ``tier`` is used for managing tiers. It uses some additional
+Subcommand ``tier`` is used for managing tiers. It accepts the following additional
subcommands.
-Subcommand ``add`` adds the tier <tierpool> (the second one) to base pool <pool>
+Subcommand ``add`` adds the tier <tierpool> (the second one) to base pool <pool>
+(the first one).
Usage::
- ceph osd tier add <poolname> <poolname> {--force-nonempty}
+ ceph osd tier add <poolname> <poolname> {--force-nonempty}
Subcommand ``add-cache`` adds a cache <tierpool> (the second one) of size <size>
to existing pool <pool> (the first one).
Usage::
- ceph osd tier add-cache <poolname> <poolname> <int[0-]>
+ ceph osd tier add-cache <poolname> <poolname> <int[0-]>
Subcommand ``cache-mode`` specifies the caching mode for cache tier <pool>.
Usage::
- ceph osd tier cache-mode <poolname> writeback|proxy|readproxy|readonly|none
+ ceph osd tier cache-mode <poolname> writeback|proxy|readproxy|readonly|none
Subcommand ``remove`` removes the tier <tierpool> (the second one) from base pool
<pool> (the first one).
Usage::
- ceph osd tier remove <poolname> <poolname>
+ ceph osd tier remove <poolname> <poolname>
Subcommand ``remove-overlay`` removes the overlay pool for base pool <pool>.
Usage::
- ceph osd tier remove-overlay <poolname>
+ ceph osd tier remove-overlay <poolname>
-Subcommand ``set-overlay`` set the overlay pool for base pool <pool> to be
-<overlaypool>.
+Subcommand ``set-overlay`` sets the overlay pool for base pool <pool> to be
+<overlaypool>.
Usage::
- ceph osd tier set-overlay <poolname> <poolname>
+ ceph osd tier set-overlay <poolname> <poolname>
Subcommand ``tree`` prints OSD tree.
Usage::
- ceph osd tree {<int[0-]>}
+ ceph osd tree {<int[0-]>}
-Subcommand ``unpause`` unpauses osd.
+Subcommand ``unpause`` unpauses OSD.
Usage::
- ceph osd unpause
+ ceph osd unpause
Subcommand ``unset`` unsets cluster-wide <flag> by updating OSD map.
Usage::
- ceph osd unset pause|noup|nodown|noout|noin|nobackfill|
- norebalance|norecover|noscrub|nodeep-scrub|notieragent
+ ceph osd unset pause|noup|nodown|noout|noin|nobackfill|norebalance|norecover|noscrub|nodeep-scrub|notieragent
pg
--
-It is used for managing the placement groups in OSDs. It uses some
+It is used for managing the placement groups in OSDs. It accepts the following
additional subcommands.
-Subcommand ``debug`` shows debug info about pgs.
+Subcommand ``debug`` shows debug info about PGs.
Usage::
- ceph pg debug unfound_objects_exist|degraded_pgs_exist
+ ceph pg debug unfound_objects_exist|degraded_pgs_exist
Subcommand ``deep-scrub`` starts deep-scrub on <pgid>.
Usage::
- ceph pg deep-scrub <pgid>
+ ceph pg deep-scrub <pgid>
-Subcommand ``dump`` shows human-readable versions of pg map (only 'all' valid
-with plain).
+Subcommand ``dump`` shows human-readable versions of the PG map (only 'all' is
+valid with the plain format).
Usage::
- ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}
+ ceph pg dump {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...}]
-Subcommand ``dump_json`` shows human-readable version of pg map in json only.
+Subcommand ``dump_json`` prints the PG map in JSON format for consumption by scripts and other tools.
Usage::
- ceph pg dump_json {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...]}
+ ceph pg dump_json {all|summary|sum|delta|pools|osds|pgs|pgs_brief} [{all|summary|sum|delta|pools|osds|pgs|pgs_brief...}]
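Since ``dump_json`` is meant for scripts, its output can be post-processed directly. A minimal sketch that counts PGs by state, assuming the dump exposes a ``pg_stats`` list whose entries carry a ``state`` string (field names can vary between Ceph releases; the sample data is hypothetical):

```python
import json
from collections import Counter

def pg_state_counts(dump_text):
    """Count PGs by state in a captured `ceph pg dump_json` output.

    Assumes the dump contains a top-level "pg_stats" list whose entries
    carry a "state" string; field names may differ between releases.
    """
    data = json.loads(dump_text)
    return Counter(entry["state"] for entry in data.get("pg_stats", []))

# Hypothetical, abbreviated dump for illustration only.
sample = json.dumps({
    "pg_stats": [
        {"pgid": "1.0", "state": "active+clean"},
        {"pgid": "1.1", "state": "active+clean"},
        {"pgid": "1.2", "state": "active+degraded"},
    ]
})
print(pg_state_counts(sample))
# Counter({'active+clean': 2, 'active+degraded': 1})
```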
-Subcommand ``dump_pools_json`` shows pg pools info in json only.
+Subcommand ``dump_pools_json`` prints RADOS pool information in JSON format.
Usage::
- ceph pg dump_pools_json
+ ceph pg dump_pools_json
-Subcommand ``dump_stuck`` shows information about stuck pgs.
+Subcommand ``dump_stuck`` shows information about stuck placement groups (PGs).
Usage::
- ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]}
- {<int>}
+ ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]} {<int>}
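The state arguments act as a filter: a PG is reported when any requested stuck state appears in its compound state string. A sketch of that matching (the helper and sample data are hypothetical, and the optional ``<int>`` seconds threshold is omitted for brevity):

```python
def filter_stuck(pgs, wanted_states):
    """Select PGs whose compound state contains any requested stuck state.

    pgs: mapping of pgid -> state string such as "stale+active+clean".
    wanted_states: iterable like {"inactive", "stale", "undersized"}.
    Hypothetical helper illustrating the filter; not Ceph's code.
    """
    wanted = set(wanted_states)
    return {
        pgid: state
        for pgid, state in pgs.items()
        if wanted & set(state.split("+"))
    }

pgs = {
    "1.0": "active+clean",
    "1.1": "active+undersized+degraded",
    "1.2": "stale+active+clean",
}
print(filter_stuck(pgs, ["undersized", "stale"]))
# {'1.1': 'active+undersized+degraded', '1.2': 'stale+active+clean'}
```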
-Subcommand ``getmap`` gets binary pg map to -o/stdout.
+Subcommand ``getmap`` prints the PG map in internal, binary format for redirection into a file or a command.
Usage::
- ceph pg getmap
+ ceph pg getmap
-Subcommand ``ls`` lists pg with specific pool, osd, state
+Subcommand ``ls`` lists PGs with a specific pool, OSD, or state.
Usage::
- ceph pg ls {<int>} {<pg-state> [<pg-state>...]}
+ ceph pg ls {<int>} {<pg-state> [<pg-state>...]}
-Subcommand ``ls-by-osd`` lists pg on osd [osd]
+Subcommand ``ls-by-osd`` lists the PGs on OSD [osd].
Usage::
- ceph pg ls-by-osd <osdname (id|osd.id)> {<int>}
- {<pg-state> [<pg-state>...]}
+ ceph pg ls-by-osd <osdname (id|osd.id)> {<int>} {<pg-state> [<pg-state>...]}
-Subcommand ``ls-by-pool`` lists pg with pool = [poolname]
+Subcommand ``ls-by-pool`` lists the PGs in pool [poolname].
Usage::
- ceph pg ls-by-pool <poolstr> {<int>} {<pg-state> [<pg-state>...]}
+ ceph pg ls-by-pool <poolstr> {<int>} {<pg-state> [<pg-state>...]}
-Subcommand ``ls-by-primary`` lists pg with primary = [osd]
+Subcommand ``ls-by-primary`` lists the PGs whose primary OSD is [osd].
Usage::
- ceph pg ls-by-primary <osdname (id|osd.id)> {<int>}
- {<pg-state> [<pg-state>...]}
+ ceph pg ls-by-primary <osdname (id|osd.id)> {<int>} {<pg-state> [<pg-state>...]}
-Subcommand ``map`` shows mapping of pg to osds.
+Subcommand ``map`` shows the OSDs that host a given PG.
Usage::
- ceph pg map <pgid>
+ ceph pg map <pgid>
Subcommand ``repair`` starts repair on <pgid>.
Usage::
- ceph pg repair <pgid>
+ ceph pg repair <pgid>
Subcommand ``scrub`` starts scrub on <pgid>.
Usage::
- ceph pg scrub <pgid>
+ ceph pg scrub <pgid>
Subcommand ``stat`` shows placement group status.
Usage::
- ceph pg stat
+ ceph pg stat
A second format for PG-directed commands is the following:
Usage::
- ceph tell <pgid> scrub
+ ceph tell <pgid> scrub
Subcommand ``deep-scrub`` starts a deep-scrub of <pgid>, similar
to ``ceph pg deep-scrub <pgid>``.
Usage::
- ceph tell <pgid> deep-scrub
+ ceph tell <pgid> deep-scrub
Subcommand ``scrub-abort`` aborts any ongoing scrub of <pgid>, and removes
any previous scrub or deep scrub operator request for the PG.
Usage::
- ceph tell <pgid> scrub-abort
+ ceph tell <pgid> scrub-abort
quorum
------
-Cause a specific MON to enter or exit quorum.
+Cause a specific Monitor to enter or exit quorum.
Usage::
- ceph tell mon.<id> quorum enter|exit
+ ceph tell mon.<id> quorum enter|exit
quorum_status
-------------
-Reports status of monitor quorum.
+Reports status of Monitor quorum.
Usage::
- ceph quorum_status
+ ceph quorum_status
report
------
Reports full status of the cluster, with optional tag strings.
Usage::
- ceph report {<tags> [<tags>...]}
+ ceph report {<tags> [<tags>...]}
status
------
Shows cluster status.
Usage::
- ceph status
+ ceph status
tell
----
Sends a command to a specific daemon.
Usage::
- ceph tell <name (type.id)> <command> [options...]
+ ceph tell <name (type.id)> <command> [options...]
List all available commands.
Usage::
- ceph tell <name (type.id)> help
+ ceph tell <name (type.id)> help
version
-------
-Show mon daemon version
+Show Monitor daemon version
Usage::
- ceph version
+ ceph version
Options
=======
.. option:: -i infile, --in-file=infile
- will specify an input file to be passed along as a payload with the
- command to the monitor cluster. This is only used for specific
- monitor commands.
+ Specify an input file to be passed as a payload with the
+ command to the Monitor cluster. This is only used for specific
+ Monitor commands.
.. option:: -o outfile, --out-file=outfile
- will write any payload returned by the monitor cluster with its
- reply to outfile. Only specific monitor commands (e.g. osd getmap)
+ Write any payload returned by the Monitor cluster with its
+ reply to outfile. Only specific Monitor commands (e.g. ``osd getmap``)
return a payload.
.. option:: --setuser user
- will apply the appropriate user ownership to the file specified by
+ Apply the appropriate user ownership to the file specified by
the option '-o'.
.. option:: --setgroup group
- will apply the appropriate group ownership to the file specified by
+ Apply the appropriate group ownership to the file specified by
the option '-o'.
.. option:: -c ceph.conf, --conf=ceph.conf
-Use ceph.conf configuration file instead of the default
+Use the given configuration file instead of the default
- ``/etc/ceph/ceph.conf`` to determine monitor addresses during startup.
+ ``/etc/ceph/ceph.conf`` to determine Monitor addresses during startup.
.. option:: --id CLIENT_ID, --user CLIENT_ID
- Client id for authentication.
+ Client ID for authentication.
.. option:: --name CLIENT_NAME, -n CLIENT_NAME
- Client name for authentication.
+ Client name for authentication.
.. option:: --cluster CLUSTER
- Name of the Ceph cluster.
+ Name of the Ceph cluster.
.. option:: --admin-daemon ADMIN_SOCKET, daemon DAEMON_NAME
- Submit admin-socket commands via admin sockets in /var/run/ceph.
+ Submit admin-socket commands via admin sockets in /var/run/ceph.
.. option:: --admin-socket ADMIN_SOCKET_NOPE
- You probably mean --admin-daemon
+ You probably mean ``--admin-daemon``
.. option:: -s, --status
- Show cluster status.
+ Show cluster status.
.. option:: -w, --watch
- Watch live cluster changes on the default 'cluster' channel
+ Watch live cluster changes on the default 'cluster' channel
.. option:: -W, --watch-channel
- Watch live cluster changes on any channel (cluster, audit, cephadm, or * for all)
+ Watch live cluster changes on any channel (cluster, audit, cephadm, or * for all)
.. option:: --watch-debug
- Watch debug events.
+ Watch debug events.
.. option:: --watch-info
- Watch info events.
+ Watch info events.
.. option:: --watch-sec
- Watch security events.
+ Watch security events.
.. option:: --watch-warn
- Watch warning events.
+ Watch warning events.
.. option:: --watch-error
- Watch error events.
+ Watch error events.
.. option:: --version, -v
- Display version.
+ Display version.
.. option:: --verbose
- Make verbose.
+ Make verbose.
.. option:: --concise
- Make less verbose.
+ Make less verbose.
.. option:: -f {json,json-pretty,xml,xml-pretty,plain,yaml}, --format
- Format of output.
+ Format of output.
- Note: yaml is only valid for orch commands.
+ Note: yaml is only valid for orch commands.
.. option:: --daemon-output-file OUTPUT_FILE
- When using --format=json|json-pretty, you may specify a file name on the
- host running the daemon to stream output to. Be mindful this is probably
- not the same machine running the ceph command. So to analyze the output, it
- will be necessary to fetch the file once the command completes.
+ When using ``--format=json|json-pretty``, you may specify a file name on the
+ host running the daemon to stream output to. Be mindful that this is probably
+ not the same machine that runs the ceph command, so to analyze the output you
+ will need to fetch the file once the command completes.
- OUTPUT_FILE may also be ``:tmp:``, indicating that the daemon should create
- a temporary file (subject to configurations tmp_dir and tmp_file_template).
+ OUTPUT_FILE may also be ``:tmp:``, indicating that the daemon should create
+ a temporary file (subject to configurations tmp_dir and tmp_file_template).
- The ``tell`` command will output json with the path to the output file
- written to, the size of the file, the result code of the command, and any
- output produced by the command.
+ The ``tell`` command will output JSON with the path to the output file
+ written to, the size of the file, the result code of the command, and any
+ output produced by the command.
- Note: this option is only used for ``ceph tell`` commands.
+ Note: this option is only used for ``ceph tell`` commands.
.. option:: --connect-timeout CLUSTER_TIMEOUT
- Set a timeout for connecting to the cluster.
+ Set a timeout for connecting to the cluster.
.. option:: --no-increasing
- ``--no-increasing`` is off by default. So increasing the osd weight is allowed
- using the ``reweight-by-utilization`` or ``test-reweight-by-utilization`` commands.
- If this option is used with these commands, it will help not to increase osd weight
- even the osd is under utilized.
+ ``--no-increasing`` is off by default, so increasing the OSD weight is allowed
+ when using the ``reweight-by-utilization`` or ``test-reweight-by-utilization``
+ commands. If this option is used with these commands, the OSD weight will not
+ be increased even if the OSD is underutilized.
.. option:: --block
- block until completion (scrub and deep-scrub only)
+ Block until completion (scrub and deep-scrub only).
Availability
============