ceph df
-The **RAW STORAGE** section of the output provides an overview of the
-amount of storage that is managed by your cluster.
+The output of ``ceph df`` looks like this::
-- **CLASS:** The class of OSD device (or the total for the cluster)
+ CLASS SIZE AVAIL USED RAW USED %RAW USED
+ ssd 202 GiB 200 GiB 2.0 GiB 2.0 GiB 1.00
+ TOTAL 202 GiB 200 GiB 2.0 GiB 2.0 GiB 1.00
+
+ --- POOLS ---
+ POOL ID PGS STORED (DATA) (OMAP) OBJECTS USED (DATA) (OMAP) %USED MAX AVAIL QUOTA OBJECTS QUOTA BYTES DIRTY USED COMPR UNDER COMPR
+ device_health_metrics 1 1 242 KiB 15 KiB 227 KiB 4 251 KiB 24 KiB 227 KiB 0 297 GiB N/A N/A 4 0 B 0 B
+ cephfs.a.meta 2 32 6.8 KiB 6.8 KiB 0 B 22 96 KiB 96 KiB 0 B 0 297 GiB N/A N/A 22 0 B 0 B
+ cephfs.a.data 3 32 0 B 0 B 0 B 0 0 B 0 B 0 B 0 99 GiB N/A N/A 0 0 B 0 B
+ test 4 32 22 MiB 22 MiB 50 KiB 248 19 MiB 19 MiB 50 KiB 0 297 GiB N/A N/A 248 0 B 0 B
+
+- **CLASS:** The device class of the OSD (for example, ``ssd`` or ``hdd``).
- **SIZE:** The amount of storage capacity managed by the cluster.
- **AVAIL:** The amount of free space available in the cluster.
-- **USED:** The amount of raw storage consumed by user data.
-- **RAW USED:** The amount of raw storage consumed by user data, internal overhead, or reserved capacity.
+- **USED:** The amount of raw storage consumed by user data (excluding
+  BlueStore's database).
+- **RAW USED:** The amount of raw storage consumed by user data, internal
+ overhead, or reserved capacity.
- **%RAW USED:** The percentage of raw storage used. Use this number in
conjunction with the ``full ratio`` and ``near full ratio`` to ensure that
you are not reaching your cluster's capacity. See `Storage Capacity`_ for
additional details.
+
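+The ``full ratio`` and ``near full ratio`` mentioned above can be read
+directly from the OSD map. As a quick check (only a sketch; the values
+shown are whatever your cluster has configured), run:
+
+.. prompt:: bash #
+
+   ceph osd dump | grep ratio
+
+On recent releases this prints the ``full_ratio``, ``backfillfull_ratio``
+and ``nearfull_ratio`` currently in effect, which you can compare against
+**%RAW USED**.
+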
+**POOLS:**
+
The **POOLS** section of the output provides a list of pools and the notional
usage of each pool. The output from this section **DOES NOT** reflect replicas,
clones or snapshots. For example, if you store an object with 1MB of data, the
notional usage will be 1MB, but the actual usage may be 2MB or more depending
-on the number of replicas, clones and snapshots.
+on the number of replicas, clones and snapshots.
+
+- **POOL:** The name of the pool.
+- **ID:** The unique numerical ID of the pool.
+- **STORED:** The amount of data actually stored in the pool by the user.
+  This is similar to the USED column of earlier versions of Ceph, but the
+  calculation (for BlueStore) is more precise (gaps are properly handled).
+
+  - **(DATA):** Usage for RBD (RADOS Block Device), CephFS file data, and
+    RGW (RADOS Gateway) object data.
+  - **(OMAP):** Key-value pairs, used primarily by CephFS and RGW (RADOS
+    Gateway) for metadata storage.
+
+- **OBJECTS:** The notional number of objects stored per pool. "Notional" is
+ defined above in the paragraph immediately under "POOLS".
+- **USED:** The space allocated for a pool over all OSDs. This includes
+ replication, allocation granularity, and erasure-coding overhead. Compression
+ savings and object content gaps are also taken into account. BlueStore's
+ database is not included in this amount.
+
+  - **(DATA):** Object usage for RBD (RADOS Block Device), CephFS file
+    data, and RGW (RADOS Gateway) object data.
+  - **(OMAP):** Object key-value pairs, used primarily by CephFS and RGW
+    (RADOS Gateway) for metadata storage.
-- **NAME:** The name of the pool.
-- **ID:** The pool ID.
-- **USED:** The notional amount of data stored in kilobytes, unless the number
- appends **M** for megabytes or **G** for gigabytes.
- **%USED:** The notional percentage of storage used per pool.
- **MAX AVAIL:** An estimate of the notional amount of data that can be written
to this pool.
-- **OBJECTS:** The notional number of objects stored per pool.
+- **QUOTA OBJECTS:** The quota, if any, on the number of objects that can
+  be stored in the pool.
+- **QUOTA BYTES:** The quota, if any, on the number of bytes that can be
+  stored in the pool.
+- **DIRTY:** "DIRTY" is meaningful only when cache tiering is in use. If
+  cache tiering is in use, this column lists the number of objects that
+  have been written to the cache pool but have not yet been flushed to the
+  base pool.
+- **USED COMPR:** The amount of space allocated for compressed data (that
+  is, the compressed data plus all of the allocation, replication and
+  erasure-coding overhead).
+- **UNDER COMPR:** The amount of data that has passed through compression
+  (summed over all replicas) and that was found beneficial enough to be
+  stored in compressed form.
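+
+As a rough illustration of the difference between **STORED** and **USED**,
+consider writing a single 1 MiB object into a replicated pool with three
+replicas (only a sketch: the pool name ``test`` matches the sample output
+above, and the object and file names here are arbitrary). STORED should
+grow by roughly 1 MiB, while USED should grow by roughly 3 MiB once
+replication is taken into account:
+
+.. prompt:: bash #
+
+   dd if=/dev/zero of=/tmp/obj1.bin bs=1M count=1
+   rados -p test put obj1 /tmp/obj1.bin
+   ceph df | grep test
+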
-.. note:: The numbers in the **POOLS** section are notional. They are not
- inclusive of the number of replicas, snapshots or clones. As a result,
- the sum of the **USED** and **%USED** amounts will not add up to the
- **USED** and **%USED** amounts in the **RAW** section of the
- output.
-.. note:: The **MAX AVAIL** value is a complicated function of the
- replication or erasure code used, the CRUSH rule that maps storage
- to devices, the utilization of those devices, and the configured
- mon_osd_full_ratio.
+.. note:: The numbers in the POOLS section are notional. They are not
+ inclusive of the number of replicas, snapshots or clones. As a result, the
+ sum of the USED and %USED amounts will not add up to the USED and %USED
+ amounts in the RAW section of the output.
+.. note:: The MAX AVAIL value is a complicated function of the replication
+ or erasure code used, the CRUSH rule that maps storage to devices, the
+ utilization of those devices, and the configured ``mon_osd_full_ratio``.
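+
+If you want to consume these statistics from a script or a monitoring
+system, the same information is available in machine-readable form; the
+``ceph`` CLI accepts a ``--format`` option (for example ``json-pretty``)
+on most commands:
+
+.. prompt:: bash #
+
+   ceph df --format json-pretty
+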
Checking OSD Status
===================
-You can check OSDs to ensure they are ``up`` and ``in`` by executing::
+You can check OSDs to ensure they are ``up`` and ``in`` by executing the
+following command:
+
+.. prompt:: bash #
- ceph osd stat
+
+   ceph osd stat
-Or::
+Or:
+
+.. prompt:: bash #
- ceph osd dump
+
+   ceph osd dump
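+
+The dump includes one entry per OSD. If ``jq`` is available, one way to
+pull out just the up/in state of each OSD is something like the following
+(a sketch; the ``osds``, ``osd``, ``up``, and ``in`` field names reflect
+the JSON output of recent releases):
+
+.. prompt:: bash #
+
+   ceph osd dump --format json | jq '.osds[] | {"osd": .osd, "up": .up, "in": .in}'
+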
-You can also check view OSDs according to their position in the CRUSH map. ::
+You can also view OSDs according to their position in the CRUSH map by
+using the following command:
- ceph osd tree
+.. prompt:: bash #
+
+ ceph osd tree
Ceph will print out a CRUSH tree with a host, its OSDs, whether they are up
-and their weight. ::
-
- #ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
- -1 3.00000 pool default
- -3 3.00000 rack mainrack
- -2 3.00000 host osd-host
- 0 ssd 1.00000 osd.0 up 1.00000 1.00000
- 1 ssd 1.00000 osd.1 up 1.00000 1.00000
- 2 ssd 1.00000 osd.2 up 1.00000 1.00000
+and their weight:
+
+.. code-block:: bash
+
+ #ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
+ -1 3.00000 pool default
+ -3 3.00000 rack mainrack
+ -2 3.00000 host osd-host
+ 0 ssd 1.00000 osd.0 up 1.00000 1.00000
+ 1 ssd 1.00000 osd.1 up 1.00000 1.00000
+ 2 ssd 1.00000 osd.2 up 1.00000 1.00000
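+
+When you are chasing a problem, it is often only a subset of OSDs that
+matters. Recent releases also accept a state filter on ``ceph osd tree``
+(a sketch; check the help output of your Ceph version). For example, to
+list only the OSDs that are down:
+
+.. prompt:: bash #
+
+   ceph osd tree down
+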
For a detailed discussion, refer to `Monitoring OSDs and Placement Groups`_.