From 555375f8a2fa7c3faf024d2ce8dd422ace14c9fa Mon Sep 17 00:00:00 2001
From: Willem Jan Withagen
Date: Wed, 30 Oct 2019 13:17:08 +0100
Subject: [PATCH] doc/ceph-volume: docs for zfs/inventory

fixes: https://tracker.ceph.com/issues/42722

Signed-off-by: Willem Jan Withagen
(cherry picked from commit 799673a580da20ffffdc1f2922fa5f08123a86cb)
---
 doc/ceph-volume/index.rst         |   5 +
 doc/ceph-volume/zfs/index.rst     |  31 ++++++
 doc/ceph-volume/zfs/inventory.rst |  19 ++++
 doc/dev/ceph-volume/index.rst     |   1 +
 doc/dev/ceph-volume/zfs.rst       | 176 ++++++++++++++++++++++++++++++
 5 files changed, 232 insertions(+)
 create mode 100644 doc/ceph-volume/zfs/index.rst
 create mode 100644 doc/ceph-volume/zfs/inventory.rst
 create mode 100644 doc/dev/ceph-volume/zfs.rst

diff --git a/doc/ceph-volume/index.rst b/doc/ceph-volume/index.rst
index 7ffc3dd61b7c0..36ce9cc8910ee 100644
--- a/doc/ceph-volume/index.rst
+++ b/doc/ceph-volume/index.rst
@@ -15,8 +15,11 @@ follow a predictable, and robust way of preparing, activating, and starting OSDs
 There is currently support for ``lvm``, and plain disks (with GPT partitions)
 that may have been deployed with ``ceph-disk``.
 
+``zfs`` support is available for running a FreeBSD cluster.
+
 * :ref:`ceph-volume-lvm`
 * :ref:`ceph-volume-simple`
+* :ref:`ceph-volume-zfs`
 
 **Node inventory**
 
@@ -76,3 +79,5 @@ and ``ceph-disk`` is fully disabled. Encryption is fully supported.
    simple/activate
    simple/scan
    simple/systemd
+   zfs/index
+   zfs/inventory
diff --git a/doc/ceph-volume/zfs/index.rst b/doc/ceph-volume/zfs/index.rst
new file mode 100644
index 0000000000000..c06228de91dc0
--- /dev/null
+++ b/doc/ceph-volume/zfs/index.rst
@@ -0,0 +1,31 @@
+.. _ceph-volume-zfs:
+
+``zfs``
+=======
+Implements the functionality needed to deploy OSDs from the ``zfs`` subcommand:
+``ceph-volume zfs``
+
+The current implementation only works for ZFS on FreeBSD.
+
+**Command Line Subcommands**
+
+* :ref:`ceph-volume-zfs-inventory`
+
+.. not yet implemented
+.. * :ref:`ceph-volume-zfs-prepare`
+
+.. * :ref:`ceph-volume-zfs-activate`
+
+.. * :ref:`ceph-volume-zfs-create`
+
+.. * :ref:`ceph-volume-zfs-list`
+
+.. * :ref:`ceph-volume-zfs-scan`
+
+**Internal functionality**
+
+There are other aspects of the ``zfs`` subcommand that are internal and not
+exposed to the user. These sections explain how these pieces work together,
+clarifying the workflows of the tool.
+
+:ref:`zfs <ceph-volume-zfs-api>`
diff --git a/doc/ceph-volume/zfs/inventory.rst b/doc/ceph-volume/zfs/inventory.rst
new file mode 100644
index 0000000000000..fd00325b6a885
--- /dev/null
+++ b/doc/ceph-volume/zfs/inventory.rst
@@ -0,0 +1,19 @@
+.. _ceph-volume-zfs-inventory:
+
+``inventory``
+=============
+The ``inventory`` subcommand queries a host's disk inventory through GEOM and
+provides hardware information and metadata on every physical device.
+
+This only works on a FreeBSD platform.
+
+By default the command returns a short, human-readable report of all physical disks.
+
+For programmatic consumption of this report, pass ``--format json`` to generate a
+JSON-formatted report. This report includes extensive information on the
+physical drives, such as disk metadata (like model and size), which logical
+volumes are used by Ceph, whether the disk is usable by Ceph, and, if it is
+not, the reasons why.
+
+A device path can be specified to report extensive information on that device,
+in both plain and JSON format.
diff --git a/doc/dev/ceph-volume/index.rst b/doc/dev/ceph-volume/index.rst
index b6f9dc04517b2..5feef80893502 100644
--- a/doc/dev/ceph-volume/index.rst
+++ b/doc/dev/ceph-volume/index.rst
@@ -10,4 +10,5 @@ ceph-volume developer documentation
 
    plugins
    lvm
+   zfs
    systemd
diff --git a/doc/dev/ceph-volume/zfs.rst b/doc/dev/ceph-volume/zfs.rst
new file mode 100644
index 0000000000000..ca961698b22db
--- /dev/null
+++ b/doc/dev/ceph-volume/zfs.rst
@@ -0,0 +1,176 @@
+
+.. _ceph-volume-zfs-api:
+
+ZFS
+===
+The backend of ``ceph-volume zfs`` is ZFS. It relies heavily on the usage of
+tags, which is a way for ZFS to allow extending its volume metadata. These
+values can later be queried against devices, and that is how they get
+discovered later.
+
+Currently this interface is only usable when running on FreeBSD.
+
+.. warning:: These APIs are not meant to be public, but are documented so that
+             it is clear what the tool is doing behind the scenes. Do not alter
+             any of these values.
+
+
+.. _ceph-volume-zfs-tag-api:
+
+Tag API
+-------
+The process of identifying filesystems, volumes and pools as part of Ceph relies
+on applying tags to all volumes. It follows a naming convention for the
+namespace that looks like::
+
+    ceph.<tag name>=<tag value>
+
+All tags are prefixed by the ``ceph`` keyword to claim ownership of that
+namespace and make it easily identifiable. This is how the OSD ID would be used
+in the context of zfs tags::
+
+    ceph.osd_id=0
+
+Tags on filesystems are stored as properties.
+Tags on a zpool are stored in the comment property as a concatenated list
+separated by ``;``.
+
+.. _ceph-volume-zfs-tags:
+
+Metadata
+--------
+The following describes all the metadata from Ceph OSDs that is stored on a
+ZFS filesystem, volume, or pool:
+
+
+``type``
+--------
+Describes whether the device is an OSD or a journal, with the ability to expand
+to other types when supported.
+
+Example::
+
+    ceph.type=osd
+
+
+``cluster_fsid``
+----------------
+Example::
+
+    ceph.cluster_fsid=7146B649-AE00-4157-9F5D-1DBFF1D52C26
+
+
+``data_device``
+---------------
+Example::
+
+    ceph.data_device=/dev/ceph/data-0
+
+
+``data_uuid``
+-------------
+Example::
+
+    ceph.data_uuid=B76418EB-0024-401C-8955-AE6919D45CC3
+
+
+``journal_device``
+------------------
+Example::
+
+    ceph.journal_device=/dev/ceph/journal-0
+
+
+``journal_uuid``
+----------------
+Example::
+
+    ceph.journal_uuid=2070E121-C544-4F40-9571-0B7F35C6CB2B
+
+
+``osd_fsid``
+------------
+Example::
+
+    ceph.osd_fsid=88ab9018-f84b-4d62-90b4-ce7c076728ff
+
+
+``osd_id``
+----------
+Example::
+
+    ceph.osd_id=1
+
+
+``block_device``
+----------------
+Only used on :term:`bluestore` backends. Captures the path to the logical
+volume.
+
+Example::
+
+    ceph.block_device=/dev/gpt/block-0
+
+
+``block_uuid``
+--------------
+Only used on :term:`bluestore` backends. Captures either the logical volume
+UUID or the partition UUID.
+
+Example::
+
+    ceph.block_uuid=E5F041BB-AAD4-48A8-B3BF-31F7AFD7D73E
+
+
+``db_device``
+-------------
+Only used on :term:`bluestore` backends. Captures the path to the logical
+volume.
+
+Example::
+
+    ceph.db_device=/dev/gpt/db-0
+
+
+``db_uuid``
+-----------
+Only used on :term:`bluestore` backends. Captures either the logical volume
+UUID or the partition UUID.
+
+Example::
+
+    ceph.db_uuid=F9D02CF1-31AB-4910-90A3-6A6302375525
+
+
+``wal_device``
+--------------
+Only used on :term:`bluestore` backends. Captures the path to the logical
+volume.
+
+Example::
+
+    ceph.wal_device=/dev/gpt/wal-0
+
+
+``wal_uuid``
+------------
+Only used on :term:`bluestore` backends. Captures either the logical volume
+UUID or the partition UUID.
+
+Example::
+
+    ceph.wal_uuid=A58D1C68-0D6E-4CB3-8E99-B261AD47CC39
+
+
+``compression``
+---------------
+Compression can always be enabled on a volume or filesystem using the native
+ZFS settings; this can be activated during creation of the volume or
+filesystem.
+When activated by ``ceph-volume zfs`` this tag will be created.
+Compression set manually AFTER ``ceph-volume`` has run will go unnoticed,
+unless this tag is also manually set.
+
+Example for an enabled compression device::
+
+    ceph.vdo=1
-- 
2.39.5
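
The Tag API section above describes how tags on a zpool are stored in its
comment property as a ``;``-separated list of ``ceph.<tag name>=<tag value>``
pairs. As a minimal sketch of how such a concatenated list could be turned back
into a tag mapping, assuming only that convention (the helper name here is
hypothetical and not part of ``ceph-volume``):

```python
def parse_zpool_comment(comment):
    """Parse a ';'-separated zpool comment such as
    'ceph.osd_id=0;ceph.type=osd' into a dict of ceph tags.

    Hypothetical helper for illustration only; not part of ceph-volume.
    """
    tags = {}
    for item in comment.split(';'):
        item = item.strip()
        if not item or '=' not in item:
            continue  # skip empty or malformed entries
        key, _, value = item.partition('=')
        if key.startswith('ceph.'):  # only the ceph-owned namespace
            tags[key] = value
    return tags


if __name__ == '__main__':
    comment = 'ceph.osd_id=0;ceph.type=osd'
    print(parse_zpool_comment(comment))  # {'ceph.osd_id': '0', 'ceph.type': 'osd'}
```

Values come back as strings (e.g. ``'0'`` for the OSD ID), mirroring how ZFS
properties are stored; any typing is left to the caller.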