Crimson (Tech Preview)
======================
-Crimson is the code name of ``crimson-osd``, which is the next generation ``ceph-osd``.
-It is designed to deliver enhanced performance on fast network and storage devices by leveraging modern technologies including DPDK and SPDK.
+Crimson is the next generation ``ceph-osd``. It is designed to deliver enhanced performance on
+fast network and storage devices by leveraging modern technologies including DPDK and SPDK.
-Crimson is intended to be a drop-in replacement for the classic Object Storage Daemon (OSD),
+Crimson is intended to be a drop-in replacement for the classic :ref:`Object Storage Daemon (OSD)<ceph_osd-daemon>`,
aiming to allow seamless migration from existing ``ceph-osd`` deployments.
The second phase of the project introduces :ref:`seastore`, a complete redesign of the object storage backend built around Crimson's native architecture.
.. note::
Crimson is in a tech preview stage and is **not suitable for production use**.
-The Ceph CI/CD pipeline builds containers with both ``ceph-osd-crimson`` and the standard ``ceph-osd``.
+The Ceph CI/CD pipeline builds containers with both ``ceph-osd-crimson``, the Crimson OSD binary,
+and the standard ``ceph-osd``, the classic OSD binary.
Once a branch at commit <sha1> has been built and is available in
Shaman / Quay, you can deploy it using the cephadm instructions outlined below.
The images are available in `quay <https://quay.ceph.io/repository/ceph-ci/ceph?tab=tags>`_.
We recommend using one of the latest available builds, as Crimson evolves rapidly.
-Before deploying a crimson OSD ensure the :ref:`required flags <crimson-required-flags>` are set.
-
The cephadm :ref:`bootstrap command <cephadm_bootstrap_a_new_cluster>` can be used as is; no further
changes are needed for Crimson OSDs. You will likely need to include the ``--allow-mismatched-release`` flag
to use a non-release branch.
.. prompt:: bash #
-
+
cephadm --image quay.ceph.io/ceph-ci/ceph:<sha1> --allow-mismatched-release bootstrap ...
-When :ref:`deploying OSDs <cephadm-deploy-osds>`, use ``--osd-type``
-flag to specify crimson OSDs. By default this value is set to ``classic``. To deploy a crimson OSD,
-set this flag to ``crimson``.
-
-.. prompt:: bash #
-
- ceph orch apply osd --osd-type crimson ...
-
-Alternatively, you can also set the `osd_type
-<https://docs.ceph.com/en/latest/cephadm/services/osd/#ceph.deployment.drive_group.DriveGroupSpec.osd_type>`_
-to ``crimson`` in the :ref:`OSD Service Specification <drivegroups>` file
-like so:
-
-.. code-block:: yaml
+.. _crimson-required-flags:
- service_type: osd
- service_id: default_drive_group
- placement:
- host_pattern: '*'
- spec:
- data_devices:
- all: true
- osd_type: crimson # osd_type should be set to crimson
+Crimson Required Flags
+----------------------
-If the above file is named ``osd-spec.yaml``, it can be used to deploy OSDs like so:
+After starting your cluster, prior to deploying OSDs (in cephadm terms,
+after bootstrap is done and hosts are added), enable Crimson by setting the following flags:
.. prompt:: bash #
- ceph orch apply -i /path/to/osd_spec.yml
+ ceph config set global 'enable_experimental_unrecoverable_data_corrupting_features' crimson
+ ceph osd set-allow-crimson --yes-i-really-mean-it
+ ceph config set mon osd_pool_default_crimson true
+ ceph config set osd crimson_cpu_num <SUITABLE_INT>
-.. _crimson-cpu-allocation:
+The first command enables the ``crimson`` experimental feature.
-Enabling io_uring
-=================
+The second enables the ``allow_crimson`` OSDMap flag. The monitor will
+not allow a Crimson OSD to boot without that flag.
-Seastar backend can beneift significantly from Linux's io_uring interface, providing lower latency and higher throughput.
-io_uring is the defualt reactor backend (see `crimson_reactor_backend` option).
-On some conservative distributions, io_uring may be disabled, preventing Crimson from using it.
-If this configuration change is acceptable in your environment, you may enable io_uring support by running:
+The third causes pools to be created by default with the ``crimson`` flag.
+Crimson pools are restricted to operations supported by Crimson.
+Crimson OSD won't instantiate PGs from non-Crimson pools.
-.. prompt:: bash #
+The fourth ensures that the `Crimson CPU allocation`_ options are set appropriately.
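+To sanity-check the flags before moving on, you can read the settings back.
+The following is an illustrative check, not part of the required procedure;
+the exact ``ceph osd dump`` output format may vary between releases:
+
+.. prompt:: bash #
+
+   ceph config get osd crimson_cpu_num
+   ceph config get mon osd_pool_default_crimson
+   ceph osd dump | grep flags
+
+The ``flags`` line of ``ceph osd dump`` should include ``allow_crimson``.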
- sudo sysctl -w kernel.io_uring_disabled=0
+Optionally, you can also enable :ref:`io_uring<enable-io-uring>`.
+Now you're ready to deploy :ref:`Crimson OSDs<deploying-crimson-osd>`!
+
+.. _crimson-cpu-allocation:
Crimson CPU allocation
-======================
+----------------------
.. note::
-
#. Allocation options **cannot** be changed after deployment.
#. :ref:`vstart.sh <dev_crimson_vstart>` sets these options using the ``--crimson-smp`` flag.
-The ``crimson_cpu_num`` parameter defines the number of CPUs used to serve Seastar reactors.
+The ``crimson_cpu_num`` parameter defines the number of CPUs used to serve `Seastar <https://seastar.io/>`_ reactors.
Each reactor is expected to run on a dedicated CPU core.
This parameter **does not have a default value**.
In addition, using this option requires manually setting the CPU set for each OSD,
which is generally discouraged due to its complexity.
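+For example, to dedicate four CPUs to each OSD cluster-wide, or to override the
+count on a single host using a standard configuration mask (the host name
+``node1`` below is only illustrative):
+
+.. prompt:: bash #
+
+   ceph config set osd crimson_cpu_num 4
+   ceph config set osd/host:node1 crimson_cpu_num 2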
-.. _crimson-required-flags:
+.. _enable-io-uring:
-Crimson Required Flags
-======================
+Enabling io_uring
+-----------------
-After starting your cluster, prior to deploying OSDs (in cephadm terms,
-after bootstrap is done and hosts are added), enable Crimson by setting the following flags:
+Crimson can benefit significantly from Linux's io_uring interface, providing lower latency and higher throughput.
+io_uring is the default reactor backend (see the ``crimson_reactor_backend`` option).
+On some conservative distributions, io_uring may be disabled, preventing Crimson from using it.
+If this configuration change is acceptable in your environment, you may enable io_uring support by running:
.. prompt:: bash #
+
+ sudo sysctl -w kernel.io_uring_disabled=0
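+The setting above does not persist across reboots. If that is desired, you can
+add it to a sysctl configuration file (the file name below is only an example):
+
+.. prompt:: bash #
+
+   echo 'kernel.io_uring_disabled = 0' | sudo tee /etc/sysctl.d/90-io-uring.conf
+   sudo sysctl --system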
- ceph config set global 'enable_experimental_unrecoverable_data_corrupting_features' crimson
- ceph osd set-allow-crimson --yes-i-really-mean-it
- ceph config set mon osd_pool_default_crimson true
- ceph config set osd crimson_cpu_num <SUITABLE_INT>
+.. _deploying-crimson-osd:
-The first command enables the ``crimson`` experimental feature.
+Deploying Crimson OSDs
+----------------------
-The second enables the ``allow_crimson`` OSDMap flag. The monitor will
-not allow ``crimson-osd`` to boot without that flag.
+Before deploying a Crimson OSD, ensure the :ref:`required flags <crimson-required-flags>` are set.
-The third causes pools to be created by default with the ``crimson`` flag.
-Crimson pools are restricted to operations supported by Crimson.
-``crimson-osd`` won't instantiate PGs from non-Crimson pools.
+When :ref:`deploying OSDs <cephadm-deploy-osds>`, use the ``--osd-type``
+flag to specify Crimson OSDs. By default this value is set to ``classic``. To deploy a Crimson OSD,
+set this flag to ``crimson``.
-Lastly, ensure that `Crimson CPU allocation`_ flags were set appropriately.
+.. prompt:: bash #
+
+ ceph orch apply osd --osd-type crimson ...
+
+Alternatively, you can also set the `osd_type
+<https://docs.ceph.com/en/latest/cephadm/services/osd/#ceph.deployment.drive_group.DriveGroupSpec.osd_type>`_
+to ``crimson`` in the :ref:`OSD Service Specification <drivegroups>` file
+like so:
+
+.. code-block:: yaml
+
+ service_type: osd
+ service_id: default_drive_group
+ placement:
+ host_pattern: '*'
+ spec:
+ data_devices:
+ all: true
+ osd_type: crimson # osd_type should be set to crimson
+
+If the above file is named ``osd-spec.yaml``, it can be used to deploy OSDs like so:
+
+.. prompt:: bash #
+
+ ceph orch apply -i /path/to/osd_spec.yml
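+Once the OSDs have been created, you can confirm that they are up and running,
+for example via the orchestrator and the cluster map:
+
+.. prompt:: bash #
+
+   ceph orch ps --daemon-type osd
+   ceph osd tree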
.. _crimson-bakends:
SeaStore is the primary native object store for Crimson OSD, though it is not yet the default because support is still in its early stages.
It is built with the Seastar framework and adheres to its asynchronous, shard-based architecture.
-.. note::
- The Orchastrator's ``apply osd --method`` command does not currently support deploying
- Crimson OSDs with SeaStore directly on the physical device with ``--method raw``.
- Use the default ``lvm`` method instead.
-
When :ref:`deploying OSDs <cephadm-deploy-osds>`, use the ``--objectstore`` flag to specify the object store type.
The default value is ``bluestore``. To deploy a Crimson OSD with SeaStore, set this flag to ``seastore``.
.. prompt:: bash #
-
+
ceph orch apply osd --osd-type crimson --objectstore seastore ...
Alternatively, you can also set the ``objectstore`` to ``seastore`` in the :ref:`OSD Service Specification <drivegroups>` file
osd_type: crimson
objectstore: seastore # objectstore should be set to seastore
+
+ .. note::
+ The Orchestrator's ``apply osd --method`` command does not currently support deploying
+ Crimson OSDs with SeaStore directly on the physical device with ``--method raw``.
+ Use the default ``lvm`` method instead.
+
.. describe:: cyanstore
CyanStore is inspired by ``memstore`` from the classic OSD, offering a lightweight, in-memory object store model.
Asock command
-------------
-An admin socket command is offered for dumping metrics::
+An admin socket command is offered for dumping metrics:
.. prompt:: bash #
-
- $ ceph tell osd.0 dump_metrics
- $ ceph tell osd.0 dump_metrics reactor_utilization
+
+ ceph tell osd.0 dump_metrics
+ ceph tell osd.0 dump_metrics reactor_utilization
Here ``reactor_utilization`` is an optional string allowing us to filter
the dumped metrics by prefix.
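+For example, to sample the reactor utilization of ``osd.0`` once per second
+(assuming a shell with ``watch`` available):
+
+.. prompt:: bash #
+
+   watch -n 1 "ceph tell osd.0 dump_metrics reactor_utilization"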