Most labels already existed, but add labels in 2 files.
Add missing closing quotation mark in
rados/troubleshooting/log-and-debug.rst.
Signed-off-by: Ville Ojamo <14869000+bluikko@users.noreply.github.com>
to the configured port 6799. Use the option "--processor.jaeger-compact.server-host-port=6799" for manual Jaeger
deployments.
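For example, a manual all-in-one Jaeger deployment might pass the flag as
follows (the binary name and invocation here are illustrative only):

.. prompt:: bash $

   ./jaeger-all-in-one --processor.jaeger-compact.server-host-port=6799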
+.. _jaegertracing-enable:
HOW TO ENABLE TRACING IN CEPH
-----------------------------
ceph config-key set <key> {<val>}
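For example, with a purely hypothetical key and value (the exact keys used
for tracing depend on the deployment):

.. prompt:: bash $

   ceph config-key set example/tracing/enabled true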
+.. _man-ceph-daemon:
daemon
------
The ``[ident|fault]`` parameter determines which kind of light will blink. By
default, the `identification` light is used.
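A minimal sketch, assuming the ``ceph device light`` command and a
hypothetical device ID:

.. prompt:: bash $

   ceph device light on SEAGATE_ST31000524AS_5VP8JLY4 ident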
-.. note:: This command works only if the Cephadm or the Rook `orchestrator
- <https://docs.ceph.com/docs/master/mgr/orchestrator/#orchestrator-cli-module>`_
- module is enabled. To see which orchestrator module is enabled, run the
- following command:
+.. note:: This command works only if the Cephadm or the Rook
+ :ref:`orchestrator <orchestrator-cli-module>` module is enabled. To see
+ which orchestrator module is enabled, run the following command:
.. prompt:: bash $
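   ceph orch status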
- a fatal signal has been raised or
- an assertion within Ceph code has been triggered or
- sending in-memory logs to the output log has been manually triggered.
- Consult `the portion of the "Ceph Administration Tool documentation
+ Consult :ref:`the portion of the "Ceph Administration Tool" documentation
that provides an example of how to submit admin socket commands
- <http://docs.ceph.com/en/latest/man/8/ceph/#daemon>`_ for more detail.
+ <man-ceph-daemon>` for more detail.
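For example, a dump of the in-memory logs can be triggered manually through
the admin socket of a hypothetical ``osd.0`` daemon:

.. prompt:: bash $

   ceph daemon osd.0 log dump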
Log levels and memory levels can be set either together or separately. If a
subsystem is assigned a single value, then that value determines both the log
end
-If `tracing is enabled <https://docs.ceph.com/en/latest/jaegertracing/#how-to-enable-tracing-in-ceph/>`_ on the RGW, the value of Request.Trace.Enable is true, so we should disable tracing for all other requests that do not match the bucket name.
+If :ref:`tracing is enabled <jaegertracing-enable>` on the RGW, the value of Request.Trace.Enable is true, so we should disable tracing for all other requests that do not match the bucket name.
In the ``prerequest`` context:
.. code-block:: lua
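
   -- Sketch only: Request.Bucket.Name is assumed to be the standard RGW Lua
   -- request field, and "my-bucket" is a hypothetical bucket name.
   if Request.Bucket.Name ~= "my-bucket" then
       Request.Trace.Enable = false
   end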
Configuring a zone involves specifying a series of Ceph Object Gateway
pools. For consistency, we recommend using a pool prefix that is the
-same as the zone name. See
-`Pools <http://docs.ceph.com/en/latest/rados/operations/pools/#pools>`__
-for details of configuring pools.
+same as the zone name. See :ref:`rados_pools` for details of
+configuring pools.
To set a zone, create a JSON object consisting of the pools, save the
object to a file (e.g., ``zone.json``); then, run the following
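A sketch of the subsequent ``zone set`` step, assuming a hypothetical zone
named ``us-east``:

.. prompt:: bash $

   radosgw-admin zone set --rgw-zone=us-east --infile zone.json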
will create that pool with the default values from ``osd pool default pg num``
and ``osd pool default pgp num``. These defaults are sufficient for some pools,
but others (especially those listed in ``placement_pools`` for the bucket index
-and data) will require additional tuning. See `Pools
-<http://docs.ceph.com/en/latest/rados/operations/pools/#pools>`__ for details
-on pool creation.
+and data) will require additional tuning. See :ref:`rados_pools` for details on
+pool creation.
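For example, a bucket-index pool could be created with explicit placement
group counts (the pool name and PG values here are illustrative):

.. prompt:: bash $

   ceph osd pool create us-east.rgw.buckets.index 32 32
   ceph osd pool application enable us-east.rgw.buckets.index rgw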
.. _radosgw-pool-namespaces:
ceph osd lspools
If it does not exist, instructions for creating pools can be found on the
- `RADOS pool operations page
- <http://docs.ceph.com/en/latest/rados/operations/pools/>`_.
+ :ref:`RADOS pool operations page <rados_pools>`.
#. As ``root``, on an iSCSI gateway node, create a file named
``iscsi-gateway.cfg`` in the ``/etc/ceph/`` directory:
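A minimal sketch of such a file (the IP addresses and the ``api_secure``
setting are illustrative assumptions):

.. code-block:: ini

   [config]
   cluster_name = ceph
   gateway_keyring = ceph.client.admin.keyring
   api_secure = false
   trusted_ip_list = 192.168.0.10,192.168.0.11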
For more
information on how to effectively use a mix of fast drives and slow drives in
-your Ceph cluster, see the `block and block.db`_ section of the Bluestore
-Configuration Reference.
+your Ceph cluster, see the :ref:`block and block.db <bluestore-mixed-device-config>`
+section of the BlueStore Configuration Reference.
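For example, the DB can be placed on a faster device when preparing an OSD
(the device paths here are illustrative):

.. prompt:: bash $

   ceph-volume lvm prepare --bluestore --data /dev/sdb --block.db /dev/nvme0n1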
Hard Disk Drives
----------------
-.. _block and block.db: https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#block-and-block-db
.. _Ceph blog: https://ceph.com/community/blog/
.. _Ceph Write Throughput 1: http://ceph.com/community/ceph-performance-part-1-disk-controller-write-throughput/
.. _Ceph Write Throughput 2: http://ceph.com/community/ceph-performance-part-2-write-throughput-without-ssd-journals/