From: Ville Ojamo <14869000+bluikko@users.noreply.github.com>
Date: Tue, 5 Aug 2025 14:45:05 +0000 (+0700)
Subject: doc/rados: Use ref instead of relative external links
X-Git-Url: http://git.apps.os.sepia.ceph.com/?a=commitdiff_plain;h=d4be6b3245267ec02f6a37f08c3977d76e08ca93;p=ceph.git

doc/rados: Use ref instead of relative external links

Instead of external links, use :ref: where destination labels already
exist in:
operations/erasure-code.rst
operations/pools.rst
troubleshooting/troubleshooting-osd.rst

Use link text generation where the result is reasonably close to the
previous manual link text.

Delete some unused link definitions.

Signed-off-by: Ville Ojamo <14869000+bluikko@users.noreply.github.com>
---

diff --git a/doc/rados/operations/erasure-code.rst b/doc/rados/operations/erasure-code.rst
index 2c381112bb6a6..f27d00e5e9e76 100644
--- a/doc/rados/operations/erasure-code.rst
+++ b/doc/rados/operations/erasure-code.rst
@@ -4,7 +4,7 @@
 Erasure code
 ==============
 
-By default, Ceph `pools <../pools>`_ are created with the type "replicated". In
+By default, Ceph :ref:`rados_pools` are created with the type "replicated". In
 replicated-type pools, every object is copied to multiple disks. This multiple
 copying is the method of data protection known as "replication".
 
@@ -169,8 +169,8 @@ no two *chunks* are stored in the same rack.
          +------+
 
 
-More information can be found in the `erasure-code profiles
-<../erasure-code-profile>`_ documentation.
+More information can be found in the :ref:`erasure-code-profiles`
+documentation.
 
 
 Erasure Coding with Overwrites
@@ -204,7 +204,7 @@ erasure-coded pool as the ``--data-pool`` during image creation:
    rbd create --size 1G --data-pool ec_pool replicated_pool/image_name
 
 For CephFS, an erasure-coded pool can be set as the default data pool during
-file system creation or via `file layouts <../../../cephfs/file-layouts>`_.
+file system creation or via :ref:`file-layouts`.
 
 Erasure-coded pool overhead
 ---------------------------
diff --git a/doc/rados/operations/pools.rst b/doc/rados/operations/pools.rst
index d12758b855680..c3c8c6ef61fd2 100644
--- a/doc/rados/operations/pools.rst
+++ b/doc/rados/operations/pools.rst
@@ -15,8 +15,8 @@ Pools provide:
 
   For example: a typical configuration stores three replicas (copies) of
   each RADOS object (that is: ``size = 3``), but you can configure
-  the number of replicas on a per-pool basis. For `erasure-coded pools
-  <../erasure-code>`_, resilience is defined as the number of coding (aka parity) chunks
+  the number of replicas on a per-pool basis. For :ref:`erasure-coded pools
+  <ecpool>`, resilience is defined as the number of coding (aka parity) chunks
   (for example, ``m = 2`` in the default erasure code profile).
 
 - **Placement Groups**: The :ref:`autoscaler <pg-autoscaler>` sets the number
@@ -99,12 +99,12 @@ To retrieve even more information, you can execute this command with the ``--for
 Creating a Pool
 ===============
 
-Before creating a pool, consult `Pool, PG and CRUSH Config Reference`_. The
+Before creating a pool, consult :ref:`rados_config_pool_pg_crush_ref`. The
 Ceph central configuration database contains a default setting (namely,
 ``osd_pool_default_pg_num``) that determines the number of PGs assigned to a
 new pool if no specific value has been specified. It is possible to change
 this value from its default. For more on the subject of setting the number of
-PGs per pool, see `setting the number of placement groups`_.
+PGs per pool, see :ref:`setting the number of placement groups`.
 
 .. note:: In Luminous and later releases, each pool must be associated
    with the application that will be using the pool. For more information, see
@@ -163,7 +163,7 @@ following:
 
    The pool's data protection strategy. This can be either ``replicated``
   (like RAID1 and RAID10) or ``erasure`` (a kind
-   of `generalized parity RAID <../erasure-code>`_ strategy like RAID6 but
+   of :ref:`generalized parity RAID <ecpool>` strategy like RAID6 but
    more flexible). A ``replicated`` pool yields less usable capacity for a
   given amount of raw storage but is suitable for all Ceph components and
   use cases.
@@ -185,12 +185,12 @@ following:
 
    :Type: String
    :Required: No.
-   :Default: For ``replicated`` pools, it is by default the rule specified by the :confval:`osd_pool_default_crush_rule` configuration option. This rule must exist. For ``erasure`` pools, it is the ``erasure-code`` rule if the ``default`` `erasure code profile`_ is used or the ``{pool-name}`` rule if not. This rule will be created implicitly if it doesn't already exist.
+   :Default: For ``replicated`` pools, it is by default the rule specified by the :confval:`osd_pool_default_crush_rule` configuration option. This rule must exist. For ``erasure`` pools, it is the ``erasure-code`` rule if the ``default`` :ref:`erasure code profile <erasure-code-profiles>` is used or the ``{pool-name}`` rule if not. This rule will be created implicitly if it doesn't already exist.
 
 .. describe:: [erasure-code-profile=profile]
 
-   For ``erasure`` pools only. Instructs Ceph to use the specified `erasure
-   code profile`_. This profile must be an existing profile as defined via
+   For ``erasure`` pools only. Instructs Ceph to use the specified :ref:`erasure
+   code profile <erasure-code-profiles>`. This profile must be an existing profile as defined via
    the dashboard or invoking ``osd erasure-code-profile set``. Note that
    changes to the EC profile of a pool after creation do *not* take effect.
    To change the EC profile of an existing pool one must modify the pool to
@@ -199,8 +199,6 @@ following:
    :Type: String
    :Required: No.
 
-.. _erasure code profile: ../erasure-code-profile
-
 .. describe:: --autoscale-mode=<on,off,warn>
 
 - ``on``: the Ceph cluster will autotune changes to the number of PGs in the pool based on actual usage.
@@ -275,9 +273,7 @@ To remove a pool, you must set the ``mon_allow_pool_delete`` flag to ``true`` in
 central configuration, otherwise the Ceph monitors will refuse to remove
 pools.
 
-For more information, see `Monitor Configuration`_.
-
-.. _Monitor Configuration: ../../configuration/mon-config-ref
+For more information, see :ref:`Monitor Configuration <monitor-config-reference>`.
 
 If there are custom CRUSH rules that are no longer in use or needed, consider
 deleting those rules.
@@ -420,7 +416,7 @@ You may set values for the following keys:
 
 .. describe:: min_size
 
-   :Description: Sets the minimum number of active replicas (or shards) required for PGs to be active and thus for I/O operations to proceed. For further details, see `Setting the Number of RADOS Object Replicas`_. For erasure-coded pools, this should be set to a value greater than ``K``. If I/O is allowed with only ``K`` shards available, there will be no redundancy and data will be lost in the event of an additional, permanent OSD failure. For more information, see `Erasure Code <../erasure-code>`_
+   :Description: Sets the minimum number of active replicas (or shards) required for PGs to be active and thus for I/O operations to proceed. For further details, see `Setting the Number of RADOS Object Replicas`_. For erasure-coded pools, this should be set to a value greater than ``K``. If I/O is allowed with only ``K`` shards available, there will be no redundancy and data will be lost in the event of an additional, permanent OSD failure. For more information, see :ref:`ecpool`
 
    :Type: Integer
    :Version: ``0.54`` and above
@@ -902,9 +898,7 @@ Here are the break downs of the argument:
 :Type: String
 :Required: Yes.
 
-.. _Pool, PG and CRUSH Config Reference: ../../configuration/pool-pg-config-ref
 .. _Bloom Filter: https://en.wikipedia.org/wiki/Bloom_filter
-.. _setting the number of placement groups: ../placement-groups#set-the-number-of-placement-groups
 .. _Erasure Coding with Overwrites: ../erasure-code#erasure-coding-with-overwrites
 .. _Block Device Commands: ../../../rbd/rados-rbd-cmds/#create-a-block-device-pool
 .. _pgcalc: ../pgcalc
diff --git a/doc/rados/troubleshooting/troubleshooting-osd.rst b/doc/rados/troubleshooting/troubleshooting-osd.rst
index 37de718a01288..e8a67af053047 100644
--- a/doc/rados/troubleshooting/troubleshooting-osd.rst
+++ b/doc/rados/troubleshooting/troubleshooting-osd.rst
@@ -11,7 +11,7 @@ is a monitor quorum.
 
 If the monitors don't have a quorum or if there are errors with the monitor
 status, address the monitor issues before proceeding by consulting the material
-in `Troubleshooting Monitors <../troubleshooting-mon>`_.
+in :ref:`rados-troubleshooting-mon`.
 
 Next, check your networks to make sure that they are running properly. Networks
 can have a significant impact on OSD operation and performance. Look for
@@ -421,7 +421,7 @@ the full OSD.
    copy of your data on at least one OSD. Deleting placement group directories
    is a rare and extreme intervention. It is not to be undertaken lightly.
 
-See `Monitor Config Reference`_ for more information.
+See :ref:`monitor-config-reference` for more information.
 
 
 OSDs are Slow/Unresponsive
@@ -812,11 +812,8 @@ being marked ``out`` (regardless of the current value of
 
 
 
 .. _iostat: https://en.wikipedia.org/wiki/Iostat
-.. _Ceph Logging and Debugging: ../../configuration/ceph-conf#ceph-logging-and-debugging
 .. _Logging and Debugging: ../log-and-debug
-.. _Debugging and Logging: ../debug
 .. _Monitor/OSD Interaction: ../../configuration/mon-osd-interaction
-.. _Monitor Config Reference: ../../configuration/mon-config-ref
 .. _monitoring your OSDs: ../../operations/monitoring-osd-pg
 .. _monitoring OSDs: ../../operations/monitoring-osd-pg/#monitoring-osds
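
Note on the pattern used above: a Sphinx :ref: role resolves against an
explicit label defined near the target, and either generates the link text
from the target's section title or takes manual link text. A minimal sketch
of both forms, reusing the existing ``ecpool`` label from
doc/rados/operations/erasure-code.rst (the surrounding prose below is
illustrative only, not the actual file content):

    .. _ecpool:

    ==============
     Erasure code
    ==============

    Introductory text for the chapter.

And, from any other document in the same Sphinx project:

    See :ref:`ecpool` for background.          (link text generated from
                                                the title, "Erasure code")
    See :ref:`erasure-coded pools <ecpool>`.   (manual link text)

Unlike a relative external link such as `pools <../pools>`_, a :ref: is
checked at build time and keeps working if the target file moves.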