From: Ville Ojamo <14869000+bluikko@users.noreply.github.com>
Date: Wed, 22 Oct 2025 07:19:31 +0000 (+0700)
Subject: doc: Use validated links with ref instead of external links
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=refs%2Fpull%2F66020%2Fhead;p=ceph.git

doc: Use validated links with ref instead of external links

Use :ref: for intra-docs links, which are validated at build time,
instead of external links. Use only labels that already exist.

Fix a few anchors that pointed to now-renamed section titles.

Use automatically generated link text where appropriate.

Delete unused link definitions.

The changes are mostly in doc/rados/, with a few in doc/rbd/. Try to
fix all links in each of the changed documents.

Signed-off-by: Ville Ojamo <14869000+bluikko@users.noreply.github.com>
---

diff --git a/doc/rados/configuration/auth-config-ref.rst b/doc/rados/configuration/auth-config-ref.rst
index fc14f4ee6eff..31711678b22f 100644
--- a/doc/rados/configuration/auth-config-ref.rst
+++ b/doc/rados/configuration/auth-config-ref.rst
@@ -14,9 +14,8 @@ is very safe and you cannot afford authentication, you can disable it.
    man-in-the-middle attack that alters your client/server messages, which
    could have disastrous security effects.
 
-For information about creating users, see `User Management`_. For details on
-the architecture of CephX, see `Architecture - High Availability
-Authentication`_.
+For information about creating users, see :ref:`user-management`. For details on
+the architecture of CephX, see :ref:`arch_high_availability_authentication`.
 
 
 Deployment Scenarios
@@ -52,8 +51,8 @@ Enabling CephX
 When CephX is enabled, Ceph will look for the keyring in the default search
 path: this path includes ``/etc/ceph/$cluster.$name.keyring``. It is possible
 to override this search-path location by adding a ``keyring`` option in the
-``[global]`` section of your `Ceph configuration`_ file, but this is not
-recommended.
+``[global]`` section of your :ref:`Ceph configuration <configuring-ceph>`
+file, but this is not recommended.
 
 To enable CephX on a cluster for which authentication has been disabled, carry
 out the following procedure. If you (or your deployment utility) have already
@@ -104,7 +103,8 @@ generated the keys, you may skip the steps related to generating keys.
 
       ceph auth get-or-create mds.{$id} mon 'allow rwx' osd 'allow *' mds 'allow *' mgr 'allow profile mds' -o /var/lib/ceph/mds/ceph-{$id}/keyring
 
 #. Enable CephX authentication by setting the following options in the
-   ``[global]`` section of your `Ceph configuration`_ file:
+   ``[global]`` section of your :ref:`Ceph configuration <configuring-ceph>`
+   file:
 
    .. code-block:: ini
 
@@ -128,7 +128,8 @@ so.** However, setup and troubleshooting might be easier if authentication is
 temporarily disabled and subsequently re-enabled.
 
 #. Disable CephX authentication by setting the following options in the
-   ``[global]`` section of your `Ceph configuration`_ file:
+   ``[global]`` section of your :ref:`Ceph configuration <configuring-ceph>`
+   file:
 
    .. code-block:: ini
 
@@ -196,7 +197,7 @@ commands and Ceph clients is to include a Ceph keyring under the ``/etc/ceph``
 directory. For Octopus and later releases that use ``cephadm``, the filename
 is usually ``ceph.client.admin.keyring``. If the keyring is included in the
 ``/etc/ceph`` directory, then it is unnecessary to specify a ``keyring`` entry
-in the Ceph configuration file.
+in the :ref:`Ceph configuration <configuring-ceph>` file.
 Because the Ceph Storage Cluster's keyring file contains the ``client.admin``
 key, we recommend copying the keyring file to nodes from which you run
@@ -374,6 +375,3 @@ Time to Live
 .. _Monitor Bootstrapping: ../../../install/manual-deployment#monitor-bootstrapping
 .. _Operating a Cluster: ../../operations/operating
 .. _Manual Deployment: ../../../install/manual-deployment
-.. _Ceph configuration: ../ceph-conf
-.. _Architecture - High Availability Authentication: ../../../architecture#high-availability-authentication
-.. _User Management: ../../operations/user-management
diff --git a/doc/rados/configuration/common.rst b/doc/rados/configuration/common.rst
index 887c476fa870..881ff67b7707 100644
--- a/doc/rados/configuration/common.rst
+++ b/doc/rados/configuration/common.rst
@@ -91,9 +91,7 @@ corresponding directory. With metavariables fully expressed and a cluster named
 
        /var/lib/ceph/mon/ceph-a
 
-For additional details, see the `Monitor Config Reference`_.
-
-.. _Monitor Config Reference: ../mon-config-ref
+For additional details, see the :ref:`monitor-config-reference`.
 
 .. _ceph-osd-config:
 
@@ -112,10 +110,8 @@ the Ceph configuration file, as shown here:
 
         auth_service_required = cephx
         auth_client_required = cephx
 
-In addition, you should enable message signing. For details, see `Cephx Config
-Reference`_.
-
-.. _Cephx Config Reference: ../auth-config-ref
+In addition, you should enable message signing. For details,
+see :ref:`rados-cephx-config-ref`.
 
 .. _ceph-monitor-config:
 
diff --git a/doc/rados/configuration/mon-config-ref.rst b/doc/rados/configuration/mon-config-ref.rst
index c0d2ad5316df..3b3afd034826 100644
--- a/doc/rados/configuration/mon-config-ref.rst
+++ b/doc/rados/configuration/mon-config-ref.rst
@@ -8,7 +8,7 @@ Understanding how to configure a :term:`Ceph Monitor` is an important part of
 building a reliable :term:`Ceph Storage Cluster`. **All Ceph Storage Clusters
 have at least one monitor**. The monitor complement usually remains fairly
 consistent, but you can add, remove or replace a monitor in a cluster. See
-`Adding/Removing a Monitor`_ for details.
+:ref:`adding-and-removing-monitors` for details.
 
 .. index:: Ceph Monitor; Paxos
 
@@ -28,7 +28,7 @@ algorithm can compute the location of any RADOS object within the cluster. This
 makes it possible for Ceph clients to talk directly to Ceph OSD Daemons. Direct
 communication between clients and Ceph OSD Daemons improves upon traditional
 storage architectures that required clients to communicate with a central
-component. See `Scalability and High Availability`_ for more on this subject.
+component. See :ref:`arch_scalability_and_high_availability` for more on this subject.
 
 The Ceph Monitor's primary function is to maintain a master copy of the cluster
 map. Monitors also provide authentication and logging services. All changes in
@@ -224,7 +224,7 @@ monitors. However, if you decide to change the monitor's IP address, you must
 follow a specific procedure. See :ref:`Changing a Monitor's IP address` for details.
 
-Monitors can also be found by clients by using DNS SRV records. See `Monitor lookup through DNS`_ for details.
+Clients can also discover monitors by using DNS SRV records. See :ref:`mon-dns-lookup` for details.
 
 Cluster ID
 ----------
 
@@ -641,11 +641,8 @@ NVMe-oF Monitor Client
 
 .. _Monitor Keyrings: ../../../dev/mon-bootstrap#secret-keys
 .. _Ceph configuration file: ../ceph-conf/#monitors
 .. _Network Configuration Reference: ../network-config-ref
-.. _Monitor lookup through DNS: ../mon-lookup-dns
 .. _ACID: https://en.wikipedia.org/wiki/ACID
-.. _Adding/Removing a Monitor: ../../operations/add-or-rm-mons
 .. _Monitoring a Cluster: ../../operations/monitoring
 .. _Monitoring OSDs and PGs: ../../operations/monitoring-osd-pg
 .. _Bootstrapping a Monitor: ../../../dev/mon-bootstrap
 .. _Monitor/OSD Interaction: ../mon-osd-interaction
-.. _Scalability and High Availability: ../../../architecture#scalability-and-high-availability
diff --git a/doc/rados/configuration/network-config-ref.rst b/doc/rados/configuration/network-config-ref.rst
index b44c9d952a4f..5cefa27c4bb7 100644
--- a/doc/rados/configuration/network-config-ref.rst
+++ b/doc/rados/configuration/network-config-ref.rst
@@ -344,9 +344,7 @@ General Settings
 
 .. confval:: ms_inject_socket_failures
 
-.. _Scalability and High Availability: ../../../architecture#scalability-and-high-availability
 .. _Hardware Recommendations - Networks: ../../../start/hardware-recommendations#networks
-.. _hardware recommendations: ../../../start/hardware-recommendations
 .. _Monitor / OSD Interaction: ../mon-osd-interaction
 .. _Message Signatures: ../auth-config-ref#signatures
 .. _CIDR: https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing
diff --git a/doc/rados/operations/user-management.rst b/doc/rados/operations/user-management.rst
index 6a0eae9b54b3..7718cf7d8e98 100644
--- a/doc/rados/operations/user-management.rst
+++ b/doc/rados/operations/user-management.rst
@@ -49,15 +49,15 @@ Alternatively, you may use the ``CEPH_ARGS`` environment variable to avoid
 re-entry of the user name and secret.
 
 For details on configuring the Ceph Storage Cluster to use authentication, see
-`Cephx Config Reference`_. For details on the architecture of Cephx, see
-`Architecture - High Availability Authentication`_.
+:ref:`rados-cephx-config-ref`. For details on the architecture of Cephx, see
+:ref:`arch_high_availability_authentication`.
 
 Background
 ==========
 
 No matter what type of Ceph client is used (for example: Block Device, Object
 Storage, Filesystem, native API), Ceph stores all data as RADOS objects within
-`pools`_. Ceph users must have access to a given pool in order to read and
+:ref:`rados_pools`. Ceph users must have access to a given pool in order to read and
 write data, and Ceph users must have execute permissions in order to use
 Ceph's administrative commands. The following concepts will help you
 understand Ceph's user management.
@@ -822,8 +822,6 @@ Ceph supports the following usage for user name and secret:
 
     sudo rbd map --id foo --keyring /path/to/keyring mypool/myimage
 
-.. _pools: ../pools
-
 Limitations
 ===========
 
@@ -865,5 +863,3 @@ encryption. Anyone storing sensitive data in Ceph should consider encrypting
 their data before providing it to the Ceph system.
 
-.. _Architecture - High Availability Authentication: ../../../architecture#high-availability-authentication
-.. _Cephx Config Reference: ../../configuration/auth-config-ref
diff --git a/doc/rbd/libvirt.rst b/doc/rbd/libvirt.rst
index a55a4f95b799..e9c9ff2a00f1 100644
--- a/doc/rbd/libvirt.rst
+++ b/doc/rbd/libvirt.rst
@@ -45,7 +45,7 @@ cloud solutions like OpenStack, OpenNebula or CloudStack. These cloud solutions
 use ``libvirt`` to interact with QEMU/KVM, and QEMU/KVM interacts with Ceph
 block devices via ``librbd``. See `Block Devices and OpenStack`_,
 `Block Devices and OpenNebula`_ and `Block Devices and CloudStack`_ for details.
-See `Installation`_ for installation details.
+See :ref:`install-overview` for installation details.
 You can also use Ceph block devices with ``libvirt``, ``virsh`` and the
 ``libvirt`` API. See `libvirt Virtualization API`_ for details.
 
@@ -63,7 +63,7 @@ Configuring Ceph
 
 To configure Ceph for use with ``libvirt``, perform the following steps:
 
-#. `Create a pool`_. The following example uses the
+#. :ref:`Create a pool <createpool>`. The following example uses the
    pool name ``libvirt-pool``.::
 
       ceph osd pool create libvirt-pool
 
@@ -76,7 +76,7 @@ To configure Ceph for use with ``libvirt``, perform the following steps:
 
       rbd pool init
 
-#. `Create a Ceph User`_ (or use ``client.admin`` for version 0.9.7 and
+#. :ref:`Create a Ceph User <user-management>` (or use ``client.admin`` for version 0.9.7 and
    earlier). The following example uses the Ceph user name ``client.libvirt``
    and references ``libvirt-pool``. ::
 
      ceph auth get-or-create client.libvirt mon 'profile rbd' osd 'profile rbd pool=libvirt-pool'
 
   Verify the name exists. ::
 
      ceph auth ls
 
    **NOTE**: ``libvirt`` will access Ceph using the ID ``libvirt``,
-   not the Ceph name ``client.libvirt``. See `User Management - User`_ and
+   not the Ceph name ``client.libvirt``. See :ref:`User Management - User <user-management>` and
    `User Management - CLI`_ for a detailed explanation of the difference
    between ID and name.
 
@@ -230,8 +230,8 @@ commands, refer to `Virsh Command Reference`_.
 
 #. Save the file.
 
-#. If your Ceph Storage Cluster has `Ceph Authentication`_ enabled (it does by
-   default), you must generate a secret. ::
+#. If your Ceph Storage Cluster has :ref:`CephX authentication <rados-cephx-config-ref>`
+   enabled (it does by default), you must generate a secret. ::
 
       cat > secret.xml <<EOF
      <secret ephemeral='no' private='no'>
 
@@ -307,19 +307,14 @@ If everything looks okay, you may begin using the Ceph block device
 within your VM.
 
-.. _Installation: ../../install
 .. _libvirt Virtualization API: http://www.libvirt.org
 .. _Block Devices and OpenStack: ../rbd-openstack
 .. _Block Devices and OpenNebula: https://docs.opennebula.io/stable/open_cluster_deployment/storage_setup/ceph_ds.html#datastore-internals
 .. _Block Devices and CloudStack: ../rbd-cloudstack
-.. _Create a pool: ../../rados/operations/pools#create-a-pool
-.. _Create a Ceph User: ../../rados/operations/user-management#add-a-user
 .. _create an image: ../qemu-rbd#creating-images-with-qemu
 .. _Virsh Command Reference: http://www.libvirt.org/virshcmdref.html
 .. _KVM/VirtManager: https://help.ubuntu.com/community/KVM/VirtManager
-.. _Ceph Authentication: ../../rados/configuration/auth-config-ref
 .. _Disks: http://www.libvirt.org/formatdomain.html#elementsDisks
 .. _rbd create: ../rados-rbd-cmds#creating-a-block-device-image
-.. _User Management - User: ../../rados/operations/user-management#user
 .. _User Management - CLI: ../../rados/operations/user-management#command-line-usage
 .. _Virtio: http://www.linux-kvm.org/page/Virtio
diff --git a/doc/rbd/rbd-persistent-read-only-cache.rst b/doc/rbd/rbd-persistent-read-only-cache.rst
index 5bef7f592008..61a40e4fba00 100644
--- a/doc/rbd/rbd-persistent-read-only-cache.rst
+++ b/doc/rbd/rbd-persistent-read-only-cache.rst
@@ -38,7 +38,8 @@ Enable RBD Shared Read-only Parent Image Cache
 ----------------------------------------------
 
 To enable RBD shared read-only parent image cache, the following Ceph settings
-need to added in the ``[client]`` `section`_ of your ``ceph.conf`` file::
+need to be added in the ``[client]`` :ref:`section <configuring-ceph>` of
+your ``ceph.conf`` file::
 
    rbd parent cache enabled = true
    rbd plugins = parent_cache
@@ -121,8 +122,8 @@ Running the Immutable Object Cache Daemon
 -----------------------------------------
 
 ``ceph-immutable-object-cache`` daemon should use a unique Ceph user ID.
-To `create a Ceph user`_, with ``ceph`` specify the ``auth get-or-create``
-command, user name, monitor caps, and OSD caps::
+To :ref:`create a Ceph user <user-management>`, use ``ceph`` to run the
+``auth get-or-create`` command with the user name, monitor caps, and OSD caps::
 
    ceph auth get-or-create client.ceph-immutable-object-cache.{unique id} mon 'allow r' osd 'profile rbd-read-only'
 
@@ -196,6 +197,4 @@ The immutable object cache supports throttling, controlled by the following sett
 
 :Default: ``1``
 
 .. _Cloned RBD Images: ../rbd-snapshot/#layering
-.. _section: ../../rados/configuration/ceph-conf/#configuration-sections
-.. _create a Ceph user: ../../rados/operations/user-management#add-a-user
 
diff --git a/doc/rbd/rbd-persistent-write-log-cache.rst b/doc/rbd/rbd-persistent-write-log-cache.rst
index af323962d0c6..0be62c53d41d 100644
--- a/doc/rbd/rbd-persistent-write-log-cache.rst
+++ b/doc/rbd/rbd-persistent-write-log-cache.rst
@@ -66,7 +66,7 @@ Here are some cache configuration settings:
 
    size is 1 GB.
 
 The above configurations can be set per-host, per-pool, per-image etc. Eg, to
-set per-host, add the overrides to the appropriate `section`_ in the host's
+set per-host, add the overrides to the appropriate :ref:`section <configuring-ceph>` in the host's
 ``ceph.conf`` file. To set per-pool, per-image, etc, please refer to the
 ``rbd config`` `commands`_.
 
@@ -134,6 +134,5 @@ For example::
 
     $ rbd persistent-cache invalidate rbd/foo
 
-.. _section: ../../rados/configuration/ceph-conf/#configuration-sections
 .. _commands: ../../man/8/rbd#commands
 .. _DAX: https://www.kernel.org/doc/Documentation/filesystems/dax.txt
diff --git a/doc/rbd/rbd-snapshot.rst b/doc/rbd/rbd-snapshot.rst
index 4a4309f8e7dd..a8a917018ba6 100644
--- a/doc/rbd/rbd-snapshot.rst
+++ b/doc/rbd/rbd-snapshot.rst
@@ -42,7 +42,7 @@ the ``rbd`` command and several higher-level interfaces, including `QEMU`_,
 
 Cephx Notes
 ===========
 
-When `cephx`_ authentication is enabled (it is by default), you must specify a
+When :ref:`cephx <rados-cephx-config-ref>` authentication is enabled (it is by default), you must specify a
 user name or ID and a path to the keyring containing the corresponding key.
 See :ref:`User Management <user-management>` for details.
 
@@ -361,7 +361,6 @@ For example:
 
    a flattened image takes up more storage space than a layered clone does.
 
-.. _cephx: ../../rados/configuration/auth-config-ref/
 .. _QEMU: ../qemu-rbd/
 .. _OpenStack: ../rbd-openstack/
 .. _OpenNebula: https://docs.opennebula.io/stable/management_and_operations/vm_management/vm_instances.html?highlight=ceph#managing-disk-snapshots
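
For reviewers unfamiliar with the two link styles, here is a minimal RST
sketch of the conversion this commit applies throughout. The label name
``example-label`` and the paths are illustrative assumptions, not names taken
from the Ceph documentation tree:

.. code-block:: rst

   .. _example-label:

   Example Target Section
   ======================

   Target section body.

   .. Before: an external-style link. The relative URL fragment is derived
   .. from the section title, so Sphinx cannot validate it and the link
   .. breaks silently when the title or the file path changes.

   See `Example Target Section`_ for details.

   .. _Example Target Section: ../path/to/doc#example-target-section

   .. After: an intra-docs :ref: link. Sphinx resolves the explicit label at
   .. build time and warns when it cannot. With no link text given, the
   .. target's section title is used automatically as the link text.

   See :ref:`example-label` for details.
   See :ref:`custom link text <example-label>` for details.

Note that :ref: validation only helps when the label is actually defined,
which is why this commit restricts itself to labels that already exist
rather than introducing new ones.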