man-in-the-middle attack that alters your client/server messages, which
could have disastrous security effects.
-For information about creating users, see `User Management`_. For details on
-the architecture of CephX, see `Architecture - High Availability
-Authentication`_.
+For information about creating users, see :ref:`user-management`. For details on
+the architecture of CephX, see :ref:`arch_high_availability_authentication`.
Deployment Scenarios
When CephX is enabled, Ceph will look for the keyring in the default search
path: this path includes ``/etc/ceph/$cluster.$name.keyring``. It is possible
to override this search-path location by adding a ``keyring`` option in the
-``[global]`` section of your `Ceph configuration`_ file, but this is not
-recommended.
+``[global]`` section of your :ref:`Ceph configuration <configuring-ceph>`
+file, but this is not recommended.
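+If you do override the location, the entry might look like the following (a
+minimal sketch; the path is illustrative):
+
+.. code-block:: ini
+
+   [global]
+   # non-default keyring location (not recommended; shown for illustration only)
+   keyring = /srv/ceph/keyrings/$cluster.$name.keyring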
To enable CephX on a cluster for which authentication has been disabled, carry
out the following procedure. If you (or your deployment utility) have already
ceph auth get-or-create mds.{$id} mon 'allow rwx' osd 'allow *' mds 'allow *' mgr 'allow profile mds' -o /var/lib/ceph/mds/ceph-{$id}/keyring
#. Enable CephX authentication by setting the following options in the
- ``[global]`` section of your `Ceph configuration`_ file:
+ ``[global]`` section of your :ref:`Ceph configuration <configuring-ceph>`
+ file:
.. code-block:: ini
temporarily disabled and subsequently re-enabled.
#. Disable CephX authentication by setting the following options in the
- ``[global]`` section of your `Ceph configuration`_ file:
+ ``[global]`` section of your :ref:`Ceph configuration <configuring-ceph>`
+ file:
.. code-block:: ini
directory. For Octopus and later releases that use ``cephadm``, the filename is
usually ``ceph.client.admin.keyring``. If the keyring is included in the
``/etc/ceph`` directory, then it is unnecessary to specify a ``keyring`` entry
-in the Ceph configuration file.
+in the :ref:`Ceph configuration <configuring-ceph>` file.
Because the Ceph Storage Cluster's keyring file contains the ``client.admin``
key, we recommend copying the keyring file to nodes from which you run
.. _Monitor Bootstrapping: ../../../install/manual-deployment#monitor-bootstrapping
.. _Operating a Cluster: ../../operations/operating
.. _Manual Deployment: ../../../install/manual-deployment
-.. _Ceph configuration: ../ceph-conf
-.. _Architecture - High Availability Authentication: ../../../architecture#high-availability-authentication
-.. _User Management: ../../operations/user-management
/var/lib/ceph/mon/ceph-a
-For additional details, see the `Monitor Config Reference`_.
-
-.. _Monitor Config Reference: ../mon-config-ref
+For additional details, see the :ref:`monitor-config-reference`.
.. _ceph-osd-config:
auth_service_required = cephx
auth_client_required = cephx
-In addition, you should enable message signing. For details, see `Cephx Config
-Reference`_.
-
-.. _Cephx Config Reference: ../auth-config-ref
+In addition, you should enable message signing. For details,
+see :ref:`rados-cephx-config-ref`.
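+For example, signing can be required with settings like the following in the
+``[global]`` section (an illustrative sketch; see the reference above for the
+full set of options):
+
+.. code-block:: ini
+
+   # require signatures on messages exchanged between Ceph daemons and clients
+   cephx_require_signatures = true
+   cephx_sign_messages = true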
.. _ceph-monitor-config:
building a reliable :term:`Ceph Storage Cluster`. **All Ceph Storage Clusters
have at least one monitor**. The monitor complement usually remains fairly
consistent, but you can add, remove or replace a monitor in a cluster. See
-`Adding/Removing a Monitor`_ for details.
+:ref:`adding-and-removing-monitors` for details.
.. index:: Ceph Monitor; Paxos
makes it possible for Ceph clients to talk directly to Ceph OSD Daemons. Direct
communication between clients and Ceph OSD Daemons improves upon traditional
storage architectures that required clients to communicate with a central
-component. See `Scalability and High Availability`_ for more on this subject.
+component. See :ref:`arch_scalability_and_high_availability` for more on this subject.
The Ceph Monitor's primary function is to maintain a master copy of the cluster
map. Monitors also provide authentication and logging services. All changes in
must follow a specific procedure. See :ref:`Changing a Monitor's IP address` for
details.
-Monitors can also be found by clients by using DNS SRV records. See `Monitor lookup through DNS`_ for details.
+Clients can also discover Monitors by using DNS SRV records. See :ref:`mon-dns-lookup` for details.
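+For example, a DNS zone might advertise three monitors with SRV records like
+these (the domain, hostnames, and TTL are placeholders)::
+
+    ; illustrative zone-file entries for monitor discovery
+    _ceph-mon._tcp.example.com. 3600 IN SRV 10 60 6789 mon1.example.com.
+    _ceph-mon._tcp.example.com. 3600 IN SRV 10 60 6789 mon2.example.com.
+    _ceph-mon._tcp.example.com. 3600 IN SRV 10 60 6789 mon3.example.com.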
Cluster ID
----------
.. _Monitor Keyrings: ../../../dev/mon-bootstrap#secret-keys
.. _Ceph configuration file: ../ceph-conf/#monitors
.. _Network Configuration Reference: ../network-config-ref
-.. _Monitor lookup through DNS: ../mon-lookup-dns
.. _ACID: https://en.wikipedia.org/wiki/ACID
-.. _Adding/Removing a Monitor: ../../operations/add-or-rm-mons
.. _Monitoring a Cluster: ../../operations/monitoring
.. _Monitoring OSDs and PGs: ../../operations/monitoring-osd-pg
.. _Bootstrapping a Monitor: ../../../dev/mon-bootstrap
.. _Monitor/OSD Interaction: ../mon-osd-interaction
-.. _Scalability and High Availability: ../../../architecture#scalability-and-high-availability
.. confval:: ms_inject_socket_failures
-.. _Scalability and High Availability: ../../../architecture#scalability-and-high-availability
.. _Hardware Recommendations - Networks: ../../../start/hardware-recommendations#networks
-.. _hardware recommendations: ../../../start/hardware-recommendations
.. _Monitor / OSD Interaction: ../mon-osd-interaction
.. _Message Signatures: ../auth-config-ref#signatures
.. _CIDR: https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing
re-entry of the user name and secret.
For details on configuring the Ceph Storage Cluster to use authentication, see
-`Cephx Config Reference`_. For details on the architecture of Cephx, see
-`Architecture - High Availability Authentication`_.
+:ref:`rados-cephx-config-ref`. For details on the architecture of Cephx, see
+:ref:`arch_high_availability_authentication`.
Background
==========
No matter what type of Ceph client is used (for example: Block Device, Object
Storage, Filesystem, native API), Ceph stores all data as RADOS objects within
-`pools`_. Ceph users must have access to a given pool in order to read and
+:ref:`rados_pools`. Ceph users must have access to a given pool in order to read and
write data, and Ceph users must have execute permissions in order to use Ceph's
administrative commands. The following concepts will help you understand
Ceph's user management.
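+For example, a user that may only read and write data in a single pool could
+be created with a command along these lines (the user and pool names are
+placeholders)::
+
+    # read-only access to the cluster maps, read/write access to one pool
+    ceph auth get-or-create client.foo mon 'allow r' osd 'allow rw pool=mypool'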
sudo rbd map --id foo --keyring /path/to/keyring mypool/myimage
-.. _pools: ../pools
-
Limitations
===========
encrypting their data before providing it to the Ceph system.
-.. _Architecture - High Availability Authentication: ../../../architecture#high-availability-authentication
-.. _Cephx Config Reference: ../../configuration/auth-config-ref
``libvirt`` to interact with QEMU/KVM, and QEMU/KVM interacts with Ceph block
devices via ``librbd``. See `Block Devices and OpenStack`_,
`Block Devices and OpenNebula`_ and `Block Devices and CloudStack`_ for details.
-See `Installation`_ for installation details.
+See :ref:`install-overview` for installation details.
You can also use Ceph block devices with ``libvirt``, ``virsh`` and the
``libvirt`` API. See `libvirt Virtualization API`_ for details.
To configure Ceph for use with ``libvirt``, perform the following steps:
-#. `Create a pool`_. The following example uses the
+#. :ref:`Create a pool <createpool>`. The following example uses the
pool name ``libvirt-pool``.::
ceph osd pool create libvirt-pool
rbd pool init <pool-name>
-#. `Create a Ceph User`_ (or use ``client.admin`` for version 0.9.7 and
+#. :ref:`Create a Ceph User <rados_ops_adding_a_user>` (or use ``client.admin`` for version 0.9.7 and
earlier). The following example uses the Ceph user name ``client.libvirt``
and references ``libvirt-pool``. ::
ceph auth ls
**NOTE**: ``libvirt`` will access Ceph using the ID ``libvirt``,
- not the Ceph name ``client.libvirt``. See `User Management - User`_ and
+ not the Ceph name ``client.libvirt``. See :ref:`User Management - User <rados-ops-user>` and
`User Management - CLI`_ for a detailed explanation of the difference
between ID and name.
#. Save the file.
-#. If your Ceph Storage Cluster has `Ceph Authentication`_ enabled (it does by
- default), you must generate a secret. ::
+#. If your Ceph Storage Cluster has :ref:`Ceph authentication
+   <rados-cephx-config-ref>` enabled (it does by default), you must generate
+   a secret. ::
cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
within your VM.
-.. _Installation: ../../install
.. _libvirt Virtualization API: http://www.libvirt.org
.. _Block Devices and OpenStack: ../rbd-openstack
.. _Block Devices and OpenNebula: https://docs.opennebula.io/stable/open_cluster_deployment/storage_setup/ceph_ds.html#datastore-internals
.. _Block Devices and CloudStack: ../rbd-cloudstack
-.. _Create a pool: ../../rados/operations/pools#create-a-pool
-.. _Create a Ceph User: ../../rados/operations/user-management#add-a-user
.. _create an image: ../qemu-rbd#creating-images-with-qemu
.. _Virsh Command Reference: http://www.libvirt.org/virshcmdref.html
.. _KVM/VirtManager: https://help.ubuntu.com/community/KVM/VirtManager
-.. _Ceph Authentication: ../../rados/configuration/auth-config-ref
.. _Disks: http://www.libvirt.org/formatdomain.html#elementsDisks
.. _rbd create: ../rados-rbd-cmds#creating-a-block-device-image
-.. _User Management - User: ../../rados/operations/user-management#user
.. _User Management - CLI: ../../rados/operations/user-management#command-line-usage
.. _Virtio: http://www.linux-kvm.org/page/Virtio
----------------------------------------------
To enable RBD shared read-only parent image cache, the following Ceph settings
-need to added in the ``[client]`` `section`_ of your ``ceph.conf`` file::
+need to be added in the ``[client]`` :ref:`section <ceph-conf-file>` of
+your ``ceph.conf`` file::
rbd parent cache enabled = true
rbd plugins = parent_cache
-----------------------------------------
``ceph-immutable-object-cache`` daemon should use a unique Ceph user ID.
-To `create a Ceph user`_, with ``ceph`` specify the ``auth get-or-create``
-command, user name, monitor caps, and OSD caps::
+To :ref:`create a Ceph user <rados_ops_adding_a_user>`, run the
+``ceph auth get-or-create`` command and specify the user name, monitor caps,
+and OSD caps::
ceph auth get-or-create client.ceph-immutable-object-cache.{unique id} mon 'allow r' osd 'profile rbd-read-only'
:Default: ``1``
.. _Cloned RBD Images: ../rbd-snapshot/#layering
-.. _section: ../../rados/configuration/ceph-conf/#configuration-sections
-.. _create a Ceph user: ../../rados/operations/user-management#add-a-user
size is 1 GB.
The above configurations can be set per-host, per-pool, per-image etc. Eg, to
-set per-host, add the overrides to the appropriate `section`_ in the host's
+set per-host, add the overrides to the appropriate :ref:`section <ceph-conf-file>` in the host's
``ceph.conf`` file. To set per-pool, per-image, etc, please refer to the
``rbd config`` `commands`_.
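+For example, a per-pool override could be set with a command along these
+lines (the pool name, option, and value are illustrative)::
+
+    rbd config pool set rbd rbd_persistent_cache_size 2G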
$ rbd persistent-cache invalidate rbd/foo
-.. _section: ../../rados/configuration/ceph-conf/#configuration-sections
.. _commands: ../../man/8/rbd#commands
.. _DAX: https://www.kernel.org/doc/Documentation/filesystems/dax.txt
Cephx Notes
===========
-When `cephx`_ authentication is enabled (it is by default), you must specify a
+When :ref:`cephx <rados-cephx-config-ref>` authentication is enabled (it is by default), you must specify a
user name or ID and a path to the keyring containing the corresponding key. See
:ref:`User Management <user-management>` for details.
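+For example (the user name, keyring path, and image name are placeholders)::
+
+    rbd --id foo --keyring /path/to/keyring snap ls mypool/myimage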
a flattened image takes up more storage space than a layered clone does.
-.. _cephx: ../../rados/configuration/auth-config-ref/
.. _QEMU: ../qemu-rbd/
.. _OpenStack: ../rbd-openstack/
.. _OpenNebula: https://docs.opennebula.io/stable/management_and_operations/vm_management/vm_instances.html?highlight=ceph#managing-disk-snapshots