From 8fe460e7562b28e007179eeb533ef68a3e99b4c8 Mon Sep 17 00:00:00 2001
From: Nathan Cutler
Date: Wed, 25 Mar 2020 23:41:28 +0100
Subject: [PATCH] doc: do not encourage readers to run multiple clusters on
 same HW

This material was first added on December 22, 2012 by
46d0334456c14da2bc8e012f121c545b1b33b95f

Signed-off-by: Nathan Cutler
---
 doc/rados/configuration/common.rst | 74 ++++-------------------------
 1 file changed, 9 insertions(+), 65 deletions(-)

diff --git a/doc/rados/configuration/common.rst b/doc/rados/configuration/common.rst
index acb5d325309..8d6dd6e6652 100644
--- a/doc/rados/configuration/common.rst
+++ b/doc/rados/configuration/common.rst
@@ -183,76 +183,20 @@ Example ceph.conf
 
-Running Multiple Clusters
-=========================
+Running Multiple Clusters (DEPRECATED)
+======================================
 
-With Ceph, you can run multiple Ceph Storage Clusters on the same hardware.
-Running multiple clusters provides a higher level of isolation compared to
-using different pools on the same cluster with different CRUSH rules. A
-separate cluster will have separate monitor, OSD and metadata server processes.
-When running Ceph with default settings, the default cluster name is ``ceph``,
-which means you would save your Ceph configuration file with the file name
-``ceph.conf`` in the ``/etc/ceph`` default directory.
+Some Ceph CLI commands take a ``-c`` (cluster name) option. This option is
+present purely for backward compatibility. You should not attempt to deploy
+or run multiple clusters on the same hardware, and it is recommended to always
+leave the cluster name as the default ("ceph").
 
-See `Create a Cluster`_ for details.
-
-.. _Create a Cluster: ../../deployment/ceph-deploy-new
-
-When you run multiple clusters, you must name your cluster and save the Ceph
-configuration file with the name of the cluster. For example, a cluster named
-``openstack`` will have a Ceph configuration file with the file name
-``openstack.conf`` in the ``/etc/ceph`` default directory.
-
-.. important:: Cluster names must consist of letters a-z and digits 0-9 only.
-
-Separate clusters imply separate data disks and journals, which are not shared
-between clusters. Referring to `Metavariables`_, the ``$cluster`` metavariable
-evaluates to the cluster name (i.e., ``openstack`` in the foregoing example).
-Various settings use the ``$cluster`` metavariable, including:
-
-.. _Metavariables: ../ceph-conf#Metavariables
-
-- ``keyring``
-- ``admin socket``
-- ``log file``
-- ``pid file``
-- ``mon data``
-- ``mon cluster log file``
-- ``osd data``
-- ``osd journal``
-- ``mds data``
-- ``rgw data``
-
-See `General Settings`_, `OSD Settings`_, `Monitor Settings`_, `MDS Settings`_,
-`RGW Settings`_ and `Log Settings`_ for relevant path defaults that use the
-``$cluster`` metavariable.
-
-.. _General Settings: ../general-config-ref
-.. _OSD Settings: ../osd-config-ref
-.. _Monitor Settings: ../mon-config-ref
-.. _MDS Settings: ../../../cephfs/mds-config-ref
-.. _RGW Settings: ../../../radosgw/config-ref/
-.. _Log Settings: ../../troubleshooting/log-and-debug
-
-
-When creating default directories or files, you should use the cluster
-name at the appropriate places in the path. For example::
-
-    sudo mkdir /var/lib/ceph/osd/openstack-0
-    sudo mkdir /var/lib/ceph/mon/openstack-a
-
-.. important:: When running monitors on the same host, you should use
-   different ports. By default, monitors use port 6789. If you already
-   have monitors using port 6789, use a different port for your other cluster(s).
-
-To invoke a cluster other than the default ``ceph`` cluster, use the
-``-c (unknown).conf`` option with the ``ceph`` command. For example::
-
-    ceph -c {cluster-name}.conf health
-    ceph -c openstack.conf health
+If you need to allow multiple clusters to exist on the same host, please use
+`cephadm`_, which uses containers to fully isolate each cluster.
 
 .. _Hardware Recommendations: ../../../start/hardware-recommendations
 .. _Network Configuration Reference: ../network-config-ref
 .. _OSD Config Reference: ../osd-config-ref
 .. _Configuring Monitor/OSD Interaction: ../mon-osd-interaction
+.. _cephadm: ../../cephadm
-- 
2.39.5
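
The ``$cluster`` metavariable expansion that the removed section relies on
(default paths such as ``/var/lib/ceph/osd/$cluster-0``) can be sketched with
plain ``sed`` substitution. This is only an illustration of the naming scheme
being retired, not Ceph's actual configuration parser:

```shell
# Expand the $cluster metavariable the way the removed text describes:
# substitute the cluster name into a default path template.
# Sketch only -- not Ceph's real config handling.
cluster="openstack"
echo '/var/lib/ceph/osd/$cluster-0' | sed "s/\$cluster/$cluster/"   # /var/lib/ceph/osd/openstack-0
echo '/var/lib/ceph/mon/$cluster-a' | sed "s/\$cluster/$cluster/"   # /var/lib/ceph/mon/openstack-a
```

With the default cluster name ``ceph``, the same templates yield
``/var/lib/ceph/osd/ceph-0`` and ``/var/lib/ceph/mon/ceph-a``, which is why the
naming machinery can be dropped without affecting default deployments.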