From 73124c8df822b2f9e6c25e4c9ffb8bd2a54bf782 Mon Sep 17 00:00:00 2001 From: Nathan Cutler Date: Fri, 6 Jul 2018 10:12:09 +0200 Subject: [PATCH] doc: cleanup: prune Argonaut-specific verbiage Also drop all release-specific upgrading instructions (they only go up to Firefly, anyway - none of the current releases are covered). Note that all of this verbiage I am removing here can still be accessed on docs.ceph.com via e.g. http://docs.ceph.com/docs/firefly/ Signed-off-by: Nathan Cutler --- doc/install/upgrading-ceph.rst | 512 ------------------ doc/rados/configuration/auth-config-ref.rst | 65 --- doc/rados/operations/add-or-rm-osds.rst | 36 +- doc/rados/operations/pools.rst | 8 - .../troubleshooting/troubleshooting-osd.rst | 1 - doc/start/hardware-recommendations.rst | 7 +- 6 files changed, 2 insertions(+), 627 deletions(-) diff --git a/doc/install/upgrading-ceph.rst b/doc/install/upgrading-ceph.rst index 962b90c7599ed..a209f69cb203a 100644 --- a/doc/install/upgrading-ceph.rst +++ b/doc/install/upgrading-ceph.rst @@ -68,475 +68,6 @@ Or:: sudo yum install ceph-deploy python-pushy -Argonaut to Bobtail -=================== - -When upgrading from Argonaut to Bobtail, you need to be aware of several things: - -#. Authentication now defaults to **ON**, but used to default to **OFF**. -#. Monitors use a new internal on-wire protocol. -#. RBD ``format2`` images require upgrading all OSDs before using it. - -Ensure that you update package repository paths. For example:: - - sudo rm /etc/apt/sources.list.d/ceph.list - echo deb http://download.ceph.com/debian-bobtail/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list - -See the following sections for additional details. - -Authentication --------------- - -The Ceph Bobtail release enables authentication by default. Bobtail also has -finer-grained authentication configuration settings. In previous versions of -Ceph (i.e., actually v 0.55 and earlier), you could simply specify:: - - auth supported = [cephx | none] - -This option still works, but is deprecated. New releases support -``cluster``, ``service`` and ``client`` authentication settings as -follows:: - - auth cluster required = [cephx | none] # default cephx - auth service required = [cephx | none] # default cephx - auth client required = [cephx | none] # default cephx,none - -.. important:: If your cluster does not currently have an ``auth - supported`` line that enables authentication, you must explicitly - turn it off in Bobtail using the settings below.:: - - auth cluster required = none - auth service required = none - - This will disable authentication on the cluster, but still leave - clients with the default configuration where they can talk to a - cluster that does enable it, but do not require it. - -.. important:: If your cluster already has an ``auth supported`` option defined in - the configuration file, no changes are necessary. - -See `User Management - Backward Compatibility`_ for details. - - -Monitor On-wire Protocol ------------------------- - -We recommend upgrading all monitors to Bobtail. A mixture of Bobtail and -Argonaut monitors will not be able to use the new on-wire protocol, as the -protocol requires all monitors to be Bobtail or greater. Upgrading only a -majority of the nodes (e.g., two out of three) may expose the cluster to a -situation where a single additional failure may compromise availability (because -the non-upgraded daemon cannot participate in the new protocol). 
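.. tip:: One quick way to verify, at any point during a monitor upgrade, that
   your monitors are in quorum and running the versions you expect is to query
   each daemon directly. This is a sketch only; the monitor IDs ``a``, ``b``
   and ``c`` below are placeholders for your own::

       # show current quorum membership
       ceph quorum_status

       # ask each monitor which version it is running
       for id in a b c; do
           ceph tell mon.${id} version
       done
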
We recommend -not waiting for an extended period of time between ``ceph-mon`` upgrades. - - -RBD Images ----------- - -The Bobtail release supports ``format 2`` images! However, you should not create -or use ``format 2`` RBD images until after all ``ceph-osd`` daemons have been -upgraded. Note that ``format 1`` is still the default. You can use the new -``ceph osd ls`` and ``ceph tell osd.N version`` commands to doublecheck your -cluster. ``ceph osd ls`` will give a list of all OSD IDs that are part of the -cluster, and you can use that to write a simple shell loop to display all the -OSD version strings: :: - - for i in $(ceph osd ls); do - ceph tell osd.${i} version - done - - -Argonaut to Cuttlefish -====================== - -To upgrade your cluster from Argonaut to Cuttlefish, please read this -section, and the sections on upgrading from Argonaut to Bobtail and -upgrading from Bobtail to Cuttlefish carefully. When upgrading from -Argonaut to Cuttlefish, **YOU MUST UPGRADE YOUR MONITORS FROM ARGONAUT -TO BOBTAIL v0.56.5 FIRST!!!**. All other Ceph daemons can upgrade from -Argonaut to Cuttlefish without the intermediate upgrade to Bobtail. - -.. important:: Ensure that the repository specified points to Bobtail, not - Cuttlefish. - -For example:: - - sudo rm /etc/apt/sources.list.d/ceph.list - echo deb http://download.ceph.com/debian-bobtail/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list - -We recommend upgrading all monitors to Bobtail before proceeding with the -upgrade of the monitors to Cuttlefish. A mixture of Bobtail and Argonaut -monitors will not be able to use the new on-wire protocol, as the protocol -requires all monitors to be Bobtail or greater. Upgrading only a majority of the -nodes (e.g., two out of three) may expose the cluster to a situation where a -single additional failure may compromise availability (because the non-upgraded -daemon cannot participate in the new protocol). We recommend not waiting for an -extended period of time between ``ceph-mon`` upgrades. See `Upgrading -Monitors`_ for details. - -.. note:: See the `Authentication`_ section and the - `User Management - Backward Compatibility`_ for additional information - on authentication backward compatibility settings for Bobtail. - -Once you complete the upgrade of your monitors from Argonaut to -Bobtail, and have restarted the monitor daemons, you must upgrade the -monitors from Bobtail to Cuttlefish. Ensure that you have a quorum -before beginning this upgrade procedure. Before upgrading, remember to -replace the reference to the Bobtail repository with a reference to -the Cuttlefish repository. For example:: - - sudo rm /etc/apt/sources.list.d/ceph.list - echo deb http://download.ceph.com/debian-cuttlefish/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list - -See `Upgrading Monitors`_ for details. - -The architecture of the monitors changed significantly from Argonaut to -Cuttlefish. See `Monitor Config Reference`_ and `Joao's blog post`_ for details. -Once you complete the monitor upgrade, you can upgrade the OSD daemons and the -MDS daemons using the generic procedures. See `Upgrading an OSD`_ and `Upgrading -a Metadata Server`_ for details. - - -Bobtail to Cuttlefish -===================== - -Upgrading your cluster from Bobtail to Cuttlefish has a few important -considerations. First, the monitor uses a new architecture, so you should -upgrade the full set of monitors to use Cuttlefish. 
Second, if you run multiple -metadata servers in a cluster, ensure the metadata servers have unique names. -See the following sections for details. - -Replace any ``apt`` reference to older repositories with a reference to the -Cuttlefish repository. For example:: - - sudo rm /etc/apt/sources.list.d/ceph.list - echo deb http://download.ceph.com/debian-cuttlefish/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list - - -Monitor -------- - -The architecture of the monitors changed significantly from Bobtail to -Cuttlefish. See `Monitor Config Reference`_ and `Joao's blog post`_ for -details. This means that v0.59 and pre-v0.59 monitors do not talk to each other -(Cuttlefish is v.0.61). When you upgrade each monitor, it will convert its -local data store to the new format. Once you upgrade a majority of monitors, -the monitors form a quorum using the new protocol and the old monitors will be -blocked until they get upgraded. For this reason, we recommend upgrading the -monitors in immediate succession. - -.. important:: Do not run a mixed-version cluster for an extended period. - - -MDS Unique Names ----------------- - -The monitor now enforces that MDS names be unique. If you have multiple metadata -server daemons that start with the same ID (e.g., mds.a) the second -metadata server will implicitly mark the first metadata server as ``failed``. -Multi-MDS configurations with identical names must be adjusted accordingly to -give daemons unique names. If you run your cluster with one metadata server, -you can disregard this notice for now. - - -ceph-deploy ------------ - -The ``ceph-deploy`` tool is now the preferred method of provisioning new clusters. -For existing clusters created via the obsolete ``mkcephfs`` tool that would like to transition to the -new tool, there is a migration path, documented at `Transitioning to ceph-deploy`_. - -Cuttlefish to Dumpling -====================== - -When upgrading from Cuttlefish (v0.61-v0.61.7) you may perform a rolling -upgrade. However, there are a few important considerations. First, you must -upgrade the ``ceph`` command line utility, because it has changed significantly. -Second, you must upgrade the full set of monitors to use Dumpling, because of a -protocol change. - -Replace any reference to older repositories with a reference to the -Dumpling repository. For example, with ``apt`` perform the following:: - - sudo rm /etc/apt/sources.list.d/ceph.list - echo deb http://download.ceph.com/debian-dumpling/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list - -With CentOS/Red Hat distributions, remove the old repository. :: - - sudo rm /etc/yum.repos.d/ceph.repo - -Then add a new ``ceph.repo`` repository entry with the following contents. - -.. code-block:: ini - - [ceph] - name=Ceph Packages and Backports $basearch - baseurl=http://download.ceph.com/rpm/el6/$basearch - enabled=1 - gpgcheck=1 - gpgkey=https://download.ceph.com/keys/release.asc - - -.. note:: Ensure you use the correct URL for your distribution. Check the - http://download.ceph.com/rpm directory for your distribution. - -.. note:: Since you can upgrade using ``ceph-deploy`` you will only need to add - the repository on Ceph Client nodes where you use the ``ceph`` command line - interface or the ``ceph-deploy`` tool. - - -Dumpling to Emperor -=================== - -When upgrading from Dumpling (v0.64) you may perform a rolling -upgrade. - -Replace any reference to older repositories with a reference to the -Emperor repository. 
For example, with ``apt`` perform the following:: - - sudo rm /etc/apt/sources.list.d/ceph.list - echo deb http://download.ceph.com/debian-emperor/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list - -With CentOS/Red Hat distributions, remove the old repository. :: - - sudo rm /etc/yum.repos.d/ceph.repo - -Then add a new ``ceph.repo`` repository entry with the following contents and -replace ``{distro}`` with your distribution (e.g., ``el6``, ``rhel6``, etc). - -.. code-block:: ini - - [ceph] - name=Ceph Packages and Backports $basearch - baseurl=http://download.ceph.com/rpm-emperor/{distro}/$basearch - enabled=1 - gpgcheck=1 - gpgkey=https://download.ceph.com/keys/release.asc - - -.. note:: Ensure you use the correct URL for your distribution. Check the - http://download.ceph.com/rpm directory for your distribution. - -.. note:: Since you can upgrade using ``ceph-deploy`` you will only need to add - the repository on Ceph Client nodes where you use the ``ceph`` command line - interface or the ``ceph-deploy`` tool. - - -Command Line Utility --------------------- - -In V0.65, the ``ceph`` commandline interface (CLI) utility changed -significantly. You will not be able to use the old CLI with Dumpling. This means -that you must upgrade the ``ceph-common`` library on all nodes that access the -Ceph Storage Cluster with the ``ceph`` CLI before upgrading Ceph daemons. :: - - sudo apt-get update && sudo apt-get install ceph-common - -Ensure that you have the latest version (v0.67 or later). If you do not, -you may need to uninstall, auto remove dependencies and reinstall. - -See `v0.65`_ for details on the new command line interface. - -.. _v0.65: http://docs.ceph.com/docs/master/release-notes/#v0-65 - - -Monitor -------- - -Dumpling (v0.67) ``ceph-mon`` daemons have an internal protocol change. This -means that v0.67 daemons cannot talk to v0.66 or older daemons. Once you -upgrade a majority of monitors, the monitors form a quorum using the new -protocol and the old monitors will be blocked until they get upgraded. For this -reason, we recommend upgrading all monitors at once (or in relatively quick -succession) to minimize the possibility of downtime. - -.. important:: Do not run a mixed-version cluster for an extended period. - - - -Dumpling to Firefly -=================== - -If your existing cluster is running a version older than v0.67 Dumpling, please -first upgrade to the latest Dumpling release before upgrading to v0.80 Firefly. - - -Monitor -------- - -Dumpling (v0.67) ``ceph-mon`` daemons have an internal protocol change. This -means that v0.67 daemons cannot talk to v0.66 or older daemons. Once you -upgrade a majority of monitors, the monitors form a quorum using the new -protocol and the old monitors will be blocked until they get upgraded. For this -reason, we recommend upgrading all monitors at once (or in relatively quick -succession) to minimize the possibility of downtime. - -.. important:: Do not run a mixed-version cluster for an extended period. - - -Ceph Config File Changes ------------------------- - -We recommand adding the following to the ``[mon]`` section of your -``ceph.conf`` prior to upgrade:: - - mon warn on legacy crush tunables = false - -This will prevent health warnings due to the use of legacy CRUSH placement. 
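.. note:: Before deciding whether to suppress the warning, you may want to
   check which CRUSH tunables the cluster is currently using. The following
   read-only command should work from any node with the ``ceph`` CLI
   installed, although the exact fields in its output vary by release::

       ceph osd crush show-tunables
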
-Although it is possible to rebalance existing data across your cluster, we do -not normally recommend it for production environments as a large amount of data -will move and there is a significant performance impact from the rebalancing. - - -Command Line Utility --------------------- - -In V0.65, the ``ceph`` commandline interface (CLI) utility changed -significantly. You will not be able to use the old CLI with Firefly. This means -that you must upgrade the ``ceph-common`` library on all nodes that access the -Ceph Storage Cluster with the ``ceph`` CLI before upgrading Ceph daemons. - -For Debian/Ubuntu, execute:: - - sudo apt-get update && sudo apt-get install ceph-common - -For CentOS/RHEL, execute:: - - sudo yum install ceph-common - -Ensure that you have the latest version. If you do not, -you may need to uninstall, auto remove dependencies and reinstall. - -See `v0.65`_ for details on the new command line interface. - -.. _v0.65: http://docs.ceph.com/docs/master/release-notes/#v0-65 - - -Upgrade Sequence ----------------- - -Replace any reference to older repositories with a reference to the -Firely repository. For example, with ``apt`` perform the following:: - - sudo rm /etc/apt/sources.list.d/ceph.list - echo deb http://download.ceph.com/debian-firefly/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list - -With CentOS/Red Hat distributions, remove the old repository. :: - - sudo rm /etc/yum.repos.d/ceph.repo - -Then add a new ``ceph.repo`` repository entry with the following contents and -replace ``{distro}`` with your distribution (e.g., ``el6``, ``rhel6``, -``rhel7``, etc.). - -.. code-block:: ini - - [ceph] - name=Ceph Packages and Backports $basearch - baseurl=http://download.ceph.com/rpm-firefly/{distro}/$basearch - enabled=1 - gpgcheck=1 - gpgkey=https://download.ceph.com/keys/release.asc - - -Upgrade daemons in the following order: - -#. **Monitors:** If the ``ceph-mon`` daemons are not restarted prior to the - ``ceph-osd`` daemons, the monitors will not correctly register their new - capabilities with the cluster and new features may not be usable until - the monitors are restarted a second time. - -#. **OSDs** - -#. **MDSs:** If the ``ceph-mds`` daemon is restarted first, it will wait until - all OSDs have been upgraded before finishing its startup sequence. - -#. **Gateways:** Upgrade ``radosgw`` daemons together. There is a subtle change - in behavior for multipart uploads that prevents a multipart request that - was initiated with a new ``radosgw`` from being completed by an old - ``radosgw``. - -.. note:: Make sure you upgrade your **ALL** of your Ceph monitors **AND** - restart them **BEFORE** upgrading and restarting OSDs, MDSs, and gateways! - - -Emperor to Firefly -================== - -If your existing cluster is running a version older than v0.67 Dumpling, please -first upgrade to the latest Dumpling release before upgrading to v0.80 Firefly. -Please refer to `Cuttlefish to Dumpling`_ and the `Firefly release notes`_ for -details. To upgrade from a post-Emperor point release, see the `Firefly release -notes`_ for details. - - -Ceph Config File Changes ------------------------- - -We recommand adding the following to the ``[mon]`` section of your -``ceph.conf`` prior to upgrade:: - - mon warn on legacy crush tunables = false - -This will prevent health warnings due to the use of legacy CRUSH placement. 
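.. note:: If you later decide to adopt the newer CRUSH tunables instead of
   suppressing the warning, a command along the following lines switches the
   cluster to the currently optimal profile. Treat this as a sketch to be
   checked against the documentation for your release, and expect it to
   trigger data movement::

       ceph osd crush tunables optimal
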
-Although it is possible to rebalance existing data across your cluster, we do -not normally recommend it for production environments as a large amount of data -will move and there is a significant performance impact from the rebalancing. - - -Upgrade Sequence ----------------- - -Replace any reference to older repositories with a reference to the -Firefly repository. For example, with ``apt`` perform the following:: - - sudo rm /etc/apt/sources.list.d/ceph.list - echo deb http://download.ceph.com/debian-firefly/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list - -With CentOS/Red Hat distributions, remove the old repository. :: - - sudo rm /etc/yum.repos.d/ceph.repo - -Then add a new ``ceph.repo`` repository entry with the following contents, but -replace ``{distro}`` with your distribution (e.g., ``el6``, ``rhel6``, -``rhel7``, etc.). - -.. code-block:: ini - - [ceph] - name=Ceph Packages and Backports $basearch - baseurl=http://download.ceph.com/rpm/{distro}/$basearch - enabled=1 - gpgcheck=1 - gpgkey=https://download.ceph.com/keys/release.asc - - -.. note:: Ensure you use the correct URL for your distribution. Check the - http://download.ceph.com/rpm directory for your distribution. - -.. note:: Since you can upgrade using ``ceph-deploy`` you will only need to add - the repository on Ceph Client nodes where you use the ``ceph`` command line - interface or the ``ceph-deploy`` tool. - - -Upgrade daemons in the following order: - -#. **Monitors:** If the ``ceph-mon`` daemons are not restarted prior to the - ``ceph-osd`` daemons, the monitors will not correctly register their new - capabilities with the cluster and new features may not be usable until - the monitors are restarted a second time. - -#. **OSDs** - -#. **MDSs:** If the ``ceph-mds`` daemon is restarted first, it will wait until - all OSDs have been upgraded before finishing its startup sequence. - -#. **Gateways:** Upgrade ``radosgw`` daemons together. There is a subtle change - in behavior for multipart uploads that prevents a multipart request that - was initiated with a new ``radosgw`` from being completed by an old - ``radosgw``. - - Upgrade Procedures ================== @@ -697,50 +228,7 @@ If you do not have the latest version, you may need to uninstall, auto remove dependencies and reinstall. -Transitioning to ceph-deploy -============================ - -If you have an existing cluster that you deployed with ``mkcephfs`` (usually -Argonaut or Bobtail releases), you will need to make a few changes to your -configuration to ensure that your cluster will work with ``ceph-deploy``. - - -Monitor Keyring ---------------- - -You will need to add ``caps mon = "allow *"`` to your monitor keyring if it is -not already in the keyring. By default, the monitor keyring is located under -``/var/lib/ceph/mon/ceph-$id/keyring``. When you have added the ``caps`` -setting, your monitor keyring should look something like this:: - - [mon.] - key = AQBJIHhRuHCwDRAAZjBTSJcIBIoGpdOR9ToiyQ== - caps mon = "allow *" - -Adding ``caps mon = "allow *"`` will ease the transition from ``mkcephfs`` to -``ceph-deploy`` by allowing ``ceph-create-keys`` to use the ``mon.`` keyring -file in ``$mon_data`` and get the caps it needs. - - -Use Default Paths ------------------ - -Under the ``/var/lib/ceph`` directory, the ``mon`` and ``osd`` directories need -to use the default paths. 
- -- **OSDs**: The path should be ``/var/lib/ceph/osd/ceph-$id`` -- **MON**: The path should be ``/var/lib/ceph/mon/ceph-$id`` - -Under those directories, the keyring should be in a file named ``keyring``. - - - - -.. _Monitor Config Reference: ../../rados/configuration/mon-config-ref -.. _Joao's blog post: http://ceph.com/dev-notes/cephs-new-monitor-changes -.. _User Management - Backward Compatibility: ../../rados/configuration/auth-config-ref/#backward-compatibility .. _manually: ../install-storage-cluster/ .. _Operating a Cluster: ../../rados/operations/operating .. _Monitoring a Cluster: ../../rados/operations/monitoring -.. _Firefly release notes: ../../release-notes/#v0-80-firefly .. _release notes: ../../release-notes diff --git a/doc/rados/configuration/auth-config-ref.rst b/doc/rados/configuration/auth-config-ref.rst index 96ec83a604920..c6816f1e5187e 100644 --- a/doc/rados/configuration/auth-config-ref.rst +++ b/doc/rados/configuration/auth-config-ref.rst @@ -368,75 +368,10 @@ Time to Live :Default: ``60*60`` -Backward Compatibility -====================== - -For Cuttlefish and earlier releases, see `Cephx`_. - -In Ceph Argonaut v0.48 and earlier versions, if you enable ``cephx`` -authentication, Ceph only authenticates the initial communication between the -client and daemon; Ceph does not authenticate the subsequent messages they send -to each other, which has security implications. In Ceph Bobtail and subsequent -versions, Ceph authenticates all ongoing messages between the entities using the -session key set up for that initial authentication. - -We identified a backward compatibility issue between Argonaut v0.48 (and prior -versions) and Bobtail (and subsequent versions). During testing, if you -attempted to use Argonaut (and earlier) daemons with Bobtail (and later) -daemons, the Argonaut daemons did not know how to perform ongoing message -authentication, while the Bobtail versions of the daemons insist on -authenticating message traffic subsequent to the initial -request/response--making it impossible for Argonaut (and prior) daemons to -interoperate with Bobtail (and subsequent) daemons. - -We have addressed this potential problem by providing a means for Argonaut (and -prior) systems to interact with Bobtail (and subsequent) systems. Here's how it -works: by default, the newer systems will not insist on seeing signatures from -older systems that do not know how to perform them, but will simply accept such -messages without authenticating them. This new default behavior provides the -advantage of allowing two different releases to interact. **We do not recommend -this as a long term solution**. Allowing newer daemons to forgo ongoing -authentication has the unfortunate security effect that an attacker with control -of some of your machines or some access to your network can disable session -security simply by claiming to be unable to sign messages. - -.. note:: Even if you don't actually run any old versions of Ceph, - the attacker may be able to force some messages to be accepted unsigned in the - default scenario. While running Cephx with the default scenario, Ceph still - authenticates the initial communication, but you lose desirable session security. - -If you know that you are not running older versions of Ceph, or you are willing -to accept that old servers and new servers will not be able to interoperate, you -can eliminate this security risk. 
If you do so, any Ceph system that is new -enough to support session authentication and that has Cephx enabled will reject -unsigned messages. To preclude new servers from interacting with old servers, -include the following in the ``[global]`` section of your `Ceph -configuration`_ file directly below the line that specifies the use of Cephx -for authentication:: - - cephx require signatures = true ; everywhere possible - -You can also selectively require signatures for cluster internal -communications only, separate from client-facing service:: - - cephx cluster require signatures = true ; for cluster-internal communication - cephx service require signatures = true ; for client-facing service - -An option to make a client require signatures from the cluster is not -yet implemented. - -**We recommend migrating all daemons to the newer versions and enabling the -foregoing flag** at the nearest practical time so that you may avail yourself -of the enhanced authentication. - -.. note:: Ceph kernel modules do not support signatures yet. - - .. _Storage Cluster Quick Start: ../../../start/quick-ceph-deploy/ .. _Monitor Bootstrapping: ../../../install/manual-deployment#monitor-bootstrapping .. _Operating a Cluster: ../../operations/operating .. _Manual Deployment: ../../../install/manual-deployment -.. _Cephx: http://docs.ceph.com/docs/cuttlefish/rados/configuration/auth-config-ref/ .. _Ceph configuration: ../ceph-conf .. _Create an Admin Host: ../../deployment/ceph-deploy-admin .. _Architecture - High Availability Authentication: ../../../architecture#high-availability-authentication diff --git a/doc/rados/operations/add-or-rm-osds.rst b/doc/rados/operations/add-or-rm-osds.rst index 16364ebb70395..89d16f9f5a113 100644 --- a/doc/rados/operations/add-or-rm-osds.rst +++ b/doc/rados/operations/add-or-rm-osds.rst @@ -120,11 +120,7 @@ weight). you specify only the root bucket, the command will attach the OSD directly to the root, but CRUSH rules expect OSDs to be inside of hosts. - For Argonaut (v 0.48), execute the following:: - - ceph osd crush add {id} {name} {weight} [{bucket-type}={bucket-name} ...] - - For Bobtail (v 0.56) and later releases, execute the following:: + Execute the following:: ceph osd crush add {id-or-name} {weight} [{bucket-type}={bucket-name} ...] @@ -134,36 +130,6 @@ weight). `Add/Move an OSD`_ for details. -.. topic:: Argonaut (v0.48) Best Practices - - To limit impact on user I/O performance, add an OSD to the CRUSH map - with an initial weight of ``0``. Then, ramp up the CRUSH weight a - little bit at a time. For example, to ramp by increments of ``0.2``, - start with:: - - ceph osd crush reweight {osd-id} .2 - - and allow migration to complete before reweighting to ``0.4``, - ``0.6``, and so on until the desired CRUSH weight is reached. - - To limit the impact of OSD failures, you can set:: - - mon osd down out interval = 0 - - which prevents down OSDs from automatically being marked out, and then - ramp them down manually with:: - - ceph osd reweight {osd-num} .8 - - Again, wait for the cluster to finish migrating data, and then adjust - the weight further until you reach a weight of 0. Note that this - problem prevents the cluster to automatically re-replicate data after - a failure, so please ensure that sufficient monitoring is in place for - an administrator to intervene promptly. - - Note that this practice will no longer be necessary in Bobtail and - subsequent releases. - .. 
_rados-replacing-an-osd: Replacing an OSD diff --git a/doc/rados/operations/pools.rst b/doc/rados/operations/pools.rst index 80fafabc0affe..5bea83a4c57f0 100644 --- a/doc/rados/operations/pools.rst +++ b/doc/rados/operations/pools.rst @@ -233,8 +233,6 @@ If you rename a pool and you have per-pool capabilities for an authenticated user, you must update the user's capabilities (i.e., caps) with the new pool name. -.. note:: Version ``0.48`` Argonaut and above. - Show Pool Statistics ==================== @@ -250,9 +248,6 @@ To make a snapshot of a pool, execute:: ceph osd pool mksnap {pool-name} {snap-name} -.. note:: Version ``0.48`` Argonaut and above. - - Remove a Snapshot of a Pool =========================== @@ -260,8 +255,6 @@ To remove a snapshot of a pool, execute:: ceph osd pool rmsnap {pool-name} {snap-name} -.. note:: Version ``0.48`` Argonaut and above. - .. _setpoolvalues: @@ -370,7 +363,6 @@ You may set values for the following keys: :Description: Set/Unset HASHPSPOOL flag on a given pool. :Type: Integer :Valid Range: 1 sets flag, 0 unsets flag -:Version: Version ``0.48`` Argonaut and above. .. _nodelete: diff --git a/doc/rados/troubleshooting/troubleshooting-osd.rst b/doc/rados/troubleshooting/troubleshooting-osd.rst index 2ca5fdbe8b701..cb8bd9800eff8 100644 --- a/doc/rados/troubleshooting/troubleshooting-osd.rst +++ b/doc/rados/troubleshooting/troubleshooting-osd.rst @@ -313,7 +313,6 @@ same drive as your OSDs. Additionally, if you run monitors on the same host as the OSDs, you may incur performance issues related to: - Running an older kernel (pre-3.0) -- Running Argonaut with an old ``glibc`` - Running a kernel with no syncfs(2) syscall. In these cases, multiple OSDs running on the same host can drag each other down diff --git a/doc/start/hardware-recommendations.rst b/doc/start/hardware-recommendations.rst index eac5dc8c9f0be..01442366d59a8 100644 --- a/doc/start/hardware-recommendations.rst +++ b/doc/start/hardware-recommendations.rst @@ -13,10 +13,7 @@ of daemon. We recommend using other hosts for processes that utilize your data cluster (e.g., OpenStack, CloudStack, etc). -.. tip:: Check out the Ceph blog too. Articles like `Ceph Write Throughput 1`_, - `Ceph Write Throughput 2`_, `Argonaut v. Bobtail Performance Preview`_, - `Bobtail Performance - I/O Scheduler Comparison`_ and others are an - excellent source of information. +.. tip:: Check out the Ceph blog too. CPU @@ -344,7 +341,5 @@ configurations for Ceph OSDs, and a lighter configuration for monitors. .. _Ceph Write Throughput 1: http://ceph.com/community/ceph-performance-part-1-disk-controller-write-throughput/ .. _Ceph Write Throughput 2: http://ceph.com/community/ceph-performance-part-2-write-throughput-without-ssd-journals/ -.. _Argonaut v. Bobtail Performance Preview: http://ceph.com/uncategorized/argonaut-vs-bobtail-performance-preview/ -.. _Bobtail Performance - I/O Scheduler Comparison: http://ceph.com/community/ceph-bobtail-performance-io-scheduler-comparison/ .. _Mapping Pools to Different Types of OSDs: ../../rados/operations/crush-map#placing-different-pools-on-different-osds .. _OS Recommendations: ../os-recommendations -- 2.39.5