From: Zac Dover
Date: Mon, 11 Nov 2024 23:47:21 +0000 (+1000)
Subject: doc/start: fix "are are" typo
X-Git-Tag: v20.0.0~676^2
X-Git-Url: http://git.apps.os.sepia.ceph.com/?a=commitdiff_plain;h=22010719d7edf2a6cf7de7b115ed797e7942ec47;p=ceph.git

doc/start: fix "are are" typo

Fix typo reading "They are are single-threaded".

s/are are/are/

Fixes: https://tracker.ceph.com/issues/68901
Signed-off-by: Zac Dover
---

diff --git a/doc/start/hardware-recommendations.rst b/doc/start/hardware-recommendations.rst
index 823deb9b0c38d..3c3c781a815c9 100644
--- a/doc/start/hardware-recommendations.rst
+++ b/doc/start/hardware-recommendations.rst
@@ -22,13 +22,12 @@ another, but below are some general guidelines.
 CPU
 ===
 
-CephFS Metadata Servers (MDS) are CPU-intensive. They are
-are single-threaded and perform best with CPUs with a high clock rate (GHz). MDS
-servers do not need a large number of CPU cores unless they are also hosting other
-services, such as SSD OSDs for the CephFS metadata pool.
-OSD nodes need enough processing power to run the RADOS service, to calculate data
-placement with CRUSH, to replicate data, and to maintain their own copies of the
-cluster map.
+CephFS Metadata Servers (MDS) are CPU-intensive. They are single-threaded
+and perform best with CPUs with a high clock rate (GHz). MDS servers do not
+need a large number of CPU cores unless they are also hosting other services,
+such as SSD OSDs for the CephFS metadata pool. OSD nodes need enough
+processing power to run the RADOS service, to calculate data placement with
+CRUSH, to replicate data, and to maintain their own copies of the cluster map.
 
 With earlier releases of Ceph, we would make hardware recommendations based on
 the number of cores per OSD, but this cores-per-osd metric is no longer as