From f6ddfad6da9c16da72a4c32388cd412c73837431 Mon Sep 17 00:00:00 2001
From: John Spray
Date: Thu, 30 Jun 2016 16:18:46 +0100
Subject: [PATCH] doc/cephfs: remove some scary warnings

...and restructure the "early adopters" page into a "best practices"
guide.  Early adopters are now just adopters :-)

Signed-off-by: John Spray
---
 doc/cephfs/early-adopters.rst | 44 ++++++++++++++++++++---------------
 doc/cephfs/index.rst          |  7 ++----
 2 files changed, 27 insertions(+), 24 deletions(-)

diff --git a/doc/cephfs/early-adopters.rst b/doc/cephfs/early-adopters.rst
index 057d210b30b9e..891485838e92c 100644
--- a/doc/cephfs/early-adopters.rst
+++ b/doc/cephfs/early-adopters.rst
@@ -1,31 +1,37 @@
-CephFS for early adopters
-=========================
+CephFS best practices
+=====================
+
+This guide provides recommendations for best results when deploying CephFS.
+
+For the actual configuration guide for CephFS, please see the instructions
+at :doc:`/cephfs/index`.
 
-This pages provides guidance for early adoption of CephFS by users
-with an appetite for adventure. While work is ongoing to build the
-scrubbing and disaster recovery tools needed to run CephFS in demanding
-production environments, it is already useful for community members to
-try CephFS and provide bug reports and feedback.
+Which Ceph version?
+===================
 
-Setup instructions
-==================
+Use at least the Jewel (v10.2.0) release of Ceph. This is the first
+release to include stable CephFS code and fsck/repair tools. Make sure
+you are using the latest point release to get bug fixes.
 
-Please see the instructions at :doc:`/cephfs/index`.
+Note that Ceph releases do not include a kernel, this is versioned
+and released separately. See below for guidance of choosing an
+appropriate kernel version if you are using the kernel client
+for CephFS.
 
 Most stable configuration
 =========================
 
+Some features in CephFS are still experimental. See
+:doc:`/cephfs/experimental-features` for guidance on these.
+
 For the best chance of a happy healthy filesystem, use a **single active MDS**
-and **do not use snapshots**. Both of these are the default:
-
-* Snapshots are disabled by default, unless they are enabled explicitly by
-  an administrator using the ``allow_new_snaps`` setting.
-* Ceph will use a single active MDS unless an administrator explicitly sets
-  ``max_mds`` to a value greater than 1. Note that creating additional
-  MDS daemons (e.g. with ``ceph-deploy mds create``) is okay, as these will
-  by default simply become standbys. It is also fairly safe to enable
-  standby-replay mode.
+and **do not use snapshots**. Both of these are the default.
+
+Note that creating multiple MDS daemons is fine, as these will simply be
+used as standbys. However, for best stability you should avoid
+adjusting ``max_mds`` upwards, as this would cause multiple
+daemons to be active at once.
 
 Which client?
 =============
 
diff --git a/doc/cephfs/index.rst b/doc/cephfs/index.rst
index bd9d6b10f14d0..928f5cae35ae0 100644
--- a/doc/cephfs/index.rst
+++ b/doc/cephfs/index.rst
@@ -7,11 +7,8 @@ a Ceph Storage Cluster to store its data. The Ceph filesystem uses the same
 Ceph Storage Cluster system as Ceph Block Devices, Ceph Object Storage with
 its S3 and Swift APIs, or native bindings (librados).
 
-.. important:: CephFS currently lacks a robust 'fsck' check and
-               repair function. Please use caution when storing
-               important data as the disaster recovery tools are
-               still under development.  For more information about
-               using CephFS today, see :doc:`/cephfs/early-adopters`
+.. note:: If you are evaluating CephFS for the first time, please review
+          the guidance for early adopters: :doc:`/cephfs/early-adopters`
 
 .. ditaa::
            +-----------------------+  +------------------------+
-- 
2.39.5
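
The advice in this patch boils down to a small amount of cluster configuration: keep a single active MDS (``max_mds`` at its default of 1) and leave snapshots disabled. As a rough, hedged sketch of how an administrator might confirm a cluster matches that recommended shape, the commands below use standard Ceph CLI calls; the filesystem name "cephfs" is a placeholder and the exact output format varies between releases.

    # Summarise the MDS map: ideally one daemon up:active, with any extra
    # MDS daemons (e.g. created via ceph-deploy mds create) listed as standbys.
    ceph mds stat

    # List filesystems, then dump the settings of one of them; max_mds should
    # normally remain at 1, per the guidance above ("cephfs" is a placeholder).
    ceph fs ls
    ceph fs get cephfs

    # Overall cluster health, which also reflects MDS availability.
    ceph status

These checks are read-only; raising ``max_mds`` or enabling new snapshots is exactly what the best-practices text advises against for the most stable configuration.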