From: John Spray
Date: Thu, 30 Jun 2016 15:20:20 +0000 (+0100)
Subject: doc/cephfs: rename early-adopters to best-practices
X-Git-Tag: ses5-milestone5~530^2
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=refs%2Fpull%2F10068%2Fhead;p=ceph.git

doc/cephfs: rename early-adopters to best-practices

...and give it a link in the TOC (in addition to where it is
referenced from the top of index.rst)

Signed-off-by: John Spray
---

diff --git a/doc/cephfs/best-practices.rst b/doc/cephfs/best-practices.rst
new file mode 100644
index 000000000000..891485838e92
--- /dev/null
+++ b/doc/cephfs/best-practices.rst
@@ -0,0 +1,88 @@
+
+CephFS best practices
+=====================
+
+This guide provides recommendations for best results when deploying CephFS.
+
+For the actual configuration guide for CephFS, please see the instructions
+at :doc:`/cephfs/index`.
+
+Which Ceph version?
+===================
+
+Use at least the Jewel (v10.2.0) release of Ceph. This is the first
+release to include stable CephFS code and fsck/repair tools. Make sure
+you are using the latest point release to get bug fixes.
+
+Note that Ceph releases do not include a kernel; the kernel is versioned
+and released separately. See below for guidance on choosing an
+appropriate kernel version if you are using the kernel client
+for CephFS.
+
+Most stable configuration
+=========================
+
+Some features in CephFS are still experimental. See
+:doc:`/cephfs/experimental-features` for guidance on these.
+
+For the best chance of a happy, healthy filesystem, use a **single active MDS**
+and **do not use snapshots**. Both of these are the defaults.
+
+Note that creating multiple MDS daemons is fine, as these will simply be
+used as standbys. However, for best stability you should avoid
+adjusting ``max_mds`` upwards, as this would cause multiple
+daemons to be active at once.
+
+Which client?
+=============
+
+The fuse client is the easiest way to get up-to-date code, while
+the kernel client will often give better performance.
+
+The clients do not always provide equivalent functionality; for example,
+the fuse client supports client-enforced quotas while the kernel client
+does not.
+
+When encountering bugs or performance issues, it is often instructive to
+try the other client, in order to find out whether the bug is
+client-specific or not (and then to let the developers know).
+
+Which kernel version?
+---------------------
+
+Because the kernel client is distributed as part of the Linux kernel (not
+as part of packaged Ceph releases), you will need to consider which
+kernel version to use on your client nodes. Older kernels are known to
+include buggy Ceph clients and may not support features that more
+recent Ceph clusters support.
+
+Remember that the "latest" kernel in a stable Linux distribution is likely
+to be years behind the latest upstream Linux kernel, where Ceph development
+(including bug fixes) takes place.
+
+As a rough guide, as of Ceph 10.x (Jewel), you should be using at least a
+4.x kernel. If you absolutely have to use an older kernel, use
+the fuse client instead of the kernel client.
+
+This advice does not apply if you are using a Linux distribution that
+includes CephFS support, as in this case the distributor will be responsible
+for backporting fixes to their stable kernel: check with your vendor.
+
+Reporting issues
+================
+
+If you have identified a specific issue, please report it with as much
+information as possible. Especially important information:
+
+* Ceph versions installed on client and server
+* Whether you are using the kernel or fuse client
+* If you are using the kernel client, what kernel version?
+* How many clients are in play, doing what kind of workload?
+* If a system is 'stuck', is that affecting all clients or just one?
+* Any Ceph health messages
+* Any backtraces in the Ceph logs from crashes
+
+If you are satisfied that you have found a bug, please file it on
+http://tracker.ceph.com. For more general queries, please write
+to the ceph-users mailing list.
+
diff --git a/doc/cephfs/early-adopters.rst b/doc/cephfs/early-adopters.rst
deleted file mode 100644
index 891485838e92..000000000000
--- a/doc/cephfs/early-adopters.rst
+++ /dev/null
@@ -1,88 +0,0 @@
-
-CephFS best practices
-=====================
-
-This guide provides recommendations for best results when deploying CephFS.
-
-For the actual configuration guide for CephFS, please see the instructions
-at :doc:`/cephfs/index`.
-
-Which Ceph version?
-===================
-
-Use at least the Jewel (v10.2.0) release of Ceph. This is the first
-release to include stable CephFS code and fsck/repair tools. Make sure
-you are using the latest point release to get bug fixes.
-
-Note that Ceph releases do not include a kernel, this is versioned
-and released separately. See below for guidance of choosing an
-appropriate kernel version if you are using the kernel client
-for CephFS.
-
-Most stable configuration
-=========================
-
-Some features in CephFS are still experimental. See
-:doc:`/cephfs/experimental-features` for guidance on these.
-
-For the best chance of a happy healthy filesystem, use a **single active MDS**
-and **do not use snapshots**. Both of these are the default.
-
-Note that creating multiple MDS daemons is fine, as these will simply be
-used as standbys. However, for best stability you should avoid
-adjusting ``max_mds`` upwards, as this would cause multiple
-daemons to be active at once.
-
-Which client?
-=============
-
-The fuse client is the easiest way to get up to date code, while
-the kernel client will often give better performance.
-
-The clients do not always provide equivalent functionality, for example
-the fuse client supports client-enforced quotas while the kernel client
-does not.
-
-When encountering bugs or performance issues, it is often instructive to
-try using the other client, in order to find out whether the bug was
-client-specific or not (and then to let the developers know).
-
-Which kernel version?
----------------------
-
-Because the kernel client is distributed as part of the linux kernel (not
-as part of packaged ceph releases),
-you will need to consider which kernel version to use on your client nodes.
-Older kernels are known to include buggy ceph clients, and may not support
-features that more recent Ceph clusters support.
-
-Remember that the "latest" kernel in a stable linux distribution is likely
-to be years behind the latest upstream linux kernel where Ceph development
-takes place (including bug fixes).
-
-As a rough guide, as of Ceph 10.x (Jewel), you should be using a least a
-4.x kernel. If you absolutely have to use an older kernel, you should use
-the fuse client instead of the kernel client.
-
-This advice does not apply if you are using a linux distribution that
-includes CephFS support, as in this case the distributor will be responsible
-for backporting fixes to their stable kernel: check with your vendor.
-
-Reporting issues
-================
-
-If you have identified a specific issue, please report it with as much
-information as possible. Especially important information:
-
-* Ceph versions installed on client and server
-* Whether you are using the kernel or fuse client
-* If you are using the kernel client, what kernel version?
-* How many clients are in play, doing what kind of workload?
-* If a system is 'stuck', is that affecting all clients or just one?
-* Any ceph health messages
-* Any backtraces in the ceph logs from crashes
-
-If you are satisfied that you have found a bug, please file it on
-http://tracker.ceph.com. For more general queries please write
-to the ceph-users mailing list.
-
diff --git a/doc/cephfs/index.rst b/doc/cephfs/index.rst
index 928f5cae35ae..ece8fcbae5bd 100644
--- a/doc/cephfs/index.rst
+++ b/doc/cephfs/index.rst
@@ -8,7 +8,7 @@ Storage Cluster
 system as Ceph Block Devices, Ceph Object Storage with its S3
 and Swift APIs, or native bindings (librados).
 
 .. note:: If you are evaluating CephFS for the first time, please review
-          the guidance for early adopters: :doc:`/cephfs/early-adopters`
+          the best practices for deployment: :doc:`/cephfs/best-practices`
 
 .. ditaa::
            +-----------------------+  +------------------------+
@@ -79,6 +79,7 @@ authentication keyring.
 .. toctree::
    :maxdepth: 1
 
+   Deployment best practices <best-practices>
    Administrative commands <administration>
    POSIX compatibility <posix>
    Experimental Features <experimental-features>
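The kernel-version advice in the new document boils down to a quick check on each client node. A minimal sketch of that rule of thumb (the 4.x threshold and the echoed messages are just illustrations of the doc's Jewel-era guidance, not an official tool):

```shell
# Sketch: apply the doc's rule of thumb -- use the kernel client only
# on reasonably new (>= 4.x) kernels, otherwise prefer the fuse client.
major=$(uname -r | cut -d. -f1)
if [ "$major" -ge 4 ]; then
    echo "kernel client: ok for this kernel"
else
    echo "kernel client: too old, prefer ceph-fuse"
fi
```

Depending on the outcome, a client would then typically mount with the kernel driver (``mount -t ceph``) or with ``ceph-fuse``; see the CephFS mount documentation for the exact invocation in your deployment.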