From: John Spray
Date: Wed, 11 Feb 2015 12:11:03 +0000 (+0000)
Subject: doc: add a page for CephFS early adopters
X-Git-Tag: v0.93~46^2~4^2
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=a5804c45f138f9c1951374081cb53182672addcd;p=ceph.git

doc: add a page for CephFS early adopters

This is where we should point people who are trying out CephFS
in evaluation environments.

Signed-off-by: John Spray
---

diff --git a/doc/cephfs/early-adopters.rst b/doc/cephfs/early-adopters.rst
new file mode 100644
index 00000000000..1047479f258
--- /dev/null
+++ b/doc/cephfs/early-adopters.rst
@@ -0,0 +1,61 @@
+
+CephFS for early adopters
+=========================
+
+This page provides guidance for early adoption of CephFS by users
+with an appetite for adventure.  While work is ongoing to build the
+scrubbing and disaster recovery tools needed to run CephFS in demanding
+production environments, it is already useful for community members to
+try CephFS and provide bug reports and feedback.
+
+Setup instructions
+==================
+
+Please see the instructions at :doc:`/cephfs/index`.
+
+Most stable configuration
+=========================
+
+For the best chance of a happy, healthy filesystem, use a **single active MDS**
+and **do not use snapshots**.  Both of these are the default:
+
+* Snapshots are disabled by default, unless they are enabled explicitly by
+  an administrator using the ``allow_new_snaps`` setting.
+* Ceph will use a single active MDS unless an administrator explicitly sets
+  ``max_mds`` to a value greater than 1.  Note that creating additional
+  MDS daemons (e.g. with ``ceph-deploy mds create``) is okay, as these will
+  by default simply become standbys.  It is also fairly safe to enable
+  standby-replay mode.
+
+Which client?
+=============
+
+The fuse client is the easiest way to get up-to-date code, while
+the kernel client will often give better performance.
+
+The clients do not always provide equivalent functionality; for example,
+the fuse client supports client-enforced quotas while the kernel client
+does not.
+
+When encountering bugs or performance issues, it is often instructive to
+try using the other client, in order to find out whether the bug is
+client-specific or not (and then to let the developers know).
+
+Reporting issues
+================
+
+If you have identified a specific issue, please report it with as much
+information as possible.  Especially important information:
+
+* Ceph versions installed on client and server
+* Whether you are using the kernel or fuse client
+* If you are using the kernel client, what kernel version?
+* How many clients are in play, doing what kind of workload?
+* If a system is 'stuck', is that affecting all clients or just one?
+* Any ceph health messages
+* Any backtraces in the ceph logs from crashes
+
+If you are satisfied that you have found a bug, please file it on
+http://tracker.ceph.com.  For more general queries, please write
+to the ceph-users mailing list.
+
diff --git a/doc/cephfs/index.rst b/doc/cephfs/index.rst
index 40fac94f304..4446c7c42a4 100644
--- a/doc/cephfs/index.rst
+++ b/doc/cephfs/index.rst
@@ -10,7 +10,8 @@ and Swift APIs, or native bindings (librados).
 .. important:: CephFS currently lacks a robust 'fsck' check and repair
                function.  Please use caution when storing important data as the
                disaster recovery tools are
-               still under development.
+               still under development.  For more information about
+               using CephFS today, see :doc:`/cephfs/early-adopters`.
 
 .. ditaa::
             +-----------------------+  +------------------------+
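
As a quick check of the "most stable configuration" described in the page
above, the MDS map can be inspected from any node with admin credentials.
A minimal sketch, assuming a roughly v0.93-era ``ceph`` CLI (field names and
output formats vary between releases)::

    # One-line MDS summary; a single active MDS with optional standbys
    # looks something like "1/1/1 up {0=a=up:active}, 1 up:standby".
    $ ceph mds stat

    # Full MDS map; confirm that max_mds is still 1 and note any flags
    # (snapshots are only enabled if an administrator has explicitly set
    # allow_new_snaps).
    $ ceph mds dump | grep -E 'max_mds|flags'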
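For the "Which client?" section, the two clients are mounted differently.
A minimal sketch, assuming a monitor reachable at ``mon-host``, the default
``admin`` key, and a mount point of ``/mnt/cephfs`` (all placeholder names)::

    # FUSE client: ceph-fuse picks up /etc/ceph/ceph.conf and the admin
    # keyring; -m points it at a monitor.
    $ sudo ceph-fuse -m mon-host:6789 /mnt/cephfs

    # Kernel client: pass the key via secretfile= so it does not appear
    # in the process list.
    $ sudo mount -t ceph mon-host:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret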
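Most of the details requested under "Reporting issues" can be gathered with
a few standard commands; the sketch below assumes a default packaged install
with logs under ``/var/log/ceph``::

    # Ceph version on every client and server involved.
    $ ceph --version

    # Kernel version, relevant when using the kernel client.
    $ uname -r

    # Cluster status and any health messages.
    $ ceph -s
    $ ceph health detail

    # Daemon logs, where crash backtraces normally appear.
    $ ls -l /var/log/ceph/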