From 3b9f70d4fd54969a7606ea4464d4db44d51330a8 Mon Sep 17 00:00:00 2001
From: John Wilkins
Date: Thu, 27 Jul 2017 10:01:29 -0700
Subject: [PATCH] doc/cephfs: Removed contractions for ESL speakers.

Signed-off-by: John Wilkins
---
 doc/cephfs/experimental-features.rst |  2 +-
 doc/cephfs/journaler.rst             |  2 +-
 doc/cephfs/mantle.rst                |  8 ++++----
 doc/cephfs/troubleshooting.rst       | 14 +++++++-------
 4 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/doc/cephfs/experimental-features.rst b/doc/cephfs/experimental-features.rst
index 5e5b414ecbe..1dc781a11f0 100644
--- a/doc/cephfs/experimental-features.rst
+++ b/doc/cephfs/experimental-features.rst
@@ -4,7 +4,7 @@ Experimental Features
 
 CephFS includes a number of experimental features which are not fully stabilized
 or qualified for users to turn on in real deployments. We generally do our best
-to clearly demarcate these and fence them off so they can't be used by mistake.
+to clearly demarcate these and fence them off so they cannot be used by mistake.
 
 Some of these features are closer to being done than others, though. We describe
 each of them with an approximation of how risky they are and briefly describe
diff --git a/doc/cephfs/journaler.rst b/doc/cephfs/journaler.rst
index c90217f96e2..2121532f535 100644
--- a/doc/cephfs/journaler.rst
+++ b/doc/cephfs/journaler.rst
@@ -35,7 +35,7 @@
 
 ``journaler batch max``
 
-:Description: Maximum bytes we'll delay flushing.
+:Description: Maximum bytes we will delay flushing.
 :Type: 64-bit Unsigned Integer
 :Required: No
 :Default: ``0``
diff --git a/doc/cephfs/mantle.rst b/doc/cephfs/mantle.rst
index 6ad973e2dbd..9be89d662e3 100644
--- a/doc/cephfs/mantle.rst
+++ b/doc/cephfs/mantle.rst
@@ -35,7 +35,7 @@ Quickstart with vstart
 Most of the time this guide will work but sometimes all MDSs lock up and you
 cannot actually see them spill. It is much better to run this on a cluster.
 
-As a pre-requistie, we assume you've installed `mdtest
+As a prerequisite, we assume you have installed `mdtest
 `_ or pulled the `Docker image
 `_. We use mdtest because we need to generate enough load to get over the
 MIN_OFFLOAD threshold that is
@@ -106,7 +106,7 @@ Mantle with `vstart.sh`
 
        done
 
-6. When you're done, you can kill all the clients with:
+6. When you are done, you can kill all the clients with:
 
   ::
 
@@ -197,7 +197,7 @@ Here we use `lua_pcall` instead of `lua_call` because we want to handle errors
 in the MDBalancer. We do not want the error propagating up the call chain. The
 cls_lua class wants to handle the error itself because it must fail gracefully.
-For Mantle, we don't care if a Lua error crashes our balancer -- in that case,
-we'll fall back to the original balancer.
+For Mantle, we do not care if a Lua error crashes our balancer -- in that case,
+we will fall back to the original balancer.
 
 The performance improvement of using `lua_call` over `lua_pcall` would not be
 leveraged here because the balancer is invoked every 10 seconds by default.
diff --git a/doc/cephfs/troubleshooting.rst b/doc/cephfs/troubleshooting.rst
index 4a2b3e38035..4158d327cc1 100644
--- a/doc/cephfs/troubleshooting.rst
+++ b/doc/cephfs/troubleshooting.rst
@@ -13,7 +13,7 @@ them. Start by looking to see if either side has stuck operations
 RADOS Health
 ============
 
-If part of the CephFS metadata or data pools is unavaible and CephFS isn't
+If part of the CephFS metadata or data pools is unavailable and CephFS is not
 responding, it is probably because RADOS itself is unhealthy. Resolve those
 problems first (:doc:`../../rados/troubleshooting/index`).
 
@@ -47,15 +47,15 @@ Usually the last "event" will have been an attempt to gather locks, or sending
 the operation off to the MDS log. If it is waiting on the OSDs, fix them. If
 operations are stuck on a specific inode, you probably have a client holding
 caps which prevent others from using it, either because the client is trying
-to flush out dirty data or because you've encountered a bug in CephFS'
+to flush out dirty data or because you have encountered a bug in CephFS'
 distributed file lock code (the file "capabilities" ["caps"] system).
 
-If it's a result of a bug in the capabilities code, restarting the MDS
+If it is a result of a bug in the capabilities code, restarting the MDS
 is likely to resolve the problem.
 
-If there are no slow requests reported on the MDS, and it isn't reporting
+If there are no slow requests reported on the MDS, and it is not reporting
 that clients are misbehaving, either the client has a problem or its
-requests aren't reaching the MDS.
+requests are not reaching the MDS.
 
 ceph-fuse debugging
 ===================
@@ -101,7 +101,7 @@ slow requests are probably the ``mdsc`` and ``osdc`` files.
 * osdc: Dumps the current ops in-flight to OSDs (ie, file data IO)
 * osdmap: Dumps the current OSDMap epoch, pools, and OSDs
 
-If there are no stuck requests but you have file IO which isn't progressing,
+If there are no stuck requests but you have file IO which is not progressing,
 you might have a...
 
 Disconnected+Remounted FS
@@ -109,7 +109,7 @@
 Because CephFS has a "consistent cache", if your network connection is
 disrupted for a long enough time, the client will be forcibly
 disconnected from the system. At this point, the kernel client is in
-a bind: it can't safely write back dirty data, and many applications
+a bind: it cannot safely write back dirty data, and many applications
 do not handle IO errors correctly on close(). At the moment, the
 kernel client will remount the FS, but outstanding filesystem IO may
 or may not be satisfied. In these cases, you may need to reboot your
-- 
2.39.5
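
As an illustration of the `lua_pcall` error-handling pattern that the mantle.rst
hunk above describes, here is a minimal standalone sketch against the stock Lua
C API. It is not Ceph's actual MDBalancer code: the embedded script and the
`original_balancer` fallback are hypothetical stand-ins for the real balancer
plumbing. ::

    /* Run a Lua "balancer" chunk with lua_pcall so that a Lua error is
     * trapped here instead of propagating up the call chain; on failure,
     * fall back to a default balancer, mirroring Mantle's behavior. */
    #include <lua.h>
    #include <lauxlib.h>
    #include <lualib.h>
    #include <stdio.h>

    /* Hypothetical stand-in for the original (non-Lua) balancer. */
    static void original_balancer(void)
    {
        puts("falling back to the original balancer");
    }

    int main(void)
    {
        lua_State *L = luaL_newstate();
        luaL_openlibs(L);

        /* Both calls return 0 on success; a buggy script leaves an error
         * message on the stack instead of crashing the host process. */
        if (luaL_loadstring(L, "error('balancer bug')") != 0 ||
            lua_pcall(L, 0, 0, 0) != 0) {
            fprintf(stderr, "balancer failed: %s\n", lua_tostring(L, -1));
            lua_pop(L, 1);
            original_balancer();
        }

        lua_close(L);
        return 0;
    }

Compiled against a stock Lua (e.g. `cc pcall_demo.c -llua -lm`), the sketch
prints the trapped error and then the fallback message. Because `lua_pcall`
returns an error code and leaves the message on the stack rather than raising,
the caller can log the failure and fall back, which is exactly the
fail-gracefully behavior the paragraph describes.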