From: Kefu Chai
Date: Tue, 9 Aug 2016 07:56:24 +0000 (+0800)
Subject: doc: silence sphinx warnings
X-Git-Tag: ses5-milestone5~136^2
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=b3d9b8d975e2cfdad377922261a435d02c10c570;p=ceph.git

doc: silence sphinx warnings

Signed-off-by: Kefu Chai
---

diff --git a/doc/cephfs/troubleshooting.rst b/doc/cephfs/troubleshooting.rst
index 52442c37f5bd..fc2566071523 100644
--- a/doc/cephfs/troubleshooting.rst
+++ b/doc/cephfs/troubleshooting.rst
@@ -1,22 +1,26 @@
 =================
  Troubleshooting
 =================
+
 Slow/stuck operations
-~~~~~~~~~~~~~~~~
+=====================
+
 If you are experiencing apparently hung operations, the first task is to
 identify where the problem is occurring: in the client, the MDS, or the
 network connecting them. Start by looking to see if either side has stuck
 operations (:ref:`slow_requests`, below), and narrow it down from there.

 RADOS Health
-~~~~~~~~~~~~
+============
+
 If part of the CephFS metadata or data pools is unavailable and CephFS isn't
 responding, it is probably because RADOS itself is unhealthy. Resolve those
 problems first (:doc:`/rados/troubleshooting`).

 The MDS
-~~~~~~~
+=======
+
-If an operation is hung inside the MDS, it will eventually show up in "ceph health",
+If an operation is hung inside the MDS, it will eventually show up in ``ceph health``,
 identifying "slow requests are blocked". It may also identify clients as
 "failing to respond" or misbehaving in other ways. If the MDS identifies
 specific clients as misbehaving, you should investigate why they are doing so.
@@ -31,11 +35,12 @@ Otherwise, you have probably discovered a new bug and should report it to
 the developers!

 .. _slow_requests:
+
 Slow requests (MDS)
 -------------------
-You can list current operations via the admin socket by running
-::
-  ceph daemon mds.<name> dump_ops_in_flight
+You can list current operations via the admin socket by running::
+
+  ceph daemon mds.<name> dump_ops_in_flight

 from the MDS host. Identify the stuck commands and examine why they are stuck.
 Usually the last "event" will have been an attempt to gather locks, or sending
@@ -53,13 +58,13 @@ that clients are misbehaving, either the client has a problem or its
 requests aren't reaching the MDS.

 ceph-fuse debugging
-~~~~~~~~~~~~~~~~~~~
+===================

 ceph-fuse also supports dump_ops_in_flight. See if it has any in-flight
 operations and where they are stuck.

 Debug output
-===================
+------------

 To get more debugging information from ceph-fuse, try running in the
 foreground with logging to the console (``-d``) and enabling client debug
@@ -71,9 +76,10 @@ If you suspect a potential monitor issue, enable monitor debugging as well

 Kernel mount debugging
-~~~~~~~~~~~~~
+======================
+
 Slow requests
-==============
+-------------

 Unfortunately the kernel client does not support the admin socket, but it has
 similar (if limited) interfaces if your kernel has debugfs enabled. There
@@ -108,8 +114,8 @@ At the moment, the kernel client will remount the FS, but outstanding filesystem
 IO may or may not be satisfied. In these cases, you may need to reboot your
 client system.

-You can identify that you are in this situation if dmesg/kern.log reports something like
-::
+You can identify that you are in this situation if dmesg/kern.log reports something like::
+
   Jul 20 08:14:38 teuthology kernel: [3677601.123718] ceph: mds0 closed our session
   Jul 20 08:14:38 teuthology kernel: [3677601.128019] ceph: mds0 reconnect start
   Jul 20 08:14:39 teuthology kernel: [3677602.093378] ceph: mds0 reconnect denied
@@ -140,11 +146,11 @@ Mount 12 Error

 A mount 12 error with ``cannot allocate memory`` usually occurs if you have a
 version mismatch between the :term:`Ceph Client` version and the :term:`Ceph
-Storage Cluster` version. Check the versions using::
+Storage Cluster` version. Check the versions using::

   ceph -v

-If the Ceph Client is behind the Ceph cluster, try to upgrade it::
+If the Ceph Client is behind the Ceph cluster, try to upgrade it::

   sudo apt-get update && sudo apt-get install ceph-common
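A minimal sketch of the kernel-client debugfs interfaces referred to in the
kernel mount debugging section above (this assumes debugfs is available at the
conventional ``/sys/kernel/debug`` mount point; the per-client directory name
under ``ceph/`` varies by cluster fsid and client id, so a wildcard is used
here)::

  # Mount debugfs if it is not already mounted.
  sudo mount -t debugfs none /sys/kernel/debug
  # Each CephFS kernel client instance exposes a directory under
  # /sys/kernel/debug/ceph/. "mdsc" lists in-flight MDS requests;
  # "osdc" lists in-flight OSD requests. Run via a root shell so the
  # wildcard expands inside the root-only debugfs tree.
  sudo sh -c 'cat /sys/kernel/debug/ceph/*/mdsc'
  sudo sh -c 'cat /sys/kernel/debug/ceph/*/osdc'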
diff --git a/doc/dev/cephfs-snapshots.rst b/doc/dev/cephfs-snapshots.rst
index 1bd82d3718b5..7954f6625c5f 100644
--- a/doc/dev/cephfs-snapshots.rst
+++ b/doc/dev/cephfs-snapshots.rst
@@ -1,5 +1,5 @@
 CephFS Snapshots
-==============
+================

 CephFS supports snapshots, generally created by invoking mkdir against the
 (hidden, special) .snap directory.
@@ -18,7 +18,7 @@ features that make CephFS snapshots different from what you might expect:
   very fast.

 Important Data Structures
------------
+-------------------------
 * SnapRealm: A `SnapRealm` is created whenever you create a snapshot at a new
   point in the hierarchy (or, when a snapshotted inode is moved outside of its
   parent snapshot). SnapRealms contain an `sr_t srnode`, links to `past_parents`
@@ -32,7 +32,7 @@ Important Data Structures
   the inode number and first `snapid` of the inode/snapshot referenced.

 Creating a snapshot
-----------
+-------------------
 To make a snapshot on directory "/1/2/3/foo", the client invokes "mkdir" on
 the "/1/2/3/foo/.snaps" directory. This is transmitted to the MDS Server as a
 CEPH_MDS_OP_MKSNAP-tagged `MClientRequest`, and initially handled in
@@ -50,32 +50,32 @@ update the `SnapContext` they are using with that data. Note that this *is not*
 a synchronous part of the snapshot creation!

 Updating a snapshot
-----------
+-------------------
 If you delete a snapshot, or move data out of the parent snapshot's hierarchy,
 a similar process is followed. Extra code paths check to see if we can break
 the `past_parent` links between SnapRealms, or eliminate them entirely.

 Generating a SnapContext
---------
+------------------------
 A RADOS `SnapContext` consists of a snapshot sequence ID (`snapid`) and all
 the snapshot IDs that an object is already part of. To generate that list, we
 generate a list of all `snapids` associated with the SnapRealm and all its
 `past_parents`.

 Storing snapshot data
----------
+---------------------
 File data is stored in RADOS "self-managed" snapshots. Clients are careful to
 use the correct `SnapContext` when writing file data to the OSDs.

 Storing snapshot metadata
----------
+-------------------------
 Snapshotted dentries (and their inodes) are stored in-line as part of the
 directory they were in at the time of the snapshot. *All dentries* include a
 `first` and `last` snapid for which they are valid. (Non-snapshotted dentries
 will have their `last` set to CEPH_NOSNAP).

 Snapshot writeback
---------
+------------------
 There is a great deal of code to handle writeback efficiently.
 When a Client receives an `MClientSnap` message, it updates the local `SnapRealm`
 representation and its links to specific `Inodes`, and generates a `CapSnap`
@@ -88,7 +88,7 @@ process for flushing them. Dentries with outstanding `CapSnap` data are kept
 pinned and in the journal.

 Deleting snapshots
---------
+------------------
 Snapshots are deleted by invoking "rmdir" on the ".snaps" directory they are
 rooted in. (Attempts to delete a directory which roots snapshots *will fail*;
 you must delete the snapshots first.) Once deleted, they are entered into the
@@ -97,7 +97,7 @@ Metadata is cleaned up as the directory objects are read in and written back
 out again.

 Hard links
----------
+----------
 Hard links do not interact well with snapshots. A file is snapshotted when its
 primary link is part of a SnapRealm; other links *will not* preserve data.
 Generally the location where a file was first created will be its primary link,
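The snapshot workflow described above can be exercised from an ordinary client
mount. A minimal sketch, assuming a CephFS mount at ``/mnt/cephfs`` and the
default hidden snapshot directory name ``.snap`` (the text above also spells it
``.snaps``; the name is configurable on the client), with ``my_snap`` as a
hypothetical snapshot name::

  # Create a snapshot of /mnt/cephfs/1/2/3/foo: a mkdir inside the hidden
  # snapshot directory becomes a CEPH_MDS_OP_MKSNAP request to the MDS.
  mkdir /mnt/cephfs/1/2/3/foo/.snap/my_snap
  # List the snapshots rooted at this directory.
  ls /mnt/cephfs/1/2/3/foo/.snap
  # Delete the snapshot by removing its entry; a directory that still
  # roots snapshots cannot itself be removed until they are deleted.
  rmdir /mnt/cephfs/1/2/3/foo/.snap/my_snap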
diff --git a/doc/install/install-ceph-gateway.rst b/doc/install/install-ceph-gateway.rst
index dcdae298e52b..93db6430a55f 100644
--- a/doc/install/install-ceph-gateway.rst
+++ b/doc/install/install-ceph-gateway.rst
@@ -167,7 +167,7 @@ directory, you will want to maintain those paths in your Ceph configuration
 file if you used something other than default paths.

 A typical Ceph Object Gateway configuration file for an Apache-based deployment
-looks something like the following::
+looks something like the following:

 On Red Hat Enterprise Linux::