From: Tommi Virtanen
Date: Fri, 30 Mar 2012 18:11:12 +0000 (-0700)
Subject: doc: Fix reStructuredText syntax errors.
X-Git-Tag: v0.45~36
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=b162696b32a8565c24adf18d93c6c6afffda6525;p=ceph.git

doc: Fix reStructuredText syntax errors.

Signed-off-by: Tommi Virtanen
---

diff --git a/doc/control.rst b/doc/control.rst
index a0863fd11f15..b43767a1301e 100644
--- a/doc/control.rst
+++ b/doc/control.rst
@@ -84,7 +84,7 @@ mon_osd_report_timeout). ::
 
	$ ceph pg mark_unfound_lost revert
 
 Revert "lost" objects to their prior state, either a previous version
-or delete them if they were just created. ::
+or delete them if they were just created.
 
 OSD subsystem
diff --git a/doc/dev/kernel-client-troubleshooting.rst b/doc/dev/kernel-client-troubleshooting.rst
index 03ba067eba04..59e476148e5b 100644
--- a/doc/dev/kernel-client-troubleshooting.rst
+++ b/doc/dev/kernel-client-troubleshooting.rst
@@ -1,13 +1,13 @@
-============
+====================================
 Kernel client troubleshooting (FS)
-============
+====================================
 
 If there is an issue with the cephfs kernel client, the most important
 thing is figuring out whether the problem is with the client or the
 MDS. Generally, this is easy to work out. If the kernel client broke
 directly, there will be output in dmesg. Collect it and any appropriate
 kernel state. If the problem is with the MDS, there will be hung
 requests that the client
-is waiting on. Look in /sys/kernel/debug/ceph/*/ and cat the mdsc file to
+is waiting on. Look in ``/sys/kernel/debug/ceph/*/`` and cat the ``mdsc`` file to
 get a listing of requests in progress. If one of them remains there, the MDS
 has probably "forgotten" it.
 
 We can get hints about what's going on by dumping the MDS cache:
diff --git a/doc/dev/peering.rst b/doc/dev/peering.rst
index 0113d6388b98..cf2c429f818c 100644
--- a/doc/dev/peering.rst
+++ b/doc/dev/peering.rst
@@ -86,12 +86,12 @@ Concepts
   osd map epoch by having the monitor set its *up_thru* in the osd
   map. This helps peering ignore previous *acting sets* for which
   peering never completed after certain sequences of failures, such as
-  the second interval below::
+  the second interval below:
 
-  - *acting set* = [A,B]
-  - *acting set* = [A]
-  - *acting set* = [] very shortly after (e.g., simultaneous failure, but staggered detection)
-  - *acting set* = [B] (B restarts, A does not)
+    - *acting set* = [A,B]
+    - *acting set* = [A]
+    - *acting set* = [] very shortly after (e.g., simultaneous failure, but staggered detection)
+    - *acting set* = [B] (B restarts, A does not)
 
 *last epoch clean*
   the last epoch at which all nodes in the *acting set*
diff --git a/doc/ops/manage/crush.rst b/doc/ops/manage/crush.rst
index eb5b16b442b0..3a84143507ee 100644
--- a/doc/ops/manage/crush.rst
+++ b/doc/ops/manage/crush.rst
@@ -1,9 +1,9 @@
+.. _adjusting-crush:
+
 =========================
  Adjusting the CRUSH map
 =========================
 
-.. _adjusting-crush:
-
 There are a few ways to adjust the crush map:
 
 * online, by issuing commands to the monitor
diff --git a/doc/ops/manage/failures/radosgw.rst b/doc/ops/manage/failures/radosgw.rst
index fd8a877c493e..0de2aa48de53 100644
--- a/doc/ops/manage/failures/radosgw.rst
+++ b/doc/ops/manage/failures/radosgw.rst
@@ -49,7 +49,7 @@
 will dump information about current in-progress requests with the
 RADOS cluster. This allows one to identify if any requests are blocked
 by a non-responsive ceph-osd. For example, one might see::
 
-{ "ops": [
+  { "ops": [
      { "tid": 1858,
        "pg": "2.d2041a48",
        "osd": 1,
diff --git a/doc/ops/manage/grow/osd.rst b/doc/ops/manage/grow/osd.rst
index d64f789fa80e..4d1b4dcf4817 100644
--- a/doc/ops/manage/grow/osd.rst
+++ b/doc/ops/manage/grow/osd.rst
@@ -9,18 +9,18 @@ Briefly...
 
 #. Allocate a new OSD id::
 
-   $ ceph osd create
-   123
+     $ ceph osd create
+     123
 
 #. Make sure ceph.conf is valid for the new OSD.
 
 #. Initialize osd data directory::
 
-   $ ceph-osd -i 123 --mkfs --mkkey
+     $ ceph-osd -i 123 --mkfs --mkkey
 
 #. Register the OSD authentication key::
 
-   $ ceph auth add osd.123 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd-data/123/keyring
+     $ ceph auth add osd.123 osd 'allow *' mon 'allow rwx' -i /var/lib/ceph/osd-data/123/keyring
 
 #. Adjust the CRUSH map to allocate data to the new device (see
    :ref:`adjusting-crush`).
@@ -34,10 +34,10 @@ Briefly...
 
 #. Remove it from the CRUSH map::
 
-   $ ceph osd crush remove osd.123
+     $ ceph osd crush remove osd.123
 
 #. Remove it from the osd map::
 
-   $ ceph osd rm 123
+     $ ceph osd rm 123
 
 See also :ref:`failures-osd`.
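The grow/osd.rst steps this patch touches can be read as one session. A minimal sketch, using only the commands that appear in the patch; the OSD id 123 is whatever `ceph osd create` actually prints, and the keyring path is the one from the patched docs. It requires a running cluster with cephx enabled, so treat it as an illustration, not a tested script:

```shell
# Add an OSD (per doc/ops/manage/grow/osd.rst as patched above).

# 1. Allocate a new OSD id; the command prints the id (e.g. 123).
ceph osd create

# 2. Ensure ceph.conf is valid for the new OSD, then initialize its
#    data directory and generate a key.
ceph-osd -i 123 --mkfs --mkkey

# 3. Register the OSD's authentication key with the monitors.
ceph auth add osd.123 osd 'allow *' mon 'allow rwx' \
    -i /var/lib/ceph/osd-data/123/keyring

# 4. Adjust the CRUSH map so data is allocated to the new device,
#    then start the daemon.

# Removal is the reverse: take the OSD out of CRUSH first, then
# out of the osd map.
ceph osd crush remove osd.123
ceph osd rm 123
```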