=================
Troubleshooting
=================

Slow/stuck operations
=====================

If you are experiencing apparent hung operations, the first task is to identify
where the problem is occurring: in the client, the MDS, or the network connecting
them. Start by looking to see if either side has stuck operations
(:ref:`slow_requests`, below), and narrow it down from there.

RADOS Health
============

If part of the CephFS metadata or data pools is unavailable and CephFS isn't
responding, it is probably because RADOS itself is unhealthy. Resolve those
problems first (:doc:`/rados/troubleshooting`).
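
For example, you can get a quick picture of RADOS health from any node with an
admin keyring::

  ceph status
  ceph health detail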

The MDS
=======

If an operation is hung inside the MDS, it will eventually show up in ``ceph health``,
identifying "slow requests are blocked". It may also identify clients as
"failing to respond" or misbehaving in other ways. If the MDS identifies
specific clients as misbehaving, you should investigate why they are doing so.
Otherwise, you may have discovered a new bug and should report it to the
developers!
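
To see which clients are currently connected to an MDS (using the same
``mds.<name>`` placeholder as in the command below), you can list its sessions
from the MDS host::

  ceph daemon mds.<name> session ls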

.. _slow_requests:

Slow requests (MDS)
-------------------
You can list current operations via the admin socket by running::

  ceph daemon mds.<name> dump_ops_in_flight

from the MDS host. Identify the stuck commands and examine why they are stuck.
Usually the last "event" will have been an attempt to gather locks, or sending
the operation off to the MDS log. If there are no slow requests reported on the
MDS, either the client has a problem or its requests aren't reaching the MDS.

ceph-fuse debugging
===================

ceph-fuse also supports dump_ops_in_flight. See if it has any and where they are
stuck.
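
For example, if ceph-fuse has registered an admin socket (the socket path below
is only illustrative; it depends on your client name and run directory)::

  ceph daemon /var/run/ceph/ceph-client.admin.asok dump_ops_in_flight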

Debug output
------------

To get more debugging information from ceph-fuse, try running in the foreground
with logging to the console (``-d``), enabling client debug
(``--debug-client=20``), and enabling prints for each message sent
(``--debug-ms=1``).
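
A minimal example invocation (the mount point is a placeholder)::

  ceph-fuse -d --debug-client=20 --debug-ms=1 /mnt/cephfs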

Kernel mount debugging
======================

Slow requests
-------------
Unfortunately the kernel client does not support the admin socket, but it has
similar (if limited) interfaces if your kernel has debugfs enabled. There will
be a directory under ``/sys/kernel/debug/ceph/`` for each client instance, and
the files in it (most notably ``mdsc`` and ``osdc``) list the requests
currently in flight to the MDS and the OSDs.
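
For example (the wildcard stands in for the per-client directory, whose name
includes the cluster FSID and client ID)::

  cat /sys/kernel/debug/ceph/*/mdsc
  cat /sys/kernel/debug/ceph/*/osdc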

If the client has been forcibly disconnected and the file system remounted,
outstanding IO may or may not be satisfied. In these cases, you may need to
reboot your client system.

You can identify you are in this situation if dmesg/kern.log report something like::

  Jul 20 08:14:38 teuthology kernel: [3677601.123718] ceph: mds0 closed our session
  Jul 20 08:14:38 teuthology kernel: [3677601.128019] ceph: mds0 reconnect start
  Jul 20 08:14:39 teuthology kernel: [3677602.093378] ceph: mds0 reconnect denied
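
A quick way to check for such messages is::

  dmesg | grep -i ceph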

A mount error 12 with ``cannot allocate memory`` usually occurs if you have a
version mismatch between the :term:`Ceph Client` version and the :term:`Ceph
Storage Cluster` version. Check the versions using::

  ceph -v

If the Ceph Client is behind the Ceph cluster, try to upgrade it::

  sudo apt-get update && sudo apt-get install ceph-common

CephFS Snapshots
================

CephFS supports snapshots, generally created by invoking mkdir against the
(hidden, special) .snap directory. Because buffered data is flushed out lazily
rather than as part of snapshot creation, making a snapshot is very fast.
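
For example, from a client with the file system mounted (the mount point,
directory, and snapshot name below are placeholders)::

  mkdir /mnt/cephfs/mydir/.snap/my_snapshot
  ls /mnt/cephfs/mydir/.snap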

Important Data Structures
-------------------------

* SnapRealm: A `SnapRealm` is created whenever you create a snapshot at a new
  point in the hierarchy (or, when a snapshotted inode is moved outside of its
  parent snapshot). SnapRealms contain an `sr_t srnode` and links to
  `past_parents`.
* snaplink_t: A `snaplink_t` stores the inode number and first `snapid` of the
  inode/snapshot referenced.

Creating a snapshot
-------------------
To make a snapshot on directory "/1/2/3/foo", the client invokes "mkdir" on the
"/1/2/3/foo/.snap" directory. This is transmitted to the MDS Server as a
CEPH_MDS_OP_MKSNAP-tagged `MClientRequest`, and initially handled in
`Server::handle_client_mksnap()`. Note that flushing out buffered client data
*is not* a synchronous part of the snapshot creation!

Updating a snapshot
-------------------
If you delete a snapshot, or move data out of the parent snapshot's hierarchy,
a similar process is followed. Extra code paths check to see if we can break
the `past_parent` links between SnapRealms, or eliminate them entirely.

Generating a SnapContext
------------------------
A RADOS `SnapContext` consists of a snapshot sequence ID (`snapid`) and all
the snapshot IDs that an object is already part of. To generate that list, we
generate a list of all `snapids` associated with the SnapRealm and all its
`past_parents`.

Storing snapshot data
---------------------
File data is stored in RADOS "self-managed" snapshots. Clients are careful to
use the correct `SnapContext` when writing file data to the OSDs.
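
As a purely illustrative check (the pool name and object name below are
hypothetical; CephFS data objects are named after the file's inode number), you
can list the self-managed snapshots a given RADOS object participates in::

  rados -p cephfs_data listsnaps 10000000001.00000000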

Storing snapshot metadata
-------------------------
Snapshotted dentries (and their inodes) are stored in-line as part of the
directory they were in at the time of the snapshot. *All dentries* include a
`first` and `last` snapid for which they are valid. (Non-snapshotted dentries
will have their `last` set to CEPH_NOSNAP).

Snapshot writeback
------------------
There is a great deal of code to handle writeback efficiently. When a Client
receives an `MClientSnap` message, it updates the local `SnapRealm`
representation and its links to specific `Inodes`, and generates a `CapSnap`
for the `Inode`, which is flushed out as part of capability writeback. In the
MDS, dentries with outstanding `CapSnap` data are kept pinned and in the
journal.

Deleting snapshots
------------------
Snapshots are deleted by invoking "rmdir" on the ".snap" directory they are
rooted in. (Attempts to delete a directory which roots snapshots *will fail*;
you must delete the snapshots first.) Once deleted, they are entered into the
OSDMap's list of deleted snapshots and the file data is removed by the OSDs.
Metadata is cleaned up as the directory objects are read in and written back
out again.
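
For example (again with placeholder paths)::

  rmdir /mnt/cephfs/mydir/.snap/my_snapshot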

Hard links
----------
Hard links do not interact well with snapshots. A file is snapshotted when its
primary link is part of a SnapRealm; other links *will not* preserve data.
Generally the location where a file was first created will be its primary link,
but if the original link has been deleted or moved, the primary link may not be
where you expect it.

You may need to adjust your configuration file if you used something other than
default paths.
A typical Ceph Object Gateway configuration file for an Apache-based deployment
looks similar to the following:

On Red Hat Enterprise Linux::