From b016f846825ea28466a062b08dc2b75a83265727 Mon Sep 17 00:00:00 2001
From: john
Date: Mon, 18 Aug 2014 16:57:25 +0100
Subject: [PATCH] doc: add notes on using "ceph fs new"

Signed-off-by: John Spray
---
 doc/cephfs/createfs.rst | 62 +++++++++++++++++++++++++++++++++++++++++
 doc/cephfs/index.rst    |  3 +-
 2 files changed, 64 insertions(+), 1 deletion(-)
 create mode 100644 doc/cephfs/createfs.rst

diff --git a/doc/cephfs/createfs.rst b/doc/cephfs/createfs.rst
new file mode 100644
index 000000000000..5f791bcca78c
--- /dev/null
+++ b/doc/cephfs/createfs.rst
@@ -0,0 +1,62 @@
+========================
+Create a Ceph filesystem
+========================
+
+.. tip::
+
+    The ``ceph fs new`` command was introduced in Ceph 0.84. Prior to this release,
+    no manual steps were required to create a filesystem, and pools named ``data`` and
+    ``metadata`` existed by default.
+
+    The Ceph command line now includes commands for creating and removing filesystems,
+    but at present only one filesystem may exist at a time.
+
+A Ceph filesystem requires at least two RADOS pools, one for data and one for metadata.
+When configuring these pools, you might consider:
+
+- Using a higher replication level for the metadata pool, as any data
+  loss in this pool can render the whole filesystem inaccessible.
+- Using lower-latency storage such as SSDs for the metadata pool, as this
+  will directly affect the observed latency of filesystem operations
+  on clients.
+
+Refer to :doc:`/rados/operations/pools` to learn more about managing pools. For
+example, to create two pools with default settings for use with a filesystem, you
+might run the following commands:
+
+.. code:: bash
+
+    $ ceph osd pool create cephfs_data <pg_num>
+    $ ceph osd pool create cephfs_metadata <pg_num>
+
+Once the pools are created, you may enable the filesystem using the ``fs new`` command:
+
+.. code:: bash
+
+    $ ceph fs new <fs_name> <metadata> <data>
+
+For example:
+
+.. code:: bash
+
+    $ ceph fs new cephfs cephfs_metadata cephfs_data
+    $ ceph fs ls
+    name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
+
+Once a filesystem has been created, your MDS(s) will be able to enter
+an *active* state. For example, in a single MDS system:
+
+.. code:: bash
+
+    $ ceph mds stat
+    e5: 1/1/1 up {0=a=up:active}
+
+Once the filesystem is created and the MDS is active, you are ready to mount
+the filesystem:
+
+.. toctree::
+    :maxdepth: 1
+
+    Mount Ceph FS <kernel>
+    Mount Ceph FS as FUSE <fuse>
+
diff --git a/doc/cephfs/index.rst b/doc/cephfs/index.rst
index 8c6da5ec09d2..7821701f5e0b 100644
--- a/doc/cephfs/index.rst
+++ b/doc/cephfs/index.rst
@@ -54,13 +54,14 @@ least one :term:`Ceph Metadata Server` running.

Step 2: Mount Ceph FS

 Once you have a healthy Ceph Storage Cluster with at least
-one Ceph Metadata Server, you may mount your Ceph Filesystem.
+one Ceph Metadata Server, you may create and mount your Ceph Filesystem.
 Ensure that your client has network connectivity and the proper authentication keyring.
 
 .. toctree::
     :maxdepth: 1
 
+    Create Ceph FS <createfs>
     Mount Ceph FS <kernel>
     Mount Ceph FS as FUSE <fuse>
     Mount Ceph FS in fstab <fstab>
-- 
2.47.3
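
A note on the pool-configuration advice in createfs.rst above (higher replication
level and lower-latency storage for the metadata pool): this can be acted on with
the standard ``ceph osd pool`` commands. The sketch below is only illustrative; the
pool name ``cephfs_metadata`` comes from the example in the new page, while the PG
count of 64 and the replica counts are assumptions, not recommendations:

.. code:: bash

    # Create the metadata pool; a pg_num of 64 is just an illustrative value.
    $ ceph osd pool create cephfs_metadata 64

    # Raise the replica count on the metadata pool (3 copies here is an example).
    $ ceph osd pool set cephfs_metadata size 3

    # Keep the pool writeable while at least two replicas are available.
    $ ceph osd pool set cephfs_metadata min_size 2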
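
Similarly, once ``ceph mds stat`` reports an active MDS, the filesystem can be
mounted with either the kernel client or ceph-fuse, as covered by the pages the
toctree links to. A minimal sketch, assuming a monitor at ``192.168.0.1:6789`` and
an ``admin`` key already present on the client (both are placeholder assumptions):

.. code:: bash

    # Kernel client: monitor address, user name and secret file are assumptions.
    $ sudo mkdir -p /mnt/cephfs
    $ sudo mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

    # FUSE client alternative.
    $ sudo ceph-fuse -m 192.168.0.1:6789 /mnt/cephfs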