From f488908c795a11ddf89ca808365e140cd84e595a Mon Sep 17 00:00:00 2001
From: Tommi Virtanen
Date: Wed, 21 Sep 2011 14:55:11 -0700
Subject: [PATCH] doc: Content for Getting Started with cephfs and rbd.

Signed-off-by: Tommi Virtanen
---
 doc/architecture.rst      |  2 +
 doc/index.rst             |  2 +
 doc/ops/manage/cephfs.rst |  7 ++++
 doc/start/block.rst       | 84 ++++++++++++++++++++++++++++++++++++++-
 doc/start/filesystem.rst  | 70 +++++++++++++++++++++++++++++++-
 5 files changed, 163 insertions(+), 2 deletions(-)

diff --git a/doc/architecture.rst b/doc/architecture.rst
index 49d4b10e2a3d6..c9705b4bdd8c3 100644
--- a/doc/architecture.rst
+++ b/doc/architecture.rst
@@ -148,6 +148,8 @@ access control, and so on.
 
 .. _RESTful: http://en.wikipedia.org/wiki/RESTful
 
+.. _rbd:
+
 Rados Block Device (RBD)
 ========================
 
diff --git a/doc/index.rst b/doc/index.rst
index b77b2b6e3ee46..2b3a4db691389 100644
--- a/doc/index.rst
+++ b/doc/index.rst
@@ -56,6 +56,8 @@ and ``btrfs``, and make sure you are running the latest Linux kernel.
 Radosgw is still going through heavy development, but it will likely
 mature next.
 
+.. _cfuse-kernel-tradeoff:
+
 The Ceph filesystem is functionally fairly complete, but has not been
 tested well enough at scale and under load yet. Multi-master MDS is
 still problematic and we recommend running just one active MDS
diff --git a/doc/ops/manage/cephfs.rst b/doc/ops/manage/cephfs.rst
index 7c460e9649498..564fa1fda0c65 100644
--- a/doc/ops/manage/cephfs.rst
+++ b/doc/ops/manage/cephfs.rst
@@ -2,15 +2,22 @@
  Managing Cephfs
 =================
 
+.. _mounting:
+
 Mounting
 ========
 
+
 Kernel client
 -------------
 
+.. todo:: one time, fstab
+
 FUSE
 ----
 
+.. todo:: one time, fstab
+
 Using custom pools for subtrees
 ===============================
 
diff --git a/doc/start/block.rst b/doc/start/block.rst
index 388368f10ef08..01f866c0d4a88 100644
--- a/doc/start/block.rst
+++ b/doc/start/block.rst
@@ -2,4 +2,86 @@
  Starting to use RBD
 =====================
 
-.. todo:: write me
+Introduction
+============
+
+`RBD` is the block device component of Ceph. It provides a block
+device interface to a Linux machine, while striping the data across
+multiple `RADOS` objects for improved performance. For more
+information, see :ref:`rbd`.
+
+
+Installation
+============
+
+To use `RBD`, you need to install a Ceph cluster. Follow the
+instructions in :doc:`/ops/install/index`. Continue with these
+instructions once you have a healthy cluster running.
+
+
+Setup
+=====
+
+The default `pool` used by `RBD` is called ``rbd``. It is created for
+you as part of the installation. If you wish to use multiple pools,
+for example for access control, see :ref:`create-new-pool`.
+
+First, we need a ``client`` key that is authorized to access the right
+pool. Follow the instructions in :ref:`add-new-key`. Let's set the
+``id`` of the key to be ``bar``. You could set up one key per machine
+using `RBD`, or let them share a single key; your call. Make sure the
+keyring containing the new key is available on the machine.
+
+Then, authorize the key to access the new pool. Follow the
+instructions in :ref:`auth-pool`.
+
+
+Usage
+=====
+
+`RBD` can be accessed in two ways:
+
+- as a block device on a Linux machine
+- via the ``rbd`` network storage driver in Qemu/KVM
+
+
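+Before walking through those two ways, here is a rough sketch of what
+the key setup described under Setup might look like on the command
+line. This is a sketch only: the capability strings, the keyring path,
+and the use of ``cauthtool`` together with ``ceph auth add`` are
+assumptions for illustration, and the sections referenced above remain
+the authoritative instructions::
+
+    cauthtool --create-keyring --gen-key --name=client.bar \
+        --cap mon 'allow r' --cap osd 'allow rw pool=rbd' \
+        /etc/ceph/client.bar.keyring
+    ceph auth add client.bar -i /etc/ceph/client.bar.keyring
+
+Copy the keyring to every machine that will use the ``bar`` key.
+
+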
+.. rubric:: Example: As a block device
+
+Using the ``client.bar`` key you set up earlier, we can create an RBD
+image called ``tengigs``::
+
+    rbd --name=client.bar create --size=10240 tengigs
+
+And then make that visible as a block device::
+
+    touch secretfile
+    chmod go= secretfile
+    cauthtool --name=bar --print-key /etc/ceph/client.bar.keyring >secretfile
+    rbd map tengigs --user bar --secret secretfile
+
+.. todo:: the secretfile part is really clumsy
+
+For more information, see :doc:`rbd `\(8).
+
+
+.. rubric:: Example: As a Qemu/KVM storage driver via Libvirt
+
+You'll need ``kvm`` v0.15 and ``libvirt`` v0.8.7 or newer.
+
+Create the RBD image as above, and then refer to it in the ``libvirt``
+virtual machine configuration::
+
+    ...
+
diff --git a/doc/start/filesystem.rst b/doc/start/filesystem.rst
--- a/doc/start/filesystem.rst
+++ b/doc/start/filesystem.rst
+Follow the instructions in :ref:`mounting`.
+
+Once you have the filesystem mounted, you can use it like any other
+filesystem. The changes you make on one client will be visible to
+other clients that have mounted the same filesystem.
+
+You can now use snapshots, automatic disk usage tracking, and all
+other features `Ceph DFS` has. All read and write operations will be
+automatically distributed across your whole storage cluster, giving
+you the best performance available.
+
+.. todo:: links for snapshots, disk usage
+
+You can use :doc:`cephfs `\(8) to interact with
+``cephfs`` internals.
+
+
+.. rubric:: Example: Home directories
+
+If you locate UNIX user account home directories under a Ceph
+filesystem mountpoint, the same files will be available from all
+machines set up this way.
+
+Users can move between hosts, or even use them simultaneously, and
+always access the same files.
+
+
+.. rubric:: Example: HPC
+
+In an HPC (High Performance Computing) scenario, hundreds or thousands
+of machines could all mount the Ceph filesystem, and worker processes
+on all of the machines could then access the same files for
+input/output.
+
+.. todo:: point to the lazy io optimization
-- 
2.39.5
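
The filesystem walkthrough above defers the actual mount step to
:ref:`mounting`, which this patch still leaves as a ``.. todo``. As a
rough sketch only, with a made-up monitor address (``192.168.0.1``),
mount point, and ``admin`` credentials, and with mount options that
may vary between Ceph versions, a kernel-client mount might look
like::

    mkdir -p /mnt/ceph
    mount -t ceph 192.168.0.1:6789:/ /mnt/ceph \
        -o name=admin,secretfile=/etc/ceph/admin.secret

The FUSE client of this era, ``cfuse``, would point at the same
monitor in a similar way, for example ``cfuse -m 192.168.0.1:6789
/mnt/ceph``.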