From 52638b48bdb3a3b7a3aadf075aef6da6e3aa4503 Mon Sep 17 00:00:00 2001
From: Kefu Chai
Date: Thu, 9 Apr 2020 21:25:39 +0800
Subject: [PATCH] doc: use plantweb as fallback of sphinx-ditaa

RTD does not support installing system packages; the only ways to install
dependencies are setuptools and pip, while ditaa is a tool written in Java.
So we need a native Python tool that can render ditaa images. plantweb is
able to use a web service for rendering ditaa diagrams, so let's use it as
a fallback when "ditaa" is not around.

Also start a new line after the directive, otherwise the plantweb server
will return 500 when it sees the diagram.

Signed-off-by: Kefu Chai
(cherry picked from commit 0cb56e0f13dc57167271ec7f20f11421416196a2)
---
 .readthedocs.yml | 1 +
 admin/doc-read-the-docs.txt | 1 +
 doc/architecture.rst | 60 ++++++++++++++-----
 doc/cephfs/cephfs-io-path.rst | 1 +
 doc/conf.py | 16 ++++-
 doc/dev/deduplication.rst | 3 +-
 doc/dev/msgr2.rst | 40 +++++++++----
 doc/install/ceph-deploy/quick-ceph-deploy.rst | 1 +
 doc/install/ceph-deploy/quick-cephfs.rst | 1 +
 doc/install/ceph-deploy/quick-common.rst | 3 +-
 doc/install/install-vm-cloud.rst | 4 +-
 doc/install/manual-deployment.rst | 1 +
 doc/install/manual-freebsd-deployment.rst | 1 +
 doc/rados/api/librados-intro.rst | 10 ++--
 doc/rados/configuration/mon-config-ref.rst | 7 +--
 .../configuration/mon-osd-interaction.rst | 15 +++--
 .../configuration/network-config-ref.rst | 2 +-
 doc/rados/operations/cache-tiering.rst | 2 +-
 doc/rados/operations/monitoring-osd-pg.rst | 12 ++--
 doc/rados/operations/placement-groups.rst | 1 -
 doc/rados/operations/user-management.rst | 3 +-
 doc/radosgw/admin.rst | 3 +-
 doc/radosgw/index.rst | 4 +-
 doc/radosgw/swift/tutorial.rst | 4 +-
 doc/radosgw/vault.rst | 4 +-
 doc/rbd/index.rst | 4 +-
 doc/rbd/libvirt.rst | 4 +-
 doc/rbd/qemu-rbd.rst | 4 +-
 doc/rbd/rbd-cloudstack.rst | 4 +-
 doc/rbd/rbd-kubernetes.rst | 3 +-
 doc/rbd/rbd-live-migration.rst | 4 +-
 doc/rbd/rbd-openstack.rst | 4 +-
 doc/rbd/rbd-persistent-cache.rst | 4 +-
 doc/rbd/rbd-snapshot.rst | 12 +++-
 doc/start/intro.rst | 4 +-
 doc/start/quick-rbd.rst | 3 +-
 36 files changed, 183 insertions(+), 67 deletions(-)
 create mode 100644 admin/doc-read-the-docs.txt

diff --git a/.readthedocs.yml b/.readthedocs.yml index 61c710b33b152..24815ce2466e4 100644 --- a/.readthedocs.yml +++ b/.readthedocs.yml @@ -10,6 +10,7 @@ python: version: 3 install: - requirements: admin/doc-requirements.txt + - requirements: admin/doc-read-the-docs.txt sphinx: builder: dirhtml configuration: doc/conf.py diff --git a/admin/doc-read-the-docs.txt b/admin/doc-read-the-docs.txt new file mode 100644 index 0000000000000..bcc77ccffb0a3 --- /dev/null +++ b/admin/doc-read-the-docs.txt @@ -0,0 +1 @@ +plantweb diff --git a/doc/architecture.rst b/doc/architecture.rst index d89ddb5b85ee5..c1f7f5ff9c4b7 100644 --- a/doc/architecture.rst +++ b/doc/architecture.rst @@ -27,7 +27,9 @@ A Ceph Storage Cluster consists of two types of daemons: - :term:`Ceph Monitor` - :term:`Ceph OSD Daemon` -.. ditaa:: +---------------+ +---------------+ +.. ditaa:: + + +---------------+ +---------------+ | OSDs | | Monitors | +---------------+ +---------------+ @@ -56,7 +58,9 @@ comes through a :term:`Ceph Block Device`, :term:`Ceph Object Storage`, the file in a filesystem, which is stored on an :term:`Object Storage Device`. Ceph OSD Daemons handle the read/write operations on the storage disks. -.. ditaa:: /-----\ +-----+ +-----+ +..
ditaa:: + + /-----\ +-----+ +-----+ | obj |------>| {d} |------>| {s} | \-----/ +-----+ +-----+ @@ -70,7 +74,9 @@ attributes such as the file owner, created date, last modified date, and so forth. -.. ditaa:: /------+------------------------------+----------------\ +.. ditaa:: + + /------+------------------------------+----------------\ | ID | Binary Data | Metadata | +------+------------------------------+----------------+ | 1234 | 0101010101010100110101010010 | name1 = value1 | @@ -229,7 +235,9 @@ the client and the monitor share a secret key. .. note:: The ``client.admin`` user must provide the user ID and secret key to the user in a secure manner. -.. ditaa:: +---------+ +---------+ +.. ditaa:: + + +---------+ +---------+ | Client | | Monitor | +---------+ +---------+ | request to | @@ -252,7 +260,9 @@ user's secret key and transmits it back to the client. The client decrypts the ticket and uses it to sign requests to OSDs and metadata servers throughout the cluster. -.. ditaa:: +---------+ +---------+ +.. ditaa:: + + +---------+ +---------+ | Client | | Monitor | +---------+ +---------+ | authenticate | @@ -283,7 +293,9 @@ machine and the Ceph servers. Each message sent between a client and server, subsequent to the initial authentication, is signed using a ticket that the monitors, OSDs and metadata servers can verify with their shared secret. -.. ditaa:: +---------+ +---------+ +-------+ +-------+ +.. ditaa:: + + +---------+ +---------+ +-------+ +-------+ | Client | | Monitor | | MDS | | OSD | +---------+ +---------+ +-------+ +-------+ | request to | | | @@ -393,7 +405,8 @@ ability to leverage this computing power leads to several major benefits: and tertiary OSDs (as many OSDs as additional replicas), and responds to the client once it has confirmed the object was stored successfully. -.. ditaa:: +.. ditaa:: + +----------+ | Client | | | @@ -443,7 +456,8 @@ Ceph Clients retrieve a `Cluster Map`_ from a Ceph Monitor, and write objects to pools. The pool's ``size`` or number of replicas, the CRUSH rule and the number of placement groups determine how Ceph will place the data. -.. ditaa:: +.. ditaa:: + +--------+ Retrieves +---------------+ | Client |------------>| Cluster Map | +--------+ +---------------+ @@ -488,7 +502,8 @@ rebalance dynamically when new Ceph OSD Daemons and the underlying OSD devices come online. The following diagram depicts how CRUSH maps objects to placement groups, and placement groups to OSDs. -.. ditaa:: +.. ditaa:: + /-----\ /-----\ /-----\ /-----\ /-----\ | obj | | obj | | obj | | obj | | obj | \-----/ \-----/ \-----/ \-----/ \-----/ @@ -614,7 +629,8 @@ and each OSD gets some added capacity, so there are no load spikes on the new OSD after rebalancing is complete. -.. ditaa:: +.. ditaa:: + +--------+ +--------+ Before | OSD 1 | | OSD 2 | +--------+ +--------+ @@ -685,6 +701,7 @@ name. Chunk 1 contains ``ABC`` and is stored on **OSD5** while chunk 4 contains .. ditaa:: + +-------------------+ name | NYAN | +-------------------+ @@ -739,6 +756,7 @@ three chunks are read: **OSD2** was the slowest and its chunk was not taken into account. .. ditaa:: + +-------------------+ name | NYAN | +-------------------+ @@ -804,6 +822,7 @@ version 1). .. ditaa:: + Primary OSD +-------------+ @@ -934,6 +953,7 @@ object can be removed: ``D1v1`` on **OSD 1**, ``D2v1`` on **OSD 2** and ``C1v1`` on **OSD 3**. .. 
ditaa:: + Primary OSD +-------------+ @@ -972,6 +992,7 @@ to be available on all OSDs in the previous acting set ) is ``1,1`` and that will be the head of the new authoritative log. .. ditaa:: + +-------------+ | OSD 1 | | (down) | @@ -1017,6 +1038,7 @@ the erasure coding library during scrubbing and stored on the new primary .. ditaa:: + Primary OSD +-------------+ @@ -1068,7 +1090,8 @@ tier. So the cache tier and the backing storage tier are completely transparent to Ceph clients. -.. ditaa:: +.. ditaa:: + +-------------+ | Ceph Client | +------+------+ @@ -1150,7 +1173,8 @@ Cluster. Ceph packages this functionality into the ``librados`` library so that you can create your own custom Ceph Clients. The following diagram depicts the basic architecture. -.. ditaa:: +.. ditaa:: + +---------------------------------+ | Ceph Storage Cluster Protocol | | (librados) | @@ -1193,7 +1217,9 @@ notification. This enables a client to use any object as a synchronization/communication channel. -.. ditaa:: +----------+ +----------+ +----------+ +---------------+ +.. ditaa:: + + +----------+ +----------+ +----------+ +---------------+ | Client 1 | | Client 2 | | Client 3 | | OSD:Object ID | +----------+ +----------+ +----------+ +---------------+ | | | | @@ -1269,7 +1295,8 @@ take maximum advantage of Ceph's ability to distribute data across placement groups, and consequently doesn't improve performance very much. The following diagram depicts the simplest form of striping: -.. ditaa:: +.. ditaa:: + +---------------+ | Client Data | | Format | @@ -1327,7 +1354,8 @@ set (``object set 2`` in the following diagram), and begins writing to the first stripe (``stripe unit 16``) in the first object in the new object set (``object 4`` in the diagram below). -.. ditaa:: +.. ditaa:: + +---------------+ | Client Data | | Format | @@ -1443,6 +1471,7 @@ and high availability. The following diagram depicts the high-level architecture. .. ditaa:: + +--------------+ +----------------+ +-------------+ | Block Device | | Object Storage | | CephFS | +--------------+ +----------------+ +-------------+ @@ -1527,6 +1556,7 @@ Cluster. Ceph Clients mount a CephFS filesystem as a kernel object or as a Filesystem in User Space (FUSE). .. ditaa:: + +-----------------------+ +------------------------+ | CephFS Kernel Object | | CephFS FUSE | +-----------------------+ +------------------------+ diff --git a/doc/cephfs/cephfs-io-path.rst b/doc/cephfs/cephfs-io-path.rst index 61ce379f5a649..8c7810ba0a4ea 100644 --- a/doc/cephfs/cephfs-io-path.rst +++ b/doc/cephfs/cephfs-io-path.rst @@ -19,6 +19,7 @@ client cache. .. 
ditaa:: + +---------------------+ | Application | +---------------------+ diff --git a/doc/conf.py b/doc/conf.py index d69b8f9ab7213..92d5329e7f159 100644 --- a/doc/conf.py +++ b/doc/conf.py @@ -1,3 +1,4 @@ +import shutil import sys import os @@ -53,12 +54,19 @@ extensions = [ 'sphinx_autodoc_typehints', 'sphinx.ext.graphviz', 'sphinx.ext.todo', - 'sphinxcontrib.ditaa', 'breathe', 'edit_on_github', 'ceph_releases', ] -ditaa = 'ditaa' +ditaa = shutil.which("ditaa") +if ditaa is not None: + extensions += ['sphinxcontrib.ditaa'] +else: + extensions += ['plantweb.directive'] + plantweb_defaults = { + 'engine': 'ditaa' + } + todo_include_todos = True top_level = os.path.dirname( @@ -87,6 +95,10 @@ edit_on_github_branch = 'master' # handles edit-on-github and old version warning display def setup(app): app.add_javascript('js/ceph.js') + if ditaa is None: + # add "ditaa" as an alias of "diagram" + from plantweb.directive import DiagramDirective + app.add_directive('ditaa', DiagramDirective) # mocking ceph_module offered by ceph-mgr. `ceph_module` is required by # mgr.mgr_module diff --git a/doc/dev/deduplication.rst b/doc/dev/deduplication.rst index ffb2a283b838c..def15955e4379 100644 --- a/doc/dev/deduplication.rst +++ b/doc/dev/deduplication.rst @@ -50,7 +50,8 @@ More details in https://ieeexplore.ieee.org/document/8416369 Design ====== -.. ditaa:: +.. ditaa:: + +-------------+ | Ceph Client | +------+------+ diff --git a/doc/dev/msgr2.rst b/doc/dev/msgr2.rst index 0eed9e67fa26b..585dc34d208cd 100644 --- a/doc/dev/msgr2.rst +++ b/doc/dev/msgr2.rst @@ -74,7 +74,9 @@ If the remote party advertises required features we don't support, we can disconnect. -.. ditaa:: +---------+ +--------+ +.. ditaa:: + + +---------+ +--------+ | Client | | Server | +---------+ +--------+ | send banner | @@ -291,7 +293,9 @@ Authentication Example of authentication phase interaction when the client uses an allowed authentication method: -.. ditaa:: +---------+ +--------+ +.. ditaa:: + + +---------+ +--------+ | Client | | Server | +---------+ +--------+ | auth request | @@ -308,7 +312,9 @@ allowed authentication method: Example of authentication phase interaction when the client uses a forbidden authentication method as the first attempt: -.. ditaa:: +---------+ +--------+ +.. ditaa:: + + +---------+ +--------+ | Client | | Server | +---------+ +--------+ | auth request | @@ -615,7 +621,9 @@ Example of failure scenarios: * First client's client_ident message is lost, and then client reconnects. -.. ditaa:: +---------+ +--------+ +.. ditaa:: + + +---------+ +--------+ | Client | | Server | +---------+ +--------+ | | @@ -633,7 +641,9 @@ Example of failure scenarios: * Server's server_ident message is lost, and then client reconnects. -.. ditaa:: +---------+ +--------+ +.. ditaa:: + + +---------+ +--------+ | Client | | Server | +---------+ +--------+ | | @@ -654,7 +664,9 @@ Example of failure scenarios: * Server's server_ident message is lost, and then server reconnects. -.. ditaa:: +---------+ +--------+ +.. ditaa:: + + +---------+ +--------+ | Client | | Server | +---------+ +--------+ | | @@ -678,7 +690,9 @@ Example of failure scenarios: * Connection failure after session is established, and then client reconnects. -.. ditaa:: +---------+ +--------+ +.. ditaa:: + + +---------+ +--------+ | Client | | Server | +---------+ +--------+ | | @@ -696,7 +710,9 @@ Example of failure scenarios: * Connection failure after session is established because server reset, and then client reconnects. -.. 
ditaa:: +---------+ +--------+ +.. ditaa:: + + +---------+ +--------+ | Client | | Server | +---------+ +--------+ | | @@ -722,7 +738,9 @@ of the connection. * Connection failure after session is established because client reset, and then client reconnects. -.. ditaa:: +---------+ +--------+ +.. ditaa:: + + +---------+ +--------+ | Client | | Server | +---------+ +--------+ | | @@ -789,7 +807,9 @@ Example of protocol interaction (WIP) _____________________________________ -.. ditaa:: +---------+ +--------+ +.. ditaa:: + + +---------+ +--------+ | Client | | Server | +---------+ +--------+ | send banner | diff --git a/doc/install/ceph-deploy/quick-ceph-deploy.rst b/doc/install/ceph-deploy/quick-ceph-deploy.rst index c4589c7b3d3c9..03a59636b7602 100644 --- a/doc/install/ceph-deploy/quick-ceph-deploy.rst +++ b/doc/install/ceph-deploy/quick-ceph-deploy.rst @@ -163,6 +163,7 @@ cluster. Then add a Ceph Monitor and Ceph Manager to ``node2`` and ``node3`` to improve reliability and availability. .. ditaa:: + /------------------\ /----------------\ | ceph-deploy | | node1 | | Admin Node | | cCCC | diff --git a/doc/install/ceph-deploy/quick-cephfs.rst b/doc/install/ceph-deploy/quick-cephfs.rst index e8ca28f86ee16..9897219575c77 100644 --- a/doc/install/ceph-deploy/quick-cephfs.rst +++ b/doc/install/ceph-deploy/quick-cephfs.rst @@ -44,6 +44,7 @@ For example:: Now, your Ceph cluster would look like this: .. ditaa:: + /------------------\ /----------------\ | ceph-deploy | | node1 | | Admin Node | | cCCC | diff --git a/doc/install/ceph-deploy/quick-common.rst b/doc/install/ceph-deploy/quick-common.rst index 915a7b886422d..25668f798d71e 100644 --- a/doc/install/ceph-deploy/quick-common.rst +++ b/doc/install/ceph-deploy/quick-common.rst @@ -1,4 +1,5 @@ -.. ditaa:: +.. ditaa:: + /------------------\ /-----------------\ | admin-node | | node1 | | +-------->+ cCCC | diff --git a/doc/install/install-vm-cloud.rst b/doc/install/install-vm-cloud.rst index 3e1db12370f48..876422865154e 100644 --- a/doc/install/install-vm-cloud.rst +++ b/doc/install/install-vm-cloud.rst @@ -9,7 +9,9 @@ Examples of VMs include: QEMU/KVM, XEN, VMWare, LXC, VirtualBox, etc. Examples of Cloud Platforms include OpenStack, CloudStack, OpenNebula, etc. -.. ditaa:: +---------------------------------------------------+ +.. ditaa:: + + +---------------------------------------------------+ | libvirt | +------------------------+--------------------------+ | diff --git a/doc/install/manual-deployment.rst b/doc/install/manual-deployment.rst index a42a80f83db56..103c4dcd58f14 100644 --- a/doc/install/manual-deployment.rst +++ b/doc/install/manual-deployment.rst @@ -18,6 +18,7 @@ OSD nodes. .. ditaa:: + /------------------\ /----------------\ | Admin Node | | node1 | | +-------->+ | diff --git a/doc/install/manual-freebsd-deployment.rst b/doc/install/manual-freebsd-deployment.rst index d4eb14718eb62..5f8f768c9f19c 100644 --- a/doc/install/manual-freebsd-deployment.rst +++ b/doc/install/manual-freebsd-deployment.rst @@ -22,6 +22,7 @@ OSD nodes. .. ditaa:: + /------------------\ /----------------\ | Admin Node | | node1 | | +-------->+ | diff --git a/doc/rados/api/librados-intro.rst b/doc/rados/api/librados-intro.rst index c63a255897c0f..7179438a84de4 100644 --- a/doc/rados/api/librados-intro.rst +++ b/doc/rados/api/librados-intro.rst @@ -15,7 +15,7 @@ the Ceph Storage Cluster: - The :term:`Ceph Monitor`, which maintains a master copy of the cluster map. - The :term:`Ceph OSD Daemon` (OSD), which stores data as objects on a storage node. -.. 
ditaa:: +.. ditaa:: +---------------------------------+ | Ceph Storage Cluster Protocol | | (librados) | @@ -165,7 +165,7 @@ placement group and `OSD`_ for locating the data. Then the client application can read or write data. The client app doesn't need to learn about the topology of the cluster directly. -.. ditaa:: +.. ditaa:: +--------+ Retrieves +---------------+ | Client |------------>| Cluster Map | +--------+ +---------------+ @@ -217,7 +217,8 @@ these capabilities. The following diagram provides a high-level flow for the initial connection. -.. ditaa:: +---------+ +---------+ +.. ditaa:: + +---------+ +---------+ | Client | | Monitor | +---------+ +---------+ | | @@ -521,7 +522,8 @@ functionality includes: - Snapshot pools, list snapshots, etc. -.. ditaa:: +---------+ +---------+ +---------+ +.. ditaa:: + +---------+ +---------+ +---------+ | Client | | Monitor | | OSD | +---------+ +---------+ +---------+ | | | diff --git a/doc/rados/configuration/mon-config-ref.rst b/doc/rados/configuration/mon-config-ref.rst index dbfc20b908404..e93cd28b7d785 100644 --- a/doc/rados/configuration/mon-config-ref.rst +++ b/doc/rados/configuration/mon-config-ref.rst @@ -34,8 +34,7 @@ Monitors can query the most recent version of the cluster map during sync operations. Ceph Monitors leverage the key/value store's snapshots and iterators (using leveldb) to perform store-wide synchronization. -.. ditaa:: - +.. ditaa:: /-------------\ /-------------\ | Monitor | Write Changes | Paxos | | cCCC +-------------->+ cCCC | @@ -505,7 +504,6 @@ Ceph Clients to read and write data. So the Ceph Storage Cluster's operating capacity is 95TB, not 99TB. .. ditaa:: - +--------+ +--------+ +--------+ +--------+ +--------+ +--------+ | Rack 1 | | Rack 2 | | Rack 3 | | Rack 4 | | Rack 5 | | Rack 6 | | cCCC | | cF00 | | cCCC | | cCCC | | cCCC | | cCCC | @@ -636,7 +634,8 @@ fallen behind the other monitors. The requester asks the leader to synchronize, and the leader tells the requester to synchronize with a provider. -.. ditaa:: +-----------+ +---------+ +----------+ +.. ditaa:: + +-----------+ +---------+ +----------+ | Requester | | Leader | | Provider | +-----------+ +---------+ +----------+ | | | diff --git a/doc/rados/configuration/mon-osd-interaction.rst b/doc/rados/configuration/mon-osd-interaction.rst index a7324ebb0e554..6ef6626555385 100644 --- a/doc/rados/configuration/mon-osd-interaction.rst +++ b/doc/rados/configuration/mon-osd-interaction.rst @@ -34,7 +34,8 @@ and ``[osd]`` or ``[global]`` section of your Ceph configuration file, or by setting the value at runtime. -.. ditaa:: +---------+ +---------+ +.. ditaa:: + +---------+ +---------+ | OSD 1 | | OSD 2 | +---------+ +---------+ | | @@ -89,7 +90,9 @@ and ``mon osd reporter subtree level`` settings under the ``[mon]`` section of your Ceph configuration file, or by setting the value at runtime. -.. ditaa:: +---------+ +---------+ +---------+ +.. ditaa:: + + +---------+ +---------+ +---------+ | OSD 1 | | OSD 2 | | Monitor | +---------+ +---------+ +---------+ | | | @@ -118,7 +121,9 @@ Ceph Monitor heartbeat interval by adding an ``osd mon heartbeat interval`` setting under the ``[osd]`` section of your Ceph configuration file, or by setting the value at runtime. -.. ditaa:: +---------+ +---------+ +-------+ +---------+ +.. 
ditaa:: + + +---------+ +---------+ +-------+ +---------+ | OSD 1 | | OSD 2 | | OSD 3 | | Monitor | +---------+ +---------+ +-------+ +---------+ | | | | @@ -161,7 +166,9 @@ interval max`` setting under the ``[osd]`` section of your Ceph configuration file, or by setting the value at runtime. -.. ditaa:: +---------+ +---------+ +.. ditaa:: + + +---------+ +---------+ | OSD 1 | | Monitor | +---------+ +---------+ | | diff --git a/doc/rados/configuration/network-config-ref.rst b/doc/rados/configuration/network-config-ref.rst index 41da0a1752867..bd49a87b310cc 100644 --- a/doc/rados/configuration/network-config-ref.rst +++ b/doc/rados/configuration/network-config-ref.rst @@ -112,7 +112,7 @@ Each Ceph OSD Daemon on a Ceph Node may use up to four ports: #. One for sending data to other OSDs. #. Two for heartbeating on each interface. -.. ditaa:: +.. ditaa:: /---------------\ | OSD | | +---+----------------+-----------+ diff --git a/doc/rados/operations/cache-tiering.rst b/doc/rados/operations/cache-tiering.rst index c825c22c3035a..237b6e3c9f376 100644 --- a/doc/rados/operations/cache-tiering.rst +++ b/doc/rados/operations/cache-tiering.rst @@ -13,7 +13,7 @@ tier. So the cache tier and the backing storage tier are completely transparent to Ceph clients. -.. ditaa:: +.. ditaa:: +-------------+ | Ceph Client | +------+------+ diff --git a/doc/rados/operations/monitoring-osd-pg.rst b/doc/rados/operations/monitoring-osd-pg.rst index 630d268b45825..08b70dd4d51c0 100644 --- a/doc/rados/operations/monitoring-osd-pg.rst +++ b/doc/rados/operations/monitoring-osd-pg.rst @@ -33,7 +33,9 @@ not assign placement groups to the OSD. If an OSD is ``down``, it should also be .. note:: If an OSD is ``down`` and ``in``, there is a problem and the cluster will not be in a healthy state. -.. ditaa:: +----------------+ +----------------+ +.. ditaa:: + + +----------------+ +----------------+ | | | | | OSD #n In | | OSD #n Up | | | | | @@ -158,7 +160,9 @@ OSDs to establish agreement on the current state of the placement group (assuming a pool with 3 replicas of the PG). -.. ditaa:: +---------+ +---------+ +-------+ +.. ditaa:: + + +---------+ +---------+ +-------+ | OSD 1 | | OSD 2 | | OSD 3 | +---------+ +---------+ +-------+ | | | @@ -265,8 +269,8 @@ group's Acting Set will peer. Once peering is complete, the placement group status should be ``active+clean``, which means a Ceph client can begin writing to the placement group. -.. ditaa:: - +.. ditaa:: + /-----------\ /-----------\ /-----------\ | Creating |------>| Peering |------>| Active | \-----------/ \-----------/ \-----------/ diff --git a/doc/rados/operations/placement-groups.rst b/doc/rados/operations/placement-groups.rst index dbeb2fe30c785..f7f2d110a8eb5 100644 --- a/doc/rados/operations/placement-groups.rst +++ b/doc/rados/operations/placement-groups.rst @@ -244,7 +244,6 @@ OSDs. For instance, in a replicated pool of size two, each placement group will store objects on two OSDs, as shown below. .. ditaa:: - +-----------------------+ +-----------------------+ | Placement Group #1 | | Placement Group #2 | | | | | diff --git a/doc/rados/operations/user-management.rst b/doc/rados/operations/user-management.rst index 4d961d4be046a..7b7713a83bd04 100644 --- a/doc/rados/operations/user-management.rst +++ b/doc/rados/operations/user-management.rst @@ -9,7 +9,8 @@ authorization with the :term:`Ceph Storage Cluster`. Users are either individuals or system actors such as applications, which use Ceph clients to interact with the Ceph Storage Cluster daemons. -.. 
ditaa:: +-----+ +.. ditaa:: + +-----+ | {o} | | | +--+--+ /---------\ /---------\ diff --git a/doc/radosgw/admin.rst b/doc/radosgw/admin.rst index 4f172eab4aa10..b498cd97f63dd 100644 --- a/doc/radosgw/admin.rst +++ b/doc/radosgw/admin.rst @@ -22,7 +22,8 @@ There are two user types: - **Subuser:** The term 'subuser' reflects a user of the Swift interface. A subuser is associated to a user . -.. ditaa:: +---------+ +.. ditaa:: + +---------+ | User | +----+----+ | diff --git a/doc/radosgw/index.rst b/doc/radosgw/index.rst index 819986926c6d2..bf1392d845159 100644 --- a/doc/radosgw/index.rst +++ b/doc/radosgw/index.rst @@ -22,7 +22,9 @@ in the same Ceph Storage Cluster used to store data from Ceph File System client or Ceph Block Device clients. The S3 and Swift APIs share a common namespace, so you may write data with one API and retrieve it with the other. -.. ditaa:: +------------------------+ +------------------------+ +.. ditaa:: + + +------------------------+ +------------------------+ | S3 compatible API | | Swift compatible API | +------------------------+-+------------------------+ | radosgw | diff --git a/doc/radosgw/swift/tutorial.rst b/doc/radosgw/swift/tutorial.rst index 8287d19be5b1c..5d2889b192d20 100644 --- a/doc/radosgw/swift/tutorial.rst +++ b/doc/radosgw/swift/tutorial.rst @@ -13,7 +13,9 @@ metadata. See example code for the following languages: - `Ruby`_ -.. ditaa:: +----------------------------+ +-----------------------------+ +.. ditaa:: + + +----------------------------+ +-----------------------------+ | | | | | Create a Connection |------->| Create a Container | | | | | diff --git a/doc/radosgw/vault.rst b/doc/radosgw/vault.rst index 3deef8dac3b63..dc3ec3ddfb6e2 100644 --- a/doc/radosgw/vault.rst +++ b/doc/radosgw/vault.rst @@ -5,7 +5,9 @@ HashiCorp Vault Integration HashiCorp `Vault`_ can be used as a secure key management service for `Server-Side Encryption`_ (SSE-KMS). -.. ditaa:: +---------+ +---------+ +-------+ +-------+ +.. ditaa:: + + +---------+ +---------+ +-------+ +-------+ | Client | | RadosGW | | Vault | | OSD | +---------+ +---------+ +-------+ +-------+ | create secret | | | diff --git a/doc/rbd/index.rst b/doc/rbd/index.rst index fdcf21d6f3c67..ea63f2e4ddc5e 100644 --- a/doc/rbd/index.rst +++ b/doc/rbd/index.rst @@ -17,7 +17,9 @@ such as snapshotting, replication and consistency. Ceph's :abbr:`RADOS (Reliable Autonomic Distributed Object Store)` Block Devices (RBD) interact with OSDs using kernel modules or the ``librbd`` library. -.. ditaa:: +------------------------+ +------------------------+ +.. ditaa:: + + +------------------------+ +------------------------+ | Kernel Module | | librbd | +------------------------+-+------------------------+ | RADOS Protocol | diff --git a/doc/rbd/libvirt.rst b/doc/rbd/libvirt.rst index 3488096628c38..c0ba118197a3f 100644 --- a/doc/rbd/libvirt.rst +++ b/doc/rbd/libvirt.rst @@ -21,7 +21,9 @@ software that interfaces with ``libvirt``. The following stack diagram illustrates how ``libvirt`` and QEMU use Ceph block devices via ``librbd``. -.. ditaa:: +---------------------------------------------------+ +.. 
ditaa:: + + +---------------------------------------------------+ | libvirt | +------------------------+--------------------------+ | diff --git a/doc/rbd/qemu-rbd.rst b/doc/rbd/qemu-rbd.rst index 5b0143509c10c..f7ddb2ec6a4f1 100644 --- a/doc/rbd/qemu-rbd.rst +++ b/doc/rbd/qemu-rbd.rst @@ -14,7 +14,9 @@ virtual machines quickly, because the client doesn't have to download an entire image each time it spins up a new virtual machine. -.. ditaa:: +---------------------------------------------------+ +.. ditaa:: + + +---------------------------------------------------+ | QEMU | +---------------------------------------------------+ | librbd | diff --git a/doc/rbd/rbd-cloudstack.rst b/doc/rbd/rbd-cloudstack.rst index 876284bcb2c15..1b961234b8985 100644 --- a/doc/rbd/rbd-cloudstack.rst +++ b/doc/rbd/rbd-cloudstack.rst @@ -14,7 +14,9 @@ and a dual-core processor, but more CPU and RAM will perform better. The following diagram depicts the CloudStack/Ceph technology stack. -.. ditaa:: +---------------------------------------------------+ +.. ditaa:: + + +---------------------------------------------------+ | CloudStack | +---------------------------------------------------+ | libvirt | diff --git a/doc/rbd/rbd-kubernetes.rst b/doc/rbd/rbd-kubernetes.rst index 8f00dbf75e47b..caaf77d648fa4 100644 --- a/doc/rbd/rbd-kubernetes.rst +++ b/doc/rbd/rbd-kubernetes.rst @@ -14,7 +14,8 @@ To use Ceph Block Devices with Kubernetes v1.13 and higher, you must install and configure ``ceph-csi`` within your Kubernetes environment. The following diagram depicts the Kubernetes/Ceph technology stack. -.. ditaa:: +---------------------------------------------------+ +.. ditaa:: + +---------------------------------------------------+ | Kubernetes | +---------------------------------------------------+ | ceph--csi | diff --git a/doc/rbd/rbd-live-migration.rst b/doc/rbd/rbd-live-migration.rst index ca7c12d93e4e6..813deaae3fb57 100644 --- a/doc/rbd/rbd-live-migration.rst +++ b/doc/rbd/rbd-live-migration.rst @@ -20,7 +20,9 @@ the image is updated to point to the new target image. kernel module does not support live-migration at this time. -.. ditaa:: +-------------+ +-------------+ +.. ditaa:: + + +-------------+ +-------------+ | {s} c999 | | {s} | | Live | Target refers | Live | | migration |<-------------*| migration | diff --git a/doc/rbd/rbd-openstack.rst b/doc/rbd/rbd-openstack.rst index 65e74d3bc3a9d..9d02b51460089 100644 --- a/doc/rbd/rbd-openstack.rst +++ b/doc/rbd/rbd-openstack.rst @@ -16,7 +16,9 @@ quad-core processor. The following diagram depicts the OpenStack/Ceph technology stack. -.. ditaa:: +---------------------------------------------------+ +.. ditaa:: + + +---------------------------------------------------+ | OpenStack | +---------------------------------------------------+ | libvirt | diff --git a/doc/rbd/rbd-persistent-cache.rst b/doc/rbd/rbd-persistent-cache.rst index f7f1043264385..230e803945ca2 100644 --- a/doc/rbd/rbd-persistent-cache.rst +++ b/doc/rbd/rbd-persistent-cache.rst @@ -21,7 +21,9 @@ will be serviced from the local cache. .. note:: RBD shared read-only parent image cache requires the Ceph Nautilus release or later. -.. ditaa:: +--------------------------------------------------------+ +.. 
ditaa:: + + +--------------------------------------------------------+ | QEMU | +--------------------------------------------------------+ | librbd (cloned images) | diff --git a/doc/rbd/rbd-snapshot.rst b/doc/rbd/rbd-snapshot.rst index 5aaa23e5d19dd..3d2037d030a1f 100644 --- a/doc/rbd/rbd-snapshot.rst +++ b/doc/rbd/rbd-snapshot.rst @@ -24,7 +24,9 @@ command and many higher level interfaces, including `QEMU`_, `libvirt`_, For virtual machines, `qemu-guest-agent` can be used to automatically freeze file systems when creating a snapshot. -.. ditaa:: +------------+ +-------------+ +.. ditaa:: + + +------------+ +-------------+ | {s} | | {s} c999 | | Active |<-------*| Snapshot | | Image | | of Image | @@ -147,7 +149,9 @@ so cloning a snapshot simplifies semantics--making it possible to create clones rapidly. -.. ditaa:: +-------------+ +-------------+ +.. ditaa:: + + +-------------+ +-------------+ | {s} c999 | | {s} | | Snapshot | Child refers | COW Clone | | of Image |<------------*| of Snapshot | @@ -181,7 +185,9 @@ Ceph block device layering is a simple process. You must have an image. You must create a snapshot of the image. You must protect the snapshot. Once you have performed these steps, you can begin cloning the snapshot. -.. ditaa:: +----------------------------+ +-----------------------------+ +.. ditaa:: + + +----------------------------+ +-----------------------------+ | | | | | Create Block Device Image |------->| Create a Snapshot | | | | | diff --git a/doc/start/intro.rst b/doc/start/intro.rst index 4907789b88e8d..8d7c79887f7f6 100644 --- a/doc/start/intro.rst +++ b/doc/start/intro.rst @@ -11,7 +11,9 @@ Storage Cluster requires at least one Ceph Monitor, Ceph Manager, and Ceph OSD (Object Storage Daemon). The Ceph Metadata Server is also required when running Ceph File System clients. -.. ditaa:: +---------------+ +------------+ +------------+ +---------------+ +.. ditaa:: + + +---------------+ +------------+ +------------+ +---------------+ | OSDs | | Monitors | | Managers | | MDSs | +---------------+ +------------+ +------------+ +---------------+ diff --git a/doc/start/quick-rbd.rst b/doc/start/quick-rbd.rst index d8a331f1ff272..a400652d8b994 100644 --- a/doc/start/quick-rbd.rst +++ b/doc/start/quick-rbd.rst @@ -11,7 +11,8 @@ Device`. Block Device. -.. ditaa:: +.. ditaa:: + /------------------\ /----------------\ | Admin Node | | ceph-client | | +-------->+ cCCC | -- 2.39.5
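For reference, the fallback logic that the doc/conf.py hunks above introduce
boils down to the following sketch. The `extensions = []` line is only a
stand-in for the full extension list already defined earlier in conf.py; the
rest mirrors the added lines of the patch:

    import shutil

    extensions = []  # stand-in for the extension list defined earlier in doc/conf.py

    # Prefer the native ditaa binary; fall back to plantweb, which renders
    # ditaa diagrams through a web service, when ditaa is not installed
    # (as on Read the Docs, where system packages cannot be installed).
    ditaa = shutil.which("ditaa")
    if ditaa is not None:
        extensions += ['sphinxcontrib.ditaa']
    else:
        extensions += ['plantweb.directive']
        plantweb_defaults = {
            'engine': 'ditaa'
        }

    def setup(app):
        app.add_javascript('js/ceph.js')
        if ditaa is None:
            # plantweb only provides a "diagram" directive, so register it
            # under the "ditaa" name as well
            from plantweb.directive import DiagramDirective
            app.add_directive('ditaa', DiagramDirective)

Detecting the binary with shutil.which keeps local builds on the offline
ditaa renderer, while RTD, which cannot install Java or ditaa, transparently
falls back to plantweb's web-service rendering.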