From: Kefu Chai
Date: Sat, 29 Aug 2020 16:51:12 +0000 (+0800)
Subject: doc: add sphinx.ext.mathjax for math-to-MathML rendering
X-Git-Tag: v16.1.0~1255^2~3
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=253fd2289600581c83438a89b2fb7f7044d28327;p=ceph.git

doc: add sphinx.ext.mathjax for math-to-MathML rendering

Signed-off-by: Kefu Chai
---

diff --git a/doc/conf.py b/doc/conf.py
index 496a996fb540..797000d1bc86 100644
--- a/doc/conf.py
+++ b/doc/conf.py
@@ -51,10 +51,11 @@ sys.path.insert(0, os.path.abspath('_ext'))
 extensions = [
     'sphinx.ext.autodoc',
-    'sphinx_autodoc_typehints',
     'sphinx.ext.graphviz',
+    'sphinx.ext.mathjax',
     'sphinx.ext.todo',
     'sphinx-prompt',
+    'sphinx_autodoc_typehints',
     'sphinx_substitution_extensions',
     'breathe',
     'edit_on_github',
diff --git a/doc/rados/operations/erasure-code-clay.rst b/doc/rados/operations/erasure-code-clay.rst
index ccf3b309c39c..cb330dc1c1af 100644
--- a/doc/rados/operations/erasure-code-clay.rst
+++ b/doc/rados/operations/erasure-code-clay.rst
@@ -23,13 +23,13 @@ amount of information. More general parameters are provided below. The benefits
 when the repair is carried out for a rack that stores information on the order
 of Terabytes.
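For context, a sketch of what the `conf.py` hunk above enables (illustrative only, not part of the patch; the extension list mirrors the patched file, and the `:math:` line in the comment is just an example usage):

```python
# doc/conf.py (sketch): once 'sphinx.ext.mathjax' is in the extension list,
# Sphinx renders :math:`...` roles and .. math:: directives in HTML output
# via MathJax instead of leaving them as plain text.
extensions = [
    'sphinx.ext.autodoc',
    'sphinx.ext.graphviz',
    'sphinx.ext.mathjax',   # added by this patch
    'sphinx.ext.todo',
]

# A .rst file can then write, for example:
#   sub-chunk count = :math:`q^{\frac{k+m}{q}}`
```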
 
- +-------------+---------------------------+
- | plugin      | total amount of disk IO   |
- +=============+===========================+
- |jerasure,isa | k*S                       |
- +-------------+---------------------------+
- | clay        | d*S/(d-k+1) = (k+m-1)*S/m |
- +-------------+---------------------------+
+ +-------------+---------------------------------------------------------+
+ | plugin      | total amount of disk IO                                 |
+ +=============+=========================================================+
+ |jerasure,isa | :math:`k S`                                             |
+ +-------------+---------------------------------------------------------+
+ | clay        | :math:`\frac{d S}{d - k + 1} = \frac{(k + m - 1) S}{m}` |
+ +-------------+---------------------------------------------------------+
 
 where *S* is the amount of data stored on a single OSD undergoing
 repair. In the table above, we have used the largest possible value of
 *d* as this will result in the smallest amount of data download needed
@@ -174,14 +174,14 @@ is a vector code and it is able to view and manipulate data within a chunk
 at a finer granularity termed as a sub-chunk. The number of sub-chunks
 within a chunk for a Clay code is given by:
 
-    sub-chunk count = q\ :sup:`(k+m)/q`, where q=d-k+1
+    sub-chunk count = :math:`q^{\frac{k+m}{q}}`, where :math:`q = d - k + 1`
 
 During repair of an OSD, the helper information requested from
 an available OSD is only a fraction of a chunk. In fact, the number
 of sub-chunks within a chunk that are accessed during repair is given by:
 
-    repair sub-chunk count = sub-chunk count / q
+    repair sub-chunk count = :math:`\frac{\text{sub-chunk count}}{q}`
 
 Examples
 --------
@@ -203,9 +203,9 @@ are not necessarily stored consecutively within a chunk. For best disk IO
 performance, it is helpful to read contiguous data. For this reason, it is
 suggested that you choose stripe-size such that the sub-chunk size is
 sufficiently large.
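The clay sub-chunk formulas above are easy to sanity-check numerically. A minimal Python sketch (the helper name is ours, not part of the Ceph codebase; it assumes q divides k+m, which valid clay profiles require):

```python
def clay_subchunks(k, m, d):
    """Sub-chunk count and repair sub-chunk count for a clay code,
    mirroring the formulas in the doc:
      sub-chunk count        = q^((k+m)/q), with q = d - k + 1
      repair sub-chunk count = sub-chunk count / q
    """
    q = d - k + 1
    if (k + m) % q:
        raise ValueError("choose d so that q = d-k+1 divides k+m")
    count = q ** ((k + m) // q)
    return count, count // q

# Largest possible d for k=4, m=2 (d = k+m-1 = 5) gives q = 2:
print(clay_subchunks(4, 2, 5))    # (8, 4): repair reads half of each chunk
# k=16, m=4, d=19 gives q = 4:
print(clay_subchunks(16, 4, 19))  # (1024, 256)
```

With *k=16*, *m=4*, *d=19* each chunk has 1024 sub-chunks, which is why a 64MB stripe works out to a 4KB sub-chunk size in the sizing guideline later in this patch.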
-For a given stripe-size (that's fixed based on a workload), choose ``k``, ``m``, ``d`` such that::
+For a given stripe-size (that's fixed based on a workload), choose ``k``, ``m``, ``d`` such that:
 
-    sub-chunk size = stripe-size / (k*sub-chunk count) = 4KB, 8KB, 12KB ...
+    sub-chunk size = :math:`\frac{\text{stripe-size}}{k \times \text{sub-chunk count}}` = 4KB, 8KB, 12KB ...
 
 #. For large size workloads for which the stripe size is large, it is easy to choose k, m, d.
    For example consider a stripe-size of size 64MB, choosing *k=16*, *m=4* and *d=19* will
diff --git a/doc/rados/operations/erasure-code-shec.rst b/doc/rados/operations/erasure-code-shec.rst
index dd5708a3b928..b2157f780dd7 100644
--- a/doc/rados/operations/erasure-code-shec.rst
+++ b/doc/rados/operations/erasure-code-shec.rst
@@ -108,11 +108,9 @@ Space Efficiency
 Space efficiency is a ratio of data chunks to all ones in a object and
 represented as k/(k+m).
-In order to improve space efficiency, you should increase k or decrease m.
+In order to improve space efficiency, you should increase k or decrease m:
 
-::
-
-    space efficiency of SHEC(4,3,2) = 4/(4+3) = 0.57
+    space efficiency of SHEC(4,3,2) = :math:`\frac{4}{4+3}` = 0.57
 
 SHEC(5,3,2) or SHEC(4,2,2) improves SHEC(4,3,2)'s space efficiency
 
 Durability
diff --git a/doc/rados/operations/placement-groups.rst b/doc/rados/operations/placement-groups.rst
index f7f2d110a8eb..699c26f8e30b 100644
--- a/doc/rados/operations/placement-groups.rst
+++ b/doc/rados/operations/placement-groups.rst
@@ -435,11 +435,9 @@ If you have more than 50 OSDs, we recommend approximately 50-100
 placement groups per OSD to balance out resource usage, data
 durability and distribution. If you have less than 50 OSDs, choosing
 among the `preselection`_ above is best.
 For a single pool of objects,
-you can use the following formula to get a baseline::
+you can use the following formula to get a baseline:
 
-              (OSDs * 100)
- Total PGs =  ------------
-              pool size
+    Total PGs = :math:`\frac{\text{OSDs} \times 100}{\text{pool size}}`
 
 Where **pool size** is either the number of replicas for replicated
 pools or the K+M sum for erasure coded pools (as returned by **ceph
@@ -457,11 +455,9 @@ data across your OSDs. Their use should be limited to incrementally
 stepping from one power of two to another.
 
 As an example, for a cluster with 200 OSDs and a pool size of 3
-replicas, you would estimate your number of PGs as follows::
+replicas, you would estimate your number of PGs as follows:
 
-             (200 * 100)
-             ----------- = 6667. Nearest power of 2: 8192
-                  3
+    :math:`\frac{200 \times 100}{3} = 6667`. Nearest power of 2: 8192
 
 When using multiple data pools for storing objects, you need to
 ensure that you balance the number of placement groups per pool with the
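The baseline formula and the power-of-two rounding in the placement-groups hunks above can be sketched as follows (the helper name is ours, not a Ceph API; for an erasure-coded pool, `pool_size` would be the K+M sum as the doc notes):

```python
import math

def pg_baseline(osds, pool_size):
    """Baseline total PG count: (OSDs * 100) / pool size, rounded to
    the nearest power of two, matching the 6667 -> 8192 example."""
    raw = osds * 100 / pool_size
    return 2 ** round(math.log2(raw))

print(pg_baseline(200, 3))  # 8192: (200 * 100) / 3 = 6666.7, nearest power of 2
```

For 6666.7 the nearest power of two really is 8192 (1525 away, versus 2571 for 4096), so rounding `log2` rather than always rounding up reproduces the doc's answer.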