git-server-git.apps.pok.os.sepia.ceph.com Git - ceph.git/commitdiff
doc/dev: use http://docs.ceph.com/en/latest/ instead of /docs/master/ for docs 38241/head
author    haoyixing <haoyixing@kuaishou.com>
          Mon, 23 Nov 2020 12:36:32 +0000 (20:36 +0800)
committer haoyixing <haoyixing@kuaishou.com>
          Tue, 24 Nov 2020 04:49:47 +0000 (12:49 +0800)
Several links under http://docs.ceph.com/docs/master/ were inaccessible.
Change them to http://docs.ceph.com/en/latest/ so we can access them directly.

Signed-off-by: haoyixing <haoyixing@kuaishou.com>
16 files changed:
doc/dev/freebsd.rst
doc/man/8/crushtool.rst
doc/rados/operations/pools.rst
doc/rados/troubleshooting/log-and-debug.rst
doc/radosgw/multisite.rst
doc/radosgw/pools.rst
doc/rbd/iscsi-target-cli.rst
monitoring/grafana/README.md
qa/qa_scripts/openstack/connectceph.sh
qa/standalone/scrub/osd-scrub-repair.sh
qa/tasks/devstack.py
src/mgr/DaemonServer.cc
src/mon/PGMap.cc
src/osd/OSDMap.cc
src/pybind/mgr/dashboard/services/rgw_client.py
src/sample.ceph.conf
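The bulk of this commit is one mechanical substitution, repeated across 16 files: rewrite every `docs.ceph.com/docs/master/` link to `docs.ceph.com/en/latest/`. A minimal shell sketch of that pass is below; the scratch directory and sample file are hypothetical, not part of the commit:

```shell
# Sketch of the bulk rewrite this commit performs by hand.
# /tmp/urlfix and sample.rst are made up for illustration.
set -eu
mkdir -p /tmp/urlfix && cd /tmp/urlfix
printf 'see http://docs.ceph.com/docs/master/rados/operations/pools/\n' > sample.rst

# List every file containing the legacy path, then rewrite it in place.
# '|' is used as the sed delimiter so the slashes in the URLs need no escaping.
grep -rl 'docs.ceph.com/docs/master/' . \
  | xargs sed -i 's|docs.ceph.com/docs/master/|docs.ceph.com/en/latest/|g'
```

Note that a blind substitution cannot cover everything the commit does: a few hunks also upgrade `http://` to `https://` or fix the target page itself (e.g. `install/manual-deployment/` becomes `install/manual-freebsd-deployment/` in doc/dev/freebsd.rst), which still needs a manual pass.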

doc/dev/freebsd.rst
index 71568ef388dd1266277be75c61a210c3ca9a478d..b1645b873b360c77fc3427609c432dca5fd2f619 100644 (file)
@@ -44,7 +44,7 @@ MON creation
 
 Monitors are created by following the manual creation steps on::
 
-  http://docs.ceph.com/docs/master/install/manual-deployment/
+  https://docs.ceph.com/en/latest/install/manual-freebsd-deployment/
 
 
 OSD creation
doc/man/8/crushtool.rst
index d48a75ee2e6f56d096c097991122e069a5bb4468..82cbfce84b999691cf6f501bb46bd5b1ee6a3db8 100644 (file)
@@ -264,7 +264,7 @@ Reclassify
 The *reclassify* function allows users to transition from older maps that
 maintain parallel hierarchies for OSDs of different types to a modern CRUSH
 map that makes use of the *device class* feature.  For more information,
-see http://docs.ceph.com/docs/master/rados/operations/crush-map-edits/#migrating-from-a-legacy-ssd-rule-to-device-classes.
+see https://docs.ceph.com/en/latest/rados/operations/crush-map-edits/#migrating-from-a-legacy-ssd-rule-to-device-classes.
 
 Example output from --test
 ==========================
doc/rados/operations/pools.rst
index 18beffbfc7a6c11ca5740cadcc34641ec333d0ac..1a12fc16a7297ff7d0a7cac83ea9d1983ddee615 100644 (file)
@@ -276,21 +276,21 @@ You may set values for the following keys:
 
 ``compression_algorithm``
 
-:Description: Sets inline compression algorithm to use for underlying BlueStore. This setting overrides the `global setting <http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/#inline-compression>`_ of ``bluestore compression algorithm``.
+:Description: Sets inline compression algorithm to use for underlying BlueStore. This setting overrides the `global setting <https://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#inline-compression>`__ of ``bluestore compression algorithm``.
 
 :Type: String
 :Valid Settings: ``lz4``, ``snappy``, ``zlib``, ``zstd``
 
 ``compression_mode``
 
-:Description: Sets the policy for the inline compression algorithm for underlying BlueStore. This setting overrides the `global setting <http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/#inline-compression>`_ of ``bluestore compression mode``.
+:Description: Sets the policy for the inline compression algorithm for underlying BlueStore. This setting overrides the `global setting <http://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#inline-compression>`__ of ``bluestore compression mode``.
 
 :Type: String
 :Valid Settings: ``none``, ``passive``, ``aggressive``, ``force``
 
 ``compression_min_blob_size``
 
-:Description: Chunks smaller than this are never compressed. This setting overrides the `global setting <http://docs.ceph.com/docs/master/rados/configuration/bluestore-config-ref/#inline-compression>`_ of ``bluestore compression min blob *``.
+:Description: Chunks smaller than this are never compressed. This setting overrides the `global setting <http://docs.ceph.com/en/latest/rados/configuration/bluestore-config-ref/#inline-compression>`__ of ``bluestore compression min blob *``.
 
 :Type: Unsigned Integer
 
@@ -837,4 +837,3 @@ a size of 3).
 .. _setting the number of placement groups: ../placement-groups#set-the-number-of-placement-groups
 .. _Erasure Coding with Overwrites: ../erasure-code#erasure-coding-with-overwrites
 .. _Block Device Commands: ../../../rbd/rados-rbd-cmds/#create-a-block-device-pool
-
doc/rados/troubleshooting/log-and-debug.rst
index 5cf9e15b350f78ff91e886c0bb33cdcb9411bd12..71170149bca815a7950efbd515324110508f0b75 100644 (file)
@@ -152,7 +152,7 @@ verbose [#]_ . In general, the logs in-memory are not sent to the output log unl
 
 - a fatal signal is raised or
 - an ``assert`` in source code is triggered or
-- upon requested. Please consult `document on admin socket <http://docs.ceph.com/docs/master/man/8/ceph/#daemon>`_ for more details.
+- upon requested. Please consult `document on admin socket <http://docs.ceph.com/en/latest/man/8/ceph/#daemon>`_ for more details.
 
 A debug logging setting can take a single value for the log level and the
 memory level, which sets them both as the same value. For example, if you
index d3963234169b7964d79f52ca26f89197b9852877..fc859594568f7f0acf640f24fbfe1a5c938507e5 100644 (file)
@@ -1387,7 +1387,7 @@ Set a Zone
 Configuring a zone involves specifying a series of Ceph Object Gateway
 pools. For consistency, we recommend using a pool prefix that is the
 same as the zone name. See
-`Pools <http://docs.ceph.com/docs/master/rados/operations/pools/#pools>`__
+`Pools <http://docs.ceph.com/en/latest/rados/operations/pools/#pools>`__
 for details of configuring pools.
 
 To set a zone, create a JSON object consisting of the pools, save the
doc/radosgw/pools.rst
index a904883b36c57553cc7e81ddf250afa176e26e25..a9b00eac1d62edea8295b6ad660c43d3b623a177 100644 (file)
@@ -19,7 +19,7 @@ are sufficient for some pools, but others (especially those listed in
 tuning. We recommend using the `Ceph Placement Group’s per Pool
 Calculator <http://ceph.com/pgcalc/>`__ to calculate a suitable number of
 placement groups for these pools. See
-`Pools <http://docs.ceph.com/docs/master/rados/operations/pools/#pools>`__
+`Pools <http://docs.ceph.com/en/latest/rados/operations/pools/#pools>`__
 for details on pool creation.
 
 .. _radosgw-pool-namespaces:
doc/rbd/iscsi-target-cli.rst
index 7a6683828543c1616397c0259f10d4784b46ade2..d888a34b09d944601906025faf20a78cf81980e3 100644 (file)
@@ -90,7 +90,7 @@ For rpm based instructions execute the following commands:
 
    If it does not exist instructions for creating pools can be found on the
    `RADOS pool operations page
-   <http://docs.ceph.com/docs/master/rados/operations/pools/>`_.
+   <http://docs.ceph.com/en/latest/rados/operations/pools/>`_.
 
 #. As ``root``, on a iSCSI gateway node, create a file named
    ``iscsi-gateway.cfg`` in the ``/etc/ceph/`` directory:
monitoring/grafana/README.md
index 4054e9853840a4c1c9f239a02eb5ed44e047d0a8..b4bf4ec3273d07dc8ae11df34d8fae43e1d773e0 100644 (file)
@@ -3,7 +3,7 @@
 Here you can find a collection of [Grafana](https://grafana.com/grafana)
 dashboards for Ceph Monitoring. These dashboards are based on metrics collected
 from [prometheus](https://prometheus.io/) scraping the [prometheus mgr
-plugin](http://docs.ceph.com/docs/master/mgr/prometheus/) and the
+plugin](http://docs.ceph.com/en/latest/mgr/prometheus/) and the
 [node_exporter](https://github.com/prometheus/node_exporter).
 
 ### Other requirements
qa/qa_scripts/openstack/connectceph.sh
index 2d70df7ffb36642e24256012e49fd3326a0a2e19..d975daada0e67445af07332da72feddabe31c7ae 100755 (executable)
@@ -4,7 +4,7 @@
 #
 # Essentially implements:
 #
-# http://docs.ceph.com/docs/master/rbd/rbd-openstack/
+# http://docs.ceph.com/en/latest/rbd/rbd-openstack/
 #
 # The directory named files contains templates for the /etc/glance/glance-api.conf,
 # /etc/cinder/cinder.conf, /etc/nova/nova.conf Openstack files
qa/standalone/scrub/osd-scrub-repair.sh
index 51439aa3f4e6b7265f674b29026512c64bae0b6c..1461d0d6c610231a38794759645d926e6e110b9e 100755 (executable)
@@ -20,7 +20,7 @@ source $CEPH_ROOT/qa/standalone/ceph-helpers.sh
 if [ `uname` = FreeBSD ]; then
     # erasure coding overwrites are only tested on Bluestore
     # erasure coding on filestore is unsafe
-    # http://docs.ceph.com/docs/master/rados/operations/erasure-code/#erasure-coding-with-overwrites
+    # http://docs.ceph.com/en/latest/rados/operations/erasure-code/#erasure-coding-with-overwrites
     use_ec_overwrite=false
 else
     use_ec_overwrite=true
qa/tasks/devstack.py
index 954a6fa885a3534c3a23766821cfec93d6b9ae51..2499e9e538d78ed5a3b43ea72d0f6d63f4549683 100644 (file)
@@ -51,7 +51,7 @@ def install(ctx, config):
 
     This was created using documentation found here:
         https://github.com/openstack-dev/devstack/blob/master/README.md
-        http://docs.ceph.com/docs/master/rbd/rbd-openstack/
+        http://docs.ceph.com/en/latest/rbd/rbd-openstack/
     """
     if config is None:
         config = {}
src/mgr/DaemonServer.cc
index 91e6c3759b9f091f9c3b8bc0d993bb5a353465e0..2b1878cb78f7cad83f87d307835cb1a53ddaff51 100644 (file)
@@ -863,7 +863,7 @@ void DaemonServer::log_access_denied(
                      << "entity='" << session->entity_name << "' "
                      << "cmd=" << cmdctx->cmd << ":  access denied";
   ss << "access denied: does your client key have mgr caps? "
-        "See http://docs.ceph.com/docs/master/mgr/administrator/"
+        "See http://docs.ceph.com/en/latest/mgr/administrator/"
         "#client-authentication";
 }
 
src/mon/PGMap.cc
index 176b64e15717f41b80d55a8901699eed2d3001f0..1e95301dae83ed655597632cc3c142cca81a1692 100644 (file)
@@ -3487,7 +3487,7 @@ int process_pg_map_command(
   string omap_stats_note =
       "\n* NOTE: Omap statistics are gathered during deep scrub and "
       "may be inaccurate soon afterwards depending on utilization. See "
-      "http://docs.ceph.com/docs/master/dev/placement-group/#omap-statistics "
+      "http://docs.ceph.com/en/latest/dev/placement-group/#omap-statistics "
       "for further details.\n";
   bool omap_stats_note_required = false;
 
src/osd/OSDMap.cc
index 194a8b6a415e05d4b67396eac8fc46738f337e84..df33e819ff71f5a6dddb5f478558507ad3f9489c 100644 (file)
@@ -5955,7 +5955,7 @@ void OSDMap::check_health(CephContext *cct,
       ss << "crush map has legacy tunables (require " << min
         << ", min is " << cct->_conf->mon_crush_min_required_version << ")";
       auto& d = checks->add("OLD_CRUSH_TUNABLES", HEALTH_WARN, ss.str(), 0);
-      d.detail.push_back("see http://docs.ceph.com/docs/master/rados/operations/crush-map/#tunables");
+      d.detail.push_back("see http://docs.ceph.com/en/latest/rados/operations/crush-map/#tunables");
     }
   }
 
@@ -5966,7 +5966,7 @@ void OSDMap::check_health(CephContext *cct,
       ss << "crush map has straw_calc_version=0";
       auto& d = checks->add("OLD_CRUSH_STRAW_CALC_VERSION", HEALTH_WARN, ss.str(), 0);
       d.detail.push_back(
-       "see http://docs.ceph.com/docs/master/rados/operations/crush-map/#tunables");
+       "see http://docs.ceph.com/en/latest/rados/operations/crush-map/#tunables");
     }
   }
 
src/pybind/mgr/dashboard/services/rgw_client.py
index 0013d671c74880cd6b6f815e86f054e838572e76..2355bbbc1ffe8e0480cefd53dfd5aea1347c4330 100644 (file)
@@ -163,7 +163,7 @@ def _parse_frontend_config(config):
     the first found option will be returned.
 
     Get more details about the configuration syntax here:
-    http://docs.ceph.com/docs/master/radosgw/frontends/
+    http://docs.ceph.com/en/latest/radosgw/frontends/
     https://civetweb.github.io/civetweb/UserManual.html
 
     :param config: The configuration string to parse.
src/sample.ceph.conf
index a8f8b9e04af6bc0e5e1eb41971df8472a7c08bb9..13394d31f219422d305b5f0f53b75e651e0a04f9 100644 (file)
@@ -31,7 +31,7 @@
 #             ; Example: /var/run/ceph/$cluster-$name.asok
 
 [global]
-### http://docs.ceph.com/docs/master/rados/configuration/general-config-ref/
+### http://docs.ceph.com/en/latest/rados/configuration/general-config-ref/
 
     ;fsid                       = {UUID}    # use `uuidgen` to generate your own UUID
     ;public network             = 192.168.0.0/24
@@ -51,8 +51,8 @@
     ;max open files             = 131072
 
 
-### http://docs.ceph.com/docs/master/rados/operations/
-### http://docs.ceph.com/docs/master/rados/configuration/auth-config-ref/
+### http://docs.ceph.com/en/latest/rados/operations/
+### http://docs.ceph.com/en/latest/rados/configuration/auth-config-ref/
 
     # If enabled, the Ceph Storage Cluster daemons (i.e., ceph-mon, ceph-osd,
     # and ceph-mds) must authenticate with each other.
@@ -90,7 +90,7 @@
     ;keyring                  = /etc/ceph/$cluster.$name.keyring
 
 
-### http://docs.ceph.com/docs/master/rados/configuration/pool-pg-config-ref/
+### http://docs.ceph.com/en/latest/rados/configuration/pool-pg-config-ref/
 
 
     ## Replication level, number of data copies.
     ;osd crush chooseleaf type = 1
 
 
-### http://docs.ceph.com/docs/master/rados/troubleshooting/log-and-debug/
+### http://docs.ceph.com/en/latest/rados/troubleshooting/log-and-debug/
 
     # The location of the logging file for your cluster.
     # Type: String
     ;log to syslog              = true
 
 
-### http://docs.ceph.com/docs/master/rados/configuration/ms-ref/
+### http://docs.ceph.com/en/latest/rados/configuration/ms-ref/
 
     # Enable if you want your daemons to bind to IPv6 address instead of
     # IPv4 ones. (Not required if you specify a daemon or cluster IP.)
 ## You need at least one. You need at least three if you want to
 ## tolerate any node failures. Always create an odd number.
 [mon]
-### http://docs.ceph.com/docs/master/rados/configuration/mon-config-ref/
-### http://docs.ceph.com/docs/master/rados/configuration/mon-osd-interaction/
+### http://docs.ceph.com/en/latest/rados/configuration/mon-config-ref/
+### http://docs.ceph.com/en/latest/rados/configuration/mon-osd-interaction/
 
     # The IDs of initial monitors in a cluster during startup.
     # If specified, Ceph requires an odd number of monitors to form an
     # (Default: 900)
     ;mon osd report timeout          = 300
 
-### http://docs.ceph.com/docs/master/rados/troubleshooting/log-and-debug/
+### http://docs.ceph.com/en/latest/rados/troubleshooting/log-and-debug/
 
     # logging, for debugging monitor crashes, in order of
     # their likelihood of being helpful :)
 # experimental support for running multiple metadata servers. Do not run
 # multiple metadata servers in production.
 [mds]
-### http://docs.ceph.com/docs/master/cephfs/mds-config-ref/
+### http://docs.ceph.com/en/latest/cephfs/mds-config-ref/
 
     # where the mds keeps it's secret encryption keys
     ;keyring                    = /var/lib/ceph/mds/$name/keyring
 # You need at least one.  Two or more if you want data to be replicated.
 # Define as many as you like.
 [osd]
-### http://docs.ceph.com/docs/master/rados/configuration/osd-config-ref/
+### http://docs.ceph.com/en/latest/rados/configuration/osd-config-ref/
 
     # The path to the OSDs data.
     # You must create the directory when deploying Ceph.
     # (Default: false)
     ;osd check for log corruption = true
 
-### http://docs.ceph.com/docs/master/rados/configuration/journal-ref/
+### http://docs.ceph.com/en/latest/rados/configuration/journal-ref/
 
     # The size of the journal in megabytes. If this is 0,
     # and the journal is a block device, the entire block device is used.
     ;debug filestore              = 20
     ;debug journal                = 20
 
-### http://docs.ceph.com/docs/master/rados/configuration/filestore-config-ref/
+### http://docs.ceph.com/en/latest/rados/configuration/filestore-config-ref/
 
     # The maximum interval in seconds for synchronizing the filestore.
     # Type: Double (optional)
 
     ## Filestore and OSD settings can be tweak to achieve better performance
 
-### http://docs.ceph.com/docs/master/rados/configuration/filestore-config-ref/#misc
+### http://docs.ceph.com/en/latest/rados/configuration/filestore-config-ref/#misc
 
     # Min number of files in a subdir before merging into parent NOTE: A negative value means to disable subdir merging
     # Type: Integer
 ## client settings
 [client]
 
-### http://docs.ceph.com/docs/master/rbd/rbd-config-ref/
+### http://docs.ceph.com/en/latest/rbd/rbd-config-ref/
 
     # Enable caching for RADOS Block Device (RBD).
     # Type: Boolean
 ## radosgw client settings
 [client.radosgw.gateway]
 
-### http://docs.ceph.com/docs/master/radosgw/config-ref/
+### http://docs.ceph.com/en/latest/radosgw/config-ref/
 
     # Sets the location of the data files for Ceph Object Gateway.
     # You must create the directory when deploying Ceph.