From f2dd6666a986fd06741d154753231d67aab2ced7 Mon Sep 17 00:00:00 2001 From: Ville Ojamo <14869000+bluikko@users.noreply.github.com> Date: Mon, 16 Mar 2026 16:39:16 +0700 Subject: [PATCH] doc/cephadm: Fix more spelling errors And other such minor obvious issues, including a spelling error introduced in the previous commit 2565579caa1a118e9032283b55e969f9badcd6b6 Signed-off-by: Ville Ojamo --- doc/cephadm/host-management.rst | 2 +- doc/cephadm/install.rst | 6 ++-- doc/cephadm/operations.rst | 2 +- doc/cephadm/services/index.rst | 6 ++-- doc/cephadm/services/iscsi.rst | 2 +- doc/cephadm/services/mds.rst | 2 +- doc/cephadm/services/mgmt-gateway.rst | 22 ++++++------ doc/cephadm/services/mon.rst | 50 +++++++++++++-------------- doc/cephadm/services/monitoring.rst | 34 +++++++++--------- doc/cephadm/services/nfs.rst | 8 ++--- doc/cephadm/services/oauth2-proxy.rst | 6 ++-- doc/cephadm/services/osd.rst | 2 +- doc/cephadm/services/rgw.rst | 8 ++--- doc/cephadm/services/smb.rst | 36 +++++++++---------- doc/cephadm/services/snmp-gateway.rst | 8 ++--- doc/cephadm/services/tracing.rst | 10 +++--- doc/cephadm/troubleshooting.rst | 2 +- doc/cephadm/upgrade.rst | 2 +- 18 files changed, 104 insertions(+), 104 deletions(-) diff --git a/doc/cephadm/host-management.rst b/doc/cephadm/host-management.rst index fc60681fd9a..3f15fb8fce8 100644 --- a/doc/cephadm/host-management.rst +++ b/doc/cephadm/host-management.rst @@ -171,7 +171,7 @@ Host Labels =========== The orchestrator supports assigning labels to hosts. Labels -are free-form and have no particular meaning by themsevels. Each host +are free-form and have no particular meaning by themselves. Each host can have multiple labels. They can be used to specify the placement of daemons. For more information, see :ref:`orch-placement-by-labels`. 
diff --git a/doc/cephadm/install.rst b/doc/cephadm/install.rst index e65b1644b5f..971994e79e7 100644 --- a/doc/cephadm/install.rst +++ b/doc/cephadm/install.rst @@ -451,8 +451,8 @@ of two kinds of custom container registry can be used in this scenario: (1) a Podman-based or Docker-based insecure registry, or (2) a secure registry. The practice of installing software on systems that are not connected directly -to the internet is called "airgapping" and registries that are not connected -directly to the internet are referred to as "airgapped". +to the Internet is called "airgapping" and registries that are not connected +directly to the Internet are referred to as "airgapped". Make sure that your container image is inside the registry. Make sure that you have access to all hosts that you plan to add to the cluster. @@ -488,7 +488,7 @@ have access to all hosts that you plan to add to the cluster. mgr/cephadm/container_image_prometheus = :5000/prometheus mgr/cephadm/container_image_node_exporter = :5000/node_exporter mgr/cephadm/container_image_grafana = :5000/grafana - mgr/cephadm/container_image_alertmanager = :5000/alertmanger + mgr/cephadm/container_image_alertmanager = :5000/alertmanager EOF #. Run bootstrap using the temporary configuration file and pass the name of your diff --git a/doc/cephadm/operations.rst b/doc/cephadm/operations.rst index 6fb4c796e1c..e941399d5f6 100644 --- a/doc/cephadm/operations.rst +++ b/doc/cephadm/operations.rst @@ -258,7 +258,7 @@ Data Location ============= Cephadm stores daemon data and logs in different locations than did -older, pre-cephadm (pre Octopus) versions of Ceph: +older, pre-cephadm (pre-Octopus) versions of Ceph: * ``/var/log/ceph/`` contains all cluster logs. By default, cephadm logs via stderr and the container runtime. 
These diff --git a/doc/cephadm/services/index.rst b/doc/cephadm/services/index.rst index d38d3555c22..dfe737f83d2 100644 --- a/doc/cephadm/services/index.rst +++ b/doc/cephadm/services/index.rst @@ -42,13 +42,13 @@ type, use the optional ``--type`` parameter .. prompt:: bash # - ceph orch ls [--service_type type] [--service_name name] [--export] [--format f] [--refresh] + ceph orch ls [--service_type type] [--service_name name] [--export] [--format f] [--refresh] Discover the status of a particular service or daemon: .. prompt:: bash # - ceph orch ls --service_type type --service_name [--refresh] + ceph orch ls --service_type type --service_name [--refresh] To export the service specifications known to the orchestrator, run the following command: @@ -819,7 +819,7 @@ In order to fully remove a service, see :ref:`orch-rm`. Disabling Automatic Management of Daemons ----------------------------------------- -To disable the automatic management of dameons, set ``unmanaged=True`` in the +To disable the automatic management of daemons, set ``unmanaged=True`` in the :ref:`orchestrator-cli-service-spec` (``mgr.yaml``). ``mgr.yaml``: diff --git a/doc/cephadm/services/iscsi.rst b/doc/cephadm/services/iscsi.rst index bb2de792150..ad3e4903691 100644 --- a/doc/cephadm/services/iscsi.rst +++ b/doc/cephadm/services/iscsi.rst @@ -7,7 +7,7 @@ iSCSI Service Deploying iSCSI =============== -To deploy an iSCSI gateway, create a yaml file containing a +To deploy an iSCSI gateway, create a YAML file containing a service specification for iscsi: .. code-block:: yaml diff --git a/doc/cephadm/services/mds.rst b/doc/cephadm/services/mds.rst index 96b7c2dda78..7a38fd7a48e 100644 --- a/doc/cephadm/services/mds.rst +++ b/doc/cephadm/services/mds.rst @@ -8,7 +8,7 @@ MDS Service Deploy CephFS ============= -One or more MDS daemons is required to use the :term:`CephFS` file system. +One or more MDS daemons are required to use the :term:`CephFS` file system. 
These are created automatically if the newer ``ceph fs volume`` interface is used to create a new file system. For more information, see :ref:`fs-volumes-and-subvolumes`. diff --git a/doc/cephadm/services/mgmt-gateway.rst b/doc/cephadm/services/mgmt-gateway.rst index f5a0b6b1321..bb33f8cfcbb 100644 --- a/doc/cephadm/services/mgmt-gateway.rst +++ b/doc/cephadm/services/mgmt-gateway.rst @@ -22,28 +22,28 @@ In order to deploy the ``mgmt-gateway`` service, use the following command: ceph orch apply mgmt-gateway [--placement ...] ... Once applied cephadm will reconfigure specific running daemons (such as monitoring) to run behind the -new created service. External access to those services will not be possible anymore. Access will be +newly created service. External access to those services will not be possible anymore. Access will be consolidated behind the new service endpoint: ``https://:``. Benefits of the mgmt-gateway service ==================================== -* ``Unified Access``: Consolidated access through nginx improves security and provide a single entry point to services. -* ``Improved user experience``: User no longer need to know where each application is running (ip/host). +* ``Unified Access``: Consolidated access through nginx improves security and provides a single entry point to services. +* ``Improved user experience``: Users no longer need to know where each application is running (IP/host). * ``High Availability for dashboard``: nginx HA mechanisms are used to provide high availability for the Ceph dashboard. * ``High Availability for monitoring``: nginx HA mechanisms are used to provide high availability for monitoring. Security enhancements ===================== -Once the ``mgmt-gateway`` service is deployed user cannot access monitoring services without authentication through the +Once the ``mgmt-gateway`` service is deployed, users cannot access monitoring services without authentication through the Ceph dashboard. 
High availability enhancements ============================== nginx HA mechanisms are used to provide high availability for all the Ceph management applications including the Ceph dashboard -and monitoring stack. In case of the Ceph dashboard user no longer need to know where the active manager is running. +and monitoring stack. In the case of the Ceph dashboard, users no longer need to know where the active manager is running. ``mgmt-gateway`` handles manager failover transparently and redirects the user to the active manager. In case of the monitoring ``mgmt-gateway`` takes care of handling HA when several instances of Prometheus, Alertmanager or Grafana are available. The reverse proxy will automatically detect healthy instances and use them to process user requests. @@ -58,7 +58,7 @@ even if certain core components for the service fail, including the ``mgmt-gatew Multiple ``mgmt-gateway`` instances can be deployed in an active/standby configuration using keepalived for seamless failover. The ``oauth2-proxy`` service can be deployed as multiple stateless instances, -with nginx acting as a load balancer across them using round-robin strategy. This setup removes +with nginx acting as a load balancer across them using a round-robin strategy. This setup removes single points of failure and enhances the resilience of the entire system. In this setup, the underlying internal services follow the same high availability mechanism. Instead of @@ -66,13 +66,13 @@ directly accessing the ``mgmt-gateway`` internal endpoint, services use the virt This ensures that the high availability mechanism for ``mgmt-gateway`` is transparent to other services. The simplest and recommended way to deploy the ``mgmt-gateway`` in high availability mode is by using labels. To -run the ``mgmt-gateway`` in HA mode users can either use the cephadm command line as follows: +run the ``mgmt-gateway`` in HA mode, users can either use the cephadm command line as follows: ..
prompt:: bash # ceph orch apply mgmt-gateway --virtual_ip 192.168.100.220 --enable-auth=true --placement="label:mgmt" -Or provide specification files as following: +Or provide specification files as follows: ``mgmt-gateway`` configuration: @@ -109,7 +109,7 @@ the ``mgmt-gateway`` daemons are replicated to the corresponding keepalived inst Accessing services with mgmt-gateway ==================================== -Once the ``mgmt-gateway`` service is deployed direct access to the monitoring services will not be allowed anymore. +Once the ``mgmt-gateway`` service is deployed, direct access to the monitoring services will not be allowed anymore. Applications including: Prometheus, Grafana and Alertmanager are now accessible through links from ``Administration > Services``. @@ -194,8 +194,8 @@ The ``mgmt-gateway`` service internally makes use of nginx reverse proxy. The fo mgr/cephadm/container_image_nginx = 'quay.io/ceph/nginx:sclorg-nginx-126' -Admins can specify the image to be used by changing the ``container_image_nginx`` cephadm module option. If there were already -running daemon(s) you must redeploy the daemon(s) in order to have them actually use the new image. +Admins can specify the image to be used by changing the ``container_image_nginx`` cephadm module option. If there are already +running daemon(s), you must redeploy the daemon(s) in order to have them actually use the new image. For example: diff --git a/doc/cephadm/services/mon.rst b/doc/cephadm/services/mon.rst index ba695cea03b..d640507f971 100644 --- a/doc/cephadm/services/mon.rst +++ b/doc/cephadm/services/mon.rst @@ -4,28 +4,28 @@ MON Service .. _deploy_additional_monitors: -Deploying additional monitors +Deploying additional Monitors ============================= -A typical Ceph cluster has three or five monitor daemons that are spread -across different hosts. 
We recommend deploying five monitors if there are +A typical Ceph cluster has three or five Monitor daemons that are spread +across different hosts. We recommend deploying five Monitors if there are five or more nodes in your cluster. .. _CIDR: https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation -Ceph deploys monitor daemons automatically as the cluster grows and Ceph -scales back monitor daemons automatically as the cluster shrinks. The +Ceph deploys Monitor daemons automatically as the cluster grows and Ceph +scales back Monitor daemons automatically as the cluster shrinks. The smooth execution of this automatic growing and shrinking depends upon proper subnet configuration. -The cephadm bootstrap procedure assigns the first monitor daemon in the +The cephadm bootstrap procedure assigns the first Monitor daemon in the cluster to a particular subnet. ``cephadm`` designates that subnet as the -default subnet of the cluster. New monitor daemons will be assigned by +default subnet of the cluster. New Monitor daemons will be assigned by default to that subnet unless cephadm is instructed to do otherwise. -If all of the Ceph monitor daemons in your cluster are in the same subnet, -manual administration of the Ceph monitor daemons is not necessary. -``cephadm`` will automatically add up to five monitors to the subnet, as +If all of the Ceph Monitor daemons in your cluster are in the same subnet, +manual administration of the Ceph Monitor daemons is not necessary. +``cephadm`` will automatically add up to five Monitors to the subnet, as needed, as new hosts are added to the cluster. By default, cephadm will deploy 5 daemons on arbitrary hosts. See @@ -35,7 +35,7 @@ the placement of daemons. 
Designating a Particular Subnet for Monitors -------------------------------------------- -To designate a particular IP subnet for use by Ceph monitor daemons, use a +To designate a particular IP subnet for use by Ceph Monitor daemons, use a command of the following form, including the subnet's address in `CIDR`_ format (e.g., ``10.1.2.0/24``): @@ -49,7 +49,7 @@ format (e.g., ``10.1.2.0/24``): ceph config set mon public_network 10.1.2.0/24 -Cephadm deploys new monitor daemons only on hosts that have IP addresses in +Cephadm deploys new Monitor daemons only on hosts that have IP addresses in the designated subnet. You can also specify two public networks by using a list of networks: @@ -68,22 +68,22 @@ You can also specify two public networks by using a list of networks: Deploying Monitors on a Particular Network ------------------------------------------ -You can explicitly specify the IP address or CIDR network for each monitor and -control where each monitor is placed. To disable automated monitor deployment, +You can explicitly specify the IP address or CIDR network for each Monitor and +control where each Monitor is placed. To disable automated Monitor deployment, run this command: .. prompt:: bash # ceph orch apply mon --unmanaged - To deploy each additional monitor: + To deploy each additional Monitor: .. prompt:: bash # ceph orch daemon add mon - For example, to deploy a second monitor on ``newhost1`` using an IP - address ``10.1.2.123`` and a third monitor on ``newhost2`` in + For example, to deploy a second Monitor on ``newhost1`` using an IP + address ``10.1.2.123`` and a third Monitor on ``newhost2`` in network ``10.1.2.0/24``, run the following commands: .. prompt:: bash # @@ -92,7 +92,7 @@ run this command: ceph orch daemon add mon newhost1:10.1.2.123 ceph orch daemon add mon newhost2:10.1.2.0/24 - Now, enable automatic placement of Daemons + Now, enable automatic placement of daemons .. 
prompt:: bash # @@ -111,8 +111,8 @@ run this command: Moving Monitors to a Different Network -------------------------------------- -To move Monitors to a new network, deploy new monitors on the new network and -subsequently remove monitors from the old network. It is not advised to +To move Monitors to a new network, deploy new Monitors on the new network and +subsequently remove Monitors from the old network. It is not advised to modify and inject the ``monmap`` manually. First, disable the automated placement of daemons: @@ -121,14 +121,14 @@ First, disable the automated placement of daemons: ceph orch apply mon --unmanaged -To deploy each additional monitor: +To deploy each additional Monitor: .. prompt:: bash # ceph orch daemon add mon -For example, to deploy a second monitor on ``newhost1`` using an IP -address ``10.1.2.123`` and a third monitor on ``newhost2`` in +For example, to deploy a second Monitor on ``newhost1`` using an IP +address ``10.1.2.123`` and a third Monitor on ``newhost2`` in network ``10.1.2.0/24``, run the following commands: .. prompt:: bash # @@ -137,7 +137,7 @@ network ``10.1.2.0/24``, run the following commands: ceph orch daemon add mon newhost1:10.1.2.123 ceph orch daemon add mon newhost2:10.1.2.0/24 - Subsequently remove monitors from the old network: + Subsequently remove Monitors from the old network: .. prompt:: bash # @@ -155,7 +155,7 @@ network ``10.1.2.0/24``, run the following commands: ceph config set mon public_network 10.1.2.0/24 - Now, enable automatic placement of Daemons + Now, enable automatic placement of daemons .. prompt:: bash # diff --git a/doc/cephadm/services/monitoring.rst b/doc/cephadm/services/monitoring.rst index f88b28507c4..d89818cf9e1 100644 --- a/doc/cephadm/services/monitoring.rst +++ b/doc/cephadm/services/monitoring.rst @@ -18,8 +18,8 @@ metrics on cluster utilization and performance. 
Ceph users have three options: The monitoring stack consists of `Prometheus `_, Prometheus exporters (:ref:`mgr-prometheus`, `Node exporter -`_), `Prometheus Alert -Manager `_ and `Grafana +`_), `Prometheus +Alertmanager `_ and `Grafana `_. .. note:: @@ -42,10 +42,10 @@ Deploying Monitoring with Cephadm --------------------------------- The default behavior of ``cephadm`` is to deploy a basic monitoring stack. It -is however possible that you have a Ceph cluster without a monitoring stack, +is, however, possible that you have a Ceph cluster without a monitoring stack, and you would like to add a monitoring stack to it. (Here are some ways that you might have come to have a Ceph cluster without a monitoring stack: You -might have passed the ``--skip-monitoring stack`` option to ``cephadm`` during +might have passed the ``--skip-monitoring-stack`` option to ``cephadm`` during the installation of the cluster, or you might have converted an existing cluster (which had no monitoring stack) to cephadm management.) @@ -96,7 +96,7 @@ measures, set this option to ``true`` with a command of the following form: ceph config set mgr mgr/cephadm/secure_monitoring_stack true -This change will trigger a sequence of reconfigurations across all monitoring daemons, typically requiring +This change will trigger a sequence of reconfigurations across all monitoring daemons, typically requiring a few minutes until all components are fully operational. The updated secure configuration includes the following modifications: #. Prometheus: basic authentication is required to access the web portal and TLS is enabled for secure communication. @@ -105,10 +105,10 @@ few minutes until all components are fully operational. The updated secure confi #. Grafana: TLS is enabled and authentication is required to access the datasource information. #. Cephadm service discovery endpoint: basic authentication is required to access service discovery information, and TLS is enabled for secure communication. 
-In this secure setup, users will need to setup authentication +In this secure setup, users will need to set up authentication (username/password) for both Prometheus and Alertmanager. By default the username and password are set to ``admin``/``admin``. The user can change these -value with the commands ``ceph orch prometheus set-credentials`` and ``ceph +values with the commands ``ceph orch prometheus set-credentials`` and ``ceph orch alertmanager set-credentials`` respectively. These commands offer the flexibility to input the username/password either as parameters or via a JSON file, which enhances security. Additionally, Cephadm provides the commands @@ -199,7 +199,7 @@ https://lists.ceph.io/hyperkitty/list/ceph-users@ceph.io/thread/QGC66QIFBKRTPZAQ ``var/lib/ceph/{FSID}/cephadm.{DIGEST}``, where ``{DIGEST}`` is an alphanumeric string representing the currently-running version of Ceph. -To see the default container images, run below command: +To see the default container images, run the following command: .. prompt:: bash # @@ -442,14 +442,14 @@ Here's an example Prometheus job definition that uses the cephadm service discov .. code-block:: yaml - - job_name: 'ceph-exporter' - http_sd_configs: + - job_name: 'ceph-exporter' + http_sd_configs: - url: https://:8765/sd/prometheus/sd-config?service=ceph-exporter - basic_auth: - username: '' - password: '' - tls_config: - ca_file: '/path/to/ca.crt' + basic_auth: + username: '' + password: '' + tls_config: + ca_file: '/path/to/ca.crt' * To enable the dashboard's Prometheus-based alerting, see :ref:`dashboard-alerting`. @@ -516,7 +516,7 @@ Manually Setting the Grafana URL Cephadm automatically configures Prometheus, Grafana, and Alertmanager in all cases except one. -In a some setups, the Dashboard user's browser might not be able to access the +In some setups, the Dashboard user's browser might not be able to access the Grafana URL that is configured in Ceph Dashboard.
This can happen when the cluster and the accessing user are in different DNS zones. @@ -615,7 +615,7 @@ Turning off Anonymous Access ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ By default, cephadm allows anonymous users (users who have not provided any -login information) limited, viewer only access to the Grafana dashboard. In +login information) limited, viewer-only access to the Grafana dashboard. In order to set up Grafana to only allow viewing from logged in users, you can set ``anonymous_access: False`` in your Grafana spec. diff --git a/doc/cephadm/services/nfs.rst b/doc/cephadm/services/nfs.rst index d1d34677a86..2bb4b0eafd6 100644 --- a/doc/cephadm/services/nfs.rst +++ b/doc/cephadm/services/nfs.rst @@ -11,7 +11,7 @@ commands; see :ref:`mgr-nfs`. This document covers how to manage the cephadm services directly, which should only be necessary for unusual NFS configurations. -Deploying NFS ganesha +Deploying NFS Ganesha ===================== Cephadm deploys NFS Ganesha daemon (or set of daemons). The configuration for @@ -172,7 +172,7 @@ High-availability NFS Deploying an *ingress* service for an existing *nfs* service will provide: * a stable, virtual IP that can be used to access the NFS server -* fail-over between hosts if there is a host failure +* failover between hosts if there is a host failure * load distribution across multiple NFS gateways (although this is rarely necessary) Ingress for NFS can be deployed for an existing NFS service @@ -199,7 +199,7 @@ A few notes: property to match against IPs in other networks; see :ref:`ingress-virtual-ip` for more information. * The *monitor_port* is used to access the haproxy load status - page. The user is ``admin`` by default, but can be modified by + page. The user is ``admin`` by default, but can be modified via an *admin* property in the spec. 
If a password is not specified via a *password* property in the spec, the auto-generated password can be found with: @@ -222,7 +222,7 @@ NFS with virtual IP but no haproxy ---------------------------------- Cephadm also supports deploying nfs with keepalived but not haproxy. This -offers a virtual ip supported by keepalived that the nfs daemon can directly bind +offers a virtual IP supported by keepalived that the nfs daemon can directly bind to instead of having traffic go through haproxy. In this setup, you'll either want to set up the service using the nfs module diff --git a/doc/cephadm/services/oauth2-proxy.rst b/doc/cephadm/services/oauth2-proxy.rst index 9ede68aee46..515ba4410de 100644 --- a/doc/cephadm/services/oauth2-proxy.rst +++ b/doc/cephadm/services/oauth2-proxy.rst @@ -41,7 +41,7 @@ a secure and flexible authentication mechanism. High availability -============================== +================= In general, `oauth2-proxy` is used in conjunction with the `mgmt-gateway`. The `oauth2-proxy` service can be deployed as multiple stateless instances, with the `mgmt-gateway` (nginx reverse-proxy) handling load balancing across these instances using a round-robin strategy. Since oauth2-proxy integrates with an external identity provider (IDP), ensuring high availability for login is managed externally @@ -59,7 +59,7 @@ seamlessly with the Ceph management stack. Service Specification ===================== -Before deploying `oauth2-proxy` service please remember to deploy the `mgmt-gateway` service by turning on the `--enable_auth` flag. i.e: +Before deploying the `oauth2-proxy` service, please remember to deploy the `mgmt-gateway` service by turning on the ``--enable_auth`` flag, i.e.: .. prompt:: bash # @@ -104,7 +104,7 @@ project documentation. :members: The specification can then be applied by running the below command. 
Once becomes available, cephadm will automatically redeploy -the `mgmt-gateway` service while adapting its configuration to redirect the authentication to the newly deployed `oauth2-service`. +the `mgmt-gateway` service while adapting its configuration to redirect the authentication to the newly deployed `oauth2-proxy` service. .. prompt:: bash # diff --git a/doc/cephadm/services/osd.rst b/doc/cephadm/services/osd.rst index 71c3686b097..e6421cb120f 100644 --- a/doc/cephadm/services/osd.rst +++ b/doc/cephadm/services/osd.rst @@ -60,7 +60,7 @@ Example (Reef): .. warning:: Although the ``libstoragemgmt`` library issues standard SCSI (SES) inquiry calls, there is no guarantee that your hardware and firmware properly implement these standards. - This can lead to erratic behaviour and even bus resets on some older + This can lead to erratic behavior and even bus resets on some older hardware. It is therefore recommended that, before enabling this feature, you first test your hardware's compatibility with ``libstoragemgmt`` to avoid unplanned interruptions to services. diff --git a/doc/cephadm/services/rgw.rst b/doc/cephadm/services/rgw.rst index cc235200767..f95ea2e6421 100644 --- a/doc/cephadm/services/rgw.rst +++ b/doc/cephadm/services/rgw.rst @@ -12,7 +12,7 @@ single-cluster deployment or a particular *realm* and *zone* in a multisite deployment. (For more information about realms and zones, see :ref:`multisite`.) -Note that with ``cephadm``, ``radosgw`` daemons are configured via the monitor +Note that with ``cephadm``, ``radosgw`` daemons are configured via the Monitor configuration database instead of via a `ceph.conf` or the command line. If that configuration isn't already in place (usually in the ``client.rgw.`` section), then the ``radosgw`` @@ -501,7 +501,7 @@ where the properties of this service specification are: * ``use_keepalived_multicast`` Default is False. By default, cephadm will deploy keepalived config to use unicast IPs, using the IPs of the hosts. 
The IPs chosen will be the same IPs cephadm uses to connect - to the machines. But if multicast is prefered, we can set ``use_keepalived_multicast`` + to the machines. But if multicast is preferred, we can set ``use_keepalived_multicast`` to ``True`` and Keepalived will use multicast IP (224.0.0.18) to communicate between instances, using the same interfaces as where the VIPs are. * ``vrrp_interface_network`` @@ -535,10 +535,10 @@ where the properties of this service specification are: 'cephadm-signed'. This should have the key .pem format. * ``monitor_ip_addrs`` If ``monitor_ip_addrs`` is provided and the specified IP address is assigned to the host, - that IP address will be used. If IP address is not present, then 'monitor_networks' will be checked. + that IP address will be used. If an IP address is not present, then 'monitor_networks' will be checked. * ``monitor_networks`` If ``monitor_networks`` is specified, an IP address that matches one of the specified - networks will be used. If IP not present, then default host ip will be used. + networks will be used. If an IP address is not present, then the default host IP will be used. .. _ingress-virtual-ip: diff --git a/doc/cephadm/services/smb.rst b/doc/cephadm/services/smb.rst index 7b7e74fe933..ea7ae632f9e 100644 --- a/doc/cephadm/services/smb.rst +++ b/doc/cephadm/services/smb.rst @@ -7,7 +7,7 @@ SMB Service .. warning:: SMB support is under active development and many features may be - missing or immature. A Ceph MGR module, named smb, is available to help + missing or immature. A Ceph Manager module, named smb, is available to help organize and manage SMB-related features. Unless the smb module has been determined to be unsuitable for your needs we recommend using that module over directly using the smb service spec. @@ -19,20 +19,20 @@ Deploying Samba Containers Cephadm deploys `Samba `_ servers using container images built by the `samba-container project `_.
-In order to host SMB Shares with access to CephFS file systems, deploy -Samba Containers with the following command: +In order to host SMB shares with access to CephFS file systems, deploy +Samba containers with the following command: .. prompt:: bash # ceph orch apply smb [--features ...] [--placement ...] ... There are a number of additional parameters that the command accepts. See -the Service Specification for a description of these options. +the Service Specification section for a description of these options. Service Specification ===================== -An SMB Service can be applied using a specification. An example in YAML follows: +An SMB service can be applied using a specification. An example in YAML follows: .. code-block:: yaml @@ -63,7 +63,7 @@ The specification can then be applied by running the following command: Service Spec Options -------------------- -Fields specific to the ``spec`` section of the SMB Service are described below. +Fields specific to the ``spec`` section of the SMB service are described below. cluster_id A short name identifying the SMB "cluster". In this case a cluster is @@ -80,7 +80,7 @@ features config_uri A string containing a (standard or de-facto) URI that identifies a - configuration source that should be loaded by the samba-container as the + configuration source that should be loaded by the Samba container as the primary configuration file. Supported URI schemes include ``http:``, ``https:``, ``rados:``, and ``rados:mon-config-key:``. @@ -99,7 +99,7 @@ join_sources custom_dns A list of IP addresses that will be used as the DNS servers for a Samba - container. This features allows Samba Containers to integrate with + container. This feature allows Samba containers to integrate with Active Directory even if the Ceph host nodes are not tied into the Active Directory DNS domain(s). @@ -128,7 +128,7 @@ bind_addrs For example, ``192.168.7.0/24``. 
include_ceph_users - A list of cephx user (aka entity) names that the Samba Containers may use. + A list of cephx user (aka entity) names that the Samba containers may use. The cephx keys for each user in the list will automatically be added to the keyring in the container. @@ -175,7 +175,7 @@ cluster_public_addrs will deploy additional containers that manage this coordination. Additionally, the cluster_meta_uri and cluster_lock_uri values must be specified. The former is used by cephadm to describe the smb cluster layout - to the samba containers. The latter is used by Samba's CTDB component to + to the Samba containers. The latter is used by Samba's CTDB component to manage an internal cluster lock. @@ -189,22 +189,22 @@ Configuring an SMB Service in an end-to-end manner. The following discussion is provided for the sake of completeness and to explain how the software layers interact. -Creating an SMB Service spec is not sufficient for complete operation of a -Samba Container on Ceph. It is important to create valid configurations and +Creating an SMB service spec is not sufficient for complete operation of a +Samba container on Ceph. It is important to create valid configurations and place them in locations that the container can read. The complete specification of these configurations is out of scope for this document. You can refer to the `documentation for Samba `_ as -well as the `samba server container +well as the `Samba server container `_ and the `configuration file `_ it accepts. When one has composed a configuration it should be stored in a location -that the Samba Container can access. The recommended approach for running -Samba Containers within Ceph orchestration is to store the configuration +that the Samba container can access. The recommended approach for running +Samba containers within Ceph orchestration is to store the configuration in the Ceph cluster. 
There are a few ways to store the configuration -in ceph: +in Ceph: RADOS ~~~~~ @@ -254,7 +254,7 @@ HTTP/HTTPS ~~~~~~~~~~ A configuration file can be stored on an HTTP(S) server and automatically read -by the Samba Container. Managing a configuration file on HTTP(S) is left as an +by the Samba container. Managing a configuration file on HTTP(S) is left as an exercise for the reader. .. note:: All URI schemes are supported by parameters that accept URIs. Each @@ -270,6 +270,6 @@ A non-exhaustive list of important limitations for the SMB service follows: SMB service for domain membership, either the Ceph host node must be configured so that it can resolve the Active Directory (AD) domain or the ``custom_dns`` option may be used. In both cases DNS hosts for the AD domain - must still be reachable from whatever network segment the ceph cluster is on. + must still be reachable from whatever network segment the Ceph cluster is on. * Services must bind to TCP port 445. Running multiple SMB services on the same node is not yet supported and will trigger a port-in-use conflict. diff --git a/doc/cephadm/services/snmp-gateway.rst b/doc/cephadm/services/snmp-gateway.rst index 9bc572e140d..b31e9890cdb 100644 --- a/doc/cephadm/services/snmp-gateway.rst +++ b/doc/cephadm/services/snmp-gateway.rst @@ -4,7 +4,7 @@ SNMP Gateway Service `SNMP`_ is a widely used protocol for monitoring distributed systems and devices. Ceph's SNMP integration focuses on forwarding alerts from its Prometheus Alertmanager -cluster to a gateway daemon. The gateway daemon transforms the alert into an SNMP Notification and sends +cluster to a gateway daemon. The gateway daemon transforms the alert into an SNMP notification and sends it on to a designated SNMP management platform. The gateway daemon is from the `snmp_notifier`_ project, which provides SNMP V2c and V3 support (authentication and encryption). 
@@ -17,7 +17,7 @@ your SNMP management platform will receive multiple notifications for the same e Compatibility ============= -The table below shows the SNMP versions that are supported by the gateway implementation +The table below shows the SNMP versions that are supported by the gateway implementation. ================ =========== =============================================== SNMP Version Supported Notes @@ -75,7 +75,7 @@ with a credentials file that contains; --- snmp_community: public -Alternatively, you can create a yaml definition for the gateway and apply it from a single file +Alternatively, you can create a YAML definition for the gateway and apply it from a single file .. prompt:: bash # @@ -166,6 +166,6 @@ alert that has an `OID`_ label to the SNMP gateway daemon for processing. Implementing the MIB ====================== To make sense of SNMP notifications and traps, you'll need to apply the MIB to your SNMP management platform. The MIB (``CEPH-MIB.txt``) can -downloaded from the main Ceph GitHub `repository`_ +be downloaded from the main Ceph GitHub `repository`_. .. _repository: https://github.com/ceph/ceph/tree/master/monitoring/snmp diff --git a/doc/cephadm/services/tracing.rst b/doc/cephadm/services/tracing.rst index 5f7d2dc3d5a..dd7fa08a48f 100644 --- a/doc/cephadm/services/tracing.rst +++ b/doc/cephadm/services/tracing.rst @@ -23,20 +23,20 @@ Jaeger tracing consists of 3 services: #. Jaeger Collector #. Jaeger Query -Jaeger requires a database for the traces. We use ElasticSearch (version 6) +Jaeger requires a database for the traces. We use Elasticsearch (version 6) by default. -To deploy Jaeger services when not using your own ElasticSearch (deploys -all 3 services with a new ElasticSearch container): +To deploy Jaeger services when not using your own Elasticsearch (deploys +all 3 services with a new Elasticsearch container): .. 
prompt:: bash # ceph orch apply jaeger -To deploy Jaeger services with an existing ElasticSearch cluster and +To deploy Jaeger services with an existing Elasticsearch cluster and an existing Jaeger query (deploys agents and collectors only): .. prompt:: bash # - ceph orch apply jaeger --without-query --es_nodes=ip:port,.. + ceph orch apply jaeger --without-query --es_nodes=ip:port,... diff --git a/doc/cephadm/troubleshooting.rst b/doc/cephadm/troubleshooting.rst index 57301f29e8f..09e554e4733 100644 --- a/doc/cephadm/troubleshooting.rst +++ b/doc/cephadm/troubleshooting.rst @@ -573,7 +573,7 @@ generate a script that can debug a process in a running container. This creates a script that includes the container command that ``cephadm`` would use to create a shell. Modify the script by removing the ``--init`` argument and replace it with the argument that joins to the namespace used for -a running running container. For example, assume we want to debug the Manager +a running container. For example, assume we want to debug the Manager and have determined that the Manager is running in a container named ``ceph-bc615290-685b-11ee-84a6-525400220000-mgr-ceph0-sluwsk``. In this case, the argument diff --git a/doc/cephadm/upgrade.rst b/doc/cephadm/upgrade.rst index 4681ff63432..003e0ea1533 100644 --- a/doc/cephadm/upgrade.rst +++ b/doc/cephadm/upgrade.rst @@ -214,7 +214,7 @@ following command: This alert (``UPGRADE_FAILED_PULL``) means that Ceph was unable to pull the container image for the target version. This can happen if you specify a version or container image that does not exist (e.g. "1.2.3"), or if the -container registry can not be reached by one or more hosts in the cluster. +container registry cannot be reached by one or more hosts in the cluster. To cancel the existing upgrade and to specify a different target version, run the following commands: -- 2.47.3