===========
The orchestrator supports assigning labels to hosts. Labels
-are free-form and have no particular meaning by themsevels. Each host
+are free-form and have no particular meaning by themselves. Each host
can have multiple labels. They can be used to specify the placement
of daemons. For more information, see :ref:`orch-placement-by-labels`.
Podman-based or Docker-based insecure registry, or (2) a secure registry.
The practice of installing software on systems that are not connected directly
-to the internet is called "airgapping" and registries that are not connected
-directly to the internet are referred to as "airgapped".
+to the Internet is called "airgapping" and registries that are not connected
+directly to the Internet are referred to as "airgapped".
Make sure that your container image is inside the registry. Make sure that you
have access to all hosts that you plan to add to the cluster.
mgr/cephadm/container_image_prometheus = <hostname>:5000/prometheus
mgr/cephadm/container_image_node_exporter = <hostname>:5000/node_exporter
mgr/cephadm/container_image_grafana = <hostname>:5000/grafana
- mgr/cephadm/container_image_alertmanager = <hostname>:5000/alertmanger
+ mgr/cephadm/container_image_alertmanager = <hostname>:5000/alertmanager
EOF
#. Run bootstrap using the temporary configuration file and pass the name of your
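   As a sketch, a bootstrap invocation against a local registry might look like the
   following; the image path, monitor IP, and configuration file name are placeholder
   values for your environment:

   .. prompt:: bash #

      cephadm --image <hostname>:5000/ceph/ceph bootstrap --mon-ip <mon-ip> --config initial-ceph.conf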
=============
Cephadm stores daemon data and logs in different locations than did
-older, pre-cephadm (pre Octopus) versions of Ceph:
+older, pre-cephadm (pre-Octopus) versions of Ceph:
* ``/var/log/ceph/<cluster-fsid>`` contains all cluster logs. By
default, cephadm logs via stderr and the container runtime. These
.. prompt:: bash #
- ceph orch ls [--service_type type] [--service_name name] [--export] [--format f] [--refresh]
+ ceph orch ls [--service_type type] [--service_name name] [--export] [--format f] [--refresh]
Discover the status of a particular service or daemon:
.. prompt:: bash #
- ceph orch ls --service_type type --service_name <name> [--refresh]
+ ceph orch ls --service_type type --service_name <name> [--refresh]
To export the service specifications known to the orchestrator, run the following command:
Disabling Automatic Management of Daemons
-----------------------------------------
-To disable the automatic management of dameons, set ``unmanaged=True`` in the
+To disable the automatic management of daemons, set ``unmanaged=True`` in the
:ref:`orchestrator-cli-service-spec` (``mgr.yaml``).
``mgr.yaml``:
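As a sketch, such an ``mgr.yaml`` might look like the following; the label value is a
placeholder for whatever placement your cluster uses:

.. code-block:: yaml

   service_type: mgr
   unmanaged: true
   placement:
     label: mgr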
Deploying iSCSI
===============
-To deploy an iSCSI gateway, create a yaml file containing a
+To deploy an iSCSI gateway, create a YAML file containing a
service specification for iscsi:
.. code-block:: yaml
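
   # Hypothetical example spec; the pool, host, and credential values
   # are placeholders for your environment.
   service_type: iscsi
   service_id: iscsi
   placement:
     hosts:
       - host1
   spec:
     pool: iscsi_pool
     trusted_ip_list: "192.168.0.10,192.168.0.11"
     api_user: admin
     api_password: secret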
Deploy CephFS
=============
-One or more MDS daemons is required to use the :term:`CephFS` file system.
+One or more MDS daemons are required to use the :term:`CephFS` file system.
These are created automatically if the newer ``ceph fs volume``
interface is used to create a new file system. For more information,
see :ref:`fs-volumes-and-subvolumes`.
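For example, creating a volume (which also deploys the required MDS daemons) can be
done with a command of the following form; the file system name and placement
specification are placeholders:

.. prompt:: bash #

   ceph fs volume create <fs_name> --placement="<placement spec>"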
ceph orch apply mgmt-gateway [--placement ...] ...
Once applied, cephadm will reconfigure specific running daemons (such as monitoring) to run behind the
-new created service. External access to those services will not be possible anymore. Access will be
+newly created service. External access to those services will not be possible anymore. Access will be
consolidated behind the new service endpoint: ``https://<node-ip>:<port>``.
Benefits of the mgmt-gateway service
====================================
-* ``Unified Access``: Consolidated access through nginx improves security and provide a single entry point to services.
-* ``Improved user experience``: User no longer need to know where each application is running (ip/host).
+* ``Unified Access``: Consolidated access through nginx improves security and provides a single entry point to services.
+* ``Improved user experience``: Users no longer need to know where each application is running (IP/host).
* ``High Availability for dashboard``: nginx HA mechanisms are used to provide high availability for the Ceph dashboard.
* ``High Availability for monitoring``: nginx HA mechanisms are used to provide high availability for monitoring.
Security enhancements
=====================
-Once the ``mgmt-gateway`` service is deployed user cannot access monitoring services without authentication through the
+Once the ``mgmt-gateway`` service is deployed, users cannot access monitoring services without authentication through the
Ceph dashboard.
High availability enhancements
==============================
nginx HA mechanisms are used to provide high availability for all the Ceph management applications including the Ceph dashboard
-and monitoring stack. In case of the Ceph dashboard user no longer need to know where the active manager is running.
+and monitoring stack. In case of the Ceph dashboard, users no longer need to know where the active manager is running.
``mgmt-gateway`` handles manager failover transparently and redirects the user to the active manager. In the case of
monitoring, ``mgmt-gateway`` takes care of handling HA when several instances of Prometheus, Alertmanager, or Grafana are
available. The reverse proxy will automatically detect healthy instances and use them to process user requests.
Multiple ``mgmt-gateway`` instances can be deployed in an active/standby configuration using keepalived
for seamless failover. The ``oauth2-proxy`` service can be deployed as multiple stateless instances,
-with nginx acting as a load balancer across them using round-robin strategy. This setup removes
+with nginx acting as a load balancer across them using a round-robin strategy. This setup removes
single points of failure and enhances the resilience of the entire system.
In this setup, the underlying internal services follow the same high availability mechanism. Instead of
This ensures that the high availability mechanism for ``mgmt-gateway`` is transparent to other services.
The simplest and recommended way to deploy the ``mgmt-gateway`` in high availability mode is by using labels. To
-run the ``mgmt-gateway`` in HA mode users can either use the cephadm command line as follows:
+run the ``mgmt-gateway`` in HA mode, users can either use the cephadm command line as follows:
.. prompt:: bash #
ceph orch apply mgmt-gateway --virtual_ip 192.168.100.220 --enable-auth=true --placement="label:mgmt"
-Or provide specification files as following:
+Or provide specification files as follows:
``mgmt-gateway`` configuration:
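A sketch of a corresponding specification file, mirroring the CLI flags shown above;
the virtual IP and label are placeholder values:

.. code-block:: yaml

   service_type: mgmt-gateway
   placement:
     label: mgmt
   spec:
     enable_auth: true
     virtual_ip: 192.168.100.220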
Accessing services with mgmt-gateway
====================================
-Once the ``mgmt-gateway`` service is deployed direct access to the monitoring services will not be allowed anymore.
+Once the ``mgmt-gateway`` service is deployed, direct access to the monitoring services will not be allowed anymore.
Applications such as Prometheus, Grafana, and Alertmanager are now accessible through links
from ``Administration > Services``.
mgr/cephadm/container_image_nginx = 'quay.io/ceph/nginx:sclorg-nginx-126'
-Admins can specify the image to be used by changing the ``container_image_nginx`` cephadm module option. If there were already
-running daemon(s) you must redeploy the daemon(s) in order to have them actually use the new image.
+Admins can specify the image to be used by changing the ``container_image_nginx`` cephadm module option. If there are already
+running daemon(s), you must redeploy the daemon(s) in order to have them actually use the new image.
For example:
.. _deploy_additional_monitors:
-Deploying additional monitors
+Deploying additional Monitors
=============================
-A typical Ceph cluster has three or five monitor daemons that are spread
-across different hosts. We recommend deploying five monitors if there are
+A typical Ceph cluster has three or five Monitor daemons that are spread
+across different hosts. We recommend deploying five Monitors if there are
five or more nodes in your cluster.
.. _CIDR: https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing#CIDR_notation
-Ceph deploys monitor daemons automatically as the cluster grows and Ceph
-scales back monitor daemons automatically as the cluster shrinks. The
+Ceph deploys Monitor daemons automatically as the cluster grows and Ceph
+scales back Monitor daemons automatically as the cluster shrinks. The
smooth execution of this automatic growing and shrinking depends upon
proper subnet configuration.
-The cephadm bootstrap procedure assigns the first monitor daemon in the
+The cephadm bootstrap procedure assigns the first Monitor daemon in the
cluster to a particular subnet. ``cephadm`` designates that subnet as the
-default subnet of the cluster. New monitor daemons will be assigned by
+default subnet of the cluster. New Monitor daemons will be assigned by
default to that subnet unless cephadm is instructed to do otherwise.
-If all of the Ceph monitor daemons in your cluster are in the same subnet,
-manual administration of the Ceph monitor daemons is not necessary.
-``cephadm`` will automatically add up to five monitors to the subnet, as
+If all of the Ceph Monitor daemons in your cluster are in the same subnet,
+manual administration of the Ceph Monitor daemons is not necessary.
+``cephadm`` will automatically add up to five Monitors to the subnet, as
needed, as new hosts are added to the cluster.
By default, cephadm will deploy five daemons on arbitrary hosts. See
Designating a Particular Subnet for Monitors
--------------------------------------------
-To designate a particular IP subnet for use by Ceph monitor daemons, use a
+To designate a particular IP subnet for use by Ceph Monitor daemons, use a
command of the following form, including the subnet's address in `CIDR`_
format (e.g., ``10.1.2.0/24``):
ceph config set mon public_network 10.1.2.0/24
-Cephadm deploys new monitor daemons only on hosts that have IP addresses in
+Cephadm deploys new Monitor daemons only on hosts that have IP addresses in
the designated subnet.
You can also specify two public networks by using a list of networks:
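For example, with placeholder subnets:

.. prompt:: bash #

   ceph config set mon public_network "10.1.2.0/24,192.168.0.1/32"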
Deploying Monitors on a Particular Network
------------------------------------------
-You can explicitly specify the IP address or CIDR network for each monitor and
-control where each monitor is placed. To disable automated monitor deployment,
+You can explicitly specify the IP address or CIDR network for each Monitor and
+control where each Monitor is placed. To disable automated Monitor deployment,
run this command:
.. prompt:: bash #
ceph orch apply mon --unmanaged
- To deploy each additional monitor:
+ To deploy each additional Monitor:
.. prompt:: bash #
ceph orch daemon add mon <host1:ip-or-network1>
- For example, to deploy a second monitor on ``newhost1`` using an IP
- address ``10.1.2.123`` and a third monitor on ``newhost2`` in
+ For example, to deploy a second Monitor on ``newhost1`` using an IP
+ address ``10.1.2.123`` and a third Monitor on ``newhost2`` in
network ``10.1.2.0/24``, run the following commands:
.. prompt:: bash #
ceph orch daemon add mon newhost1:10.1.2.123
ceph orch daemon add mon newhost2:10.1.2.0/24
- Now, enable automatic placement of Daemons
+ Now, enable automatic placement of daemons:
.. prompt:: bash #
Moving Monitors to a Different Network
--------------------------------------
-To move Monitors to a new network, deploy new monitors on the new network and
-subsequently remove monitors from the old network. It is not advised to
+To move Monitors to a new network, deploy new Monitors on the new network and
+subsequently remove Monitors from the old network. It is not advised to
modify and inject the ``monmap`` manually.
First, disable the automated placement of daemons:
ceph orch apply mon --unmanaged
-To deploy each additional monitor:
+To deploy each additional Monitor:
.. prompt:: bash #
ceph orch daemon add mon <newhost1:ip-or-network1>
-For example, to deploy a second monitor on ``newhost1`` using an IP
-address ``10.1.2.123`` and a third monitor on ``newhost2`` in
+For example, to deploy a second Monitor on ``newhost1`` using an IP
+address ``10.1.2.123`` and a third Monitor on ``newhost2`` in
network ``10.1.2.0/24``, run the following commands:
.. prompt:: bash #
ceph orch daemon add mon newhost1:10.1.2.123
ceph orch daemon add mon newhost2:10.1.2.0/24
- Subsequently remove monitors from the old network:
+ Subsequently remove Monitors from the old network:
.. prompt:: bash #
ceph config set mon public_network 10.1.2.0/24
- Now, enable automatic placement of Daemons
+ Now, enable automatic placement of daemons:
.. prompt:: bash #
The monitoring stack consists of `Prometheus <https://prometheus.io/>`_,
Prometheus exporters (:ref:`mgr-prometheus`, `Node exporter
-<https://prometheus.io/docs/guides/node-exporter/>`_), `Prometheus Alert
-Manager <https://prometheus.io/docs/alerting/alertmanager/>`_ and `Grafana
+<https://prometheus.io/docs/guides/node-exporter/>`_), `Prometheus
+Alertmanager <https://prometheus.io/docs/alerting/alertmanager/>`_ and `Grafana
<https://grafana.com/>`_.
.. note::
---------------------------------
The default behavior of ``cephadm`` is to deploy a basic monitoring stack. It
-is however possible that you have a Ceph cluster without a monitoring stack,
+is, however, possible that you have a Ceph cluster without a monitoring stack,
and you would like to add a monitoring stack to it. (Here are some ways that
you might have come to have a Ceph cluster without a monitoring stack: You
-might have passed the ``--skip-monitoring stack`` option to ``cephadm`` during
+might have passed the ``--skip-monitoring-stack`` option to ``cephadm`` during
the installation of the cluster, or you might have converted an existing
cluster (which had no monitoring stack) to cephadm management.)
ceph config set mgr mgr/cephadm/secure_monitoring_stack true
-This change will trigger a sequence of reconfigurations across all monitoring daemons, typically requiring
+This change will trigger a sequence of reconfigurations across all monitoring daemons, typically requiring a
few minutes until all components are fully operational. The updated secure configuration includes the following modifications:
#. Prometheus: basic authentication is required to access the web portal and TLS is enabled for secure communication.
#. Grafana: TLS is enabled and authentication is required to access the datasource information.
#. Cephadm service discovery endpoint: basic authentication is required to access service discovery information, and TLS is enabled for secure communication.
-In this secure setup, users will need to setup authentication
+In this secure setup, users will need to set up authentication
(username/password) for both Prometheus and Alertmanager. By default, the
username and password are set to ``admin``/``admin``. The user can change these
-value with the commands ``ceph orch prometheus set-credentials`` and ``ceph
+values with the commands ``ceph orch prometheus set-credentials`` and ``ceph
orch alertmanager set-credentials`` respectively. These commands offer the
flexibility to input the username/password either as parameters or via a JSON
file, which enhances security. Additionally, Cephadm provides the commands
``var/lib/ceph/{FSID}/cephadm.{DIGEST}``, where ``{DIGEST}`` is an alphanumeric
string representing the currently-running version of Ceph.
-To see the default container images, run below command:
+To see the default container images, run the following command:
.. prompt:: bash #
.. code-block:: yaml
- - job_name: 'ceph-exporter'
- http_sd_configs:
+ - job_name: 'ceph-exporter'
+ http_sd_configs:
- url: https://<mgr-ip>:8765/sd/prometheus/sd-config?service=ceph-exporter
- basic_auth:
- username: '<username>'
- password: '<password>'
- tls_config:
- ca_file: '/path/to/ca.crt'
+ basic_auth:
+ username: '<username>'
+ password: '<password>'
+ tls_config:
+ ca_file: '/path/to/ca.crt'
* To enable the dashboard's Prometheus-based alerting, see :ref:`dashboard-alerting`.
Cephadm automatically configures Prometheus, Grafana, and Alertmanager in
all cases except one.
-In a some setups, the Dashboard user's browser might not be able to access the
+In some setups, the Dashboard user's browser might not be able to access the
Grafana URL that is configured in Ceph Dashboard. This can happen when the
cluster and the accessing user are in different DNS zones.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
By default, cephadm allows anonymous users (users who have not provided any
-login information) limited, viewer only access to the Grafana dashboard. In
+login information) limited, viewer-only access to the Grafana dashboard. In
order to set up Grafana to allow viewing only by logged-in users, you can
set ``anonymous_access: False`` in your Grafana spec.
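A sketch of such a Grafana spec follows; the password value is a placeholder, and note
that when anonymous access is disabled an initial admin password must be supplied:

.. code-block:: yaml

   service_type: grafana
   placement:
     count: 1
   spec:
     anonymous_access: false
     initial_admin_password: mypassword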
cephadm services directly, which should only be necessary for unusual NFS
configurations.
-Deploying NFS ganesha
+Deploying NFS Ganesha
=====================
Cephadm deploys an NFS Ganesha daemon (or a set of daemons). The configuration for
Deploying an *ingress* service for an existing *nfs* service will provide:
* a stable, virtual IP that can be used to access the NFS server
-* fail-over between hosts if there is a host failure
+* failover between hosts if there is a host failure
* load distribution across multiple NFS gateways (although this is rarely necessary)
Ingress for NFS can be deployed for an existing NFS service
property to match against IPs in other networks; see
:ref:`ingress-virtual-ip` for more information.
* The *monitor_port* is used to access the haproxy load status
- page. The user is ``admin`` by default, but can be modified by
+ page. The user is ``admin`` by default, but can be modified
via an *admin* property in the spec. If a password is not
specified via a *password* property in the spec, the auto-generated password
can be found with:
----------------------------------
Cephadm also supports deploying nfs with keepalived but not haproxy. This
-offers a virtual ip supported by keepalived that the nfs daemon can directly bind
+offers a virtual IP supported by keepalived that the nfs daemon can directly bind
to instead of having traffic go through haproxy.
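A sketch of an *ingress* spec for this keepalived-only mode; the service id, port, and
virtual IP are placeholder values:

.. code-block:: yaml

   service_type: ingress
   service_id: nfs.foo
   placement:
     count: 1
   spec:
     backend_service: nfs.foo
     monitor_port: 9049
     virtual_ip: 192.168.122.100/24
     keepalive_only: true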
In this setup, you'll either want to set up the service using the nfs module
High availability
-==============================
+=================
In general, `oauth2-proxy` is used in conjunction with the `mgmt-gateway`. The `oauth2-proxy` service can be deployed as multiple
stateless instances, with the `mgmt-gateway` (nginx reverse-proxy) handling load balancing across these instances using a round-robin strategy.
Since oauth2-proxy integrates with an external identity provider (IDP), ensuring high availability for login is managed externally
Service Specification
=====================
-Before deploying `oauth2-proxy` service please remember to deploy the `mgmt-gateway` service by turning on the `--enable_auth` flag. i.e:
+Before deploying the `oauth2-proxy` service, please remember to deploy the `mgmt-gateway` service by turning on the ``--enable_auth`` flag, i.e.:
.. prompt:: bash #
:members:
The specification can then be applied by running the following command. Once the service becomes available, cephadm will automatically redeploy
-the `mgmt-gateway` service while adapting its configuration to redirect the authentication to the newly deployed `oauth2-service`.
+the `mgmt-gateway` service while adapting its configuration to redirect the authentication to the newly deployed `oauth2-proxy` service.
.. prompt:: bash #
.. warning::
Although the ``libstoragemgmt`` library issues standard SCSI (SES) inquiry calls,
there is no guarantee that your hardware and firmware properly implement these standards.
- This can lead to erratic behaviour and even bus resets on some older
+ This can lead to erratic behavior and even bus resets on some older
hardware. It is therefore recommended that, before enabling this feature,
you first test your hardware's compatibility with ``libstoragemgmt`` to avoid
unplanned interruptions to services.
multisite deployment. (For more information about realms and zones,
see :ref:`multisite`.)
-Note that with ``cephadm``, ``radosgw`` daemons are configured via the monitor
+Note that with ``cephadm``, ``radosgw`` daemons are configured via the Monitor
configuration database instead of via a `ceph.conf` or the command line. If
that configuration isn't already in place (usually in the
``client.rgw.<something>`` section), then the ``radosgw``
* ``use_keepalived_multicast``
Default is False. By default, cephadm will deploy keepalived config to use unicast IPs,
using the IPs of the hosts. The IPs chosen will be the same IPs cephadm uses to connect
- to the machines. But if multicast is prefered, we can set ``use_keepalived_multicast``
+ to the machines. But if multicast is preferred, we can set ``use_keepalived_multicast``
to ``True`` and Keepalived will use the multicast IP (224.0.0.18) to communicate between instances,
using the same interfaces as where the VIPs are.
* ``vrrp_interface_network``
'cephadm-signed'. The key should be in ``.pem`` format.
* ``monitor_ip_addrs``
If ``monitor_ip_addrs`` is provided and the specified IP address is assigned to the host,
- that IP address will be used. If IP address is not present, then 'monitor_networks' will be checked.
+ that IP address will be used. If an IP address is not present, then 'monitor_networks' will be checked.
* ``monitor_networks``
If ``monitor_networks`` is specified, an IP address that matches one of the specified
- networks will be used. If IP not present, then default host ip will be used.
+ networks will be used. If an IP address is not present, then the default host IP will be used.
.. _ingress-virtual-ip:
.. warning::
SMB support is under active development and many features may be
- missing or immature. A Ceph MGR module, named smb, is available to help
+ missing or immature. A Ceph Manager module, named smb, is available to help
organize and manage SMB-related features. Unless the smb module
has been determined to be unsuitable for your needs, we recommend using that
module over directly using the smb service spec.
Cephadm deploys `Samba <http://www.samba.org>`_ servers using container images
built by the `samba-container project <http://github.com/samba-in-kubernetes/samba-container>`_.
-In order to host SMB Shares with access to CephFS file systems, deploy
-Samba Containers with the following command:
+In order to host SMB shares with access to CephFS file systems, deploy
+Samba containers with the following command:
.. prompt:: bash #
ceph orch apply smb <cluster_id> <config_uri> [--features ...] [--placement ...] ...
There are a number of additional parameters that the command accepts. See
-the Service Specification for a description of these options.
+the Service Specification section for a description of these options.
Service Specification
=====================
-An SMB Service can be applied using a specification. An example in YAML follows:
+An SMB service can be applied using a specification. An example in YAML follows:
.. code-block:: yaml
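
   # Hypothetical example spec; the cluster id, config URI, DNS address,
   # and cephx user name are placeholders for your environment.
   service_type: smb
   service_id: tango
   placement:
     hosts:
       - host1
   spec:
     cluster_id: tango
     features:
       - domain
     config_uri: rados://.smb/tango/scc.toml
     custom_dns:
       - "192.168.76.204"
     include_ceph_users:
       - client.smb.fs.cluster.tango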
Service Spec Options
--------------------
-Fields specific to the ``spec`` section of the SMB Service are described below.
+Fields specific to the ``spec`` section of the SMB service are described below.
cluster_id
A short name identifying the SMB "cluster". In this case, a cluster is
config_uri
A string containing a (standard or de facto) URI that identifies a
- configuration source that should be loaded by the samba-container as the
+ configuration source that should be loaded by the Samba container as the
primary configuration file.
Supported URI schemes include ``http:``, ``https:``, ``rados:``, and
``rados:mon-config-key:``.
custom_dns
A list of IP addresses that will be used as the DNS servers for a Samba
- container. This features allows Samba Containers to integrate with
+ container. This feature allows Samba containers to integrate with
Active Directory even if the Ceph host nodes are not tied into the Active
Directory DNS domain(s).
For example, ``192.168.7.0/24``.
include_ceph_users
- A list of cephx user (aka entity) names that the Samba Containers may use.
+ A list of cephx user (aka entity) names that the Samba containers may use.
The cephx keys for each user in the list will automatically be added to
the keyring in the container.
will deploy additional containers that manage this coordination.
Additionally, the cluster_meta_uri and cluster_lock_uri values must be
specified. The former is used by cephadm to describe the smb cluster layout
- to the samba containers. The latter is used by Samba's CTDB component to
+ to the Samba containers. The latter is used by Samba's CTDB component to
manage an internal cluster lock.
in an end-to-end manner. The following discussion is provided for the sake
of completeness and to explain how the software layers interact.
-Creating an SMB Service spec is not sufficient for complete operation of a
-Samba Container on Ceph. It is important to create valid configurations and
+Creating an SMB service spec is not sufficient for complete operation of a
+Samba container on Ceph. It is important to create valid configurations and
place them in locations that the container can read. The complete specification
of these configurations is out of scope for this document. You can refer to the
`documentation for Samba <https://wiki.samba.org/index.php/Main_Page>`_ as
-well as the `samba server container
+well as the `Samba server container
<https://github.com/samba-in-kubernetes/samba-container/blob/master/docs/server.md>`_
and the `configuration file
<https://github.com/samba-in-kubernetes/sambacc/blob/master/docs/configuration.md>`_
it accepts.
When one has composed a configuration, it should be stored in a location
-that the Samba Container can access. The recommended approach for running
-Samba Containers within Ceph orchestration is to store the configuration
+that the Samba container can access. The recommended approach for running
+Samba containers within Ceph orchestration is to store the configuration
in the Ceph cluster. There are a few ways to store the configuration
-in ceph:
+in Ceph:
RADOS
~~~~~
~~~~~~~~~~
A configuration file can be stored on an HTTP(S) server and automatically read
-by the Samba Container. Managing a configuration file on HTTP(S) is left as an
+by the Samba container. Managing a configuration file on HTTP(S) is left as an
exercise for the reader.
.. note:: All URI schemes are supported by parameters that accept URIs. Each
SMB service for domain membership, either the Ceph host node must be
configured so that it can resolve the Active Directory (AD) domain or the
``custom_dns`` option may be used. In both cases DNS hosts for the AD domain
- must still be reachable from whatever network segment the ceph cluster is on.
+ must still be reachable from whatever network segment the Ceph cluster is on.
* Services must bind to TCP port 445. Running multiple SMB services on the same
node is not yet supported and will trigger a port-in-use conflict.
`SNMP`_ is a widely used protocol for monitoring distributed systems and devices.
Ceph's SNMP integration focuses on forwarding alerts from its Prometheus Alertmanager
-cluster to a gateway daemon. The gateway daemon transforms the alert into an SNMP Notification and sends
+cluster to a gateway daemon. The gateway daemon transforms the alert into an SNMP notification and sends
it on to a designated SNMP management platform. The gateway daemon is from the `snmp_notifier`_ project,
which provides SNMP V2c and V3 support (authentication and encryption).
Compatibility
=============
-The table below shows the SNMP versions that are supported by the gateway implementation
+The table below shows the SNMP versions that are supported by the gateway implementation.
================ =========== ===============================================
SNMP Version Supported Notes
---
snmp_community: public
-Alternatively, you can create a yaml definition for the gateway and apply it from a single file
+Alternatively, you can create a YAML definition for the gateway and apply it from a single file:
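A sketch of such a definition; the destination address, port, and community string are
placeholder values:

.. code-block:: yaml

   service_type: snmp-gateway
   service_name: snmp-gateway
   placement:
     count: 1
   spec:
     credentials:
       snmp_community: public
     port: 9464
     snmp_destination: 192.168.122.73:162
     snmp_version: V2c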
.. prompt:: bash #
Implementing the MIB
======================
To make sense of SNMP notifications and traps, you'll need to apply the MIB to your SNMP management platform. The MIB (``CEPH-MIB.txt``) can
-downloaded from the main Ceph GitHub `repository`_
+be downloaded from the main Ceph GitHub `repository`_.
.. _repository: https://github.com/ceph/ceph/tree/master/monitoring/snmp
#. Jaeger Collector
#. Jaeger Query
-Jaeger requires a database for the traces. We use ElasticSearch (version 6)
+Jaeger requires a database for the traces. We use Elasticsearch (version 6)
by default.
-To deploy Jaeger services when not using your own ElasticSearch (deploys
-all 3 services with a new ElasticSearch container):
+To deploy Jaeger services when not using your own Elasticsearch (deploys
+all 3 services with a new Elasticsearch container):
.. prompt:: bash #
ceph orch apply jaeger
-To deploy Jaeger services with an existing ElasticSearch cluster and
+To deploy Jaeger services with an existing Elasticsearch cluster and
an existing Jaeger query (deploys agents and collectors only):
.. prompt:: bash #
- ceph orch apply jaeger --without-query --es_nodes=ip:port,..
+ ceph orch apply jaeger --without-query --es_nodes=ip:port,...
This creates a script that includes the container command that ``cephadm``
would use to create a shell. Modify the script by removing the ``--init``
argument and replace it with the argument that joins to the namespace used for
-a running running container. For example, assume we want to debug the Manager
+a running container. For example, assume we want to debug the Manager
and have determined that the Manager is running in a container named
``ceph-bc615290-685b-11ee-84a6-525400220000-mgr-ceph0-sluwsk``. In this case,
the argument
This alert (``UPGRADE_FAILED_PULL``) means that Ceph was unable to pull the
container image for the target version. This can happen if you specify a
version or container image that does not exist (e.g. "1.2.3"), or if the
-container registry can not be reached by one or more hosts in the cluster.
+container registry cannot be reached by one or more hosts in the cluster.
To cancel the existing upgrade and to specify a different target version, run
the following commands: