as ceph-ansible. These instructions describe how to set up
a ceph-mgr daemon manually.
-First, create an authentication key for your daemon:
+First, create an authentication key for your daemon::
-.. prompt:: bash #
-
- ceph auth get-or-create mgr.$name mon 'allow profile mgr' osd 'allow *' mds 'allow *'
+ ceph auth get-or-create mgr.$name mon 'allow profile mgr' osd 'allow *' mds 'allow *'
Place that key in a file named ``keyring`` in the ``mgr data`` path, which for a cluster "ceph"
and mgr $name "foo" would be ``/var/lib/ceph/mgr/ceph-foo``, i.e. ``/var/lib/ceph/mgr/ceph-foo/keyring``.
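For example, assuming a cluster named "ceph" and a mgr named "foo", a minimal sketch of
that setup (you may also need to adjust file ownership for the ceph user) could look like::

  mkdir /var/lib/ceph/mgr/ceph-foo
  ceph auth get-or-create mgr.foo mon 'allow profile mgr' osd 'allow *' mds 'allow *' -o /var/lib/ceph/mgr/ceph-foo/keyring
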
-Start the ceph-mgr daemon:
-
-.. prompt:: bash #
+Start the ceph-mgr daemon::
- ceph-mgr -i $name
+ ceph-mgr -i $name
Check that the mgr has come up by looking at the output
-of ``ceph status``, which should now include a mgr status line:
-
-.. prompt:: bash #
+of ``ceph status``, which should now include a mgr status line::
- mgr active: $name
+ mgr active: $name
Client authentication
---------------------
a cluster from an old version of Ceph, or use the default install/deploy tools,
your admin client should get this capability automatically. If you use tooling from
elsewhere, you may get EACCES errors when invoking certain ceph cluster commands.
-To fix that, add a ``mgr allow \*`` stanza to your client's cephx capabilities by
+To fix that, add a "mgr allow \*" stanza to your client's cephx capabilities by
`Modifying User Capabilities`_.
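For example, a hypothetical pre-existing ``client.admin`` could be updated as follows
(note that ``ceph auth caps`` replaces the whole capability set, so the existing
capabilities must be restated alongside the new ``mgr`` one)::

  ceph auth caps client.admin mon 'allow *' osd 'allow *' mds 'allow *' mgr 'allow *'
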
High availability
Here is an example of enabling the :term:`Dashboard` module:
-.. prompt:: bash #
-
- ceph mgr module ls
-
-::
+.. code-block:: console
+ $ ceph mgr module ls
{
"enabled_modules": [
"status"
]
}
-.. prompt:: bash #
-
- ceph mgr module enable dashboard
- ceph mgr module ls
-
-::
-
+ $ ceph mgr module enable dashboard
+ $ ceph mgr module ls
{
"enabled_modules": [
"status",
]
}
-.. prompt:: bash #
-
- ceph mgr services
-
-::
-
+ $ ceph mgr services
{
"dashboard": "http://myserver.com:7789/"
}
Where a module implements command line hooks, the commands will
be accessible as ordinary Ceph commands. Ceph will automatically incorporate
module commands into the standard CLI interface and route them appropriately to
-the module.
-
-.. prompt:: bash #
+the module.::
- ceph <command | help>
+ ceph <command | help>
Configuration
-------------
Enabling
--------
-The *alerts* module is enabled with:
+The *alerts* module is enabled with::
-.. prompt:: bash #
-
- ceph mgr module enable alerts
+ ceph mgr module enable alerts
Configuration
-------------
To configure SMTP, all of the following config options must be set
-(When setting ``mgr/alerts/smtp_destination``, you can use commas to separate multiple):
-
-.. prompt:: bash #
-
- ceph config set mgr mgr/alerts/smtp_host *<smtp-server>*
- ceph config set mgr mgr/alerts/smtp_destination *<email-address-to-send-to>*
- ceph config set mgr mgr/alerts/smtp_sender *<from-email-address>*
-
-By default, the module will use SSL and port 465. To change that:
+(When setting ``mgr/alerts/smtp_destination``, you can use commas to separate multiple addresses)::
-.. prompt:: bash #
+ ceph config set mgr mgr/alerts/smtp_host *<smtp-server>*
+ ceph config set mgr mgr/alerts/smtp_destination *<email-address-to-send-to>*
+ ceph config set mgr mgr/alerts/smtp_sender *<from-email-address>*
- ceph config set mgr mgr/alerts/smtp_ssl false # if not SSL
- ceph config set mgr mgr/alerts/smtp_port *<port-number>* # if not 465
+By default, the module will use SSL and port 465. To change that,::
-To authenticate to the SMTP server, you must set the user and password:
+ ceph config set mgr mgr/alerts/smtp_ssl false # if not SSL
+ ceph config set mgr mgr/alerts/smtp_port *<port-number>* # if not 465
-.. prompt:: bash #
+To authenticate to the SMTP server, you must set the user and password::
- ceph config set mgr mgr/alerts/smtp_user *<username>*
- ceph config set mgr mgr/alerts/smtp_password *<password>*
+ ceph config set mgr mgr/alerts/smtp_user *<username>*
+ ceph config set mgr mgr/alerts/smtp_password *<password>*
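Putting it all together, a hypothetical SMTP configuration (example values only)
could look like::

  ceph config set mgr mgr/alerts/smtp_host smtp.example.com
  ceph config set mgr mgr/alerts/smtp_destination ops@example.com,oncall@example.com
  ceph config set mgr mgr/alerts/smtp_sender ceph-alerts@example.com
  ceph config set mgr mgr/alerts/smtp_user ceph
  ceph config set mgr mgr/alerts/smtp_password secret
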
By default, the name in the ``From:`` line is simply ``Ceph``. To
-change that (e.g., to identify which cluster this is):
+change that (e.g., to identify which cluster this is),::
-.. prompt:: bash #
-
- ceph config set mgr mgr/alerts/smtp_from_name 'Ceph Cluster Foo'
+ ceph config set mgr mgr/alerts/smtp_from_name 'Ceph Cluster Foo'
By default, the module will check the cluster health once per minute
and, if there is a change, send a message. To change that
-frequency:
-
-.. prompt:: bash #
+frequency,::
- ceph config set mgr mgr/alerts/interval *<interval>* # e.g., "5m" for 5 minutes
+ ceph config set mgr mgr/alerts/interval *<interval>* # e.g., "5m" for 5 minutes
Commands
--------
-To force an alert to be send immediately:
-
-.. prompt:: bash #
+To force an alert to be sent immediately,::
- ceph alerts send
+ ceph alerts send
{ "token": "<redacted_token>", ...}
The token obtained must be passed together with every API request in the
-``Authorization`` HTTP header:
+``Authorization`` HTTP header::
-.. prompt:: bash $
-
- curl -H "Authorization: Bearer <token>" ...
+ curl -H "Authorization: Bearer <token>" ...
Authentication and authorization can be further configured from the
Ceph CLI, the Ceph-Dashboard UI and the Ceph API itself (please refer to
Enabling
--------
-The *cli api commands* module is enabled with:
+The *cli api commands* module is enabled with::
-.. prompt:: bash #
+ ceph mgr module enable cli_api
- ceph mgr module enable cli_api
+To check that it is enabled, run::
-To check that it is enabled, run:
-
-.. prompt:: bash #
-
- ceph mgr module ls | grep cli_api
+ ceph mgr module ls | grep cli_api
Usage
--------
-To run a mgr module command, run:
-
-.. prompt:: bash #
-
- ceph mgr cli <command> <param>
-
-For example, use the following command to print the list of servers:
-
-.. prompt:: bash #
-
- ceph mgr cli list_servers
+To run a mgr module command, run::
-List all available mgr module commands with:
+ ceph mgr cli <command> <param>
-.. prompt:: bash #
+For example, use the following command to print the list of servers::
- ceph mgr cli --help
+ ceph mgr cli list_servers
-To benchmark a command, run:
+List all available mgr module commands with::
-.. prompt:: bash #
+ ceph mgr cli --help
- ceph mgr cli_benchmark <number of calls> <number of threads> <command> <param>
+To benchmark a command, run::
-For example, use the following command to benchmark the command to get osd_map:
+ ceph mgr cli_benchmark <number of calls> <number of threads> <command> <param>
-.. prompt:: bash #
+For example, use the following command to benchmark the command to get osd_map::
- ceph mgr cli_benchmark 100 10 get osd_map
+ ceph mgr cli_benchmark 100 10 get osd_map
Enabling
--------
-The *crash* module is enabled with:
+The *crash* module is enabled with::
-.. prompt:: bash #
+ ceph mgr module enable crash
- ceph mgr module enable crash
+The *crash* upload key is generated with::
-The *crash* upload key is generated with:
-
-.. prompt:: bash #
-
- ceph auth get-or-create client.crash mon 'profile crash' mgr 'profile crash'
+ ceph auth get-or-create client.crash mon 'profile crash' mgr 'profile crash'
On each node, you should store this key in
``/etc/ceph/ceph.client.crash.keyring``.
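For example, one way to distribute the key (a sketch; adjust paths and file
ownership to your environment) is::

  ceph auth get client.crash -o /etc/ceph/ceph.client.crash.keyring
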
Commands
--------
-.. prompt:: bash #
+::
- ceph crash post -i <metafile>
+ ceph crash post -i <metafile>
Save a crash dump. The metadata file is a JSON blob stored in the crash
dir as ``meta``. As usual, the ceph command can be invoked with ``-i -``,
and will read from stdin.
-.. prompt:: bash #
+::
- ceph crash rm <crashid>
+ ceph crash rm <crashid>
Remove a specific crash dump.
-.. prompt:: bash #
+::
- ceph crash ls
+ ceph crash ls
List the timestamp/uuid crashids for all new and archived crash info.
-.. prompt:: bash #
+::
- ceph crash ls-new
+ ceph crash ls-new
List the timestamp/uuid crashids for all new crash info.
-.. prompt:: bash #
+::
- ceph crash stat
+ ceph crash stat
Show a summary of saved crash info grouped by age.
-.. prompt:: bash #
+::
- ceph crash info <crashid>
+ ceph crash info <crashid>
Show all details of a saved crash.
-.. prompt:: bash #
+::
ceph crash prune <keep>
Remove saved crashes older than 'keep' days. <keep> must be an integer.
-.. prompt:: bash #
+::
ceph crash archive <crashid>
Archive a crash report so that it is no longer considered for the ``RECENT_CRASH`` health check and does not appear in the ``crash ls-new`` output (it will still appear in the ``crash ls`` output).
-.. prompt:: bash #
+::
ceph crash archive-all
Within a running Ceph cluster, the Ceph Dashboard is enabled with:
-.. prompt:: bash #
+.. prompt:: bash $
ceph mgr module enable dashboard
To get the dashboard up and running quickly, you can generate and install a
self-signed certificate:
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard create-self-signed-cert
The ``dashboard.crt`` file should then be signed by a CA. Once that is done, you
can enable it for Ceph manager instances by running the following commands:
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard set-ssl-certificate -i dashboard.crt
ceph dashboard set-ssl-certificate-key -i dashboard.key
the name of the instance can be included as follows (where ``$name`` is the name
of the ``ceph-mgr`` instance, usually the hostname):
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard set-ssl-certificate $name -i dashboard.crt
ceph dashboard set-ssl-certificate-key $name -i dashboard.key
SSL can also be disabled by setting this configuration value:
-.. prompt:: bash #
+.. prompt:: bash $
ceph config set mgr mgr/dashboard/ssl false
fail mgr`` or by disabling and re-enabling the dashboard module (which also
triggers the manager to respawn itself):
- .. prompt:: bash #
+ .. prompt:: bash $
ceph mgr module disable dashboard
ceph mgr module enable dashboard
These defaults can be changed via the configuration key facility on a
cluster-wide level (so they apply to all manager instances) as follows:
-.. prompt:: bash #
+.. prompt:: bash $
ceph config set mgr mgr/dashboard/server_addr $IP
ceph config set mgr mgr/dashboard/server_port $PORT
necessary to configure them separately. The IP address and port for a specific
manager instance can be changed with the following commands:
-.. prompt:: bash #
+.. prompt:: bash $
ceph config set mgr mgr/dashboard/$name/server_addr $IP
ceph config set mgr mgr/dashboard/$name/server_port $PORT
To create a user with the administrator role you can use the following
commands:
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard ac-user-create <username> -i <file-containing-password> administrator
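For example, to create a hypothetical ``admin`` user whose password is read from a
local file:

.. prompt:: bash $

   echo -n 'p@ssw0rd' > /tmp/dashboard_password.txt
   ceph dashboard ac-user-create admin -i /tmp/dashboard_password.txt administrator
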
attacks. The user can get or set the default number of lock-out attempts using
these commands respectively:
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard get-account-lockout-attempts
ceph dashboard set-account-lockout-attempts <value:int>
However, by disabling this feature, the account is more vulnerable to brute-force or
dictionary based attacks. This can be disabled by:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph dashboard set-account-lockout-attempts 0
it needs to be manually enabled by the administrator. This can be done by the following
command:
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard ac-user-enable <username>
dashboard will be automatically configured. You can also manually force the
credentials to be set up with:
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard set-rgw-credentials
If you've configured a custom 'admin' resource in your RGW admin API, you should set it here also:
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard set-rgw-api-admin-resource <admin_resource>
connections, e.g. caused by certificates signed by unknown CA or not matching
the host name:
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard set-rgw-api-ssl-verify False
To set a custom hostname or address for an RGW gateway, set the value of ``RGW_HOSTNAME_PER_DAEMON``
accordingly:
-.. promt:: bash #
+.. prompt:: bash $
ceph dashboard set-rgw-hostname <gateway_name> <hostname>
The setting can be unset using:
-.. promt:: bash #
+.. prompt:: bash $
ceph dashboard unset-rgw-hostname <gateway_name>
If the Object Gateway takes too long to process requests and the dashboard runs
into timeouts, you can set the timeout value to your needs:
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard set-rest-requests-timeout <seconds>
To disable API SSL verification run the following command:
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard set-iscsi-api-ssl-verification false
The available iSCSI gateways must be defined using the following commands:
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard iscsi-gateway-list
# Gateway URL format for a new gateway: <scheme>://<username>:<password>@<host>[:port]
impact of denial of service attacks.
Please see `Prometheus' Security model
- <https://prometheus.io/docs/operating/security/>`_ for more detailed
+ <https://prometheus.io/docs/operating/security/>` for more detailed
information.
Installation and Configuration using cephadm
#. Enable the Ceph Exporter which comes as Ceph Manager module by running:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph mgr module enable prometheus
#. Install the `vonage-status-panel and grafana-piechart-panel` plugins using:
- .. prompt:: bash #
+ .. prompt:: bash $
- grafana-cli plugins install vonage-status-panel
- grafana-cli plugins install grafana-piechart-panel
+ grafana-cli plugins install vonage-status-panel
+ grafana-cli plugins install grafana-piechart-panel
#. Add Dashboards to Grafana:
You need to tell the dashboard on which URL the Grafana instance is
running/deployed:
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard set-grafana-api-url <grafana-server-url> # default: ''
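For example, assuming a hypothetical Grafana instance reachable at
``https://grafana.example.com:3000``:

.. prompt:: bash $

   ceph dashboard set-grafana-api-url 'https://grafana.example.com:3000'
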
which can be a result of certificates signed by an unknown CA or that do not
match the host name:
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard set-grafana-api-ssl-verify False
Ceph Dashboard configuration information can also be unset. For example, to
clear the Grafana API URL we configured above:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph dashboard reset-grafana-api-url
user's browser to directly access the URL configured in Ceph Dashboard. To solve
this issue, a separate URL can be configured which will solely be used to tell
the frontend (the user's browser) which URL it should use to access Grafana.
-This setting won't ever be changed automatically, unlike the ``GRAFANA_API_URL``
+This setting won't ever be changed automatically, unlike the GRAFANA_API_URL
which is set by :ref:`cephadm` (only if cephadm is used to deploy monitoring
services).
To change the URL that is returned to the frontend issue the following command:
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard set-grafana-frontend-api-url <grafana-server-url>
If no value is set for that option, it will simply fall back to the value of the
-``GRAFANA_API_URL`` option. If set, it will instruct the browser to use this URL to
+GRAFANA_API_URL option. If set, it will instruct the browser to use this URL to
access Grafana.
.. _dashboard-sso-support:
To configure SSO on Ceph Dashboard, you should use the following command:
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard sso setup saml2 <ceph_dashboard_base_url> <idp_metadata> {<idp_username_attribute>} {<idp_entity_id>} {<sp_x_509_cert>} {<sp_private_key>}
To display the current SAML 2.0 configuration, use the following command:
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard sso show saml2
To disable SSO:
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard sso disable
To check if SSO is enabled:
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard sso status
To enable SSO:
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard sso enable saml2
To use it, specify the host and port of the Alertmanager server:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph dashboard set-alertmanager-api-host <alertmanager-host:port> # default: ''
For example:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph dashboard set-alertmanager-api-host 'http://localhost:9093'
that a new silence will match a corresponding alert.
- .. prompt:: bash #
+ .. prompt:: bash $
ceph dashboard set-prometheus-api-host <prometheus-host:port> # default: ''
For example:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph dashboard set-prometheus-api-host 'http://localhost:9090'
- For Prometheus:
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard set-prometheus-api-ssl-verify False
- For Alertmanager:
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard set-alertmanager-api-ssl-verify False
The password policy feature can be switched on or off completely:
-.. prompt:: bash #
+.. prompt:: bash $
- ceph dashboard set-pwd-policy-enabled <true|false>
+ ceph dashboard set-pwd-policy-enabled <true|false>
The following individual checks can also be switched on or off:
-.. prompt:: bash #
+.. prompt:: bash $
- ceph dashboard set-pwd-policy-check-length-enabled <true|false>
- ceph dashboard set-pwd-policy-check-oldpwd-enabled <true|false>
- ceph dashboard set-pwd-policy-check-username-enabled <true|false>
- ceph dashboard set-pwd-policy-check-exclusion-list-enabled <true|false>
- ceph dashboard set-pwd-policy-check-complexity-enabled <true|false>
- ceph dashboard set-pwd-policy-check-sequential-chars-enabled <true|false>
- ceph dashboard set-pwd-policy-check-repetitive-chars-enabled <true|false>
+ ceph dashboard set-pwd-policy-check-length-enabled <true|false>
+ ceph dashboard set-pwd-policy-check-oldpwd-enabled <true|false>
+ ceph dashboard set-pwd-policy-check-username-enabled <true|false>
+ ceph dashboard set-pwd-policy-check-exclusion-list-enabled <true|false>
+ ceph dashboard set-pwd-policy-check-complexity-enabled <true|false>
+ ceph dashboard set-pwd-policy-check-sequential-chars-enabled <true|false>
+ ceph dashboard set-pwd-policy-check-repetitive-chars-enabled <true|false>
Additionally the following options are available to configure password
policy.
- Minimum password length (defaults to 8):
- .. prompt:: bash #
+.. prompt:: bash $
- ceph dashboard set-pwd-policy-min-length <N>
+ ceph dashboard set-pwd-policy-min-length <N>
- Minimum password complexity (defaults to 10):
- .. prompt:: bash #
+ .. prompt:: bash $
ceph dashboard set-pwd-policy-min-complexity <N>
- A list of comma separated words that are not allowed to be used in a
password:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph dashboard set-pwd-policy-exclusion-list <word>[,...]
- *Show User(s)*:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph dashboard ac-user-show [<username>]
- *Create User*:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph dashboard ac-user-create [--enabled] [--force-password] [--pwd_update_required] <username> -i <file-containing-password> [<rolename>] [<name>] [<email>] [<pwd_expiration_date>]
- To bypass password policy checks use the ``force-password`` option.
- Add the option ``pwd_update_required`` so that a newly created user has
+ To bypass password policy checks use the `force-password` option.
+ Add the option `pwd_update_required` so that a newly created user has
to change their password after the first login.
- *Delete User*:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph dashboard ac-user-delete <username>
- *Change Password*:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph dashboard ac-user-set-password [--force-password] <username> -i <file-containing-password>
- *Change Password Hash*:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph dashboard ac-user-set-password-hash <username> -i <file-containing-password-hash>
- *Modify User (name, and email)*:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph dashboard ac-user-set-info <username> <name> <email>
- *Disable User*:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph dashboard ac-user-disable <username>
- *Enable User*:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph dashboard ac-user-enable <username>
The list of available roles can be retrieved with the following command:
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard ac-role-show [<rolename>]
- *Create Role*:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph dashboard ac-role-create <rolename> [<description>]
- *Delete Role*:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph dashboard ac-role-delete <rolename>
- *Add Scope Permissions to Role*:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph dashboard ac-role-add-scope-perms <rolename> <scopename> <permission> [<permission>...]
- *Delete Scope Permission from Role*:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph dashboard ac-role-del-scope-perms <rolename> <scopename>
- *Set User Roles*:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph dashboard ac-user-set-roles <username> <rolename> [<rolename>...]
- *Add Roles To User*:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph dashboard ac-user-add-roles <username> <rolename> [<rolename>...]
- *Delete Roles from User*:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph dashboard ac-user-del-roles <username> <rolename> [<rolename>...]
1. *Create the user*:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph dashboard ac-user-create bob -i <file-containing-password>
2. *Create role and specify scope permissions*:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph dashboard ac-role-create rbd/pool-manager
ceph dashboard ac-role-add-scope-perms rbd/pool-manager rbd-image read create update delete
3. *Associate roles to user*:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph dashboard ac-user-set-roles bob rbd/pool-manager read-only
to use hyperlinks that include your prefix, you can set the
``url_prefix`` setting:
-.. prompt:: bash #
+.. prompt:: bash $
ceph config set mgr mgr/dashboard/url_prefix $PREFIX
following command to get the dashboard to respond with an HTTP error (500 by default)
instead of redirecting to the active dashboard:
-.. prompt:: bash #
+.. prompt:: bash $
ceph config set mgr mgr/dashboard/standby_behaviour "error"
To reset the setting to default redirection, use the following command:
-.. prompt:: bash #
+.. prompt:: bash $
ceph config set mgr mgr/dashboard/standby_behaviour "redirect"
When redirection is disabled, you may want to customize the HTTP status
code of standby dashboards. To do so you need to run the command:
-.. prompt:: bash #
+.. prompt:: bash $
ceph config set mgr mgr/dashboard/standby_error_status_code 503
To activate redirection from standby dashboards to active dashboards via the
manager's hostname, run the following command:
-.. prompt:: bash #
+.. prompt:: bash $
ceph config set mgr mgr/dashboard/redirect_resolve_ip_addr True
audit log. This feature is disabled by default, but can be enabled with the
following command:
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard set-audit-api-enabled <true|false>
The logging of the request payload (the arguments and their values) is enabled
by default. Execute the following command to disable this behaviour:
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard set-audit-api-log-payload <true|false>
If you are unsure of the location of the Ceph Dashboard, run the following command:
-.. prompt:: bash #
+.. prompt:: bash $
ceph mgr services | jq .dashboard
#. Verify the Ceph Dashboard module is enabled:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph mgr module ls | jq .enabled_modules
#. If it is not listed, activate the module with the following command:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph mgr module enable dashboard
* Check if ``ceph-mgr`` log messages are written to a file by:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph config get mgr log_to_file
::
- true
+ true
* Get the location of the log file (it's ``/var/log/ceph/<cluster-name>-<daemon-name>.log``
by default):
- .. prompt:: bash #
+ .. prompt:: bash $
ceph config get mgr log_file
::
- /var/log/ceph/$cluster-$name.log
+ /var/log/ceph/$cluster-$name.log
#. Ensure the SSL/TLS support is configured properly:
* Check if the SSL/TLS support is enabled:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph config get mgr mgr/dashboard/ssl
* If the command returns ``true``, verify a certificate exists by:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph config-key get mgr/dashboard/crt
and:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph config-key get mgr/dashboard/key
certificate or follow the instructions outlined in
:ref:`dashboard-ssl-tls-support`:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph dashboard create-self-signed-cert
#. If your user credentials are correct, but you are experiencing the same
error, check that the user account exists:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph dashboard ac-user-show <username>
#. Check if the user is enabled:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph dashboard ac-user-show <username> | jq .enabled
::
- true
+ true
Check if ``enabled`` is set to ``true`` for your user. If it is not, the user
is not enabled. To enable the user, run:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph dashboard ac-user-enable <username>
To enable it via the CLI, run the following command:
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard debug enable
#. Increase the logging level of manager daemons:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph tell mgr config set debug_mgr 20
and click the edit button. Modify the ``log_level`` configuration.
* To adjust it via the CLI, run the following command:
- .. prompt:: bash #
+ .. prompt:: bash $
- ceph config set mgr mgr/dashboard/log_level debug
+ bin/ceph config set mgr mgr/dashboard/log_level debug
3. High log levels can result in considerable log volume, which can
easily fill up your filesystem. Set a calendar reminder for an hour, a day,
or a week in the future to revert this temporary logging increase. This looks
something like this:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph config log
::
- ...
- --- 11 --- 2020-11-07 11:11:11.960659 --- mgr.x/dashboard/log_level = debug ---
- ...
+ ...
+ --- 11 --- 2020-11-07 11:11:11.960659 --- mgr.x/dashboard/log_level = debug ---
+ ...
- .. prompt:: bash #
+ .. prompt:: bash $
ceph config reset 11
3. To see debug-level messages as well as info-level events, run the following command via CLI:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph config set mgr mgr/cephadm/log_to_cluster_level debug
4. To enable logging to files, run the following commands via CLI:
- .. prompt:: bash #
+ .. prompt:: bash $
ceph config set global log_to_file true
ceph config set global mon_cluster_log_to_file true
the user can see their API access key. This key is used for authentication
when creating a new issue. To store the Ceph API access key, in the CLI run:
-.. prompt:: bash #
+.. prompt:: bash $
- ceph dashboard set-issue-tracker-api-key -i <file-containing-key>
+ ``ceph dashboard set-issue-tracker-api-key -i <file-containing-key>``
Then on successful update, you can create an issue using:
-.. prompt:: bash #
+.. prompt:: bash $
- ceph dashboard create issue <project> <tracker_type> <subject> <description>
+ ``ceph dashboard create issue <project> <tracker_type> <subject> <description>``
The available projects to create an issue on are:
-
#. dashboard
#. block
#. object
#. core_ceph
The available tracker types are:
-
#. bug
#. feature
This plugin allows you to customize the behaviour of the dashboard according to the
debug mode. It can be enabled, disabled or checked with the following commands:
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard debug status
Debug: 'disabled'
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard debug enable
Debug: 'enabled'
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard debug disable
To retrieve a list of features and their current statuses:
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard feature status
::
- Feature 'cephfs': 'enabled'
- Feature 'iscsi': 'enabled'
- Feature 'mirroring': 'enabled'
- Feature 'rbd': 'enabled'
- Feature 'rgw': 'enabled'
- Feature 'nfs': 'enabled'
+ Feature 'cephfs': 'enabled'
+ Feature 'iscsi': 'enabled'
+ Feature 'mirroring': 'enabled'
+ Feature 'rbd': 'enabled'
+ Feature 'rgw': 'enabled'
+ Feature 'nfs': 'enabled'
To enable or disable the status of a single or multiple features:
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard feature disable iscsi mirroring
::
- Feature 'iscsi': disabled
- Feature 'mirroring': disabled
+ Feature 'iscsi': disabled
+ Feature 'mirroring': disabled
After a feature status has changed, the API REST endpoints immediately respond to
that change, while for the front-end UI elements, it may take up to 20 seconds to
Displays a configured `message of the day` at the top of the Ceph Dashboard.
The importance of a MOTD can be configured by its severity, which is
-``info``, ``warning`` or ``danger``. The MOTD can expire after a given time,
+`info`, `warning` or `danger`. The MOTD can expire after a given time,
after which it will no longer be displayed in the UI. Use the following
-syntax to specify the expiration time: ``Ns|m|h|d|w`` for seconds, minutes,
-hours, days and weeks. If the MOTD should expire after 2 hours, use ``2h``
-or ``5w`` for 5 weeks. Use ``0`` to configure a MOTD that does not expire.
+syntax to specify the expiration time: `Ns|m|h|d|w` for seconds, minutes,
+hours, days and weeks. If the MOTD should expire after 2 hours, use `2h`;
+for 5 weeks, use `5w`. Use `0` to configure a MOTD that does not expire.
To configure a MOTD, run the following command:
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard motd set <severity:info|warning|danger> <expires> <message>
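For example, a hypothetical maintenance notice that expires after two hours:

.. prompt:: bash $

   ceph dashboard motd set info 2h 'Cluster maintenance starts at 22:00 UTC'
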
To show the configured MOTD:
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard motd get
To clear the configured MOTD run:
-.. prompt:: bash #
+.. prompt:: bash $
ceph dashboard motd clear
-A MOTD with a ``info`` or ``warning`` severity can be closed by the user. The
-``info`` MOTD is not displayed anymore until the local storage cookies are
+A MOTD with an `info` or `warning` severity can be closed by the user. The
+`info` MOTD will not be displayed again until the local storage cookies are
cleared or a new MOTD with a different severity is displayed. A MOTD with
-a ``warning`` severity will be displayed again in a new session.
+a `warning` severity will be displayed again in a new session.
========
Run the following command to enable the *diskprediction_local* module in the Ceph
-environment:
+environment::
-.. prompt:: bash #
+ ceph mgr module enable diskprediction_local
- ceph mgr module enable diskprediction_local
+To enable the local predictor::
-To enable the local predictor:
+ ceph config set global device_failure_prediction_mode local
-.. prompt:: bash #
+To disable prediction::
- ceph config set global device_failure_prediction_mode local
-
-To disable prediction:
-
-.. prompt:: bash #
-
- ceph config set global device_failure_prediction_mode none
+ ceph config set global device_failure_prediction_mode none
*diskprediction_local* requires at least six datasets of device health metrics to
Run the following command to retrieve the life expectancy of a given device.
-.. prompt:: bash #
+::
- ceph device predict-life-expectancy <device id>
+ ceph device predict-life-expectancy <device id>
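For example, pick a device id from the output of ``ceph device ls`` (the id below is
made up for illustration)::

  ceph device ls
  ceph device predict-life-expectancy SEAGATE_ST4000NM0023_Z1Z0ABCD
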
Configuration
=============
The module performs the prediction on a daily basis by default. You can adjust
-this interval with:
-
-.. prompt:: bash #
+this interval with::
- ceph config set mgr mgr/diskprediction_local/predict_interval <interval-in-seconds>
+ ceph config set mgr mgr/diskprediction_local/predict_interval <interval-in-seconds>
Debugging
=========
debug mgr = 20
With logging set to debug for the manager the module will print out logging
-message with prefix ``mgr[diskprediction]`` for easy filtering.
+message with prefix *mgr[diskprediction]* for easy filtering.
Enabling
--------
-The *hello* module is enabled with:
+The *hello* module is enabled with::
-.. prompt:: bash #
+ ceph mgr module enable hello
- ceph mgr module enable hello
+To check that it is enabled, run::
-To check that it is enabled, run:
+ ceph mgr module ls
-.. prompt:: bash #
+After editing the module file (found in ``src/pybind/mgr/hello/module.py``), you can see changes by running::
- ceph mgr module ls
+ ceph mgr module disable hello
+ ceph mgr module enable hello
-After editing the module file (found in ``src/pybind/mgr/hello/module.py``), you can see changes by running:
+or::
-.. prompt:: bash #
+ init-ceph restart mgr
- ceph mgr module disable hello
- ceph mgr module enable hello
+To execute the module, run::
-or:
-
-.. prompt:: bash #
-
- init-ceph restart mgr
-
-To execute the module, run:
-
-.. prompt:: bash #
-
- ceph hello
+ ceph hello
The log is found at::
To enable the module, use the following command:
-.. prompt:: bash #
+.. prompt:: bash $
- ceph mgr module enable influx
+ ceph mgr module enable influx
If you wish to subsequently disable the module, you can use the equivalent
*disable* command:
-.. prompt:: bash #
+.. prompt:: bash $
- ceph mgr module disable influx
+ ceph mgr module disable influx
-------------
Configuration
Set configuration values using the following command:
-.. prompt:: bash #
+.. prompt:: bash $
- ceph config set mgr mgr/influx/<key> <value>
+ ceph config set mgr mgr/influx/<key> <value>
The most important settings are :confval:`mgr/influx/hostname`,
:confval:`mgr/influx/username` and :confval:`mgr/influx/password`.
For example, a typical configuration might look like this:
-.. prompt:: bash #
+.. prompt:: bash $
- ceph config set mgr mgr/influx/hostname influx.mydomain.com
- ceph config set mgr mgr/influx/username admin123
- ceph config set mgr mgr/influx/password p4ssw0rd
+ ceph config set mgr mgr/influx/hostname influx.mydomain.com
+ ceph config set mgr mgr/influx/username admin123
+ ceph config set mgr mgr/influx/password p4ssw0rd
Following is the list of all configuration settings:
By default, a few debugging statements, as well as error statements, are printed to the log files. Users can add more if necessary.
To make use of the debugging option in the module:
-- Add this to the ``ceph.conf`` file.
+- Add this to the ceph.conf file.
.. code-block:: ini
debug_mgr = 20
- Use this command ``ceph influx self-test``.
-- Check the log files. Users may find it easier to filter the log files using ``mgr[influx]``.
+- Check the log files. Users may find it easier to filter the log files using *mgr[influx]*.
--------------------
Interesting counters
Enabling
--------
-The *insights* module is enabled with:
+The *insights* module is enabled with::
-.. prompt:: bash #
-
- ceph mgr module enable insights
+ ceph mgr module enable insights
Commands
--------
-.. prompt:: bash #
+::
- ceph insights
+ ceph insights
Generate the full report.
-.. prompt:: bash #
+::
- ceph insights prune-health <hours>
+ ceph insights prune-health <hours>
Remove historical health data older than <hours>. Passing `0` for <hours> will
clear all health data.
Enabling
--------
-To check if the *iostat* module is enabled, run:
+To check if the *iostat* module is enabled, run::
-.. prompt:: bash #
+ ceph mgr module ls
- ceph mgr module ls
+The module can be enabled with::
-The module can be enabled with:
+ ceph mgr module enable iostat
-.. prompt:: bash #
+To execute the module, run::
- ceph mgr module enable iostat
-
-To execute the module, run:
-
-.. prompt:: bash #
-
- ceph iostat
+ ceph iostat
To change the frequency at which the statistics are printed, use the ``-p``
-option:
-
-.. prompt:: bash #
-
- ceph iostat -p <period in seconds>
+option::
-For example, use the following command to print the statistics every 5 seconds:
+ ceph iostat -p <period in seconds>
-.. prompt:: bash #
+For example, use the following command to print the statistics every 5 seconds::
- ceph iostat -p 5
+ ceph iostat -p 5
To stop the module, press Ctrl-C.
Enabling
--------
-The *localpool* module is enabled with:
+The *localpool* module is enabled with::
-.. prompt:: bash #
-
- ceph mgr module enable localpool
+ ceph mgr module enable localpool
Configuring
-----------
:default: by-$subtreetype-
These options are set via the config-key interface. For example, to
-change the replication level to 2x with only 64 PGs:
-
-.. prompt:: bash #
+change the replication level to 2x with only 64 PGs, ::
- ceph config set mgr mgr/localpool/num_rep 2
- ceph config set mgr mgr/localpool/pg_num 64
+ ceph config set mgr mgr/localpool/num_rep 2
+ ceph config set mgr mgr/localpool/pg_num 64
.. mgr_module:: None
daemons are available. It works by adjusting the placement specification for
the orchestrator backend of the MDS service. To enable, use:
-.. prompt:: bash #
+.. sh:
ceph mgr module enable mds_autoscaler
Creating a module
-----------------
-In ``pybind/mgr/``, create a python module. Within your module, create a class
+In pybind/mgr/, create a python module. Within your module, create a class
that inherits from ``MgrModule``. For ceph-mgr to detect your module, your
-directory must contain a file called ``module.py``.
+directory must contain a file called `module.py`.
The most important methods to override are:
Once your module is present in the location set by the
``mgr module path`` configuration setting, you can enable it
-via the ``ceph mgr module enable`` command:
+via the ``ceph mgr module enable`` command::
-.. prompt:: bash #
-
- ceph mgr module enable mymodule
+ ceph mgr module enable mymodule
Note that the MgrModule interface is not stable, so any modules maintained
outside of the Ceph tree are liable to break when run against any newer
Each module has a ``log_level`` option that specifies the current Python
logging level of the module.
To change or query the logging level of the module, use the following Ceph
-commands:
-
-.. prompt:: bash #
+commands::
- ceph config get mgr mgr/<module_name>/log_level
- ceph config set mgr mgr/<module_name>/log_level <info|debug|critical|error|warning|>
+ ceph config get mgr mgr/<module_name>/log_level
+ ceph config set mgr mgr/<module_name>/log_level <info|debug|critical|error|warning|>
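For example, to query and then raise the logging level of the dashboard module
(chosen here only for illustration)::

  ceph config get mgr mgr/dashboard/log_level
  ceph config set mgr mgr/dashboard/log_level debug
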
The logging level used upon the module's start is determined by the current
logging level of the mgr daemon, unless the ``log_level`` option was
* <= +inf is DEBUG
We can unset the module log level and fall back to the mgr daemon logging level
-by running the following command:
-
-.. prompt:: bash #
+by running the following command::
- ceph config set mgr mgr/<module_name>/log_level ''
+ ceph config set mgr mgr/<module_name>/log_level ''
By default, modules' logging messages are processed by the Ceph logging layer
where they will be recorded in the mgr daemon's log file.
<mgr_daemon_log_file_name>.<module_name>.log
-To enable the file logging on a module use the following command:
-
-.. prompt:: bash #
+To enable the file logging on a module use the following command::
ceph config set mgr mgr/<module_name>/log_to_file true
module's log file.
It's also possible to check the status and disable the file logging with the
-following commands:
-
-.. prompt:: bash #
+following commands::
ceph config get mgr mgr/<module_name>/log_to_file
ceph config set mgr mgr/<module_name>/log_to_file false
Option(name="my_option")
]
-If you try to use ``set_module_option`` or ``get_module_option`` on options
-not declared in ``MODULE_OPTIONS``, an exception will be raised.
+If you try to use set_module_option or get_module_option on options not declared
+in ``MODULE_OPTIONS``, an exception will be raised.
You may choose to provide setter commands in your module to perform
high level validation. Users can also modify configuration using
-the normal ``ceph config set`` command, where the configuration options
-for a mgr module are named like ``mgr/<module name>/<option>``.
+the normal `ceph config set` command, where the configuration options
+for a mgr module are named like `mgr/<module name>/<option>`.
If a configuration option is different depending on which node the mgr
is running on, then use *localized* configuration (
Hints for using config options:
* Reads are fast: ceph-mgr keeps a local in-memory copy, so in many cases
- you can just do a ``get_module_option`` every time you use a option, rather than
+  you can just do a get_module_option every time you use an option, rather than
copying it out into a variable.
* Writes block until the value is persisted (i.e. round trip to the monitor),
but reads from another thread will see the new value immediately.
-* If a user has used ``config set`` from the command line, then the new
- value will become visible to ``get_module_option`` immediately, although the
- mon->mgr update is asynchronous, so ``config set`` will return a fraction
+* If a user has used `config set` from the command line, then the new
+ value will become visible to `get_module_option` immediately, although the
+ mon->mgr update is asynchronous, so `config set` will return a fraction
of a second before the new value is visible on the mgr.
* To delete a config value (i.e. revert to default), just pass ``None`` to
- ``set_module_option``.
+ set_module_option.
.. automethod:: MgrModule.get_module_option
.. automethod:: MgrModule.set_module_option
Following is a sample session, in which the Ceph version is queried by
inputting ``print(mgr.version)`` at the prompt. Later, the
``timeit`` module is imported to measure the execution time of
-``mgr.get_mgr_id()``.
+`mgr.get_mgr_id()`.
.. code-block:: console
Create NFS Ganesha Cluster
--------------------------
-.. prompt:: bash #
+.. code:: bash
- ceph nfs cluster create <cluster_id> [<placement>] [--ingress] [--virtual_ip <value>] [--ingress-mode {default|keepalive-only|haproxy-standard|haproxy-protocol}] [--port <int>]
+ $ ceph nfs cluster create <cluster_id> [<placement>] [--ingress] [--virtual_ip <value>] [--ingress-mode {default|keepalive-only|haproxy-standard|haproxy-protocol}] [--port <int>]
This creates a common recovery pool for all NFS Ganesha daemons, a new user based on
``cluster_id``, and a common NFS Ganesha config RADOS object.
daemon containers running on them and, optionally, the total number of NFS
Ganesha daemons on the cluster (should you want to have more than one NFS Ganesha
daemon running per node). For example, the following placement string means
-"deploy NFS Ganesha daemons on nodes host1 and host2" (one daemon per host)::
+"deploy NFS Ganesha daemons on nodes host1 and host2 (one daemon per host)::
"host1,host2"
wish to check that these services do successfully start and stay running.
When using cephadm orchestration, these commands check service status:
-.. prompt:: bash #
+.. code:: bash
- ceph orch ls --service_name=nfs.<cluster_id>
- ceph orch ls --service_name=ingress.nfs.<cluster_id>
+ $ ceph orch ls --service_name=nfs.<cluster_id>
+ $ ceph orch ls --service_name=ingress.nfs.<cluster_id>
Ingress
To examine an NFS cluster's IP endpoints, including the IPs for the individual NFS
daemons, and the virtual IP (if any) for the ingress service,
-.. prompt:: bash #
+.. code:: bash
- ceph nfs cluster info [<cluster_id>]
+ $ ceph nfs cluster info [<cluster_id>]
.. note:: This will not work with the rook backend. Instead, expose the port with
the kubectl patch command and fetch the port details with kubectl get services
- command:
+ command::
- .. prompt:: bash #
-
- kubectl patch service -n rook-ceph -p '{"spec":{"type": "NodePort"}}' rook-ceph-nfs-<cluster-name>-<node-id>
- kubectl get services -n rook-ceph rook-ceph-nfs-<cluster-name>-<node-id>
+ $ kubectl patch service -n rook-ceph -p '{"spec":{"type": "NodePort"}}' rook-ceph-nfs-<cluster-name>-<node-id>
+ $ kubectl get services -n rook-ceph rook-ceph-nfs-<cluster-name>-<node-id>
Delete NFS Ganesha Cluster
--------------------------
-.. prompt:: bash #
+.. code:: bash
- ceph nfs cluster rm <cluster_id>
+ $ ceph nfs cluster rm <cluster_id>
This deletes the deployed cluster.
wish to check that these services are no longer reported. When using cephadm
orchestration, these commands check service status:
-.. prompt:: bash #
+.. code:: bash
- ceph orch ls --service_name=nfs.<cluster_id>
- ceph orch ls --service_name=ingress.nfs.<cluster_id>
+ $ ceph orch ls --service_name=nfs.<cluster_id>
+ $ ceph orch ls --service_name=ingress.nfs.<cluster_id>
Updating an NFS Cluster
that is to export the current spec, modify it, and then re-apply it. For example,
to modify the ``nfs.foo`` service,
-.. prompt:: bash #
+.. code:: bash
- ceph orch ls --service-name nfs.foo --export > nfs.foo.yaml
- vi nfs.foo.yaml
- ceph orch apply -i nfs.foo.yaml
+ $ ceph orch ls --service-name nfs.foo --export > nfs.foo.yaml
+ $ vi nfs.foo.yaml
+ $ ceph orch apply -i nfs.foo.yaml
For more information about the NFS service spec, see :ref:`deploy-cephadm-nfs-ganesha`.
List NFS Ganesha Clusters
-------------------------
-.. prompt:: bash #
+.. code:: bash
- ceph nfs cluster ls
+ $ ceph nfs cluster ls
This lists deployed clusters.
Set Customized NFS Ganesha Configuration
----------------------------------------
-.. prompt:: bash #
+.. code:: bash
- ceph nfs cluster config set <cluster_id> -i <config_file>
+ $ ceph nfs cluster config set <cluster_id> -i <config_file>
With this, the NFS cluster will use the specified config, which will take
precedence over the default config blocks.
#. Adding a custom export block.
The following sample block creates a single export. This export will not be
- managed by ``ceph nfs export`` interface::
+   managed by the `ceph nfs export` interface::
EXPORT {
Export_Id = 100;
.. note:: The user specified in the FSAL block should have proper caps for NFS-Ganesha
   daemons to access the Ceph cluster. The user can be created in the following way using
- ``auth get-or-create``:
+ `auth get-or-create`::
- .. prompt:: bash #
-
- ceph auth get-or-create client.<user_id> mon 'allow r' osd 'allow rw pool=.nfs namespace=<nfs_cluster_name>, allow rw tag cephfs data=<fs_name>' mds 'allow rw path=<export_path>'
+ # ceph auth get-or-create client.<user_id> mon 'allow r' osd 'allow rw pool=.nfs namespace=<nfs_cluster_name>, allow rw tag cephfs data=<fs_name>' mds 'allow rw path=<export_path>'
View Customized NFS Ganesha Configuration
-----------------------------------------
-.. prompt:: bash #
+.. code:: bash
- ceph nfs cluster config get <cluster_id>
+ $ ceph nfs cluster config get <cluster_id>
This will output the user defined configuration (if any).
Reset NFS Ganesha Configuration
-------------------------------
-.. prompt:: bash #
+.. code:: bash
- ceph nfs cluster config reset <cluster_id>
+ $ ceph nfs cluster config reset <cluster_id>
This removes the user defined configuration.
Create CephFS Export
--------------------
-.. prompt:: bash #
+.. code:: bash
- ceph nfs export create cephfs --cluster-id <cluster_id> --pseudo-path <pseudo_path> --fsname <fsname> [--readonly] [--path=/path/in/cephfs] [--client_addr <value>...] [--squash <value>] [--sectype <value>...] [--cmount_path <value>]
+ $ ceph nfs export create cephfs --cluster-id <cluster_id> --pseudo-path <pseudo_path> --fsname <fsname> [--readonly] [--path=/path/in/cephfs] [--client_addr <value>...] [--squash <value>] [--sectype <value>...] [--cmount_path <value>]
This creates export RADOS objects containing the export block, where
``<path>`` is the path within cephfs. A valid path should be given; the default
path is '/'. It need not be unique. The subvolume path can be fetched using:
-.. prompt:: bash #
+.. code::
- ceph fs subvolume getpath <vol_name> <subvol_name> [--group_name <subvol_group_name>]
+ $ ceph fs subvolume getpath <vol_name> <subvol_name> [--group_name <subvol_group_name>]
``<client_addr>`` is the list of client addresses for which these export
permissions will be applicable. By default, all clients can access the export
for permissible values.
``<squash>`` defines the kind of user id squashing to be performed. The default
-value is ``no_root_squash``. See the `NFS-Ganesha Export Sample`_ for
+value is `no_root_squash`. See the `NFS-Ganesha Export Sample`_ for
permissible values.
``<sectype>`` specifies which authentication methods will be used when
To export a *bucket*:
-.. prompt:: bash #
+.. code::
- ceph nfs export create rgw --cluster-id <cluster_id> --pseudo-path <pseudo_path> --bucket <bucket_name> [--user-id <user-id>] [--readonly] [--client_addr <value>...] [--squash <value>] [--sectype <value>...]
+ $ ceph nfs export create rgw --cluster-id <cluster_id> --pseudo-path <pseudo_path> --bucket <bucket_name> [--user-id <user-id>] [--readonly] [--client_addr <value>...] [--squash <value>] [--sectype <value>...]
For example, to export *mybucket* via NFS cluster *mynfs* at the pseudo-path */bucketdata* to any host in the ``192.168.10.0/24`` network
-.. prompt:: bash #
+.. code::
- ceph nfs export create rgw --cluster-id mynfs --pseudo-path /bucketdata --bucket mybucket --client_addr 192.168.10.0/24
+ $ ceph nfs export create rgw --cluster-id mynfs --pseudo-path /bucketdata --bucket mybucket --client_addr 192.168.10.0/24
.. note:: Export creation is supported only for NFS Ganesha clusters deployed using the nfs interface.
for permissible values.
``<squash>`` defines the kind of user id squashing to be performed. The default
-value is ``no_root_squash``. See the `NFS-Ganesha Export Sample`_ for
+value is `no_root_squash`. See the `NFS-Ganesha Export Sample`_ for
permissible values.
``<sectype>`` specifies which authentication methods will be used when
To export an RGW *user*:
-.. prompt:: bash #
+.. code::
- ceph nfs export create rgw --cluster-id <cluster_id> --pseudo-path <pseudo_path> --user-id <user-id> [--readonly] [--client_addr <value>...] [--squash <value>]
+ $ ceph nfs export create rgw --cluster-id <cluster_id> --pseudo-path <pseudo_path> --user-id <user-id> [--readonly] [--client_addr <value>...] [--squash <value>]
For example, to export *myuser* via NFS cluster *mynfs* at the pseudo-path */myuser* to any host in the ``192.168.10.0/24`` network
-.. prompt:: bash #
+.. code::
- ceph nfs export create rgw --cluster-id mynfs --pseudo-path /bucketdata --user-id myuser --client_addr 192.168.10.0/24
+ $ ceph nfs export create rgw --cluster-id mynfs --pseudo-path /bucketdata --user-id myuser --client_addr 192.168.10.0/24
Delete Export
-------------
-.. prompt:: bash #
+.. code:: bash
- ceph nfs export rm <cluster_id> <pseudo_path>
+ $ ceph nfs export rm <cluster_id> <pseudo_path>
This deletes an export in an NFS Ganesha cluster, where:
List Exports
------------
-.. prompt:: bash #
+.. code:: bash
- ceph nfs export ls <cluster_id> [--detailed]
+ $ ceph nfs export ls <cluster_id> [--detailed]
It lists exports for a cluster, where:
Get Export
----------
-.. prompt:: bash #
+.. code:: bash
- ceph nfs export info <cluster_id> <pseudo_path>
+ $ ceph nfs export info <cluster_id> <pseudo_path>
This displays the export block for a cluster based on the pseudo root name,
where:
.. prompt:: bash #
- ceph nfs export info *<cluster_id>* *<pseudo_path>*
+ ceph nfs export info *<cluster_id>* *<pseudo_path>*
An export can be created or modified by importing a JSON description in the
same format:
.. prompt:: bash #
- ceph nfs export apply *<cluster_id>* -i <json_file>
-
-For example:
-
-.. prompt:: bash #
-
- ceph nfs export info mynfs /cephfs > update_cephfs_export.json
- cat update_cephfs_export.json
-
-::
-
- {
- "export_id": 1,
- "path": "/",
- "cluster_id": "mynfs",
- "pseudo": "/cephfs",
- "access_type": "RW",
- "squash": "no_root_squash",
- "security_label": true,
- "protocols": [
- 4
- ],
- "transports": [
- "TCP"
- ],
- "fsal": {
- "name": "CEPH",
- "fs_name": "a",
- "sec_label_xattr": "",
- "cmount_path": "/"
- },
- "clients": []
- }
+ ceph nfs export apply *<cluster_id>* -i <json_file>
+
+For example,::
+
+ $ ceph nfs export info mynfs /cephfs > update_cephfs_export.json
+ $ cat update_cephfs_export.json
+ {
+ "export_id": 1,
+ "path": "/",
+ "cluster_id": "mynfs",
+ "pseudo": "/cephfs",
+ "access_type": "RW",
+ "squash": "no_root_squash",
+ "security_label": true,
+ "protocols": [
+ 4
+ ],
+ "transports": [
+ "TCP"
+ ],
+ "fsal": {
+ "name": "CEPH",
+ "fs_name": "a",
+ "sec_label_xattr": "",
+ "cmount_path": "/"
+ },
+ "clients": []
+ }
The imported JSON can be a single dict describing a single export, or a JSON list
containing multiple export dicts.
authentication credentials, which will be carried over from the
previous state of the export where possible.
-.. note:: The ``user_id`` in the ``fsal`` block should not be modified or
- mentioned in the JSON file as it is auto-generated for CephFS exports.
- It's auto-generated in the format ``nfs.<cluster_id>.<fs_name>.<hash_id>``.
-
-.. prompt:: bash #
-
- ceph nfs export apply mynfs -i update_cephfs_export.json
- cat update_cephfs_export.json
+!! NOTE: The ``user_id`` in the ``fsal`` block should not be modified or mentioned in the JSON file as it is auto-generated for CephFS exports.
+It's auto-generated in the format ``nfs.<cluster_id>.<fs_name>.<hash_id>``.
::
- {
- "export_id": 1,
- "path": "/",
- "cluster_id": "mynfs",
- "pseudo": "/cephfs_testing",
- "access_type": "RO",
- "squash": "no_root_squash",
- "security_label": true,
- "protocols": [
- 4
- ],
- "transports": [
- "TCP"
- ],
- "fsal": {
- "name": "CEPH",
- "fs_name": "a",
- "sec_label_xattr": "",
- "cmount_path": "/"
- },
- "clients": []
- }
+ $ ceph nfs export apply mynfs -i update_cephfs_export.json
+ $ cat update_cephfs_export.json
+ {
+ "export_id": 1,
+ "path": "/",
+ "cluster_id": "mynfs",
+ "pseudo": "/cephfs_testing",
+ "access_type": "RO",
+ "squash": "no_root_squash",
+ "security_label": true,
+ "protocols": [
+ 4
+ ],
+ "transports": [
+ "TCP"
+ ],
+ "fsal": {
+ "name": "CEPH",
+ "fs_name": "a",
+ "sec_label_xattr": "",
+ "cmount_path": "/"
+ },
+ "clients": []
+ }
An export can also be created or updated by injecting a Ganesha NFS EXPORT config
-fragment. For example:
-
-.. prompt:: bash #
-
- ceph nfs export apply mynfs -i update_cephfs_export.conf
- cat update_cephfs_export.conf
-
-::
-
- EXPORT {
- FSAL {
- name = "CEPH";
- filesystem = "a";
- }
- export_id = 1;
- path = "/";
- pseudo = "/a";
- access_type = "RW";
- squash = "none";
- attr_expiration_time = 0;
- security_label = true;
- protocols = 4;
- transports = "TCP";
- }
+fragment. For example,::
+
+ $ ceph nfs export apply mynfs -i update_cephfs_export.conf
+ $ cat update_cephfs_export.conf
+ EXPORT {
+ FSAL {
+ name = "CEPH";
+ filesystem = "a";
+ }
+ export_id = 1;
+ path = "/";
+ pseudo = "/a";
+ access_type = "RW";
+ squash = "none";
+ attr_expiration_time = 0;
+ security_label = true;
+ protocols = 4;
+ transports = "TCP";
+ }
Mounting
After the exports are successfully created and NFS Ganesha daemons are
deployed, exports can be mounted with:
-.. prompt:: bash #
+.. code:: bash
- mount -t nfs <ganesha-host-name>:<pseudo_path> <mount-point>
+ $ mount -t nfs <ganesha-host-name>:<pseudo_path> <mount-point>
For example, if the NFS cluster was created with ``--ingress --virtual-ip 192.168.10.10``
and the export's pseudo-path was ``/foo``, the export can be mounted at ``/mnt`` with:
-.. prompt:: bash #
+.. code:: bash
- mount -t nfs 192.168.10.10:/foo /mnt
+ $ mount -t nfs 192.168.10.10:/foo /mnt
If the NFS service is running on a non-standard port number:
-.. prompt:: bash #
+.. code:: bash
- mount -t nfs -o port=<ganesha-port> <ganesha-host-name>:<ganesha-pseudo_path> <mount-point>
+ $ mount -t nfs -o port=<ganesha-port> <ganesha-host-name>:<ganesha-pseudo_path> <mount-point>
.. note:: Only NFS v4.0+ is supported.
1) ``cephadm``: The NFS daemons can be listed with:
- .. prompt:: bash #
+ .. code:: bash
- ceph orch ps --daemon-type nfs
+ $ ceph orch ps --daemon-type nfs
You can view the logs for a specific daemon (e.g., ``nfs.mynfs.0.0.myhost.xkfzal``) on
the relevant host with:
- .. prompt:: bash #
+ .. code:: bash
- cephadm logs --fsid <fsid> --name nfs.mynfs.0.0.myhost.xkfzal
+ # cephadm logs --fsid <fsid> --name nfs.mynfs.0.0.myhost.xkfzal
2) ``rook``:
- .. prompt:: bash #
+ .. code:: bash
- kubectl logs -n rook-ceph rook-ceph-nfs-<cluster_id>-<node_id> nfs-ganesha
+ $ kubectl logs -n rook-ceph rook-ceph-nfs-<cluster_id>-<node_id> nfs-ganesha
-The NFS log level can be adjusted using ``nfs cluster config set`` command (see :ref:`nfs-cluster-set`).
+The NFS log level can be adjusted using the `nfs cluster config set` command (see :ref:`nfs-cluster-set`).
.. _nfs-ganesha-config:
Status
======
-.. prompt:: bash #
+.. prompt:: bash $
ceph orch status [--detail]
Creating/growing/shrinking/removing services:
-.. prompt:: bash #
+.. prompt:: bash $
- ceph orch apply mds <fs_name> [--placement=<placement>] [--dry-run]
- ceph orch apply rgw <name> [--realm=<realm>] [--zone=<zone>] [--port=<port>] [--ssl] [--placement=<placement>] [--dry-run]
- ceph orch apply nfs <name> <pool> [--namespace=<namespace>] [--placement=<placement>] [--dry-run]
- ceph orch rm <service_name> [--force]
+ ceph orch apply mds <fs_name> [--placement=<placement>] [--dry-run]
+ ceph orch apply rgw <name> [--realm=<realm>] [--zone=<zone>] [--port=<port>] [--ssl] [--placement=<placement>] [--dry-run]
+ ceph orch apply nfs <name> <pool> [--namespace=<namespace>] [--placement=<placement>] [--dry-run]
+ ceph orch rm <service_name> [--force]
where ``placement`` is a :ref:`orchestrator-cli-placement-spec`.
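+For illustration, a placement spec can combine a daemon count with explicit hosts; the
+file system and host names below are hypothetical:
+
+.. prompt:: bash $
+
+   ceph orch apply mds cephfs --placement="3 host1 host2 host3"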
Service Commands:
-.. prompt:: bash #
+.. prompt:: bash $
- ceph orch <start|stop|restart|redeploy|reconfig> <service_name>
+ ceph orch <start|stop|restart|redeploy|reconfig> <service_name>
.. note:: These commands apply only to cephadm containerized daemons.
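+For example, to restart all daemons of a hypothetical NFS service named ``nfs.mynfs``:
+
+.. prompt:: bash $
+
+   ceph orch restart nfs.mynfs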
Enable the orchestrator by using the ``set backend`` command to select the orchestrator module that will be used:
-.. prompt:: bash #
+.. prompt:: bash $
- ceph orch set backend <module>
+ ceph orch set backend <module>
Example - Configuring the Orchestrator CLI
------------------------------------------
For example, to enable the Rook orchestrator module and use it with the CLI:
-.. prompt:: bash #
+.. prompt:: bash $
- ceph mgr module enable rook
- ceph orch set backend rook
+ ceph mgr module enable rook
+ ceph orch set backend rook
Confirm that the backend is properly configured:
-.. prompt:: bash #
+.. prompt:: bash $
- ceph orch status
+ ceph orch status
Disable the Orchestrator
------------------------
To disable the orchestrator, use the empty string ``""``:
-.. prompt:: bash #
+.. prompt:: bash $
- ceph orch set backend ""
- ceph mgr module disable rook
+ ceph orch set backend ""
+ ceph mgr module disable rook
Current Implementation Status
=============================
(Placement Groups) that are affected by events such as (1) OSDs being marked
in or out and (2) ``pg_autoscaler`` trying to match the target PG number.
-The ``ceph -s`` command returns something called "Global Recovery Progress",
+The ``ceph -s`` command returns something called "Global Recovery Progress",
which reports the overall recovery progress of PGs and is based on the number
of PGs that are in the ``active+clean`` state.
.. prompt:: bash #
- ceph progress on
+ ceph progress on
The module can be disabled at any time by running the following command:
.. prompt:: bash #
- ceph progress off
+ ceph progress off
Commands
--------
.. prompt:: bash #
- ceph progress
+ ceph progress
Show the summary of ongoing and completed events in JSON format:
.. prompt:: bash #
- ceph progress json
+ ceph progress json
Clear all ongoing and completed events:
.. prompt:: bash #
- ceph progress clear
+ ceph progress clear
PG Recovery Event
-----------------
An event for each PG affected by a recovery event can be shown in
-``ceph progress`` This is completely optional, and disabled by default
+``ceph progress``. This is completely optional, and disabled by default
due to CPU overhead:
.. prompt:: bash #
- ceph config set mgr mgr/progress/allow_pg_recovery_event true
+ ceph config set mgr mgr/progress/allow_pg_recovery_event true
The *prometheus* module is enabled with:
-.. prompt:: bash #
+.. prompt:: bash $
ceph mgr module enable prometheus
is registered with Prometheus's `registry
<https://github.com/prometheus/prometheus/wiki/Default-port-allocations>`_.
-.. prompt:: bash #
+.. prompt:: bash $
- ceph config set mgr mgr/prometheus/server_addr 0.0.0.0
+ ceph config set mgr mgr/prometheus/server_addr 0.0.0.0
ceph config set mgr mgr/prometheus/server_port 9283
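+Once the module is enabled, the exporter can be spot-checked by fetching the metrics
+endpoint on the active manager (the host name below is a placeholder; 9283 is the
+default port):
+
+.. prompt:: bash $
+
+   curl http://<active-mgr-host>:9283/metrics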
.. warning::
To set a different scrape interval in the Prometheus module, set
``scrape_interval`` to the desired value:
-.. prompt:: bash #
+.. prompt:: bash $
ceph config set mgr mgr/prometheus/scrape_interval 20
To tell the module to respond with possibly stale data, set it to ``return``:
-.. prompt:: bash #
+.. prompt:: bash $
ceph config set mgr mgr/prometheus/stale_cache_strategy return
To tell the module to respond with "service unavailable", set it to ``fail``:
-.. prompt:: bash #
+.. prompt:: bash $
ceph config set mgr mgr/prometheus/stale_cache_strategy fail
If you are confident that you don't require the cache, you can disable it:
-.. prompt:: bash #
+.. prompt:: bash $
ceph config set mgr mgr/prometheus/cache false
loadbalancer, you can simplify discovering the active instance by switching
to ``error``-mode:
-.. prompt:: bash #
+.. prompt:: bash $
ceph config set mgr mgr/prometheus/standby_behaviour error
from the standby instance. The default error code is 500, but you can configure
the HTTP response code with:
-.. prompt:: bash #
+.. prompt:: bash $
ceph config set mgr mgr/prometheus/standby_error_status_code 503
To switch back to the default behaviour, simply set the config key to ``default``:
-.. prompt:: bash #
+.. prompt:: bash $
ceph config set mgr mgr/prometheus/standby_behaviour default
The health check history is made available through the following commands:
-.. prompt:: bash #
+::
- ceph healthcheck history ls [--format {plain|json|json-pretty}]
- ceph healthcheck history clear
+ ceph healthcheck history ls [--format {plain|json|json-pretty}]
+ ceph healthcheck history clear
The ``ls`` command provides an overview of the health checks that the cluster has
encountered, or those seen since the last ``clear`` command was issued, as shown in the example below:
-.. prompt:: bash #
-
- ceph healthcheck history ls
-
::
+ [ceph: root@c8-node1 /]# ceph healthcheck history ls
Healthcheck Name First Seen (UTC) Last seen (UTC) Count Active
OSDMAP_FLAGS 2021/09/16 03:17:47 2021/09/16 22:07:40 2 No
OSD_DOWN 2021/09/17 00:11:59 2021/09/17 00:11:59 1 Yes
Example to activate the RBD-enabled pools ``pool1``, ``pool2`` and ``poolN``:
-.. prompt:: bash #
+.. prompt:: bash $
ceph config set mgr mgr/prometheus/rbd_stats_pools "pool1,pool2,poolN"
The wildcard can be used to indicate all pools or namespaces:
-.. prompt:: bash #
+.. prompt:: bash $
ceph config set mgr mgr/prometheus/rbd_stats_pools "*"
Example to turn up the sync interval to 10 minutes:
-.. prompt:: bash #
+.. prompt:: bash $
ceph config set mgr mgr/prometheus/rbd_stats_pools_refresh_interval 600
perf counters as prometheus metrics by default. However, one may re-enable exporting these metrics by setting
the module option ``exclude_perf_counters`` to ``false``:
-.. prompt:: bash #
+.. prompt:: bash $
ceph config set mgr mgr/prometheus/exclude_perf_counters false
ceph_disk_occupation_human{ceph_daemon="osd.0", device="sdd", exported_instance="myhost"}
To use this to get disk statistics by OSD ID, use either the ``and`` operator or
-the ``*`` operator in your prometheus query. All metadata metrics (like
-``ceph_disk_occupation_human`` have the value 1 so they act neutral with ``*``. Using ``*``
+the ``*`` operator in your Prometheus query. All metadata metrics (like
+``ceph_disk_occupation_human``) have the value 1, so they act neutral with ``*``. Using ``*``
allows the use of ``group_left`` and ``group_right`` grouping modifiers, so that
the resulting metric has additional labels from one side of the query.
Enabling
--------
-The *rgw* module is enabled with:
+The *rgw* module is enabled with::
-.. prompt:: bash #
-
- ceph mgr module enable rgw
+ ceph mgr module enable rgw
RGW Realm Operations
.. prompt:: bash #
- ceph rgw realm bootstrap [--realm-name] [--zonegroup-name] [--zone-name] [--port] [--placement] [--start-radosgw]
+ ceph rgw realm bootstrap [--realm-name] [--zonegroup-name] [--zone-name] [--port] [--placement] [--start-radosgw]
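+For illustration, a fully spelled-out bootstrap call might look like this (the realm,
+zonegroup, zone, and host names are hypothetical):
+
+.. prompt:: bash #
+
+   ceph rgw realm bootstrap --realm-name myrealm --zonegroup-name myzonegroup --zone-name myzone --port 5500 --placement="host1 host2" --start-radosgw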
The command supports providing the configuration through a spec file (the ``-i`` option):
.. prompt:: bash #
- ceph rgw realm bootstrap -i myrgw.yaml
+ ceph rgw realm bootstrap -i myrgw.yaml
Following is an example of an RGW multisite spec file:
Users can list the available tokens for the created (or already existing) realms.
The token is a base64 string that encapsulates the realm information and its
master zone endpoint authentication data. Following is an example of
-the ``ceph rgw realm tokens`` output:
+the ``ceph rgw realm tokens`` output:
.. prompt:: bash #
- ceph rgw realm tokens | jq
+ ceph rgw realm tokens | jq
.. code-block:: json
.. prompt:: bash #
- ceph rgw zone create -i zone-spec.yaml
+ ceph rgw zone create -i zone-spec.yaml
.. note:: The spec file used by RGW has the same format as the one used by the orchestrator. Thus,
the user can provide any orchestration-supported RGW parameters, including advanced
Commands
--------
-.. prompt:: bash #
+::
- ceph rgw realm bootstrap -i spec.yaml
+ ceph rgw realm bootstrap -i spec.yaml
Create a new realm + zonegroup + zone and deploy rgw daemons via the
orchestrator using the information specified in the YAML file.
-.. prompt:: bash #
+::
- ceph rgw realm tokens
+ ceph rgw realm tokens
List the tokens of all the available realms
-.. prompt:: bash #
+::
- ceph rgw zone create -i spec.yaml
+ ceph rgw zone create -i spec.yaml
Join an existing realm by creating a new secondary zone (using the realm token)
-.. prompt:: bash #
+::
- ceph rgw admin [*]
+ ceph rgw admin [*]
RGW admin command
#. From the primary node, ensure that the ``curl`` command can be run by the
user:
- .. prompt:: bash [primary-node]$
+ .. prompt:: bash [root@primary-node]#
curl https://<host_ip>:443
Create Cluster
++++++++++++++
-.. prompt:: bash #
+.. code:: bash
- ceph smb cluster create <cluster_id> {user|active-directory} [--domain-realm=<domain_realm>] [--domain-join-user-pass=<domain_join_user_pass>] [--define-user-pass=<define_user_pass>] [--custom-dns=<custom_dns>] [--placement=<placement>] [--clustering=<clustering>] [--password-filter=<password_filter>] [--password-filter-out=<password_filter_out>]
+ $ ceph smb cluster create <cluster_id> {user|active-directory} [--domain-realm=<domain_realm>] [--domain-join-user-pass=<domain_join_user_pass>] [--define-user-pass=<define_user_pass>] [--custom-dns=<custom_dns>] [--placement=<placement>] [--clustering=<clustering>] [--password-filter=<password_filter>] [--password-filter-out=<password_filter_out>]
Create a new logical cluster, identified by the cluster id value. The cluster
create command must specify the authentication mode the cluster will use. This
Remove Cluster
++++++++++++++
-.. prompt:: bash #
+.. code:: bash
- ceph smb cluster rm <cluster_id> [--password-filter=<password_filter>]
+ $ ceph smb cluster rm <cluster_id> [--password-filter=<password_filter>]
Remove a logical SMB cluster from the Ceph cluster.
List Clusters
++++++++++++++
-.. prompt:: bash #
+.. code:: bash
- ceph smb cluster ls [--format=<format>]
+ $ ceph smb cluster ls [--format=<format>]
Print a listing of cluster ids. The output defaults to JSON, select YAML
encoding with the ``--format=yaml`` option.
Create Share
++++++++++++
-.. prompt:: bash #
+.. code:: bash
- ceph smb share create <cluster_id> <share_id> <cephfs_volume> <path> [--share-name=<share_name>] [--subvolume=<subvolume>] [--readonly]
+ $ ceph smb share create <cluster_id> <share_id> <cephfs_volume> <path> [--share-name=<share_name>] [--subvolume=<subvolume>] [--readonly]
Create a new SMB share, hosted by the named cluster, that maps to the given
CephFS volume and path.
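+For example, a share backed by a hypothetical CephFS volume ``cephfs`` and exposed
+under a browseable name could be created with (cluster, share, and path names are
+placeholders):
+
+.. code:: bash
+
+   $ ceph smb share create cluster1 homedirs cephfs /home --share-name="Home Directories"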
Remove Share
++++++++++++
-.. prompt:: bash #
+.. code:: bash
- ceph smb share rm <cluster_id> <share_id>
+ $ ceph smb share rm <cluster_id> <share_id>
Remove an SMB Share from the cluster.
List Shares
+++++++++++
-.. prompt:: bash #
+.. code:: bash
- ceph smb share ls <cluster_id> [--format=<format>]
+ $ ceph smb share ls <cluster_id> [--format=<format>]
Print a listing of share ids. The output defaults to JSON, select YAML
encoding with the ``--format=yaml`` option.
specifications can be applied to the cluster using the ``ceph smb apply``
command, for example:
-.. prompt:: bash #
+.. code:: bash
- ceph smb apply -i /path/to/resources.yaml
+ $ ceph smb apply -i /path/to/resources.yaml
In addition to the resource specification the ``apply`` sub-command accepts
options that control how the input and output of the command behave:
-.. prompt:: bash #
+.. code:: bash
- ceph smb apply [--format=<format>] [--password-filter=<password_filter>] [--password-filter-out=<password_filter_out>] -i <input>
+ $ ceph smb apply [--format=<format>] [--password-filter=<password_filter>] [--password-filter-out=<password_filter_out>] -i <input>
Options:
Resources that have already been applied to the Ceph cluster configuration can
be viewed using the ``ceph smb show`` command. For example:
-.. prompt:: bash #
+.. code:: bash
- ceph smb show ceph.smb.cluster.cluster1
+ $ ceph smb show ceph.smb.cluster.cluster1
The ``show`` command can show all resources, resources of a given type, or specific
resource items. Options can be provided that control the output of the command.
-.. prompt:: bash #
+.. code:: bash
- ceph smb show [resource_name...] [--format=<format>] [--results=<results>] [--password-filter=<password_filter>]
+ $ ceph smb show [resource_name...] [--format=<format>] [--results=<results>] [--password-filter=<password_filter>]
Options:
For example:
-.. prompt:: bash #
+.. code:: bash
- ceph smb show ceph.smb.cluster.bob ceph.smb.share.bob
+ $ ceph smb show ceph.smb.cluster.bob ceph.smb.share.bob
This will show one cluster resource (if it exists) for the cluster "bob" as well as
all share resources associated with the cluster "bob".
Save this text to a YAML file named ``resources.yaml`` and make it available
on a cluster admin host. Then run:
-.. prompt:: bash #
+.. code:: bash
- ceph smb apply -i resources.yaml
+ $ ceph smb apply -i resources.yaml
The command will print a summary of the changes made and begin to automatically
deploy the needed resources. See `Accessing Shares`_ for more information
By issuing the command:
-.. prompt:: bash #
+.. code:: bash
- ceph smb apply -i removed.yaml
+ $ ceph smb apply -i removed.yaml
SMB Cluster Management
To enable the module, use the following command:
-.. prompt:: bash #
+::
- ceph mgr module enable telegraf
+ ceph mgr module enable telegraf
If you wish to subsequently disable the module, you can use the corresponding
-``disable`` command:
+``disable`` command:
-.. prompt:: bash #
+::
- ceph mgr module disable telegraf
+ ceph mgr module disable telegraf
-------------
Configuration
Set configuration values using the following command:
-.. prompt:: bash #
+::
- ceph telegraf config-set <key> <value>
+ ceph telegraf config-set <key> <value>
The most important settings are ``address`` and ``interval``.
For example, a typical configuration might look like this:
-.. prompt:: bash #
+::
- ceph telegraf config-set address udp://:8094
- ceph telegraf config-set interval 10
+ ceph telegraf config-set address udp://:8094
+ ceph telegraf config-set interval 10
The default values for these configuration keys are:
-- ``address``: ``unixgram:///tmp/telegraf.sock``
-- ``interval``: ``15``
+- ``address``: ``unixgram:///tmp/telegraf.sock``
+- ``interval``: ``15``
----------------
Socket Listener
Data is sent securely to *https://telemetry.ceph.com*.
-Individual channels can be enabled or disabled with:
+Individual channels can be enabled or disabled with::
-.. prompt:: bash #
+ ceph telemetry enable channel basic
+ ceph telemetry enable channel crash
+ ceph telemetry enable channel device
+ ceph telemetry enable channel ident
+ ceph telemetry enable channel perf
- ceph telemetry enable channel basic
- ceph telemetry enable channel crash
- ceph telemetry enable channel device
- ceph telemetry enable channel ident
- ceph telemetry enable channel perf
+ ceph telemetry disable channel basic
+ ceph telemetry disable channel crash
+ ceph telemetry disable channel device
+ ceph telemetry disable channel ident
+ ceph telemetry disable channel perf
-.. prompt:: bash #
+Multiple channels can be enabled or disabled with::
- ceph telemetry disable channel basic
- ceph telemetry disable channel crash
- ceph telemetry disable channel device
- ceph telemetry disable channel ident
- ceph telemetry disable channel perf
+ ceph telemetry enable channel basic crash device ident perf
+ ceph telemetry disable channel basic crash device ident perf
-Multiple channels can be enabled or disabled with:
+Channels can be enabled or disabled all at once with::
-.. prompt:: bash #
-
- ceph telemetry enable channel basic crash device ident perf
- ceph telemetry disable channel basic crash device ident perf
-
-Channels can be enabled or disabled all at once with:
-
-.. prompt:: bash #
-
- ceph telemetry enable channel all
- ceph telemetry disable channel all
+ ceph telemetry enable channel all
+ ceph telemetry disable channel all
Please note that telemetry should be on for these commands to take effect.
-List all channels with:
+List all channels with::
-.. prompt:: bash #
+ ceph telemetry channel ls
- ceph telemetry channel ls
-
-::
-
- NAME ENABLED DEFAULT DESC
- basic ON ON Share basic cluster information (size, version)
- crash ON ON Share metadata about Ceph daemon crashes (version, stack straces, etc)
- device ON ON Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)
- ident OFF OFF Share a user-provided description and/or contact email for the cluster
- perf ON OFF Share various performance metrics of a cluster
+ NAME ENABLED DEFAULT DESC
+ basic ON ON Share basic cluster information (size, version)
+ crash ON ON Share metadata about Ceph daemon crashes (version, stack traces, etc)
+ device ON ON Share device health metrics (e.g., SMART data, minus potentially identifying info like serial numbers)
+ ident OFF OFF Share a user-provided description and/or contact email for the cluster
+ perf ON OFF Share various performance metrics of a cluster
Enabling Telemetry
------------------
-To allow the *telemetry* module to start sharing data:
+To allow the *telemetry* module to start sharing data::
-.. prompt:: bash #
-
- ceph telemetry on
+ ceph telemetry on
Please note: Telemetry data is licensed under the Community Data License
Agreement - Sharing - Version 1.0 (https://cdla.io/sharing-1-0/). Hence,
-telemetry module can be enabled only after you add ``--license sharing-1-0`` to
-the ``ceph telemetry on`` command.
+the telemetry module can be enabled only after you add ``--license sharing-1-0`` to
+the ``ceph telemetry on`` command.
Once telemetry is on, please consider enabling channels which are off by
-default, such as the ``perf`` channel. ``ceph telemetry on`` output will list the
+default, such as the ``perf`` channel. The ``ceph telemetry on`` output will list the
exact command to enable these channels.
-Telemetry can be disabled at any time with:
-
-.. prompt:: bash #
+Telemetry can be disabled at any time with::
- ceph telemetry off
+ ceph telemetry off
Sample report
-------------
-You can look at what data is reported at any time with the command:
+You can look at what data is reported at any time with the command::
-.. prompt:: bash #
+ ceph telemetry show
- ceph telemetry show
+If telemetry is off, you can preview a sample report with::
-If telemetry is off, you can preview a sample report with:
-
-.. prompt:: bash #
-
- ceph telemetry preview
+ ceph telemetry preview
Generating a sample report might take a few moments in big clusters (clusters
with hundreds of OSDs or more).
To protect your privacy, device reports are generated separately, and data such
as hostname and device serial number is anonymized. The device telemetry is
sent to a different endpoint and does not associate the device data with a
-particular cluster. To see a preview of the device report use the command:
-
-.. prompt:: bash #
-
- ceph telemetry show-device
+particular cluster. To see a preview of the device report, use the command::
-If telemetry is off, you can preview a sample device report with:
+ ceph telemetry show-device
-.. prompt:: bash #
+If telemetry is off, you can preview a sample device report with::
- ceph telemetry preview-device
+ ceph telemetry preview-device
Please note: In order to generate the device report we use Smartmontools
version 7.0 and up, which supports JSON output.
If you have any concerns about privacy with regard to the information included in
this report, please contact the Ceph developers.
-In case you prefer to have a single output of both reports, and telemetry is on, use:
+In case you prefer to have a single output of both reports, and telemetry is on, use::
-.. prompt:: bash #
+ ceph telemetry show-all
- ceph telemetry show-all
+If you would like to view a single output of both reports, and telemetry is off, use::
-If you would like to view a single output of both reports, and telemetry is off, use:
-
-.. prompt:: bash #
-
- ceph telemetry preview-all
+ ceph telemetry preview-all
**Sample report by channel**
-When telemetry is on you can see what data is reported by channel with:
-
-.. prompt:: bash #
+When telemetry is on you can see what data is reported by channel with::
- ceph telemetry show <channel_name>
+ ceph telemetry show <channel_name>
Please note: If telemetry is on, and <channel_name> is disabled, the command
above will output a sample report by that channel, according to the collections
the user is enrolled to. However, this data is not reported, since the channel
is disabled.
-If telemetry is off you can preview a sample report by channel with:
+If telemetry is off you can preview a sample report by channel with::
-.. prompt:: bash #
-
- ceph telemetry preview <channel_name>
+ ceph telemetry preview <channel_name>
Collections
-----------
Collections represent different aspects of data that we collect within a channel.
-List all collections with:
-
-.. prompt:: bash #
-
- ceph telemetry collection ls
-
-::
-
- NAME STATUS DESC
- basic_base NOT REPORTING: NOT OPTED-IN Basic information about the cluster (capacity, number and type of daemons, version, etc.)
- basic_mds_metadata NOT REPORTING: NOT OPTED-IN MDS metadata
- basic_pool_flags NOT REPORTING: NOT OPTED-IN Per-pool flags
- basic_pool_options_bluestore NOT REPORTING: NOT OPTED-IN Per-pool bluestore config options
- basic_pool_usage NOT REPORTING: NOT OPTED-IN Default pool application and usage statistics
- basic_rook_v01 NOT REPORTING: NOT OPTED-IN Basic Rook deployment data
- basic_stretch_cluster NOT REPORTING: NOT OPTED-IN Stretch Mode information for stretch clusters deployments
- basic_usage_by_class NOT REPORTING: NOT OPTED-IN Default device class usage statistics
- crash_base NOT REPORTING: NOT OPTED-IN Information about daemon crashes (daemon type and version, backtrace, etc.)
- device_base NOT REPORTING: NOT OPTED-IN Information about device health metrics
- ident_base NOT REPORTING: NOT OPTED-IN, CHANNEL ident IS OFF User-provided identifying information about the cluster
- perf_memory_metrics NOT REPORTING: NOT OPTED-IN, CHANNEL perf IS OFF Heap stats and mempools for mon and mds
- perf_perf NOT REPORTING: NOT OPTED-IN, CHANNEL perf IS OFF Information about performance counters of the cluster
+List all collections with::
+
+ ceph telemetry collection ls
+
+ NAME STATUS DESC
+ basic_base NOT REPORTING: NOT OPTED-IN Basic information about the cluster (capacity, number and type of daemons, version, etc.)
+ basic_mds_metadata NOT REPORTING: NOT OPTED-IN MDS metadata
+ basic_pool_flags NOT REPORTING: NOT OPTED-IN Per-pool flags
+ basic_pool_options_bluestore NOT REPORTING: NOT OPTED-IN Per-pool bluestore config options
+ basic_pool_usage NOT REPORTING: NOT OPTED-IN Default pool application and usage statistics
+ basic_rook_v01 NOT REPORTING: NOT OPTED-IN Basic Rook deployment data
+ basic_stretch_cluster NOT REPORTING: NOT OPTED-IN Stretch Mode information for stretch cluster deployments
+ basic_usage_by_class NOT REPORTING: NOT OPTED-IN Default device class usage statistics
+ crash_base NOT REPORTING: NOT OPTED-IN Information about daemon crashes (daemon type and version, backtrace, etc.)
+ device_base NOT REPORTING: NOT OPTED-IN Information about device health metrics
+ ident_base NOT REPORTING: NOT OPTED-IN, CHANNEL ident IS OFF User-provided identifying information about the cluster
+ perf_memory_metrics NOT REPORTING: NOT OPTED-IN, CHANNEL perf IS OFF Heap stats and mempools for mon and mds
+ perf_perf NOT REPORTING: NOT OPTED-IN, CHANNEL perf IS OFF Information about performance counters of the cluster
Where:
**DESC**: General description of the collection.
See the diff between the collections you are enrolled to, and the new,
-available collections with:
-
-.. prompt:: bash #
-
- ceph telemetry diff
+available collections with::
-Enroll to the most recent collections with:
+ ceph telemetry diff
-.. prompt:: bash #
+Enroll to the most recent collections with::
- ceph telemetry on
+ ceph telemetry on
-Then enable new channels that are off with:
+Then enable new channels that are off with::
-.. prompt:: bash #
-
- ceph telemetry enable channel <channel_name>
+ ceph telemetry enable channel <channel_name>
Interval
--------
The module compiles and sends a new report every 24 hours by default.
-You can adjust this interval with:
-
-.. prompt:: bash #
+You can adjust this interval with::
- ceph config set mgr mgr/telemetry/interval 72 # report every three days
+ ceph config set mgr mgr/telemetry/interval 72 # report every three days
Status
--------
-The see the current configuration:
+To see the current configuration::
-.. prompt:: bash #
-
- ceph telemetry status
+ ceph telemetry status
Manually sending telemetry
--------------------------
-To ad hoc send telemetry data:
-
-.. prompt:: bash #
+To send telemetry data ad hoc::
- ceph telemetry send
+ ceph telemetry send
-In case telemetry is not enabled (with ``ceph telemetry on``), you need to add
-``--license sharing-1-0`` to ``ceph telemetry send`` command.
+In case telemetry is not enabled (with ``ceph telemetry on``), you need to add
+``--license sharing-1-0`` to the ``ceph telemetry send`` command.
Sending telemetry through a proxy
---------------------------------
If the cluster cannot directly connect to the configured telemetry
endpoint (default *telemetry.ceph.com*), you can configure an HTTP/HTTPS
-proxy server with:
+proxy server with::
-.. prompt:: bash #
+ ceph config set mgr mgr/telemetry/proxy https://10.0.0.1:8080
- ceph config set mgr mgr/telemetry/proxy https://10.0.0.1:8080
+You can also include a *user:pass* if needed::
-You can also include a *user:pass* if needed:
-
-.. prompt:: bash #
-
- ceph config set mgr mgr/telemetry/proxy https://ceph:telemetry@10.0.0.1:8080
+ ceph config set mgr mgr/telemetry/proxy https://ceph:telemetry@10.0.0.1:8080
Contact and Description
-----------------------
A contact and description can be added to the report. This is
-completely optional, and disabled by default.
-
-.. prompt:: bash #
+completely optional, and disabled by default::
- ceph config set mgr mgr/telemetry/contact 'John Doe <john.doe@example.com>'
- ceph config set mgr mgr/telemetry/description 'My first Ceph cluster'
- ceph config set mgr mgr/telemetry/channel_ident true
+ ceph config set mgr mgr/telemetry/contact 'John Doe <john.doe@example.com>'
+ ceph config set mgr mgr/telemetry/description 'My first Ceph cluster'
+ ceph config set mgr mgr/telemetry/channel_ident true
Leaderboard
-----------
To participate in a leaderboard in the `public dashboards
<https://telemetry-public.ceph.com/>`_, run the following command:
-.. prompt:: bash #
+.. prompt:: bash $
ceph config set mgr mgr/telemetry/leaderboard true
total storage capacity and the number of OSDs. To add a description of the
cluster, run a command of the following form:
-.. prompt:: bash #
+.. prompt:: bash $
ceph config set mgr mgr/telemetry/leaderboard_description 'Ceph cluster for Computational Biology at the University of XYZ'