-Crash plugin
+Crash Module
============
-The crash plugin collects information about daemon crashdumps and stores
+The crash module collects information about daemon crashdumps and stores
it in the Ceph cluster for later analysis.
Daemon crashdumps are dumped in /var/lib/ceph/crash by default; this can
be configured with the option 'crash dir'. Crash directories are named by
time and date and a randomly-generated UUID, and contain a metadata file
'meta' and a recent log file, whose "crash_id" is the same as the directory name.
-This plugin allows the metadata about those dumps to be persisted in
+This module allows the metadata about those dumps to be persisted in
the monitors' storage.
Enabling
=====================
-DISKPREDICTION PLUGIN
+Diskprediction Module
=====================
-The *diskprediction* plugin supports two modes: cloud mode and local mode. In cloud mode, the disk and Ceph operating status information is collected from Ceph cluster and sent to a cloud-based DiskPrediction server over the Internet. DiskPrediction server analyzes the data and provides the analytics and prediction results of performance and disk health states for Ceph clusters.
+The *diskprediction* module supports two modes: cloud mode and local mode. In cloud mode, disk and Ceph operating status information is collected from the Ceph cluster and sent to a cloud-based DiskPrediction server over the Internet. The DiskPrediction server analyzes the data and provides analytics and predictions of performance and disk health states for Ceph clusters.
-Local mode doesn't require any external server for data analysis and output results. In local mode, the *diskprediction* plugin uses an internal predictor module for disk prediction service, and then returns the disk prediction result to the Ceph system.
+Local mode doesn't require any external server for data analysis and output of results. In local mode, the *diskprediction* module uses an internal predictor module for the disk prediction service and returns the disk prediction results to the Ceph system.
| Local predictor: 70% accuracy
| Cloud predictor for free: 95% accuracy
Local Mode
----------
-The *diskprediction* plugin leverages Ceph device health check to collect disk health metrics and uses internal predictor module to produce the disk failure prediction and returns back to Ceph. Thus, no connection settings are required in local mode. The local predictor module requires at least six datasets of device health metrics to implement the prediction.
+The *diskprediction* module leverages the Ceph device health check to collect disk health metrics and uses an internal predictor module to produce disk failure predictions, which are returned to Ceph. Thus, no connection settings are required in local mode. The local predictor module requires at least six datasets of device health metrics to make a prediction.
Run the following command to use the local predictor to predict device life expectancy.
Diskprediction Data
===================
-The *diskprediction* plugin actively sends/retrieves the following data to/from DiskPrediction server.
+The *diskprediction* module actively sends/retrieves the following data to/from the DiskPrediction server.
Metrics Data
+----------------------+-----------------------------------------+
- Correlation information between Ceph objects
-- The plugin agent information
-- The plugin agent cluster information
-- The plugin agent host information
+- The module agent information
+- The module agent cluster information
+- The module agent host information
SMART Data
-----------
-- Ceph physical device SMART data (provided by Ceph *devicehealth* plugin)
+- Ceph physical device SMART data (provided by Ceph *devicehealth* module)
Prediction Data
debug mgr = 20
-With logging set to debug for the manager the plugin will print out logging
+With logging set to debug for the manager, the module will print out logging
messages with the prefix *mgr[diskprediction]* for easy filtering.
-hello world
-===========
+Hello World Module
+==================
This is a simple module skeleton for documentation purposes.
Documenting
-----------
-After adding a new mgr module/plugin, be sure to add its documentation to ``doc/mgr/plugin_name.rst``.
-Also, add a link to your new plugin into ``doc/mgr/index.rst``.
+After adding a new mgr module, be sure to add its documentation to ``doc/mgr/module_name.rst``.
+Also, add a link to your new module into ``doc/mgr/index.rst``.
:maxdepth: 1
Installation and Configuration <administrator>
- Writing plugins <plugins>
+ Writing modules <modules>
Writing orchestrator plugins <orchestrator_modules>
- Dashboard plugin <dashboard>
- DiskPrediction plugin <diskprediction>
- Local pool plugin <localpool>
- RESTful plugin <restful>
- Zabbix plugin <zabbix>
- Prometheus plugin <prometheus>
- Influx plugin <influx>
- Hello plugin <hello>
- Telegraf plugin <telegraf>
- Telemetry plugin <telemetry>
- Iostat plugin <iostat>
- Crash plugin <crash>
- Orchestrator CLI plugin <orchestrator_cli>
- Rook plugin <rook>
- DeepSea plugin <deepsea>
- Insights plugin <insights>
- Ansible plugin <ansible>
+ Dashboard module <dashboard>
+ DiskPrediction module <diskprediction>
+ Local pool module <localpool>
+ RESTful module <restful>
+ Zabbix module <zabbix>
+ Prometheus module <prometheus>
+ Influx module <influx>
+ Hello module <hello>
+ Telegraf module <telegraf>
+ Telemetry module <telemetry>
+ Iostat module <iostat>
+ Crash module <crash>
+ Orchestrator CLI module <orchestrator_cli>
+ Rook module <rook>
+ DeepSea module <deepsea>
+ Insights module <insights>
+ Ansible module <ansible>
SSH orchestrator <ssh>
=============
-Influx Plugin
+Influx Module
=============
-The influx plugin continuously collects and sends time series data to an
+The influx module continuously collects and sends time series data to an
influxdb database.
-The influx plugin was introduced in the 13.x *Mimic* release.
+The influx module was introduced in the 13.x *Mimic* release.
--------
Enabling
-Insights plugin
+Insights Module
===============
-The insights plugin collects and exposes system information to the Insights Core
+The insights module collects and exposes system information to the Insights Core
data analysis framework. It is intended to replace explicit interrogation of
Ceph CLIs and daemon admin sockets, reducing the API surface that Insights
depends on. The insights report contains the following:
iostat
======
-This plugin shows the current throughput and IOPS done on the Ceph cluster.
+This module shows the current throughput and IOPS done on the Ceph cluster.
Enabling
--------
-Local pool plugin
+Local Pool Module
=================
-The *localpool* plugin can automatically create RADOS pools that are
+The *localpool* module can automatically create RADOS pools that are
localized to a subset of the overall cluster. For example, by default, it will
create a pool for each distinct rack in the cluster. This can be useful for some
deployments that want to distribute some data locally as well as globally across the cluster.
--- /dev/null
+
+
+.. _mgr-module-dev:
+
+ceph-mgr module developer's guide
+=================================
+
+.. warning::
+
+ This is developer documentation, describing Ceph internals that
+ are only relevant to people writing ceph-mgr modules.
+
+Creating a module
+-----------------
+
+In pybind/mgr/, create a python module. Within your module, create a class
+that inherits from ``MgrModule``. For ceph-mgr to detect your module, your
+directory must contain a file called `module.py`.
+
+The most important methods to override are:
+
+* a ``serve`` member function for server-type modules. This
+ function should block forever.
+* a ``notify`` member function if your module needs to
+ take action when new cluster data is available.
+* a ``handle_command`` member function if your module
+ exposes CLI commands.
+
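+For illustration, a minimal skeleton wiring up these hooks might look
+like the following (``MyModule`` is a made-up name, and the method
+bodies are placeholders)::
+
+    import threading
+    from mgr_module import MgrModule
+
+    class MyModule(MgrModule):
+        def __init__(self, *args, **kwargs):
+            super(MyModule, self).__init__(*args, **kwargs)
+            self._shutdown = threading.Event()
+
+        def serve(self):
+            # Server-type modules block here until shutdown() is called.
+            self._shutdown.wait()
+
+        def shutdown(self):
+            self._shutdown.set()
+
+        def notify(self, notify_type, notify_id):
+            # Called when new cluster data is available; read it if interested.
+            pass
+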
+Some modules interface with external orchestrators to deploy
+Ceph services. These also inherit from ``Orchestrator``, which adds
+additional methods to the base ``MgrModule`` class. See
+:ref:`Orchestrator modules <orchestrator-modules>` for more on
+creating these modules.
+
+Installing a module
+-------------------
+
+Once your module is present in the location set by the
+``mgr module path`` configuration setting, you can enable it
+via the ``ceph mgr module enable`` command::
+
+ ceph mgr module enable mymodule
+
+Note that the MgrModule interface is not stable, so any modules maintained
+outside of the Ceph tree are liable to break when run against any newer
+or older versions of Ceph.
+
+Logging
+-------
+
+``MgrModule`` instances have a ``log`` property which is a logger instance that
+sends log messages into the Ceph logging layer where they will be recorded
+in the mgr daemon's log file.
+
+Use it the same way you would any other python logger. The python
+log levels debug, info, warn, err are mapped into the Ceph
+severities 20, 4, 1 and 0 respectively.
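+
+For example, from inside a module method::
+
+    self.log.debug("refreshing state")   # recorded at Ceph severity 20
+    self.log.error("connection failed")  # recorded at Ceph severity 0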
+
+Exposing commands
+-----------------
+
+Set the ``COMMANDS`` class attribute of your module to a list of dicts
+like this::
+
+ COMMANDS = [
+ {
+ "cmd": "foobar name=myarg,type=CephString",
+ "desc": "Do something awesome",
+ "perm": "rw",
+ # optional:
+ "poll": "true"
+ }
+ ]
+
+The ``cmd`` part of each entry is parsed in the same way as internal
+Ceph mon and admin socket commands (see mon/MonCommands.h in
+the Ceph source for examples). Note that the "poll" field is optional
+and defaults to False; when true, it indicates to the ``ceph`` CLI
+that it should call this command repeatedly and output the results (see
+``ceph -h`` and its ``--period`` option).
+
+Each command is expected to return a tuple ``(retval, stdout, stderr)``.
+``retval`` is an integer representing a libc error code (e.g. EINVAL,
+EPERM, or 0 for no error), ``stdout`` is a string containing any
+non-error output, and ``stderr`` is a string containing any progress or
+error explanation output. Either or both of the two strings may be empty.
+
+Implement the ``handle_command`` function to respond to the commands
+when they are sent:
+
+
+.. py:currentmodule:: mgr_module
+.. automethod:: MgrModule.handle_command
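+
+As a sketch, a handler for the hypothetical ``foobar`` command above
+might look like this (check the generated signature above for the exact
+argument list)::
+
+    import errno
+
+    def handle_command(self, inbuf, cmd):
+        if cmd['prefix'] == 'foobar':
+            return 0, 'did something awesome to %s' % cmd['myarg'], ''
+        return -errno.EINVAL, '', 'unknown command %s' % cmd['prefix']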
+
+Configuration options
+---------------------
+
+Modules can load and store configuration options using the
+``set_module_option`` and ``get_module_option`` methods.
+
+.. note:: Use ``set_module_option`` and ``get_module_option`` to
+ manage user-visible configuration options that are not blobs (like
+ certificates). If you want to persist module-internal data or
+ binary configuration data consider using the `KV store`_.
+
+You must declare your available configuration options in the
+``MODULE_OPTIONS`` class attribute, like this:
+
+::
+
+ MODULE_OPTIONS = [
+ {
+ "name": "my_option"
+ }
+ ]
+
+If you try to use ``set_module_option`` or ``get_module_option`` on options
+not declared in ``MODULE_OPTIONS``, an exception will be raised.
+
+You may choose to provide setter commands in your module to perform
+high level validation. Users can also modify configuration using
+the normal `ceph config set` command, where the configuration options
+for a mgr module are named like `mgr/<module name>/<option>`.
+
+If a configuration option is different depending on which node the mgr
+is running on, then use *localized* configuration
+(``get_localized_module_option``, ``set_localized_module_option``).
+This may be necessary for options such as what address to listen on.
+Localized options may also be set externally with ``ceph config set``,
+where the key name is like ``mgr/<module name>/<mgr id>/<option>``.
+
+If you need to load and store data (e.g. something larger, binary, or multiline),
+use the KV store instead of configuration options (see next section).
+
+Hints for using config options:
+
+* Reads are fast: ceph-mgr keeps a local in-memory copy, so in many cases
+  you can just call ``get_module_option`` every time you use an option,
+  rather than copying it out into a variable.
+* Writes block until the value is persisted (i.e. round trip to the monitor),
+ but reads from another thread will see the new value immediately.
+* If a user has used `config set` from the command line, then the new
+ value will become visible to `get_module_option` immediately, although the
+ mon->mgr update is asynchronous, so `config set` will return a fraction
+ of a second before the new value is visible on the mgr.
+* To delete a config value (i.e. revert to default), just pass ``None`` to
+  ``set_module_option``.
+
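+As a short illustration, reading and writing the ``my_option`` declared
+above might look like::
+
+    current = self.get_module_option('my_option')
+    if current is None:
+        self.set_module_option('my_option', 'some-default')
+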
+.. automethod:: MgrModule.get_module_option
+.. automethod:: MgrModule.set_module_option
+.. automethod:: MgrModule.get_localized_module_option
+.. automethod:: MgrModule.set_localized_module_option
+
+KV store
+--------
+
+Modules have access to a private (per-module) key value store, which
+is implemented using the monitor's "config-key" commands. Use
+the ``set_store`` and ``get_store`` methods to access the KV store from
+your module.
+
+The KV store commands work in a similar way to the configuration
+commands. Reads are fast, operating from a local cache. Writes block
+on persistence and do a round trip to the monitor.
+
+This data can be accessed from outside of ceph-mgr using the
+``ceph config-key [get|set]`` commands. Key names follow the same
+conventions as configuration options. Note that any values updated
+from outside of ceph-mgr will not be seen by running modules until
+the next restart. Users should be discouraged from accessing module KV
+data externally -- if it is necessary for users to populate data, modules
+should provide special commands to set the data via the module.
+
+Use the ``get_store_prefix`` function to enumerate keys within
+a particular prefix (i.e. all keys starting with a particular substring).
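+
+A sketch of typical usage (the key names and the ``report`` object are
+illustrative only)::
+
+    import json
+
+    self.set_store('report/latest', json.dumps(report))
+    raw = self.get_store('report/latest')  # returns None if the key is unset
+    for key, value in self.get_store_prefix('report/').items():
+        self.log.debug("found stored key %s", key)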
+
+
+.. automethod:: MgrModule.get_store
+.. automethod:: MgrModule.set_store
+.. automethod:: MgrModule.get_localized_store
+.. automethod:: MgrModule.set_localized_store
+.. automethod:: MgrModule.get_store_prefix
+
+
+Accessing cluster data
+----------------------
+
+Modules have access to the in-memory copies of the Ceph cluster's
+state that the mgr maintains. Accessor functions are exposed
+as members of MgrModule.
+
+Calls that access the cluster or daemon state are generally going
+from Python into native C++ routines. There is some overhead to this,
+but much less than for example calling into a REST API or calling into
+an SQL database.
+
+There are no consistency rules about access to cluster structures or
+daemon metadata. For example, an OSD might exist in OSDMap but
+have no metadata, or vice versa. On a healthy cluster these
+will be very rare transient states, but modules should be written
+to cope with the possibility.
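+
+For example, a module iterating over OSDs should tolerate missing
+metadata, along these lines::
+
+    osd_map = self.get('osd_map')
+    for osd in osd_map['osds']:
+        meta = self.get_metadata('osd', str(osd['osd']))
+        if not meta:
+            continue  # metadata can lag behind the map; skip for now
+        self.log.debug("osd.%s runs %s", osd['osd'], meta.get('ceph_version'))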
+
+Note that these accessors must not be called in the module's ``__init__``
+function. This will result in a circular locking exception.
+
+.. automethod:: MgrModule.get
+.. automethod:: MgrModule.get_server
+.. automethod:: MgrModule.list_servers
+.. automethod:: MgrModule.get_metadata
+.. automethod:: MgrModule.get_daemon_status
+.. automethod:: MgrModule.get_perf_schema
+.. automethod:: MgrModule.get_counter
+.. automethod:: MgrModule.get_mgr_id
+
+Exposing health checks
+----------------------
+
+Modules can raise first class Ceph health checks, which will be reported
+in the output of ``ceph status`` and in other places that report on the
+cluster's health.
+
+If you use ``set_health_checks`` to report a problem, be sure to call
+it again with an empty dict to clear your health check when the problem
+goes away.
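+
+A sketch of raising and later clearing a check (the check name is made
+up for illustration)::
+
+    self.set_health_checks({
+        'MYMODULE_BACKEND_DOWN': {
+            'severity': 'warning',
+            'summary': 'backend service unreachable',
+            'detail': ['connection to backend refused'],
+        },
+    })
+    # ...and once the problem is resolved:
+    self.set_health_checks({})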
+
+.. automethod:: MgrModule.set_health_checks
+
+What if the mons are down?
+--------------------------
+
+The manager daemon gets much of its state (such as the cluster maps)
+from the monitor. If the monitor cluster is inaccessible, whichever
+manager was active will continue to run, with the latest state it saw
+still in memory.
+
+However, if you are creating a module that shows the cluster state
+to the user then you may well not want to mislead them by showing
+them that out of date state.
+
+To check if the manager daemon currently has a connection to
+the monitor cluster, use this function:
+
+.. automethod:: MgrModule.have_mon_connection
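+
+For instance, a module that reports cluster state might guard its output
+like this::
+
+    if not self.have_mon_connection():
+        self.log.warning("no mon connection; state shown may be stale")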
+
+Reporting if your module cannot run
+-----------------------------------
+
+If your module cannot be run for any reason (such as a missing dependency),
+then you can report that by implementing the ``can_run`` function.
+
+.. automethod:: MgrModule.can_run
+
+Note that this will only work properly if your module can always be imported:
+if you are importing a dependency that may be absent, then do it in a
+try/except block so that your module can be loaded far enough to use
+``can_run`` even if the dependency is absent.
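+
+For instance, guarding an optional dependency (``influxdb`` is just an
+example of such a dependency)::
+
+    try:
+        import influxdb
+    except ImportError:
+        influxdb = None
+
+    class Module(MgrModule):
+        def can_run(self):
+            if influxdb is None:
+                return False, "influxdb python module not found"
+            return True, ""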
+
+Sending commands
+----------------
+
+A non-blocking facility is provided for sending monitor commands
+to the cluster.
+
+.. automethod:: MgrModule.send_command
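+
+A sketch of issuing a mon command and waiting for the result, using the
+``CommandResult`` helper from ``mgr_module``::
+
+    import json
+    from mgr_module import CommandResult
+
+    result = CommandResult('')
+    self.send_command(result, 'mon', '', json.dumps({
+        'prefix': 'osd pool ls',
+        'format': 'json',
+    }), '')
+    retval, stdout, stderr = result.wait()  # blocks until the mon replies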
+
+Receiving notifications
+-----------------------
+
+The manager daemon calls the ``notify`` function on all active modules
+when certain important pieces of cluster state are updated, such as the
+cluster maps.
+
+The actual data is not passed into this function, rather it is a cue for
+the module to go and read the relevant structure if it is interested. Most
+modules ignore most types of notification: to ignore a notification
+simply return from this function without doing anything.
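+
+For example, a module interested only in OSD map changes might do::
+
+    def notify(self, notify_type, notify_id):
+        if notify_type != 'osd_map':
+            return
+        osd_map = self.get('osd_map')
+        # ...react to the new map...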
+
+.. automethod:: MgrModule.notify
+
+Accessing RADOS or CephFS
+-------------------------
+
+If you want to use the librados python API to access data stored in
+the Ceph cluster, you can access the ``rados`` attribute of your
+``MgrModule`` instance. This is an instance of ``rados.Rados`` which
+has been constructed for you using the existing Ceph context (an internal
+detail of the C++ Ceph code) of the mgr daemon.
+
+Always use this specially constructed librados instance instead of
+constructing one by hand.
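+
+A sketch of writing an object through that instance (the pool name is
+hypothetical, and ``payload`` is assumed to be bytes)::
+
+    ioctx = self.rados.open_ioctx('mymodule-pool')
+    try:
+        ioctx.write_full('state', payload)
+    finally:
+        ioctx.close()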
+
+Similarly, if you are using libcephfs to access the filesystem, then
+use the libcephfs ``create_with_rados`` to construct it from the
+``MgrModule.rados`` librados instance, and thereby inherit the correct context.
+
+Remember that your module may be running while other parts of the cluster
+are down: do not assume that librados or libcephfs calls will return
+promptly -- consider whether to use timeouts or to block if the rest of
+the cluster is not fully available.
+
+Implementing standby mode
+-------------------------
+
+For some modules, it is useful to run on standby manager daemons as well
+as on the active daemon. For example, an HTTP server can usefully
+serve HTTP redirect responses from the standby managers so that
+the user can point their browser at any of the manager daemons without
+having to worry about which one is active.
+
+Standby manager daemons look for a subclass of ``StandbyModule``
+in each module. If the class is not found then the module is not
+used at all on standby daemons. If the class is found, then
+its ``serve`` method is called. Implementations of ``StandbyModule``
+must inherit from ``mgr_module.MgrStandbyModule``.
+
+The interface of ``MgrStandbyModule`` is much more restricted than that of
+``MgrModule`` -- none of the Ceph cluster state is available to
+the module. ``serve`` and ``shutdown`` methods are used in the same
+way as a normal module class. The ``get_active_uri`` method enables
+the standby module to discover the address of its active peer in
+order to make redirects. See the ``MgrStandbyModule`` definition
+in the Ceph source code for the full list of methods.
+
+For an example of how to use this interface, look at the source code
+of the ``dashboard`` module.
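+
+A minimal sketch of such a standby class (the actual redirect logic is
+elided)::
+
+    from mgr_module import MgrStandbyModule
+
+    class StandbyModule(MgrStandbyModule):
+        def serve(self):
+            active_uri = self.get_active_uri()
+            # ...serve HTTP redirects pointing at active_uri...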
+
+Communicating between modules
+-----------------------------
+
+Modules can invoke member functions of other modules.
+
+.. automethod:: MgrModule.remote
+
+Be sure to handle ``ImportError`` to deal with the case that the desired
+module is not enabled.
+
+If the remote method raises a python exception, this will be converted
+to a RuntimeError on the calling side, where the message string describes
+the exception that was originally thrown. If your logic intends
+to handle certain errors cleanly, it is better to modify the remote method
+to return an error value instead of raising an exception.
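+
+For example (the module and method names here are hypothetical)::
+
+    try:
+        report = self.remote('othermodule', 'get_report', max_age=60)
+    except ImportError:
+        report = None  # othermodule is not enabled
+    except RuntimeError as e:
+        self.log.error("remote call failed: %s", e)
+        report = None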
+
+At the time of writing, inter-module calls are implemented without
+copies or serialization, so when you return a python object, you're
+returning a reference to that object to the calling module. It
+is recommended *not* to rely on this reference passing, as in future the
+implementation may change to serialize arguments and return
+values.
+
+Shutting down cleanly
+---------------------
+
+If a module implements the ``serve()`` method, it should also implement
+the ``shutdown()`` method to shut down cleanly: misbehaving modules
+may otherwise prevent clean shutdown of ceph-mgr.
+
+Limitations
+-----------
+
+It is not possible to call back into C++ code from a module's
+``__init__()`` method. For example, calling ``self.get_module_option()`` at
+this point will result in an assertion failure in ceph-mgr. For modules
+that implement the ``serve()`` method, it usually makes sense to do most
+initialization inside that method instead.
+
+Is something missing?
+---------------------
+
+The ceph-mgr python interface is not set in stone. If you have a need
+that is not satisfied by the current interface, please bring it up
+on the ceph-devel mailing list. While it is desired to avoid bloating
+the interface, it is not generally very hard to expose existing data
+to the Python code when there is a good reason.
+
+++ /dev/null
-
-
-.. _mgr-module-dev:
-
-ceph-mgr module developer's guide
-=================================
-
-.. warning::
-
- This is developer documentation, describing Ceph internals that
- are only relevant to people writing ceph-mgr modules.
-
-Creating a plugin
------------------
-
-In pybind/mgr/, create a python module. Within your module, create a class
-that inherits from ``MgrModule``. For ceph-mgr to detect your module, your
-directory must contain a file called `module.py`.
-
-The most important methods to override are:
-
-* a ``serve`` member function for server-type modules. This
- function should block forever.
-* a ``notify`` member function if your module needs to
- take action when new cluster data is available.
-* a ``handle_command`` member function if your module
- exposes CLI commands.
-
-Some modules interface with external orchestrators to deploy
-Ceph services. These also inherit from ``Orchestrator``, which adds
-additional methods to the base ``MgrModule`` class. See
-:ref:`Orchestrator modules <orchestrator-modules>` for more on
-creating these modules.
-
-Installing a plugin
--------------------
-
-Once your module is present in the location set by the
-``mgr module path`` configuration setting, you can enable it
-via the ``ceph mgr module enable`` command::
-
- ceph mgr module enable mymodule
-
-Note that the MgrModule interface is not stable, so any modules maintained
-outside of the Ceph tree are liable to break when run against any newer
-or older versions of Ceph.
-
-Logging
--------
-
-``MgrModule`` instances have a ``log`` property which is a logger instance that
-sends log messages into the Ceph logging layer where they will be recorded
-in the mgr daemon's log file.
-
-Use it the same way you would any other python logger. The python
-log levels debug, info, warn, err are mapped into the Ceph
-severities 20, 4, 1 and 0 respectively.
-
-Exposing commands
------------------
-
-Set the ``COMMANDS`` class attribute of your plugin to a list of dicts
-like this::
-
- COMMANDS = [
- {
- "cmd": "foobar name=myarg,type=CephString",
- "desc": "Do something awesome",
- "perm": "rw",
- # optional:
- "poll": "true"
- }
- ]
-
-The ``cmd`` part of each entry is parsed in the same way as internal
-Ceph mon and admin socket commands (see mon/MonCommands.h in
-the Ceph source for examples). Note that the "poll" field is optional,
-and is set to False by default; this indicates to the ``ceph`` CLI
-that it should call this command repeatedly and output results (see
-``ceph -h`` and its ``--period`` option).
-
-Each command is expected to return a tuple ``(retval, stdout, stderr)``.
-``retval`` is an integer representing a libc error code (e.g. EINVAL,
-EPERM, or 0 for no error), ``stdout`` is a string containing any
-non-error output, and ``stderr`` is a string containing any progress or
-error explanation output. Either or both of the two strings may be empty.
-
-Implement the ``handle_command`` function to respond to the commands
-when they are sent:
-
-
-.. py:currentmodule:: mgr_module
-.. automethod:: MgrModule.handle_command
-
-Configuration options
----------------------
-
-Modules can load and store configuration options using the
-``set_module_option`` and ``get_module_option`` methods.
-
-.. note:: Use ``set_module_option`` and ``get_module_option`` to
- manage user-visible configuration options that are not blobs (like
- certificates). If you want to persist module-internal data or
- binary configuration data consider using the `KV store`_.
-
-You must declare your available configuration options in the
-``MODULE_OPTIONS`` class attribute, like this:
-
-::
-
- MODULE_OPTIONS = [
- {
- "name": "my_option"
- }
- ]
-
-If you try to use set_module_option or get_module_option on options not declared
-in ``MODULE_OPTIONS``, an exception will be raised.
-
-You may choose to provide setter commands in your module to perform
-high level validation. Users can also modify configuration using
-the normal `ceph config set` command, where the configuration options
-for a mgr module are named like `mgr/<module name>/<option>`.
-
-If a configuration option is different depending on which node the mgr
-is running on, then use *localized* configuration (
-``get_localized_module_option``, ``set_localized_module_option``).
-This may be necessary for options such as what address to listen on.
-Localized options may also be set externally with ``ceph config set``,
-where they key name is like ``mgr/<module name>/<mgr id>/<option>``
-
-If you need to load and store data (e.g. something larger, binary, or multiline),
-use the KV store instead of configuration options (see next section).
-
-Hints for using config options:
-
-* Reads are fast: ceph-mgr keeps a local in-memory copy, so in many cases
- you can just do a get_module_option every time you use a option, rather than
- copying it out into a variable.
-* Writes block until the value is persisted (i.e. round trip to the monitor),
- but reads from another thread will see the new value immediately.
-* If a user has used `config set` from the command line, then the new
- value will become visible to `get_module_option` immediately, although the
- mon->mgr update is asynchronous, so `config set` will return a fraction
- of a second before the new value is visible on the mgr.
-* To delete a config value (i.e. revert to default), just pass ``None`` to
- set_module_option.
-
-.. automethod:: MgrModule.get_module_option
-.. automethod:: MgrModule.set_module_option
-.. automethod:: MgrModule.get_localized_module_option
-.. automethod:: MgrModule.set_localized_module_option
-
-KV store
---------
-
-Modules have access to a private (per-module) key value store, which
-is implemented using the monitor's "config-key" commands. Use
-the ``set_store`` and ``get_store`` methods to access the KV store from
-your module.
-
-The KV store commands work in a similar way to the configuration
-commands. Reads are fast, operating from a local cache. Writes block
-on persistence and do a round trip to the monitor.
-
-This data can be access from outside of ceph-mgr using the
-``ceph config-key [get|set]`` commands. Key names follow the same
-conventions as configuration options. Note that any values updated
-from outside of ceph-mgr will not be seen by running modules until
-the next restart. Users should be discouraged from accessing module KV
-data externally -- if it is necessary for users to populate data, modules
-should provide special commands to set the data via the module.
-
-Use the ``get_store_prefix`` function to enumerate keys within
-a particular prefix (i.e. all keys starting with a particular substring).
-
-
-.. automethod:: MgrModule.get_store
-.. automethod:: MgrModule.set_store
-.. automethod:: MgrModule.get_localized_store
-.. automethod:: MgrModule.set_localized_store
-.. automethod:: MgrModule.get_store_prefix
-
-
-Accessing cluster data
-----------------------
-
-Modules have access to the in-memory copies of the Ceph cluster's
-state that the mgr maintains. Accessor functions as exposed
-as members of MgrModule.
-
-Calls that access the cluster or daemon state are generally going
-from Python into native C++ routines. There is some overhead to this,
-but much less than for example calling into a REST API or calling into
-an SQL database.
-
-There are no consistency rules about access to cluster structures or
-daemon metadata. For example, an OSD might exist in OSDMap but
-have no metadata, or vice versa. On a healthy cluster these
-will be very rare transient states, but plugins should be written
-to cope with the possibility.
-
-Note that these accessors must not be called in the modules ``__init__``
-function. This will result in a circular locking exception.
-
-.. automethod:: MgrModule.get
-.. automethod:: MgrModule.get_server
-.. automethod:: MgrModule.list_servers
-.. automethod:: MgrModule.get_metadata
-.. automethod:: MgrModule.get_daemon_status
-.. automethod:: MgrModule.get_perf_schema
-.. automethod:: MgrModule.get_counter
-.. automethod:: MgrModule.get_mgr_id
-
-Exposing health checks
-----------------------
-
-Modules can raise first class Ceph health checks, which will be reported
-in the output of ``ceph status`` and in other places that report on the
-cluster's health.
-
-If you use ``set_health_checks`` to report a problem, be sure to call
-it again with an empty dict to clear your health check when the problem
-goes away.
-
-.. automethod:: MgrModule.set_health_checks
-
-What if the mons are down?
---------------------------
-
-The manager daemon gets much of its state (such as the cluster maps)
-from the monitor. If the monitor cluster is inaccessible, whichever
-manager was active will continue to run, with the latest state it saw
-still in memory.
-
-However, if you are creating a module that shows the cluster state
-to the user then you may well not want to mislead them by showing
-them that out of date state.
-
-To check if the manager daemon currently has a connection to
-the monitor cluster, use this function:
-
-.. automethod:: MgrModule.have_mon_connection
-
-Reporting if your module cannot run
------------------------------------
-
-If your module cannot be run for any reason (such as a missing dependency),
-then you can report that by implementing the ``can_run`` function.
-
-.. automethod:: MgrModule.can_run
-
-Note that this will only work properly if your module can always be imported:
-if you are importing a dependency that may be absent, then do it in a
-try/except block so that your module can be loaded far enough to use
-``can_run`` even if the dependency is absent.
-
-Sending commands
-----------------
-
-A non-blocking facility is provided for sending monitor commands
-to the cluster.
-
-.. automethod:: MgrModule.send_command
-
-Receiving notifications
------------------------
-
-The manager daemon calls the ``notify`` function on all active modules
-when certain important pieces of cluster state are updated, such as the
-cluster maps.
-
-The actual data is not passed into this function, rather it is a cue for
-the module to go and read the relevant structure if it is interested. Most
-modules ignore most types of notification: to ignore a notification
-simply return from this function without doing anything.
-
-.. automethod:: MgrModule.notify
-
-Accessing RADOS or CephFS
--------------------------
-
-If you want to use the librados python API to access data stored in
-the Ceph cluster, you can access the ``rados`` attribute of your
-``MgrModule`` instance. This is an instance of ``rados.Rados`` which
-has been constructed for you using the existing Ceph context (an internal
-detail of the C++ Ceph code) of the mgr daemon.
-
-Always use this specially constructed librados instance instead of
-constructing one by hand.
-
-Similarly, if you are using libcephfs to access the filesystem, then
-use the libcephfs ``create_with_rados`` to construct it from the
-``MgrModule.rados`` librados instance, and thereby inherit the correct context.
-
-Remember that your module may be running while other parts of the cluster
-are down: do not assume that librados or libcephfs calls will return
-promptly -- consider whether to use timeouts or to block if the rest of
-the cluster is not fully available.
-
-Implementing standby mode
--------------------------
-
-For some modules, it is useful to run on standby manager daemons as well
-as on the active daemon. For example, an HTTP server can usefully
-serve HTTP redirect responses from the standby managers so that
-the user can point his browser at any of the manager daemons without
-having to worry about which one is active.
-
-Standby manager daemons look for a subclass of ``StandbyModule``
-in each module. If the class is not found then the module is not
-used at all on standby daemons. If the class is found, then
-its ``serve`` method is called. Implementations of ``StandbyModule``
-must inherit from ``mgr_module.MgrStandbyModule``.
-
-The interface of ``MgrStandbyModule`` is much restricted compared to
-``MgrModule`` -- none of the Ceph cluster state is available to
-the module. ``serve`` and ``shutdown`` methods are used in the same
-way as a normal module class. The ``get_active_uri`` method enables
-the standby module to discover the address of its active peer in
-order to make redirects. See the ``MgrStandbyModule`` definition
-in the Ceph source code for the full list of methods.
-
-For an example of how to use this interface, look at the source code
-of the ``dashboard`` module.
-
-Communicating between modules
------------------------------
-
-Modules can invoke member functions of other modules.
-
-.. automethod:: MgrModule.remote
-
-Be sure to handle ``ImportError`` to deal with the case that the desired
-module is not enabled.
-
-If the remote method raises a python exception, this will be converted
-to a RuntimeError on the calling side, where the message string describes
-the exception that was originally thrown. If your logic intends
-to handle certain errors cleanly, it is better to modify the remote method
-to return an error value instead of raising an exception.
-
-At time of writing, inter-module calls are implemented without
-copies or serialization, so when you return a python object, you're
-returning a reference to that object to the calling module. It
-is recommend *not* to rely on this reference passing, as in future the
-implementation may change to serialize arguments and return
-values.
-
-
-Logging
--------
-
-Use your module's ``log`` attribute as your logger. This is a logger
-configured to output via the ceph logging framework, to the local ceph-mgr
-log files.
-
-Python log severities are mapped to ceph severities as follows:
-
-* DEBUG is 20
-* INFO is 4
-* WARN is 1
-* ERR is 0
-
-Shutting down cleanly
----------------------
-
-If a module implements the ``serve()`` method, it should also implement
-the ``shutdown()`` method to shutdown cleanly: misbehaving modules
-may otherwise prevent clean shutdown of ceph-mgr.
-
-Limitations
------------
-
-It is not possible to call back into C++ code from a module's
-``__init__()`` method. For example calling ``self.get_module_option()`` at
-this point will result in an assertion failure in ceph-mgr. For modules
-that implement the ``serve()`` method, it usually makes sense to do most
-initialization inside that method instead.
-
-Is something missing?
----------------------
-
-The ceph-mgr python interface is not set in stone. If you have a need
-that is not satisfied by the current interface, please bring it up
-on the ceph-devel mailing list. While it is desired to avoid bloating
-the interface, it is not generally very hard to expose existing data
-to the Python code when there is a good reason.
-
=================
-Prometheus plugin
+Prometheus Module
=================
Provides a Prometheus exporter to pass on Ceph performance counters
from the collection point in ceph-mgr. Ceph-mgr receives MMgrReport
messages from all MgrClient processes (mons and OSDs, for instance)
with performance counter schema data and actual counter data, and keeps
-a circular buffer of the last N samples. This plugin creates an HTTP
+a circular buffer of the last N samples. This module creates an HTTP
endpoint (like all Prometheus exporters) and retrieves the latest sample
of every counter when polled (or "scraped" in Prometheus terminology).
The HTTP path and query parameters are ignored; all extant counters
-restful plugin
+Restful Module
==============
-RESTful plugin offers the REST API access to the status of the cluster
+The RESTful module offers REST API access to the status of the cluster
over an SSL-secured connection.
Enabling
===============
-Telegraf Plugin
+Telegraf Module
===============
-The Telegraf plugin collects and sends statistics series to a Telegraf agent.
+The Telegraf module collects and sends statistics series to a Telegraf agent.
The Telegraf agent can buffer, aggregate, parse and process the data before
sending it to an output which can be InfluxDB, ElasticSearch and many more.
use the socket listener. The module can send statistics over UDP, TCP or
a UNIX socket.
-The Telegraf plugin was introduced in the 13.x *Mimic* release.
+The Telegraf module was introduced in the 13.x *Mimic* release.
--------
Enabling
-Telemetry plugin
+Telemetry Module
================
-The telemetry plugin sends anonymous data about the cluster, in which it is running, back to the Ceph project.
+The telemetry module sends anonymous data about the cluster, in which it is running, back to the Ceph project.
The data being sent back to the project does not contain any sensitive data like pool names, object names, object contents or hostnames.
-Zabbix plugin
+Zabbix Module
=============
-The Zabbix plugin actively sends information to a Zabbix server like:
+The Zabbix module actively sends information such as the following to a Zabbix server:
- Ceph status
- I/O operations
Requirements
------------
-The plugin requires that the *zabbix_sender* executable is present on *all*
+The module requires that the *zabbix_sender* executable is present on *all*
machines running ceph-mgr. It can be installed on most distributions using
the package manager.
Template
^^^^^^^^
A `template <https://raw.githubusercontent.com/ceph/ceph/9c54334b615362e0a60442c2f41849ed630598ab/src/pybind/mgr/zabbix/zabbix_template.xml>`_
-(XML) to be used on the Zabbix server can be found in the source directory of the plugin.
+(XML) to be used on the Zabbix server can be found in the source directory of the module.
This template contains all items and a few triggers. You can customize the triggers afterwards to fit your needs.
[mgr]
debug mgr = 20
-With logging set to debug for the manager the plugin will print various logging
-lines prefixed with *mgr[zabbix]* for easy filtering.
-
+With logging set to debug for the manager, the module will print various logging
+lines prefixed with *mgr[zabbix]* for easy filtering.
\ No newline at end of file
responsible for keeping track of runtime metrics and the current
state of the Ceph cluster, including storage utilization, current
performance metrics, and system load. The Ceph Manager daemons also
- host python-based plugins to manage and expose Ceph cluster
+ host python-based modules to manage and expose Ceph cluster
information, including a web-based :ref:`mgr-dashboard` and
`REST API`_. At least two managers are normally required for high
availability.