of event types or for all "Removed" and "Created" event types (which is the default). The
notification may also filter out events based on matches of the prefixes and
suffixes of (1) the keys, (2) the metadata attributes attached to the object,
or (3) the object tags. Regular expression matching can also be used on these
to create filters. Notifications and topics have a many-to-many relationship.
A topic can receive multiple notifications and a notification could be delivered
to multiple topics.
S3 Bucket Notification Compatibility <s3-notification-compatibility>
.. note:: To enable bucket notifications API, the ``rgw_enable_apis``
   configuration parameter should contain: "notifications".
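For example, on a deployment managed with the ``ceph`` CLI, the parameter could be set with something like the following (the API list shown is illustrative; keep whichever APIs your deployment already serves):

```bash
# Illustrative only: enable bucket notifications alongside the S3 API for
# all RGW daemons. Adjust the list to match the APIs you already enable.
ceph config set client.rgw rgw_enable_apis "s3, notifications"
```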
Notification Reliability
------------------------
.. note:: If the notification fails with an error, cannot be delivered, or
times out, it is retried until it is successfully acknowledged.
   You can control its retry with ``time_to_live``/``max_retries`` to have a time/retry limit and
   control the retry frequency with ``retry_sleep_duration``.
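As a rough illustration of how these three settings interact, here is a minimal Python sketch. It is an approximation of the retry logic described above, not RGW's actual implementation: it assumes a notification is retried every ``retry_sleep_duration`` seconds until either ``time_to_live`` seconds elapse or ``max_retries`` attempts have been made, whichever comes first.

```python
# Sketch only (assumption): a notification is retried every
# `retry_sleep_duration` seconds and expires when either `time_to_live`
# seconds elapse or `max_retries` attempts were made, whichever comes first.
# A zero value means "unlimited" for that limit, mirroring the
# rgw_topic_persistency_* defaults described above.
def retries_before_expiry(time_to_live: int, max_retries: int,
                          retry_sleep_duration: int) -> int:
    """Return how many retry attempts fit inside both limits,
    or -1 if neither limit is set (infinite retries)."""
    limits = []
    if max_retries > 0:
        limits.append(max_retries)
    if time_to_live > 0 and retry_sleep_duration > 0:
        limits.append(time_to_live // retry_sleep_duration)
    return min(limits) if limits else -1

# A 300-second TTL with one retry per 60 seconds caps retries at 5,
# even though max_retries alone would allow 10.
print(retries_before_expiry(time_to_live=300, max_retries=10,
                            retry_sleep_duration=60))  # prints 5
```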
.. tip:: To minimize the latency added by asynchronous notification, we
   recommend placing the "log" pool on fast media.
.. prompt:: bash #
   radosgw-admin topic list [--tenant={tenant}] [--uid={user}]
Fetch the configuration of a specific topic by running the following command:
radosgw-admin topic rm --topic={topic-name} [--tenant={tenant}]
Fetch persistent topic stats (i.e. reservations, entries and size) by running
the following command:
.. prompt:: bash #
radosgw-admin topic stats --topic={topic-name} [--tenant={tenant}]
Dump (in JSON format) all pending bucket notifications of a persistent topic
by running the following command:
.. prompt:: bash #
Notification Performance Statistics
-----------------------------------
- ``persistent_topic_size``: queue size in bytes
- ``persistent_topic_len``: shows how many notifications are currently waiting
in the queue
- ``pubsub_push_ok``: a running counter, for all notifications, of events
  successfully pushed to their endpoints
- ``pubsub_push_fail``: a running counter, for all notifications, of events
  that failed to be pushed to their endpoints
- ``pubsub_push_pending``: The gauge value of events pushed to an endpoint but
  not acked or nacked yet: this does not include the notifications waiting in
  the persistent queue. Only the notifications that are in flight in both
  persistent and non-persistent cases are counted.
on each notification.
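These counters can be inspected on a live RGW daemon, for example via its admin socket (the socket path below is a typical default and may differ on your system):

```bash
# Dump all perf counters from a running RGW daemon and filter for the
# notification-related ones. The .asok path varies by deployment.
ceph daemon /var/run/ceph/ceph-client.rgw.<name>.asok perf dump | grep pubsub
```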
Configuration Options
---------------------
The following are global configuration options for the different endpoints:
HTTP
Request parameters:
- ``push-endpoint``: This is the URI of an endpoint to send push notifications to.
- ``OpaqueData``: Opaque data is set in the topic configuration and added to all
notifications that are triggered by the topic.
- ``persistent``: This indicates whether notifications to this endpoint are
persistent (=asynchronous) or not persistent. (This is "false" by default.)
- ``time_to_live``: This will limit the time (in seconds) to retain the notifications.
  The default value is taken from ``rgw_topic_persistency_time_to_live``.
  Providing a value overrides the global value.
  A value of zero means infinite time to live.
- ``max_retries``: This will limit the max retries before expiring notifications.
  The default value is taken from ``rgw_topic_persistency_max_retries``.
  Providing a value overrides the global value.
  A value of zero means infinite retries.
- ``retry_sleep_duration``: This will control the frequency of retrying the notifications.
  The default value is taken from ``rgw_topic_persistency_sleep_duration``.
  Providing a value overrides the global value.
  A value of zero means there is no delay between retries.
- ``Policy``: This will control who can access the topic in addition to the owner of the topic.
  The policy passed needs to be a JSON string similar to a bucket policy.
  For example, one can send a policy string as follows::

    {
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": ["arn:aws:iam::usr2:root"]},
        "Action": ["sns:Publish"],
        "Resource": []
      }]
    }
Currently, we support only the following actions:

  - ``sns:GetTopicAttributes``: to list or get existing topics
  - ``sns:SetTopicAttributes``: to set attributes for the existing topic
  - ``sns:DeleteTopic``: to delete the existing topic
  - ``sns:Publish``: to create or subscribe to notifications on an existing topic
- HTTP endpoint
  - URI: ``http[s]://<fqdn>[:<port>]``
  - ``port``: This defaults to 80 for HTTP and 443 for HTTPS.
  - ``verify-ssl``: This indicates whether the server certificate is validated by
    the client. (This is "true" by default.)
  - ``cloudevents``: This indicates whether the HTTP header should contain
attributes according to the `S3 CloudEvents Spec`_. (This is "false" by
default.)
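A topic that pushes to an HTTP endpoint could be created, for instance, with the AWS CLI pointed at the RGW. The RGW endpoint URL, target host, and the choice of ``persistent`` below are illustrative placeholders:

```bash
# Illustrative only: create a persistent topic that pushes notifications to
# an HTTP endpoint. Replace the RGW endpoint URL and target host as needed.
aws --endpoint-url http://<rgw-host>:8000 sns create-topic --name=mytopic \
    --attributes='{"push-endpoint": "http://collector.example.com:8080", "persistent": "true"}'
```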
- AMQP0.9.1 endpoint
- URI: ``amqp[s]://[<user>:<password>@]<fqdn>[:<port>][/<vhost>]``
  - ``user``/``password``: This defaults to "guest/guest".

    This must be provided only over HTTPS. Topic creation
    requests will otherwise be rejected.
  - ``port``: This defaults to 5672 for unencrypted connections and 5671 for
SSL-encrypted connections.
  - ``vhost``: This defaults to "/".
  - ``verify-ssl``: This indicates whether the server certificate is validated by
    the client. (This is "true" by default.)
- If ``ca-location`` is provided and a secure connection is used, the
specified CA will be used to authenticate the broker. The default CA will
not be used.
  - ``amqp-exchange``: The exchanges must exist and must be able to route messages
based on topics. This parameter is mandatory.
  - ``amqp-ack-level``: No end2end acking is required. Messages may persist in the
broker before being delivered to their final destinations. Three ack methods
exist:
- - "none": The message is considered "delivered" if it is sent to the broker.
- - "broker": The message is considered "delivered" if it is acked by the broker (default).
- - "routable": The message is considered "delivered" if the broker can route to a consumer.
+ - ``none``: The message is considered "delivered" if it is sent to the broker.
+ - ``broker``: The message is considered "delivered" if it is acked by the broker (default).
+ - ``routable``: The message is considered "delivered" if the broker can route to a consumer.
.. tip:: The topic-name (see :ref:`Create a Topic`) is used for the
AMQP topic ("routing key" for a topic exchange).
- ``ca-location``: If this is provided and a secure connection is used, the
specified CA will be used instead of the default CA to authenticate the
broker.
  - ``user``/``password``: This should be provided over HTTPS. If not, the
    config parameter ``rgw_allow_notification_secrets_in_cleartext`` must be
    "true" in order to create topics.

    This should be provided together with ``use-ssl``. If not, the broker
    credentials will be sent over insecure transport.
  - ``user-name``: User name to use when connecting to the Kafka broker: if
    both this parameter and URI ``user`` are provided then this parameter
    overrides the URI ``user``.

    The same security considerations are in place for this parameter as are
    for ``user``/``password``.
  - ``password``: Password to use when connecting to the Kafka broker: if
    both this parameter and URI ``password`` are provided then this parameter
    overrides the URI ``password``.

    The same security considerations are in place for this parameter as are
    for ``user``/``password``.
  - ``mechanism``: May be provided together with ``user``/``password``
    (default: ``PLAIN``). The supported SASL mechanisms are:

    - ``PLAIN``
    - ``SCRAM-SHA-256``
    - ``GSSAPI``
    - ``OAUTHBEARER``
  - ``port``: This defaults to 9092.
  - ``kafka-ack-level``: No end2end acking is required. Messages may persist in the
broker before being delivered to their final destinations. Two ack methods
exist:
- - "none": Messages are considered "delivered" if sent to the broker.
- - "broker": Messages are considered "delivered" if acked by the broker. (This
+ - ``none``: Messages are considered "delivered" if sent to the broker.
+ - ``broker``: Messages are considered "delivered" if acked by the broker. (This
is the default.)
  - ``kafka-brokers``: A comma-separated list of ``host:port`` entries for
    additional Kafka brokers. These brokers (which may include the broker
    defined in the Kafka URI) are added to the Kafka URI to support sending
    notifications to a Kafka cluster.
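Putting the Kafka parameters together, a topic with a Kafka endpoint and broker-level acknowledgement might be created like this. The broker address and RGW endpoint URL are placeholders, and passing credentials in cleartext is subject to the restrictions noted above:

```bash
# Illustrative only: create a topic backed by a Kafka broker, requiring the
# broker to ack each message before it is considered delivered.
aws --endpoint-url http://<rgw-host>:8000 sns create-topic --name=kafkatopic \
    --attributes='{"push-endpoint": "kafka://<broker-host>:9092", "kafka-ack-level": "broker"}'
```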
.. note::