Dynamic bucket index resharding <dynamicresharding>
Multi factor authentication <mfa>
Sync Modules <sync-modules>
+ Bucket Notifications <notifications>
Data Layout in RADOS <layout>
STS Lite <STSLite>
Role <role>
--- /dev/null
+====================
+Bucket Notifications
+====================
+
+.. versionadded:: Nautilus
+
+.. contents::
+
+Bucket notifications provide a mechanism for sending information out of the radosgw when certain events are happening on the bucket.
+Currently, notifications could be sent to HTTP and AMQP0.9.1 endpoints.
+
+Note, that if the events should be stored in Ceph, in addition, or instead of being pushed to an endpoint,
+the `PubSub Module`_ should be used instead of the bucket notification mechanism.
+
+A user can create different topics. A topic entity is defined by its user and its name. A
+user can only manage its own topics, and can only associate them with buckets it owns.
+
+In order to send notifications for events for a specific bucket, a notification entity needs to be created. A
+notification can be created on a subset of event types, or for all event types (default).
+The notification may also filter out events based on preffix/suffix and/or regular expression matching of the keys. As well as,
+on the metadata attributes attached to the object.
+There can be multiple notifications for any specific topic, and the same topic could be used for multiple notifications.
+
+REST API has been defined to provide configuration and control interfaces for the bucket notification
+mechanism. This API is similar to the one defined as S3-compatible API of the pubsub sync module.
+
+.. toctree::
+ :maxdepth: 1
+
+ S3 Bucket Notification Compatibility <s3-notification-compatibility>
+
+Notificatios Performance Stats
+------------------------------
+Same counters are shared between the pubsub sync module and the bucket notification mechanism.
+
+- ``pubsub_event_triggered``: running counter of events with at lease one topic associated with them
+- ``pubsub_event_lost``: running counter of events that had topics associated with them but that were not pushed to any of the endpoints
+- ``pubsub_push_ok``: running counter, for all notifications, of events successfully pushed to their endpoint
+- ``pubsub_push_fail``: running counter, for all notifications, of events failed to be pushed to their endpoint
+- ``pubsub_push_pending``: gauge value of events pushed to an endpoint but not acked or nacked yet
+
+.. note::
+
+ ``pubsub_event_triggered`` and ``pubsub_event_lost`` are incremented per event, while:
+ ``pubsub_push_ok``, ``pubsub_push_fail``, are incremented per push action on each notification.
+
+Bucket Notification REST API
+----------------------------
+
+Topics
+~~~~~~
+
+Create a Topic
+``````````````
+
+This will create a new topic. The topic should be provided with push endpoint parameters that would be used later
+when a notification is created.
+Upon successful request, the response will include the topic ARN that could be later used to reference this topic in the notification request.
+To update a topic, use the same command used for topic creation, with the topic name of an existing topic and different endpoint values.
+
+.. tip:: Any notification already associated with the topic needs to be re-created for the topic update to take effect
+
+::
+
+ POST
+ Action=CreateTopic
+ &Name=<topic-name>
+ &push-endpoint=<endpoint>
+ [&Attributes.entry.1.key=amqp-exchange&Attributes.entry.1.value=<exchange>]
+ [&Attributes.entry.2.key=amqp-sck-level&Attributes.entry.2.value=ack-level]
+ &Attributes.entry.3.key=verify-sll&Attributes.entry.3.value=true|false]
+
+Request parameters:
+
+- push-endpoint: URI of endpoint to send push notification to
+
+ - URI schema is: ``http[s]|amqp://[<user>:<password>@]<fqdn>[:<port>][/<amqp-vhost>]``
+ - Same schema is used for HTTP and AMQP endpoints (except amqp-vhost which is specific to AMQP)
+ - Default values for HTTP/S: no user/password, port 80/443
+ - Default values for AMQP: user/password=guest/guest, port 5672, amqp-vhost is "/"
+
+- verify-ssl: can be used with https endpoints (ignored for other endpoints), indicate whether the server certificate is validated or not ("true" by default)
+- amqp-exchange: mandatory parameter for AMQP endpoint. The exchanges must exist and be able to route messages based on topics
+- amqp-ack-level: No end2end acking is required, as messages may persist in the broker before delivered into their final destination. 2 ack methods exist:
+
+ - "none" - message is considered "delivered" if sent to broker
+ - "broker" message is considered "delivered" if acked by broker
+
+.. note::
+
+ - The key/value of a specific parameter does not have to reside in the same line, or in any specific order, but must use the same index
+ - Attribute indexing does not need to be sequntial or start from any specific value
+ - `AWS Create Topic`_ has detailed explanation on endpoint attributes format. However, in our case different keys and values are used
+
+The response will have the following format:
+
+::
+
+ <CreateTopicResponse xmlns="https://sns.amazonaws.com/doc/2010-03-31/">
+ <CreateTopicResult>
+ <TopicArn></TopicArn>
+ </CreateTopicResult>
+ <ResponseMetadata>
+ <RequestId></RequestId>
+ </ResponseMetadata>
+ </CreateTopicResponse>
+
+The topic ARN in the response will have the following format:
+
+::
+
+ arn:aws:sns:<zone-group>:<tenant>:<topic>
+
+Get Topic Information
+`````````````````````
+
+Returns information about specific topic. This includes push-endpoint information, if provided.
+
+::
+
+ POST
+ Action=GetTopic&TopicArn=<topic-arn>
+
+Response will have the following format:
+
+::
+
+ <GetTopicResponse>
+ <GetTopicRersult>
+ <Topic>
+ <User></User>
+ <Name></Name>
+ <EndPoint>
+ <EndpointAddress></EndpointAddress>
+ <EndpointArgs></EndpointArgs>
+ <EndpointTopic></EndpointTopic>
+ </EndPoint>
+ <TopicArn></TopicArn>
+ </Topic>
+ </GetTopicResult>
+ <ResponseMetadata>
+ <RequestId></RequestId>
+ </ResponseMetadata>
+ </GetTopicResponse>
+
+- User: name of the user that created the topic
+- Name: name of the topic
+- EndPoinjtAddress: the push-endpoint URL
+- EndPointArgs: the push-endpoint args
+- EndpointTopic: the topic name that should be sent to the endpoint (mat be different than the above topic name)
+- TopicArn: topic ARN
+
+Delete Topic
+````````````
+
+::
+
+ POST
+ Action=DeleteTopic&TopicArn=<topic-arn>
+
+Delete the specified topic. Note that deleting a deleted topic should result with no-op and not a failure.
+
+The response will have the following format:
+
+::
+
+ <DeleteTopicResponse xmlns="https://sns.amazonaws.com/doc/2010-03-31/">
+ <ResponseMetadata>
+ <RequestId></RequestId>
+ </ResponseMetadata>
+ </DeleteTopicResponse>
+
+List Topics
+```````````
+
+List all topics that user defined.
+
+::
+
+ POST
+ Action=ListTopics
+
+Response will have the following format:
+
+::
+
+ <ListTopicdResponse xmlns="https://sns.amazonaws.com/doc/2010-03-31/">
+ <ListTopicsRersult>
+ <Topics>
+ <member>
+ <User></User>
+ <Name></Name>
+ <EndPoint>
+ <EndpointAddress></EndpointAddress>
+ <EndpointArgs></EndpointArgs>
+ <EndpointTopic></EndpointTopic>
+ </EndPoint>
+ <TopicArn></TopicArn>
+ </member>
+ </Topics>
+ </ListTopicsResult>
+ <ResponseMetadata>
+ <RequestId></RequestId>
+ </ResponseMetadata>
+ </ListTopicsResponse>
+
+Notifications
+~~~~~~~~~~~~~
+
+Detailed under: `Bucket Operations`_.
+
+.. note::
+
+ - "Abort Multipart Upload" request does not emit a notification
+ - "Delete Multiple Objects" request does not emit a notification
+ - Both "Initiate Multipart Upload" and "POST Object" requests will emit an ``s3:ObjectCreated:Post`` notification
+
+
+Events
+~~~~~~
+
+The events are in JSON format (regardless of the actual endpoint), and share the same structure as the S3-compatible events
+pushed or pulled using the pubsub sync module.
+
+::
+
+ {"Records":[
+ {
+ "eventVersion":"2.1"
+ "eventSource":"aws:s3",
+ "awsRegion":"",
+ "eventTime":"",
+ "eventName":"",
+ "userIdentity":{
+ "principalId":""
+ },
+ "requestParameters":{
+ "sourceIPAddress":""
+ },
+ "responseElements":{
+ "x-amz-request-id":"",
+ "x-amz-id-2":""
+ },
+ "s3":{
+ "s3SchemaVersion":"1.0",
+ "configurationId":"",
+ "bucket":{
+ "name":"",
+ "ownerIdentity":{
+ "principalId":""
+ },
+ "arn":"",
+ "id:""
+ },
+ "object":{
+ "key":"",
+ "size":"",
+ "eTag":"",
+ "versionId":"",
+ "sequencer": "",
+ "metadata":""
+ }
+ },
+ "eventId":"",
+ }
+ ]}
+
+- awsRegion: zonegroup
+- eventTime: timestamp indicating when the event was triggered
+- eventName: for list of supported events see: `S3 Notification Compatibility`_
+- userIdentity.principalId: user that triggered the change
+- requestParameters.sourceIPAddress: not supported
+- responseElements.x-amz-request-id: request ID of the original change
+- responseElements.x_amz_id_2: RGW on which the change was made
+- s3.configurationId: notification ID that created the event
+- s3.bucket.name: name of the bucket
+- s3.bucket.ownerIdentity.principalId: owner of the bucket
+- s3.bucket.arn: ARN of the bucket
+- s3.bucket.id: Id of the bucket (an extension to the S3 notification API)
+- s3.object.key: object key
+- s3.object.size: object size
+- s3.object.eTag: object etag
+- s3.object.version: object version in case of versioned bucket
+- s3.object.sequencer: monotonically increasing identifier of the change per object (hexadecimal format)
+- s3.object.metadata: any metadata set on the object sent as: ``x-amz-meta-`` (an extension to the S3 notification API)
+- s3.eventId: not supported (an extension to the S3 notification API)
+
+.. _PubSub Module : ../pubsub-module
+.. _S3 Notification Compatibility: ../s3-notification-compatibility
+.. _AWS Create Topic: https://docs.aws.amazon.com/sns/latest/api/API_CreateTopic.html
+.. _Bucket Operations: ../s3/bucketops
-=========================
+==================
PubSub Sync Module
-=========================
+==================
.. versionadded:: Nautilus
+.. contents::
+
This sync module provides a publish and subscribe mechanism for the object store modification
-events. Events are published into defined topics. Topics can be subscribed to, and events
+events. Events are published into predefined topics. Topics can be subscribed to, and events
can be pulled from them. Events need to be acked. Also, events will expire and disappear
-after a period of time. A push notification mechanism exists too, currently supporting HTTP and
-AMQP0.9.1 endpoints.
+after a period of time.
+
+A push notification mechanism exists too, currently supporting HTTP and
+AMQP0.9.1 endpoints, on top of storing the events in Ceph. If events should only be pushed to an endpoint
+and do not need to be stored in Ceph, the `Bucket Notification`_ mechanism should be used instead of pubsub sync module.
A user can create different topics. A topic entity is defined by its user and its name. A
user can only manage its own topics, and can only subscribe to events published by buckets
it owns.
-In order to publish events for specific bucket a notification needs to be created. A
-notification can be created only on subset of event types, or for all event types (default).
-There can be multiple notifications for any specific topic.
+In order to publish events for specific bucket a notification entity needs to be created. A
+notification can be created on a subset of event types, or for all event types (default).
+There can be multiple notifications for any specific topic, and the same topic could be used for multiple notifications.
A subscription to a topic can also be defined. There can be multiple subscriptions for any
specific topic.
-A new REST api has been defined to provide configuration and control interfaces for the pubsub
-mechanisms.
+REST API has been defined to provide configuration and control interfaces for the pubsub
+mechanisms. This API has two flavors, one is S3-compatible and one is not. The two flavors can be used
+together, although it is recommended to use the S3-compatible one.
+The S3-compatible API is similar to the one used in the bucket notification mechanism.
-Events are stored as rgw objects in a special bucket, under a special user. Events cannot
-be accessed directly, but need to be pulled and acked using the new REST api.
+Events are stored as RGW objects in a special bucket, under a special user. Events cannot
+be accessed directly, but need to be pulled and acked using the new REST API.
+.. toctree::
+ :maxdepth: 1
+ S3 Bucket Notification Compatibility <s3-notification-compatibility>
+
+PubSub Zone Configuration
+-------------------------
-PubSub Tier Type Configuration
--------------------------------------
+The pubsub sync module requires the creation of a new zone in a `Multisite`_ environment.
+First, a master zone must exist (see: :ref:`master-zone-label`),
+then a secondary zone should be created (see :ref:`secondary-zone-label`).
+In the creation of the secondary zone, its tier type must be set to ``pubsub``:
::
- {
- "tenant": <tenant>, # default: <empty>
- "uid": <uid>, # default: "pubsub"
- "data_bucket_prefix": <prefix> # default: "pubsub-"
- "data_oid_prefix": <prefix> #
+ # radosgw-admin zone create --rgw-zonegroup={zone-group-name} \
+ --rgw-zone={zone-name} \
+ --endpoints={http://fqdn}[,{http://fqdn}] \
+ --sync-from-all=0 \
+ --sync-from={master-zone-name} \
+ --tier-type=pubsub
- "events_retention_days": <days> # default: 7
- }
+PubSub Zone Configuration Parameters
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+::
+ {
+ "tenant": <tenant>, # default: <empty>
+ "uid": <uid>, # default: "pubsub"
+ "data_bucket_prefix": <prefix> # default: "pubsub-"
+ "data_oid_prefix": <prefix> #
+ "events_retention_days": <days> # default: 7
+ }
* ``tenant`` (string)
How many days to keep events that weren't acked.
-How to Configure
-~~~~~~~~~~~~~~~~
-
-See `Multisite Configuration`_ for how to multisite config instructions. The pubsub sync module requires a creation of a new zone. The zone
-tier type needs to be defined as ``pubsub``:
-
-::
-
- # radosgw-admin zone create --rgw-zonegroup={zone-group-name} \
- --rgw-zone={zone-name} \
- --endpoints={http://fqdn}[,{http://fqdn}]
- --tier-type=pubsub
-
+Configuring Parameters via CLI
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-The tier configuration can be then done using the following command
+The tier configuration could be set using the following command:
::
- # radosgw-admin zone modify --rgw-zonegroup={zone-group-name} \
+ # radosgw-admin zone modify --rgw-zonegroup={zone-group-name} \
--rgw-zone={zone-name} \
--tier-config={key}={val}[,{key}={val}]
-The ``key`` in the configuration specifies the config variable that needs to be updated, and
-the ``val`` specifies its new value. Nested values can be accessed using period. For example:
+Where the ``key`` in the configuration specifies the configuration variable that needs to be updated (from the list above), and
+the ``val`` specifies its new value. For example, setting the pubsub control user ``uid`` to ``user_ps``:
::
- # radosgw-admin zone modify --rgw-zonegroup={zone-group-name} \
+ # radosgw-admin zone modify --rgw-zonegroup={zone-group-name} \
--rgw-zone={zone-name} \
--tier-config=uid=pubsub
-
A configuration field can be removed by using ``--tier-config-rm={key}``.
PubSub Performance Stats
-------------------------
-- **pubsub_event_triggered**: running counter of events with at lease one pubsub topic associated with them
-- **pubsub_event_lost**: running counter of events that had pubsub topics and subscriptions associated with them but that were not stored or pushed to any of the subscriptions
-- **pubsub_store_ok**: running counter, for all subscriptions, of stored pubsub events
-- **pubsub_store_fail**: running counter, for all subscriptions, of pubsub events that needed to be stored but failed
-- **pubsub_push_ok**: running counter, for all subscriptions, of pubsub events successfully pushed to their endpoint
-- **pubsub_push_fail**: running counter, for all subscriptions, of pubsub events failed to be pushed to their endpoint
-- **pubsub_push_pending**: gauge value of pubsub events pushed to a endpoined but not acked or nacked yet
+Same counters are shared between the pubsub sync module and the notification mechanism.
+
+- ``pubsub_event_triggered``: running counter of events with at lease one topic associated with them
+- ``pubsub_event_lost``: running counter of events that had topics and subscriptions associated with them but that were not stored or pushed to any of the subscriptions
+- ``pubsub_store_ok``: running counter, for all subscriptions, of stored events
+- ``pubsub_store_fail``: running counter, for all subscriptions, of events failed to be stored
+- ``pubsub_push_ok``: running counter, for all subscriptions, of events successfully pushed to their endpoint
+- ``pubsub_push_fail``: running counter, for all subscriptions, of events failed to be pushed to their endpoint
+- ``pubsub_push_pending``: gauge value of events pushed to an endpoint but not acked or nacked yet
-Note that **pubsub_event_triggered** and **pubsub_event_lost** are incremented per event, while: **pubsub_store_ok**, **pubsub_store_fail**, **pubsub_push_ok**, **pubsub_push_fail**, are incremented per store/push action on each subscriptions.
+.. note::
+
+ ``pubsub_event_triggered`` and ``pubsub_event_lost`` are incremented per event, while:
+ ``pubsub_store_ok``, ``pubsub_store_fail``, ``pubsub_push_ok``, ``pubsub_push_fail``, are incremented per store/push action on each subscriptions.
PubSub REST API
--------------------------
+---------------
+.. tip:: PubSub REST calls, and only them, should be sent to an RGW which belong to a PubSub zone
Topics
~~~~~~
-
+
Create a Topic
-``````````````````````````
+``````````````
+
+This will create a new topic. Topic creation is needed both for both flavors of the API.
+Optionally the topic could be provided with push endpoint parameters that would be used later
+when an S3-compatible notification is created.
+Upon successful request, the response will include the topic ARN that could be later used to reference this topic in an S3-compatible notification request.
+To update a topic, use the same command used for topic creation, with the topic name of an existing topic and different endpoint values.
-This will create a new topic.
+.. tip:: Any S3-compatible notification already associated with the topic needs to be re-created for the topic update to take effect
::
- PUT /topics/<topic-name>
+ PUT /topics/<topic-name>[?push-endpoint=<endpoint>[&amqp-exchange=<exchange>][&amqp-ack-level=<level>][&verify-ssl=true|false]]
+
+Request parameters:
+
+- push-endpoint: URI of endpoint to send push notification to
+
+ - URI schema is: ``http[s]|amqp://[<user>:<password>@]<fqdn>[:<port>][/<amqp-vhost>]``
+ - Same schema is used for HTTP and AMQP endpoints (except amqp-vhost which is specific to AMQP)
+ - Default values for HTTP/S: no user/password, port 80/443
+ - Default values for AMQP: user/password=guest/guest, port 5672, amqp-vhost is "/"
+
+- verify-ssl: can be used with https endpoints (ignored for other endpoints), indicate whether the server certificate is validated or not ("true" by default)
+- amqp-exchange: mandatory parameter for AMQP endpoint. The exchanges must exist and be able to route messages based on topics
+- amqp-ack-level: No end2end acking is required, as messages may persist in the broker before delivered into their final destination. 2 ack methods exist:
+
+ - "none" - message is considered "delivered" if sent to broker
+ - "broker" message is considered "delivered" if acked by broker
+
+The topic ARN in the response will have the following format:
+::
+
+ arn:aws:sns:<zone-group>:<tenant>:<topic>
Get Topic Information
-````````````````````````````````
+`````````````````````
-Returns information about specific topic. This includes subscriptions to that topic.
+Returns information about specific topic. This includes subscriptions to that topic, and push-endpoint information, if provided.
::
GET /topics/<topic-name>
+Response will have the following format (JSON):
+
+::
+ {
+ "topic":{
+ "user":"",
+ "name":"",
+ "dest":{
+ "bucket_name":"",
+ "oid_prefix":"",
+ "push_endpoint":"",
+ "push_endpoint_args":""
+ },
+ "arn":""
+ },
+ "subs":[]
+ }
+
+- topic.user: name of the user that created the topic
+- name: name of the topic
+- dest.bucket_name: not used
+- dest.oid_prefix: not used
+- dest.push_endpoint: in case of S3-compliant notifications, this value will be used as the push-endpoint URL
+- dest.push_endpoint_args: in case of S3-compliant notifications, this value will be used as the push-endpoint args
+- topic.arn: topic ARN
+- subs: list of subscriptions associated with this topic
Delete Topic
-````````````````````````````````````
+````````````
::
Delete the specified topic.
List Topics
-````````````````````````````````````
+```````````
List all topics that user defined.
::
GET /topics
+
+S3-Compliant Notifications
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+Detailed under: `Bucket Operations`_.
+.. note::
+
+ - Notification creation will also create a subscription for pushing/pulling events
+ - The generated subscription's name will have the same as the notification Id, and could be used later to fetch and ack events with the subscription API.
+ - Notification deletion will deletes all generated subscriptions
+ - In case that bucket deletion implicitly deletes the notification,
+ the associated subscription will not be deleted automatically (any events of the deleted bucket could still be access),
+ and will have to be deleted explicitly with the subscription deletion API
+ - Filtering based on metadata (which is an extension to S3) is not supported, and such rules will be ignored
-Notifications
-~~~~~~~~~~~~~
+
+Non S3-Compliant Notifications
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Create a Notification
-``````````````````````````
+`````````````````````
This will create a publisher for a specific bucket into a topic.
PUT /notifications/bucket/<bucket>?topic=<topic-name>[&events=<event>[,<event>]]
+Request parameters:
-Request Params:
- - topic-name: name of topic
- - event: event type (string), one of: OBJECT_CREATE, OBJECT_DELETE
-
-
-
+- topic-name: name of topic
+- event: event type (string), one of: ``OBJECT_CREATE``, ``OBJECT_DELETE``, ``DELETE_MARKER_CREATE``
+
Delete Notification Information
-````````````````````````````````
+```````````````````````````````
Delete publisher from a specific bucket into a specific topic.
DELETE /notifications/bucket/<bucket>?topic=<topic-name>
-Request Params:
- - topic-name: name of topic
+Request parameters:
+
+- topic-name: name of topic
+
+.. note:: When the bucket is deleted, any notification defined on it is also deleted
+
+List Notifications
+``````````````````
+
+List all topics with associated events defined on a bucket.
+
+::
+
+ GET /notifications/bucket/<bucket>
+
+Response will have the following format (JSON):
+
+::
+ {"topics":[
+ {
+ "topic":{
+ "user":"",
+ "name":"",
+ "dest":{
+ "bucket_name":"",
+ "oid_prefix":"",
+ "push_endpoint":"",
+ "push_endpoint_args":""
+ }
+ "arn":""
+ },
+ "events":[]
+ }
+ ]}
+Subscriptions
+~~~~~~~~~~~~~
-Create Subscription
-````````````````````````````````````
+Create a Subscription
+`````````````````````
Creates a new subscription.
PUT /subscriptions/<sub-name>?topic=<topic-name>[&push-endpoint=<endpoint>[&amqp-exchange=<exchange>][&amqp-ack-level=<level>][&verify-ssl=true|false]]
-Request Params:
+Request parameters:
- - topic-name: name of topic
- - push-endpoint: URI of endpoint to send push notification to
+- topic-name: name of topic
+- push-endpoint: URI of endpoint to send push notification to
- - URI schema is: ``http|amqp://[<user>:<password>@]<fqdn>[:<port>][/<amqp-vhost>]``
- - Same schema is used for HTTP and AMQP endpoints (except amqp-vhost which is specific to AMQP)
- - Default values for HTTP: no user/password, port 80
- - Default values for AMQP: user/password=guest/guest, port 5672, amqp-vhost is "/"
+ - URI schema is: ``http[s]|amqp://[<user>:<password>@]<fqdn>[:<port>][/<amqp-vhost>]``
+ - Same schema is used for HTTP and AMQP endpoints (except amqp-vhost which is specific to AMQP)
+ - Default values for HTTP/S: no user/password, port 80/443
+ - Default values for AMQP: user/password=guest/guest, port 5672, amqp-vhost is "/"
- - verify-ssl: can be used with https endpoints (ignored for other endpoints), indicate whether the server certificate is validated or not ("true" by default)
- - amqp-exchange: mandatory parameter for AMQP endpoint. The exchanges must exist and be able to route messages based on topics
- - amqp-ack-level: 2 ack levels exist: "none" - message is considered "delivered" if sent to broker;
- "broker" message is considered "delivered" if acked by broker.
- No end2end acking is required, as messages may persist in the broker before delivered into their final destination
+- verify-ssl: can be used with https endpoints (ignored for other endpoints), indicate whether the server certificate is validated or not ("true" by default)
+- amqp-exchange: mandatory parameter for AMQP endpoint. The exchanges must exist and be able to route messages based on topics
+- amqp-ack-level: No end2end acking is required, as messages may persist in the broker before delivered into their final destination. 2 ack methods exist:
-Get Subscription Info
-````````````````````````````````````
+ - "none": message is considered "delivered" if sent to broker
+ - "broker": message is considered "delivered" if acked by broker
-Returns info about specific subscription
+Get Subscription Information
+````````````````````````````
+
+Returns information about specific subscription.
::
GET /subscriptions/<sub-name>
+Response will have the following format (JSON):
+
+::
+
+ {
+ "user":"",
+ "name":"",
+ "topic":"",
+ "dest":{
+ "bucket_name":"",
+ "oid_prefix":"",
+ "push_endpoint":"",
+ "push_endpoint_args":""
+ }
+ "s3_id":""
+ }
+
+- user: name of the user that created the subscription
+- name: name of the subscription
+- topic: name of the topic the subscription is associated with
Delete Subscription
-`````````````````````````````````
+```````````````````
-Removes a subscription
+Removes a subscription.
::
DELETE /subscriptions/<sub-name>
-
Events
~~~~~~
Pull Events
-`````````````````````````````````
+```````````
-Pull events sent to a specific subscription
+Pull events sent to a specific subscription.
::
GET /subscriptions/<sub-name>?events[&max-entries=<max-entries>][&marker=<marker>]
-Request Params:
- - marker: pagination marker for list of events, if not specified will start from the oldest
- - max-entries: max number of events to return
+Request parameters:
+
+- marker: pagination marker for list of events, if not specified will start from the oldest
+- max-entries: max number of events to return
+
+The response will hold information on the current marker and whether there are more events not fetched:
+
+::
+
+ {"next_marker":"","is_truncated":"",...}
+
+
+The actual content of the response is depended with how the subscription was created.
+In case that the subscription was created via an S3-compatible notification,
+the events will have an S3-compatible record format (JSON):
+
+::
+
+ {"Records":[
+ {
+ "eventVersion":"2.1"
+ "eventSource":"aws:s3",
+ "awsRegion":"",
+ "eventTime":"",
+ "eventName":"",
+ "userIdentity":{
+ "principalId":""
+ },
+ "requestParameters":{
+ "sourceIPAddress":""
+ },
+ "responseElements":{
+ "x-amz-request-id":"",
+ "x-amz-id-2":""
+ },
+ "s3":{
+ "s3SchemaVersion":"1.0",
+ "configurationId":"",
+ "bucket":{
+ "name":"",
+ "ownerIdentity":{
+ "principalId":""
+ },
+ "arn":"",
+ "id":""
+ },
+ "object":{
+ "key":"",
+ "size":"0",
+ "eTag":"",
+ "versionId":"",
+ "sequencer":"",
+ "metadata":""
+ }
+ },
+ "eventId":"",
+ }
+ ]}
+
+- awsRegion: zonegroup
+- eventTime: timestamp indicating when the event was triggered
+- eventName: either ``s3:ObjectCreated:``, or ``s3:ObjectRemoved:``
+- userIdentity: not supported
+- requestParameters: not supported
+- responseElements: not supported
+- s3.configurationId: notification ID that created the subscription for the event
+- s3.eventId: unique ID of the event, that could be used for acking (an extension to the S3 notification API)
+- s3.bucket.name: name of the bucket
+- s3.bucket.ownerIdentity.principalId: owner of the bucket
+- s3.bucket.arn: ARN of the bucket
+- s3.bucket.id: Id of the bucket (an extension to the S3 notification API)
+- s3.object.key: object key
+- s3.object.size: not supported
+- s3.object.eTag: object etag
+- s3.object.version: object version in case of versioned bucket
+- s3.object.sequencer: monotonically increasing identifier of the change per object (hexadecimal format)
+- s3.object.metadata: not supported (an extension to the S3 notification API)
+- s3.eventId: unique ID of the event, that could be used for acking (an extension to the S3 notification API)
+
+In case that the subscription was not created via a non S3-compatible notification,
+the events will have the following event format (JSON):
+
+::
+ {"events":[
+ {
+ "id":"",
+ "event":"",
+ "timestamp":"",
+ "info":{
+ "attrs":{
+ "mtime":""
+ },
+ "bucket":{
+ "bucket_id":"",
+ "name":"",
+ "tenant":""
+ },
+ "key":{
+ "instance":"",
+ "name":""
+ }
+ }
+ }
+ ]}
+
+- id: unique ID of the event, that could be used for acking
+- event: one of: ``OBJECT_CREATE``, ``OBJECT_DELETE``, ``DELETE_MARKER_CREATE``
+- timestamp: timestamp indicating when the event was sent
+- info.attrs.mtime: timestamp indicating when the event was triggered
+- info.bucket.bucket_id: id of the bucket
+- info.bucket.name: name of the bucket
+- info.bucket.tenant: tenant the bucket belongs to
+- info.key.instance: object version in case of versioned bucket
+- info.key.name: object key
Ack Event
-`````````````````````````````````
+`````````
Ack event so that it can be removed from the subscription history.
POST /subscriptions/<sub-name>?ack&event-id=<event-id>
+Request parameters:
-Request Params:
- - event-id: id of event to be acked
+- event-id: id of event to be acked
-.. _Multisite Configuration: ./multisite.rst
+.. _Multisite : ../multisite
+.. _Bucket Notification : ../notifications
+.. _Bucket Operations: ../s3/bucketops
--- /dev/null
+=====================================
+S3 Bucket Notifications Compatibility
+=====================================
+
+Ceph's `Bucket Notifications`_ and `PubSub Module`_ APIs follow `AWS S3 Bucket Notifications API`_. However, some differences exist, as listed below.
+
+
+.. note::
+
+ Compatibility is different depending on which of the above mechanism is used
+
+Supported Destination
+---------------------
+
+AWS supports: **SNS**, **SQS** and **Lambda** as possible destinations (AWS internal destinations).
+Currently, we support: **HTTP/S** and **AMQP**. And also support pulling and acking of events stored in Ceph (as an intenal destination).
+
+We are using the **SNS** ARNs to represent the **HTTP/S** and **AMQP** destinations.
+
+Notification Configuration XML
+------------------------------
+
+Following tags (and the tags inside them) are not supported:
+
++-----------------------------------+----------------------------------------------+
+| Tag | Remaks |
++===================================+==============================================+
+| ``<QueueConfiguration>`` | not needed, we treat all destinations as SNS |
++-----------------------------------+----------------------------------------------+
+| ``<CloudFunctionConfiguration>`` | not needed, we treat all destinations as SNS |
++-----------------------------------+----------------------------------------------+
+
+REST API Extension
+------------------
+
+Ceph's bucket notification API has the following extensions:
+
+- Deletion of a specific notification, or all notifications on a bucket, using the ``DELETE`` verb
+
+ - In S3, all notifications are deleted when the bucket is deleted, or when an empty notification is set on the bucket
+
+- Getting the information on a specific notification (when more than one exists on a bucket)
+
+ - In S3, it is only possible to fetch all notifications on a bucket
+
+- In addition to filtering based on prefix/suffix of object keys we support:
+
+ - Filtering based on regular expression matching
+
+ - Filtering based on metadata attributes attached to the object
+
+- Filtering overlapping is allowed, so that same event could be sent as different notification
+
+
+Unsupported Fields in the Event Record
+--------------------------------------
+
+The records sent for bucket notification follow format described in: `Event Message Structure`_.
+However, the following fields may be sent empty, under the different deployment options (Notification/PubSub):
+
++----------------------------------------+--------------+---------------+------------------------------------------------------------+
+| Field | Notification | PubSub | Description |
++========================================+==============+===============+============================================================+
+| ``userIdentity.principalId`` | Supported | Not Supported | The identity of the user that triggered the event |
++----------------------------------------+--------------+---------------+------------------------------------------------------------+
+| ``requestParameters.sourceIPAddress`` | Not Supported | The IP address of the client that triggered the event |
++----------------------------------------+--------------+---------------+------------------------------------------------------------+
+| ``requestParameters.x-amz-request-id`` | Supported | Not Supported | The request id that triggered the event |
++----------------------------------------+--------------+---------------+------------------------------------------------------------+
+| ``requestParameters.x-amz-id-2`` | Supported | Not Supported | The IP address of the RGW on which the event was triggered |
++----------------------------------------+--------------+---------------+------------------------------------------------------------+
+| ``s3.object.size`` | Supported | Not Supported | The size of the object |
++----------------------------------------+--------------+---------------+------------------------------------------------------------+
+
+Event Types
+-----------
+
++----------------------------------------------+-----------------+-------------------------------------------+
+| Event | Notification | PubSub |
++==============================================+=================+===========================================+
+| ``s3:ObjectCreated:*`` | Supported |
++----------------------------------------------+-----------------+-------------------------------------------+
+| ``s3:ObjectCreated:Put`` | Supported | Supported at ``s3:ObjectCreated:*`` level |
++----------------------------------------------+-----------------+-------------------------------------------+
+| ``s3:ObjectCreated:Post`` | Supported | Not Supported |
++----------------------------------------------+-----------------+-------------------------------------------+
+| ``s3:ObjectCreated:Copy`` | Supported | Supported at ``s3:ObjectCreated:*`` level |
++----------------------------------------------+-----------------+-------------------------------------------+
+| ``s3:ObjectCreated:CompleteMultipartUpload`` | Supported | Supported at ``s3:ObjectCreated:*`` level |
++----------------------------------------------+-----------------+-------------------------------------------+
+| ``s3:ObjectRemoved:*`` | Supported | Supported only the specific events below |
++----------------------------------------------+-----------------+-------------------------------------------+
+| ``s3:ObjectRemoved:Delete`` | Supported |
++----------------------------------------------+-----------------+-------------------------------------------+
+| ``s3:ObjectRemoved:DeleteMarkerCreated`` | Supported |
++----------------------------------------------+-----------------+-------------------------------------------+
+| ``s3:ObjectRestore:Post`` | Not applicable to Ceph |
++----------------------------------------------+-----------------+-------------------------------------------+
+| ``s3:ObjectRestore:Complete`` | Not applicable to Ceph |
++----------------------------------------------+-----------------+-------------------------------------------+
+| ``s3:ReducedRedundancyLostObject`` | Not applicable to Ceph |
++----------------------------------------------+-----------------+-------------------------------------------+
+
+Topic Configuration
+-------------------
+In the case of bucket notifications, the topics management API will be derived from `AWS Simple Notification Service API`_.
+Note that most of the API is not applicable to Ceph, and only the following actions are implemented:
+
+ - ``CreateTopic``
+ - ``DeleteTopic``
+ - ``ListTopics``
+
+We also extend it by:
+
+ - ``GetTopic`` - allowing for fetching a specific topic, instead of all user topics
+ - In ``CreateTopic`` we allow setting endpoint attributes
+
+.. _AWS Simple Notification Service API: https://docs.aws.amazon.com/sns/latest/api/API_Operations.html
+.. _AWS S3 Bucket Notifications API: https://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
+.. _Event Message Structure: https://docs.aws.amazon.com/AmazonS3/latest/dev/notification-content-structure.html
+.. _`PubSub Module`: ../pubsub-module
+.. _`Bucket Notifications`: ../notifications
+---------------------------------+-----------------+----------------------------------------+
| **Bucket Location** | Supported | |
+---------------------------------+-----------------+----------------------------------------+
-| **Bucket Notification** | Not Supported | |
+| **Bucket Notification** | Supported | See `S3 Notification Compatibility`_ |
+---------------------------------+-----------------+----------------------------------------+
| **Bucket Object Versions** | Supported | |
+---------------------------------+-----------------+----------------------------------------+
+----------------------------+------------+
.. _Amazon S3 API: http://docs.aws.amazon.com/AmazonS3/latest/API/APIRest.html
+.. _S3 Notification Compatibility: ../s3-notification-compatibility
+-----------------------------+-----------+---------------------------------------------------------------------------+
| ``Status`` | String | Sets the versioning state of the bucket. Valid Values: Suspended/Enabled |
+-----------------------------+-----------+---------------------------------------------------------------------------+
+
+
+Create Notification
+-------------------
+
+Create a publisher for a specific bucket into a topic.
+
+Syntax
+~~~~~~
+
+::
+
+ PUT /<bucket name>?notification HTTP/1.1
+
+
+Request Entities
+~~~~~~~~~~~~~~~~
+
+Parameters are XML encoded in the body of the request, in the following format:
+
+::
+
+ <NotificationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
+ <TopicConfiguration>
+ <Id></Id>
+ <Topic></Topic>
+ <Event></Event>
+ <Filter>
+ <S3Key>
+ <FilterRule>
+ <Name></Name>
+ <Value></Value>
+ </FilterRule>
+ </S3Key>
+ <S3Metadata>
+ <FilterRule>
+ <Name></Name>
+ <Value></Value>
+ </FilterRule>
+ </s3Metadata>
+ </Filter>
+ </TopicConfiguration>
+ </NotificationConfiguration>
+
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| Name | Type | Description | Required |
++===============================+===========+======================================================================================+==========+
+| ``NotificationConfiguration`` | Container | Holding list of ``TopicConfiguration`` entities | Yes |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``TopicConfiguration`` | Container | Holding ``Id``, ``Topic`` and list of ``Event`` entities | Yes |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``Id`` | String | Name of the notification | Yes |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``Topic`` | String | Topic ARN. Topic must be created beforehand | Yes |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``Event`` | String | List of supported events see: `S3 Notification Compatibility`_. Multiple ``Event`` | No |
+| | | entities can be used. If omitted, all events are handled | |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``Filter`` | Container | Holding ``S3Key`` and ``S3Metadata`` entities | No |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``S3Key`` | Container | Holding a list of ``FilterRule`` entities, for filtering based on object key. | No |
+| | | At most, 3 entities may be in the list, with ``Name`` be ``prefix``, ``suffix`` or | |
+| | | ``regex``. All filter rules in the list must match for the filter to match. | |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``S3Metadata`` | Container | Holding a list of ``FilterRule`` entities, for filtering based on object metadata. | No |
+| | | All filter rules in the list must match the ones defined on the object. The object, | |
+| | | have other metadata entitied not listed in the filter. | |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``S3Key.FilterRule`` | Container | Holding ``Name`` and ``Value`` entities. ``Name`` would be: ``prefix``, ``suffix`` | Yes |
+| | | or ``regex``. The ``Value`` would hold the key prefix, key suffix or a regular | |
+| | | expression for matching the key, accordingly. | |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``S3Metadata.FilterRule`` | Container | Holding ``Name`` and ``Value`` entities. ``Name`` would be the name of the metadata | Yes |
+| | | attribute (e.g. ``x-amz-meta-xxx``). The ``Value`` would be the expected value for | |
+| | | this attribute | |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+
+
+HTTP Response
+~~~~~~~~~~~~~
+
++---------------+-----------------------+----------------------------------------------------------+
+| HTTP Status | Status Code | Description |
++===============+=======================+==========================================================+
+| ``400`` | MalformedXML | The XML is not well-formed |
++---------------+-----------------------+----------------------------------------------------------+
+| ``400`` | InvalidArgument | Missing Id; Missing/Invalid Topic ARN; Invalid Event |
++---------------+-----------------------+----------------------------------------------------------+
+| ``404`` | NoSuchBucket | The bucket does not exist |
++---------------+-----------------------+----------------------------------------------------------+
+| ``404`` | NoSuchKey | The topic does not exist |
++---------------+-----------------------+----------------------------------------------------------+
+
+
+Delete Notification
+-------------------
+
+Delete a specific, or all, notifications from a bucket.
+
+.. note::
+
+ - Notification deletion is an extension to the S3 notification API
+ - When the bucket is deleted, any notification defined on it is also deleted
+ - Deleting an unkown notification (e.g. double delete) is not considered an error
+
+Syntax
+~~~~~~
+
+::
+
+ DELETE /bucket?notification[=<notification-id>] HTTP/1.1
+
+
+Parameters
+~~~~~~~~~~
+
++------------------------+-----------+----------------------------------------------------------------------------------------+
+| Name | Type | Description |
++========================+===========+========================================================================================+
+| ``notification-id`` | String | Name of the notification. If not provided, all notifications on the bucket are deleted |
++------------------------+-----------+----------------------------------------------------------------------------------------+
+
+HTTP Response
+~~~~~~~~~~~~~
+
++---------------+-----------------------+----------------------------------------------------------+
+| HTTP Status | Status Code | Description |
++===============+=======================+==========================================================+
+| ``404`` | NoSuchBucket | The bucket does not exist |
++---------------+-----------------------+----------------------------------------------------------+
+
+Get/List Notification
+---------------------
+
+Get a specific notification, or list all notifications configured on a bucket.
+
+Syntax
+~~~~~~
+
+::
+
+ GET /bucket?notification[=<notification-id>] HTTP/1.1
+
+
+Parameters
+~~~~~~~~~~
+
++------------------------+-----------+----------------------------------------------------------------------------------------+
+| Name | Type | Description |
++========================+===========+========================================================================================+
+| ``notification-id`` | String | Name of the notification. If not provided, all notifications on the bucket are listed |
++------------------------+-----------+----------------------------------------------------------------------------------------+
+
+Response Entities
+~~~~~~~~~~~~~~~~~
+
+Response is XML encoded in the body of the request, in the following format:
+
+::
+
+ <NotificationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
+ <TopicConfiguration>
+ <Id></Id>
+ <Topic></Topic>
+ <Event></Event>
+ <Filter>
+ <S3Key>
+ <FilterRule>
+ <Name></Name>
+ <Value></Value>
+ </FilterRule>
+ </S3Key>
+ <S3Metadata>
+ <FilterRule>
+ <Name></Name>
+ <Value></Value>
+ </FilterRule>
+ </s3Metadata>
+ </Filter>
+ </TopicConfiguration>
+ </NotificationConfiguration>
+
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| Name | Type | Description | Required |
++===============================+===========+======================================================================================+==========+
+| ``NotificationConfiguration`` | Container | Holding list of ``TopicConfiguration`` entities | Yes |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``TopicConfiguration`` | Container | Holding ``Id``, ``Topic`` and list of ``Event`` entities | Yes |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``Id`` | String | Name of the notification | Yes |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``Topic`` | String | Topic ARN | Yes |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``Event`` | String | Handled event. Multiple ``Event`` entities may exist | Yes |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+| ``Filter`` | Container | Holding the filters configured for this notification | No |
++-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
+
+HTTP Response
+~~~~~~~~~~~~~
+
++---------------+-----------------------+----------------------------------------------------------+
+| HTTP Status | Status Code | Description |
++===============+=======================+==========================================================+
+| ``404`` | NoSuchBucket | The bucket does not exist |
++---------------+-----------------------+----------------------------------------------------------+
+| ``404`` | NoSuchKey | The notification does not exist (if provided) |
++---------------+-----------------------+----------------------------------------------------------+
+
+.. _S3 Notification Compatibility: ../s3-notification-compatibility
--- /dev/null
+overrides:
+ rgw-multisite:
+ realm:
+ name: test-realm
+ is default: true
+ zonegroups:
+ - name: test-zonegroup
+ is_master: true
+ is_default: true
+ endpoints: [c1.client.0]
+ zones:
+ - name: test-zone1
+ is_master: true
+ is_default: true
+ endpoints: [c1.client.0]
+ - name: test-zone2
+ is_default: true
+ endpoints: [c2.client.0]
+ - name: test-zone3
+ endpoints: [c1.client.1]
+ - name: test-zone4
+ endpoints: [c2.client.1]
+ is_pubsub: true
from util.rados import create_ec_pool, create_replicated_pool
from rgw_multi import multisite
from rgw_multi.zone_rados import RadosZone as RadosZone
+from rgw_multi.zone_ps import PSZone as PSZone
from teuthology.orchestra import run
from teuthology import misc
* 'is_master' is passed on the command line as --master
* 'is_default' is passed on the command line as --default
+ * 'is_pubsub' is used to create a zone with tier-type=pubsub
* 'endpoints' given as client names are replaced with actual endpoints
zonegroups:
- name: test-zone2
is_default: true
endpoints: [c2.client.0]
+ - name: test-zone3
+ is_pubsub: true
+ endpoints: [c1.client.1]
"""
def __init__(self, ctx, config):
def create_zone(ctx, cluster, gateways, creds, zonegroup, config):
""" create a zone with the given configuration """
zone = multisite.Zone(config['name'], zonegroup, cluster)
- zone = RadosZone(config['name'], zonegroup, cluster)
+ if config.pop('is_pubsub', False):
+ zone = PSZone(config['name'], zonegroup, cluster)
+ else:
+ zone = RadosZone(config['name'], zonegroup, cluster)
# collect Gateways for the zone's endpoints
endpoints = config.get('endpoints')
from teuthology.task import Task
from teuthology import misc
-from rgw_multi import multisite, tests
+from rgw_multi import multisite, tests, tests_ps
log = logging.getLogger(__name__)
+
class RGWMultisiteTests(Task):
"""
Runs the rgw_multi tests against a multisite configuration created by the
# run nose tests in the rgw_multi.tests module
conf = nose.config.Config(stream=get_log_stream(), verbosity=2)
+ error_msg = ''
result = nose.run(defaultTest=tests.__name__, argv=argv, config=conf)
if not result:
- raise RuntimeError('rgw multisite test failures')
+ error_msg += 'rgw multisite, '
+ result = nose.run(defaultTest=tests_ps.__name__, argv=argv, config=conf)
+ if not result:
+ error_msg += 'rgw multisite pubsub, '
+ if error_msg:
+ raise RuntimeError(error_msg + 'test failures')
+
def get_log_stream():
""" return a log stream for nose output """
return LogStream()
+
task = RGWMultisiteTests
"will be located in the path that is specified here. "),
Option("rgw_enable_apis", Option::TYPE_STR, Option::LEVEL_ADVANCED)
- .set_default("s3, s3website, swift, swift_auth, admin, sts")
+ .set_default("s3, s3website, swift, swift_auth, admin, sts, pubsub")
.set_description("A list of set of RESTful APIs that rgw handles."),
Option("rgw_cache_enabled", Option::TYPE_BOOL, Option::LEVEL_ADVANCED)
rgw_aio_throttle.cc
rgw_auth.cc
rgw_auth_s3.cc
+ rgw_arn.cc
rgw_basic_types.cc
rgw_bucket.cc
rgw_cache.cc
rgw_sync_module_log.cc
rgw_sync_module_pubsub.cc
rgw_pubsub_push.cc
+ rgw_notify.cc
+ rgw_notify_event_type.cc
rgw_sync_module_pubsub_rest.cc
rgw_sync_log_trim.cc
rgw_sync_trace.cc
rgw_rest_conn.cc
rgw_rest_log.cc
rgw_rest_metadata.cc
+ rgw_rest_pubsub.cc
+ rgw_rest_pubsub_common.cc
rgw_rest_realm.cc
rgw_rest_role.cc
rgw_rest_s3.cc
string sub_dest_bucket;
string sub_push_endpoint;
string event_id;
- set<string, ltstr_nocase> event_types;
+ rgw::notify::EventTypeList event_types;
for (std::vector<const char*>::iterator i = args.begin(); i != args.end(); ) {
if (ceph_argparse_double_dash(args, i)) {
} else if (ceph_argparse_witharg(args, i, &val, "--event-id", (char*)NULL)) {
event_id = val;
} else if (ceph_argparse_witharg(args, i, &val, "--event-type", "--event-types", (char*)NULL)) {
- get_str_set(val, ",", event_types);
+ rgw::notify::from_string_list(val, event_types);
} else if (ceph_argparse_binary_flag(args, i, &detail, NULL, "--detail", (char*)NULL)) {
// do nothing
} else if (strncmp(*i, "-", 1) == 0) {
RGWUserInfo& user_info = user_op.get_user_info();
RGWUserPubSub ups(store, user_info.user_id);
- RGWUserPubSub::Sub::list_events_result result;
-
if (!max_entries_specified) {
- max_entries = 100;
+ max_entries = RGWUserPubSub::Sub::DEFAULT_MAX_EVENTS;
}
auto sub = ups.get_sub(sub_name);
- ret = sub->list_events(marker, max_entries, &result);
+ ret = sub->list_events(marker, max_entries);
if (ret < 0) {
cerr << "ERROR: could not list events: " << cpp_strerror(-ret) << std::endl;
return -ret;
}
- encode_json("result", result, formatter);
+ encode_json("result", *sub, formatter);
formatter->flush(cout);
}
// -*- mode:C++; tab-width:8; c-basic-offset:2; indent-tabs-mode:t -*-
// vim: ts=8 sw=2 smarttab
+#include "include/compat.h"
#include "rgw_amqp.h"
#include <amqp.h>
#include <amqp_tcp_socket.h>
#include <atomic>
#include <mutex>
#include <boost/lockfree/queue.hpp>
+#include "common/dout.h"
+
+#define dout_subsys ceph_subsys_rgw
// TODO investigation, not necessarily issues:
// (1) in case of single threaded writer context use spsc_queue
namespace rgw::amqp {
// RGW AMQP status codes for publishing
-static const int RGW_AMQP_STATUS_BROKER_NACK = -0x1001;
-static const int RGW_AMQP_STATUS_CONNECTION_CLOSED = -0x1002;
-static const int RGW_AMQP_STATUS_QUEUE_FULL = -0x1003;
-static const int RGW_AMQP_STATUS_MAX_INFLIGHT = -0x1004;
+static const int RGW_AMQP_STATUS_BROKER_NACK = -0x1001;
+static const int RGW_AMQP_STATUS_CONNECTION_CLOSED = -0x1002;
+static const int RGW_AMQP_STATUS_QUEUE_FULL = -0x1003;
+static const int RGW_AMQP_STATUS_MAX_INFLIGHT = -0x1004;
+static const int RGW_AMQP_STATUS_MANAGER_STOPPED = -0x1005;
// RGW AMQP status code for connection opening
-static const int RGW_AMQP_STATUS_CONN_ALLOC_FAILED = -0x2001;
-static const int RGW_AMQP_STATUS_SOCKET_ALLOC_FAILED = -0x2002;
-static const int RGW_AMQP_STATUS_SOCKET_OPEN_FAILED = -0x2003;
-static const int RGW_AMQP_STATUS_LOGIN_FAILED = -0x2004;
-static const int RGW_AMQP_STATUS_CHANNEL_OPEN_FAILED = -0x2005;
+static const int RGW_AMQP_STATUS_CONN_ALLOC_FAILED = -0x2001;
+static const int RGW_AMQP_STATUS_SOCKET_ALLOC_FAILED = -0x2002;
+static const int RGW_AMQP_STATUS_SOCKET_OPEN_FAILED = -0x2003;
+static const int RGW_AMQP_STATUS_LOGIN_FAILED = -0x2004;
+static const int RGW_AMQP_STATUS_CHANNEL_OPEN_FAILED = -0x2005;
static const int RGW_AMQP_STATUS_VERIFY_EXCHANGE_FAILED = -0x2006;
-static const int RGW_AMQP_STATUS_Q_DECLARE_FAILED = -0x2007;
+static const int RGW_AMQP_STATUS_Q_DECLARE_FAILED = -0x2007;
static const int RGW_AMQP_STATUS_CONFIRM_DECLARE_FAILED = -0x2008;
static const int RGW_AMQP_STATUS_CONSUME_DECLARE_FAILED = -0x2009;
-static const int RGW_AMQP_RESPONSE_SOCKET_ERROR = -0x3008;
-static const int RGW_AMQP_NO_REPLY_CODE = 0x0;
+static const int RGW_AMQP_RESPONSE_SOCKET_ERROR = -0x3008;
+static const int RGW_AMQP_NO_REPLY_CODE = 0x0;
// key class for the connection list
struct connection_id_t {
};
};
+std::string to_string(const connection_id_t& id) {
+ return id.host+":"+"/"+id.vhost;
+}
+
// connection_t state cleaner
// could be used for automatic cleanup when getting out of scope
class ConnectionCleaner {
int reply_type;
int reply_code;
mutable std::atomic<int> ref_count;
+ CephContext* cct;
CallbackList callbacks;
// default ctor
status(AMQP_STATUS_OK),
reply_type(AMQP_RESPONSE_NORMAL),
reply_code(RGW_AMQP_NO_REPLY_CODE),
- ref_count(0) {}
+ ref_count(0),
+ cct(nullptr) {}
// cleanup of all internal connection resource
// the object can still remain, and internal connection
amqp_bytes_free(reply_to_queue);
reply_to_queue = amqp_empty_bytes;
// fire all remaining callbacks
- std::for_each(callbacks.begin(), callbacks.end(), [s](auto& cb_tag) {
- cb_tag.cb(s);
+ std::for_each(callbacks.begin(), callbacks.end(), [this](auto& cb_tag) {
+ cb_tag.cb(status);
+ ldout(cct, 20) << "AMQP destroy: invoking callback with tag=" << cb_tag.tag << dendl;
});
+ callbacks.clear();
delivery_tag = 1;
}
return "RGW_AMQP_STATUS_QUEUE_FULL";
case RGW_AMQP_STATUS_MAX_INFLIGHT:
return "RGW_AMQP_STATUS_MAX_INFLIGHT";
+ case RGW_AMQP_STATUS_MANAGER_STOPPED:
+ return "RGW_AMQP_STATUS_MANAGER_STOPPED";
case RGW_AMQP_STATUS_CONN_ALLOC_FAILED:
return "RGW_AMQP_STATUS_CONN_ALLOC_FAILED";
case RGW_AMQP_STATUS_SOCKET_ALLOC_FAILED:
// utility function to create a new connection
connection_ptr_t create_new_connection(const amqp_connection_info& info,
- const std::string& exchange) {
+ const std::string& exchange, CephContext* cct) {
// create connection state
connection_ptr_t conn = new connection_t;
conn->exchange = exchange;
conn->user.assign(info.user);
conn->password.assign(info.password);
+ conn->cct = cct;
return create_connection(conn, info);
}
MessageQueue messages;
std::atomic<size_t> queued;
std::atomic<size_t> dequeued;
+ CephContext* const cct;
mutable std::mutex connections_lock;
std::thread runner;
if (!conn->is_ok()) {
// connection had an issue while message was in the queue
// TODO add error stats
+ ldout(conn->cct, 1) << "AMQP publish: connection had an issue while message was in the queue" << dendl;
if (message->cb) {
message->cb(RGW_AMQP_STATUS_CONNECTION_CLOSED);
}
nullptr,
amqp_cstring_bytes(message->message.c_str()));
if (rc == AMQP_STATUS_OK) {
+ ldout(conn->cct, 20) << "AMQP publish (no callback): OK" << dendl;
return;
}
+ ldout(conn->cct, 1) << "AMQP publish (no callback): failed with error " << status_to_string(rc) << dendl;
// an error occurred, close connection
  // it will be retried by the main loop
conn->destroy(rc);
amqp_cstring_bytes(message->message.c_str()));
if (rc == AMQP_STATUS_OK) {
- if (conn->callbacks.size() < max_inflight) {
+ auto const q_len = conn->callbacks.size();
+ if (q_len < max_inflight) {
+ ldout(conn->cct, 20) << "AMQP publish (with callback, tag=" << conn->delivery_tag << "): OK. Queue has: " << q_len << " callbacks" << dendl;
conn->callbacks.emplace_back(conn->delivery_tag++, message->cb);
} else {
// immediately invoke callback with error
+ ldout(conn->cct, 1) << "AMQP publish (with callback): failed with error: callback queue full" << dendl;
message->cb(RGW_AMQP_STATUS_MAX_INFLIGHT);
}
} else {
// an error occurred, close connection
  // it will be retried by the main loop
+ ldout(conn->cct, 1) << "AMQP publish (with callback): failed with error: " << status_to_string(rc) << dendl;
conn->destroy(rc);
// immediately invoke callback with error
message->cb(rc);
auto& conn = conn_it->second;
// delete the connection if marked for deletion
if (conn->marked_for_deletion) {
+ ldout(conn->cct, 10) << "AMQP run: connection is deleted" << dendl;
conn->destroy(RGW_AMQP_STATUS_CONNECTION_CLOSED);
std::lock_guard<std::mutex> lock(connections_lock);
// erase is safe - does not invalidate any other iterator
info.vhost = const_cast<char*>(conn_it->first.vhost.c_str());
info.user = const_cast<char*>(conn->user.c_str());
info.password = const_cast<char*>(conn->password.c_str());
+ ldout(conn->cct, 20) << "AMQP run: retry connection" << dendl;
if (create_connection(conn, info)->is_ok() == false) {
+ ldout(conn->cct, 10) << "AMQP run: connection (" << to_string(conn_it->first) << ") retry failed" << dendl;
// TODO: add error counter for failed retries
// TODO: add exponential backoff for retries
+ } else {
+      ldout(conn->cct, 10) << "AMQP run: connection (" << to_string(conn_it->first) << ") retry successful" << dendl;
}
INCREMENT_AND_CONTINUE(conn_it);
}
if (rc != AMQP_STATUS_OK) {
// an error occurred, close connection
  // it will be retried by the main loop
+ ldout(conn->cct, 1) << "AMQP run: connection read error: " << status_to_string(rc) << dendl;
conn->destroy(rc);
INCREMENT_AND_CONTINUE(conn_it);
}
if (frame.frame_type != AMQP_FRAME_METHOD) {
+ ldout(conn->cct, 10) << "AMQP run: ignoring non n/ack messages" << dendl;
// handler is for publish confirmation only - handle only method frames
// TODO: add a counter
INCREMENT_AND_CONTINUE(conn_it);
case AMQP_CHANNEL_CLOSE_METHOD:
{
// other side closed the connection, no need to continue
+ ldout(conn->cct, 10) << "AMQP run: connection was closed by broker" << dendl;
conn->destroy(rc);
INCREMENT_AND_CONTINUE(conn_it);
}
case AMQP_BASIC_RETURN_METHOD:
// message was not delivered, returned to sender
// TODO: add a counter
+ ldout(conn->cct, 10) << "AMQP run: message delivery error" << dendl;
INCREMENT_AND_CONTINUE(conn_it);
break;
default:
// unexpected method
// TODO: add a counter
+ ldout(conn->cct, 10) << "AMQP run: unexpected message" << dendl;
INCREMENT_AND_CONTINUE(conn_it);
}
const auto& callbacks_end = conn->callbacks.end();
const auto& callbacks_begin = conn->callbacks.begin();
- const auto it = std::find(callbacks_begin, callbacks_end, tag);
- if (it != callbacks_end) {
+ const auto tag_it = std::find(callbacks_begin, callbacks_end, tag);
+ if (tag_it != callbacks_end) {
if (multiple) {
// n/ack all up to (and including) the tag
- for (auto rit = it; rit >= callbacks_begin; --rit) {
- rit->cb(result);
- conn->callbacks.erase(rit);
+ ldout(conn->cct, 20) << "AMQP run: multiple n/acks received with tag=" << tag << " and result=" << result << dendl;
+ auto it = callbacks_begin;
+      while (it != conn->callbacks.end() && it->tag <= tag) {
+ ldout(conn->cct, 20) << "AMQP run: invoking callback with tag=" << it->tag << dendl;
+ it->cb(result);
+ it = conn->callbacks.erase(it);
}
} else {
// n/ack a specific tag
- it->cb(result);
- conn->callbacks.erase(it);
+ ldout(conn->cct, 20) << "AMQP run: n/ack received, invoking callback with tag=" << tag << " and result=" << result << dendl;
+ tag_it->cb(result);
+ conn->callbacks.erase(tag_it);
}
} else {
// TODO add counter for acks with no callback
+ ldout(conn->cct, 10) << "AMQP run: unsolicited n/ack received with tag=" << tag << dendl;
}
// just increment the iterator
++conn_it;
Manager(size_t _max_connections,
size_t _max_inflight,
size_t _max_queue,
- long _usec_timeout) :
+ long _usec_timeout,
+ CephContext* _cct) :
max_connections(_max_connections),
max_inflight(_max_inflight),
max_queue(_max_queue),
messages(max_queue),
queued(0),
dequeued(0),
+ cct(_cct),
runner(&Manager::run, this) {
// The hashmap has "max connections" as the initial number of buckets,
// and allows for 10 collisions per bucket before rehash.
// This is to prevent rehashing so that iterators are not invalidated
// when a new connection is added.
connections.max_load_factor(10.0);
+ // give the runner thread a name for easier debugging
+ const auto rc = ceph_pthread_setname(runner.native_handle(), "amqp_manager");
+ ceph_assert(rc==0);
}
// non copyable
connection_ptr_t connect(const std::string& url, const std::string& exchange) {
if (stopped) {
// TODO: increment counter
+ ldout(cct, 1) << "AMQP connect: manager is stopped" << dendl;
return nullptr;
}
std::vector<char> url_cache(url.c_str(), url.c_str()+url.size()+1);
if (AMQP_STATUS_OK != amqp_parse_url(url_cache.data(), &info)) {
// TODO: increment counter
+ ldout(cct, 1) << "AMQP connect: URL parsing failed" << dendl;
return nullptr;
}
if (it != connections.end()) {
if (it->second->marked_for_deletion) {
// TODO: increment counter
+ ldout(cct, 1) << "AMQP connect: endpoint marked for deletion" << dendl;
return nullptr;
} else if (it->second->exchange != exchange) {
// TODO: increment counter
+ ldout(cct, 1) << "AMQP connect: exchange mismatch" << dendl;
return nullptr;
}
// connection found - return even if non-ok
+ ldout(cct, 20) << "AMQP connect: connection found" << dendl;
return it->second;
}
// connection not found, creating a new one
if (connection_count >= max_connections) {
// TODO: increment counter
+ ldout(cct, 1) << "AMQP connect: max connections exceeded" << dendl;
return nullptr;
}
- const auto conn = create_new_connection(info, exchange);
+ const auto conn = create_new_connection(info, exchange, cct);
// create_new_connection must always return a connection object
// even if error occurred during creation.
// in such a case the creation will be retried in the main thread
ceph_assert(conn);
++connection_count;
+ ldout(cct, 10) << "AMQP connect: new connection is created. Total connections: " << connection_count << dendl;
+ ldout(cct, 10) << "AMQP connect: new connection status is: " << status_to_string(conn->status) << dendl;
return connections.emplace(id, conn).first->second;
}
int publish(connection_ptr_t& conn,
const std::string& topic,
const std::string& message) {
+ if (stopped) {
+ return RGW_AMQP_STATUS_MANAGER_STOPPED;
+ }
if (!conn || !conn->is_ok()) {
return RGW_AMQP_STATUS_CONNECTION_CLOSED;
}
const std::string& topic,
const std::string& message,
reply_callback_t cb) {
+ if (stopped) {
+ return RGW_AMQP_STATUS_MANAGER_STOPPED;
+ }
if (!conn || !conn->is_ok()) {
return RGW_AMQP_STATUS_CONNECTION_CLOSED;
}
// singleton manager
// note that the manager itself is not a singleton, and multiple instances may co-exist
-// TODO get parameters from conf
-Manager s_manager(256, 8192, 8192, 100);
+// TODO make the pointer atomic in allocation and deallocation to avoid race conditions
+static Manager* s_manager = nullptr;
+
+static const size_t MAX_CONNECTIONS_DEFAULT = 256;
+static const size_t MAX_INFLIGHT_DEFAULT = 8192;
+static const size_t MAX_QUEUE_DEFAULT = 8192;
+
+bool init(CephContext* cct) {
+ if (s_manager) {
+ return false;
+ }
+ // TODO: take conf from CephContext
+ s_manager = new Manager(MAX_CONNECTIONS_DEFAULT, MAX_INFLIGHT_DEFAULT, MAX_QUEUE_DEFAULT, 100, cct);
+ return true;
+}
+
+void shutdown() {
+ delete s_manager;
+ s_manager = nullptr;
+}
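+// note: init() is expected to be called once at startup (e.g. from rgw_main when the
+// "pubsub" API is enabled) and shutdown() at teardown; until init() is called the
+// functions below fail gracefully (nullptr connections, RGW_AMQP_STATUS_MANAGER_STOPPED, or defaults)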
connection_ptr_t connect(const std::string& url, const std::string& exchange) {
- return s_manager.connect(url, exchange);
+ if (!s_manager) return nullptr;
+ return s_manager->connect(url, exchange);
}
int publish(connection_ptr_t& conn,
const std::string& topic,
const std::string& message) {
- return s_manager.publish(conn, topic, message);
+ if (!s_manager) return RGW_AMQP_STATUS_MANAGER_STOPPED;
+ return s_manager->publish(conn, topic, message);
}
int publish_with_confirm(connection_ptr_t& conn,
const std::string& topic,
const std::string& message,
reply_callback_t cb) {
- return s_manager.publish_with_confirm(conn, topic, message, cb);
+ if (!s_manager) return RGW_AMQP_STATUS_MANAGER_STOPPED;
+ return s_manager->publish_with_confirm(conn, topic, message, cb);
}
size_t get_connection_count() {
- return s_manager.get_connection_count();
+ if (!s_manager) return 0;
+ return s_manager->get_connection_count();
}
size_t get_inflight() {
- return s_manager.get_inflight();
+ if (!s_manager) return 0;
+ return s_manager->get_inflight();
}
size_t get_queued() {
- return s_manager.get_queued();
+ if (!s_manager) return 0;
+ return s_manager->get_queued();
}
size_t get_dequeued() {
- return s_manager.get_dequeued();
+ if (!s_manager) return 0;
+ return s_manager->get_dequeued();
}
size_t get_max_connections() {
- return s_manager.max_connections;
+ if (!s_manager) return MAX_CONNECTIONS_DEFAULT;
+ return s_manager->max_connections;
}
size_t get_max_inflight() {
- return s_manager.max_inflight;
+ if (!s_manager) return MAX_INFLIGHT_DEFAULT;
+ return s_manager->max_inflight;
}
size_t get_max_queue() {
- return s_manager.max_queue;
+ if (!s_manager) return MAX_QUEUE_DEFAULT;
+ return s_manager->max_queue;
}
bool disconnect(connection_ptr_t& conn) {
- return s_manager.disconnect(conn);
+ if (!s_manager) return false;
+ return s_manager->disconnect(conn);
}
} // namespace amqp
#include <functional>
#include <boost/smart_ptr/intrusive_ptr.hpp>
+class CephContext;
+
namespace rgw::amqp {
// forward declaration of connection object
struct connection_t;
// indicating the result, and not to return anything
typedef std::function<void(int)> reply_callback_t;
+// initialize the amqp manager
+bool init(CephContext* cct);
+
+// shutdown the amqp manager
+void shutdown();
+
// connect to an amqp endpoint
connection_ptr_t connect(const std::string& url, const std::string& exchange);
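+
+// illustrative call sequence (sketch only; broker URL, exchange and topic names are examples):
+//
+//   rgw::amqp::init(cct);
+//   auto conn = rgw::amqp::connect("amqp://user:password@host:5672/vhost", "ex1");
+//   rgw::amqp::publish_with_confirm(conn, "mytopic", "payload",
+//       [](int status) { /* invoked with the broker ack/nack result */ });
+//   rgw::amqp::shutdown();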
--- /dev/null
+// -*- mode:C++; tab-width:8; c-basic-offset:2; indent-tabs-mode:t -*-
+// vim: ts=8 sw=2 smarttab ft=cpp
+
+#include "rgw_arn.h"
+#include "rgw_common.h"
+#include <regex>
+
+namespace rgw {
+
+namespace {
+boost::optional<Partition> to_partition(const smatch::value_type& p,
+ bool wildcards) {
+ if (p == "aws") {
+ return Partition::aws;
+ } else if (p == "aws-cn") {
+ return Partition::aws_cn;
+ } else if (p == "aws-us-gov") {
+ return Partition::aws_us_gov;
+ } else if (p == "*" && wildcards) {
+ return Partition::wildcard;
+ } else {
+ return boost::none;
+ }
+
+ ceph_abort();
+}
+
+boost::optional<Service> to_service(const smatch::value_type& s,
+ bool wildcards) {
+ static const unordered_map<string, Service> services = {
+ { "acm", Service::acm },
+ { "apigateway", Service::apigateway },
+ { "appstream", Service::appstream },
+ { "artifact", Service::artifact },
+ { "autoscaling", Service::autoscaling },
+ { "aws-marketplace", Service::aws_marketplace },
+ { "aws-marketplace-management",
+ Service::aws_marketplace_management },
+ { "aws-portal", Service::aws_portal },
+ { "cloudformation", Service::cloudformation },
+ { "cloudfront", Service::cloudfront },
+ { "cloudhsm", Service::cloudhsm },
+ { "cloudsearch", Service::cloudsearch },
+ { "cloudtrail", Service::cloudtrail },
+ { "cloudwatch", Service::cloudwatch },
+ { "codebuild", Service::codebuild },
+ { "codecommit", Service::codecommit },
+ { "codedeploy", Service::codedeploy },
+ { "codepipeline", Service::codepipeline },
+ { "cognito-identity", Service::cognito_identity },
+ { "cognito-idp", Service::cognito_idp },
+ { "cognito-sync", Service::cognito_sync },
+ { "config", Service::config },
+ { "datapipeline", Service::datapipeline },
+ { "devicefarm", Service::devicefarm },
+ { "directconnect", Service::directconnect },
+ { "dms", Service::dms },
+ { "ds", Service::ds },
+ { "dynamodb", Service::dynamodb },
+ { "ec2", Service::ec2 },
+ { "ecr", Service::ecr },
+ { "ecs", Service::ecs },
+ { "elasticache", Service::elasticache },
+ { "elasticbeanstalk", Service::elasticbeanstalk },
+ { "elasticfilesystem", Service::elasticfilesystem },
+ { "elasticloadbalancing", Service::elasticloadbalancing },
+ { "elasticmapreduce", Service::elasticmapreduce },
+ { "elastictranscoder", Service::elastictranscoder },
+ { "es", Service::es },
+ { "events", Service::events },
+ { "firehose", Service::firehose },
+ { "gamelift", Service::gamelift },
+ { "glacier", Service::glacier },
+ { "health", Service::health },
+ { "iam", Service::iam },
+ { "importexport", Service::importexport },
+ { "inspector", Service::inspector },
+ { "iot", Service::iot },
+ { "kinesis", Service::kinesis },
+ { "kinesisanalytics", Service::kinesisanalytics },
+ { "kms", Service::kms },
+ { "lambda", Service::lambda },
+ { "lightsail", Service::lightsail },
+ { "logs", Service::logs },
+ { "machinelearning", Service::machinelearning },
+ { "mobileanalytics", Service::mobileanalytics },
+ { "mobilehub", Service::mobilehub },
+ { "opsworks", Service::opsworks },
+ { "opsworks-cm", Service::opsworks_cm },
+ { "polly", Service::polly },
+ { "rds", Service::rds },
+ { "redshift", Service::redshift },
+ { "route53", Service::route53 },
+ { "route53domains", Service::route53domains },
+ { "s3", Service::s3 },
+ { "sdb", Service::sdb },
+ { "servicecatalog", Service::servicecatalog },
+ { "ses", Service::ses },
+ { "sns", Service::sns },
+ { "sqs", Service::sqs },
+ { "ssm", Service::ssm },
+ { "states", Service::states },
+ { "storagegateway", Service::storagegateway },
+ { "sts", Service::sts },
+ { "support", Service::support },
+ { "swf", Service::swf },
+ { "trustedadvisor", Service::trustedadvisor },
+ { "waf", Service::waf },
+ { "workmail", Service::workmail },
+ { "workspaces", Service::workspaces }};
+
+ if (wildcards && s == "*") {
+ return Service::wildcard;
+ }
+
+ auto i = services.find(s);
+ if (i == services.end()) {
+ return boost::none;
+ } else {
+ return i->second;
+ }
+}
+}
+ARN::ARN(const rgw_obj& o)
+ : partition(Partition::aws),
+ service(Service::s3),
+ region(),
+ account(o.bucket.tenant),
+ resource(o.bucket.name)
+{
+ resource.push_back('/');
+ resource.append(o.key.name);
+}
+
+ARN::ARN(const rgw_bucket& b)
+ : partition(Partition::aws),
+ service(Service::s3),
+ region(),
+ account(b.tenant),
+ resource(b.name) { }
+
+ARN::ARN(const rgw_bucket& b, const std::string& o)
+ : partition(Partition::aws),
+ service(Service::s3),
+ region(),
+ account(b.tenant),
+ resource(b.name) {
+ resource.push_back('/');
+ resource.append(o);
+}
+
+ARN::ARN(const std::string& resource_name, const std::string& type, const std::string& tenant, bool has_path)
+ : partition(Partition::aws),
+ service(Service::iam),
+ region(),
+ account(tenant),
+ resource(type) {
+ if (! has_path)
+ resource.push_back('/');
+ resource.append(resource_name);
+}
+
+boost::optional<ARN> ARN::parse(const std::string& s, bool wildcards) {
+ static const std::regex rx_wild("arn:([^:]*):([^:]*):([^:]*):([^:]*):([^:]*)",
+ std::regex_constants::ECMAScript |
+ std::regex_constants::optimize);
+ static const std::regex rx_no_wild(
+ "arn:([^:*]*):([^:*]*):([^:*]*):([^:*]*):(.*)",
+ std::regex_constants::ECMAScript |
+ std::regex_constants::optimize);
+
+ smatch match;
+
+ if ((s == "*") && wildcards) {
+ return ARN(Partition::wildcard, Service::wildcard, "*", "*", "*");
+ } else if (regex_match(s, match, wildcards ? rx_wild : rx_no_wild) &&
+ match.size() == 6) {
+ if (auto p = to_partition(match[1], wildcards)) {
+ if (auto s = to_service(match[2], wildcards)) {
+ return ARN(*p, *s, match[3], match[4], match[5]);
+ }
+ }
+ }
+ return boost::none;
+}
+
+std::string ARN::to_string() const {
+ std::string s{"arn:"};
+
+ if (partition == Partition::aws) {
+ s.append("aws:");
+ } else if (partition == Partition::aws_cn) {
+ s.append("aws-cn:");
+ } else if (partition == Partition::aws_us_gov) {
+ s.append("aws-us-gov:");
+ } else {
+ s.append("*:");
+ }
+
+ static const std::unordered_map<Service, string> services = {
+ { Service::acm, "acm" },
+ { Service::apigateway, "apigateway" },
+ { Service::appstream, "appstream" },
+ { Service::artifact, "artifact" },
+ { Service::autoscaling, "autoscaling" },
+ { Service::aws_marketplace, "aws-marketplace" },
+ { Service::aws_marketplace_management, "aws-marketplace-management" },
+ { Service::aws_portal, "aws-portal" },
+ { Service::cloudformation, "cloudformation" },
+ { Service::cloudfront, "cloudfront" },
+ { Service::cloudhsm, "cloudhsm" },
+ { Service::cloudsearch, "cloudsearch" },
+ { Service::cloudtrail, "cloudtrail" },
+ { Service::cloudwatch, "cloudwatch" },
+ { Service::codebuild, "codebuild" },
+ { Service::codecommit, "codecommit" },
+ { Service::codedeploy, "codedeploy" },
+ { Service::codepipeline, "codepipeline" },
+ { Service::cognito_identity, "cognito-identity" },
+ { Service::cognito_idp, "cognito-idp" },
+ { Service::cognito_sync, "cognito-sync" },
+ { Service::config, "config" },
+ { Service::datapipeline, "datapipeline" },
+ { Service::devicefarm, "devicefarm" },
+ { Service::directconnect, "directconnect" },
+ { Service::dms, "dms" },
+ { Service::ds, "ds" },
+ { Service::dynamodb, "dynamodb" },
+ { Service::ec2, "ec2" },
+ { Service::ecr, "ecr" },
+ { Service::ecs, "ecs" },
+ { Service::elasticache, "elasticache" },
+ { Service::elasticbeanstalk, "elasticbeanstalk" },
+ { Service::elasticfilesystem, "elasticfilesystem" },
+ { Service::elasticloadbalancing, "elasticloadbalancing" },
+ { Service::elasticmapreduce, "elasticmapreduce" },
+ { Service::elastictranscoder, "elastictranscoder" },
+ { Service::es, "es" },
+ { Service::events, "events" },
+ { Service::firehose, "firehose" },
+ { Service::gamelift, "gamelift" },
+ { Service::glacier, "glacier" },
+ { Service::health, "health" },
+ { Service::iam, "iam" },
+ { Service::importexport, "importexport" },
+ { Service::inspector, "inspector" },
+ { Service::iot, "iot" },
+ { Service::kinesis, "kinesis" },
+ { Service::kinesisanalytics, "kinesisanalytics" },
+ { Service::kms, "kms" },
+ { Service::lambda, "lambda" },
+ { Service::lightsail, "lightsail" },
+ { Service::logs, "logs" },
+ { Service::machinelearning, "machinelearning" },
+ { Service::mobileanalytics, "mobileanalytics" },
+ { Service::mobilehub, "mobilehub" },
+ { Service::opsworks, "opsworks" },
+ { Service::opsworks_cm, "opsworks-cm" },
+ { Service::polly, "polly" },
+ { Service::rds, "rds" },
+ { Service::redshift, "redshift" },
+ { Service::route53, "route53" },
+ { Service::route53domains, "route53domains" },
+ { Service::s3, "s3" },
+ { Service::sdb, "sdb" },
+ { Service::servicecatalog, "servicecatalog" },
+ { Service::ses, "ses" },
+ { Service::sns, "sns" },
+ { Service::sqs, "sqs" },
+ { Service::ssm, "ssm" },
+ { Service::states, "states" },
+ { Service::storagegateway, "storagegateway" },
+ { Service::sts, "sts" },
+ { Service::support, "support" },
+ { Service::swf, "swf" },
+ { Service::trustedadvisor, "trustedadvisor" },
+ { Service::waf, "waf" },
+ { Service::workmail, "workmail" },
+ { Service::workspaces, "workspaces" }};
+
+ auto i = services.find(service);
+ if (i != services.end()) {
+ s.append(i->second);
+ } else {
+ s.push_back('*');
+ }
+ s.push_back(':');
+
+ s.append(region);
+ s.push_back(':');
+
+ s.append(account);
+ s.push_back(':');
+
+ s.append(resource);
+
+ return s;
+}
+
+bool operator ==(const ARN& l, const ARN& r) {
+ return ((l.partition == r.partition) &&
+ (l.service == r.service) &&
+ (l.region == r.region) &&
+ (l.account == r.account) &&
+ (l.resource == r.resource));
+}
+bool operator <(const ARN& l, const ARN& r) {
+ return ((l.partition < r.partition) ||
+ (l.service < r.service) ||
+ (l.region < r.region) ||
+ (l.account < r.account) ||
+ (l.resource < r.resource));
+}
+
+// The candidate is not allowed to have wildcards. The only way to
+// do that sanely would be to use unification rather than matching.
+bool ARN::match(const ARN& candidate) const {
+ if ((candidate.partition == Partition::wildcard) ||
+ (partition != candidate.partition && partition
+ != Partition::wildcard)) {
+ return false;
+ }
+
+ if ((candidate.service == Service::wildcard) ||
+ (service != candidate.service && service != Service::wildcard)) {
+ return false;
+ }
+
+ if (!match_policy(region, candidate.region, MATCH_POLICY_ARN)) {
+ return false;
+ }
+
+ if (!match_policy(account, candidate.account, MATCH_POLICY_ARN)) {
+ return false;
+ }
+
+ if (!match_policy(resource, candidate.resource, MATCH_POLICY_RESOURCE)) {
+ return false;
+ }
+
+ return true;
+}
+
+boost::optional<ARNResource> ARNResource::parse(const std::string& s) {
+ static const std::regex rx("^([^:/]*)[:/]?([^:/]*)?[:/]?(.*)$",
+ std::regex_constants::ECMAScript |
+ std::regex_constants::optimize);
+ std::smatch match;
+ if (!regex_match(s, match, rx)) {
+ return boost::none;
+ }
+ if (match[2].str().empty() && match[3].str().empty()) {
+    // only the resource part is present
+ return rgw::ARNResource("", match[1], "");
+ }
+
+  // a resource type is also present; it must not be a wildcard
+  if (match[1] != std::string(wildcard)) {
+ return rgw::ARNResource(match[1], match[2], match[3]);
+ }
+
+ return boost::none;
+}
+
+std::string ARNResource::to_string() const {
+ std::string s;
+
+ if (!resource_type.empty()) {
+ s.append(resource_type);
+ s.push_back(':');
+
+ s.append(resource);
+ s.push_back(':');
+
+ s.append(qualifier);
+ } else {
+ s.append(resource);
+ }
+
+ return s;
+}
+
+}
+
--- /dev/null
+// -*- mode:C++; tab-width:8; c-basic-offset:2; indent-tabs-mode:t -*-
+// vim: ts=8 sw=2 smarttab ft=cpp
+
+#pragma once
+#include <string>
+#include <boost/optional.hpp>
+
+class rgw_obj;
+class rgw_bucket;
+
+namespace rgw {
+
+enum struct Partition {
+ aws, aws_cn, aws_us_gov, wildcard
+ // If we wanted our own ARNs for principal type unique to us
+ // (maybe to integrate better with Swift) or for anything else we
+ // provide that doesn't map onto S3, we could add an 'rgw'
+ // partition type.
+};
+
+enum struct Service {
+ apigateway, appstream, artifact, autoscaling, aws_portal, acm,
+ cloudformation, cloudfront, cloudhsm, cloudsearch, cloudtrail,
+ cloudwatch, events, logs, codebuild, codecommit, codedeploy,
+ codepipeline, cognito_idp, cognito_identity, cognito_sync,
+ config, datapipeline, dms, devicefarm, directconnect,
+ ds, dynamodb, ec2, ecr, ecs, ssm, elasticbeanstalk, elasticfilesystem,
+ elasticloadbalancing, elasticmapreduce, elastictranscoder, elasticache,
+ es, gamelift, glacier, health, iam, importexport, inspector, iot,
+ kms, kinesisanalytics, firehose, kinesis, lambda, lightsail,
+ machinelearning, aws_marketplace, aws_marketplace_management,
+ mobileanalytics, mobilehub, opsworks, opsworks_cm, polly,
+ redshift, rds, route53, route53domains, sts, servicecatalog,
+ ses, sns, sqs, s3, swf, sdb, states, storagegateway, support,
+ trustedadvisor, waf, workmail, workspaces, wildcard
+};
+
+/* valid format:
+ * 'arn:partition:service:region:account-id:resource'
+ * The 'resource' part can be further broken down via ARNResource
+*/
+struct ARN {
+ Partition partition;
+ Service service;
+ std::string region;
+ // Once we refit tenant, we should probably use that instead of a
+ // string.
+ std::string account;
+ std::string resource;
+
+ ARN()
+ : partition(Partition::wildcard), service(Service::wildcard) {}
+ ARN(Partition partition, Service service, std::string region,
+ std::string account, std::string resource)
+ : partition(partition), service(service), region(std::move(region)),
+ account(std::move(account)), resource(std::move(resource)) {}
+ ARN(const rgw_obj& o);
+ ARN(const rgw_bucket& b);
+ ARN(const rgw_bucket& b, const std::string& o);
+ ARN(const std::string& resource_name, const std::string& type, const std::string& tenant, bool has_path=false);
+
+ static boost::optional<ARN> parse(const std::string& s,
+ bool wildcard = false);
+ std::string to_string() const;
+
+ // `this` is the pattern
+ bool match(const ARN& candidate) const;
+};
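+
+// illustrative example: ARN::parse("arn:aws:s3:::mybucket/myobject") yields
+// partition=aws, service=s3, empty region and account, and resource="mybucket/myobject"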
+
+inline std::string to_string(const ARN& a) {
+ return a.to_string();
+}
+
+inline std::ostream& operator <<(std::ostream& m, const ARN& a) {
+ return m << to_string(a);
+}
+
+bool operator ==(const ARN& l, const ARN& r);
+bool operator <(const ARN& l, const ARN& r);
+
+/* valid formats (only resource part):
+ * 'resource'
+ * 'resourcetype/resource'
+ * 'resourcetype/resource/qualifier'
+ * 'resourcetype/resource:qualifier'
+ * 'resourcetype:resource'
+ * 'resourcetype:resource:qualifier'
+ * Note that 'resourceType' cannot be wildcard
+*/
+struct ARNResource {
+ constexpr static const char* const wildcard = "*";
+ std::string resource_type;
+ std::string resource;
+ std::string qualifier;
+
+ ARNResource() : resource_type(""), resource(wildcard), qualifier("") {}
+
+ ARNResource(const std::string& _resource_type, const std::string& _resource, const std::string& _qualifier) :
+ resource_type(std::move(_resource_type)), resource(std::move(_resource)), qualifier(std::move(_qualifier)) {}
+
+ static boost::optional<ARNResource> parse(const std::string& s);
+
+ std::string to_string() const;
+};
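+
+// illustrative examples: parse("myobject") yields only the resource part, while
+// parse("mytype/myresource") yields resource_type="mytype" and resource="myresource"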
+
+inline std::string to_string(const ARNResource& r) {
+ return r.to_string();
+}
+
+} // namespace rgw
+
+namespace std {
+template<>
+struct hash<::rgw::Service> {
+ size_t operator()(const ::rgw::Service& s) const noexcept {
+ // Invoke a default-constructed hash object for int.
+ return hash<int>()(static_cast<int>(s));
+ }
+};
+} // namespace std
+
#include <string>
#include "rgw_basic_types.h"
+#include "rgw_xml.h"
#include "common/ceph_json.h"
using std::string;
void decode_json_obj(rgw_user& val, JSONObj *obj)
{
- string s = obj->get_data();
- val.from_str(s);
+ val.from_str(obj->get_data());
}
void encode_json(const char *name, const rgw_user& val, Formatter *f)
{
- string s = val.to_str();
- f->dump_string(name, s);
+ f->dump_string(name, val.to_str());
+}
+
+void encode_xml(const char *name, const rgw_user& val, Formatter *f)
+{
+ encode_xml(name, val.to_str(), f);
}
namespace rgw {
void decode_json_obj(rgw_user& val, JSONObj *obj);
void encode_json(const char *name, const rgw_user& val, Formatter *f);
+void encode_xml(const char *name, const rgw_user& val, Formatter *f);
inline ostream& operator<<(ostream& out, const rgw_user &u) {
string s;
#include "rgw_string.h"
#include "rgw_rados.h"
#include "rgw_http_errors.h"
+#include "rgw_arn.h"
#include "common/ceph_crypto.h"
#include "common/armor.h"
#define dout_context g_ceph_context
#define dout_subsys ceph_subsys_rgw
-using rgw::IAM::ARN;
+using rgw::ARN;
using rgw::IAM::Effect;
using rgw::IAM::op_to_perm;
using rgw::IAM::Policy;
struct req_state * const s,
RGWAccessControlPolicy * const user_acl,
const vector<rgw::IAM::Policy>& user_policies,
- const rgw::IAM::ARN& res,
+ const rgw::ARN& res,
const uint64_t op)
{
auto usr_policy_res = eval_user_policies(user_policies, s->env, boost::none, op, res);
bool verify_user_permission(const DoutPrefixProvider* dpp,
struct req_state * const s,
- const rgw::IAM::ARN& res,
+ const rgw::ARN& res,
const uint64_t op)
{
return verify_user_permission(dpp, s, s->user_acl.get(), s->iam_user_policies, res, op);
/* Helper class used for RGWHTTPArgs parsing */
class NameVal
{
- string str;
- string name;
- string val;
+ const std::string str;
+ std::string name;
+ std::string val;
public:
- explicit NameVal(string nv) : str(nv) {}
+ explicit NameVal(const std::string& nv) : str(nv) {}
int parse();
- string& get_name() { return name; }
- string& get_val() { return val; }
+ std::string& get_name() { return name; }
+ std::string& get_val() { return val; }
};
/** Stores the XML arguments associated with the HTTP request in req_state*/
class RGWHTTPArgs {
- string str, empty_str;
- map<string, string> val_map;
- map<string, string> sys_val_map;
- map<string, string> sub_resources;
- bool has_resp_modifier;
- bool admin_subresource_added;
+ std::string str, empty_str;
+ std::map<std::string, std::string> val_map;
+ std::map<std::string, std::string> sys_val_map;
+ std::map<std::string, std::string> sub_resources;
+ bool has_resp_modifier = false;
+ bool admin_subresource_added = false;
public:
- RGWHTTPArgs() : has_resp_modifier(false), admin_subresource_added(false) {}
+ RGWHTTPArgs() = default;
+ explicit RGWHTTPArgs(const std::string& s) {
+ set(s);
+ parse();
+ }
/** Set the arguments; as received */
- void set(string s) {
+ void set(const std::string& s) {
has_resp_modifier = false;
val_map.clear();
sub_resources.clear();
}
/** parse the received arguments */
int parse();
- void append(const string& name, const string& val);
+ void append(const std::string& name, const string& val);
/** Get the value for a specific argument parameter */
- const string& get(const string& name, bool *exists = NULL) const;
+ const string& get(const std::string& name, bool *exists = NULL) const;
boost::optional<const std::string&>
get_optional(const std::string& name) const;
- int get_bool(const string& name, bool *val, bool *exists);
+ int get_bool(const std::string& name, bool *val, bool *exists);
int get_bool(const char *name, bool *val, bool *exists);
void get_bool(const char *name, bool *val, bool def_val);
int get_int(const char *name, int *val, int def_val);
/** Get the value for specific system argument parameter */
- std::string sys_get(const string& name, bool *exists = nullptr) const;
+ std::string sys_get(const std::string& name, bool *exists = nullptr) const;
/** see if a parameter is contained in this RGWHTTPArgs */
bool exists(const char *name) const {
bool sub_resource_exists(const char *name) const {
return (sub_resources.find(name) != std::end(sub_resources));
}
- map<string, string>& get_params() {
+ std::map<std::string, std::string>& get_params() {
return val_map;
}
const std::map<std::string, std::string>& get_sub_resources() const {
return has_resp_modifier;
}
void set_system() { /* make all system params visible */
- map<string, string>::iterator iter;
+ std::map<std::string, std::string>::iterator iter;
for (iter = sys_val_map.begin(); iter != sys_val_map.end(); ++iter) {
val_map[iter->first] = iter->second;
}
}
- const string& get_str() {
+ const std::string& get_str() {
return str;
}
}; // RGWHTTPArgs
const rgw::IAM::Environment& env,
boost::optional<const rgw::auth::Identity&> id,
const uint64_t op,
- const rgw::IAM::ARN& arn);
+ const rgw::ARN& arn);
bool verify_user_permission(const DoutPrefixProvider* dpp,
struct req_state * const s,
RGWAccessControlPolicy * const user_acl,
const vector<rgw::IAM::Policy>& user_policies,
- const rgw::IAM::ARN& res,
+ const rgw::ARN& res,
const uint64_t op);
bool verify_user_permission_no_policy(const DoutPrefixProvider* dpp,
struct req_state * const s,
const int perm);
bool verify_user_permission(const DoutPrefixProvider* dpp,
struct req_state * const s,
- const rgw::IAM::ARN& res,
+ const rgw::ARN& res,
const uint64_t op);
bool verify_user_permission_no_policy(const DoutPrefixProvider* dpp,
struct req_state * const s,
#include "rgw_common.h"
#include "rgw_http_client.h"
#include "rgw_http_errors.h"
+#include "common/async/completion.h"
#include "common/RefCountedObj.h"
#include "rgw_coroutine.h"
Mutex lock;
Cond cond;
+ using Signature = void(boost::system::error_code);
+ using Completion = ceph::async::Completion<Signature>;
+ std::unique_ptr<Completion> completion;
+
rgw_http_req_data() : id(-1), lock("rgw_http_req_data::lock") {
memset(error_buf, 0, sizeof(error_buf));
}
- int wait() {
- Mutex::Locker l(lock);
+ template <typename ExecutionContext, typename CompletionToken>
+ auto async_wait(ExecutionContext& ctx, CompletionToken&& token) {
+ boost::asio::async_completion<CompletionToken, Signature> init(token);
+ auto& handler = init.completion_handler;
+ {
+ std::unique_lock l{lock};
+ completion = Completion::create(ctx.get_executor(), std::move(handler));
+ }
+ return init.result.get();
+ }
+ int wait(optional_yield y) {
if (done) {
return ret;
}
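+    // when a yield context is provided, suspend the calling coroutine on the
+    // io_context instead of blocking on the condition variable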
+#ifdef HAVE_BOOST_CONTEXT
+ if (y) {
+ auto& context = y.get_io_context();
+ auto& yield = y.get_yield_context();
+ boost::system::error_code ec;
+ async_wait(context, yield[ec]);
+ return -ec.value();
+ }
+#endif
+ Mutex::Locker l(lock);
cond.Wait(lock);
return ret;
}
curl_handle = NULL;
h = NULL;
done = true;
- cond.Signal();
- }
-
- bool _is_done() {
- return done;
+ if (completion) {
+ boost::system::error_code ec(-ret, boost::system::system_category());
+ Completion::post(std::move(completion), ec);
+ } else {
+ cond.Signal();
+ }
}
bool is_done() {
- Mutex::Locker l(lock);
return done;
}
/*
* process a single simple one off request
*/
-int RGWHTTPClient::process()
+int RGWHTTPClient::process(optional_yield y)
{
- return RGWHTTP::process(this);
+ return RGWHTTP::process(this, y);
}
string RGWHTTPClient::to_str()
/*
* wait for async request to complete
*/
-int RGWHTTPClient::wait()
+int RGWHTTPClient::wait(optional_yield y)
{
- return req_data->wait();
+ return req_data->wait(y);
}
void RGWHTTPClient::cancel()
if (req_data->curl_handle) {
curl_multi_remove_handle((CURLM *)multi_handle, req_data->get_easy_handle());
}
- if (!req_data->_is_done()) {
+ if (!req_data->is_done()) {
_finish_request(req_data, -ECANCELED);
}
}
return 0;
}
-int RGWHTTP::process(RGWHTTPClient *req) {
+int RGWHTTP::process(RGWHTTPClient *req, optional_yield y) {
if (!req) {
return 0;
}
return r;
}
- return req->wait();
+ return req->wait(y);
}
#ifndef CEPH_RGW_HTTP_CLIENT_H
#define CEPH_RGW_HTTP_CLIENT_H
+#include "common/async/yield_context.h"
#include "common/RWLock.h"
#include "common/Cond.h"
#include "rgw_common.h"
verify_ssl = flag;
}
- int process();
+ int process(optional_yield y=null_yield);
- int wait();
+ int wait(optional_yield y=null_yield);
void cancel();
bool is_done();
{
public:
static int send(RGWHTTPClient *req);
- static int process(RGWHTTPClient *req);
+ static int process(RGWHTTPClient *req, optional_yield y=null_yield);
};
#endif
const uint64_t bit;
};
-namespace {
-boost::optional<Partition> to_partition(const smatch::value_type& p,
- bool wildcards) {
- if (p == "aws") {
- return Partition::aws;
- } else if (p == "aws-cn") {
- return Partition::aws_cn;
- } else if (p == "aws-us-gov") {
- return Partition::aws_us_gov;
- } else if (p == "*" && wildcards) {
- return Partition::wildcard;
- } else {
- return boost::none;
- }
-
- ceph_abort();
-}
-
-boost::optional<Service> to_service(const smatch::value_type& s,
- bool wildcards) {
- static const unordered_map<string, Service> services = {
- { "acm", Service::acm },
- { "apigateway", Service::apigateway },
- { "appstream", Service::appstream },
- { "artifact", Service::artifact },
- { "autoscaling", Service::autoscaling },
- { "aws-marketplace", Service::aws_marketplace },
- { "aws-marketplace-management",
- Service::aws_marketplace_management },
- { "aws-portal", Service::aws_portal },
- { "cloudformation", Service::cloudformation },
- { "cloudfront", Service::cloudfront },
- { "cloudhsm", Service::cloudhsm },
- { "cloudsearch", Service::cloudsearch },
- { "cloudtrail", Service::cloudtrail },
- { "cloudwatch", Service::cloudwatch },
- { "codebuild", Service::codebuild },
- { "codecommit", Service::codecommit },
- { "codedeploy", Service::codedeploy },
- { "codepipeline", Service::codepipeline },
- { "cognito-identity", Service::cognito_identity },
- { "cognito-idp", Service::cognito_idp },
- { "cognito-sync", Service::cognito_sync },
- { "config", Service::config },
- { "datapipeline", Service::datapipeline },
- { "devicefarm", Service::devicefarm },
- { "directconnect", Service::directconnect },
- { "dms", Service::dms },
- { "ds", Service::ds },
- { "dynamodb", Service::dynamodb },
- { "ec2", Service::ec2 },
- { "ecr", Service::ecr },
- { "ecs", Service::ecs },
- { "elasticache", Service::elasticache },
- { "elasticbeanstalk", Service::elasticbeanstalk },
- { "elasticfilesystem", Service::elasticfilesystem },
- { "elasticloadbalancing", Service::elasticloadbalancing },
- { "elasticmapreduce", Service::elasticmapreduce },
- { "elastictranscoder", Service::elastictranscoder },
- { "es", Service::es },
- { "events", Service::events },
- { "firehose", Service::firehose },
- { "gamelift", Service::gamelift },
- { "glacier", Service::glacier },
- { "health", Service::health },
- { "iam", Service::iam },
- { "importexport", Service::importexport },
- { "inspector", Service::inspector },
- { "iot", Service::iot },
- { "kinesis", Service::kinesis },
- { "kinesisanalytics", Service::kinesisanalytics },
- { "kms", Service::kms },
- { "lambda", Service::lambda },
- { "lightsail", Service::lightsail },
- { "logs", Service::logs },
- { "machinelearning", Service::machinelearning },
- { "mobileanalytics", Service::mobileanalytics },
- { "mobilehub", Service::mobilehub },
- { "opsworks", Service::opsworks },
- { "opsworks-cm", Service::opsworks_cm },
- { "polly", Service::polly },
- { "rds", Service::rds },
- { "redshift", Service::redshift },
- { "route53", Service::route53 },
- { "route53domains", Service::route53domains },
- { "s3", Service::s3 },
- { "sdb", Service::sdb },
- { "servicecatalog", Service::servicecatalog },
- { "ses", Service::ses },
- { "sns", Service::sns },
- { "sqs", Service::sqs },
- { "ssm", Service::ssm },
- { "states", Service::states },
- { "storagegateway", Service::storagegateway },
- { "sts", Service::sts },
- { "support", Service::support },
- { "swf", Service::swf },
- { "trustedadvisor", Service::trustedadvisor },
- { "waf", Service::waf },
- { "workmail", Service::workmail },
- { "workspaces", Service::workspaces }};
-
- if (wildcards && s == "*") {
- return Service::wildcard;
- }
-
- auto i = services.find(s);
- if (i == services.end()) {
- return boost::none;
- } else {
- return i->second;
- }
-}
-}
-
-ARN::ARN(const rgw_obj& o)
- : partition(Partition::aws),
- service(Service::s3),
- region(),
- account(o.bucket.tenant),
- resource(o.bucket.name)
-{
- resource.push_back('/');
- resource.append(o.key.name);
-}
-
-ARN::ARN(const rgw_bucket& b)
- : partition(Partition::aws),
- service(Service::s3),
- region(),
- account(b.tenant),
- resource(b.name) { }
-
-ARN::ARN(const rgw_bucket& b, const string& o)
- : partition(Partition::aws),
- service(Service::s3),
- region(),
- account(b.tenant),
- resource(b.name) {
- resource.push_back('/');
- resource.append(o);
-}
-
-ARN::ARN(const string& resource_name, const string& type, const string& tenant, bool has_path)
- : partition(Partition::aws),
- service(Service::iam),
- region(),
- account(tenant),
- resource(type) {
- if (! has_path)
- resource.push_back('/');
- resource.append(resource_name);
-}
-
-boost::optional<ARN> ARN::parse(const string& s, bool wildcards) {
- static const regex rx_wild("arn:([^:]*):([^:]*):([^:]*):([^:]*):([^:]*)",
- std::regex_constants::ECMAScript |
- std::regex_constants::optimize);
- static const regex rx_no_wild(
- "arn:([^:*]*):([^:*]*):([^:*]*):([^:*]*):(.*)",
- std::regex_constants::ECMAScript |
- std::regex_constants::optimize);
-
- smatch match;
-
- if ((s == "*") && wildcards) {
- return ARN(Partition::wildcard, Service::wildcard, "*", "*", "*");
- } else if (regex_match(s, match, wildcards ? rx_wild : rx_no_wild) &&
- match.size() == 6) {
- if (auto p = to_partition(match[1], wildcards)) {
- if (auto s = to_service(match[2], wildcards)) {
- return ARN(*p, *s, match[3], match[4], match[5]);
- }
- }
- }
- return boost::none;
-}
-
-string ARN::to_string() const {
- string s;
-
- if (partition == Partition::aws) {
- s.append("aws:");
- } else if (partition == Partition::aws_cn) {
- s.append("aws-cn:");
- } else if (partition == Partition::aws_us_gov) {
- s.append("aws-us-gov:");
- } else {
- s.append("*:");
- }
-
- static const unordered_map<Service, string> services = {
- { Service::acm, "acm" },
- { Service::apigateway, "apigateway" },
- { Service::appstream, "appstream" },
- { Service::artifact, "artifact" },
- { Service::autoscaling, "autoscaling" },
- { Service::aws_marketplace, "aws-marketplace" },
- { Service::aws_marketplace_management, "aws-marketplace-management" },
- { Service::aws_portal, "aws-portal" },
- { Service::cloudformation, "cloudformation" },
- { Service::cloudfront, "cloudfront" },
- { Service::cloudhsm, "cloudhsm" },
- { Service::cloudsearch, "cloudsearch" },
- { Service::cloudtrail, "cloudtrail" },
- { Service::cloudwatch, "cloudwatch" },
- { Service::codebuild, "codebuild" },
- { Service::codecommit, "codecommit" },
- { Service::codedeploy, "codedeploy" },
- { Service::codepipeline, "codepipeline" },
- { Service::cognito_identity, "cognito-identity" },
- { Service::cognito_idp, "cognito-idp" },
- { Service::cognito_sync, "cognito-sync" },
- { Service::config, "config" },
- { Service::datapipeline, "datapipeline" },
- { Service::devicefarm, "devicefarm" },
- { Service::directconnect, "directconnect" },
- { Service::dms, "dms" },
- { Service::ds, "ds" },
- { Service::dynamodb, "dynamodb" },
- { Service::ec2, "ec2" },
- { Service::ecr, "ecr" },
- { Service::ecs, "ecs" },
- { Service::elasticache, "elasticache" },
- { Service::elasticbeanstalk, "elasticbeanstalk" },
- { Service::elasticfilesystem, "elasticfilesystem" },
- { Service::elasticloadbalancing, "elasticloadbalancing" },
- { Service::elasticmapreduce, "elasticmapreduce" },
- { Service::elastictranscoder, "elastictranscoder" },
- { Service::es, "es" },
- { Service::events, "events" },
- { Service::firehose, "firehose" },
- { Service::gamelift, "gamelift" },
- { Service::glacier, "glacier" },
- { Service::health, "health" },
- { Service::iam, "iam" },
- { Service::importexport, "importexport" },
- { Service::inspector, "inspector" },
- { Service::iot, "iot" },
- { Service::kinesis, "kinesis" },
- { Service::kinesisanalytics, "kinesisanalytics" },
- { Service::kms, "kms" },
- { Service::lambda, "lambda" },
- { Service::lightsail, "lightsail" },
- { Service::logs, "logs" },
- { Service::machinelearning, "machinelearning" },
- { Service::mobileanalytics, "mobileanalytics" },
- { Service::mobilehub, "mobilehub" },
- { Service::opsworks, "opsworks" },
- { Service::opsworks_cm, "opsworks-cm" },
- { Service::polly, "polly" },
- { Service::rds, "rds" },
- { Service::redshift, "redshift" },
- { Service::route53, "route53" },
- { Service::route53domains, "route53domains" },
- { Service::s3, "s3" },
- { Service::sdb, "sdb" },
- { Service::servicecatalog, "servicecatalog" },
- { Service::ses, "ses" },
- { Service::sns, "sns" },
- { Service::sqs, "sqs" },
- { Service::ssm, "ssm" },
- { Service::states, "states" },
- { Service::storagegateway, "storagegateway" },
- { Service::sts, "sts" },
- { Service::support, "support" },
- { Service::swf, "swf" },
- { Service::trustedadvisor, "trustedadvisor" },
- { Service::waf, "waf" },
- { Service::workmail, "workmail" },
- { Service::workspaces, "workspaces" }};
-
- auto i = services.find(service);
- if (i != services.end()) {
- s.append(i->second);
- } else {
- s.push_back('*');
- }
- s.push_back(':');
-
- s.append(region);
- s.push_back(':');
-
- s.append(account);
- s.push_back(':');
-
- s.append(resource);
-
- return s;
-}
-bool operator ==(const ARN& l, const ARN& r) {
- return ((l.partition == r.partition) &&
- (l.service == r.service) &&
- (l.region == r.region) &&
- (l.account == r.account) &&
- (l.resource == r.resource));
-}
-bool operator <(const ARN& l, const ARN& r) {
- return ((l.partition < r.partition) ||
- (l.service < r.service) ||
- (l.region < r.region) ||
- (l.account < r.account) ||
- (l.resource < r.resource));
-}
-
-// The candidate is not allowed to have wildcards. The only way to
-// do that sanely would be to use unification rather than matching.
-bool ARN::match(const ARN& candidate) const {
- if ((candidate.partition == Partition::wildcard) ||
- (partition != candidate.partition && partition
- != Partition::wildcard)) {
- return false;
- }
-
- if ((candidate.service == Service::wildcard) ||
- (service != candidate.service && service != Service::wildcard)) {
- return false;
- }
-
- if (!match_policy(region, candidate.region, MATCH_POLICY_ARN)) {
- return false;
- }
-
- if (!match_policy(account, candidate.account, MATCH_POLICY_ARN)) {
- return false;
- }
-
- if (!match_policy(resource, candidate.resource, MATCH_POLICY_RESOURCE)) {
- return false;
- }
-
- return true;
-}
static const actpair actpairs[] =
{{ "s3:AbortMultipartUpload", s3AbortMultipartUpload },
#include "rgw_basic_types.h"
#include "rgw_iam_policy_keywords.h"
#include "rgw_string.h"
+#include "rgw_arn.h"
class RGWRados;
namespace rgw {
using Environment = boost::container::flat_map<std::string, std::string>;
-enum struct Partition {
- aws, aws_cn, aws_us_gov, wildcard
- // If we wanted our own ARNs for principal type unique to us
- // (maybe to integrate better with Swift) or for anything else we
- // provide that doesn't map onto S3, we could add an 'rgw'
- // partition type.
-};
-
-enum struct Service {
- apigateway, appstream, artifact, autoscaling, aws_portal, acm,
- cloudformation, cloudfront, cloudhsm, cloudsearch, cloudtrail,
- cloudwatch, events, logs, codebuild, codecommit, codedeploy,
- codepipeline, cognito_idp, cognito_identity, cognito_sync,
- config, datapipeline, dms, devicefarm, directconnect,
- ds, dynamodb, ec2, ecr, ecs, ssm, elasticbeanstalk, elasticfilesystem,
- elasticloadbalancing, elasticmapreduce, elastictranscoder, elasticache,
- es, gamelift, glacier, health, iam, importexport, inspector, iot,
- kms, kinesisanalytics, firehose, kinesis, lambda, lightsail,
- machinelearning, aws_marketplace, aws_marketplace_management,
- mobileanalytics, mobilehub, opsworks, opsworks_cm, polly,
- redshift, rds, route53, route53domains, sts, servicecatalog,
- ses, sns, sqs, s3, swf, sdb, states, storagegateway, support,
- trustedadvisor, waf, workmail, workspaces, wildcard
-};
-
-struct ARN {
- Partition partition;
- Service service;
- std::string region;
- // Once we refit tenant, we should probably use that instead of a
- // string.
- std::string account;
- std::string resource;
-
- ARN()
- : partition(Partition::wildcard), service(Service::wildcard) {}
- ARN(Partition partition, Service service, std::string region,
- std::string account, std::string resource)
- : partition(partition), service(service), region(std::move(region)),
- account(std::move(account)), resource(std::move(resource)) {}
- ARN(const rgw_obj& o);
- ARN(const rgw_bucket& b);
- ARN(const rgw_bucket& b, const std::string& o);
- ARN(const string& resource_name, const string& type, const string& tenant, bool has_path=false);
-
- static boost::optional<ARN> parse(const std::string& s,
- bool wildcard = false);
- std::string to_string() const;
-
- // `this` is the pattern
- bool match(const ARN& candidate) const;
-};
-
-inline std::string to_string(const ARN& a) {
- return a.to_string();
-}
-
-inline std::ostream& operator <<(std::ostream& m, const ARN& a) {
- return m << to_string(a);
-}
-
-bool operator ==(const ARN& l, const ARN& r);
-bool operator <(const ARN& l, const ARN& r);
-
using Address = std::bitset<128>;
struct MaskedIP {
bool v6;
}
}
-namespace std {
-template<>
-struct hash<::rgw::IAM::Service> {
- size_t operator()(const ::rgw::IAM::Service& s) const noexcept {
- // Invoke a default-constructed hash object for int.
- return hash<int>()(static_cast<int>(s));
- }
-};
-}
-
#endif
f->open_object_section("auth");
f->open_object_section("passwordCredentials");
encode_json("username", to_string(conf.get_admin_user()), f);
- encode_json("password", to_string(conf.get_admin_password()), f);
+ encode_json("password", conf.get_admin_password(), f);
f->close_section();
encode_json("tenantName", to_string(conf.get_admin_tenant()), f);
f->close_section();
encode_json("name", to_string(conf.get_admin_domain()), f);
f->close_section();
encode_json("name", to_string(conf.get_admin_user()), f);
- encode_json("password", to_string(conf.get_admin_password()), f);
+ encode_json("password", conf.get_admin_password(), f);
f->close_section();
f->close_section();
f->close_section();
#include "rgw_frontend.h"
#include "rgw_http_client_curl.h"
#include "rgw_perf_counters.h"
+#ifdef WITH_RADOSGW_AMQP_ENDPOINT
+#include "rgw_amqp.h"
+#endif
#if defined(WITH_RADOSGW_BEAST_FRONTEND)
#include "rgw_asio_frontend.h"
#endif /* WITH_RADOSGW_BEAST_FRONTEND */
// S3 website mode is a specialization of S3
const bool s3website_enabled = apis_map.count("s3website") > 0;
const bool sts_enabled = apis_map.count("sts") > 0;
+ const bool pubsub_enabled = apis_map.count("pubsub") > 0;
// Swift API entrypoint could placed in the root instead of S3
const bool swift_at_root = g_conf()->rgw_swift_url_prefix == "/";
if (apis_map.count("s3") > 0 || s3website_enabled) {
if (! swift_at_root) {
rest.register_default_mgr(set_logging(rest_filter(store, RGW_REST_S3,
- new RGWRESTMgr_S3(s3website_enabled, sts_enabled))));
+ new RGWRESTMgr_S3(s3website_enabled, sts_enabled, pubsub_enabled))));
} else {
derr << "Cannot have the S3 or S3 Website enabled together with "
<< "Swift API placed in the root of hierarchy" << dendl;
}
}
+ if (pubsub_enabled) {
+#ifdef WITH_RADOSGW_AMQP_ENDPOINT
+ if (!rgw::amqp::init(cct.get())) {
+ dout(1) << "ERROR: failed to initialize AMQP manager" << dendl;
+ }
+#endif
+ }
+
if (apis_map.count("swift") > 0) {
RGWRESTMgr_SWIFT* const swift_resource = new RGWRESTMgr_SWIFT;
rgw_shutdown_resolver();
rgw_http_client_cleanup();
rgw::curl::cleanup_curl();
+#ifdef WITH_RADOSGW_AMQP_ENDPOINT
+ rgw::amqp::shutdown();
+#endif
rgw_perf_stop(g_ceph_context);
--- /dev/null
+// -*- mode:C++; tab-width:8; c-basic-offset:2; indent-tabs-mode:t -*-
+// vim: ts=8 sw=2 smarttab
+
+#include "rgw_notify.h"
+#include <memory>
+#include <boost/algorithm/hex.hpp>
+#include "rgw_pubsub.h"
+#include "rgw_pubsub_push.h"
+#include "rgw_perf_counters.h"
+#include "common/dout.h"
+
+#define dout_subsys ceph_subsys_rgw
+
+namespace rgw::notify {
+
+// populate record from request
+void populate_record_from_request(const req_state *s,
+ const ceph::real_time& mtime,
+ const std::string& etag,
+ EventType event_type,
+ rgw_pubsub_s3_record& record) {
+ record.eventVersion = "2.1";
+ record.eventSource = "aws:s3";
+ record.eventTime = mtime;
+ record.eventName = to_string(event_type);
+ record.userIdentity = s->user->user_id.id; // user that triggered the change
+ record.sourceIPAddress = ""; // IP address of client that triggered the change: TODO
+ record.x_amz_request_id = s->req_id; // request ID of the original change
+ record.x_amz_id_2 = s->host_id; // RGW on which the change was made
+ record.s3SchemaVersion = "1.0";
+ // configurationId is filled from subscription configuration
+ record.bucket_name = s->bucket_name;
+ record.bucket_ownerIdentity = s->bucket_owner.get_id().id;
+ record.bucket_arn = to_string(rgw::ARN(s->bucket));
+ record.object_key = s->object.name;
+ record.object_size = s->obj_size;
+ record.object_etag = etag;
+ record.object_versionId = s->object.instance;
+ // use timestamp as per key sequence id (hex encoded)
+ const utime_t ts(real_clock::now());
+ boost::algorithm::hex((const char*)&ts, (const char*)&ts + sizeof(utime_t),
+ std::back_inserter(record.object_sequencer));
+ // event ID is rgw extension (not in the S3 spec), used for acking the event
+ // same format is used in both S3 compliant and Ceph specific events
+ // not used in case of push-only mode
+ record.id = "";
+ record.bucket_id = s->bucket.bucket_id;
+ // pass meta data
+ record.x_meta_map = s->info.x_meta_map;
+}
+
+bool match(const rgw_pubsub_topic_filter& filter, const req_state* s, EventType event) {
+ if (!::match(filter.events, event)) {
+ return false;
+ }
+ if (!::match(filter.s3_filter.key_filter, s->object.name)) {
+ return false;
+ }
+ if (!::match(filter.s3_filter.metadata_filter, s->info.x_meta_map)) {
+ return false;
+ }
+ return true;
+}
+
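+// fetch the topics configured on the bucket, filter them by event type, object key
+// and metadata, and push the S3 record to each matching endpoint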
+int publish(const req_state* s,
+ const ceph::real_time& mtime,
+ const std::string& etag,
+ EventType event_type,
+ RGWRados* store) {
+ RGWUserPubSub ps_user(store, s->user->user_id);
+ RGWUserPubSub::Bucket ps_bucket(&ps_user, s->bucket);
+ rgw_pubsub_bucket_topics bucket_topics;
+ auto rc = ps_bucket.get_topics(&bucket_topics);
+ if (rc < 0) {
+ // failed to fetch bucket topics
+ return rc;
+ }
+ rgw_pubsub_s3_record record;
+ populate_record_from_request(s, mtime, etag, event_type, record);
+ bool event_handled = false;
+ bool event_should_be_handled = false;
+ for (const auto& bucket_topic : bucket_topics.topics) {
+ const rgw_pubsub_topic_filter& topic_filter = bucket_topic.second;
+ const rgw_pubsub_topic& topic_cfg = topic_filter.topic;
+ if (!match(topic_filter, s, event_type)) {
+ // topic does not apply to req_state
+ continue;
+ }
+ event_should_be_handled = true;
+ record.configurationId = topic_filter.s3_id;
+ ldout(s->cct, 20) << "notification: '" << topic_filter.s3_id <<
+ "' on topic: '" << topic_cfg.dest.arn_topic <<
+ "' and bucket: '" << s->bucket.name <<
+ "' (unique topic: '" << topic_cfg.name <<
+ "') apply to event of type: '" << to_string(event_type) << "'" << dendl;
+ try {
+ // TODO add endpoint LRU cache
+ const auto push_endpoint = RGWPubSubEndpoint::create(topic_cfg.dest.push_endpoint,
+ topic_cfg.dest.arn_topic,
+ RGWHTTPArgs(topic_cfg.dest.push_endpoint_args),
+ s->cct);
+ const std::string push_endpoint_str = push_endpoint->to_str();
+ ldout(s->cct, 20) << "push endpoint created: " << push_endpoint_str << dendl;
+ auto rc = push_endpoint->send_to_completion_async(s->cct, record, s->yield);
+ if (rc < 0) {
+ // bail out on first error
+ // TODO: add conf for bail out policy
+ ldout(s->cct, 1) << "push to endpoint " << push_endpoint_str << " failed, with error: " << rc << dendl;
+ if (perfcounter) perfcounter->inc(l_rgw_pubsub_push_failed);
+ return rc;
+ }
+ if (perfcounter) perfcounter->inc(l_rgw_pubsub_push_ok);
+      ldout(s->cct, 20) << "successful push to endpoint " << push_endpoint_str << dendl;
+ event_handled = true;
+ } catch (const RGWPubSubEndpoint::configuration_error& e) {
+ ldout(s->cct, 1) << "ERROR: failed to create push endpoint: "
+ << topic_cfg.dest.push_endpoint << " due to: " << e.what() << dendl;
+ if (perfcounter) perfcounter->inc(l_rgw_pubsub_push_failed);
+ return -EINVAL;
+ }
+ }
+
+ if (event_should_be_handled) {
+ // not counting events with no notifications or events that are filtered
+ // counting a single event, regardless of the number of notifications it sends
+ if (perfcounter) perfcounter->inc(l_rgw_pubsub_event_triggered);
+ if (!event_handled) {
+ // all notifications for this event failed
+ if (perfcounter) perfcounter->inc(l_rgw_pubsub_event_lost);
+ }
+ }
+
+ return 0;
+}
+
+}
+
--- /dev/null
+// -*- mode:C++; tab-width:8; c-basic-offset:2; indent-tabs-mode:t -*-
+// vim: ts=8 sw=2 smarttab
+
+#pragma once
+
+#include <string>
+#include "common/ceph_time.h"
+#include "rgw_notify_event_type.h"
+
+// forward declarations
+class RGWRados;
+class req_state;
+
+namespace rgw::notify {
+
+// publish a notification about the given request to all matching bucket notification topics;
+// returns 0 on success (or when no topic matches), negative error code otherwise
+int publish(const req_state* s,
+ const ceph::real_time& mtime,
+ const std::string& etag,
+ EventType event_type,
+ RGWRados* store);
+
+}
+
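+// typical call site (sketch, mirroring the usage added to rgw_op.cc below):
+//
+//   const auto ret = rgw::notify::publish(s, mtime, etag, rgw::notify::ObjectCreatedPut, store);
+//   if (ret < 0) {
+//     // publishing is best effort; the triggering op is not failed because of it
+//   }
+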
--- /dev/null
+// -*- mode:C++; tab-width:8; c-basic-offset:2; indent-tabs-mode:t -*-
+// vim: ts=8 sw=2 smarttab
+
+#include "rgw_notify_event_type.h"
+#include "include/str_list.h"
+
+namespace rgw::notify {
+
+ std::string to_string(EventType t) {
+ switch (t) {
+ case ObjectCreated:
+ return "s3:ObjectCreated:*";
+ case ObjectCreatedPut:
+ return "s3:ObjectCreated:Put";
+ case ObjectCreatedPost:
+ return "s3:ObjectCreated:Post";
+ case ObjectCreatedCopy:
+ return "s3:ObjectCreated:Copy";
+ case ObjectCreatedCompleteMultipartUpload:
+ return "s3:ObjectCreated:CompleteMultipartUpload";
+ case ObjectRemoved:
+ return "s3:ObjectRemoved:*";
+ case ObjectRemovedDelete:
+ return "s3:ObjectRemoved:Delete";
+ case ObjectRemovedDeleteMarkerCreated:
+ return "s3:ObjectRemoved:DeleteMarkerCreated";
+ case UnknownEvent:
+ return "s3:UnknownEvet";
+ }
+ return "s3:UnknownEvent";
+ }
+
+ std::string to_ceph_string(EventType t) {
+ switch (t) {
+ case ObjectCreated:
+ case ObjectCreatedPut:
+ case ObjectCreatedPost:
+ case ObjectCreatedCopy:
+ case ObjectCreatedCompleteMultipartUpload:
+ return "OBJECT_CREATE";
+ case ObjectRemovedDelete:
+ return "OBJECT_DELETE";
+ case ObjectRemovedDeleteMarkerCreated:
+ return "DELETE_MARKER_CREATE";
+ case ObjectRemoved:
+ case UnknownEvent:
+ return "UNKNOWN_EVENT";
+ }
+ return "UNKNOWN_EVENT";
+ }
+
+ EventType from_string(const std::string& s) {
+ if (s == "s3:ObjectCreated:*" || s == "OBJECT_CREATE")
+ return ObjectCreated;
+ if (s == "s3:ObjectCreated:Put")
+ return ObjectCreatedPut;
+ if (s == "s3:ObjectCreated:Post")
+ return ObjectCreatedPost;
+ if (s == "s3:ObjectCreated:Copy")
+ return ObjectCreatedCopy;
+ if (s == "s3:ObjectCreated:CompleteMultipartUpload")
+ return ObjectCreatedCompleteMultipartUpload;
+ if (s == "s3:ObjectRemoved:*")
+ return ObjectRemoved;
+ if (s == "s3:ObjectRemoved:Delete" || s == "OBJECT_DELETE")
+ return ObjectRemovedDelete;
+ if (s == "s3:ObjectRemoved:DeleteMarkerCreated" || s == "DELETE_MARKER_CREATE")
+ return ObjectRemovedDeleteMarkerCreated;
+ return UnknownEvent;
+ }
+
+bool operator==(EventType lhs, EventType rhs) {
+ return lhs & rhs;
+}
+
+void from_string_list(const std::string& string_list, EventTypeList& event_list) {
+ event_list.clear();
+ ceph::for_each_substr(string_list, ",", [&event_list] (auto token) {
+ event_list.push_back(rgw::notify::from_string(std::string(token.begin(), token.end())));
+ });
+}
+}
--- /dev/null
+// -*- mode:C++; tab-width:8; c-basic-offset:2; indent-tabs-mode:t -*-
+// vim: ts=8 sw=2 smarttab
+
+#pragma once
+#include <string>
+#include <vector>
+
+namespace rgw::notify {
+ enum EventType {
+ ObjectCreated = 0xF,
+ ObjectCreatedPut = 0x1,
+ ObjectCreatedPost = 0x2,
+ ObjectCreatedCopy = 0x4,
+ ObjectCreatedCompleteMultipartUpload = 0x8,
+ ObjectRemoved = 0xF0,
+ ObjectRemovedDelete = 0x10,
+ ObjectRemovedDeleteMarkerCreated = 0x20,
+ UnknownEvent = 0x100
+ };
+
+ using EventTypeList = std::vector<EventType>;
+
+ // two event types are considered equal if their bits intersect
+ bool operator==(EventType lhs, EventType rhs);
+
+ std::string to_string(EventType t);
+
+ std::string to_ceph_string(EventType t);
+
+ EventType from_string(const std::string& s);
+
+ // create a vector of event types from comma separated list of event types
+ void from_string_list(const std::string& string_list, EventTypeList& event_list);
+}
+
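+// intended semantics of the definitions above (illustrative):
+//
+//   ObjectCreatedPut == ObjectCreated        // true:  0x1 & 0xF  != 0
+//   ObjectCreatedPut == ObjectRemovedDelete  // false: 0x1 & 0x10 == 0
+//
+//   EventTypeList l;
+//   from_string_list("s3:ObjectCreated:*,s3:ObjectRemoved:Delete", l);
+//   // l now holds {ObjectCreated, ObjectRemovedDelete}
+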
#include "rgw_putobj_processor.h"
#include "rgw_crypt.h"
#include "rgw_perf_counters.h"
+#include "rgw_notify.h"
+#include "rgw_notify_event_type.h"
#include "services/svc_zone.h"
#include "services/svc_quota.h"
using boost::optional;
using boost::none;
-using rgw::IAM::ARN;
+using rgw::ARN;
using rgw::IAM::Effect;
using rgw::IAM::Policy;
int RGWListBuckets::verify_permission()
{
- rgw::IAM::Partition partition = rgw::IAM::Partition::aws;
- rgw::IAM::Service service = rgw::IAM::Service::s3;
+ rgw::Partition partition = rgw::Partition::aws;
+ rgw::Service service = rgw::Service::s3;
if (!verify_user_permission(this, s, ARN(partition, service, "", s->user->user_id.tenant, "*"), rgw::IAM::s3ListAllMyBuckets)) {
return -EACCES;
cs_object.instance.empty() ?
rgw::IAM::s3GetObject :
rgw::IAM::s3GetObjectVersion,
- rgw::IAM::ARN(obj)); usr_policy_res == Effect::Deny)
+ rgw::ARN(obj)); usr_policy_res == Effect::Deny)
return -EACCES;
else if (usr_policy_res == Effect::Allow)
break;
cs_object.instance.empty() ?
rgw::IAM::s3GetObject :
rgw::IAM::s3GetObjectVersion,
- rgw::IAM::ARN(obj));
+ rgw::ARN(obj));
}
if (e == Effect::Deny) {
return -EACCES;
return;
}
}
+
+ // send request to notification manager
+ const auto ret = rgw::notify::publish(s, mtime, etag, rgw::notify::ObjectCreatedPut, store);
+ if (ret < 0) {
+ ldpp_dout(this, 5) << "WARNING: publishing notification failed, with error: " << ret << dendl;
+ // TODO: add a conf option to make the send a blocking coroutine and reply with an error if sending fails
+ // this should be a global conf (probably returning a different handler)
+ // so we don't need to read the configured values before we perform it
+ }
}
int RGWPostObj::verify_permission()
return;
}
} while (is_next_file_to_upload());
+
+ const auto ret = rgw::notify::publish(s, ceph::real_clock::now(), etag, rgw::notify::ObjectCreatedPost, store);
+ if (ret < 0) {
+ ldpp_dout(this, 5) << "WARNING: publishing notification failed, with error: " << ret << dendl;
+ // TODO: add a conf option to make the send a blocking coroutine and reply with an error if sending fails
+ // this should be a global conf (probably returning a different handler)
+ // so we don't need to read the configured values before we perform it
+ }
}
} else {
op_ret = -EINVAL;
}
+
+ const auto ret = rgw::notify::publish(s, ceph::real_clock::now(), attrs[RGW_ATTR_ETAG].to_str(),
+ delete_marker && s->object.instance.empty() ? rgw::notify::ObjectRemovedDeleteMarkerCreated : rgw::notify::ObjectRemovedDelete,
+ store);
+ if (ret < 0) {
+ ldpp_dout(this, 5) << "WARNING: publishing notification failed, with error: " << ret << dendl;
+ // TODO: add a conf option to make the send a blocking coroutine and reply with an error if sending fails
+ // this should be a global conf (probably returning a different handler)
+ // so we don't need to read the configured values before we perform it
+ }
}
bool RGWCopyObj::parse_copy_location(const boost::string_view& url_src,
&etag,
copy_obj_progress_cb, (void *)this
);
+
+ const auto ret = rgw::notify::publish(s, mtime, etag, rgw::notify::ObjectCreatedCopy, store);
+ if (ret < 0) {
+ ldpp_dout(this, 5) << "WARNING: publishing notification failed, with error: " << ret << dendl;
+ // TODO: add a conf option to make the send a blocking coroutine and reply with an error if sending fails
+ // this should be a global conf (probably returning a different handler)
+ // so we don't need to read the configured values before we perform it
+ }
}
int RGWGetACLs::verify_permission()
op_ret = obj_op.write_meta(bl.length(), 0, attrs);
} while (op_ret == -EEXIST);
+
+ const auto ret = rgw::notify::publish(s, ceph::real_clock::now(), attrs[RGW_ATTR_ETAG].to_str(), rgw::notify::ObjectCreatedPost, store);
+ if (ret < 0) {
+ ldpp_dout(this, 5) << "WARNING: publishing notification failed, with error: " << ret << dendl;
+ // TODO: add a conf option to make the send a blocking coroutine and reply with an error if sending fails
+ // this should be a global conf (probably returning a different handler)
+ // so we don't need to read the configured values before we perform it
+ }
}
int RGWCompleteMultipart::verify_permission()
} else {
ldpp_dout(this, 0) << "WARNING: failed to remove object " << meta_obj << dendl;
}
+
+ const auto ret = rgw::notify::publish(s, ceph::real_clock::now(), etag, rgw::notify::ObjectCreatedCompleteMultipartUpload, store);
+ if (ret < 0) {
+ ldpp_dout(this, 5) << "WARNING: publishing notification failed, with error: " << ret << dendl;
+ // TODO: add a conf option to make the send a blocking coroutine and reply with an error if sending fails
+ // this should be a global conf (probably returning a different handler)
+ // so we don't need to read the configured values before we perform it
+ }
}
int RGWCompleteMultipart::MPSerializer::try_lock(
plb.add_u64_counter(l_rgw_pubsub_push_ok, "pubsub_push_ok", "Pubsub events pushed to an endpoint");
plb.add_u64_counter(l_rgw_pubsub_push_failed, "pubsub_push_failed", "Pubsub events failed to be pushed to an endpoint");
plb.add_u64(l_rgw_pubsub_push_pending, "pubsub_push_pending", "Pubsub events pending reply from endpoint");
+ plb.add_u64_counter(l_rgw_pubsub_missing_conf, "pubsub_missing_conf", "Pubsub events could not be handled because of missing configuration");
perfcounter = plb.create_perf_counters();
cct->get_perfcounters_collection()->add(perfcounter);
l_rgw_pubsub_push_ok,
l_rgw_pubsub_push_failed,
l_rgw_pubsub_push_pending,
+ l_rgw_pubsub_missing_conf,
l_rgw_last,
};
+// -*- mode:C++; tab-width:8; c-basic-offset:2; indent-tabs-mode:t -*-
+// vim: ts=8 sw=2 smarttab ft=cpp
+
+#include "services/svc_zone.h"
#include "rgw_b64.h"
#include "rgw_rados.h"
#include "rgw_pubsub.h"
#include "rgw_tools.h"
+#include "rgw_xml.h"
+#include "rgw_arn.h"
+#include "rgw_pubsub_push.h"
+#include "rgw_rados.h"
+#include <regex>
+#include <algorithm>
#define dout_subsys ceph_subsys_rgw
+bool rgw_s3_key_filter::decode_xml(XMLObj* obj) {
+ XMLObjIter iter = obj->find("FilterRule");
+ XMLObj *o;
+
+ const auto throw_if_missing = true;
+ auto prefix_not_set = true;
+ auto suffix_not_set = true;
+ auto regex_not_set = true;
+ std::string name;
+
+ while ((o = iter.get_next())) {
+ RGWXMLDecoder::decode_xml("Name", name, o, throw_if_missing);
+ if (name == "prefix" && prefix_not_set) {
+ prefix_not_set = false;
+ RGWXMLDecoder::decode_xml("Value", prefix_rule, o, throw_if_missing);
+ } else if (name == "suffix" && suffix_not_set) {
+ suffix_not_set = false;
+ RGWXMLDecoder::decode_xml("Value", suffix_rule, o, throw_if_missing);
+ } else if (name == "regex" && regex_not_set) {
+ regex_not_set = false;
+ RGWXMLDecoder::decode_xml("Value", regex_rule, o, throw_if_missing);
+ } else {
+ throw RGWXMLDecoder::err("invalid/duplicate S3Key filter rule name: '" + name + "'");
+ }
+ }
+ return true;
+}
+
+void rgw_s3_key_filter::dump_xml(Formatter *f) const {
+ if (!prefix_rule.empty()) {
+ f->open_object_section("FilterRule");
+ ::encode_xml("Name", "prefix", f);
+ ::encode_xml("Value", prefix_rule, f);
+ f->close_section();
+ }
+ if (!suffix_rule.empty()) {
+ f->open_object_section("FilterRule");
+ ::encode_xml("Name", "suffix", f);
+ ::encode_xml("Value", suffix_rule, f);
+ f->close_section();
+ }
+ if (!regex_rule.empty()) {
+ f->open_object_section("FilterRule");
+ ::encode_xml("Name", "regex", f);
+ ::encode_xml("Value", regex_rule, f);
+ f->close_section();
+ }
+}
+
+bool rgw_s3_key_filter::has_content() const {
+ return !(prefix_rule.empty() && suffix_rule.empty() && regex_rule.empty());
+}
+
+bool rgw_s3_metadata_filter::decode_xml(XMLObj* obj) {
+ metadata.clear();
+ XMLObjIter iter = obj->find("FilterRule");
+ XMLObj *o;
+
+ const auto throw_if_missing = true;
+
+ std::string key;
+ std::string value;
+
+ while ((o = iter.get_next())) {
+ RGWXMLDecoder::decode_xml("Name", key, o, throw_if_missing);
+ RGWXMLDecoder::decode_xml("Value", value, o, throw_if_missing);
+ metadata.emplace(key, value);
+ }
+ return true;
+}
+
+void rgw_s3_metadata_filter::dump_xml(Formatter *f) const {
+ for (const auto& key_value : metadata) {
+ f->open_object_section("FilterRule");
+ ::encode_xml("Name", key_value.first, f);
+ ::encode_xml("Value", key_value.second, f);
+ f->close_section();
+ }
+}
+
+bool rgw_s3_metadata_filter::has_content() const {
+ return !metadata.empty();
+}
+
+bool rgw_s3_filter::decode_xml(XMLObj* obj) {
+ RGWXMLDecoder::decode_xml("S3Key", key_filter, obj);
+ RGWXMLDecoder::decode_xml("S3Metadata", metadata_filter, obj);
+ return true;
+}
+
+void rgw_s3_filter::dump_xml(Formatter *f) const {
+ if (key_filter.has_content()) {
+ ::encode_xml("S3Key", key_filter, f);
+ }
+ if (metadata_filter.has_content()) {
+ ::encode_xml("S3Metadata", metadata_filter, f);
+ }
+}
+
+bool rgw_s3_filter::has_content() const {
+ return key_filter.has_content() ||
+ metadata_filter.has_content();
+}
+
+bool match(const rgw_s3_key_filter& filter, const std::string& key) {
+ const auto key_size = key.size();
+ const auto prefix_size = filter.prefix_rule.size();
+ if (prefix_size != 0) {
+ // prefix rule exists
+ if (prefix_size > key_size) {
+ // if prefix is longer than key, we fail
+ return false;
+ }
+ if (!std::equal(filter.prefix_rule.begin(), filter.prefix_rule.end(), key.begin())) {
+ return false;
+ }
+ }
+ const auto suffix_size = filter.suffix_rule.size();
+ if (suffix_size != 0) {
+ // suffix rule exists
+ if (suffix_size > key_size) {
+ // if suffix is longer than key, we fail
+ return false;
+ }
+ if (!std::equal(filter.suffix_rule.begin(), filter.suffix_rule.end(), (key.end() - suffix_size))) {
+ return false;
+ }
+ }
+ if (!filter.regex_rule.empty()) {
+ // TODO add regex caching in the filter
+ const std::regex base_regex(filter.regex_rule);
+ if (!std::regex_match(key, base_regex)) {
+ return false;
+ }
+ }
+ return true;
+}
+
+bool match(const rgw_s3_metadata_filter& filter, const Metadata& metadata) {
+ // all filter pairs must exist with the same value in the object's metadata
+ // object metadata may include items not in the filter
+ return std::includes(metadata.begin(), metadata.end(), filter.metadata.begin(), filter.metadata.end());
+}
+
+bool match(const rgw::notify::EventTypeList& events, rgw::notify::EventType event) {
+ // if event list exists, and none of the events in the list matches the event type, filter the message
+ if (!events.empty() && std::find(events.begin(), events.end(), event) == events.end()) {
+ return false;
+ }
+ return true;
+}
+
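+// illustrative examples for the matchers above (hypothetical values):
+//
+//   rgw_s3_key_filter f{"images/", ".jpg", ""};
+//   match(f, "images/cat.jpg");  // true: both prefix and suffix rules match
+//   match(f, "images/cat.png");  // false: suffix rule does not match
+//
+//   // a metadata filter {{"x-amz-meta-color", "blue"}} matches object metadata that
+//   // contains this exact key/value pair, possibly among other entries (std::includes)
+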
+void do_decode_xml_obj(rgw::notify::EventTypeList& l, const string& name, XMLObj *obj) {
+ l.clear();
+
+ XMLObjIter iter = obj->find(name);
+ XMLObj *o;
+
+ while ((o = iter.get_next())) {
+ std::string val;
+ decode_xml_obj(val, o);
+ l.push_back(rgw::notify::from_string(val));
+ }
+}
+
+bool rgw_pubsub_s3_notification::decode_xml(XMLObj *obj) {
+ const auto throw_if_missing = true;
+ RGWXMLDecoder::decode_xml("Id", id, obj, throw_if_missing);
+
+ RGWXMLDecoder::decode_xml("Topic", topic_arn, obj, throw_if_missing);
+
+ RGWXMLDecoder::decode_xml("Filter", filter, obj);
+
+ do_decode_xml_obj(events, "Event", obj);
+ if (events.empty()) {
+ // if no events are provided, we assume all events
+ events.push_back(rgw::notify::ObjectCreated);
+ events.push_back(rgw::notify::ObjectRemoved);
+ }
+ return true;
+}
+
+void rgw_pubsub_s3_notification::dump_xml(Formatter *f) const {
+ ::encode_xml("Id", id, f);
+ ::encode_xml("Topic", topic_arn.c_str(), f);
+ if (filter.has_content()) {
+ ::encode_xml("Filter", filter, f);
+ }
+ for (const auto& event : events) {
+ ::encode_xml("Event", rgw::notify::to_string(event), f);
+ }
+}
+
+bool rgw_pubsub_s3_notifications::decode_xml(XMLObj *obj) {
+ do_decode_xml_obj(list, "TopicConfiguration", obj);
+ if (list.empty()) {
+ throw RGWXMLDecoder::err("at least one 'TopicConfiguration' must exist");
+ }
+ return true;
+}
+
+rgw_pubsub_s3_notification::rgw_pubsub_s3_notification(const rgw_pubsub_topic_filter& topic_filter) :
+ id(topic_filter.s3_id), events(topic_filter.events), topic_arn(topic_filter.topic.arn), filter(topic_filter.s3_filter) {}
+
+void rgw_pubsub_s3_notifications::dump_xml(Formatter *f) const {
+ do_encode_xml("NotificationConfiguration", list, "TopicConfiguration", f);
+}
+
+void rgw_pubsub_s3_record::dump(Formatter *f) const {
+ encode_json("eventVersion", eventVersion, f);
+ encode_json("eventSource", eventSource, f);
+ encode_json("awsRegion", awsRegion, f);
+ utime_t ut(eventTime);
+ encode_json("eventTime", ut, f);
+ encode_json("eventName", eventName, f);
+ {
+ Formatter::ObjectSection s(*f, "userIdentity");
+ encode_json("principalId", userIdentity, f);
+ }
+ {
+ Formatter::ObjectSection s(*f, "requestParameters");
+ encode_json("sourceIPAddress", sourceIPAddress, f);
+ }
+ {
+ Formatter::ObjectSection s(*f, "responseElements");
+ encode_json("x-amz-request-id", x_amz_request_id, f);
+ encode_json("x-amz-id-2", x_amz_id_2, f);
+ }
+ {
+ Formatter::ObjectSection s(*f, "s3");
+ encode_json("s3SchemaVersion", s3SchemaVersion, f);
+ encode_json("configurationId", configurationId, f);
+ {
+ Formatter::ObjectSection sub_s(*f, "bucket");
+ encode_json("name", bucket_name, f);
+ {
+ Formatter::ObjectSection sub_sub_s(*f, "ownerIdentity");
+ encode_json("principalId", bucket_ownerIdentity, f);
+ }
+ encode_json("arn", bucket_arn, f);
+ encode_json("id", bucket_id, f);
+ }
+ {
+ Formatter::ObjectSection sub_s(*f, "object");
+ encode_json("key", object_key, f);
+ encode_json("size", object_size, f);
+ encode_json("etag", object_etag, f);
+ encode_json("versionId", object_versionId, f);
+ encode_json("sequencer", object_sequencer, f);
+ encode_json("metadata", x_meta_map, f);
+ }
+ }
+ encode_json("eventId", id, f);
+}
void rgw_pubsub_event::dump(Formatter *f) const
{
encode_json("id", id, f);
- encode_json("event", event, f);
+ encode_json("event", event_name, f);
utime_t ut(timestamp);
encode_json("timestamp", ut, f);
encode_json("info", info, f);
{
encode_json("user", user, f);
encode_json("name", name, f);
+ encode_json("dest", dest, f);
+ encode_json("arn", arn, f);
+}
+
+void rgw_pubsub_topic::dump_xml(Formatter *f) const
+{
+ encode_xml("User", user, f);
+ encode_xml("Name", name, f);
+ encode_xml("EndPoint", dest, f);
+ encode_xml("TopicArn", arn, f);
+}
+
+void encode_json(const char *name, const rgw::notify::EventTypeList& l, Formatter *f)
+{
+ f->open_array_section(name);
+ for (auto iter = l.cbegin(); iter != l.cend(); ++iter) {
+ f->dump_string("obj", rgw::notify::to_ceph_string(*iter));
+ }
+ f->close_section();
}
void rgw_pubsub_topic_filter::dump(Formatter *f) const
}
}
+void rgw_pubsub_user_topics::dump_xml(Formatter *f) const
+{
+ for (auto& t : topics) {
+ encode_xml("member", t.second.topic, f);
+ }
+}
+
void rgw_pubsub_sub_dest::dump(Formatter *f) const
{
encode_json("bucket_name", bucket_name, f);
encode_json("oid_prefix", oid_prefix, f);
encode_json("push_endpoint", push_endpoint, f);
+ encode_json("push_endpoint_args", push_endpoint_args, f);
+ encode_json("push_endpoint_topic", arn_topic, f);
+}
+
+void rgw_pubsub_sub_dest::dump_xml(Formatter *f) const
+{
+ encode_xml("EndpointAddress", push_endpoint, f);
+ encode_xml("EndpointArgs", push_endpoint_args, f);
+ encode_xml("EndpointTopic", arn_topic, f);
}
void rgw_pubsub_sub_config::dump(Formatter *f) const
encode_json("name", name, f);
encode_json("topic", topic, f);
encode_json("dest", dest, f);
+ encode_json("s3_id", s3_id, f);
}
int RGWUserPubSub::read_user_topics(rgw_pubsub_user_topics *result, RGWObjVersionTracker *objv_tracker)
{
int ret = read(user_meta_obj, result, objv_tracker);
- if (ret < 0 && ret != -ENOENT) {
- ldout(store->ctx(), 0) << "ERROR: failed to read topics info: ret=" << ret << dendl;
+ if (ret < 0) {
+ ldout(store->ctx(), 10) << "WARNING: failed to read topics info: ret=" << ret << dendl;
return ret;
}
return 0;
{
int ret = write(user_meta_obj, topics, objv_tracker);
if (ret < 0 && ret != -ENOENT) {
- ldout(store->ctx(), 0) << "ERROR: failed to read topics info: ret=" << ret << dendl;
+ ldout(store->ctx(), 1) << "ERROR: failed to write topics info: ret=" << ret << dendl;
return ret;
}
return 0;
{
int ret = ps->read(bucket_meta_obj, result, objv_tracker);
if (ret < 0 && ret != -ENOENT) {
- ldout(ps->store->ctx(), 0) << "ERROR: failed to read bucket topics info: ret=" << ret << dendl;
+ ldout(ps->store->ctx(), 1) << "ERROR: failed to read bucket topics info: ret=" << ret << dendl;
return ret;
}
return 0;
{
int ret = ps->write(bucket_meta_obj, topics, objv_tracker);
if (ret < 0) {
- ldout(ps->store->ctx(), 0) << "ERROR: failed to write bucket topics info: ret=" << ret << dendl;
+ ldout(ps->store->ctx(), 1) << "ERROR: failed to write bucket topics info: ret=" << ret << dendl;
return ret;
}
rgw_pubsub_user_topics topics;
int ret = get_user_topics(&topics);
if (ret < 0) {
- ldout(store->ctx(), 0) << "ERROR: failed to read topics info: ret=" << ret << dendl;
+ ldout(store->ctx(), 1) << "ERROR: failed to read topics info: ret=" << ret << dendl;
return ret;
}
auto iter = topics.topics.find(name);
if (iter == topics.topics.end()) {
- ldout(store->ctx(), 0) << "ERROR: cannot add subscription to topic: topic not found" << dendl;
+ ldout(store->ctx(), 1) << "ERROR: topic not found" << dendl;
return -ENOENT;
}
return 0;
}
-
-int RGWUserPubSub::Bucket::create_notification(const string& topic_name, const set<string, ltstr_nocase>& events)
+int RGWUserPubSub::get_topic(const string& name, rgw_pubsub_topic *result)
{
+ rgw_pubsub_user_topics topics;
+ int ret = get_user_topics(&topics);
+ if (ret < 0) {
+ ldout(store->ctx(), 1) << "ERROR: failed to read topics info: ret=" << ret << dendl;
+ return ret;
+ }
+
+ auto iter = topics.topics.find(name);
+ if (iter == topics.topics.end()) {
+ ldout(store->ctx(), 1) << "ERROR: topic not found" << dendl;
+ return -ENOENT;
+ }
+
+ *result = iter->second.topic;
+ return 0;
+}
+
+int RGWUserPubSub::Bucket::create_notification(const string& topic_name, const rgw::notify::EventTypeList& events) {
+ return create_notification(topic_name, events, std::nullopt, "");
+}
+
+int RGWUserPubSub::Bucket::create_notification(const string& topic_name, const rgw::notify::EventTypeList& events, OptionalFilter s3_filter, const std::string& notif_name) {
rgw_pubsub_topic_subs user_topic_info;
RGWRados *store = ps->store;
int ret = ps->get_topic(topic_name, &user_topic_info);
if (ret < 0) {
- ldout(store->ctx(), 0) << "ERROR: failed to read topic info: ret=" << ret << dendl;
+ ldout(store->ctx(), 1) << "ERROR: failed to read topic '" << topic_name << "' info: ret=" << ret << dendl;
return ret;
}
+ ldout(store->ctx(), 20) << "successfully read topic '" << topic_name << "' info" << dendl;
RGWObjVersionTracker objv_tracker;
rgw_pubsub_bucket_topics bucket_topics;
ret = read_topics(&bucket_topics, &objv_tracker);
- if (ret < 0 && ret != -ENOENT) {
- ldout(store->ctx(), 0) << "ERROR: failed to read bucket topics info: ret=" << ret << dendl;
+ if (ret < 0) {
+ ldout(store->ctx(), 1) << "ERROR: failed to read topics from bucket '" <<
+ bucket.name << "': ret=" << ret << dendl;
return ret;
}
+ ldout(store->ctx(), 20) << "successfully read " << bucket_topics.topics.size() << " topics from bucket '" <<
+ bucket.name << "'" << dendl;
auto& topic_filter = bucket_topics.topics[topic_name];
topic_filter.topic = user_topic_info.topic;
topic_filter.events = events;
+ topic_filter.s3_id = notif_name;
+ if (s3_filter) {
+ topic_filter.s3_filter = *s3_filter;
+ }
ret = write_topics(bucket_topics, &objv_tracker);
if (ret < 0) {
- ldout(store->ctx(), 0) << "ERROR: failed to write topics info: ret=" << ret << dendl;
+ ldout(store->ctx(), 1) << "ERROR: failed to write topics to bucket '" << bucket.name << "': ret=" << ret << dendl;
return ret;
}
+
+ ldout(store->ctx(), 20) << "successfully wrote " << bucket_topics.topics.size() << " topics to bucket '" << bucket.name << "'" << dendl;
return 0;
}
int ret = ps->get_topic(topic_name, &user_topic_info);
if (ret < 0) {
- ldout(store->ctx(), 0) << "ERROR: failed to read topic info: ret=" << ret << dendl;
+ ldout(store->ctx(), 1) << "ERROR: failed to read topic info: ret=" << ret << dendl;
return ret;
}
rgw_pubsub_bucket_topics bucket_topics;
ret = read_topics(&bucket_topics, &objv_tracker);
- if (ret < 0 && ret != -ENOENT) {
- ldout(store->ctx(), 0) << "ERROR: failed to read bucket topics info: ret=" << ret << dendl;
+ if (ret < 0) {
+ ldout(store->ctx(), 1) << "ERROR: failed to read bucket topics info: ret=" << ret << dendl;
return ret;
}
ret = write_topics(bucket_topics, &objv_tracker);
if (ret < 0) {
- ldout(store->ctx(), 0) << "ERROR: failed to write topics info: ret=" << ret << dendl;
+ ldout(store->ctx(), 1) << "ERROR: failed to write topics info: ret=" << ret << dendl;
return ret;
}
return 0;
}
-int RGWUserPubSub::create_topic(const string& name)
-{
+int RGWUserPubSub::create_topic(const string& name) {
+ return create_topic(name, rgw_pubsub_sub_dest(), "");
+}
+
+int RGWUserPubSub::create_topic(const string& name, const rgw_pubsub_sub_dest& dest, const std::string& arn) {
RGWObjVersionTracker objv_tracker;
rgw_pubsub_user_topics topics;
int ret = read_user_topics(&topics, &objv_tracker);
if (ret < 0 && ret != -ENOENT) {
- ldout(store->ctx(), 0) << "ERROR: failed to read topics info: ret=" << ret << dendl;
+ // it's not an error if no topics exist yet, we create one
+ ldout(store->ctx(), 1) << "ERROR: failed to read topics info: ret=" << ret << dendl;
return ret;
}
-
+
rgw_pubsub_topic_subs& new_topic = topics.topics[name];
new_topic.topic.user = user;
new_topic.topic.name = name;
+ new_topic.topic.dest = dest;
+ new_topic.topic.arn = arn;
ret = write_user_topics(topics, &objv_tracker);
if (ret < 0) {
- ldout(store->ctx(), 0) << "ERROR: failed to write topics info: ret=" << ret << dendl;
+ ldout(store->ctx(), 1) << "ERROR: failed to write topics info: ret=" << ret << dendl;
return ret;
}
int ret = read_user_topics(&topics, &objv_tracker);
if (ret < 0 && ret != -ENOENT) {
- ldout(store->ctx(), 0) << "ERROR: failed to read topics info: ret=" << ret << dendl;
+ ldout(store->ctx(), 1) << "ERROR: failed to read topics info: ret=" << ret << dendl;
return ret;
+ } else if (ret == -ENOENT) {
+ // it's not an error if no topics exist, just a no-op
+ ldout(store->ctx(), 10) << "WARNING: failed to read topics info, deletion is a no-op: ret=" << ret << dendl;
+ return 0;
}
topics.topics.erase(name);
ret = write_user_topics(topics, &objv_tracker);
if (ret < 0) {
- ldout(store->ctx(), 0) << "ERROR: failed to write topics info: ret=" << ret << dendl;
+ ldout(store->ctx(), 1) << "ERROR: failed to remove topics info: ret=" << ret << dendl;
return ret;
}
{
int ret = ps->read(sub_meta_obj, result, objv_tracker);
if (ret < 0 && ret != -ENOENT) {
- ldout(ps->store->ctx(), 0) << "ERROR: failed to read subscription info: ret=" << ret << dendl;
+ ldout(ps->store->ctx(), 1) << "ERROR: failed to read subscription info: ret=" << ret << dendl;
return ret;
}
return 0;
{
int ret = ps->write(sub_meta_obj, sub_conf, objv_tracker);
if (ret < 0) {
- ldout(ps->store->ctx(), 0) << "ERROR: failed to write subscription info: ret=" << ret << dendl;
+ ldout(ps->store->ctx(), 1) << "ERROR: failed to write subscription info: ret=" << ret << dendl;
return ret;
}
{
int ret = ps->remove(sub_meta_obj, objv_tracker);
if (ret < 0) {
- ldout(ps->store->ctx(), 0) << "ERROR: failed to write subscription info: ret=" << ret << dendl;
+ ldout(ps->store->ctx(), 1) << "ERROR: failed to remove subscription info: ret=" << ret << dendl;
return ret;
}
return read_sub(result, nullptr);
}
-int RGWUserPubSub::Sub::subscribe(const string& topic, const rgw_pubsub_sub_dest& dest)
+int RGWUserPubSub::Sub::subscribe(const string& topic, const rgw_pubsub_sub_dest& dest, const std::string& s3_id)
{
RGWObjVersionTracker user_objv_tracker;
rgw_pubsub_user_topics topics;
int ret = ps->read_user_topics(&topics, &user_objv_tracker);
if (ret < 0) {
- ldout(store->ctx(), 0) << "ERROR: failed to read topics info: ret=" << ret << dendl;
- return ret;
+ ldout(store->ctx(), 1) << "ERROR: failed to read topics info: ret=" << ret << dendl;
+ return ret != -ENOENT ? ret : -EINVAL;
}
auto iter = topics.topics.find(topic);
if (iter == topics.topics.end()) {
- ldout(store->ctx(), 0) << "ERROR: cannot add subscription to topic: topic not found" << dendl;
- return -ENOENT;
+ ldout(store->ctx(), 1) << "ERROR: cannot add subscription to topic: topic not found" << dendl;
+ return -EINVAL;
}
auto& t = iter->second;
sub_conf.name = sub;
sub_conf.topic = topic;
sub_conf.dest = dest;
+ sub_conf.s3_id = s3_id;
t.subs.insert(sub);
ret = ps->write_user_topics(topics, &user_objv_tracker);
if (ret < 0) {
- ldout(store->ctx(), 0) << "ERROR: failed to write topics info: ret=" << ret << dendl;
+ ldout(store->ctx(), 1) << "ERROR: failed to write topics info: ret=" << ret << dendl;
return ret;
}
ret = write_sub(sub_conf, nullptr);
if (ret < 0) {
- ldout(store->ctx(), 0) << "ERROR: failed to write subscription info: ret=" << ret << dendl;
+ ldout(store->ctx(), 1) << "ERROR: failed to write subscription info: ret=" << ret << dendl;
return ret;
}
return 0;
rgw_pubsub_sub_config sub_conf;
int ret = read_sub(&sub_conf, &sobjv_tracker);
if (ret < 0) {
- ldout(store->ctx(), 0) << "ERROR: failed to read subscription info: ret=" << ret << dendl;
+ ldout(store->ctx(), 1) << "ERROR: failed to read subscription info: ret=" << ret << dendl;
return ret;
}
topic = sub_conf.topic;
int ret = ps->read_user_topics(&topics, &objv_tracker);
if (ret < 0) {
- ldout(store->ctx(), 0) << "ERROR: failed to read topics info: ret=" << ret << dendl;
- }
-
- if (ret >= 0) {
+ // not an error - could be that topic was already deleted
+ ldout(store->ctx(), 10) << "WARNING: failed to read topics info: ret=" << ret << dendl;
+ } else {
auto iter = topics.topics.find(topic);
if (iter != topics.topics.end()) {
auto& t = iter->second;
ret = ps->write_user_topics(topics, &objv_tracker);
if (ret < 0) {
- ldout(store->ctx(), 0) << "ERROR: failed to write topics info: ret=" << ret << dendl;
+ ldout(store->ctx(), 1) << "ERROR: failed to write topics info: ret=" << ret << dendl;
return ret;
}
}
ret = remove_sub(&sobjv_tracker);
if (ret < 0) {
- ldout(store->ctx(), 0) << "ERROR: failed to delete subscription info: ret=" << ret << dendl;
+ ldout(store->ctx(), 1) << "ERROR: failed to delete subscription info: ret=" << ret << dendl;
return ret;
}
return 0;
}
-void RGWUserPubSub::Sub::list_events_result::dump(Formatter *f) const
+template<typename EventType>
+void RGWUserPubSub::SubWithEvents<EventType>::list_events_result::dump(Formatter *f) const
{
encode_json("next_marker", next_marker, f);
encode_json("is_truncated", is_truncated, f);
- Formatter::ArraySection s(*f, "events");
+ Formatter::ArraySection s(*f, EventType::json_type_plural);
for (auto& event : events) {
- encode_json("event", event, f);
+ encode_json(EventType::json_type_single, event, f);
}
}
-int RGWUserPubSub::Sub::list_events(const string& marker, int max_events,
- list_events_result *result)
+template<typename EventType>
+int RGWUserPubSub::SubWithEvents<EventType>::list_events(const string& marker, int max_events)
{
RGWRados *store = ps->store;
rgw_pubsub_sub_config sub_conf;
int ret = get_conf(&sub_conf);
if (ret < 0) {
- ldout(store->ctx(), 0) << "ERROR: failed to read sub config: ret=" << ret << dendl;
+ ldout(store->ctx(), 1) << "ERROR: failed to read sub config: ret=" << ret << dendl;
return ret;
}
RGWSysObjectCtx obj_ctx(store->svc.sysobj->init_obj_ctx());
ret = store->get_bucket_info(obj_ctx, tenant, sub_conf.dest.bucket_name, bucket_info, nullptr, nullptr);
if (ret == -ENOENT) {
- result->is_truncated = false;
+ list.is_truncated = false;
return 0;
}
if (ret < 0) {
- ldout(store->ctx(), 0) << "ERROR: failed to read bucket info for events bucket: bucket=" << sub_conf.dest.bucket_name << " ret=" << ret << dendl;
+ ldout(store->ctx(), 1) << "ERROR: failed to read bucket info for events bucket: bucket=" << sub_conf.dest.bucket_name << " ret=" << ret << dendl;
return ret;
}
list_op.params.prefix = sub_conf.dest.oid_prefix;
list_op.params.marker = marker;
- vector<rgw_bucket_dir_entry> objs;
+ std::vector<rgw_bucket_dir_entry> objs;
- ret = list_op.list_objects(max_events, &objs, nullptr, &result->is_truncated);
+ ret = list_op.list_objects(max_events, &objs, nullptr, &list.is_truncated);
if (ret < 0) {
- ldout(store->ctx(), 0) << "ERROR: failed to list bucket: bucket=" << sub_conf.dest.bucket_name << " ret=" << ret << dendl;
+ ldout(store->ctx(), 1) << "ERROR: failed to list bucket: bucket=" << sub_conf.dest.bucket_name << " ret=" << ret << dendl;
return ret;
}
- if (result->is_truncated) {
- result->next_marker = list_op.get_next_marker().name;
+ if (list.is_truncated) {
+ list.next_marker = list_op.get_next_marker().name;
}
for (auto& obj : objs) {
try {
bl.decode_base64(bl64);
} catch (buffer::error& err) {
- ldout(store->ctx(), 0) << "ERROR: failed to event (not a valid base64)" << dendl;
+ ldout(store->ctx(), 1) << "ERROR: failed to event (not a valid base64)" << dendl;
continue;
}
- rgw_pubsub_event event;
+ EventType event;
auto iter = bl.cbegin();
try {
decode(event, iter);
} catch (buffer::error& err) {
- ldout(store->ctx(), 0) << "ERROR: failed to decode event" << dendl;
+ ldout(store->ctx(), 1) << "ERROR: failed to decode event" << dendl;
continue;
};
- result->events.push_back(event);
+ list.events.push_back(event);
}
return 0;
}
-int RGWUserPubSub::Sub::remove_event(const string& event_id)
+template<typename EventType>
+int RGWUserPubSub::SubWithEvents<EventType>::remove_event(const string& event_id)
{
RGWRados *store = ps->store;
rgw_pubsub_sub_config sub_conf;
int ret = get_conf(&sub_conf);
if (ret < 0) {
- ldout(store->ctx(), 0) << "ERROR: failed to read sub config: ret=" << ret << dendl;
+ ldout(store->ctx(), 1) << "ERROR: failed to read sub config: ret=" << ret << dendl;
return ret;
}
RGWSysObjectCtx sysobj_ctx(store->svc.sysobj->init_obj_ctx());
ret = store->get_bucket_info(sysobj_ctx, tenant, sub_conf.dest.bucket_name, bucket_info, nullptr, nullptr);
if (ret < 0) {
- ldout(store->ctx(), 0) << "ERROR: failed to read bucket info for events bucket: bucket=" << sub_conf.dest.bucket_name << " ret=" << ret << dendl;
+ ldout(store->ctx(), 1) << "ERROR: failed to read bucket info for events bucket: bucket=" << sub_conf.dest.bucket_name << " ret=" << ret << dendl;
return ret;
}
ret = del_op.delete_obj();
if (ret < 0) {
- ldout(store->ctx(), 0) << "ERROR: failed to remove event (obj=" << obj << "): ret=" << ret << dendl;
+ ldout(store->ctx(), 1) << "ERROR: failed to remove event (obj=" << obj << "): ret=" << ret << dendl;
}
return 0;
}
+
+template<typename EventType>
+void RGWUserPubSub::SubWithEvents<EventType>::dump(Formatter* f) const {
+ list.dump(f);
+}
+
+// explicit instantiation for the only two possible types
+// no need to move implementation to header
+template class RGWUserPubSub::SubWithEvents<rgw_pubsub_event>;
+template class RGWUserPubSub::SubWithEvents<rgw_pubsub_s3_record>;
+
+// -*- mode:C++; tab-width:8; c-basic-offset:2; indent-tabs-mode:t -*-
+// vim: ts=8 sw=2 smarttab ft=cpp
+
#ifndef CEPH_RGW_PUBSUB_H
#define CEPH_RGW_PUBSUB_H
#include "rgw_common.h"
#include "rgw_tools.h"
#include "rgw_zone.h"
-
+#include "rgw_rados.h"
+#include "rgw_notify_event_type.h"
#include "services/svc_sys_obj.h"
+class XMLObj;
+
+struct rgw_s3_key_filter {
+ std::string prefix_rule;
+ std::string suffix_rule;
+ std::string regex_rule;
+
+ bool has_content() const;
+
+ bool decode_xml(XMLObj *obj);
+ void dump_xml(Formatter *f) const;
+
+ void encode(bufferlist& bl) const {
+ ENCODE_START(1, 1, bl);
+ encode(prefix_rule, bl);
+ encode(suffix_rule, bl);
+ encode(regex_rule, bl);
+ ENCODE_FINISH(bl);
+ }
+
+ void decode(bufferlist::const_iterator& bl) {
+ DECODE_START(1, bl);
+ decode(prefix_rule, bl);
+ decode(suffix_rule, bl);
+ decode(regex_rule, bl);
+ DECODE_FINISH(bl);
+ }
+};
+WRITE_CLASS_ENCODER(rgw_s3_key_filter)
+
+using Metadata = std::map<std::string, std::string>;
+
+struct rgw_s3_metadata_filter {
+ Metadata metadata;
+
+ bool has_content() const;
+
+ bool decode_xml(XMLObj *obj);
+ void dump_xml(Formatter *f) const;
+
+ void encode(bufferlist& bl) const {
+ ENCODE_START(1, 1, bl);
+ encode(metadata, bl);
+ ENCODE_FINISH(bl);
+ }
+ void decode(bufferlist::const_iterator& bl) {
+ DECODE_START(1, bl);
+ decode(metadata, bl);
+ DECODE_FINISH(bl);
+ }
+};
+WRITE_CLASS_ENCODER(rgw_s3_metadata_filter)
+
+struct rgw_s3_filter {
+ rgw_s3_key_filter key_filter;
+ rgw_s3_metadata_filter metadata_filter;
+
+ bool has_content() const;
+
+ bool decode_xml(XMLObj *obj);
+ void dump_xml(Formatter *f) const;
+
+ void encode(bufferlist& bl) const {
+ ENCODE_START(1, 1, bl);
+ encode(key_filter, bl);
+ encode(metadata_filter, bl);
+ ENCODE_FINISH(bl);
+ }
+
+ void decode(bufferlist::const_iterator& bl) {
+ DECODE_START(1, bl);
+ decode(key_filter, bl);
+ decode(metadata_filter, bl);
+ DECODE_FINISH(bl);
+ }
+};
+WRITE_CLASS_ENCODER(rgw_s3_filter)
+
+using OptionalFilter = std::optional<rgw_s3_filter>;
+
+class rgw_pubsub_topic_filter;
+/* S3 notification configuration
+ * based on: https://docs.aws.amazon.com/AmazonS3/latest/API/RESTBucketPUTnotification.html
+<NotificationConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
+ <TopicConfiguration>
+ <Filter>
+ <S3Key>
+ <FilterRule>
+ <Name>suffix</Name>
+ <Value>jpg</Value>
+ </FilterRule>
+ </S3Key>
+ <S3Metadata>
+ <FilterRule>
+ <Name></Name>
+ <Value></Value>
+ </FilterRule>
+ </S3Metadata>
+ </Filter>
+ <Id>notification1</Id>
+ <Topic>arn:aws:sns:<region>:<account>:<topic></Topic>
+ <Event>s3:ObjectCreated:*</Event>
+ <Event>s3:ObjectRemoved:*</Event>
+ </TopicConfiguration>
+</NotificationConfiguration>
+*/
+struct rgw_pubsub_s3_notification {
+ // notification id
+ std::string id;
+ // types of events
+ rgw::notify::EventTypeList events;
+ // topic ARN
+ std::string topic_arn;
+ // filter rules
+ rgw_s3_filter filter;
+
+ bool decode_xml(XMLObj *obj);
+ void dump_xml(Formatter *f) const;
+
+ rgw_pubsub_s3_notification() = default;
+ // construct from rgw_pubsub_topic_filter (used by get/list notifications)
+ rgw_pubsub_s3_notification(const rgw_pubsub_topic_filter& topic_filter);
+};
+
+// return true if the key matches the prefix/suffix/regex rules of the key filter
+bool match(const rgw_s3_key_filter& filter, const std::string& key);
+// return true if the key matches the metadata rules of the metadata filter
+bool match(const rgw_s3_metadata_filter& filter, const Metadata& metadata);
+// return true if the event type matches (equal or contained in) one of the events in the list
+bool match(const rgw::notify::EventTypeList& events, rgw::notify::EventType event);
+
+struct rgw_pubsub_s3_notifications {
+ std::list<rgw_pubsub_s3_notification> list;
+ bool decode_xml(XMLObj *obj);
+ void dump_xml(Formatter *f) const;
+};
+
+/* S3 event records structure
+ * based on: https://docs.aws.amazon.com/AmazonS3/latest/dev/notification-content-structure.html
+{
+"Records":[
+ {
+ "eventVersion":""
+ "eventSource":"",
+ "awsRegion":"",
+ "eventTime":"",
+ "eventName":"",
+ "userIdentity":{
+ "principalId":""
+ },
+ "requestParameters":{
+ "sourceIPAddress":""
+ },
+ "responseElements":{
+ "x-amz-request-id":"",
+ "x-amz-id-2":""
+ },
+ "s3":{
+ "s3SchemaVersion":"1.0",
+ "configurationId":"",
+ "bucket":{
+ "name":"",
+ "ownerIdentity":{
+ "principalId":""
+ },
+ "arn":""
+ "id": ""
+ },
+ "object":{
+ "key":"",
+ "size": ,
+ "eTag":"",
+ "versionId":"",
+ "sequencer": "",
+ "metadata": ""
+ }
+ },
+ "eventId":"",
+ }
+]
+}*/
+
+struct rgw_pubsub_s3_record {
+ constexpr static const char* const json_type_single = "Record";
+ constexpr static const char* const json_type_plural = "Records";
+ // 2.1
+ std::string eventVersion;
+ // aws:s3
+ std::string eventSource;
+ // zonegroup
+ std::string awsRegion;
+ // time of the request
+ ceph::real_time eventTime;
+ // type of the event
+ std::string eventName;
+ // user that sent the request (not implemented)
+ std::string userIdentity;
+ // IP address of source of the request (not implemented)
+ std::string sourceIPAddress;
+ // request ID (not implemented)
+ std::string x_amz_request_id;
+ // radosgw that received the request
+ std::string x_amz_id_2;
+ // 1.0
+ std::string s3SchemaVersion;
+ // ID received in the notification request
+ std::string configurationId;
+ // bucket name
+ std::string bucket_name;
+ // bucket owner (not implemented)
+ std::string bucket_ownerIdentity;
+ // bucket ARN
+ std::string bucket_arn;
+ // object key
+ std::string object_key;
+ // object size (not implemented)
+ uint64_t object_size;
+ // object etag
+ std::string object_etag;
+ // object version id, if the bucket is versioned
+ std::string object_versionId;
+ // hexadecimal value used to determine event order for specific key
+ std::string object_sequencer;
+ // this is an rgw extension (not S3 standard)
+ // used to store a globally unique identifier of the event
+ // that could be used for acking
+ std::string id;
+ // this is an rgw extension holding the internal bucket id
+ std::string bucket_id;
+ // meta data
+ std::map<std::string, std::string> x_meta_map;
+
+ void encode(bufferlist& bl) const {
+ ENCODE_START(2, 1, bl);
+ encode(eventVersion, bl);
+ encode(eventSource, bl);
+ encode(awsRegion, bl);
+ encode(eventTime, bl);
+ encode(eventName, bl);
+ encode(userIdentity, bl);
+ encode(sourceIPAddress, bl);
+ encode(x_amz_request_id, bl);
+ encode(x_amz_id_2, bl);
+ encode(s3SchemaVersion, bl);
+ encode(configurationId, bl);
+ encode(bucket_name, bl);
+ encode(bucket_ownerIdentity, bl);
+ encode(bucket_arn, bl);
+ encode(object_key, bl);
+ encode(object_size, bl);
+ encode(object_etag, bl);
+ encode(object_versionId, bl);
+ encode(object_sequencer, bl);
+ encode(id, bl);
+ encode(bucket_id, bl);
+ encode(x_meta_map, bl);
+ ENCODE_FINISH(bl);
+ }
+
+ void decode(bufferlist::const_iterator& bl) {
+ DECODE_START(2, bl);
+ decode(eventVersion, bl);
+ decode(eventSource, bl);
+ decode(awsRegion, bl);
+ decode(eventTime, bl);
+ decode(eventName, bl);
+ decode(userIdentity, bl);
+ decode(sourceIPAddress, bl);
+ decode(x_amz_request_id, bl);
+ decode(x_amz_id_2, bl);
+ decode(s3SchemaVersion, bl);
+ decode(configurationId, bl);
+ decode(bucket_name, bl);
+ decode(bucket_ownerIdentity, bl);
+ decode(bucket_arn, bl);
+ decode(object_key, bl);
+ decode(object_size, bl);
+ decode(object_etag, bl);
+ decode(object_versionId, bl);
+ decode(object_sequencer, bl);
+ decode(id, bl);
+ if (struct_v >= 2) {
+ decode(bucket_id, bl);
+ decode(x_meta_map, bl);
+ }
+ DECODE_FINISH(bl);
+ }
+
+ void dump(Formatter *f) const;
+};
+WRITE_CLASS_ENCODER(rgw_pubsub_s3_record)
struct rgw_pubsub_event {
- string id;
- string event;
- string source;
+ constexpr static const char* const json_type_single = "event";
+ constexpr static const char* const json_type_plural = "events";
+ std::string id;
+ std::string event_name;
+ std::string source;
ceph::real_time timestamp;
JSONFormattable info;
void encode(bufferlist& bl) const {
ENCODE_START(1, 1, bl);
encode(id, bl);
- encode(event, bl);
+ encode(event_name, bl);
encode(source, bl);
encode(timestamp, bl);
encode(info, bl);
void decode(bufferlist::const_iterator& bl) {
DECODE_START(1, bl);
decode(id, bl);
- decode(event, bl);
+ decode(event_name, bl);
decode(source, bl);
decode(timestamp, bl);
decode(info, bl);
WRITE_CLASS_ENCODER(rgw_pubsub_event)
struct rgw_pubsub_sub_dest {
- string bucket_name;
- string oid_prefix;
- string push_endpoint;
- string push_endpoint_args;
+ std::string bucket_name;
+ std::string oid_prefix;
+ std::string push_endpoint;
+ std::string push_endpoint_args;
+ std::string arn_topic;
void encode(bufferlist& bl) const {
- ENCODE_START(2, 1, bl);
+ ENCODE_START(3, 1, bl);
encode(bucket_name, bl);
encode(oid_prefix, bl);
encode(push_endpoint, bl);
encode(push_endpoint_args, bl);
+ encode(arn_topic, bl);
ENCODE_FINISH(bl);
}
void decode(bufferlist::const_iterator& bl) {
- DECODE_START(2, bl);
+ DECODE_START(3, bl);
decode(bucket_name, bl);
decode(oid_prefix, bl);
decode(push_endpoint, bl);
if (struct_v >= 2) {
decode(push_endpoint_args, bl);
}
+ if (struct_v >= 3) {
+ decode(arn_topic, bl);
+ }
DECODE_FINISH(bl);
}
void dump(Formatter *f) const;
+ void dump_xml(Formatter *f) const;
};
WRITE_CLASS_ENCODER(rgw_pubsub_sub_dest)
struct rgw_pubsub_sub_config {
rgw_user user;
- string name;
- string topic;
+ std::string name;
+ std::string topic;
rgw_pubsub_sub_dest dest;
+ std::string s3_id;
void encode(bufferlist& bl) const {
- ENCODE_START(1, 1, bl);
+ ENCODE_START(2, 1, bl);
encode(user, bl);
encode(name, bl);
encode(topic, bl);
encode(dest, bl);
+ encode(s3_id, bl);
ENCODE_FINISH(bl);
}
void decode(bufferlist::const_iterator& bl) {
- DECODE_START(1, bl);
+ DECODE_START(2, bl);
decode(user, bl);
decode(name, bl);
decode(topic, bl);
decode(dest, bl);
+ if (struct_v >= 2) {
+ decode(s3_id, bl);
+ }
DECODE_FINISH(bl);
}
struct rgw_pubsub_topic {
rgw_user user;
- string name;
+ std::string name;
+ rgw_pubsub_sub_dest dest;
+ std::string arn;
void encode(bufferlist& bl) const {
- ENCODE_START(1, 1, bl);
+ ENCODE_START(2, 1, bl);
encode(user, bl);
encode(name, bl);
+ encode(dest, bl);
+ encode(arn, bl);
ENCODE_FINISH(bl);
}
void decode(bufferlist::const_iterator& bl) {
- DECODE_START(1, bl);
+ DECODE_START(2, bl);
decode(user, bl);
decode(name, bl);
+ if (struct_v >= 2) {
+ decode(dest, bl);
+ decode(arn, bl);
+ }
DECODE_FINISH(bl);
}
}
void dump(Formatter *f) const;
+ void dump_xml(Formatter *f) const;
bool operator<(const rgw_pubsub_topic& t) const {
return to_str().compare(t.to_str());
struct rgw_pubsub_topic_subs {
rgw_pubsub_topic topic;
- set<string> subs;
+ std::set<std::string> subs;
void encode(bufferlist& bl) const {
ENCODE_START(1, 1, bl);
struct rgw_pubsub_topic_filter {
rgw_pubsub_topic topic;
- set<string, ltstr_nocase> events;
+ rgw::notify::EventTypeList events;
+ std::string s3_id;
+ rgw_s3_filter s3_filter;
void encode(bufferlist& bl) const {
- ENCODE_START(1, 1, bl);
+ ENCODE_START(3, 1, bl);
encode(topic, bl);
- encode(events, bl);
+ // events are stored as a vector of strings
+ std::vector<std::string> tmp_events;
+ const auto converter = s3_id.empty() ? rgw::notify::to_ceph_string : rgw::notify::to_string;
+ std::transform(events.begin(), events.end(), std::back_inserter(tmp_events), converter);
+ encode(tmp_events, bl);
+ encode(s3_id, bl);
+ encode(s3_filter, bl);
ENCODE_FINISH(bl);
}
void decode(bufferlist::const_iterator& bl) {
- DECODE_START(1, bl);
+ DECODE_START(3, bl);
decode(topic, bl);
- decode(events, bl);
+ // events are stored as a vector of strings
+ events.clear();
+ std::vector<std::string> tmp_events;
+ decode(tmp_events, bl);
+ std::transform(tmp_events.begin(), tmp_events.end(), std::back_inserter(events), rgw::notify::from_string);
+ if (struct_v >= 2) {
+ decode(s3_id, bl);
+ }
+ if (struct_v >= 3) {
+ decode(s3_filter, bl);
+ }
DECODE_FINISH(bl);
}
WRITE_CLASS_ENCODER(rgw_pubsub_topic_filter)
struct rgw_pubsub_bucket_topics {
- map<string, rgw_pubsub_topic_filter> topics;
+ std::map<std::string, rgw_pubsub_topic_filter> topics;
void encode(bufferlist& bl) const {
ENCODE_START(1, 1, bl);
WRITE_CLASS_ENCODER(rgw_pubsub_bucket_topics)
struct rgw_pubsub_user_topics {
- map<string, rgw_pubsub_topic_subs> topics;
+ std::map<std::string, rgw_pubsub_topic_subs> topics;
void encode(bufferlist& bl) const {
ENCODE_START(1, 1, bl);
}
void dump(Formatter *f) const;
+ void dump_xml(Formatter *f) const;
};
WRITE_CLASS_ENCODER(rgw_pubsub_user_topics)
-static string pubsub_user_oid_prefix = "pubsub.user.";
+static std::string pubsub_user_oid_prefix = "pubsub.user.";
class RGWUserPubSub
{
rgw_raw_obj user_meta_obj;
- string user_meta_oid() const {
+ std::string user_meta_oid() const {
return pubsub_user_oid_prefix + user.to_str();
}
- string bucket_meta_oid(const rgw_bucket& bucket) const {
+ std::string bucket_meta_oid(const rgw_bucket& bucket) const {
return pubsub_user_oid_prefix + user.to_str() + ".bucket." + bucket.name + "/" + bucket.bucket_id;
}
- string sub_meta_oid(const string& name) const {
+ std::string sub_meta_oid(const string& name) const {
return pubsub_user_oid_prefix + user.to_str() + ".sub." + name;
}
int read_user_topics(rgw_pubsub_user_topics *result, RGWObjVersionTracker *objv_tracker);
int write_user_topics(const rgw_pubsub_user_topics& topics, RGWObjVersionTracker *objv_tracker);
+
public:
RGWUserPubSub(RGWRados *_store, const rgw_user& _user) : store(_store),
user(_user),
rgw_bucket bucket;
rgw_raw_obj bucket_meta_obj;
+ // read the list of topics associated with a bucket and populate into result
+ // use version tracker to enforce atomicity between read/write
+ // return 0 on success or if no topic was associated with the bucket, error code otherwise
int read_topics(rgw_pubsub_bucket_topics *result, RGWObjVersionTracker *objv_tracker);
+ // set the list of topics associated with a bucket
+ // use version tracker to enforce atomicity between read/write
+ // return 0 on success, error code otherwise
int write_topics(const rgw_pubsub_bucket_topics& topics, RGWObjVersionTracker *objv_tracker);
public:
Bucket(RGWUserPubSub *_ps, const rgw_bucket& _bucket) : ps(_ps), bucket(_bucket) {
ps->get_bucket_meta_obj(bucket, &bucket_meta_obj);
}
+ // read the list of topics associated with a bucket and populate into result
+ // return 0 on success or if no topic was associated with the bucket, error code otherwise
int get_topics(rgw_pubsub_bucket_topics *result);
- int create_notification(const string& topic_name, const set<string, ltstr_nocase>& events);
+ // adds a topic + filter (event list, and possibly key and metadata filters) to a bucket
+ // assigning a notification name is optional (needed for S3-compatible notifications)
+ // if the topic already exists on the bucket, the filter event list may be updated
+ // for S3-compliant notifications the overload taking s3_filter and notif_name should be used
+ // return -ENOENT if the topic does not exist
+ // return 0 on success, error code otherwise
+ int create_notification(const string& topic_name, const rgw::notify::EventTypeList& events);
+ int create_notification(const string& topic_name, const rgw::notify::EventTypeList& events, OptionalFilter s3_filter, const std::string& notif_name);
+ // remove a topic and filter from bucket
+ // if the topic does not exist on the bucket it is a no-op (considered success)
+ // return -ENOENT if the topic does not exist
+ // return 0 on success, error code otherwise
int remove_notification(const string& topic_name);
};
+ // base class for subscription
class Sub {
friend class RGWUserPubSub;
- RGWUserPubSub *ps;
- string sub;
+ protected:
+ RGWUserPubSub* const ps;
+ const std::string sub;
rgw_raw_obj sub_meta_obj;
int read_sub(rgw_pubsub_sub_config *result, RGWObjVersionTracker *objv_tracker);
int write_sub(const rgw_pubsub_sub_config& sub_conf, RGWObjVersionTracker *objv_tracker);
int remove_sub(RGWObjVersionTracker *objv_tracker);
public:
- Sub(RGWUserPubSub *_ps, const string& _sub) : ps(_ps), sub(_sub) {
+ Sub(RGWUserPubSub *_ps, const std::string& _sub) : ps(_ps), sub(_sub) {
ps->get_sub_meta_obj(sub, &sub_meta_obj);
}
- int subscribe(const string& topic_name, const rgw_pubsub_sub_dest& dest);
+ virtual ~Sub() = default;
+
+ int subscribe(const string& topic_name, const rgw_pubsub_sub_dest& dest, const std::string& s3_id="");
int unsubscribe(const string& topic_name);
- int get_conf(rgw_pubsub_sub_config *result);
+ int get_conf(rgw_pubsub_sub_config* result);
+
+ static const int DEFAULT_MAX_EVENTS = 100;
+ // the following virtual methods should only be called on derived classes
+ virtual int list_events(const string& marker, int max_events) {ceph_assert(false);}
+ virtual int remove_event(const string& event_id) {ceph_assert(false);}
+ virtual void dump(Formatter* f) const {ceph_assert(false);}
+ };
+ // subscription with templated list of events to support both S3 compliant and Ceph specific events
+ template<typename EventType>
+ class SubWithEvents : public Sub {
+ private:
struct list_events_result {
- string next_marker;
+ std::string next_marker;
bool is_truncated{false};
- std::vector<rgw_pubsub_event> events;
-
void dump(Formatter *f) const;
- };
+ std::vector<EventType> events;
+ } list;
- int list_events(const string& marker, int max_events, list_events_result *result);
- int remove_event(const string& event_id);
+ public:
+ SubWithEvents(RGWUserPubSub *_ps, const string& _sub) : Sub(_ps, _sub) {}
+
+ virtual ~SubWithEvents() = default;
+
+ int list_events(const string& marker, int max_events) override;
+ int remove_event(const string& event_id) override;
+ void dump(Formatter* f) const override;
};
using BucketRef = std::shared_ptr<Bucket>;
SubRef get_sub(const string& sub) {
return std::make_shared<Sub>(this, sub);
}
-
+
+ SubRef get_sub_with_events(const string& sub) {
+ auto tmpsub = Sub(this, sub);
+ rgw_pubsub_sub_config conf;
+ if (tmpsub.get_conf(&conf) < 0) {
+ return nullptr;
+ }
+ if (conf.s3_id.empty()) {
+ return std::make_shared<SubWithEvents<rgw_pubsub_event>>(this, sub);
+ }
+ return std::make_shared<SubWithEvents<rgw_pubsub_s3_record>>(this, sub);
+ }
+
void get_user_meta_obj(rgw_raw_obj *obj) const {
*obj = rgw_raw_obj(store->svc.zone->get_zone_params().log_pool, user_meta_oid());
}
*obj = rgw_raw_obj(store->svc.zone->get_zone_params().log_pool, sub_meta_oid(name));
}
+ // get all topics defined for the user and populate them into "result"
+ // return 0 on success or if no topics exist, error code otherwise
int get_user_topics(rgw_pubsub_user_topics *result);
+ // get a topic with its subscriptions by its name and populate it into "result"
+ // return -ENOENT if the topic does not exist
+ // return 0 on success, error code otherwise
int get_topic(const string& name, rgw_pubsub_topic_subs *result);
+ // get a topic by its name and populate it into "result"
+ // return -ENOENT if the topic does not exist
+ // return 0 on success, error code otherwise
+ int get_topic(const string& name, rgw_pubsub_topic *result);
+ // create a topic with a name only
+ // if the topic already exists it is a no-op (considered success)
+ // return 0 on success, error code otherwise
int create_topic(const string& name);
+ // create a topic with push destination information and ARN
+ // if the topic already exists, the destination and ARN values may be updated (considered success)
+ // return 0 on success, error code otherwise
+ int create_topic(const string& name, const rgw_pubsub_sub_dest& dest, const std::string& arn);
+ // remove a topic according to its name
+ // if the topic does not exist it is a no-op (considered success)
+ // return 0 on success, error code otherwise
int remove_topic(const string& name);
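+ // illustrative flow (sketch, hypothetical names): create a topic with a push destination,
+ // then attach it to a bucket as an S3-style notification
+ //
+ //   RGWUserPubSub ps(store, user);
+ //   ps.create_topic("mytopic", dest /* rgw_pubsub_sub_dest */, "arn:aws:sns:zonegroup::mytopic");
+ //   RGWUserPubSub::Bucket b(&ps, bucket);
+ //   b.create_notification("mytopic", {rgw::notify::ObjectCreated}, std::nullopt, "notif1");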
};
return ret;
}
+ obj_ctx.invalidate(const_cast<rgw_raw_obj&>(obj));
return 0;
}
#include <algorithm>
#include "include/buffer_fwd.h"
#include "common/Formatter.h"
+#include "common/async/completion.h"
#include "rgw_common.h"
#include "rgw_data_sync.h"
#include "rgw_pubsub.h"
using namespace rgw;
-std::string json_format_pubsub_event(const rgw_pubsub_event& event) {
+template<typename EventType>
+std::string json_format_pubsub_event(const EventType& event) {
std::stringstream ss;
JSONFormatter f(false);
- encode_json("event", event, &f);
+ encode_json(EventType::json_type_single, event, &f);
f.flush(ss);
return ss.str();
}
public:
RGWPubSubHTTPEndpoint(const std::string& _endpoint,
- const RGWHTTPArgs& args) :
- endpoint(_endpoint) {
- bool exists;
-
- str_ack_level = args.get("http-ack-level", &exists);
- if (!exists || str_ack_level == "any") {
- // "any" is default
- ack_level = ACK_LEVEL_ANY;
- } else if (str_ack_level == "non-error") {
- ack_level = ACK_LEVEL_NON_ERROR;
- } else {
- ack_level = std::atoi(str_ack_level.c_str());
- if (ack_level < 100 || ack_level >= 600) {
- throw configuration_error("HTTP: invalid http-ack-level " + str_ack_level);
- }
- }
+ const RGWHTTPArgs& args) : endpoint(_endpoint) {
+ bool exists;
- auto str_verify_ssl = args.get("verify-ssl", &exists);
- boost::algorithm::to_lower(str_verify_ssl);
- // verify server certificate by default
- if (!exists || str_verify_ssl == "true") {
- verify_ssl = true;
- } else if (str_verify_ssl == "false") {
- verify_ssl = false;
- } else {
- throw configuration_error("HTTP: verify-ssl must be true/false, not: " + str_verify_ssl);
+ str_ack_level = args.get("http-ack-level", &exists);
+ if (!exists || str_ack_level == "any") {
+ // "any" is default
+ ack_level = ACK_LEVEL_ANY;
+ } else if (str_ack_level == "non-error") {
+ ack_level = ACK_LEVEL_NON_ERROR;
+ } else {
+ ack_level = std::atoi(str_ack_level.c_str());
+ if (ack_level < 100 || ack_level >= 600) {
+ throw configuration_error("HTTP/S: invalid http-ack-level: " + str_ack_level);
}
}
+ auto str_verify_ssl = args.get("verify-ssl", &exists);
+ boost::algorithm::to_lower(str_verify_ssl);
+ // verify server certificate by default
+ if (!exists || str_verify_ssl == "true") {
+ verify_ssl = true;
+ } else if (str_verify_ssl == "false") {
+ verify_ssl = false;
+ } else {
+ throw configuration_error("HTTP/S: verify-ssl must be true/false, not: " + str_verify_ssl);
+ }
+ }
+
RGWCoroutine* send_to_completion_async(const rgw_pubsub_event& event, RGWDataSyncEnv* env) override {
return new PostCR(json_format_pubsub_event(event), env, endpoint, ack_level, verify_ssl);
}
+ RGWCoroutine* send_to_completion_async(const rgw_pubsub_s3_record& record, RGWDataSyncEnv* env) override {
+ return new PostCR(json_format_pubsub_event(record), env, endpoint, ack_level, verify_ssl);
+ }
+
+ int send_to_completion_async(CephContext* cct, const rgw_pubsub_s3_record& record, optional_yield y) override {
+ bufferlist read_bl;
+ RGWPostHTTPData request(cct, "POST", endpoint, &read_bl, verify_ssl);
+ const auto post_data = json_format_pubsub_event(record);
+ request.set_post_data(post_data);
+ request.set_send_length(post_data.length());
+ if (perfcounter) perfcounter->inc(l_rgw_pubsub_push_pending);
+ const auto rc = RGWHTTP::process(&request, y);
+ if (perfcounter) perfcounter->dec(l_rgw_pubsub_push_pending);
+ // TODO: use read_bl to process return code and handle according to ack level
+ return rc;
+ }
+
std::string to_str() const override {
- std::string str("HTTP Endpoint");
+ std::string str("HTTP/S Endpoint");
str += "\nURI: " + endpoint;
str += "\nAck Level: " + str_ack_level;
str += (verify_ssl ? "\nverify SSL" : "\ndon't verify SSL");
#ifdef WITH_RADOSGW_AMQP_ENDPOINT
class RGWPubSubAMQPEndpoint : public RGWPubSubEndpoint {
- private:
- enum ack_level_t {
- ACK_LEVEL_NONE,
- ACK_LEVEL_BROKER,
- ACK_LEVEL_ROUTEABLE
- };
- const std::string endpoint;
- const std::string topic;
- amqp::connection_ptr_t conn;
- ack_level_t ack_level;
- std::string str_ack_level;
-
- static std::string get_exchange(const RGWHTTPArgs& args) {
- bool exists;
- const auto exchange = args.get("amqp-exchange", &exists);
- if (!exists) {
- throw configuration_error("AMQP: missing amqp-exchange");
- }
- return exchange;
+private:
+ enum ack_level_t {
+ ACK_LEVEL_NONE,
+ ACK_LEVEL_BROKER,
+ ACK_LEVEL_ROUTEABLE
+ };
+ CephContext* const cct;
+ const std::string endpoint;
+ const std::string topic;
+ const std::string exchange;
+ amqp::connection_ptr_t conn;
+ ack_level_t ack_level;
+ std::string str_ack_level;
+
+ static std::string get_exchange(const RGWHTTPArgs& args) {
+ bool exists;
+ const auto exchange = args.get("amqp-exchange", &exists);
+ if (!exists) {
+ throw configuration_error("AMQP: missing amqp-exchange");
}
+ return exchange;
+ }
// NoAckPublishCR implements async amqp publishing via coroutine
// This coroutine ends when it sends the message and does not wait for an ack
class NoAckPublishCR : public RGWCoroutine {
private:
- RGWDataSyncEnv* const sync_env;
const std::string topic;
amqp::connection_ptr_t conn;
const std::string message;
public:
- NoAckPublishCR(RGWDataSyncEnv* _sync_env,
+ NoAckPublishCR(CephContext* cct,
const std::string& _topic,
amqp::connection_ptr_t& _conn,
const std::string& _message) :
- RGWCoroutine(_sync_env->cct), sync_env(_sync_env),
+ RGWCoroutine(cct),
topic(_topic), conn(_conn), message(_message) {}
// send message to endpoint, without waiting for reply
// note that it does not wait for an ack from the end client
class AckPublishCR : public RGWCoroutine, public RGWIOProvider {
private:
- RGWDataSyncEnv* const sync_env;
const std::string topic;
amqp::connection_ptr_t conn;
const std::string message;
const ack_level_t ack_level; // TODO not used for now
public:
- AckPublishCR(RGWDataSyncEnv* _sync_env,
+ AckPublishCR(CephContext* cct,
const std::string& _topic,
amqp::connection_ptr_t& _conn,
const std::string& _message,
ack_level_t _ack_level) :
- RGWCoroutine(_sync_env->cct), sync_env(_sync_env),
+ RGWCoroutine(cct),
topic(_topic), conn(_conn), message(_message), ack_level(_ack_level) {}
// send message to endpoint, waiting for reply
}
};
+public:
+ RGWPubSubAMQPEndpoint(const std::string& _endpoint,
+ const std::string& _topic,
+ const RGWHTTPArgs& args,
+ CephContext* _cct) :
+ cct(_cct),
+ endpoint(_endpoint),
+ topic(_topic),
+ exchange(get_exchange(args)),
+ conn(amqp::connect(endpoint, exchange)) {
+ if (!conn) {
+ throw configuration_error("AMQP: failed to create connection to: " + endpoint);
+ }
+ bool exists;
+ // get ack level
+ str_ack_level = args.get("amqp-ack-level", &exists);
+ if (!exists || str_ack_level == "broker") {
+ // "broker" is default
+ ack_level = ACK_LEVEL_BROKER;
+ } else if (str_ack_level == "none") {
+ ack_level = ACK_LEVEL_NONE;
+ } else if (str_ack_level == "routable") {
+ ack_level = ACK_LEVEL_ROUTEABLE;
+ } else {
+ throw configuration_error("AMQP: invalid amqp-ack-level: " + str_ack_level);
+ }
+ }
+
+ RGWCoroutine* send_to_completion_async(const rgw_pubsub_event& event, RGWDataSyncEnv* env) override {
+ ceph_assert(conn);
+ if (ack_level == ACK_LEVEL_NONE) {
+ return new NoAckPublishCR(cct, topic, conn, json_format_pubsub_event(event));
+ } else {
+ // TODO: currently broker and routable are the same - this will require different flags
+ // but the same mechanism
+ return new AckPublishCR(cct, topic, conn, json_format_pubsub_event(event), ack_level);
+ }
+ }
+
+ RGWCoroutine* send_to_completion_async(const rgw_pubsub_s3_record& record, RGWDataSyncEnv* env) override {
+ ceph_assert(conn);
+ if (ack_level == ACK_LEVEL_NONE) {
+ return new NoAckPublishCR(cct, topic, conn, json_format_pubsub_event(record));
+ } else {
+ // TODO: currently broker and routable are the same - this will require different flags
+ // but the same mechanism
+ return new AckPublishCR(cct, topic, conn, json_format_pubsub_event(record), ack_level);
+ }
+ }
+
+ // this allows waiting until "finish()" is called from a different thread
+ // waiting may either block the calling thread or yield, depending
+ // on compilation flag support and whether the optional_yield is set
+ class Waiter {
+ using Signature = void(boost::system::error_code);
+ using Completion = ceph::async::Completion<Signature>;
+ std::unique_ptr<Completion> completion = nullptr;
+ int ret;
+
+ mutable std::atomic<bool> done = false;
+ mutable std::mutex lock;
+ mutable std::condition_variable cond;
+
+ template <typename ExecutionContext, typename CompletionToken>
+ auto async_wait(ExecutionContext& ctx, CompletionToken&& token) {
+ boost::asio::async_completion<CompletionToken, Signature> init(token);
+ auto& handler = init.completion_handler;
+ {
+ std::unique_lock l{lock};
+ completion = Completion::create(ctx.get_executor(), std::move(handler));
+ }
+ return init.result.get();
+ }
+
public:
- RGWPubSubAMQPEndpoint(const std::string& _endpoint,
- const std::string& _topic,
- const RGWHTTPArgs& args) :
- endpoint(_endpoint),
- topic(_topic),
- conn(amqp::connect(endpoint, get_exchange(args))) {
- bool exists;
- // get ack level
- str_ack_level = args.get("amqp-ack-level", &exists);
- if (!exists || str_ack_level == "broker") {
- // "broker" is default
- ack_level = ACK_LEVEL_BROKER;
- } else if (str_ack_level == "none") {
- ack_level = ACK_LEVEL_NONE;
- } else if (str_ack_level == "routable") {
- ack_level = ACK_LEVEL_ROUTEABLE;
- } else {
- throw configuration_error("HTTP: invalid amqp-ack-level " + str_ack_level);
+ int wait(optional_yield y) {
+ if (done) {
+ return ret;
+ }
+#ifdef HAVE_BOOST_CONTEXT
+ if (y) {
+ auto& io_ctx = y.get_io_context();
+ auto& yield_ctx = y.get_yield_context();
+ boost::system::error_code ec;
+ async_wait(io_ctx, yield_ctx[ec]);
+ return -ec.value();
}
+#endif
+ std::unique_lock l(lock);
+ cond.wait(l, [this]{return (done==true);});
+ return ret;
}
- RGWCoroutine* send_to_completion_async(const rgw_pubsub_event& event, RGWDataSyncEnv* env) override {
- if (ack_level == ACK_LEVEL_NONE) {
- return new NoAckPublishCR(env, topic, conn, json_format_pubsub_event(event));
+ void finish(int r) {
+ std::unique_lock l{lock};
+ ret = r;
+ done = true;
+ if (completion) {
+ boost::system::error_code ec(-ret, boost::system::system_category());
+ Completion::post(std::move(completion), ec);
} else {
- // TODO: currently broker and routable are the same - this will require different flags
- // but the same mechanism
- return new AckPublishCR(env, topic, conn, json_format_pubsub_event(event), ack_level);
+ cond.notify_all();
}
}
+ };
- std::string to_str() const override {
- std::string str("AMQP(0.9.1) Endpoint");
- str += "\nURI: " + endpoint;
- str += "\nTopic: " + topic;
- str += "\nAck Level: " + str_ack_level;
- return str;
+ int send_to_completion_async(CephContext* cct, const rgw_pubsub_s3_record& record, optional_yield y) override {
+ ceph_assert(conn);
+ if (ack_level == ACK_LEVEL_NONE) {
+ return amqp::publish(conn, topic, json_format_pubsub_event(record));
+ } else {
+ // TODO: currently broker and routable are the same - this will require different flags but the same mechanism
+ // note: dynamic allocation of Waiter is needed when this is invoked from a beast coroutine
+ auto w = std::unique_ptr<Waiter>(new Waiter);
+ const auto rc = amqp::publish_with_confirm(conn,
+ topic,
+ json_format_pubsub_event(record),
+ std::bind(&Waiter::finish, w.get(), std::placeholders::_1));
+ if (rc < 0) {
+ // failed to publish, does not wait for reply
+ return rc;
+ }
+ return w->wait(y);
}
+ }
+
+ std::string to_str() const override {
+ std::string str("AMQP(0.9.1) Endpoint");
+ str += "\nURI: " + endpoint;
+ str += "\nTopic: " + topic;
+ str += "\nExchange: " + exchange;
+ str += "\nAck Level: " + str_ack_level;
+ return str;
+ }
};
static const std::string AMQP_0_9_1("0-9-1");
static const std::string AMQP_1_0("1-0");
+static const std::string AMQP_SCHEMA("amqp");
#endif // ifdef WITH_RADOSGW_AMQP_ENDPOINT
-RGWPubSubEndpoint::Ptr RGWPubSubEndpoint::create(const std::string& endpoint,
- const std::string& topic,
- const RGWHTTPArgs& args) {
- //fetch the schema from the endpoint
+static const std::string WEBHOOK_SCHEMA("webhook");
+static const std::string UNKNOWN_SCHEMA("unknown");
+static const std::string NO_SCHEMA("");
+
+const std::string& get_schema(const std::string& endpoint) {
+ if (endpoint.empty()) {
+ return NO_SCHEMA;
+ }
const auto pos = endpoint.find(':');
if (pos == std::string::npos) {
- throw configuration_error("malformed endpoint " + endpoint);
- return nullptr;
+ return UNKNOWN_SCHEMA;
}
const auto& schema = endpoint.substr(0,pos);
if (schema == "http" || schema == "https") {
- return Ptr(new RGWPubSubHTTPEndpoint(endpoint, args));
+ return WEBHOOK_SCHEMA;
#ifdef WITH_RADOSGW_AMQP_ENDPOINT
} else if (schema == "amqp") {
+ return AMQP_SCHEMA;
+#endif
+ }
+ return UNKNOWN_SCHEMA;
+}
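+// Examples (sketch): "http://localhost:8080/notif" and "https://..." both map to the
+// webhook schema, "amqp://user:password@localhost:5672" maps to the amqp schema, and
+// an endpoint with no "<schema>:" prefix, or with an unrecognized schema, yields "unknown"
+// (an empty endpoint yields the empty schema).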
+
+RGWPubSubEndpoint::Ptr RGWPubSubEndpoint::create(const std::string& endpoint,
+ const std::string& topic,
+ const RGWHTTPArgs& args,
+ CephContext* cct) {
+ const auto& schema = get_schema(endpoint);
+ if (schema == WEBHOOK_SCHEMA) {
+ return Ptr(new RGWPubSubHTTPEndpoint(endpoint, args));
+#ifdef WITH_RADOSGW_AMQP_ENDPOINT
+ } else if (schema == AMQP_SCHEMA) {
bool exists;
std::string version = args.get("amqp-version", &exists);
if (!exists) {
version = AMQP_0_9_1;
}
if (version == AMQP_0_9_1) {
- return Ptr(new RGWPubSubAMQPEndpoint(endpoint, topic, args));
+ return Ptr(new RGWPubSubAMQPEndpoint(endpoint, topic, args, cct));
} else if (version == AMQP_1_0) {
- throw configuration_error("amqp v1.0 not supported");
+ throw configuration_error("AMQP: v1.0 not supported");
return nullptr;
} else {
- throw configuration_error("unknown amqp version " + version);
+ throw configuration_error("AMQP: unknown version: " + version);
return nullptr;
}
} else if (schema == "amqps") {
- throw configuration_error("amqps not supported");
+ throw configuration_error("AMQP: ssl not supported");
return nullptr;
#endif
}
- throw configuration_error("unknown schema " + schema);
+ throw configuration_error("unknown schema in: " + endpoint);
return nullptr;
}
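+// Usage sketch (illustrative): create("amqp://localhost:5672", "mytopic", args, cct)
+// yields the AMQP 0.9.1 endpoint (provided args carry amqp-exchange), an
+// "http://"/"https://" endpoint yields the HTTP/S webhook endpoint, and anything
+// else throws configuration_error.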
#include <memory>
#include <stdexcept>
#include "include/buffer_fwd.h"
+#include "common/async/yield_context.h"
-// TODO the env should be used as a template parameter to differentiate
-// synchronization driven pushes to (when running on the pubsub zone) to direct rados driven pushes
-// when running on the main zone
+// TODO the env should be used as a template parameter to differentiate the source that triggers the pushes
class RGWDataSyncEnv;
class RGWCoroutine;
class RGWHTTPArgs;
+class CephContext;
struct rgw_pubsub_event;
+struct rgw_pubsub_s3_record;
// endpoint base class; all endpoint types should derive from it
class RGWPubSubEndpoint {
// factory method for the actual notification endpoint
// derived class specific arguments are passed in http args format
// may throw a configuration_error if creation fails
- static Ptr create(const std::string& endpoint, const std::string& topic, const RGWHTTPArgs& args);
+ static Ptr create(const std::string& endpoint, const std::string& topic, const RGWHTTPArgs& args, CephContext *cct=nullptr);
- // this method is used in order to send notification and wait for completion
- // in async manner via a coroutine
+ // this method is used to send a notification (Ceph specific) and wait for completion
+ // in an async manner, via a coroutine, when invoked in the data sync environment
virtual RGWCoroutine* send_to_completion_async(const rgw_pubsub_event& event, RGWDataSyncEnv* env) = 0;
+ // this method is used to send a notification (S3 compliant) and wait for completion
+ // in an async manner, via a coroutine, when invoked in the data sync environment
+ virtual RGWCoroutine* send_to_completion_async(const rgw_pubsub_s3_record& record, RGWDataSyncEnv* env) = 0;
+
+ // this method is used to send a notification (S3 compliant) and wait for completion
+ // in an async manner, via a coroutine, when invoked in the frontend environment
+ virtual int send_to_completion_async(CephContext* cct, const rgw_pubsub_s3_record& record, optional_yield y) = 0;
+
// present as string
virtual std::string to_str() const { return ""; }
int default_type,
bool configurable)
{
- s->format = default_type;
+ s->format = -1; // set to an invalid value so that allocation happens anyway
+ auto type = default_type;
if (configurable) {
string format_str = s->info.args.get("format");
if (format_str.compare("xml") == 0) {
- s->format = RGW_FORMAT_XML;
+ type = RGW_FORMAT_XML;
} else if (format_str.compare("json") == 0) {
- s->format = RGW_FORMAT_JSON;
+ type = RGW_FORMAT_JSON;
} else if (format_str.compare("html") == 0) {
- s->format = RGW_FORMAT_HTML;
+ type = RGW_FORMAT_HTML;
} else {
const char *accept = s->info.env->get("HTTP_ACCEPT");
if (accept) {
}
format_buf[i] = 0;
if ((strcmp(format_buf, "text/xml") == 0) || (strcmp(format_buf, "application/xml") == 0)) {
- s->format = RGW_FORMAT_XML;
+ type = RGW_FORMAT_XML;
} else if (strcmp(format_buf, "application/json") == 0) {
- s->format = RGW_FORMAT_JSON;
+ type = RGW_FORMAT_JSON;
} else if (strcmp(format_buf, "text/html") == 0) {
- s->format = RGW_FORMAT_HTML;
+ type = RGW_FORMAT_HTML;
}
}
}
}
+ return RGWHandler_REST::reallocate_formatter(s, type);
+}
+
+int RGWHandler_REST::reallocate_formatter(struct req_state *s, int type)
+{
+ if (s->format == type) {
+ // do nothing, just reset
+ ceph_assert(s->formatter);
+ s->formatter->reset();
+ return 0;
+ }
+
+ delete s->formatter;
+ s->formatter = nullptr;
+ s->format = type;
const string& mm = s->info.args.get("multipart-manifest");
const bool multipart_delete = (mm.compare("delete") == 0);
virtual RGWOp *op_copy() { return NULL; }
virtual RGWOp *op_options() { return NULL; }
+public:
static int allocate_formatter(struct req_state *s, int default_formatter,
bool configurable);
-public:
+
static constexpr int MAX_BUCKET_NAME_LEN = 255;
static constexpr int MAX_OBJ_NAME_LEN = 1024;
static int validate_bucket_name(const string& bucket);
static int validate_object_name(const string& object);
+ static int reallocate_formatter(struct req_state *s, int type);
int init_permissions(RGWOp* op) override;
int read_permissions(RGWOp* op) override;
--- /dev/null
+// -*- mode:C++; tab-width:8; c-basic-offset:2; indent-tabs-mode:t -*-
+// vim: ts=8 sw=2 smarttab
+
+#include <algorithm>
+#include <boost/tokenizer.hpp>
+#include <optional>
+#include "rgw_rest_pubsub_common.h"
+#include "rgw_rest_pubsub.h"
+#include "rgw_pubsub_push.h"
+#include "rgw_pubsub.h"
+#include "rgw_sync_module_pubsub.h"
+#include "rgw_op.h"
+#include "rgw_rest.h"
+#include "rgw_rest_s3.h"
+#include "rgw_arn.h"
+#include "rgw_auth_s3.h"
+#include "services/svc_zone.h"
+
+#define dout_context g_ceph_context
+#define dout_subsys ceph_subsys_rgw
+
+// command (AWS compliant):
+// POST
+// Action=CreateTopic&Name=<topic-name>[&push-endpoint=<endpoint>[&<arg1>=<value1>]]
+class RGWPSCreateTopic_ObjStore_AWS : public RGWPSCreateTopicOp {
+public:
+ int get_params() override {
+ topic_name = s->info.args.get("Name");
+ if (topic_name.empty()) {
+ ldout(s->cct, 1) << "CreateTopic Action 'Name' argument is missing" << dendl;
+ return -EINVAL;
+ }
+
+ dest.push_endpoint = s->info.args.get("push-endpoint");
+ for (const auto param : s->info.args.get_params()) {
+ if (param.first == "Action" || param.first == "Name" || param.first == "PayloadHash") {
+ continue;
+ }
+ dest.push_endpoint_args.append(param.first+"="+param.second+"&");
+ }
+
+ if (!dest.push_endpoint_args.empty()) {
+ // remove last separator
+ dest.push_endpoint_args.pop_back();
+ }
+
+ // dest object only stores endpoint info
+ // bucket to store events/records will be set only when subscription is created
+ dest.bucket_name = "";
+ dest.oid_prefix = "";
+ dest.arn_topic = topic_name;
+ // the topic ARN will be sent in the reply
+ const rgw::ARN arn(rgw::Partition::aws, rgw::Service::sns,
+ store->svc.zone->get_zonegroup().get_name(),
+ s->user->user_id.tenant, topic_name);
+ topic_arn = arn.to_string();
+ return 0;
+ }
+
+ void send_response() override {
+ if (op_ret) {
+ set_req_state_err(s, op_ret);
+ }
+ dump_errno(s);
+ end_header(s, this, "application/xml");
+
+ if (op_ret < 0) {
+ return;
+ }
+
+ const auto f = s->formatter;
+ f->open_object_section_in_ns("CreateTopicResponse", "https://sns.amazonaws.com/doc/2010-03-31/");
+ f->open_object_section("CreateTopicResult");
+ encode_xml("TopicArn", topic_arn, f);
+ f->close_section();
+ f->open_object_section("ResponseMetadata");
+ encode_xml("RequestId", s->req_id, f);
+ f->close_section();
+ f->close_section();
+ rgw_flush_formatter_and_reset(s, f);
+ }
+};
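+// Request/response sketch (illustrative; values are placeholders):
+//   POST / with body: Action=CreateTopic&Name=mytopic&push-endpoint=amqp://localhost:5672
+// on success the reply body has the form:
+//   <CreateTopicResponse xmlns="https://sns.amazonaws.com/doc/2010-03-31/">
+//     <CreateTopicResult><TopicArn>arn:aws:sns:<zonegroup>:<tenant>:mytopic</TopicArn></CreateTopicResult>
+//     <ResponseMetadata><RequestId>...</RequestId></ResponseMetadata>
+//   </CreateTopicResponse>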
+
+// command (AWS compliant):
+// POST
+// Action=ListTopics
+class RGWPSListTopics_ObjStore_AWS : public RGWPSListTopicsOp {
+public:
+ void send_response() override {
+ if (op_ret) {
+ set_req_state_err(s, op_ret);
+ }
+ dump_errno(s);
+ end_header(s, this, "application/xml");
+
+ if (op_ret < 0) {
+ return;
+ }
+
+ const auto f = s->formatter;
+ f->open_object_section_in_ns("ListTopicsResponse", "https://sns.amazonaws.com/doc/2010-03-31/");
+ f->open_object_section("ListTopicsResult");
+ encode_xml("Topics", result, f);
+ f->close_section();
+ f->open_object_section("ResponseMetadata");
+ encode_xml("RequestId", s->req_id, f);
+ f->close_section();
+ f->close_section();
+ rgw_flush_formatter_and_reset(s, f);
+ }
+};
+
+// command (extension to AWS):
+// POST
+// Action=GetTopic&TopicArn=<topic-arn>
+class RGWPSGetTopic_ObjStore_AWS : public RGWPSGetTopicOp {
+public:
+ int get_params() override {
+ const auto topic_arn = rgw::ARN::parse((s->info.args.get("TopicArn")));
+
+ if (!topic_arn || topic_arn->resource.empty()) {
+ ldout(s->cct, 1) << "GetTopic Action 'TopicArn' argument is missing or invalid" << dendl;
+ return -EINVAL;
+ }
+
+ topic_name = topic_arn->resource;
+ return 0;
+ }
+
+ void send_response() override {
+ if (op_ret) {
+ set_req_state_err(s, op_ret);
+ }
+ dump_errno(s);
+ end_header(s, this, "application/xml");
+
+ if (op_ret < 0) {
+ return;
+ }
+
+ const auto f = s->formatter;
+ f->open_object_section("GetTopicResponse");
+ f->open_object_section("GetTopicResult");
+ encode_xml("Topic", result.topic, f);
+ f->close_section();
+ f->open_object_section("ResponseMetadata");
+ encode_xml("RequestId", s->req_id, f);
+ f->close_section();
+ f->close_section();
+ rgw_flush_formatter_and_reset(s, f);
+ }
+};
+
+// command (AWS compliant):
+// POST
+// Action=DeleteTopic&TopicArn=<topic-arn>
+class RGWPSDeleteTopic_ObjStore_AWS : public RGWPSDeleteTopicOp {
+public:
+ int get_params() override {
+ const auto topic_arn = rgw::ARN::parse((s->info.args.get("TopicArn")));
+
+ if (!topic_arn || topic_arn->resource.empty()) {
+ ldout(s->cct, 1) << "DeleteTopic Action 'TopicArn' argument is missing or invalid" << dendl;
+ return -EINVAL;
+ }
+
+ topic_name = topic_arn->resource;
+ return 0;
+ }
+
+ void send_response() override {
+ if (op_ret) {
+ set_req_state_err(s, op_ret);
+ }
+ dump_errno(s);
+ end_header(s, this, "application/xml");
+
+ if (op_ret < 0) {
+ return;
+ }
+
+ const auto f = s->formatter;
+ f->open_object_section_in_ns("DeleteTopicResponse", "https://sns.amazonaws.com/doc/2010-03-31/");
+ f->open_object_section("ResponseMetadata");
+ encode_xml("RequestId", s->req_id, f);
+ f->close_section();
+ f->close_section();
+ rgw_flush_formatter_and_reset(s, f);
+ }
+};
+
+namespace {
+// utility classes and functions for handling parameters with the following format:
+// Attributes.entry.{N}.{key|value}={VALUE}
+// N - any unsigned number
+// VALUE - url encoded string
+
+// an Attribute holds a key and a value
+// the ctor and set() assign the field named by the "type" argument
+// if type is not "key" or "value" it is a no-op
+class Attribute {
+ std::string key;
+ std::string value;
+public:
+ Attribute(const std::string& type, const std::string& key_or_value) {
+ set(type, key_or_value);
+ }
+ void set(const std::string& type, const std::string& key_or_value) {
+ if (type == "key") {
+ key = key_or_value;
+ } else if (type == "value") {
+ value = key_or_value;
+ }
+ }
+ const std::string& get_key() const { return key; }
+ const std::string& get_value() const { return value; }
+};
+
+using AttributeMap = std::map<unsigned, Attribute>;
+
+// aggregate the attributes into a map
+// the key and value are associated by the index (N)
+// no assumptions are made on the order in which these parameters are added
+void update_attribute_map(const std::string& input, AttributeMap& map) {
+ const boost::char_separator<char> sep(".");
+ const boost::tokenizer tokens(input, sep);
+ auto token = tokens.begin();
+ if (*token != "Attributes") {
+ return;
+ }
+ ++token;
+
+ if (*token != "entry") {
+ return;
+ }
+ ++token;
+
+ unsigned idx;
+ try {
+ idx = std::stoul(*token);
+ } catch (const std::invalid_argument&) {
+ return;
+ }
+ ++token;
+
+ std::string key_or_value = "";
+ // get the rest of the string regardless of dots
+ // this is to allow dots in the value
+ while (token != tokens.end()) {
+ key_or_value.append(*token+".");
+ ++token;
+ }
+ // remove last separator
+ key_or_value.pop_back();
+
+ auto pos = key_or_value.find("=");
+ if (pos != string::npos) {
+ const auto key_or_value_lhs = key_or_value.substr(0, pos);
+ const auto key_or_value_rhs = url_decode(key_or_value.substr(pos + 1, key_or_value.size() - 1));
+ const auto map_it = map.find(idx);
+ if (map_it == map.end()) {
+ // new entry
+ map.emplace(std::make_pair(idx, Attribute(key_or_value_lhs, key_or_value_rhs)));
+ } else {
+ // existing entry
+ map_it->second.set(key_or_value_lhs, key_or_value_rhs);
+ }
+ }
+}
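+// Example (sketch): the two parameters
+//   Attributes.entry.1.key=amqp-exchange
+//   Attributes.entry.1.value=ex1
+// aggregate into a single map entry {1 -> ("amqp-exchange", "ex1")},
+// regardless of the order in which the two calls happen.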
+}
+
+void RGWHandler_REST_PSTopic_AWS::rgw_topic_parse_input() {
+ if (post_body.size() > 0) {
+ ldout(s->cct, 10) << "Content of POST: " << post_body << dendl;
+
+ if (post_body.find("Action") != string::npos) {
+ const boost::char_separator<char> sep("&");
+ const boost::tokenizer<boost::char_separator<char>> tokens(post_body, sep);
+ AttributeMap map;
+ for (const auto& t : tokens) {
+ auto pos = t.find("=");
+ if (pos != string::npos) {
+ const auto key = t.substr(0, pos);
+ if (key == "Action") {
+ s->info.args.append(key, t.substr(pos + 1, t.size() - 1));
+ } else if (key == "Name" || key == "TopicArn") {
+ const auto value = url_decode(t.substr(pos + 1, t.size() - 1));
+ s->info.args.append(key, value);
+ } else {
+ update_attribute_map(t, map);
+ }
+ }
+ }
+ // update the regular args with the content of the attribute map
+ for (const auto attr : map) {
+ s->info.args.append(attr.second.get_key(), attr.second.get_value());
+ }
+ }
+ const auto payload_hash = rgw::auth::s3::calc_v4_payload_hash(post_body);
+ s->info.args.append("PayloadHash", payload_hash);
+ }
+}
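+// Example (sketch): a POST body such as
+//   Action=CreateTopic&Name=mytopic&Attributes.entry.1.key=amqp-exchange&Attributes.entry.1.value=ex1
+// is flattened into the regular args as Action=CreateTopic, Name=mytopic and
+// amqp-exchange=ex1, plus a PayloadHash entry computed over the raw body.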
+
+RGWOp* RGWHandler_REST_PSTopic_AWS::op_post() {
+ rgw_topic_parse_input();
+
+ if (s->info.args.exists("Action")) {
+ const auto action = s->info.args.get("Action");
+ if (action.compare("CreateTopic") == 0)
+ return new RGWPSCreateTopic_ObjStore_AWS();
+ if (action.compare("DeleteTopic") == 0)
+ return new RGWPSDeleteTopic_ObjStore_AWS;
+ if (action.compare("ListTopics") == 0)
+ return new RGWPSListTopics_ObjStore_AWS();
+ if (action.compare("GetTopic") == 0)
+ return new RGWPSGetTopic_ObjStore_AWS();
+ }
+
+ return nullptr;
+}
+
+int RGWHandler_REST_PSTopic_AWS::authorize(const DoutPrefixProvider* dpp) {
+ /*if (s->info.args.exists("Action") && s->info.args.get("Action").find("Topic") != std::string::npos) {
+ // TODO: some topic specific authorization
+ return 0;
+ }*/
+ return RGW_Auth_S3::authorize(dpp, store, auth_registry, s);
+}
+
+
+namespace {
+// return a unique topic by prefixing it with the notification name: <notification>_<topic>
+std::string topic_to_unique(const std::string& topic, const std::string& notification) {
+ return notification + "_" + topic;
+}
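+// e.g. (sketch): topic "mytopic" with notification "notif1" becomes "notif1_mytopic";
+// unique_to_topic("notif1_mytopic", "notif1") recovers "mytopic"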
+
+// extract the topic from a unique topic of the form: <notification>_<topic>
+[[maybe_unused]] std::string unique_to_topic(const std::string& unique_topic, const std::string& notification) {
+ if (unique_topic.find(notification + "_") == string::npos) {
+ return "";
+ }
+ return unique_topic.substr(notification.length() + 1);
+}
+
+// from list of bucket topics, find the one that was auto-generated by a notification
+auto find_unique_topic(const rgw_pubsub_bucket_topics& bucket_topics, const std::string& notif_name) {
+ auto it = std::find_if(bucket_topics.topics.begin(), bucket_topics.topics.end(), [&](const auto& val) { return notif_name == val.second.s3_id; });
+ return it != bucket_topics.topics.end() ?
+ std::optional<std::reference_wrapper<const rgw_pubsub_topic_filter>>(it->second):
+ std::nullopt;
+}
+}
+
+// command (S3 compliant): PUT /<bucket name>?notification
+// a "notification" and a subscription will be auto-generated
+// actual configuration is XML encoded in the body of the message
+class RGWPSCreateNotif_ObjStore_S3 : public RGWPSCreateNotifOp {
+ rgw_pubsub_s3_notifications configurations;
+
+ int get_params_from_body() {
+ const auto max_size = s->cct->_conf->rgw_max_put_param_size;
+ int r;
+ bufferlist data;
+ std::tie(r, data) = rgw_rest_read_all_input(s, max_size, false);
+
+ if (r < 0) {
+ ldout(s->cct, 1) << "failed to read XML payload" << dendl;
+ return r;
+ }
+ if (data.length() == 0) {
+ ldout(s->cct, 1) << "XML payload missing" << dendl;
+ return -EINVAL;
+ }
+
+ RGWXMLDecoder::XMLParser parser;
+
+ if (!parser.init()){
+ ldout(s->cct, 1) << "failed to initialize XML parser" << dendl;
+ return -EINVAL;
+ }
+ if (!parser.parse(data.c_str(), data.length(), 1)) {
+ ldout(s->cct, 1) << "failed to parse XML payload" << dendl;
+ return -ERR_MALFORMED_XML;
+ }
+ try {
+ // NotificationConfiguration is mandatory
+ RGWXMLDecoder::decode_xml("NotificationConfiguration", configurations, &parser, true);
+ } catch (RGWXMLDecoder::err& err) {
+ ldout(s->cct, 1) << "failed to parse XML payload. error: " << err << dendl;
+ return -ERR_MALFORMED_XML;
+ }
+ return 0;
+ }
+
+ int get_params() override {
+ bool exists;
+ const auto no_value = s->info.args.get("notification", &exists);
+ if (!exists) {
+ ldout(s->cct, 1) << "missing required param 'notification'" << dendl;
+ return -EINVAL;
+ }
+ if (no_value.length() > 0) {
+ ldout(s->cct, 1) << "param 'notification' should not have any value" << dendl;
+ return -EINVAL;
+ }
+ if (s->bucket_name.empty()) {
+ ldout(s->cct, 1) << "request must be on a bucket" << dendl;
+ return -EINVAL;
+ }
+ bucket_name = s->bucket_name;
+ return 0;
+ }
+
+public:
+ const char* name() const override { return "pubsub_notification_create_s3"; }
+ void execute() override;
+};
+
+void RGWPSCreateNotif_ObjStore_S3::execute() {
+ op_ret = get_params_from_body();
+ if (op_ret < 0) {
+ return;
+ }
+
+ ups.emplace(store, s->owner.get_id());
+ auto b = ups->get_bucket(bucket_info.bucket);
+ ceph_assert(b);
+ std::string data_bucket_prefix = "";
+ std::string data_oid_prefix = "";
+ bool push_only = true;
+ if (store->get_sync_module()) {
+ const auto psmodule = dynamic_cast<RGWPSSyncModuleInstance*>(store->get_sync_module().get());
+ if (psmodule) {
+ const auto& conf = psmodule->get_effective_conf();
+ data_bucket_prefix = conf["data_bucket_prefix"];
+ data_oid_prefix = conf["data_oid_prefix"];
+ // TODO: allow "push-only" on PS zone as well
+ push_only = false;
+ }
+ }
+
+ for (const auto& c : configurations.list) {
+ const auto& notif_name = c.id;
+ if (notif_name.empty()) {
+ ldout(s->cct, 1) << "missing notification id" << dendl;
+ op_ret = -EINVAL;
+ return;
+ }
+ if (c.topic_arn.empty()) {
+ ldout(s->cct, 1) << "missing topic ARN in notification: '" << notif_name << "'" << dendl;
+ op_ret = -EINVAL;
+ return;
+ }
+
+ const auto arn = rgw::ARN::parse(c.topic_arn);
+ if (!arn || arn->resource.empty()) {
+ ldout(s->cct, 1) << "topic ARN has invalid format: '" << c.topic_arn << "' in notification: '" << notif_name << "'" << dendl;
+ op_ret = -EINVAL;
+ return;
+ }
+
+ if (std::find(c.events.begin(), c.events.end(), rgw::notify::UnknownEvent) != c.events.end()) {
+ ldout(s->cct, 1) << "unknown event type in notification: '" << notif_name << "'" << dendl;
+ op_ret = -EINVAL;
+ return;
+ }
+
+ const auto topic_name = arn->resource;
+
+ // get topic information. destination information is stored in the topic
+ rgw_pubsub_topic topic_info;
+ op_ret = ups->get_topic(topic_name, &topic_info);
+ if (op_ret < 0) {
+ ldout(s->cct, 1) << "failed to get topic '" << topic_name << "', ret=" << op_ret << dendl;
+ return;
+ }
+ // make sure that the full topic configuration matches
+ // TODO: use ARN match function
+
+ // create a unique topic name. this has 2 reasons:
+ // (1) topics cannot be shared between different S3 notifications because they hold the filter information
+ // (2) make topic cleanup easier when the notification is removed
+ const auto unique_topic_name = topic_to_unique(topic_name, notif_name);
+ // generate the internal topic. destination is stored here for the "push-only" case
+ // when no subscription exists
+ // ARN is cached to make the "GET" method faster
+ op_ret = ups->create_topic(unique_topic_name, topic_info.dest, topic_info.arn);
+ if (op_ret < 0) {
+ ldout(s->cct, 1) << "failed to auto-generate unique topic '" << unique_topic_name <<
+ "', ret=" << op_ret << dendl;
+ return;
+ }
+ ldout(s->cct, 20) << "successfully auto-generated unique topic '" << unique_topic_name << "'" << dendl;
+ // generate the notification
+ rgw::notify::EventTypeList events;
+ op_ret = b->create_notification(unique_topic_name, c.events, std::make_optional(c.filter), notif_name);
+ if (op_ret < 0) {
+ ldout(s->cct, 1) << "failed to auto-generate notification for unique topic '" << unique_topic_name <<
+ "', ret=" << op_ret << dendl;
+ // rollback generated topic (ignore return value)
+ ups->remove_topic(unique_topic_name);
+ return;
+ }
+ ldout(s->cct, 20) << "successfully auto-generated notification for unique topic '" << unique_topic_name << "'" << dendl;
+
+ if (!push_only) {
+ // generate the subscription with destination information from the original topic
+ rgw_pubsub_sub_dest dest = topic_info.dest;
+ dest.bucket_name = data_bucket_prefix + s->owner.get_id().to_str() + "-" + unique_topic_name;
+ dest.oid_prefix = data_oid_prefix + notif_name + "/";
+ auto sub = ups->get_sub(notif_name);
+ op_ret = sub->subscribe(unique_topic_name, dest, notif_name);
+ if (op_ret < 0) {
+ ldout(s->cct, 1) << "failed to auto-generate subscription '" << notif_name << "', ret=" << op_ret << dendl;
+ // rollback generated notification (ignore return value)
+ b->remove_notification(unique_topic_name);
+ // rollback generated topic (ignore return value)
+ ups->remove_topic(unique_topic_name);
+ return;
+ }
+ ldout(s->cct, 20) << "successfully auto-generated subscription '" << notif_name << "'" << dendl;
+ }
+ }
+}
+
+// command (extension to S3): DELETE /bucket?notification[=<notification-id>]
+class RGWPSDeleteNotif_ObjStore_S3 : public RGWPSDeleteNotifOp {
+private:
+ std::string notif_name;
+
+ int get_params() override {
+ bool exists;
+ notif_name = s->info.args.get("notification", &exists);
+ if (!exists) {
+ ldout(s->cct, 1) << "missing required param 'notification'" << dendl;
+ return -EINVAL;
+ }
+ if (s->bucket_name.empty()) {
+ ldout(s->cct, 1) << "request must be on a bucket" << dendl;
+ return -EINVAL;
+ }
+ bucket_name = s->bucket_name;
+ return 0;
+ }
+
+ void remove_notification_by_topic(const std::string& topic_name, const RGWUserPubSub::BucketRef& b) {
+ op_ret = b->remove_notification(topic_name);
+ if (op_ret < 0) {
+ ldout(s->cct, 1) << "failed to remove notification of topic '" << topic_name << "', ret=" << op_ret << dendl;
+ }
+ op_ret = ups->remove_topic(topic_name);
+ if (op_ret < 0) {
+ ldout(s->cct, 1) << "failed to remove auto-generated topic '" << topic_name << "', ret=" << op_ret << dendl;
+ }
+ }
+
+public:
+ void execute() override;
+ const char* name() const override { return "pubsub_notification_delete_s3"; }
+};
+
+void RGWPSDeleteNotif_ObjStore_S3::execute() {
+ op_ret = get_params();
+ if (op_ret < 0) {
+ return;
+ }
+
+ ups.emplace(store, s->owner.get_id());
+ auto b = ups->get_bucket(bucket_info.bucket);
+ ceph_assert(b);
+
+ // get all topics on a bucket
+ rgw_pubsub_bucket_topics bucket_topics;
+ op_ret = b->get_topics(&bucket_topics);
+ if (op_ret < 0) {
+ ldout(s->cct, 1) << "failed to get list of topics from bucket '" << bucket_info.bucket.name << "', ret=" << op_ret << dendl;
+ return;
+ }
+
+ if (!notif_name.empty()) {
+ // delete a specific notification
+ const auto unique_topic = find_unique_topic(bucket_topics, notif_name);
+ if (unique_topic) {
+ // remove the auto-generated subscription according to the notification name (if it exists)
+ const auto unique_topic_name = unique_topic->get().topic.name;
+ auto sub = ups->get_sub(notif_name);
+ op_ret = sub->unsubscribe(unique_topic_name);
+ if (op_ret < 0 && op_ret != -ENOENT) {
+ ldout(s->cct, 1) << "failed to remove auto-generated subscription '" << notif_name << "', ret=" << op_ret << dendl;
+ return;
+ }
+ remove_notification_by_topic(unique_topic_name, b);
+ return;
+ }
+ // notification to be removed is not found - considered success
+ ldout(s->cct, 20) << "notification '" << notif_name << "' already removed" << dendl;
+ return;
+ }
+
+ // delete all notifications on a bucket
+ for (const auto& topic : bucket_topics.topics) {
+ // remove the auto-generated subscription of the topic (if it exists)
+ rgw_pubsub_topic_subs topic_subs;
+ op_ret = ups->get_topic(topic.first, &topic_subs);
+ for (const auto& topic_sub_name : topic_subs.subs) {
+ auto sub = ups->get_sub(topic_sub_name);
+ rgw_pubsub_sub_config sub_conf;
+ op_ret = sub->get_conf(&sub_conf);
+ if (op_ret < 0) {
+ ldout(s->cct, 1) << "failed to get subscription '" << topic_sub_name << "' info, ret=" << op_ret << dendl;
+ return;
+ }
+ if (!sub_conf.s3_id.empty()) {
+ // S3 notification, has autogenerated subscription
+ const auto& sub_topic_name = sub_conf.topic;
+ op_ret = sub->unsubscribe(sub_topic_name);
+ if (op_ret < 0) {
+ ldout(s->cct, 1) << "failed to remove auto-generated subscription '" << topic_sub_name << "', ret=" << op_ret << dendl;
+ return;
+ }
+ }
+ }
+ remove_notification_by_topic(topic.first, b);
+ }
+}
+
+// command (S3 compliant): GET /bucket?notification[=<notification-id>]
+class RGWPSListNotifs_ObjStore_S3 : public RGWPSListNotifsOp {
+private:
+ std::string notif_name;
+ rgw_pubsub_s3_notifications notifications;
+
+ int get_params() override {
+ bool exists;
+ notif_name = s->info.args.get("notification", &exists);
+ if (!exists) {
+ ldout(s->cct, 1) << "missing required param 'notification'" << dendl;
+ return -EINVAL;
+ }
+ if (s->bucket_name.empty()) {
+ ldout(s->cct, 1) << "request must be on a bucket" << dendl;
+ return -EINVAL;
+ }
+ bucket_name = s->bucket_name;
+ return 0;
+ }
+
+public:
+ void execute() override;
+ void send_response() override {
+ if (op_ret) {
+ set_req_state_err(s, op_ret);
+ }
+ dump_errno(s);
+ end_header(s, this, "application/xml");
+
+ if (op_ret < 0) {
+ return;
+ }
+ notifications.dump_xml(s->formatter);
+ rgw_flush_formatter_and_reset(s, s->formatter);
+ }
+ const char* name() const override { return "pubsub_notifications_get_s3"; }
+};
+
+void RGWPSListNotifs_ObjStore_S3::execute() {
+ ups.emplace(store, s->owner.get_id());
+ auto b = ups->get_bucket(bucket_info.bucket);
+ ceph_assert(b);
+
+ // get all topics on a bucket
+ rgw_pubsub_bucket_topics bucket_topics;
+ op_ret = b->get_topics(&bucket_topics);
+ if (op_ret < 0) {
+ ldout(s->cct, 1) << "failed to get list of topics from bucket '" << bucket_info.bucket.name << "', ret=" << op_ret << dendl;
+ return;
+ }
+ if (!notif_name.empty()) {
+ // get info of a specific notification
+ const auto unique_topic = find_unique_topic(bucket_topics, notif_name);
+ if (unique_topic) {
+ notifications.list.emplace_back(unique_topic->get());
+ return;
+ }
+ op_ret = -ENOENT;
+ ldout(s->cct, 1) << "failed to get notification info for '" << notif_name << "', ret=" << op_ret << dendl;
+ return;
+ }
+ // loop through all topics of the bucket
+ for (const auto& topic : bucket_topics.topics) {
+ if (topic.second.s3_id.empty()) {
+ // not an s3 notification
+ continue;
+ }
+ notifications.list.emplace_back(topic.second);
+ }
+}
+
+RGWOp* RGWHandler_REST_PSNotifs_S3::op_get() {
+ return new RGWPSListNotifs_ObjStore_S3();
+}
+
+RGWOp* RGWHandler_REST_PSNotifs_S3::op_put() {
+ return new RGWPSCreateNotif_ObjStore_S3();
+}
+
+RGWOp* RGWHandler_REST_PSNotifs_S3::op_delete() {
+ return new RGWPSDeleteNotif_ObjStore_S3();
+}
+
+RGWOp* RGWHandler_REST_PSNotifs_S3::create_get_op() {
+ return new RGWPSListNotifs_ObjStore_S3();
+}
+
+RGWOp* RGWHandler_REST_PSNotifs_S3::create_put_op() {
+ return new RGWPSCreateNotif_ObjStore_S3();
+}
+
+RGWOp* RGWHandler_REST_PSNotifs_S3::create_delete_op() {
+ return new RGWPSDeleteNotif_ObjStore_S3();
+}
+
--- /dev/null
+// -*- mode:C++; tab-width:8; c-basic-offset:2; indent-tabs-mode:t -*-
+// vim: ts=8 sw=2 smarttab
+#pragma once
+
+#include "rgw_rest_s3.h"
+
+// s3 compliant notification handler factory
+class RGWHandler_REST_PSNotifs_S3 : public RGWHandler_REST_S3 {
+protected:
+ int init_permissions(RGWOp* op) override {return 0;}
+ int read_permissions(RGWOp* op) override {return 0;}
+ bool supports_quota() override {return false;}
+ RGWOp* op_get() override;
+ RGWOp* op_put() override;
+ RGWOp* op_delete() override;
+public:
+ using RGWHandler_REST_S3::RGWHandler_REST_S3;
+ virtual ~RGWHandler_REST_PSNotifs_S3() = default;
+ // following are used to generate the operations when invoked by another REST handler
+ static RGWOp* create_get_op();
+ static RGWOp* create_put_op();
+ static RGWOp* create_delete_op();
+};
+
+// AWS compliant topics handler factory
+class RGWHandler_REST_PSTopic_AWS : public RGWHandler_REST {
+ const rgw::auth::StrategyRegistry& auth_registry;
+ const std::string& post_body;
+ void rgw_topic_parse_input();
+ //static int init_from_header(struct req_state *s, int default_formatter, bool configurable_format);
+protected:
+ RGWOp* op_post() override;
+public:
+ RGWHandler_REST_PSTopic_AWS(const rgw::auth::StrategyRegistry& _auth_registry, const std::string& _post_body) :
+ auth_registry(_auth_registry),
+ post_body(_post_body) {}
+ virtual ~RGWHandler_REST_PSTopic_AWS() = default;
+ int postauth_init() override { return 0; }
+ int authorize(const DoutPrefixProvider* dpp) override;
+};
+
--- /dev/null
+// -*- mode:C++; tab-width:8; c-basic-offset:2; indent-tabs-mode:t -*-
+// vim: ts=8 sw=2 smarttab
+
+#include "rgw_rest_pubsub_common.h"
+#include "common/dout.h"
+
+#define dout_context g_ceph_context
+#define dout_subsys ceph_subsys_rgw
+
+void RGWPSCreateTopicOp::execute() {
+ op_ret = get_params();
+ if (op_ret < 0) {
+ return;
+ }
+
+ ups.emplace(store, s->owner.get_id());
+ op_ret = ups->create_topic(topic_name, dest, topic_arn);
+ if (op_ret < 0) {
+ ldout(s->cct, 1) << "failed to create topic '" << topic_name << "', ret=" << op_ret << dendl;
+ return;
+ }
+ ldout(s->cct, 20) << "successfully created topic '" << topic_name << "'" << dendl;
+}
+
+void RGWPSListTopicsOp::execute() {
+ ups.emplace(store, s->owner.get_id());
+ op_ret = ups->get_user_topics(&result);
+ if (op_ret < 0) {
+ ldout(s->cct, 1) << "failed to get topics, ret=" << op_ret << dendl;
+ return;
+ }
+ ldout(s->cct, 20) << "successfully got topics" << dendl;
+}
+
+void RGWPSGetTopicOp::execute() {
+ op_ret = get_params();
+ if (op_ret < 0) {
+ return;
+ }
+ ups.emplace(store, s->owner.get_id());
+ op_ret = ups->get_topic(topic_name, &result);
+ if (op_ret < 0) {
+ ldout(s->cct, 1) << "failed to get topic '" << topic_name << "', ret=" << op_ret << dendl;
+ return;
+ }
+ ldout(s->cct, 1) << "successfully got topic '" << topic_name << "'" << dendl;
+}
+
+void RGWPSDeleteTopicOp::execute() {
+ op_ret = get_params();
+ if (op_ret < 0) {
+ return;
+ }
+ ups.emplace(store, s->owner.get_id());
+ op_ret = ups->remove_topic(topic_name);
+ if (op_ret < 0) {
+ ldout(s->cct, 1) << "failed to remove topic '" << topic_name << ", ret=" << op_ret << dendl;
+ return;
+ }
+ ldout(s->cct, 1) << "successfully removed topic '" << topic_name << "'" << dendl;
+}
+
+void RGWPSCreateSubOp::execute() {
+ op_ret = get_params();
+ if (op_ret < 0) {
+ return;
+ }
+ ups.emplace(store, s->owner.get_id());
+ auto sub = ups->get_sub(sub_name);
+ op_ret = sub->subscribe(topic_name, dest);
+ if (op_ret < 0) {
+ ldout(s->cct, 1) << "failed to create subscription '" << sub_name << "', ret=" << op_ret << dendl;
+ return;
+ }
+ ldout(s->cct, 20) << "successfully created subscription '" << sub_name << "'" << dendl;
+}
+
+void RGWPSGetSubOp::execute() {
+ op_ret = get_params();
+ if (op_ret < 0) {
+ return;
+ }
+ ups.emplace(store, s->owner.get_id());
+ auto sub = ups->get_sub(sub_name);
+ op_ret = sub->get_conf(&result);
+ if (op_ret < 0) {
+ ldout(s->cct, 1) << "failed to get subscription '" << sub_name << "', ret=" << op_ret << dendl;
+ return;
+ }
+ ldout(s->cct, 20) << "successfully got subscription '" << sub_name << "'" << dendl;
+}
+
+void RGWPSDeleteSubOp::execute() {
+ op_ret = get_params();
+ if (op_ret < 0) {
+ return;
+ }
+ ups.emplace(store, s->owner.get_id());
+ auto sub = ups->get_sub(sub_name);
+ op_ret = sub->unsubscribe(topic_name);
+ if (op_ret < 0) {
+ ldout(s->cct, 1) << "failed to remove subscription '" << sub_name << "', ret=" << op_ret << dendl;
+ return;
+ }
+ ldout(s->cct, 20) << "successfully removed subscription '" << sub_name << "'" << dendl;
+}
+
+void RGWPSAckSubEventOp::execute() {
+ op_ret = get_params();
+ if (op_ret < 0) {
+ return;
+ }
+ ups.emplace(store, s->owner.get_id());
+ auto sub = ups->get_sub_with_events(sub_name);
+ op_ret = sub->remove_event(event_id);
+ if (op_ret < 0) {
+ ldout(s->cct, 1) << "failed to ack event on subscription '" << sub_name << "', ret=" << op_ret << dendl;
+ return;
+ }
+ ldout(s->cct, 20) << "successfully acked event on subscription '" << sub_name << "'" << dendl;
+}
+
+void RGWPSPullSubEventsOp::execute() {
+ op_ret = get_params();
+ if (op_ret < 0) {
+ return;
+ }
+ ups.emplace(store, s->owner.get_id());
+ sub = ups->get_sub_with_events(sub_name);
+ if (!sub) {
+ op_ret = -ENOENT;
+ ldout(s->cct, 1) << "failed to get subscription '" << sub_name << "' for events, ret=" << op_ret << dendl;
+ return;
+ }
+ op_ret = sub->list_events(marker, max_entries);
+ if (op_ret < 0) {
+ ldout(s->cct, 1) << "failed to get events from subscription '" << sub_name << "', ret=" << op_ret << dendl;
+ return;
+ }
+ ldout(s->cct, 20) << "successfully got events from subscription '" << sub_name << "'" << dendl;
+}
+
+
+int RGWPSCreateNotifOp::verify_permission() {
+ int ret = get_params();
+ if (ret < 0) {
+ return ret;
+ }
+
+ const auto& id = s->owner.get_id();
+
+ ret = store->get_bucket_info(*s->sysobj_ctx, id.tenant, bucket_name,
+ bucket_info, nullptr, nullptr);
+ if (ret < 0) {
+ ldout(s->cct, 1) << "failed to get bucket info, cannot verify ownership" << dendl;
+ return ret;
+ }
+
+ if (bucket_info.owner != id) {
+ ldout(s->cct, 1) << "user doesn't own bucket, not allowed to create notification" << dendl;
+ return -EPERM;
+ }
+ return 0;
+}
+
+int RGWPSDeleteNotifOp::verify_permission() {
+ int ret = get_params();
+ if (ret < 0) {
+ return ret;
+ }
+
+ ret = store->get_bucket_info(*s->sysobj_ctx, s->owner.get_id().tenant, bucket_name,
+ bucket_info, nullptr, nullptr);
+ if (ret < 0) {
+ return ret;
+ }
+
+ if (bucket_info.owner != s->owner.get_id()) {
+ ldout(s->cct, 1) << "user doesn't own bucket, cannot remove notification" << dendl;
+ return -EPERM;
+ }
+ return 0;
+}
+
+int RGWPSListNotifsOp::verify_permission() {
+ int ret = get_params();
+ if (ret < 0) {
+ return ret;
+ }
+
+ ret = store->get_bucket_info(*s->sysobj_ctx, s->owner.get_id().tenant, bucket_name,
+ bucket_info, nullptr, nullptr);
+ if (ret < 0) {
+ return ret;
+ }
+
+ if (bucket_info.owner != s->owner.get_id()) {
+ ldout(s->cct, 1) << "user doesn't own bucket, cannot get topic list" << dendl;
+ return -EPERM;
+ }
+
+ return 0;
+}
+
--- /dev/null
+// -*- mode:C++; tab-width:8; c-basic-offset:2; indent-tabs-mode:t -*-
+// vim: ts=8 sw=2 smarttab
+#pragma once
+#include <string>
+#include <optional>
+#include "rgw_op.h"
+#include "rgw_pubsub.h"
+
+// create a topic
+class RGWPSCreateTopicOp : public RGWDefaultResponseOp {
+protected:
+ std::optional<RGWUserPubSub> ups;
+ std::string topic_name;
+ rgw_pubsub_sub_dest dest;
+ std::string topic_arn;
+
+ virtual int get_params() = 0;
+
+public:
+ int verify_permission() override {
+ return 0;
+ }
+ void pre_exec() override {
+ rgw_bucket_object_pre_exec(s);
+ }
+ void execute() override;
+
+ const char* name() const override { return "pubsub_topic_create"; }
+ RGWOpType get_type() override { return RGW_OP_PUBSUB_TOPIC_CREATE; }
+ uint32_t op_mask() override { return RGW_OP_TYPE_WRITE; }
+};
+
+// list all topics
+class RGWPSListTopicsOp : public RGWOp {
+protected:
+ std::optional<RGWUserPubSub> ups;
+ rgw_pubsub_user_topics result;
+
+public:
+ int verify_permission() override {
+ return 0;
+ }
+ void pre_exec() override {
+ rgw_bucket_object_pre_exec(s);
+ }
+ void execute() override;
+
+ const char* name() const override { return "pubsub_topics_list"; }
+ RGWOpType get_type() override { return RGW_OP_PUBSUB_TOPICS_LIST; }
+ uint32_t op_mask() override { return RGW_OP_TYPE_READ; }
+};
+
+// get topic information
+class RGWPSGetTopicOp : public RGWOp {
+protected:
+ std::string topic_name;
+ std::optional<RGWUserPubSub> ups;
+ rgw_pubsub_topic_subs result;
+
+ virtual int get_params() = 0;
+
+public:
+ int verify_permission() override {
+ return 0;
+ }
+ void pre_exec() override {
+ rgw_bucket_object_pre_exec(s);
+ }
+ void execute() override;
+
+ const char* name() const override { return "pubsub_topic_get"; }
+ RGWOpType get_type() override { return RGW_OP_PUBSUB_TOPIC_GET; }
+ uint32_t op_mask() override { return RGW_OP_TYPE_READ; }
+};
+
+// delete a topic
+class RGWPSDeleteTopicOp : public RGWDefaultResponseOp {
+protected:
+ string topic_name;
+ std::optional<RGWUserPubSub> ups;
+
+ virtual int get_params() = 0;
+
+public:
+ int verify_permission() override {
+ return 0;
+ }
+ void pre_exec() override {
+ rgw_bucket_object_pre_exec(s);
+ }
+ void execute() override;
+
+ const char* name() const override { return "pubsub_topic_delete"; }
+ RGWOpType get_type() override { return RGW_OP_PUBSUB_TOPIC_DELETE; }
+ uint32_t op_mask() override { return RGW_OP_TYPE_DELETE; }
+};
+
+// create a subscription
+class RGWPSCreateSubOp : public RGWDefaultResponseOp {
+protected:
+ std::string sub_name;
+ std::string topic_name;
+ std::optional<RGWUserPubSub> ups;
+ rgw_pubsub_sub_dest dest;
+
+ virtual int get_params() = 0;
+
+public:
+ int verify_permission() override {
+ return 0;
+ }
+ void pre_exec() override {
+ rgw_bucket_object_pre_exec(s);
+ }
+ void execute() override;
+
+ const char* name() const override { return "pubsub_subscription_create"; }
+ RGWOpType get_type() override { return RGW_OP_PUBSUB_SUB_CREATE; }
+ uint32_t op_mask() override { return RGW_OP_TYPE_WRITE; }
+};
+
+// get subscription information (including the push-endpoint, if one exists)
+class RGWPSGetSubOp : public RGWOp {
+protected:
+ std::string sub_name;
+ std::optional<RGWUserPubSub> ups;
+ rgw_pubsub_sub_config result;
+
+ virtual int get_params() = 0;
+
+public:
+ int verify_permission() override {
+ return 0;
+ }
+ void pre_exec() override {
+ rgw_bucket_object_pre_exec(s);
+ }
+ void execute() override;
+
+ const char* name() const override { return "pubsub_subscription_get"; }
+ RGWOpType get_type() override { return RGW_OP_PUBSUB_SUB_GET; }
+ uint32_t op_mask() override { return RGW_OP_TYPE_READ; }
+};
+
+// delete subscription
+class RGWPSDeleteSubOp : public RGWDefaultResponseOp {
+protected:
+ std::string sub_name;
+ std::string topic_name;
+ std::optional<RGWUserPubSub> ups;
+
+ virtual int get_params() = 0;
+
+public:
+ int verify_permission() override {
+ return 0;
+ }
+ void pre_exec() override {
+ rgw_bucket_object_pre_exec(s);
+ }
+ void execute() override;
+
+ const char* name() const override { return "pubsub_subscription_delete"; }
+ RGWOpType get_type() override { return RGW_OP_PUBSUB_SUB_DELETE; }
+ uint32_t op_mask() override { return RGW_OP_TYPE_DELETE; }
+};
+
+// acking of an event
+class RGWPSAckSubEventOp : public RGWDefaultResponseOp {
+protected:
+ std::string sub_name;
+ std::string event_id;
+ std::optional<RGWUserPubSub> ups;
+
+ virtual int get_params() = 0;
+
+public:
+ RGWPSAckSubEventOp() {}
+
+ int verify_permission() override {
+ return 0;
+ }
+ void pre_exec() override {
+ rgw_bucket_object_pre_exec(s);
+ }
+ void execute() override;
+
+ const char* name() const override { return "pubsub_subscription_ack"; }
+ RGWOpType get_type() override { return RGW_OP_PUBSUB_SUB_ACK; }
+ uint32_t op_mask() override { return RGW_OP_TYPE_WRITE; }
+};
+
+// fetching events from a subscription
+// depending on whether the subscription was created via the S3-compliant API or not,
+// the matching events will be returned
+class RGWPSPullSubEventsOp : public RGWOp {
+protected:
+ int max_entries{0};
+ std::string sub_name;
+ std::string marker;
+ std::optional<RGWUserPubSub> ups;
+ RGWUserPubSub::SubRef sub;
+
+ virtual int get_params() = 0;
+
+public:
+ RGWPSPullSubEventsOp() {}
+
+ int verify_permission() override {
+ return 0;
+ }
+ void pre_exec() override {
+ rgw_bucket_object_pre_exec(s);
+ }
+ void execute() override;
+
+ const char* name() const override { return "pubsub_subscription_pull"; }
+ RGWOpType get_type() override { return RGW_OP_PUBSUB_SUB_PULL; }
+ uint32_t op_mask() override { return RGW_OP_TYPE_READ; }
+};
+
+// notification creation
+class RGWPSCreateNotifOp : public RGWDefaultResponseOp {
+protected:
+ std::optional<RGWUserPubSub> ups;
+ string bucket_name;
+ RGWBucketInfo bucket_info;
+
+ virtual int get_params() = 0;
+
+public:
+ int verify_permission() override;
+
+ void pre_exec() override {
+ rgw_bucket_object_pre_exec(s);
+ }
+
+ RGWOpType get_type() override { return RGW_OP_PUBSUB_NOTIF_CREATE; }
+ uint32_t op_mask() override { return RGW_OP_TYPE_WRITE; }
+};
+
+// delete a notification
+class RGWPSDeleteNotifOp : public RGWDefaultResponseOp {
+protected:
+ std::optional<RGWUserPubSub> ups;
+ std::string bucket_name;
+ RGWBucketInfo bucket_info;
+
+ virtual int get_params() = 0;
+
+public:
+ int verify_permission() override;
+
+ void pre_exec() override {
+ rgw_bucket_object_pre_exec(s);
+ }
+
+ RGWOpType get_type() override { return RGW_OP_PUBSUB_NOTIF_DELETE; }
+ uint32_t op_mask() override { return RGW_OP_TYPE_DELETE; }
+};
+
+// get topics/notifications on a bucket
+class RGWPSListNotifsOp : public RGWOp {
+protected:
+ std::string bucket_name;
+ RGWBucketInfo bucket_info;
+ std::optional<RGWUserPubSub> ups;
+
+ virtual int get_params() = 0;
+
+public:
+ int verify_permission() override;
+
+ void pre_exec() override {
+ rgw_bucket_object_pre_exec(s);
+ }
+
+ RGWOpType get_type() override { return RGW_OP_PUBSUB_NOTIF_LIST; }
+ uint32_t op_mask() override { return RGW_OP_TYPE_READ; }
+};
+
uint64_t op = get_op();
if (!verify_user_permission(this,
s,
- rgw::IAM::ARN(resource_name,
+ rgw::ARN(resource_name,
"role",
s->user->user_id.tenant, true),
op)) {
string resource_name = role_path + role_name;
if (!verify_user_permission(this,
s,
- rgw::IAM::ARN(resource_name,
+ rgw::ARN(resource_name,
"role",
s->user->user_id.tenant, true),
get_op())) {
string resource_name = role.get_path() + role.get_name();
if (!verify_user_permission(this,
s,
- rgw::IAM::ARN(resource_name,
+ rgw::ARN(resource_name,
"role",
s->user->user_id.tenant, true),
get_op())) {
if (!verify_user_permission(this,
s,
- rgw::IAM::ARN(),
+ rgw::ARN(),
get_op())) {
return -EACCES;
}
#include "rgw_rest.h"
#include "rgw_rest_s3.h"
#include "rgw_rest_s3website.h"
+#include "rgw_rest_pubsub.h"
#include "rgw_auth_s3.h"
#include "rgw_acl.h"
#include "rgw_policy_s3.h"
RGWOp *RGWHandler_REST_Service_S3::op_post()
{
+ const auto max_size = s->cct->_conf->rgw_max_put_param_size;
+
+ int ret;
+ bufferlist data;
+ std::tie(ret, data) = rgw_rest_read_all_input(s, max_size, false);
+ if (ret < 0) {
+ return nullptr;
+ }
+
+ const auto post_body = data.to_str();
+
if (s->info.args.exists("Action")) {
string action = s->info.args.get("Action");
if (action.compare("CreateRole") == 0)
if (action.compare("DeleteUserPolicy") == 0)
return new RGWDeleteUserPolicy;
}
- if (this->isSTSenabled) {
- RGWHandler_REST_STS sts_handler(auth_registry);
+
+ if (isSTSenabled) {
+ RGWHandler_REST_STS sts_handler(auth_registry, post_body);
sts_handler.init(store, s, s->cio);
- return sts_handler.get_op(store);
+ auto op = sts_handler.get_op(store);
+ if (op) {
+ return op;
+ }
}
+ if (isPSenabled) {
+ RGWHandler_REST_PSTopic_AWS topic_handler(auth_registry, post_body);
+ topic_handler.init(store, s, s->cio);
+ auto op = topic_handler.get_op(store);
+ if (op) {
+ return op;
+ }
+ }
+
return NULL;
}
return new RGWGetLC_ObjStore_S3;
} else if(is_policy_op()) {
return new RGWGetBucketPolicy;
+ } else if (is_notification_op()) {
+ return RGWHandler_REST_PSNotifs_S3::create_get_op();
}
return get_obj_op(true);
}
return new RGWPutLC_ObjStore_S3;
} else if(is_policy_op()) {
return new RGWPutBucketPolicy;
+ } else if (is_notification_op()) {
+ return RGWHandler_REST_PSNotifs_S3::create_put_op();
}
return new RGWCreateBucket_ObjStore_S3;
}
return new RGWDeleteLC_ObjStore_S3;
} else if(is_policy_op()) {
return new RGWDeleteBucketPolicy;
+ } else if (is_notification_op()) {
+ return RGWHandler_REST_PSNotifs_S3::create_delete_op();
}
if (s->info.args.sub_resource_exists("website")) {
}
} else {
if (s->init_state.url_bucket.empty()) {
- handler = new RGWHandler_REST_Service_S3(auth_registry, enable_sts);
+ handler = new RGWHandler_REST_Service_S3(auth_registry, enable_sts, enable_pubsub);
} else if (s->object.empty()) {
- handler = new RGWHandler_REST_Bucket_S3(auth_registry);
+ handler = new RGWHandler_REST_Bucket_S3(auth_registry, enable_pubsub);
} else {
handler = new RGWHandler_REST_Obj_S3(auth_registry);
}
case RGW_OP_PUT_OBJ_TAGGING:
case RGW_OP_PUT_LC:
case RGW_OP_SET_REQUEST_PAYMENT:
+ case RGW_OP_PUBSUB_NOTIF_CREATE:
break;
default:
dout(10) << "ERROR: AWS4 completion for this operation NOT IMPLEMENTED" << dendl;
class RGWHandler_REST_Service_S3 : public RGWHandler_REST_S3 {
protected:
- bool isSTSenabled;
+ const bool isSTSenabled;
+ const bool isPSenabled;
bool is_usage_op() {
return s->info.args.exists("usage");
}
RGWOp *op_post() override;
public:
RGWHandler_REST_Service_S3(const rgw::auth::StrategyRegistry& auth_registry,
- bool isSTSenabled) :
- RGWHandler_REST_S3(auth_registry), isSTSenabled(isSTSenabled) {}
+ bool _isSTSenabled, bool _isPSenabled) :
+ RGWHandler_REST_S3(auth_registry), isSTSenabled(_isSTSenabled), isPSenabled(_isPSenabled) {}
~RGWHandler_REST_Service_S3() override = default;
};
class RGWHandler_REST_Bucket_S3 : public RGWHandler_REST_S3 {
+ const bool enable_pubsub;
protected:
bool is_acl_op() {
return s->info.args.exists("acl");
bool is_policy_op() {
return s->info.args.exists("policy");
}
+ bool is_notification_op() const {
+ if (enable_pubsub) {
+ return s->info.args.exists("notification");
+ }
+ return false;
+ }
RGWOp *get_obj_op(bool get_data);
RGWOp *op_get() override;
RGWOp *op_post() override;
RGWOp *op_options() override;
public:
- using RGWHandler_REST_S3::RGWHandler_REST_S3;
+ RGWHandler_REST_Bucket_S3(const rgw::auth::StrategyRegistry& auth_registry, bool _enable_pubsub) :
+ RGWHandler_REST_S3(auth_registry), enable_pubsub(_enable_pubsub) {}
~RGWHandler_REST_Bucket_S3() override = default;
};
private:
bool enable_s3website;
bool enable_sts;
+ const bool enable_pubsub;
public:
- explicit RGWRESTMgr_S3(bool enable_s3website = false, bool enable_sts = false)
+ explicit RGWRESTMgr_S3(bool enable_s3website = false, bool enable_sts = false, bool _enable_pubsub = false)
: enable_s3website(enable_s3website),
- enable_sts(enable_sts) {
+ enable_sts(enable_sts),
+ enable_pubsub(_enable_pubsub) {
}
~RGWRESTMgr_S3() override = default;
int RGWSTSGetSessionToken::verify_permission()
{
- rgw::IAM::Partition partition = rgw::IAM::Partition::aws;
- rgw::IAM::Service service = rgw::IAM::Service::s3;
+ rgw::Partition partition = rgw::Partition::aws;
+ rgw::Service service = rgw::Service::s3;
if (!verify_user_permission(this,
s,
- rgw::IAM::ARN(partition, service, "", s->user->user_id.tenant, ""),
+ rgw::ARN(partition, service, "", s->user->user_id.tenant, ""),
rgw::IAM::stsGetSessionToken)) {
return -EACCES;
}
void RGWHandler_REST_STS::rgw_sts_parse_input()
{
- const auto max_size = s->cct->_conf->rgw_max_put_param_size;
-
- int ret = 0;
- bufferlist data;
- std::tie(ret, data) = rgw_rest_read_all_input(s, max_size, false);
- string post_body = data.to_str();
- if (data.length() > 0) {
+ if (post_body.size() > 0) {
ldout(s->cct, 10) << "Content of POST: " << post_body << dendl;
if (post_body.find("Action") != string::npos) {
class RGWHandler_REST_STS : public RGWHandler_REST {
const rgw::auth::StrategyRegistry& auth_registry;
+ const string& post_body;
RGWOp *op_post() override;
void rgw_sts_parse_input();
public:
static int init_from_header(struct req_state *s, int default_formatter, bool configurable_format);
- RGWHandler_REST_STS(const rgw::auth::StrategyRegistry& auth_registry)
+ RGWHandler_REST_STS(const rgw::auth::StrategyRegistry& auth_registry, const string& post_body="")
: RGWHandler_REST(),
- auth_registry(auth_registry) {}
+ auth_registry(auth_registry),
+ post_body(post_body) {}
~RGWHandler_REST_STS() override = default;
int init(RGWRados *store,
uint64_t op = get_op();
string user_name = s->info.args.get("UserName");
rgw_user user_id(user_name);
- if (! verify_user_permission(this, s, rgw::IAM::ARN(rgw::IAM::ARN(user_id.id,
+ if (! verify_user_permission(this, s, rgw::ARN(rgw::ARN(user_id.id,
"user",
user_id.tenant)), op)) {
return -EACCES;
int AssumedRoleUser::generateAssumedRoleUser(CephContext* cct,
RGWRados *store,
const string& roleId,
- const rgw::IAM::ARN& roleArn,
+ const rgw::ARN& roleArn,
const string& roleSessionName)
{
string resource = std::move(roleArn.resource);
resource.append("/");
resource.append(roleSessionName);
- rgw::IAM::ARN assumed_role_arn(rgw::IAM::Partition::aws,
- rgw::IAM::Service::sts,
+ rgw::ARN assumed_role_arn(rgw::Partition::aws,
+ rgw::Service::sts,
"", roleArn.account, resource);
arn = assumed_role_arn.to_string();
std::tuple<int, RGWRole> STSService::getRoleInfo(const string& arn)
{
- if (auto r_arn = rgw::IAM::ARN::parse(arn); r_arn) {
+ if (auto r_arn = rgw::ARN::parse(arn); r_arn) {
auto pos = r_arn->resource.find_last_of('/');
string roleName = r_arn->resource.substr(pos + 1);
RGWRole role(cct, store, roleName, r_arn->account);
response.sub = req.getSub();
//Get the role info which is being assumed
- boost::optional<rgw::IAM::ARN> r_arn = rgw::IAM::ARN::parse(req.getRoleARN());
+ boost::optional<rgw::ARN> r_arn = rgw::ARN::parse(req.getRoleARN());
if (r_arn == boost::none) {
response.assumeRoleResp.retCode = -EINVAL;
return response;
response.packedPolicySize = 0;
//Get the role info which is being assumed
- boost::optional<rgw::IAM::ARN> r_arn = rgw::IAM::ARN::parse(req.getRoleARN());
+ boost::optional<rgw::ARN> r_arn = rgw::ARN::parse(req.getRoleARN());
if (r_arn == boost::none) {
response.retCode = -EINVAL;
return response;
int generateAssumedRoleUser( CephContext* cct,
RGWRados *store,
const string& roleId,
- const rgw::IAM::ARN& roleArn,
+ const rgw::ARN& roleArn,
const string& roleSessionName);
const string& getARN() const { return arn; }
const string& getAssumeRoleId() const { return assumeRoleId; }
}
virtual RGWMetadataHandler *alloc_bucket_meta_handler();
virtual RGWMetadataHandler *alloc_bucket_instance_meta_handler();
+
+ // indication whether the sync module starts with full sync (default behavior)
+ // incremental sync would follow anyway
+ virtual bool should_full_sync() const {
+ return true;
+ }
};
typedef std::shared_ptr<RGWSyncModuleInstance> RGWSyncModuleInstanceRef;
#include "rgw_op.h"
#include "rgw_pubsub.h"
#include "rgw_pubsub_push.h"
+#include "rgw_notify_event_type.h"
#include "rgw_perf_counters.h"
+#ifdef WITH_RADOSGW_AMQP_ENDPOINT
+#include "rgw_amqp.h"
+#endif
+#include <boost/algorithm/hex.hpp>
#include <boost/asio/yield.hpp>
#define dout_subsys ceph_subsys_rgw
"uid": <uid>, # default: "pubsub"
"data_bucket_prefix": <prefix> # default: "pubsub-"
"data_oid_prefix": <prefix> #
-
- "events_retention_days": <days> # default: 7
-
- # non-dynamic config
- "notifications": [
- {
- "path": <notification-path>, # this can be either an explicit path: <bucket>, or <bucket>/<object>,
- # or a prefix if it ends with a wildcard
- "topic": <topic-name>
- },
- ...
- ],
- "subscriptions": [
- {
- "name": <subscription-name>,
- "topic": <topic>,
- "push_endpoint": <endpoint>,
- "args:" <arg list>. # any push endpoint specific args (include all args)
- "data_bucket": <bucket>, # override name of bucket where subscription data will be store
- "data_oid_prefix": <prefix> # set prefix for subscription data object ids
- },
- ...
- ]
-}
-
-*/
-
-/*
-
-config:
-
-{
- "tenant": <tenant>, # default: <empty>
- "uid": <uid>, # default: "pubsub"
- "data_bucket_prefix": <prefix> # default: "pubsub-"
- "data_oid_prefix": <prefix> #
+ "events_retention_days": <int> # default: 7
+ "start_with_full_sync" <bool> # default: false
# non-dynamic config
"notifications": [
"name": <subscription-name>,
"topic": <topic>,
"push_endpoint": <endpoint>,
- "args:" <arg list>. # any push endpoint specific args (include all args)
+ "push_endpoint_args:" <arg list>. # any push endpoint specific args (include all args)
"data_bucket": <bucket>, # override name of bucket where subscription data will be store
"data_oid_prefix": <prefix> # set prefix for subscription data object ids
+ "s3_id": <id> # in case of S3 compatible notifications, the notification ID will be set here
},
...
]
return args;
}
-struct PSSubConfig { /* subscription config */
- string name;
- string topic;
- string push_endpoint_name;
- string push_endpoint_args;
+struct PSSubConfig {
+ std::string name;
+ std::string topic;
+ std::string push_endpoint_name;
+ std::string push_endpoint_args;
+ std::string data_bucket_name;
+ std::string data_oid_prefix;
+ std::string s3_id;
+ std::string arn_topic;
RGWPubSubEndpoint::Ptr push_endpoint;
- string data_bucket_name;
- string data_oid_prefix;
-
void from_user_conf(CephContext *cct, const rgw_pubsub_sub_config& uc) {
name = uc.name;
topic = uc.topic;
push_endpoint_name = uc.dest.push_endpoint;
data_bucket_name = uc.dest.bucket_name;
data_oid_prefix = uc.dest.oid_prefix;
- if (push_endpoint_name != "") {
+ s3_id = uc.s3_id;
+ arn_topic = uc.dest.arn_topic;
+ if (!push_endpoint_name.empty()) {
push_endpoint_args = uc.dest.push_endpoint_args;
try {
- push_endpoint = RGWPubSubEndpoint::create(push_endpoint_name, topic, string_to_args(push_endpoint_args));
+ push_endpoint = RGWPubSubEndpoint::create(push_endpoint_name, arn_topic, string_to_args(push_endpoint_args), cct);
+ ldout(cct, 20) << "push endpoint created: " << push_endpoint->to_str() << dendl;
} catch (const RGWPubSubEndpoint::configuration_error& e) {
- ldout(cct, 0) << "ERROR: failed to create push endpoint: "
+ ldout(cct, 1) << "ERROR: failed to create push endpoint: "
<< push_endpoint_name << " due to: " << e.what() << dendl;
}
}
encode_json("name", name, f);
encode_json("topic", topic, f);
encode_json("push_endpoint", push_endpoint_name, f);
- encode_json("args", push_endpoint_args, f);
+ encode_json("push_endpoint_args", push_endpoint_args, f);
encode_json("data_bucket_name", data_bucket_name, f);
encode_json("data_oid_prefix", data_oid_prefix, f);
+ encode_json("s3_id", s3_id, f);
}
void init(CephContext *cct, const JSONFormattable& config,
string default_bucket_name = data_bucket_prefix + name;
data_bucket_name = config["data_bucket"](default_bucket_name.c_str());
data_oid_prefix = config["data_oid_prefix"](default_oid_prefix.c_str());
+ s3_id = config["s3_id"];
+ arn_topic = config["arn_topic"];
if (!push_endpoint_name.empty()) {
push_endpoint_args = config["push_endpoint_args"];
try {
- push_endpoint = RGWPubSubEndpoint::create(push_endpoint_name, topic, string_to_args(push_endpoint_args));
+ push_endpoint = RGWPubSubEndpoint::create(push_endpoint_name, arn_topic, string_to_args(push_endpoint_args), cct);
+ ldout(cct, 20) << "push endpoint created: " << push_endpoint->to_str() << dendl;
} catch (const RGWPubSubEndpoint::configuration_error& e) {
- ldout(cct, 0) << "ERROR: failed to create push endpoint: "
+ ldout(cct, 1) << "ERROR: failed to create push endpoint: "
<< push_endpoint_name << " due to: " << e.what() << dendl;
}
}
using PSSubConfigRef = std::shared_ptr<PSSubConfig>;
struct PSTopicConfig {
- string name;
- set<string> subs;
+ std::string name;
+ std::set<std::string> subs;
void dump(Formatter *f) const {
encode_json("name", name, f);
}
using PSTopicConfigRef = std::shared_ptr<PSTopicConfig>;
-using TopicsRef = std::shared_ptr<vector<PSTopicConfigRef>>;
-
+using TopicsRef = std::shared_ptr<std::vector<PSTopicConfigRef>>;
struct PSConfig {
string id{"pubsub"};
uint64_t max_id{0};
/* FIXME: no hard coded buckets, we'll have configurable topics */
- map<string, PSSubConfigRef> subs;
- map<string, PSTopicConfigRef> topics;
- multimap<string, PSNotificationConfig> notifications;
+ std::map<std::string, PSSubConfigRef> subs;
+ std::map<std::string, PSTopicConfigRef> topics;
+ std::multimap<std::string, PSNotificationConfig> notifications;
+
+ bool start_with_full_sync{false};
void dump(Formatter *f) const {
encode_json("id", id, f);
f->close_section();
}
}
+ encode_json("start_with_full_sync", start_with_full_sync, f);
}
void init(CephContext *cct, const JSONFormattable& config) {
iter->second->subs.insert(sc->name);
}
}
+ start_with_full_sync = config["start_with_full_sync"](false);
ldout(cct, 5) << "pubsub: module config (parsed representation):\n" << json_str("config", *this, true) << dendl;
}
continue;
}
- ldout(cct, 10) << ": found topic for path=" << bucket << "/" << key << ": id=" << target.id << " target_path=" << target.path << ", topic=" << target.topic << dendl;
+ ldout(cct, 20) << ": found topic for path=" << bucket << "/" << key << ": id=" << target.id <<
+ " target_path=" << target.path << ", topic=" << target.topic << dendl;
(*result)->push_back(topic->second);
} while (iter != notifications.begin());
}
}
};
-enum RGWPubSubEventType {
- UNKNOWN_EVENT = 0,
- OBJECT_CREATE = 1,
- OBJECT_DELETE = 2,
- DELETE_MARKER_CREATE = 3,
-};
-
-#define EVENT_NAME_OBJECT_CREATE "OBJECT_CREATE"
-#define EVENT_NAME_OBJECT_DELETE "OBJECT_DELETE"
-#define EVENT_NAME_OBJECT_DELETE_MARKER_CREATE "DELETE_MARKER_CREATE"
-#define EVENT_NAME_UNKNOWN "UNKNOWN_EVENT"
-
-static const char *get_event_name(const RGWPubSubEventType& val)
-{
- switch (val) {
- case OBJECT_CREATE:
- return EVENT_NAME_OBJECT_CREATE;
- case OBJECT_DELETE:
- return EVENT_NAME_OBJECT_DELETE;
- case DELETE_MARKER_CREATE:
- return EVENT_NAME_OBJECT_DELETE_MARKER_CREATE;
- default:
- return "EVENT_NAME_UNKNOWN";
- };
-}
-
using PSConfigRef = std::shared_ptr<PSConfig>;
-using EventRef = std::shared_ptr<rgw_pubsub_event>;
+template<typename EventType>
+using EventRef = std::shared_ptr<EventType>;
struct objstore_event {
string id;
}
};
+static void set_event_id(std::string& id, const std::string& hash, const utime_t& ts) {
+ char buf[64];
+ const auto len = snprintf(buf, sizeof(buf), "%010ld.%06ld.%s", (long)ts.sec(), (long)ts.usec(), hash.c_str());
+ if (len > 0) {
+ id.assign(buf, len);
+ }
+}
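// Illustrative example (not part of the patch): assuming a hash of "a1b2c3" and a
// timestamp of 1555000000 seconds and 42 microseconds, set_event_id() would produce
// the id "1555000000.000042.a1b2c3"; the zero-padded seconds.microseconds prefix
// keeps event ids ordered by creation time.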
+
static void make_event_ref(CephContext *cct, const rgw_bucket& bucket,
const rgw_obj_key& key,
const ceph::real_time& mtime,
const std::vector<std::pair<std::string, std::string> > *attrs,
- const string& event_name,
- EventRef *event) {
+ rgw::notify::EventType event_type,
+ EventRef<rgw_pubsub_event> *event) {
*event = std::make_shared<rgw_pubsub_event>();
- EventRef& e = *event;
- e->event = event_name;
+ EventRef<rgw_pubsub_event>& e = *event;
+ e->event_name = rgw::notify::to_ceph_string(event_type);
e->source = bucket.name + "/" + key.name;
e->timestamp = real_clock::now();
objstore_event oevent(bucket, key, mtime, attrs);
- string hash = oevent.get_hash();
- utime_t ts(e->timestamp);
- char buf[64];
- snprintf(buf, sizeof(buf), "%010ld.%06ld.%s", (long)ts.sec(), (long)ts.usec(), hash.c_str());
- e->id = buf;
+ const utime_t ts(e->timestamp);
+ set_event_id(e->id, oevent.get_hash(), ts);
encode_json("info", oevent, &e->info);
}
+static void make_s3_record_ref(CephContext *cct, const rgw_bucket& bucket,
+ const rgw_user& owner,
+ const rgw_obj_key& key,
+ const ceph::real_time& mtime,
+ const std::vector<std::pair<std::string, std::string> > *attrs,
+ rgw::notify::EventType event_type,
+ EventRef<rgw_pubsub_s3_record> *record) {
+ *record = std::make_shared<rgw_pubsub_s3_record>();
+
+ EventRef<rgw_pubsub_s3_record>& r = *record;
+ r->eventVersion = "2.1";
+ r->eventSource = "aws:s3";
+ r->eventTime = mtime;
+ r->eventName = rgw::notify::to_string(event_type);
+ r->userIdentity = ""; // user that triggered the change: not supported in sync module
+ r->sourceIPAddress = ""; // IP address of client that triggered the change: not supported in sync module
+ r->x_amz_request_id = ""; // request ID of the original change: not supported in sync module
+ r->x_amz_id_2 = ""; // RGW on which the change was made: not supported in sync module
+ r->s3SchemaVersion = "1.0";
+ // configurationId is filled from subscription configuration
+ r->bucket_name = bucket.name;
+ r->bucket_ownerIdentity = owner.to_str();
+ r->bucket_arn = to_string(rgw::ARN(bucket));
+ r->bucket_id = bucket.bucket_id; // rgw extension
+ r->object_key = key.name;
+ r->object_size = 0; // not supported in sync module
+ objstore_event oevent(bucket, key, mtime, attrs);
+ r->object_etag = oevent.get_hash();
+ r->object_versionId = key.instance;
+
+ // use the timestamp as a per-key sequence id (hex encoded)
+ const utime_t ts(real_clock::now());
+ boost::algorithm::hex((const char*)&ts, (const char*)&ts + sizeof(utime_t),
+ std::back_inserter(r->object_sequencer));
+
+ // the event ID is an rgw extension (not in the S3 spec), used for acking the event
+ // same format is used in both S3 compliant and Ceph specific events
+ set_event_id(r->id, r->object_etag, ts);
+}
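// Illustrative sketch (not part of the patch): for an object "photo.jpg" in bucket
// "mybucket" owned by user "foo", the record built above carries eventVersion "2.1",
// eventSource "aws:s3", the bucket ARN from rgw::ARN(bucket), bucket_ownerIdentity
// "foo", object_key "photo.jpg", and a hex-encoded object_sequencer; the exact JSON
// layout is defined by the dump() method of rgw_pubsub_s3_record and is not
// reproduced here.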
+
class PSManager;
using PSManagerRef = std::shared_ptr<PSManager>;
using PSEnvRef = std::shared_ptr<PSEnv>;
+template<typename EventType>
class PSEvent {
- const EventRef event;
+ const EventRef<EventType> event;
public:
- PSEvent(const EventRef& _event) : event(_event) {}
+ PSEvent(const EventRef<EventType>& _event) : event(_event) {}
void format(bufferlist *bl) const {
- bl->append(json_str("event", *event));
+ bl->append(json_str(EventType::json_type_single, *event));
}
void encode_event(bufferlist& bl) const {
RGWDataSyncEnv *sync_env;
PSEnvRef env;
PSSubConfigRef sub_conf;
- shared_ptr<rgw_get_bucket_info_result> get_bucket_info_result;
+ std::shared_ptr<rgw_get_bucket_info_result> get_bucket_info_result;
RGWBucketInfo *bucket_info{nullptr};
RGWDataAccessRef data_access;
RGWDataAccess::BucketRef bucket;
- struct push_endpoint_info {
- shared_ptr<RGWRESTConn> conn;
- string path;
- } push;
-
InitCR *init_cr{nullptr};
class InitBucketLifecycleCR : public RGWCoroutine {
InitBucketLifecycleCR(RGWDataSyncEnv *_sync_env,
PSConfigRef& _conf,
RGWBucketInfo& _bucket_info,
- map<string, bufferlist>& _bucket_attrs) : RGWCoroutine(_sync_env->cct),
+ std::map<string, bufferlist>& _bucket_attrs) : RGWCoroutine(_sync_env->cct),
sync_env(_sync_env),
conf(_conf) {
lc_config.bucket_info = _bucket_info;
}
};
+ template<typename EventType>
class StoreEventCR : public RGWCoroutine {
RGWDataSyncEnv* const sync_env;
const PSSubscriptionRef sub;
- const PSEvent pse;
+ const PSEvent<EventType> pse;
const string oid_prefix;
public:
StoreEventCR(RGWDataSyncEnv* const _sync_env,
const PSSubscriptionRef& _sub,
- const EventRef& _event) : RGWCoroutine(_sync_env->cct),
+ const EventRef<EventType>& _event) : RGWCoroutine(_sync_env->cct),
sync_env(_sync_env),
sub(_sub),
pse(_event),
}
int operate() override {
- // TODO: in case of "push-only" subscription no need to store event
rgw_object_simple_put_params put_obj;
reenter(this) {
sync_env->store,
put_obj));
if (retcode < 0) {
- ldout(sync_env->cct, 10) << "ERROR: failed to store event: " << put_obj.bucket << "/" << put_obj.key << " ret=" << retcode << dendl;
- if (perfcounter) perfcounter->inc(l_rgw_pubsub_store_fail);
+ ldpp_dout(sync_env->dpp, 10) << "failed to store event: " << put_obj.bucket << "/" << put_obj.key << " ret=" << retcode << dendl;
return set_cr_error(retcode);
} else {
- ldout(sync_env->cct, 20) << "event stored: " << put_obj.bucket << "/" << put_obj.key << dendl;
- if (perfcounter) perfcounter->inc(l_rgw_pubsub_store_ok);
+ ldpp_dout(sync_env->dpp, 20) << "event stored: " << put_obj.bucket << "/" << put_obj.key << dendl;
}
return set_cr_done();
}
};
+ template<typename EventType>
class PushEventCR : public RGWCoroutine {
RGWDataSyncEnv* const sync_env;
- const EventRef event;
+ const EventRef<EventType> event;
const PSSubConfigRef& sub_conf;
public:
PushEventCR(RGWDataSyncEnv* const _sync_env,
const PSSubscriptionRef& _sub,
- const EventRef& _event) : RGWCoroutine(_sync_env->cct),
+ const EventRef<EventType>& _event) : RGWCoroutine(_sync_env->cct),
sync_env(_sync_env),
event(_event),
sub_conf(_sub->sub_conf) {
yield call(sub_conf->push_endpoint->send_to_completion_async(*event.get(), sync_env));
if (retcode < 0) {
- ldout(sync_env->cct, 10) << "ERROR: failed to push event: " << event->id <<
+ ldout(sync_env->cct, 10) << "failed to push event: " << event->id <<
" to endpoint: " << sub_conf->push_endpoint_name << " ret=" << retcode << dendl;
- if (perfcounter) perfcounter->inc(l_rgw_pubsub_push_failed);
return set_cr_error(retcode);
}
- ldout(sync_env->cct, 10) << "event: " << event->id <<
+ ldout(sync_env->cct, 20) << "event: " << event->id <<
" pushed to endpoint: " << sub_conf->push_endpoint_name << dendl;
- if (perfcounter) perfcounter->inc(l_rgw_pubsub_push_ok);
return set_cr_done();
}
return 0;
return init_cr->execute(caller);
}
- static RGWCoroutine *store_event_cr(RGWDataSyncEnv* const sync_env, const PSSubscriptionRef& sub, const EventRef& event) {
- return new StoreEventCR(sync_env, sub, event);
+ template<typename EventType>
+ static RGWCoroutine *store_event_cr(RGWDataSyncEnv* const sync_env, const PSSubscriptionRef& sub, const EventRef<EventType>& event) {
+ return new StoreEventCR<EventType>(sync_env, sub, event);
}
- static RGWCoroutine *push_event_cr(RGWDataSyncEnv* const sync_env, const PSSubscriptionRef& sub, const EventRef& event) {
- return new PushEventCR(sync_env, sub, event);
+ template<typename EventType>
+ static RGWCoroutine *push_event_cr(RGWDataSyncEnv* const sync_env, const PSSubscriptionRef& sub, const EventRef<EventType>& event) {
+ return new PushEventCR<EventType>(sync_env, sub, event);
}
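  // Illustrative usage (not part of the patch): the same subscription machinery
  // handles both event flavors through the EventType template parameter, e.g.
  //   call(PSSubscription::store_event_cr(sync_env, sub, event));   // EventType = rgw_pubsub_event
  //   call(PSSubscription::push_event_cr(sync_env, sub, record));   // EventType = rgw_pubsub_s3_record
  // the template argument is deduced from the EventRef<EventType> argument.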
friend class InitCR;
};
-
class PSManager
{
RGWDataSyncEnv *sync_env;
PSEnvRef env;
- map<string, PSSubscriptionRef> subs;
+ std::map<string, PSSubscriptionRef> subs;
class GetSubCR : public RGWSingletonCR<PSSubscriptionRef> {
RGWDataSyncEnv *sync_env;
PSSubConfigRef sub_conf;
rgw_pubsub_sub_config user_sub_conf;
+
public:
GetSubCR(RGWDataSyncEnv *_sync_env,
PSManagerRef& _mgr,
ref(_ref),
conf(mgr->env->conf) {
}
- ~GetSubCR() {
- }
+ ~GetSubCR() { }
int operate() override {
reenter(this) {
if (owner.empty()) {
if (!conf->find_sub(sub_name, &sub_conf)) {
- ldout(sync_env->cct, 10) << "ERROR: could not find subscription config: name=" << sub_name << dendl;
+ ldout(sync_env->cct, 10) << "failed to find subscription config: name=" << sub_name << dendl;
mgr->remove_get_sub(owner, sub_name);
return set_cr_error(-ENOENT);
}
yield (*ref)->call_init_cr(this);
if (retcode < 0) {
- ldout(sync_env->cct, 10) << "ERROR: failed to init subscription" << dendl;
+ ldout(sync_env->cct, 10) << "failed to init subscription" << dendl;
mgr->remove_get_sub(owner, sub_name);
return set_cr_error(retcode);
}
return owner_prefix + sub_name;
}
- map<string, GetSubCR *> get_subs;
+ std::map<std::string, GetSubCR *> get_subs;
GetSubCR *& get_get_subs(const rgw_user& owner, const string& name) {
return get_subs[sub_id(owner, name)];
}
};
+bool match(const rgw_pubsub_topic_filter& filter, const std::string& key_name, rgw::notify::EventType event_type) {
+ if (!match(filter.events, event_type)) {
+ return false;
+ }
+ if (!match(filter.s3_filter.key_filter, key_name)) {
+ return false;
+ }
+ return true;
+}
+
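// Illustrative example (not part of the patch): for a filter configured with event
// type ObjectCreated and a key prefix of "images/", match() would return true for
// ("images/cat.png", ObjectCreated) and false for ("docs/a.txt", ObjectCreated) or
// for any ObjectRemoved* event, assuming the key filter implements the usual
// prefix/suffix/regex semantics.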
class RGWPSFindBucketTopicsCR : public RGWCoroutine {
RGWDataSyncEnv *sync_env;
PSEnvRef env;
rgw_user owner;
rgw_bucket bucket;
rgw_obj_key key;
- string event_name;
+ rgw::notify::EventType event_type;
RGWUserPubSub ups;
const rgw_user& _owner,
const rgw_bucket& _bucket,
const rgw_obj_key& _key,
- const string& _event_name,
+ rgw::notify::EventType _event_type,
TopicsRef *_topics) : RGWCoroutine(_sync_env->cct),
sync_env(_sync_env),
env(_env),
owner(_owner),
bucket(_bucket),
key(_key),
- event_name(_event_name),
+ event_type(_event_type),
ups(_sync_env->store, owner),
topics(_topics) {
*topics = std::make_shared<vector<PSTopicConfigRef> >();
for (auto& titer : bucket_topics.topics) {
auto& topic_filter = titer.second;
auto& info = topic_filter.topic;
- if (!topic_filter.events.empty() &&
- topic_filter.events.find(event_name) == topic_filter.events.end()) {
+ if (!match(topic_filter, key.name, event_type)) {
continue;
}
- shared_ptr<PSTopicConfig> tc = std::make_shared<PSTopicConfig>();
+ std::shared_ptr<PSTopicConfig> tc = std::make_shared<PSTopicConfig>();
tc->name = info.name;
tc->subs = user_topics.topics[info.name].subs;
(*topics)->push_back(tc);
RGWDataSyncEnv* const sync_env;
const PSEnvRef env;
const rgw_user& owner;
- const EventRef event;
+ const EventRef<rgw_pubsub_event> event;
+ const EventRef<rgw_pubsub_s3_record> record;
const TopicsRef topics;
const std::array<rgw_user, 2> owners;
bool has_subscriptions;
bool sub_conf_found;
PSSubscriptionRef sub;
std::array<rgw_user, 2>::const_iterator oiter;
- vector<PSTopicConfigRef>::const_iterator titer;
- set<string>::const_iterator siter;
+ std::vector<PSTopicConfigRef>::const_iterator titer;
+ std::set<string>::const_iterator siter;
int last_sub_conf_error;
public:
RGWPSHandleObjEventCR(RGWDataSyncEnv* const _sync_env,
const PSEnvRef _env,
const rgw_user& _owner,
- const EventRef& _event,
+ const EventRef<rgw_pubsub_event>& _event,
+ const EventRef<rgw_pubsub_s3_record>& _record,
const TopicsRef& _topics) : RGWCoroutine(_sync_env->cct),
sync_env(_sync_env),
env(_env),
owner(_owner),
event(_event),
+ record(_record),
topics(_topics),
owners({owner, rgw_user{}}),
has_subscriptions(false),
int operate() override {
reenter(this) {
- ldout(sync_env->cct, 10) << ": handle event: obj: z=" << sync_env->source_zone
+ ldout(sync_env->cct, 20) << ": handle event: obj: z=" << sync_env->source_zone
<< " event=" << json_str("event", *event, false)
<< " owner=" << owner << dendl;
ldout(sync_env->cct, 20) << "pubsub: " << topics->size() << " topics found for path" << dendl;
-
- if (topics->empty()) {
- // if event has no topics - no further processing is needed
- return set_cr_done();
- }
+
+ // the outside caller should have already verified that the topics list is not empty
+ ceph_assert(!topics->empty());
if (perfcounter) perfcounter->inc(l_rgw_pubsub_event_triggered);
+ // loop over all topics related to the bucket/object
for (titer = topics->begin(); titer != topics->end(); ++titer) {
- ldout(sync_env->cct, 10) << ": notification for " << event->source << ": topic=" <<
+ ldout(sync_env->cct, 20) << ": notification for " << event->source << ": topic=" <<
(*titer)->name << ", has " << (*titer)->subs.size() << " subscriptions" << dendl;
-
+ // loop over all subscriptions of the topic
for (siter = (*titer)->subs.begin(); siter != (*titer)->subs.end(); ++siter) {
- ldout(sync_env->cct, 10) << ": subscription: " << *siter << dendl;
+ ldout(sync_env->cct, 20) << ": subscription: " << *siter << dendl;
has_subscriptions = true;
sub_conf_found = false;
+ // try to read subscription configuration from global/user conf
+ // configuration is considered missing only if it does not exist in either
for (oiter = owners.begin(); oiter != owners.end(); ++oiter) {
- /*
- * once for the global subscriptions, once for the user specific subscriptions
- */
yield PSManager::call_get_subscription_cr(sync_env, env->manager, this, *oiter, *siter, &sub);
if (retcode < 0) {
+ if (sub_conf_found) {
+ // not a real issue, sub conf already found
+ retcode = 0;
+ }
last_sub_conf_error = retcode;
continue;
}
sub_conf_found = true;
-
- ldout(sync_env->cct, 20) << "storing event for subscription=" << *siter << " owner=" << *oiter << " ret=" << retcode << dendl;
- yield call(PSSubscription::store_event_cr(sync_env, sub, event));
- if (retcode < 0) {
- ldout(sync_env->cct, 10) << "ERROR: failed to store event for subscription=" << *siter << " ret=" << retcode << dendl;
- } else {
- event_handled = true;
- }
- if (sub->sub_conf->push_endpoint) {
+ if (sub->sub_conf->s3_id.empty()) {
+ // subscription was not made by S3 compatible API
+ ldout(sync_env->cct, 20) << "storing event for subscription=" << *siter << " owner=" << *oiter << " ret=" << retcode << dendl;
+ yield call(PSSubscription::store_event_cr(sync_env, sub, event));
+ if (retcode < 0) {
+ if (perfcounter) perfcounter->inc(l_rgw_pubsub_store_fail);
+ ldout(sync_env->cct, 1) << "ERROR: failed to store event for subscription=" << *siter << " ret=" << retcode << dendl;
+ } else {
+ if (perfcounter) perfcounter->inc(l_rgw_pubsub_store_ok);
+ event_handled = true;
+ }
+ if (sub->sub_conf->push_endpoint) {
ldout(sync_env->cct, 20) << "push event for subscription=" << *siter << " owner=" << *oiter << " ret=" << retcode << dendl;
- yield call(PSSubscription::push_event_cr(sync_env, sub, event));
+ yield call(PSSubscription::push_event_cr(sync_env, sub, event));
+ if (retcode < 0) {
+ if (perfcounter) perfcounter->inc(l_rgw_pubsub_push_failed);
+ ldout(sync_env->cct, 1) << "ERROR: failed to push event for subscription=" << *siter << " ret=" << retcode << dendl;
+ } else {
+ if (perfcounter) perfcounter->inc(l_rgw_pubsub_push_ok);
+ event_handled = true;
+ }
+ }
+ } else {
+ // subscription was made by S3 compatible API
+ ldout(sync_env->cct, 20) << "storing record for subscription=" << *siter << " owner=" << *oiter << " ret=" << retcode << dendl;
+ record->configurationId = sub->sub_conf->s3_id;
+ yield call(PSSubscription::store_event_cr(sync_env, sub, record));
if (retcode < 0) {
- ldout(sync_env->cct, 10) << "ERROR: failed to push event for subscription=" << *siter << " ret=" << retcode << dendl;
+ if (perfcounter) perfcounter->inc(l_rgw_pubsub_store_fail);
+ ldout(sync_env->cct, 1) << "ERROR: failed to store record for subscription=" << *siter << " ret=" << retcode << dendl;
} else {
+ if (perfcounter) perfcounter->inc(l_rgw_pubsub_store_ok);
event_handled = true;
}
+ if (sub->sub_conf->push_endpoint) {
+ ldout(sync_env->cct, 20) << "push record for subscription=" << *siter << " owner=" << *oiter << " ret=" << retcode << dendl;
+ yield call(PSSubscription::push_event_cr(sync_env, sub, record));
+ if (retcode < 0) {
+ if (perfcounter) perfcounter->inc(l_rgw_pubsub_push_failed);
+ ldout(sync_env->cct, 1) << "ERROR: failed to push record for subscription=" << *siter << " ret=" << retcode << dendl;
+ } else {
+ if (perfcounter) perfcounter->inc(l_rgw_pubsub_push_ok);
+ event_handled = true;
+ }
+ }
}
}
if (!sub_conf_found) {
// could not find conf for subscription at user or global levels
- ldout(sync_env->cct, 10) << "ERROR: failed to find subscription config for subscription=" << *siter
+ if (perfcounter) perfcounter->inc(l_rgw_pubsub_missing_conf);
+ ldout(sync_env->cct, 1) << "ERROR: failed to find subscription config for subscription=" << *siter
<< " ret=" << last_sub_conf_error << dendl;
+ if (retcode == -ENOENT) {
+ // missing subscription info should be reflected back as invalid argument
+ // and not as missing object
+ retcode = -EINVAL;
+ }
}
}
}
}
};
-
+// coroutine invoked on remote object creation
class RGWPSHandleRemoteObjCBCR : public RGWStatRemoteObjCBCR {
RGWDataSyncEnv *sync_env;
PSEnvRef env;
std::optional<uint64_t> versioned_epoch;
- EventRef event;
+ EventRef<rgw_pubsub_event> event;
+ EventRef<rgw_pubsub_s3_record> record;
TopicsRef topics;
public:
RGWPSHandleRemoteObjCBCR(RGWDataSyncEnv *_sync_env,
}
int operate() override {
reenter(this) {
- ldout(sync_env->cct, 10) << ": stat of remote obj: z=" << sync_env->source_zone
+ ldout(sync_env->cct, 20) << ": stat of remote obj: z=" << sync_env->source_zone
<< " b=" << bucket_info.bucket << " k=" << key << " size=" << size << " mtime=" << mtime
<< " attrs=" << attrs << dendl;
{
k = k.substr(sizeof(RGW_ATTR_PREFIX) - 1);
}
attrs.push_back(std::make_pair(k, attr.second));
- }
+ }
+ // at this point we don't know whether we need the ceph event or S3 record
+ // this is why both are created here, once we have information about the
+ // subscription, we will store/push only the relevant ones
make_event_ref(sync_env->cct,
bucket_info.bucket, key,
mtime, &attrs,
- EVENT_NAME_OBJECT_CREATE, &event);
+ rgw::notify::ObjectCreated, &event);
+ make_s3_record_ref(sync_env->cct,
+ bucket_info.bucket, bucket_info.owner, key,
+ mtime, &attrs,
+ rgw::notify::ObjectCreated, &record);
}
- yield call(new RGWPSHandleObjEventCR(sync_env, env, bucket_info.owner, event, topics));
+ yield call(new RGWPSHandleObjEventCR(sync_env, env, bucket_info.owner, event, record, topics));
if (retcode < 0) {
return set_cr_error(retcode);
}
reenter(this) {
yield call(new RGWPSFindBucketTopicsCR(sync_env, env, bucket_info.owner,
bucket_info.bucket, key,
- EVENT_NAME_OBJECT_CREATE,
+ rgw::notify::ObjectCreated,
&topics));
if (retcode < 0) {
- ldout(sync_env->cct, 0) << "ERROR: RGWPSFindBucketTopicsCR returned ret=" << retcode << dendl;
+ ldout(sync_env->cct, 1) << "ERROR: RGWPSFindBucketTopicsCR returned ret=" << retcode << dendl;
return set_cr_error(retcode);
}
if (topics->empty()) {
}
};
+// coroutine invoked on remote object deletion or delete-marker creation
class RGWPSGenericObjEventCBCR : public RGWCoroutine {
RGWDataSyncEnv *sync_env;
PSEnvRef env;
rgw_bucket bucket;
rgw_obj_key key;
ceph::real_time mtime;
- string event_name;
- EventRef event;
+ rgw::notify::EventType event_type;
+ EventRef<rgw_pubsub_event> event;
+ EventRef<rgw_pubsub_s3_record> record;
TopicsRef topics;
public:
RGWPSGenericObjEventCBCR(RGWDataSyncEnv *_sync_env,
PSEnvRef _env,
RGWBucketInfo& _bucket_info, rgw_obj_key& _key, const ceph::real_time& _mtime,
- RGWPubSubEventType _event_type) : RGWCoroutine(_sync_env->cct),
+ rgw::notify::EventType _event_type) : RGWCoroutine(_sync_env->cct),
sync_env(_sync_env),
env(_env),
owner(_bucket_info.owner),
bucket(_bucket_info.bucket),
key(_key),
- mtime(_mtime), event_name(get_event_name(_event_type)) {}
+ mtime(_mtime), event_type(_event_type) {}
int operate() override {
reenter(this) {
- ldout(sync_env->cct, 10) << ": remove remote obj: z=" << sync_env->source_zone
+ ldout(sync_env->cct, 20) << ": remove remote obj: z=" << sync_env->source_zone
<< " b=" << bucket << " k=" << key << " mtime=" << mtime << dendl;
- yield call(new RGWPSFindBucketTopicsCR(sync_env, env, owner, bucket, key, event_name, &topics));
+ yield call(new RGWPSFindBucketTopicsCR(sync_env, env, owner, bucket, key, event_type, &topics));
if (retcode < 0) {
- ldout(sync_env->cct, 0) << "ERROR: RGWPSFindBucketTopicsCR returned ret=" << retcode << dendl;
+ ldout(sync_env->cct, 1) << "ERROR: RGWPSFindBucketTopicsCR returned ret=" << retcode << dendl;
return set_cr_error(retcode);
}
if (topics->empty()) {
ldout(sync_env->cct, 20) << "no topics found for " << bucket << "/" << key << dendl;
return set_cr_done();
}
- {
- make_event_ref(sync_env->cct,
- bucket, key,
- mtime, nullptr,
- event_name, &event);
- }
-
- yield call(new RGWPSHandleObjEventCR(sync_env, env, owner, event, topics));
+ // at this point we don't know whether we need the ceph event or S3 record
+ // this is why both are created here, once we have information about the
+ // subscription, we will store/push only the relevant ones
+ make_event_ref(sync_env->cct,
+ bucket, key,
+ mtime, nullptr,
+ event_type, &event);
+ make_s3_record_ref(sync_env->cct,
+ bucket, owner, key,
+ mtime, nullptr,
+ event_type, &record);
+ yield call(new RGWPSHandleObjEventCR(sync_env, env, owner, event, record, topics));
if (retcode < 0) {
return set_cr_error(retcode);
}
class RGWPSDataSyncModule : public RGWDataSyncModule {
PSEnvRef env;
PSConfigRef& conf;
+
public:
RGWPSDataSyncModule(CephContext *cct, const JSONFormattable& config) : env(std::make_shared<PSEnv>()), conf(env->conf) {
env->init(cct, config);
}
+
~RGWPSDataSyncModule() override {}
void init(RGWDataSyncEnv *sync_env, uint64_t instance_id) override {
ldout(sync_env->cct, 5) << conf->id << ": start" << dendl;
return new RGWPSInitEnvCBCR(sync_env, env);
}
- RGWCoroutine *sync_object(RGWDataSyncEnv *sync_env, RGWBucketInfo& bucket_info, rgw_obj_key& key, std::optional<uint64_t> versioned_epoch, rgw_zone_set *zones_trace) override {
- ldout(sync_env->cct, 10) << conf->id << ": sync_object: b=" << bucket_info.bucket << " k=" << key << " versioned_epoch=" << versioned_epoch.value_or(0) << dendl;
+
+ RGWCoroutine *sync_object(RGWDataSyncEnv *sync_env, RGWBucketInfo& bucket_info,
+ rgw_obj_key& key, std::optional<uint64_t> versioned_epoch, rgw_zone_set *zones_trace) override {
+ ldout(sync_env->cct, 10) << conf->id << ": sync_object: b=" << bucket_info.bucket <<
+ " k=" << key << " versioned_epoch=" << versioned_epoch.value_or(0) << dendl;
return new RGWPSHandleObjCreateCR(sync_env, bucket_info, key, env, versioned_epoch);
}
- RGWCoroutine *remove_object(RGWDataSyncEnv *sync_env, RGWBucketInfo& bucket_info, rgw_obj_key& key, real_time& mtime, bool versioned, uint64_t versioned_epoch, rgw_zone_set *zones_trace) override {
- ldout(sync_env->cct, 10) << conf->id << ": rm_object: b=" << bucket_info.bucket << " k=" << key << " mtime=" << mtime << " versioned=" << versioned << " versioned_epoch=" << versioned_epoch << dendl;
- return new RGWPSGenericObjEventCBCR(sync_env, env, bucket_info, key, mtime, OBJECT_DELETE);
+
+ RGWCoroutine *remove_object(RGWDataSyncEnv *sync_env, RGWBucketInfo& bucket_info,
+ rgw_obj_key& key, real_time& mtime, bool versioned, uint64_t versioned_epoch, rgw_zone_set *zones_trace) override {
+ ldout(sync_env->cct, 10) << conf->id << ": rm_object: b=" << bucket_info.bucket <<
+ " k=" << key << " mtime=" << mtime << " versioned=" << versioned << " versioned_epoch=" << versioned_epoch << dendl;
+ return new RGWPSGenericObjEventCBCR(sync_env, env, bucket_info, key, mtime, rgw::notify::ObjectRemovedDelete);
}
- RGWCoroutine *create_delete_marker(RGWDataSyncEnv *sync_env, RGWBucketInfo& bucket_info, rgw_obj_key& key, real_time& mtime,
- rgw_bucket_entry_owner& owner, bool versioned, uint64_t versioned_epoch, rgw_zone_set *zones_trace) override {
- ldout(sync_env->cct, 10) << conf->id << ": create_delete_marker: b=" << bucket_info.bucket << " k=" << key << " mtime=" << mtime
- << " versioned=" << versioned << " versioned_epoch=" << versioned_epoch << dendl;
- return new RGWPSGenericObjEventCBCR(sync_env, env, bucket_info, key, mtime, DELETE_MARKER_CREATE);
+
+ RGWCoroutine *create_delete_marker(RGWDataSyncEnv *sync_env, RGWBucketInfo& bucket_info,
+ rgw_obj_key& key, real_time& mtime, rgw_bucket_entry_owner& owner, bool versioned, uint64_t versioned_epoch, rgw_zone_set *zones_trace) override {
+ ldout(sync_env->cct, 10) << conf->id << ": create_delete_marker: b=" << bucket_info.bucket <<
+ " k=" << key << " mtime=" << mtime << " versioned=" << versioned << " versioned_epoch=" << versioned_epoch << dendl;
+ return new RGWPSGenericObjEventCBCR(sync_env, env, bucket_info, key, mtime, rgw::notify::ObjectRemovedDeleteMarkerCreated);
}
PSConfigRef& get_conf() { return conf; }
string jconf = json_str("conf", *data_handler->get_conf());
JSONParser p;
if (!p.parse(jconf.c_str(), jconf.size())) {
- ldout(cct, 0) << "ERROR: failed to parse sync module effective conf: " << jconf << dendl;
+ ldout(cct, 1) << "ERROR: failed to parse sync module effective conf: " << jconf << dendl;
effective_conf = config;
} else {
effective_conf.decode_json(&p);
}
+#ifdef WITH_RADOSGW_AMQP_ENDPOINT
+ if (!rgw::amqp::init(cct)) {
+ ldout(cct, 1) << "ERROR: failed to initialize AMQP manager in pubsub sync module" << dendl;
+ }
+#endif
+}
+
+RGWPSSyncModuleInstance::~RGWPSSyncModuleInstance() {
+#ifdef WITH_RADOSGW_AMQP_ENDPOINT
+ rgw::amqp::shutdown();
+#endif
}
RGWDataSyncModule *RGWPSSyncModuleInstance::get_data_handler()
if (dialect != RGW_REST_S3) {
return orig;
}
- return new RGWRESTMgr_PubSub_S3(orig);
+ return new RGWRESTMgr_PubSub();
+}
+
+bool RGWPSSyncModuleInstance::should_full_sync() const {
+ return data_handler->get_conf()->start_with_full_sync;
}
int RGWPSSyncModule::create_instance(CephContext *cct, const JSONFormattable& config, RGWSyncModuleInstanceRef *instance) {
return 0;
}
+
JSONFormattable effective_conf;
public:
RGWPSSyncModuleInstance(CephContext *cct, const JSONFormattable& config);
+ ~RGWPSSyncModuleInstance();
RGWDataSyncModule *get_data_handler() override;
RGWRESTMgr *get_rest_filter(int dialect, RGWRESTMgr *orig) override;
bool supports_user_writes() override {
const JSONFormattable& get_effective_conf() {
return effective_conf;
}
+ // whether to start with full sync, based on the 'start_with_full_sync' configuration
+ // defaults to incremental sync only
+ virtual bool should_full_sync() const override;
};
#endif
+// -*- mode:C++; tab-width:8; c-basic-offset:2; indent-tabs-mode:t -*-
+// vim: ts=8 sw=2 smarttab ft=cpp
+
+#include <algorithm>
+#include "rgw_rest_pubsub_common.h"
+#include "rgw_rest_pubsub.h"
#include "rgw_sync_module_pubsub.h"
+#include "rgw_pubsub_push.h"
#include "rgw_sync_module_pubsub_rest.h"
#include "rgw_pubsub.h"
#include "rgw_op.h"
#include "rgw_rest.h"
#include "rgw_rest_s3.h"
+#include "rgw_arn.h"
+#include "rgw_zone.h"
#define dout_context g_ceph_context
#define dout_subsys ceph_subsys_rgw
-class RGWPSCreateTopicOp : public RGWDefaultResponseOp {
-protected:
- std::unique_ptr<RGWUserPubSub> ups;
- string topic_name;
- string bucket_name;
-
+// command: PUT /topics/<topic-name>[&push-endpoint=<endpoint>[&<arg1>=<value1>]]
+class RGWPSCreateTopic_ObjStore : public RGWPSCreateTopicOp {
public:
- RGWPSCreateTopicOp() {}
-
- int verify_permission() override {
- return 0;
- }
- void pre_exec() override {
- rgw_bucket_object_pre_exec(s);
- }
- void execute() override;
-
- const char* name() const override { return "pubsub_topic_create"; }
- virtual RGWOpType get_type() override { return RGW_OP_PUBSUB_TOPIC_CREATE; }
- virtual uint32_t op_mask() override { return RGW_OP_TYPE_WRITE; }
- virtual int get_params() = 0;
-};
-
-void RGWPSCreateTopicOp::execute()
-{
- op_ret = get_params();
- if (op_ret < 0) {
- return;
- }
-
- ups = make_unique<RGWUserPubSub>(store, s->owner.get_id());
- op_ret = ups->create_topic(topic_name);
- if (op_ret < 0) {
- ldout(s->cct, 20) << "failed to create topic, ret=" << op_ret << dendl;
- return;
- }
-}
-
-class RGWPSCreateTopic_ObjStore_S3 : public RGWPSCreateTopicOp {
-public:
- explicit RGWPSCreateTopic_ObjStore_S3() {}
-
int get_params() override {
+
topic_name = s->object.name;
+
+ dest.push_endpoint = s->info.args.get("push-endpoint");
+ dest.push_endpoint_args = s->info.args.get_str();
+ // dest object only stores endpoint info
+ // bucket to store events/records will be set only when subscription is created
+ dest.bucket_name = "";
+ dest.oid_prefix = "";
+ dest.arn_topic = topic_name;
+ // the topic ARN will be sent in the reply
+ const rgw::ARN arn(rgw::Partition::aws, rgw::Service::sns,
+ store->svc.zone->get_zonegroup().get_name(),
+ s->user->user_id.tenant, topic_name);
+ topic_arn = arn.to_string();
return 0;
}
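  // Illustrative example (not part of the patch): for zonegroup "default", an empty
  // tenant and topic "mytopic", the ARN built above would render as
  // "arn:aws:sns:default::mytopic" (partition:service:region:account:resource),
  // assuming rgw::ARN::to_string() follows the standard ARN layout.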
-};
-
-class RGWPSListTopicsOp : public RGWOp {
-protected:
- std::unique_ptr<RGWUserPubSub> ups;
- rgw_pubsub_user_topics result;
+ void send_response() override {
+ if (op_ret) {
+ set_req_state_err(s, op_ret);
+ }
+ dump_errno(s);
+ end_header(s, this, "application/json");
-public:
- RGWPSListTopicsOp() {}
+ if (op_ret < 0) {
+ return;
+ }
- int verify_permission() override {
- return 0;
- }
- void pre_exec() override {
- rgw_bucket_object_pre_exec(s);
+ {
+ Formatter::ObjectSection section(*s->formatter, "result");
+ encode_json("arn", topic_arn, s->formatter);
+ }
+ rgw_flush_formatter_and_reset(s, s->formatter);
}
- void execute() override;
-
- const char* name() const override { return "pubsub_topics_list"; }
- virtual RGWOpType get_type() override { return RGW_OP_PUBSUB_TOPICS_LIST; }
- virtual uint32_t op_mask() override { return RGW_OP_TYPE_READ; }
};
-void RGWPSListTopicsOp::execute()
-{
- ups = make_unique<RGWUserPubSub>(store, s->owner.get_id());
- op_ret = ups->get_user_topics(&result);
- if (op_ret < 0) {
- ldout(s->cct, 20) << "failed to get topics, ret=" << op_ret << dendl;
- return;
- }
-
-}
-
-class RGWPSListTopics_ObjStore_S3 : public RGWPSListTopicsOp {
+// command: GET /topics
+class RGWPSListTopics_ObjStore : public RGWPSListTopicsOp {
public:
- explicit RGWPSListTopics_ObjStore_S3() {}
-
void send_response() override {
if (op_ret) {
set_req_state_err(s, op_ret);
}
};
-class RGWPSGetTopicOp : public RGWOp {
-protected:
- string topic_name;
- std::unique_ptr<RGWUserPubSub> ups;
- rgw_pubsub_topic_subs result;
-
+// command: GET /topics/<topic-name>
+class RGWPSGetTopic_ObjStore : public RGWPSGetTopicOp {
public:
- RGWPSGetTopicOp() {}
-
- int verify_permission() override {
- return 0;
- }
- void pre_exec() override {
- rgw_bucket_object_pre_exec(s);
- }
- void execute() override;
-
- const char* name() const override { return "pubsub_topic_get"; }
- virtual RGWOpType get_type() override { return RGW_OP_PUBSUB_TOPIC_GET; }
- virtual uint32_t op_mask() override { return RGW_OP_TYPE_READ; }
- virtual int get_params() = 0;
-};
-
-void RGWPSGetTopicOp::execute()
-{
- op_ret = get_params();
- if (op_ret < 0) {
- return;
- }
- ups = make_unique<RGWUserPubSub>(store, s->owner.get_id());
- op_ret = ups->get_topic(topic_name, &result);
- if (op_ret < 0) {
- ldout(s->cct, 20) << "failed to get topic, ret=" << op_ret << dendl;
- return;
- }
-}
-
-class RGWPSGetTopic_ObjStore_S3 : public RGWPSGetTopicOp {
-public:
- explicit RGWPSGetTopic_ObjStore_S3() {}
-
int get_params() override {
topic_name = s->object.name;
return 0;
}
};
-class RGWPSDeleteTopicOp : public RGWDefaultResponseOp {
-protected:
- string topic_name;
- std::unique_ptr<RGWUserPubSub> ups;
-
+// command: DELETE /topics/<topic-name>
+class RGWPSDeleteTopic_ObjStore : public RGWPSDeleteTopicOp {
public:
- RGWPSDeleteTopicOp() {}
-
- int verify_permission() override {
- return 0;
- }
- void pre_exec() override {
- rgw_bucket_object_pre_exec(s);
- }
- void execute() override;
-
- const char* name() const override { return "pubsub_topic_delete"; }
- virtual RGWOpType get_type() override { return RGW_OP_PUBSUB_TOPIC_DELETE; }
- virtual uint32_t op_mask() override { return RGW_OP_TYPE_DELETE; }
- virtual int get_params() = 0;
-};
-
-void RGWPSDeleteTopicOp::execute()
-{
- op_ret = get_params();
- if (op_ret < 0) {
- return;
- }
-
- ups = make_unique<RGWUserPubSub>(store, s->owner.get_id());
- op_ret = ups->remove_topic(topic_name);
- if (op_ret < 0) {
- ldout(s->cct, 20) << "failed to remove topic, ret=" << op_ret << dendl;
- return;
- }
-}
-
-class RGWPSDeleteTopic_ObjStore_S3 : public RGWPSDeleteTopicOp {
-public:
- explicit RGWPSDeleteTopic_ObjStore_S3() {}
-
int get_params() override {
topic_name = s->object.name;
return 0;
}
};
-class RGWHandler_REST_PSTopic_S3 : public RGWHandler_REST_S3 {
+// ceph specific topics handler factory
+class RGWHandler_REST_PSTopic : public RGWHandler_REST_S3 {
protected:
int init_permissions(RGWOp* op) override {
return 0;
}
+
int read_permissions(RGWOp* op) override {
return 0;
}
+
bool supports_quota() override {
return false;
}
+
RGWOp *op_get() override {
if (s->init_state.url_bucket.empty()) {
return nullptr;
}
if (s->object.empty()) {
- return new RGWPSListTopics_ObjStore_S3();
+ return new RGWPSListTopics_ObjStore();
}
- return new RGWPSGetTopic_ObjStore_S3();
+ return new RGWPSGetTopic_ObjStore();
}
RGWOp *op_put() override {
if (!s->object.empty()) {
- return new RGWPSCreateTopic_ObjStore_S3();
+ return new RGWPSCreateTopic_ObjStore();
}
return nullptr;
}
RGWOp *op_delete() override {
if (!s->object.empty()) {
- return new RGWPSDeleteTopic_ObjStore_S3();
+ return new RGWPSDeleteTopic_ObjStore();
}
return nullptr;
}
public:
- explicit RGWHandler_REST_PSTopic_S3(const rgw::auth::StrategyRegistry& auth_registry) : RGWHandler_REST_S3(auth_registry) {}
- virtual ~RGWHandler_REST_PSTopic_S3() {}
+ explicit RGWHandler_REST_PSTopic(const rgw::auth::StrategyRegistry& auth_registry) : RGWHandler_REST_S3(auth_registry) {}
+ virtual ~RGWHandler_REST_PSTopic() = default;
};
-
-class RGWPSCreateSubOp : public RGWDefaultResponseOp {
-protected:
- string sub_name;
- string topic_name;
- std::unique_ptr<RGWUserPubSub> ups;
- rgw_pubsub_sub_dest dest;
-
+// command: PUT /subscriptions/<sub-name>?topic=<topic-name>[&push-endpoint=<endpoint>[&<arg1>=<value1>]]...
+class RGWPSCreateSub_ObjStore : public RGWPSCreateSubOp {
public:
- RGWPSCreateSubOp() {}
-
- int verify_permission() override {
- return 0;
- }
- void pre_exec() override {
- rgw_bucket_object_pre_exec(s);
- }
- void execute() override;
-
- const char* name() const override { return "pubsub_subscription_create"; }
- virtual RGWOpType get_type() override { return RGW_OP_PUBSUB_SUB_CREATE; }
- virtual uint32_t op_mask() override { return RGW_OP_TYPE_WRITE; }
- virtual int get_params() = 0;
-};
-
-void RGWPSCreateSubOp::execute()
-{
- op_ret = get_params();
- if (op_ret < 0) {
- return;
- }
- ups = make_unique<RGWUserPubSub>(store, s->owner.get_id());
- auto sub = ups->get_sub(sub_name);
- op_ret = sub->subscribe(topic_name, dest);
- if (op_ret < 0) {
- ldout(s->cct, 20) << "failed to create subscription, ret=" << op_ret << dendl;
- return;
- }
-}
-
-class RGWPSCreateSub_ObjStore_S3 : public RGWPSCreateSubOp {
-public:
- explicit RGWPSCreateSub_ObjStore_S3() {}
-
int get_params() override {
sub_name = s->object.name;
bool exists;
-
topic_name = s->info.args.get("topic", &exists);
if (!exists) {
- ldout(s->cct, 20) << "ERROR: missing required param 'topic' for request" << dendl;
+ ldout(s->cct, 1) << "missing required param 'topic'" << dendl;
return -EINVAL;
}
- auto psmodule = static_cast<RGWPSSyncModuleInstance *>(store->get_sync_module().get());
- auto conf = psmodule->get_effective_conf();
+ const auto psmodule = static_cast<RGWPSSyncModuleInstance*>(store->get_sync_module().get());
+ const auto& conf = psmodule->get_effective_conf();
dest.push_endpoint = s->info.args.get("push-endpoint");
dest.bucket_name = string(conf["data_bucket_prefix"]) + s->owner.get_id().to_str() + "-" + topic_name;
dest.oid_prefix = string(conf["data_oid_prefix"]) + sub_name + "/";
dest.push_endpoint_args = s->info.args.get_str();
+ dest.arn_topic = topic_name;
return 0;
}
};
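// Illustrative example (not part of the patch): with the module defaults
// ("data_bucket_prefix": "pubsub-", "data_oid_prefix": ""), a subscription "mysub"
// on topic "mytopic" created by user "foo" stores its events in bucket
// "pubsub-foo-mytopic", with object ids prefixed by "mysub/".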
-class RGWPSGetSubOp : public RGWOp {
-protected:
- string sub_name;
- std::unique_ptr<RGWUserPubSub> ups;
- rgw_pubsub_sub_config result;
-
-public:
- RGWPSGetSubOp() {}
-
- int verify_permission() override {
- return 0;
- }
- void pre_exec() override {
- rgw_bucket_object_pre_exec(s);
- }
- void execute() override;
-
- const char* name() const override { return "pubsub_subscription_get"; }
- virtual RGWOpType get_type() override { return RGW_OP_PUBSUB_SUB_GET; }
- virtual uint32_t op_mask() override { return RGW_OP_TYPE_READ; }
- virtual int get_params() = 0;
-};
-
-void RGWPSGetSubOp::execute()
-{
- op_ret = get_params();
- if (op_ret < 0) {
- return;
- }
- ups = make_unique<RGWUserPubSub>(store, s->owner.get_id());
- auto sub = ups->get_sub(sub_name);
- op_ret = sub->get_conf(&result);
- if (op_ret < 0) {
- ldout(s->cct, 20) << "failed to get subscription, ret=" << op_ret << dendl;
- return;
- }
-}
-
-class RGWPSGetSub_ObjStore_S3 : public RGWPSGetSubOp {
+// command: GET /subscriptions/<sub-name>
+class RGWPSGetSub_ObjStore : public RGWPSGetSubOp {
public:
- explicit RGWPSGetSub_ObjStore_S3() {}
-
int get_params() override {
sub_name = s->object.name;
return 0;
}
-
void send_response() override {
if (op_ret) {
set_req_state_err(s, op_ret);
return;
}
- {
- Formatter::ObjectSection section(*s->formatter, "result");
- encode_json("topic", result.topic, s->formatter);
- encode_json("push_endpoint", result.dest.push_endpoint, s->formatter);
- encode_json("args", result.dest.push_endpoint_args, s->formatter);
- }
+ encode_json("result", result, s->formatter);
rgw_flush_formatter_and_reset(s, s->formatter);
}
};
-class RGWPSDeleteSubOp : public RGWDefaultResponseOp {
-protected:
- string sub_name;
- string topic_name;
- std::unique_ptr<RGWUserPubSub> ups;
-
-public:
- RGWPSDeleteSubOp() {}
-
- int verify_permission() override {
- return 0;
- }
- void pre_exec() override {
- rgw_bucket_object_pre_exec(s);
- }
- void execute() override;
-
- const char* name() const override { return "pubsub_subscription_delete"; }
- virtual RGWOpType get_type() override { return RGW_OP_PUBSUB_SUB_DELETE; }
- virtual uint32_t op_mask() override { return RGW_OP_TYPE_DELETE; }
- virtual int get_params() = 0;
-};
-
-void RGWPSDeleteSubOp::execute()
-{
- op_ret = get_params();
- if (op_ret < 0) {
- return;
- }
- ups = make_unique<RGWUserPubSub>(store, s->owner.get_id());
- auto sub = ups->get_sub(sub_name);
- op_ret = sub->unsubscribe(topic_name);
- if (op_ret < 0) {
- ldout(s->cct, 20) << "failed to remove subscription, ret=" << op_ret << dendl;
- return;
- }
-}
-
-class RGWPSDeleteSub_ObjStore_S3 : public RGWPSDeleteSubOp {
+// command: DELETE /subscriptions/<sub-name>
+class RGWPSDeleteSub_ObjStore : public RGWPSDeleteSubOp {
public:
- explicit RGWPSDeleteSub_ObjStore_S3() {}
-
int get_params() override {
sub_name = s->object.name;
topic_name = s->info.args.get("topic");
}
};
-class RGWPSAckSubEventOp : public RGWDefaultResponseOp {
-protected:
- string sub_name;
- string event_id;
- std::unique_ptr<RGWUserPubSub> ups;
-
+// command: POST /subscriptions/<sub-name>?ack&event-id=<event-id>
+class RGWPSAckSubEvent_ObjStore : public RGWPSAckSubEventOp {
public:
- RGWPSAckSubEventOp() {}
-
- int verify_permission() override {
- return 0;
- }
- void pre_exec() override {
- rgw_bucket_object_pre_exec(s);
- }
- void execute() override;
-
- const char* name() const override { return "pubsub_subscription_ack"; }
- virtual RGWOpType get_type() override { return RGW_OP_PUBSUB_SUB_ACK; }
- virtual uint32_t op_mask() override { return RGW_OP_TYPE_WRITE; }
- virtual int get_params() = 0;
-};
-
-void RGWPSAckSubEventOp::execute()
-{
- op_ret = get_params();
- if (op_ret < 0) {
- return;
- }
- ups = make_unique<RGWUserPubSub>(store, s->owner.get_id());
- auto sub = ups->get_sub(sub_name);
- op_ret = sub->remove_event(event_id);
- if (op_ret < 0) {
- ldout(s->cct, 20) << "failed to ack event, ret=" << op_ret << dendl;
- return;
- }
-}
-
-class RGWPSAckSubEvent_ObjStore_S3 : public RGWPSAckSubEventOp {
-public:
- explicit RGWPSAckSubEvent_ObjStore_S3() {}
+ explicit RGWPSAckSubEvent_ObjStore() {}
int get_params() override {
sub_name = s->object.name;
event_id = s->info.args.get("event-id", &exists);
if (!exists) {
- ldout(s->cct, 20) << "ERROR: missing required param 'event-id' for request" << dendl;
+ ldout(s->cct, 1) << "missing required param 'event-id'" << dendl;
return -EINVAL;
}
return 0;
}
};
-class RGWPSPullSubEventsOp : public RGWOp {
-protected:
- int max_entries{0};
- string sub_name;
- string marker;
- std::unique_ptr<RGWUserPubSub> ups;
- RGWUserPubSub::Sub::list_events_result result;
-
-public:
- RGWPSPullSubEventsOp() {}
-
- int verify_permission() override {
- return 0;
- }
- void pre_exec() override {
- rgw_bucket_object_pre_exec(s);
- }
- void execute() override;
-
- const char* name() const override { return "pubsub_subscription_pull"; }
- virtual RGWOpType get_type() override { return RGW_OP_PUBSUB_SUB_PULL; }
- virtual uint32_t op_mask() override { return RGW_OP_TYPE_READ; }
- virtual int get_params() = 0;
-};
-
-void RGWPSPullSubEventsOp::execute()
-{
- op_ret = get_params();
- if (op_ret < 0) {
- return;
- }
- ups = make_unique<RGWUserPubSub>(store, s->owner.get_id());
- auto sub = ups->get_sub(sub_name);
- op_ret = sub->list_events(marker, max_entries, &result);
- if (op_ret < 0) {
- ldout(s->cct, 20) << "failed to get subscription, ret=" << op_ret << dendl;
- return;
- }
-}
-
-class RGWPSPullSubEvents_ObjStore_S3 : public RGWPSPullSubEventsOp {
+// command: GET /subscriptions/<sub-name>?events[&max-entries=<max-entries>][&marker=<marker>]
+class RGWPSPullSubEvents_ObjStore : public RGWPSPullSubEventsOp {
public:
- explicit RGWPSPullSubEvents_ObjStore_S3() {}
-
int get_params() override {
sub_name = s->object.name;
marker = s->info.args.get("marker");
-#define DEFAULT_MAX_ENTRIES 100
- int ret = s->info.args.get_int("max-entries", &max_entries, DEFAULT_MAX_ENTRIES);
+ const int ret = s->info.args.get_int("max-entries", &max_entries,
+ RGWUserPubSub::Sub::DEFAULT_MAX_EVENTS);
if (ret < 0) {
- ldout(s->cct, 20) << "failed to parse 'max-entries' param" << dendl;
+ ldout(s->cct, 1) << "failed to parse 'max-entries' param" << dendl;
return -EINVAL;
}
return 0;
return;
}
- encode_json("result", result, s->formatter);
+ encode_json("result", *sub, s->formatter);
rgw_flush_formatter_and_reset(s, s->formatter);
}
};
-class RGWHandler_REST_PSSub_S3 : public RGWHandler_REST_S3 {
+// subscriptions handler factory
+class RGWHandler_REST_PSSub : public RGWHandler_REST_S3 {
protected:
int init_permissions(RGWOp* op) override {
return 0;
return nullptr;
}
if (s->info.args.exists("events")) {
- return new RGWPSPullSubEvents_ObjStore_S3();
+ return new RGWPSPullSubEvents_ObjStore();
}
- return new RGWPSGetSub_ObjStore_S3();
+ return new RGWPSGetSub_ObjStore();
}
RGWOp *op_put() override {
if (!s->object.empty()) {
- return new RGWPSCreateSub_ObjStore_S3();
+ return new RGWPSCreateSub_ObjStore();
}
return nullptr;
}
RGWOp *op_delete() override {
if (!s->object.empty()) {
- return new RGWPSDeleteSub_ObjStore_S3();
+ return new RGWPSDeleteSub_ObjStore();
}
return nullptr;
}
RGWOp *op_post() override {
if (s->info.args.exists("ack")) {
- return new RGWPSAckSubEvent_ObjStore_S3();
+ return new RGWPSAckSubEvent_ObjStore();
}
return nullptr;
}
public:
- explicit RGWHandler_REST_PSSub_S3(const rgw::auth::StrategyRegistry& auth_registry) : RGWHandler_REST_S3(auth_registry) {}
- virtual ~RGWHandler_REST_PSSub_S3() {}
+ explicit RGWHandler_REST_PSSub(const rgw::auth::StrategyRegistry& auth_registry) : RGWHandler_REST_S3(auth_registry) {}
+ virtual ~RGWHandler_REST_PSSub() = default;
};
-
-static int notif_bucket_path(const string& path, string *bucket_name)
-{
+namespace {
+// extract bucket name from ceph specific notification command, with the format:
+// /notifications/<bucket-name>
+int notif_bucket_path(const string& path, std::string& bucket_name) {
if (path.empty()) {
return -EINVAL;
}
return -EINVAL;
}
- *bucket_name = path.substr(pos + 1);
+ bucket_name = path.substr(pos + 1);
return 0;
}
+}
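// Illustrative example (not part of the patch): notif_bucket_path("/notifications/mybucket", name)
// sets name to "mybucket" and returns 0, while an empty or malformed path returns -EINVAL.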
-class RGWPSCreateNotifOp : public RGWDefaultResponseOp {
-protected:
- std::unique_ptr<RGWUserPubSub> ups;
- string topic_name;
- set<string, ltstr_nocase> events;
-
- string bucket_name;
- RGWBucketInfo bucket_info;
-
-public:
- RGWPSCreateNotifOp() {}
+// command (ceph specific): PUT /notifications/bucket/<bucket name>?topic=<topic name>
+class RGWPSCreateNotif_ObjStore : public RGWPSCreateNotifOp {
+private:
+ std::string topic_name;
+ rgw::notify::EventTypeList events;
- int verify_permission() override {
- int ret = get_params();
- if (ret < 0) {
- return ret;
+ int get_params() override {
+ bool exists;
+ topic_name = s->info.args.get("topic", &exists);
+ if (!exists) {
+ ldout(s->cct, 1) << "missing required param 'topic'" << dendl;
+ return -EINVAL;
}
- ret = store->get_bucket_info(*s->sysobj_ctx, s->owner.get_id().tenant, bucket_name,
- bucket_info, nullptr, nullptr);
- if (ret < 0) {
- return ret;
+ std::string events_str = s->info.args.get("events", &exists);
+ if (!exists) {
+ // if no events are provided, we notify on all of them
+ events_str = "OBJECT_CREATE,OBJECT_DELETE,DELETE_MARKER_CREATE";
}
-
- if (bucket_info.owner != s->owner.get_id()) {
- ldout(s->cct, 20) << "user doesn't own bucket, cannot create topic" << dendl;
- return -EPERM;
+ rgw::notify::from_string_list(events_str, events);
+ if (std::find(events.begin(), events.end(), rgw::notify::UnknownEvent) != events.end()) {
+ ldout(s->cct, 1) << "invalid event type in list: " << events_str << dendl;
+ return -EINVAL;
}
- return 0;
- }
- void pre_exec() override {
- rgw_bucket_object_pre_exec(s);
+ return notif_bucket_path(s->object.name, bucket_name);
}
- void execute() override;
+public:
const char* name() const override { return "pubsub_notification_create"; }
- virtual RGWOpType get_type() override { return RGW_OP_PUBSUB_NOTIF_CREATE; }
- virtual uint32_t op_mask() override { return RGW_OP_TYPE_WRITE; }
- virtual int get_params() = 0;
+ void execute() override;
};
-void RGWPSCreateNotifOp::execute()
+void RGWPSCreateNotif_ObjStore::execute()
{
- op_ret = get_params();
- if (op_ret < 0) {
- return;
- }
+ ups.emplace(store, s->owner.get_id());
- ups = make_unique<RGWUserPubSub>(store, s->owner.get_id());
auto b = ups->get_bucket(bucket_info.bucket);
op_ret = b->create_notification(topic_name, events);
if (op_ret < 0) {
- ldout(s->cct, 20) << "failed to create notification, ret=" << op_ret << dendl;
+ ldout(s->cct, 1) << "failed to create notification for topic '" << topic_name << "', ret=" << op_ret << dendl;
return;
}
+ ldout(s->cct, 20) << "successfully created notification for topic '" << topic_name << "'" << dendl;
}
-class RGWPSCreateNotif_ObjStore_S3 : public RGWPSCreateNotifOp {
-public:
- explicit RGWPSCreateNotif_ObjStore_S3() {}
+// command: DELETE /notifications/bucket/<bucket>?topic=<topic-name>
+class RGWPSDeleteNotif_ObjStore : public RGWPSDeleteNotifOp {
+private:
+ std::string topic_name;
int get_params() override {
bool exists;
topic_name = s->info.args.get("topic", &exists);
if (!exists) {
- ldout(s->cct, 20) << "param 'topic' not provided" << dendl;
+ ldout(s->cct, 1) << "missing required param 'topic'" << dendl;
return -EINVAL;
}
-
- string events_str = s->info.args.get("events", &exists);
- if (exists) {
- get_str_set(events_str, ",", events);
- }
- return notif_bucket_path(s->object.name, &bucket_name);
+ return notif_bucket_path(s->object.name, bucket_name);
}
-};
-
-class RGWPSDeleteNotifOp : public RGWDefaultResponseOp {
-protected:
- std::unique_ptr<RGWUserPubSub> ups;
- string topic_name;
- string bucket_name;
- RGWBucketInfo bucket_info;
public:
- RGWPSDeleteNotifOp() {}
-
- int verify_permission() override {
- int ret = get_params();
- if (ret < 0) {
- return ret;
- }
-
- ret = store->get_bucket_info(*s->sysobj_ctx, s->owner.get_id().tenant, bucket_name,
- bucket_info, nullptr, nullptr);
- if (ret < 0) {
- return ret;
- }
-
- if (bucket_info.owner != s->owner.get_id()) {
- ldout(s->cct, 20) << "user doesn't own bucket, cannot create topic" << dendl;
- return -EPERM;
- }
- return 0;
- }
- void pre_exec() override {
- rgw_bucket_object_pre_exec(s);
- }
void execute() override;
-
const char* name() const override { return "pubsub_notification_delete"; }
- virtual RGWOpType get_type() override { return RGW_OP_PUBSUB_NOTIF_DELETE; }
- virtual uint32_t op_mask() override { return RGW_OP_TYPE_DELETE; }
- virtual int get_params() = 0;
};
-void RGWPSDeleteNotifOp::execute()
-{
+void RGWPSDeleteNotif_ObjStore::execute() {
op_ret = get_params();
if (op_ret < 0) {
return;
}
- ups = make_unique<RGWUserPubSub>(store, s->owner.get_id());
+ ups.emplace(store, s->owner.get_id());
auto b = ups->get_bucket(bucket_info.bucket);
op_ret = b->remove_notification(topic_name);
if (op_ret < 0) {
- ldout(s->cct, 20) << "failed to remove notification, ret=" << op_ret << dendl;
+ ldout(s->cct, 1) << "failed to remove notification from topic '" << topic_name << "', ret=" << op_ret << dendl;
return;
}
+ ldout(s->cct, 20) << "successfully removed notification from topic '" << topic_name << "'" << dendl;
}
-class RGWPSDeleteNotif_ObjStore_S3 : public RGWPSCreateNotifOp {
-public:
- explicit RGWPSDeleteNotif_ObjStore_S3() {}
+// command: GET /notifications/bucket/<bucket>
+class RGWPSListNotifs_ObjStore : public RGWPSListNotifsOp {
+private:
+ rgw_pubsub_bucket_topics result;
int get_params() override {
- bool exists;
- topic_name = s->info.args.get("topic", &exists);
- if (!exists) {
- ldout(s->cct, 20) << "param 'topic' not provided" << dendl;
- return -EINVAL;
- }
- return notif_bucket_path(s->object.name, &bucket_name);
+ return notif_bucket_path(s->object.name, bucket_name);
}
-};
-
-class RGWPSListNotifsOp : public RGWOp {
-protected:
- string bucket_name;
- RGWBucketInfo bucket_info;
- std::unique_ptr<RGWUserPubSub> ups;
- rgw_pubsub_bucket_topics result;
-
public:
- RGWPSListNotifsOp() {}
-
- int verify_permission() override {
- int ret = get_params();
- if (ret < 0) {
- return ret;
- }
-
- ret = store->get_bucket_info(*s->sysobj_ctx, s->owner.get_id().tenant, bucket_name,
- bucket_info, nullptr, nullptr);
- if (ret < 0) {
- return ret;
- }
-
- if (bucket_info.owner != s->owner.get_id()) {
- ldout(s->cct, 20) << "user doesn't own bucket, cannot create topic" << dendl;
- return -EPERM;
- }
-
- return 0;
- }
- void pre_exec() override {
- rgw_bucket_object_pre_exec(s);
- }
void execute() override;
-
- const char* name() const override { return "pubsub_notifications_list"; }
- virtual RGWOpType get_type() override { return RGW_OP_PUBSUB_NOTIF_LIST; }
- virtual uint32_t op_mask() override { return RGW_OP_TYPE_READ; }
- virtual int get_params() = 0;
-};
-
-void RGWPSListNotifsOp::execute()
-{
- ups = make_unique<RGWUserPubSub>(store, s->owner.get_id());
- auto b = ups->get_bucket(bucket_info.bucket);
- op_ret = b->get_topics(&result);
- if (op_ret < 0) {
- ldout(s->cct, 20) << "failed to get topics, ret=" << op_ret << dendl;
- return;
- }
-
-}
-
-class RGWPSListNotifs_ObjStore_S3 : public RGWPSListNotifsOp {
-public:
- explicit RGWPSListNotifs_ObjStore_S3() {}
-
- int get_params() override {
- return notif_bucket_path(s->object.name, &bucket_name);
- }
-
void send_response() override {
if (op_ret) {
set_req_state_err(s, op_ret);
if (op_ret < 0) {
return;
}
-
encode_json("result", result, s->formatter);
rgw_flush_formatter_and_reset(s, s->formatter);
}
+ const char* name() const override { return "pubsub_notifications_list"; }
};
+void RGWPSListNotifs_ObjStore::execute()
+{
+ ups.emplace(store, s->owner.get_id());
+ auto b = ups->get_bucket(bucket_info.bucket);
+ op_ret = b->get_topics(&result);
+ if (op_ret < 0) {
+ ldout(s->cct, 1) << "failed to get topics, ret=" << op_ret << dendl;
+ return;
+ }
+}
-class RGWHandler_REST_PSNotifs_S3 : public RGWHandler_REST_S3 {
+// ceph specific notification handler factory
+class RGWHandler_REST_PSNotifs : public RGWHandler_REST_S3 {
protected:
int init_permissions(RGWOp* op) override {
return 0;
if (s->object.empty()) {
return nullptr;
}
- return new RGWPSListNotifs_ObjStore_S3();
+ return new RGWPSListNotifs_ObjStore();
}
RGWOp *op_put() override {
if (!s->object.empty()) {
- return new RGWPSCreateNotif_ObjStore_S3();
+ return new RGWPSCreateNotif_ObjStore();
}
return nullptr;
}
RGWOp *op_delete() override {
if (!s->object.empty()) {
- return new RGWPSDeleteNotif_ObjStore_S3();
+ return new RGWPSDeleteNotif_ObjStore();
}
return nullptr;
}
public:
- explicit RGWHandler_REST_PSNotifs_S3(const rgw::auth::StrategyRegistry& auth_registry) : RGWHandler_REST_S3(auth_registry) {}
- virtual ~RGWHandler_REST_PSNotifs_S3() {}
+ explicit RGWHandler_REST_PSNotifs(const rgw::auth::StrategyRegistry& auth_registry) : RGWHandler_REST_S3(auth_registry) {}
+ virtual ~RGWHandler_REST_PSNotifs() = default;
};
-
-RGWHandler_REST* RGWRESTMgr_PubSub_S3::get_handler(struct req_state* const s,
+// factory for ceph specific PubSub REST handlers
+RGWHandler_REST* RGWRESTMgr_PubSub::get_handler(struct req_state* const s,
const rgw::auth::StrategyRegistry& auth_registry,
const std::string& frontend_prefix)
{
- int ret =
- RGWHandler_REST_S3::init_from_header(s,
- RGW_FORMAT_JSON, true);
- if (ret < 0) {
+ if (RGWHandler_REST_S3::init_from_header(s, RGW_FORMAT_JSON, true) < 0) {
return nullptr;
}
+
+ RGWHandler_REST* handler{nullptr};
- RGWHandler_REST *handler = nullptr;;
-
+ // ceph specific PubSub API: topics/subscriptions/notifications are reserved bucket names
+ // this API is available only on RGWs that belong to a pubsub zone
if (s->init_state.url_bucket == "topics") {
- handler = new RGWHandler_REST_PSTopic_S3(auth_registry);
- }
-
- if (s->init_state.url_bucket == "subscriptions") {
- handler = new RGWHandler_REST_PSSub_S3(auth_registry);
- }
-
- if (s->init_state.url_bucket == "notifications") {
- handler = new RGWHandler_REST_PSNotifs_S3(auth_registry);
- }
-
+ handler = new RGWHandler_REST_PSTopic(auth_registry);
+ } else if (s->init_state.url_bucket == "subscriptions") {
+ handler = new RGWHandler_REST_PSSub(auth_registry);
+ } else if (s->init_state.url_bucket == "notifications") {
+ handler = new RGWHandler_REST_PSNotifs(auth_registry);
+ } else if (s->info.args.exists("notification")) {
+ const int ret = RGWHandler_REST::allocate_formatter(s, RGW_FORMAT_XML, true);
+ if (ret == 0) {
+ handler = new RGWHandler_REST_PSNotifs_S3(auth_registry);
+ }
+ }
+
ldout(s->cct, 20) << __func__ << " handler=" << (handler ? typeid(*handler).name() : "<null>") << dendl;
+
return handler;
}
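
The three reserved bucket names above ("topics", "subscriptions" and "notifications") make up the ceph-specific PubSub REST API. A minimal sketch of a round trip through these routes, using the ``PSTopic``/``PSNotification``/``PSSubscription`` test helpers that the tests below import from ``zone_ps`` (the availability of ``rgw_multi.zone_ps`` and an authenticated ``conn`` to a pubsub-zone RGW are assumptions)::

    from rgw_multi.zone_ps import PSTopic, PSNotification, PSSubscription

    def ceph_pubsub_roundtrip(conn, bucket_name):
        # "topics" route - handled by RGWHandler_REST_PSTopic
        topic = PSTopic(conn, bucket_name + '_topic')
        topic.set_config()
        # "notifications" route - handled by RGWHandler_REST_PSNotifs
        notif = PSNotification(conn, bucket_name, bucket_name + '_topic')
        notif.set_config()
        # "subscriptions" route - handled by RGWHandler_REST_PSSub
        sub = PSSubscription(conn, bucket_name + '_sub', bucket_name + '_topic')
        sub.set_config()
        # pull whatever events were stored for the subscription
        events, _ = sub.get_events()
        # cleanup in reverse order
        sub.del_config()
        notif.del_config()
        topic.del_config()
        return events
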
#include "rgw_rest.h"
-class RGWRESTMgr_PubSub_S3 : public RGWRESTMgr {
- RGWRESTMgr *next;
+class RGWRESTMgr_PubSub : public RGWRESTMgr {
public:
- explicit RGWRESTMgr_PubSub_S3(RGWRESTMgr *_next) : next(_next) {}
-
- RGWHandler_REST *get_handler(struct req_state* s,
+ virtual RGWHandler_REST* get_handler(struct req_state* s,
const rgw::auth::StrategyRegistry& auth_registry,
const std::string& frontend_prefix) override;
};
add_executable(unittest_rgw_xml test_rgw_xml.cc)
add_ceph_unittest(unittest_rgw_xml)
-target_link_libraries(unittest_rgw_xml rgw_a ${EXPAT_LIBRARIES})
+target_link_libraries(unittest_rgw_xml ${rgw_libs} ${EXPAT_LIBRARIES})
+
+# unittest_rgw_arn
+add_executable(unittest_rgw_arn test_rgw_arn.cc)
+add_ceph_unittest(unittest_rgw_arn)
+
+target_link_libraries(unittest_rgw_arn ${rgw_libs})
VALID_PASSWORD = password;
}
+std::atomic<unsigned> g_tag_skip = 0;
+std::atomic<int> g_multiple = 0;
+
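+// make the mock reply with a single ack/nack carrying the AMQP 'multiple' flag, covering 'tag_skip' queued deliveries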
+void set_multiple(unsigned tag_skip) {
+ g_multiple = 1;
+ g_tag_skip = tag_skip;
+}
+
+void reset_multiple() {
+ g_multiple = 0;
+ g_tag_skip = 0;
+}
+
bool FAIL_NEXT_WRITE(false);
bool FAIL_NEXT_READ(false);
bool REPLY_ACK(true);
// "wait" for queue
usleep(tv->tv_sec*1000000+tv->tv_usec);
// read from queue
- if (REPLY_ACK) {
- if (state->ack_list.pop(state->ack)) {
- decoded_frame->frame_type = AMQP_FRAME_METHOD;
+ if (g_multiple) {
+ // pop multiples and reply once at the end
+ for (auto i = 0U; i < g_tag_skip; ++i) {
+ if (REPLY_ACK && !state->ack_list.pop(state->ack)) {
+ // queue is empty
+ return AMQP_STATUS_TIMEOUT;
+ } else if (!REPLY_ACK && !state->nack_list.pop(state->nack)) {
+ // queue is empty
+ return AMQP_STATUS_TIMEOUT;
+ }
+ }
+ if (REPLY_ACK) {
+ state->ack.multiple = g_multiple;
decoded_frame->payload.method.id = AMQP_BASIC_ACK_METHOD;
decoded_frame->payload.method.decoded = &state->ack;
- state->reply.reply_type = AMQP_RESPONSE_NORMAL;
- return AMQP_STATUS_OK;
} else {
- // queue is empty
- return AMQP_STATUS_TIMEOUT;
- }
- } else {
- if (state->nack_list.pop(state->nack)) {
- decoded_frame->frame_type = AMQP_FRAME_METHOD;
+ state->nack.multiple = g_multiple;
decoded_frame->payload.method.id = AMQP_BASIC_NACK_METHOD;
decoded_frame->payload.method.decoded = &state->nack;
- state->reply.reply_type = AMQP_RESPONSE_NORMAL;
- return AMQP_STATUS_OK;
- } else {
- // queue is empty
- return AMQP_STATUS_TIMEOUT;
}
+ decoded_frame->frame_type = AMQP_FRAME_METHOD;
+ state->reply.reply_type = AMQP_RESPONSE_NORMAL;
+ reset_multiple();
+ return AMQP_STATUS_OK;
+ }
+ // pop replies one by one
+ if (REPLY_ACK && state->ack_list.pop(state->ack)) {
+ state->ack.multiple = g_multiple;
+ decoded_frame->frame_type = AMQP_FRAME_METHOD;
+ decoded_frame->payload.method.id = AMQP_BASIC_ACK_METHOD;
+ decoded_frame->payload.method.decoded = &state->ack;
+ state->reply.reply_type = AMQP_RESPONSE_NORMAL;
+ return AMQP_STATUS_OK;
+ } else if (!REPLY_ACK && state->nack_list.pop(state->nack)) {
+ state->nack.multiple = g_multiple;
+ decoded_frame->frame_type = AMQP_FRAME_METHOD;
+ decoded_frame->payload.method.id = AMQP_BASIC_NACK_METHOD;
+ decoded_frame->payload.method.decoded = &state->nack;
+ state->reply.reply_type = AMQP_RESPONSE_NORMAL;
+ return AMQP_STATUS_OK;
+ } else {
+ // queue is empty
+ return AMQP_STATUS_TIMEOUT;
}
}
return AMQP_STATUS_CONNECTION_CLOSED;
void set_valid_host(const std::string& host);
void set_valid_vhost(const std::string& vhost);
void set_valid_user(const std::string& user, const std::string& password);
+void set_multiple(unsigned tag);
+void reset_multiple();
extern bool FAIL_NEXT_WRITE; // default "false"
extern bool FAIL_NEXT_READ; // default "false"
return ['--access-key', self.access_key, '--secret', self.secret]
class User(SystemObject):
- def __init__(self, uid, data = None, name = None, credentials = None):
+ def __init__(self, uid, data = None, name = None, credentials = None, tenant = None):
self.name = name
self.credentials = credentials or []
+ self.tenant = tenant
super(User, self).__init__(data, uid)
def user_arg(self):
""" command-line argument to specify this user """
- return ['--uid', self.id]
+ args = ['--uid', self.id]
+ if self.tenant:
+ args += ['--tenant', self.tenant]
+ return args
def build_command(self, command):
""" build a command line for the given command and args """
self.checkpoint_delay = kwargs.get('checkpoint_delay', 5)
# allow some time for realm reconfiguration after changing master zone
self.reconfigure_delay = kwargs.get('reconfigure_delay', 5)
+ self.tenant = kwargs.get('tenant', '')
# rgw multisite tests, written against the interfaces provided in rgw_multi.
# these tests must be initialized and run by another module that provides
config = _config or Config()
realm_meta_checkpoint(realm)
+def get_user():
+ return user.id if user is not None else ''
+
+def get_tenant():
+ return config.tenant if config is not None and config.tenant is not None else ''
+
def get_realm():
return realm
def bilog_list(zone, bucket, args = None):
cmd = ['bilog', 'list', '--bucket', bucket] + (args or [])
+ cmd += ['--tenant', config.tenant, '--uid', user.name] if config.tenant else []
bilog, _ = zone.cluster.admin(cmd, read_only=True)
bilog = bilog.decode('utf-8')
return json.loads(bilog)
cmd = ['bucket', 'sync', 'markers'] + target_zone.zone_args()
cmd += ['--source-zone', source_zone.name]
cmd += ['--bucket', bucket_name]
+ cmd += ['--tenant', config.tenant, '--uid', user.name] if config.tenant else []
while True:
bucket_sync_status_json, retcode = target_zone.cluster.admin(cmd, check_retcode=False, read_only=True)
if retcode == 0:
def bucket_source_log_status(source_zone, bucket_name):
cmd = ['bilog', 'status'] + source_zone.zone_args()
cmd += ['--bucket', bucket_name]
+ cmd += ['--tenant', config.tenant, '--uid', user.name] if config.tenant else []
source_cluster = source_zone.cluster
bilog_status_json, retcode = source_cluster.admin(cmd, read_only=True)
bilog_status = json.loads(bilog_status_json.decode('utf-8'))
import logging
import json
import tempfile
-from rgw_multi.tests import get_realm, \
+import BaseHTTPServer
+import SocketServer
+import random
+import threading
+import subprocess
+import socket
+import time
+import os
+from .tests import get_realm, \
ZonegroupConns, \
zonegroup_meta_checkpoint, \
zone_meta_checkpoint, \
zone_bucket_checkpoint, \
zone_data_checkpoint, \
+ zonegroup_bucket_checkpoint, \
check_bucket_eq, \
- gen_bucket_name
-from rgw_multi.zone_ps import PSTopic, PSNotification, PSSubscription
+ gen_bucket_name, \
+ get_user, \
+ get_tenant
+from .zone_ps import PSTopic, PSTopicS3, PSNotification, PSSubscription, PSNotificationS3, print_connection_info, delete_all_s3_topics
+from multisite import User
from nose import SkipTest
from nose.tools import assert_not_equal, assert_equal
# configure logging for the tests module
-log = logging.getLogger('rgw_multi.tests')
+log = logging.getLogger(__name__)
+
+skip_push_tests = True
####################################
# utility functions for pubsub tests
####################################
+def set_contents_from_string(key, content):
+ try:
+ key.set_contents_from_string(content)
+ except Exception as e:
+ print 'Error: ' + str(e)
+
+
+# HTTP endpoint functions
+# multithreaded streaming server, based on: https://stackoverflow.com/questions/46210672/
+
+class HTTPPostHandler(BaseHTTPServer.BaseHTTPRequestHandler):
+ """HTTP POST hanler class storing the received events in its http server"""
+ def do_POST(self):
+ """implementation of POST handler"""
+ try:
+ content_length = int(self.headers['Content-Length'])
+ body = self.rfile.read(content_length)
+ log.info('HTTP Server (%d) received event: %s', self.server.worker_id, str(body))
+ self.server.append(json.loads(body))
+ except:
+ log.error('HTTP Server received empty event')
+ self.send_response(400)
+ else:
+ self.send_response(100)
+ finally:
+ self.end_headers()
+
+
+class HTTPServerWithEvents(BaseHTTPServer.HTTPServer):
+ """HTTP server used by the handler to store events"""
+ def __init__(self, addr, handler, worker_id):
+ BaseHTTPServer.HTTPServer.__init__(self, addr, handler, False)
+ self.worker_id = worker_id
+ self.events = []
+
+ def append(self, event):
+ self.events.append(event)
+
+
+class HTTPServerThread(threading.Thread):
+ """thread for running the HTTP server. reusing the same socket for all threads"""
+ def __init__(self, i, sock, addr):
+ threading.Thread.__init__(self)
+ self.i = i
+ self.daemon = True
+ self.httpd = HTTPServerWithEvents(addr, HTTPPostHandler, i)
+ self.httpd.socket = sock
+ # prevent the HTTP server from re-binding every handler
+ self.httpd.server_bind = self.server_close = lambda self: None
+ self.start()
+
+ def run(self):
+ try:
+ log.info('HTTP Server (%d) started on: %s', self.i, self.httpd.server_address)
+ self.httpd.serve_forever()
+ log.info('HTTP Server (%d) ended', self.i)
+ except Exception as error:
+ # could happen if the server r/w to a closing socket during shutdown
+ log.info('HTTP Server (%d) ended unexpectedly: %s', self.i, str(error))
+
+ def close(self):
+ self.httpd.shutdown()
+
+ def get_events(self):
+ return self.httpd.events
+
+ def reset_events(self):
+ self.httpd.events = []
+
+
+class StreamingHTTPServer:
+ """multi-threaded http server class also holding list of events received into the handler
+ each thread has its own server, and all servers share the same socket"""
+ def __init__(self, host, port, num_workers=100):
+ addr = (host, port)
+ self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+ self.sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
+ self.sock.bind(addr)
+ self.sock.listen(num_workers)
+ self.workers = [HTTPServerThread(i, self.sock, addr) for i in range(num_workers)]
+
+ def verify_s3_events(self, keys, exact_match=False, deletions=False):
+ """verify stored s3 records agains a list of keys"""
+ events = []
+ for worker in self.workers:
+ events += worker.get_events()
+ worker.reset_events()
+ verify_s3_records_by_elements(events, keys, exact_match=exact_match, deletions=deletions)
+
+ def verify_events(self, keys, exact_match=False, deletions=False):
+ """verify stored events agains a list of keys"""
+ events = []
+ for worker in self.workers:
+ events += worker.get_events()
+ worker.reset_events()
+ verify_events_by_elements(events, keys, exact_match=exact_match, deletions=deletions)
+
+ def close(self):
+ """close all workers in the http server and wait for it to finish"""
+ # make sure that the shared socket is closed
+ # this is needed in case that one of the threads is blocked on the socket
+ self.sock.shutdown(socket.SHUT_RDWR)
+ self.sock.close()
+ # wait for server threads to finish
+ for worker in self.workers:
+ worker.close()
+ worker.join()
+
+
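A minimal usage sketch for the streaming HTTP endpoint above (the port number is a hypothetical free port; an RGW configured to push to this endpoint, and the ``get_ip`` helper defined further below, are assumed)::

    def http_endpoint_example(keys):
        host = get_ip()
        port = 9001  # hypothetical free port
        server = StreamingHTTPServer(host, port)
        try:
            # a topic created with 'push-endpoint=http://<host>:<port>' and a bucket
            # notification pointing at it make the radosgw POST S3 records here;
            # after the objects in 'keys' are written, check the delivered records
            server.verify_s3_events(keys, exact_match=False)
        finally:
            server.close()
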
+# AMQP endpoint functions
+
+rabbitmq_port = 5672
+
+class AMQPReceiver(object):
+ """class for receiving and storing messages on a topic from the AMQP broker"""
+ def __init__(self, exchange, topic):
+ import pika
+ hostname = get_ip()
+ remaining_retries = 10
+ while remaining_retries > 0:
+ try:
+ connection = pika.BlockingConnection(pika.ConnectionParameters(host=hostname, port=rabbitmq_port))
+ break
+ except Exception as error:
+ remaining_retries -= 1
+ print 'failed to connect to rabbitmq (remaining retries ' + str(remaining_retries) + '): ' + str(error)
+ time.sleep(0.5)
+
+ if remaining_retries == 0:
+ raise Exception('failed to connect to rabbitmq - no retries left')
+
+ self.channel = connection.channel()
+ self.channel.exchange_declare(exchange=exchange, exchange_type='topic', durable=True)
+ result = self.channel.queue_declare('', exclusive=True)
+ queue_name = result.method.queue
+ self.channel.queue_bind(exchange=exchange, queue=queue_name, routing_key=topic)
+ self.channel.basic_consume(queue=queue_name,
+ on_message_callback=self.on_message,
+ auto_ack=True)
+ self.events = []
+ self.topic = topic
+
+ def on_message(self, ch, method, properties, body):
+ """callback invoked when a new message arrive on the topic"""
+ log.info('AMQP received event for topic %s:\n %s', self.topic, body)
+ self.events.append(json.loads(body))
+
+ # TODO create a base class for the AMQP and HTTP cases
+ def verify_s3_events(self, keys, exact_match=False, deletions=False):
+ """verify stored s3 records agains a list of keys"""
+ verify_s3_records_by_elements(self.events, keys, exact_match=exact_match, deletions=deletions)
+ self.events = []
+
+ def verify_events(self, keys, exact_match=False, deletions=False):
+ """verify stored events agains a list of keys"""
+ verify_events_by_elements(self.events, keys, exact_match=exact_match, deletions=deletions)
+ self.events = []
+
+ def get_and_reset_events(self):
+ tmp = self.events
+ self.events = []
+ return tmp
+
+
+def amqp_receiver_thread_runner(receiver):
+ """main thread function for the amqp receiver"""
+ try:
+ log.info('AMQP receiver started')
+ receiver.channel.start_consuming()
+ log.info('AMQP receiver ended')
+ except Exception as error:
+ log.info('AMQP receiver ended unexpectedly: %s', str(error))
+
+
+def create_amqp_receiver_thread(exchange, topic):
+ """create amqp receiver and thread"""
+ receiver = AMQPReceiver(exchange, topic)
+ task = threading.Thread(target=amqp_receiver_thread_runner, args=(receiver,))
+ task.daemon = True
+ return task, receiver
+
+
+def stop_amqp_receiver(receiver, task):
+ """stop the receiver thread and wait for it to finis"""
+ try:
+ receiver.channel.stop_consuming()
+ log.info('stopping AMQP receiver')
+ except Exception as error:
+ log.info('failed to gracefully stop AMQP receiver: %s', str(error))
+ task.join(5)
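
The AMQP helpers above follow the same create/verify/stop pattern in every push test; a condensed sketch (a local rabbitmq broker and an RGW pushing to the given exchange/topic are assumed)::

    def amqp_receiver_example(exchange, topic, keys):
        task, receiver = create_amqp_receiver_thread(exchange, topic)
        task.start()
        try:
            # ... create the objects in 'keys' on the RGW side and wait for delivery ...
            receiver.verify_s3_events(keys, exact_match=False)
        finally:
            stop_amqp_receiver(receiver, task)
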
def check_ps_configured():
"""check if at least one pubsub zone exist"""
realm = get_realm()
zonegroup = realm.master_zonegroup()
- es_zones = zonegroup.zones_by_type.get("pubsub")
- if not es_zones:
+ ps_zones = zonegroup.zones_by_type.get("pubsub")
+ if not ps_zones:
raise SkipTest("Requires at least one PS zone")
key_found = False
for event in events:
if event['info']['bucket']['name'] == key.bucket.name and \
- event['info']['key']['name'] == key.name:
+ event['info']['key']['name'] == key.name:
if deletions and event['event'] == 'OBJECT_DELETE':
key_found = True
break
assert False, err
-def init_env():
+def verify_s3_records_by_elements(records, keys, exact_match=False, deletions=False):
+ """ verify there is at least one record per element """
+ err = ''
+ for key in keys:
+ key_found = False
+ for record in records:
+ if record['s3']['bucket']['name'] == key.bucket.name and \
+ record['s3']['object']['key'] == key.name:
+ if deletions and 'ObjectRemoved' in record['eventName']:
+ key_found = True
+ break
+ elif not deletions and 'ObjectCreated' in record['eventName']:
+ key_found = True
+ break
+ if not key_found:
+ err = 'no ' + ('deletion' if deletions else 'creation') + ' event found for key: ' + str(key)
+ for record in records:
+ log.error(str(record['s3']['bucket']['name']) + ',' + str(record['s3']['object']['key']))
+ assert False, err
+
+ if not len(records) == len(keys):
+ err = 'superfluous records are found'
+ log.warning(err)
+ if exact_match:
+ for record in records:
+ log.error(str(record['s3']['bucket']['name']) + ',' + str(record['s3']['object']['key']))
+ assert False, err
+
+
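For reference, the record and key shapes that ``verify_s3_records_by_elements`` matches on; ``FakeBucket``/``FakeKey`` are hypothetical stand-ins for the boto objects the real tests pass in::

    class FakeBucket(object):
        def __init__(self, name):
            self.name = name

    class FakeKey(object):
        def __init__(self, bucket_name, name):
            self.bucket = FakeBucket(bucket_name)
            self.name = name

    # only the bucket name, the object key and the 'ObjectCreated'/'ObjectRemoved'
    # substring of 'eventName' are inspected by the verifier
    record = {'eventName': 's3:ObjectCreated:Put',
              's3': {'bucket': {'name': 'mybucket'},
                     'object': {'key': 'hello.txt'}}}
    verify_s3_records_by_elements([record], [FakeKey('mybucket', 'hello.txt')],
                                  exact_match=True)
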
+def init_rabbitmq():
+ """ start a rabbitmq broker """
+ hostname = get_ip()
+ #port = str(random.randint(20000, 30000))
+ #data_dir = './' + port + '_data'
+ #log_dir = './' + port + '_log'
+ #print('')
+ #try:
+ # os.mkdir(data_dir)
+ # os.mkdir(log_dir)
+ #except:
+ # print('rabbitmq directories already exists')
+ #env = {'RABBITMQ_NODE_PORT': port,
+ # 'RABBITMQ_NODENAME': 'rabbit'+ port + '@' + hostname,
+ # 'RABBITMQ_USE_LONGNAME': 'true',
+ # 'RABBITMQ_MNESIA_BASE': data_dir,
+ # 'RABBITMQ_LOG_BASE': log_dir}
+ # TODO: support multiple brokers per host using env
+ # make sure we don't collide with the default
+ try:
+ proc = subprocess.Popen('rabbitmq-server')
+ except Exception as error:
+ log.info('failed to execute rabbitmq-server: %s', str(error))
+ print 'failed to execute rabbitmq-server: %s' % str(error)
+ return None
+ # TODO add rabbitmq checkpoint instead of sleep
+ time.sleep(5)
+ return proc #, port, data_dir, log_dir
+
+
+def clean_rabbitmq(proc): #, data_dir, log_dir)
+ """ stop the rabbitmq broker """
+ try:
+ subprocess.call(['rabbitmqctl', 'stop'])
+ time.sleep(5)
+ proc.terminate()
+ except:
+ log.info('rabbitmq server already terminated')
+ # TODO: add directory cleanup once multiple brokers are supported
+ #try:
+ # os.rmdir(data_dir)
+ # os.rmdir(log_dir)
+ #except:
+ # log.info('rabbitmq directories already removed')
+
+
+def init_env(require_ps=True):
"""initialize the environment"""
- check_ps_configured()
+ if require_ps:
+ check_ps_configured()
realm = get_realm()
zonegroup = realm.master_zonegroup()
zones.append(conn)
assert_not_equal(len(zones), 0)
- assert_not_equal(len(ps_zones), 0)
+ if require_ps:
+ assert_not_equal(len(ps_zones), 0)
return zones, ps_zones
+def get_ip():
+ """ This method returns the "primary" IP on the local box (the one with a default route)
+ source: https://stackoverflow.com/a/28950776/711085
+ this is needed because on the teuthology machines: socket.getfqdn()/socket.gethostname() return 127.0.0.1 """
+ s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
+ try:
+ # address should not be reachable
+ s.connect(('10.255.255.255', 1))
+ ip = s.getsockname()[0]
+ finally:
+ s.close()
+ return ip
+
+
TOPIC_SUFFIX = "_topic"
SUB_SUFFIX = "_sub"
+NOTIFICATION_SUFFIX = "_notif"
##############
# pubsub tests
##############
-
-def test_ps_topic():
- """ test set/get/delete of topic """
- _, ps_zones = init_env()
+def test_ps_info():
+ """ log information for manual testing """
+ return SkipTest("only used in manual testing")
+ zones, ps_zones = init_env()
+ realm = get_realm()
+ zonegroup = realm.master_zonegroup()
bucket_name = gen_bucket_name()
- topic_name = bucket_name+TOPIC_SUFFIX
-
- # create topic
- topic_conf = PSTopic(ps_zones[0].conn, topic_name)
- _, status = topic_conf.set_config()
- assert_equal(status/100, 2)
- # get topic
- result, _ = topic_conf.get_config()
- # verify topic content
- parsed_result = json.loads(result)
- assert_equal(parsed_result['topic']['name'], topic_name)
- assert_equal(len(parsed_result['subs']), 0)
- # delete topic
- _, status = topic_conf.del_config()
- assert_equal(status/100, 2)
- # verift topic is deleted
- result, _ = topic_conf.get_config()
- parsed_result = json.loads(result)
- assert_equal(parsed_result['Code'], 'NoSuchKey')
+ # create bucket on the first of the rados zones
+ bucket = zones[0].create_bucket(bucket_name)
+ # create objects in the bucket
+ number_of_objects = 10
+ for i in range(number_of_objects):
+ key = bucket.new_key(str(i))
+ key.set_contents_from_string('bar')
+ print 'Zonegroup: ' + zonegroup.name
+ print 'user: ' + get_user()
+ print 'tenant: ' + get_tenant()
+ print 'Master Zone'
+ print_connection_info(zones[0].conn)
+ print 'PubSub Zone'
+ print_connection_info(ps_zones[0].conn)
+ print 'Bucket: ' + bucket_name
-def test_ps_notification():
- """ test set/get/delete of notification """
+def test_ps_s3_notification_low_level():
+ """ test low level implementation of s3 notifications """
zones, ps_zones = init_env()
bucket_name = gen_bucket_name()
- topic_name = bucket_name+TOPIC_SUFFIX
-
- # create topic
- topic_conf = PSTopic(ps_zones[0].conn, topic_name)
- topic_conf.set_config()
# create bucket on the first of the rados zones
zones[0].create_bucket(bucket_name)
# wait for sync
zone_meta_checkpoint(ps_zones[0].zone)
- # create notifications
- notification_conf = PSNotification(ps_zones[0].conn, bucket_name,
- topic_name)
- _, status = notification_conf.set_config()
+ # create topic
+ topic_name = bucket_name + TOPIC_SUFFIX
+ topic_conf = PSTopic(ps_zones[0].conn, topic_name)
+ result, status = topic_conf.set_config()
assert_equal(status/100, 2)
- # get notification
- result, _ = notification_conf.get_config()
parsed_result = json.loads(result)
- assert_equal(len(parsed_result['topics']), 1)
- assert_equal(parsed_result['topics'][0]['topic']['name'],
- topic_name)
- # delete notification
- _, status = notification_conf.del_config()
+ topic_arn = parsed_result['arn']
+ # create s3 notification
+ notification_name = bucket_name + NOTIFICATION_SUFFIX
+ generated_topic_name = notification_name+'_'+topic_name
+ topic_conf_list = [{'Id': notification_name,
+ 'TopicArn': topic_arn,
+ 'Events': ['s3:ObjectCreated:*']
+ }]
+ s3_notification_conf = PSNotificationS3(ps_zones[0].conn, bucket_name, topic_conf_list)
+ _, status = s3_notification_conf.set_config()
assert_equal(status/100, 2)
- # TODO: deletion cannot be verified via GET
- # result, _ = notification_conf.get_config()
- # parsed_result = json.loads(result)
- # assert_equal(parsed_result['Code'], 'NoSuchKey')
-
- # cleanup
- topic_conf.del_config()
- zones[0].delete_bucket(bucket_name)
-
-
-def test_ps_notification_events():
- """ test set/get/delete of notification on specific events"""
- zones, ps_zones = init_env()
- bucket_name = gen_bucket_name()
- topic_name = bucket_name+TOPIC_SUFFIX
-
- # create topic
- topic_conf = PSTopic(ps_zones[0].conn, topic_name)
- topic_conf.set_config()
- # create bucket on the first of the rados zones
- zones[0].create_bucket(bucket_name)
- # wait for sync
zone_meta_checkpoint(ps_zones[0].zone)
- # create notifications
- events = "OBJECT_CREATE,OBJECT_DELETE"
- notification_conf = PSNotification(ps_zones[0].conn, bucket_name,
- topic_name,
- events)
- _, status = notification_conf.set_config()
+ # get auto-generated topic
+ generated_topic_conf = PSTopic(ps_zones[0].conn, generated_topic_name)
+ result, status = generated_topic_conf.get_config()
+ parsed_result = json.loads(result)
assert_equal(status/100, 2)
- # get notification
- result, _ = notification_conf.get_config()
+ assert_equal(parsed_result['topic']['name'], generated_topic_name)
+ # get auto-generated notification
+ notification_conf = PSNotification(ps_zones[0].conn, bucket_name,
+ generated_topic_name)
+ result, status = notification_conf.get_config()
parsed_result = json.loads(result)
+ assert_equal(status/100, 2)
assert_equal(len(parsed_result['topics']), 1)
- assert_equal(parsed_result['topics'][0]['topic']['name'],
- topic_name)
- assert_not_equal(len(parsed_result['topics'][0]['events']), 0)
- # TODO add test for invalid event name
+ # get auto-generated subscription
+ sub_conf = PSSubscription(ps_zones[0].conn, notification_name,
+ generated_topic_name)
+ result, status = sub_conf.get_config()
+ parsed_result = json.loads(result)
+ assert_equal(status/100, 2)
+ assert_equal(parsed_result['topic'], generated_topic_name)
+ # delete s3 notification
+ _, status = s3_notification_conf.del_config(notification=notification_name)
+ assert_equal(status/100, 2)
+ # delete topic
+ _, status = topic_conf.del_config()
+ assert_equal(status/100, 2)
+
+ # verify low-level cleanup
+ _, status = generated_topic_conf.get_config()
+ assert_equal(status, 404)
+ result, status = notification_conf.get_config()
+ parsed_result = json.loads(result)
+ assert_equal(len(parsed_result['topics']), 0)
+ # TODO should return 404
+ # assert_equal(status, 404)
+ result, status = sub_conf.get_config()
+ parsed_result = json.loads(result)
+ assert_equal(parsed_result['topic'], '')
+ # TODO should return 404
+ # assert_equal(status, 404)
# cleanup
- notification_conf.del_config()
topic_conf.del_config()
+ # delete the bucket
zones[0].delete_bucket(bucket_name)
-def test_ps_subscription():
- """ test set/get/delete of subscription """
+def test_ps_s3_notification_records():
+ """ test s3 records fetching """
zones, ps_zones = init_env()
bucket_name = gen_bucket_name()
- topic_name = bucket_name+TOPIC_SUFFIX
-
- # create topic
- topic_conf = PSTopic(ps_zones[0].conn, topic_name)
- topic_conf.set_config()
# create bucket on the first of the rados zones
bucket = zones[0].create_bucket(bucket_name)
# wait for sync
zone_meta_checkpoint(ps_zones[0].zone)
- # create notifications
- notification_conf = PSNotification(ps_zones[0].conn, bucket_name,
- topic_name)
- _, status = notification_conf.set_config()
+ # create topic
+ topic_name = bucket_name + TOPIC_SUFFIX
+ topic_conf = PSTopic(ps_zones[0].conn, topic_name)
+ result, status = topic_conf.set_config()
assert_equal(status/100, 2)
- # create subscription
- sub_conf = PSSubscription(ps_zones[0].conn, bucket_name+SUB_SUFFIX,
+ parsed_result = json.loads(result)
+ topic_arn = parsed_result['arn']
+ # create s3 notification
+ notification_name = bucket_name + NOTIFICATION_SUFFIX
+ topic_conf_list = [{'Id': notification_name,
+ 'TopicArn': topic_arn,
+ 'Events': ['s3:ObjectCreated:*']
+ }]
+ s3_notification_conf = PSNotificationS3(ps_zones[0].conn, bucket_name, topic_conf_list)
+ _, status = s3_notification_conf.set_config()
+ assert_equal(status/100, 2)
+ zone_meta_checkpoint(ps_zones[0].zone)
+ # get auto-generated subscription
+ sub_conf = PSSubscription(ps_zones[0].conn, notification_name,
topic_name)
- _, status = sub_conf.set_config()
+ _, status = sub_conf.get_config()
assert_equal(status/100, 2)
- # get the subscription
- result, _ = sub_conf.get_config()
- parsed_result = json.loads(result)
- assert_equal(parsed_result['topic'], topic_name)
# create objects in the bucket
number_of_objects = 10
for i in range(number_of_objects):
# wait for sync
zone_bucket_checkpoint(ps_zones[0].zone, zones[0].zone, bucket_name)
- # get the create events from the subscription
+ # get the events from the subscription
result, _ = sub_conf.get_events()
parsed_result = json.loads(result)
- for event in parsed_result['events']:
- log.debug('Event: objname: "' + str(event['info']['key']['name']) + '" type: "' + str(event['event']) + '"')
+ for record in parsed_result['Records']:
+ log.debug(record)
keys = list(bucket.list())
- # TODO: set exact_match to true
- verify_events_by_elements(parsed_result['events'], keys, exact_match=False)
- # delete objects from the bucket
- for key in bucket.list():
- key.delete()
- # wait for sync
- zone_meta_checkpoint(ps_zones[0].zone)
- zone_bucket_checkpoint(ps_zones[0].zone, zones[0].zone, bucket_name)
-
- # get the delete events from the subscriptions
- result, _ = sub_conf.get_events()
- for event in parsed_result['events']:
- log.debug('Event: objname: "' + str(event['info']['key']['name']) + '" type: "' + str(event['event']) + '"')
- # TODO: check deletions
- # verify_events_by_elements(parsed_result['events'], keys, exact_match=False, deletions=True)
- # we should see the creations as well as the deletions
- # delete subscription
- _, status = sub_conf.del_config()
- assert_equal(status/100, 2)
- result, _ = sub_conf.get_config()
- parsed_result = json.loads(result)
- assert_equal(parsed_result['topic'], '')
- # TODO should return "no-key" instead
- # assert_equal(parsed_result['Code'], 'NoSuchKey')
+ # TODO: use exact match
+ verify_s3_records_by_elements(parsed_result['Records'], keys, exact_match=False)
# cleanup
- notification_conf.del_config()
+ _, status = s3_notification_conf.del_config()
topic_conf.del_config()
+ # delete the keys
+ for key in bucket.list():
+ key.delete()
zones[0].delete_bucket(bucket_name)
-def test_ps_event_type_subscription():
- """ test subscriptions for different events """
+def test_ps_s3_notification():
+ """ test s3 notification set/get/delete """
zones, ps_zones = init_env()
bucket_name = gen_bucket_name()
-
- # create topic for objects creation
- topic_create_name = bucket_name+TOPIC_SUFFIX+'_create'
- topic_create_conf = PSTopic(ps_zones[0].conn, topic_create_name)
- topic_create_conf.set_config()
- # create topic for objects deletion
- topic_delete_name = bucket_name+TOPIC_SUFFIX+'_delete'
- topic_delete_conf = PSTopic(ps_zones[0].conn, topic_delete_name)
- topic_delete_conf.set_config()
- # create topic for all events
- topic_name = bucket_name+TOPIC_SUFFIX+'_all'
- topic_conf = PSTopic(ps_zones[0].conn, topic_name)
- topic_conf.set_config()
# create bucket on the first of the rados zones
- bucket = zones[0].create_bucket(bucket_name)
+ zones[0].create_bucket(bucket_name)
# wait for sync
zone_meta_checkpoint(ps_zones[0].zone)
- zone_bucket_checkpoint(ps_zones[0].zone, zones[0].zone, bucket_name)
- # create notifications for objects creation
- notification_create_conf = PSNotification(ps_zones[0].conn, bucket_name,
- topic_create_name, "OBJECT_CREATE")
- _, status = notification_create_conf.set_config()
+ # create topic
+ topic_name = bucket_name + TOPIC_SUFFIX
+ topic_conf = PSTopic(ps_zones[0].conn, topic_name)
+ response, status = topic_conf.set_config()
assert_equal(status/100, 2)
- # create notifications for objects deletion
- notification_delete_conf = PSNotification(ps_zones[0].conn, bucket_name,
- topic_delete_name, "OBJECT_DELETE")
- _, status = notification_delete_conf.set_config()
+ parsed_result = json.loads(response)
+ topic_arn = parsed_result['arn']
+ # create one s3 notification
+ notification_name1 = bucket_name + NOTIFICATION_SUFFIX + '_1'
+ topic_conf_list = [{'Id': notification_name1,
+ 'TopicArn': topic_arn,
+ 'Events': ['s3:ObjectCreated:*']
+ }]
+ s3_notification_conf1 = PSNotificationS3(ps_zones[0].conn, bucket_name, topic_conf_list)
+ response, status = s3_notification_conf1.set_config()
assert_equal(status/100, 2)
- # create notifications for all events
- notification_conf = PSNotification(ps_zones[0].conn, bucket_name,
- topic_name, "OBJECT_DELETE,OBJECT_CREATE")
- _, status = notification_conf.set_config()
+ # create another s3 notification with the same topic
+ notification_name2 = bucket_name + NOTIFICATION_SUFFIX + '_2'
+ topic_conf_list = [{'Id': notification_name2,
+ 'TopicArn': topic_arn,
+ 'Events': ['s3:ObjectCreated:*', 's3:ObjectRemoved:*']
+ }]
+ s3_notification_conf2 = PSNotificationS3(ps_zones[0].conn, bucket_name, topic_conf_list)
+ response, status = s3_notification_conf2.set_config()
assert_equal(status/100, 2)
- # create subscription for objects creation
- sub_create_conf = PSSubscription(ps_zones[0].conn, bucket_name+SUB_SUFFIX+'_create',
- topic_create_name)
- _, status = sub_create_conf.set_config()
+ zone_meta_checkpoint(ps_zones[0].zone)
+
+ # get all notification on a bucket
+ response, status = s3_notification_conf1.get_config()
assert_equal(status/100, 2)
- # create subscription for objects deletion
- sub_delete_conf = PSSubscription(ps_zones[0].conn, bucket_name+SUB_SUFFIX+'_delete',
- topic_delete_name)
- _, status = sub_delete_conf.set_config()
+ assert_equal(len(response['TopicConfigurations']), 2)
+ assert_equal(response['TopicConfigurations'][0]['TopicArn'], topic_arn)
+ assert_equal(response['TopicConfigurations'][1]['TopicArn'], topic_arn)
+
+ # get specific notification on a bucket
+ response, status = s3_notification_conf1.get_config(notification=notification_name1)
assert_equal(status/100, 2)
- # create subscription for all events
- sub_conf = PSSubscription(ps_zones[0].conn, bucket_name+SUB_SUFFIX+'_all',
- topic_name)
- _, status = sub_conf.set_config()
+ assert_equal(response['NotificationConfiguration']['TopicConfiguration']['Topic'], topic_arn)
+ assert_equal(response['NotificationConfiguration']['TopicConfiguration']['Id'], notification_name1)
+ response, status = s3_notification_conf2.get_config(notification=notification_name2)
assert_equal(status/100, 2)
- # create objects in the bucket
- number_of_objects = 10
- for i in range(number_of_objects):
- key = bucket.new_key(str(i))
- key.set_contents_from_string('bar')
- # wait for sync
- zone_bucket_checkpoint(ps_zones[0].zone, zones[0].zone, bucket_name)
+ assert_equal(response['NotificationConfiguration']['TopicConfiguration']['Topic'], topic_arn)
+ assert_equal(response['NotificationConfiguration']['TopicConfiguration']['Id'], notification_name2)
+
+ # delete specific notifications
+ _, status = s3_notification_conf1.del_config(notification=notification_name1)
+ assert_equal(status/100, 2)
+ _, status = s3_notification_conf2.del_config(notification=notification_name2)
+ assert_equal(status/100, 2)
+
+ # cleanup
+ topic_conf.del_config()
+ # delete the bucket
+ zones[0].delete_bucket(bucket_name)
+
+def test_ps_s3_topic_on_master():
+ """ test s3 notification set/get/delete on master """
+ zones, _ = init_env(require_ps=False)
+ realm = get_realm()
+ zonegroup = realm.master_zonegroup()
+ bucket_name = gen_bucket_name()
+ topic_name = bucket_name + TOPIC_SUFFIX
+
+ # clean all topics
+ delete_all_s3_topics(zones[0].conn, zonegroup.name)
+
+ # create s3 topics
+ endpoint_address = 'amqp://127.0.0.1:7001'
+ endpoint_args = 'push-endpoint='+endpoint_address+'&amqp-exchange=amqp.direct&amqp-ack-level=none'
+ topic_conf1 = PSTopicS3(zones[0].conn, topic_name+'_1', zonegroup.name, endpoint_args=endpoint_args)
+ topic_arn = topic_conf1.set_config()
+ assert_equal(topic_arn,
+ 'arn:aws:sns:' + zonegroup.name + ':' + get_tenant() + ':' + topic_name + '_1')
+
+ endpoint_address = 'http://127.0.0.1:9001'
+ endpoint_args = 'push-endpoint='+endpoint_address
+ topic_conf2 = PSTopicS3(zones[0].conn, topic_name+'_2', zonegroup.name, endpoint_args=endpoint_args)
+ topic_arn = topic_conf2.set_config()
+ assert_equal(topic_arn,
+ 'arn:aws:sns:' + zonegroup.name + ':' + get_tenant() + ':' + topic_name + '_2')
+ endpoint_address = 'http://127.0.0.1:9002'
+ endpoint_args = 'push-endpoint='+endpoint_address
+ topic_conf3 = PSTopicS3(zones[0].conn, topic_name+'_3', zonegroup.name, endpoint_args=endpoint_args)
+ topic_arn = topic_conf3.set_config()
+ assert_equal(topic_arn,
+ 'arn:aws:sns:' + zonegroup.name + ':' + get_tenant() + ':' + topic_name + '_3')
+
+ # get topic 3
+ result, status = topic_conf3.get_config()
+ assert_equal(status, 200)
+ assert_equal(topic_arn, result['GetTopicResponse']['GetTopicResult']['Topic']['TopicArn'])
+ assert_equal(endpoint_address, result['GetTopicResponse']['GetTopicResult']['Topic']['EndPoint']['EndpointAddress'])
+ # Note that endpoint args may be ordered differently in the result
+
+ # delete topic 1
+ status = topic_conf1.del_config()
+ assert_equal(status, 200)
+
+ # try to get a deleted topic
+ _, status = topic_conf1.get_config()
+ assert_equal(status, 404)
+
+ # get the remaining 2 topics
+ result = topic_conf1.get_list()
+ assert_equal(len(result['Topics']), 2)
+
+ # delete topics
+ result = topic_conf2.del_config()
+ # TODO: should be 200OK
+ # assert_equal(status, 200)
+ result = topic_conf3.del_config()
+ # TODO: should be 200OK
+ # assert_equal(status, 200)
+
+ # get topic list, make sure it is empty
+ result = topic_conf1.get_list()
+ assert_equal(len(result['Topics']), 0)
+
+
+def test_ps_s3_notification_on_master():
+ """ test s3 notification set/get/delete on master """
+ zones, _ = init_env(require_ps=False)
+ realm = get_realm()
+ zonegroup = realm.master_zonegroup()
+ bucket_name = gen_bucket_name()
+ # create bucket
+ bucket = zones[0].create_bucket(bucket_name)
+ topic_name = bucket_name + TOPIC_SUFFIX
+ # create s3 topic
+ endpoint_address = 'amqp://127.0.0.1:7001'
+ endpoint_args = 'push-endpoint='+endpoint_address+'&amqp-exchange=amqp.direct&amqp-ack-level=none'
+ topic_conf = PSTopicS3(zones[0].conn, topic_name, zonegroup.name, endpoint_args=endpoint_args)
+ topic_arn = topic_conf.set_config()
+ # create s3 notification
+ notification_name = bucket_name + NOTIFICATION_SUFFIX
+ topic_conf_list = [{'Id': notification_name+'_1',
+ 'TopicArn': topic_arn,
+ 'Events': ['s3:ObjectCreated:*']
+ },
+ {'Id': notification_name+'_2',
+ 'TopicArn': topic_arn,
+ 'Events': ['s3:ObjectRemoved:*']
+ },
+ {'Id': notification_name+'_3',
+ 'TopicArn': topic_arn,
+ 'Events': []
+ }]
+ s3_notification_conf = PSNotificationS3(zones[0].conn, bucket_name, topic_conf_list)
+ _, status = s3_notification_conf.set_config()
+ assert_equal(status/100, 2)
+
+ # get notifications on a bucket
+ response, status = s3_notification_conf.get_config(notification=notification_name+'_1')
+ assert_equal(status/100, 2)
+ assert_equal(response['NotificationConfiguration']['TopicConfiguration']['Topic'], topic_arn)
+
+ # delete specific notifications
+ _, status = s3_notification_conf.del_config(notification=notification_name+'_1')
+ assert_equal(status/100, 2)
+
+ # get the remaining 2 notifications on a bucket
+ response, status = s3_notification_conf.get_config()
+ assert_equal(status/100, 2)
+ assert_equal(len(response['TopicConfigurations']), 2)
+ assert_equal(response['TopicConfigurations'][0]['TopicArn'], topic_arn)
+ assert_equal(response['TopicConfigurations'][1]['TopicArn'], topic_arn)
+
+ # delete remaining notifications
+ _, status = s3_notification_conf.del_config()
+ assert_equal(status/100, 2)
+
+ # make sure that the notifications are now deleted
+ _, status = s3_notification_conf.get_config()
+
+ # cleanup
+ topic_conf.del_config()
+ # delete the bucket
+ zones[0].delete_bucket(bucket_name)
+
+
+def ps_s3_notification_filter(on_master):
+ """ test s3 notification filter on master """
+ if skip_push_tests:
+ return SkipTest("PubSub push tests don't run in teuthology")
+ hostname = get_ip()
+ proc = init_rabbitmq()
+ if proc is None:
+ return SkipTest('end2end amqp tests require rabbitmq-server installed')
+ if on_master:
+ zones, _ = init_env(require_ps=False)
+ ps_zone = zones[0]
+ else:
+ zones, ps_zones = init_env(require_ps=True)
+ ps_zone = ps_zones[0]
+
+ realm = get_realm()
+ zonegroup = realm.master_zonegroup()
+
+ # create bucket
+ bucket_name = gen_bucket_name()
+ bucket = zones[0].create_bucket(bucket_name)
+ topic_name = bucket_name + TOPIC_SUFFIX
+
+ # start amqp receivers
+ exchange = 'ex1'
+ task, receiver = create_amqp_receiver_thread(exchange, topic_name)
+ task.start()
+
+ # create s3 topic
+ endpoint_address = 'amqp://' + hostname
+ endpoint_args = 'push-endpoint='+endpoint_address+'&amqp-exchange=' + exchange +'&amqp-ack-level=broker'
+ if on_master:
+ topic_conf = PSTopicS3(ps_zone.conn, topic_name, zonegroup.name, endpoint_args=endpoint_args)
+ topic_arn = topic_conf.set_config()
+ else:
+ topic_conf = PSTopic(ps_zone.conn, topic_name, endpoint=endpoint_address, endpoint_args=endpoint_args)
+ result, _ = topic_conf.set_config()
+ parsed_result = json.loads(result)
+ topic_arn = parsed_result['arn']
+ zone_meta_checkpoint(ps_zone.zone)
+
+ # create s3 notification
+ notification_name = bucket_name + NOTIFICATION_SUFFIX
+ topic_conf_list = [{'Id': notification_name+'_1',
+ 'TopicArn': topic_arn,
+ 'Events': ['s3:ObjectCreated:*'],
+ 'Filter': {
+ 'Key': {
+ 'FilterRules': [{'Name': 'prefix', 'Value': 'hello'}]
+ }
+ }
+ },
+ {'Id': notification_name+'_2',
+ 'TopicArn': topic_arn,
+ 'Events': ['s3:ObjectCreated:*'],
+ 'Filter': {
+ 'Key': {
+ 'FilterRules': [{'Name': 'prefix', 'Value': 'world'},
+ {'Name': 'suffix', 'Value': 'log'}]
+ }
+ }
+ },
+ {'Id': notification_name+'_3',
+ 'TopicArn': topic_arn,
+ 'Events': [],
+ 'Filter': {
+ 'Key': {
+ 'FilterRules': [{'Name': 'regex', 'Value': '([a-z]+)\\.txt'}]
+ }
+ }
+ }]
+
+ s3_notification_conf = PSNotificationS3(ps_zone.conn, bucket_name, topic_conf_list)
+ result, status = s3_notification_conf.set_config()
+ assert_equal(status/100, 2)
+
+ if on_master:
+ topic_conf_list = [{'Id': notification_name+'_4',
+ 'TopicArn': topic_arn,
+ 'Events': ['s3:ObjectCreated:*', 's3:ObjectRemoved:*'],
+ 'Filter': {
+ 'Metadata': {
+ 'FilterRules': [{'Name': 'x-amz-meta-foo', 'Value': 'bar'},
+ {'Name': 'x-amz-meta-hello', 'Value': 'world'}]
+ },
+ 'Key': {
+ 'FilterRules': [{'Name': 'regex', 'Value': '([a-z]+)'}]
+ }
+ }
+ }]
+
+ try:
+ s3_notification_conf4 = PSNotificationS3(ps_zone.conn, bucket_name, topic_conf_list)
+ _, status = s3_notification_conf4.set_config()
+ assert_equal(status/100, 2)
+ skip_notif4 = False
+ except Exception as error:
+ print 'note: metadata filter is not supported by boto3 - skipping test'
+ skip_notif4 = True
+ else:
+ print 'filtering by attributes only supported on master zone'
+ skip_notif4 = True
+
+
+ # get all notifications
+ result, status = s3_notification_conf.get_config()
+ assert_equal(status/100, 2)
+ for conf in result['TopicConfigurations']:
+ filter_name = conf['Filter']['Key']['FilterRules'][0]['Name']
+ assert filter_name == 'prefix' or filter_name == 'suffix' or filter_name == 'regex', filter_name
+
+ if not skip_notif4:
+ result, status = s3_notification_conf4.get_config(notification=notification_name+'_4')
+ assert_equal(status/100, 2)
+ filter_name = result['NotificationConfiguration']['TopicConfiguration']['Filter']['S3Metadata']['FilterRule'][0]['Name']
+ assert filter_name == 'x-amz-meta-foo' or filter_name == 'x-amz-meta-hello'
+
+ expected_in1 = ['hello.kaboom', 'hello.txt', 'hello123.txt', 'hello']
+ expected_in2 = ['world1.log', 'world2log', 'world3.log']
+ expected_in3 = ['hello.txt', 'hell.txt', 'worldlog.txt']
+ expected_in4 = ['foo', 'bar', 'hello', 'world']
+ filtered = ['hell.kaboom', 'world.og', 'world.logg', 'he123ll.txt', 'wo', 'log', 'h', 'txt', 'world.log.txt']
+ filtered_with_attr = ['nofoo', 'nobar', 'nohello', 'noworld']
+ # create objects in bucket
+ for key_name in expected_in1:
+ key = bucket.new_key(key_name)
+ key.set_contents_from_string('bar')
+ for key_name in expected_in2:
+ key = bucket.new_key(key_name)
+ key.set_contents_from_string('bar')
+ for key_name in expected_in3:
+ key = bucket.new_key(key_name)
+ key.set_contents_from_string('bar')
+ if not skip_notif4:
+ for key_name in expected_in4:
+ key = bucket.new_key(key_name)
+ key.set_metadata('foo', 'bar')
+ key.set_metadata('hello', 'world')
+ key.set_metadata('goodbye', 'cruel world')
+ key.set_contents_from_string('bar')
+ for key_name in filtered:
+ key = bucket.new_key(key_name)
+ key.set_contents_from_string('bar')
+ for key_name in filtered_with_attr:
+ key = bucket.new_key(key_name)
+ key.set_metadata('foo', 'nobar')
+ key.set_metadata('hello', 'noworld')
+ key.set_metadata('goodbye', 'cruel world')
+ key.set_contents_from_string('bar')
+
+ if on_master:
+ print 'wait for 5sec for the messages...'
+ time.sleep(5)
+ else:
+ zone_bucket_checkpoint(ps_zone.zone, zones[0].zone, bucket_name)
+
+ found_in1 = []
+ found_in2 = []
+ found_in3 = []
+ found_in4 = []
+
+ for event in receiver.get_and_reset_events():
+ notif_id = event['s3']['configurationId']
+ key_name = event['s3']['object']['key']
+ if notif_id == notification_name+'_1':
+ found_in1.append(key_name)
+ elif notif_id == notification_name+'_2':
+ found_in2.append(key_name)
+ elif notif_id == notification_name+'_3':
+ found_in3.append(key_name)
+ elif not skip_notif4 and notif_id == notification_name+'_4':
+ found_in4.append(key_name)
+ else:
+ assert False, 'invalid notification: ' + notif_id
+
+ assert_equal(set(found_in1), set(expected_in1))
+ assert_equal(set(found_in2), set(expected_in2))
+ assert_equal(set(found_in3), set(expected_in3))
+ if not skip_notif4:
+ assert_equal(set(found_in4), set(expected_in4))
+
+ # cleanup
+ s3_notification_conf.del_config()
+ if not skip_notif4:
+ s3_notification_conf4.del_config()
+ topic_conf.del_config()
+ # delete the bucket
+ for key in bucket.list():
+ key.delete()
+ zones[0].delete_bucket(bucket_name)
+ stop_amqp_receiver(receiver, task)
+ clean_rabbitmq(proc)
+
+
+def test_ps_s3_notification_filter_on_master():
+ ps_s3_notification_filter(on_master=True)
+
+
+def test_ps_s3_notification_filter():
+ ps_s3_notification_filter(on_master=False)
+
+
+def test_ps_s3_notification_errors_on_master():
+ """ test s3 notification set/get/delete on master """
+ zones, _ = init_env(require_ps=False)
+ realm = get_realm()
+ zonegroup = realm.master_zonegroup()
+ bucket_name = gen_bucket_name()
+ # create bucket
+ bucket = zones[0].create_bucket(bucket_name)
+ topic_name = bucket_name + TOPIC_SUFFIX
+ # create s3 topic
+ endpoint_address = 'amqp://127.0.0.1:7001'
+ endpoint_args = 'push-endpoint='+endpoint_address+'&amqp-exchange=amqp.direct&amqp-ack-level=none'
+ topic_conf = PSTopicS3(zones[0].conn, topic_name, zonegroup.name, endpoint_args=endpoint_args)
+ topic_arn = topic_conf.set_config()
+
+ # create s3 notification with invalid event name
+ notification_name = bucket_name + NOTIFICATION_SUFFIX
+ topic_conf_list = [{'Id': notification_name,
+ 'TopicArn': topic_arn,
+ 'Events': ['s3:ObjectCreated:Kaboom']
+ }]
+ s3_notification_conf = PSNotificationS3(zones[0].conn, bucket_name, topic_conf_list)
+ try:
+ result, status = s3_notification_conf.set_config()
+ except Exception as error:
+ print str(error) + ' - is expected'
+ else:
+ assert False, 'invalid event name is expected to fail'
+
+ # create s3 notification with missing name
+ topic_conf_list = [{'Id': '',
+ 'TopicArn': topic_arn,
+ 'Events': ['s3:ObjectCreated:Put']
+ }]
+ s3_notification_conf = PSNotificationS3(zones[0].conn, bucket_name, topic_conf_list)
+ try:
+ _, _ = s3_notification_conf.set_config()
+ except Exception as error:
+ print str(error) + ' - is expected'
+ else:
+ assert False, 'missing notification name is expected to fail'
+
+ # create s3 notification with invalid topic ARN
+ invalid_topic_arn = 'kaboom'
+ topic_conf_list = [{'Id': notification_name,
+ 'TopicArn': invalid_topic_arn,
+ 'Events': ['s3:ObjectCreated:Put']
+ }]
+ s3_notification_conf = PSNotificationS3(zones[0].conn, bucket_name, topic_conf_list)
+ try:
+ _, _ = s3_notification_conf.set_config()
+ except Exception as error:
+ print str(error) + ' - is expected'
+ else:
+ assert False, 'invalid ARN is expected to fail'
+
+ # create s3 notification with unknown topic ARN
+ invalid_topic_arn = 'arn:aws:sns:a::kaboom'
+ topic_conf_list = [{'Id': notification_name,
+ 'TopicArn': invalid_topic_arn ,
+ 'Events': ['s3:ObjectCreated:Put']
+ }]
+ s3_notification_conf = PSNotificationS3(zones[0].conn, bucket_name, topic_conf_list)
+ try:
+ _, _ = s3_notification_conf.set_config()
+ except Exception as error:
+ print str(error) + ' - is expected'
+ else:
+ assert False, 'unknown topic is expected to fail'
+
+ # create s3 notification with wrong bucket
+ topic_conf_list = [{'Id': notification_name,
+ 'TopicArn': topic_arn,
+ 'Events': ['s3:ObjectCreated:Put']
+ }]
+ s3_notification_conf = PSNotificationS3(zones[0].conn, 'kaboom', topic_conf_list)
+ try:
+ _, _ = s3_notification_conf.set_config()
+ except Exception as error:
+ print str(error) + ' - is expected'
+ else:
+ assert False, 'unknown bucket is expected to fail'
+
+ topic_conf.del_config()
+
+ status = topic_conf.del_config()
+ # deleting an unknown topic is not considered an error
+ assert_equal(status, 200)
+
+ _, status = topic_conf.get_config()
+ assert_equal(status, 404)
+
+ # cleanup
+ # delete the bucket
+ zones[0].delete_bucket(bucket_name)
+
+
+def test_object_timing():
+ return SkipTest("only used in manual testing")
+ zones, _ = init_env(require_ps=False)
+
+ # create bucket
+ bucket_name = gen_bucket_name()
+ bucket = zones[0].create_bucket(bucket_name)
+ # create objects in the bucket (async)
+ print 'creating objects...'
+ number_of_objects = 1000
+ client_threads = []
+ start_time = time.time()
+ content = str(bytearray(os.urandom(1024*1024)))
+ for i in range(number_of_objects):
+ key = bucket.new_key(str(i))
+ thr = threading.Thread(target = set_contents_from_string, args=(key, content,))
+ thr.start()
+ client_threads.append(thr)
+ [thr.join() for thr in client_threads]
+
+ time_diff = time.time() - start_time
+ print 'average time for object creation: ' + str(time_diff*1000/number_of_objects) + ' milliseconds'
+
+ print 'total number of objects: ' + str(len(list(bucket.list())))
+
+ print 'deleting objects...'
+ client_threads = []
+ start_time = time.time()
+ for key in bucket.list():
+ thr = threading.Thread(target = key.delete, args=())
+ thr.start()
+ client_threads.append(thr)
+ [thr.join() for thr in client_threads]
+
+ time_diff = time.time() - start_time
+ print 'average time for object deletion: ' + str(time_diff*1000/number_of_objects) + ' milliseconds'
+
+ # cleanup
+ zones[0].delete_bucket(bucket_name)
+
+
+def test_ps_s3_notification_push_amqp_on_master():
+ """ test pushing amqp s3 notification on master """
+ if skip_push_tests:
+ return SkipTest("PubSub push tests don't run in teuthology")
+ hostname = get_ip()
+ proc = init_rabbitmq()
+ if proc is None:
+ return SkipTest('end2end amqp tests require rabbitmq-server installed')
+ zones, _ = init_env(require_ps=False)
+ realm = get_realm()
+ zonegroup = realm.master_zonegroup()
+
+ # create bucket
+ bucket_name = gen_bucket_name()
+ bucket = zones[0].create_bucket(bucket_name)
+ topic_name1 = bucket_name + TOPIC_SUFFIX + '_1'
+ topic_name2 = bucket_name + TOPIC_SUFFIX + '_2'
+
+ # start amqp receivers
+ exchange = 'ex1'
+ task1, receiver1 = create_amqp_receiver_thread(exchange, topic_name1)
+ task2, receiver2 = create_amqp_receiver_thread(exchange, topic_name2)
+ task1.start()
+ task2.start()
+
+ # create two s3 topics
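+ # note: with 'amqp-ack-level=broker' an event is considered delivered only after the
+ # broker acks the message, while with 'amqp-ack-level=none' it is considered delivered
+ # once it has been sent to the broker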
+ endpoint_address = 'amqp://' + hostname
+ # with acks from broker
+ endpoint_args = 'push-endpoint='+endpoint_address+'&amqp-exchange=' + exchange +'&amqp-ack-level=broker'
+ topic_conf1 = PSTopicS3(zones[0].conn, topic_name1, zonegroup.name, endpoint_args=endpoint_args)
+ topic_arn1 = topic_conf1.set_config()
+ # without acks from broker
+ endpoint_args = 'push-endpoint='+endpoint_address+'&amqp-exchange=' + exchange +'&amqp-ack-level=none'
+ topic_conf2 = PSTopicS3(zones[0].conn, topic_name2, zonegroup.name, endpoint_args=endpoint_args)
+ topic_arn2 = topic_conf2.set_config()
+ # create s3 notification
+ notification_name = bucket_name + NOTIFICATION_SUFFIX
+ topic_conf_list = [{'Id': notification_name+'_1', 'TopicArn': topic_arn1,
+ 'Events': []
+ },
+ {'Id': notification_name+'_2', 'TopicArn': topic_arn2,
+ 'Events': ['s3:ObjectCreated:*']
+ }]
+
+ s3_notification_conf = PSNotificationS3(zones[0].conn, bucket_name, topic_conf_list)
+ response, status = s3_notification_conf.set_config()
+ assert_equal(status/100, 2)
+
+ # create objects in the bucket (async)
+ number_of_objects = 100
+ client_threads = []
+ start_time = time.time()
+ for i in range(number_of_objects):
+ key = bucket.new_key(str(i))
+ content = str(os.urandom(1024*1024))
+ thr = threading.Thread(target = set_contents_from_string, args=(key, content,))
+ thr.start()
+ client_threads.append(thr)
+ [thr.join() for thr in client_threads]
+
+ time_diff = time.time() - start_time
+ print 'average time for creation + amqp notification is: ' + str(time_diff*1000/number_of_objects) + ' milliseconds'
+
+ print 'wait for 5sec for the messages...'
+ time.sleep(5)
+
+ # check amqp receiver
+ keys = list(bucket.list())
+ print 'total number of objects: ' + str(len(keys))
+ receiver1.verify_s3_events(keys, exact_match=True)
+ receiver2.verify_s3_events(keys, exact_match=True)
+
+ # delete objects from the bucket
+ client_threads = []
+ start_time = time.time()
+ for key in bucket.list():
+ thr = threading.Thread(target = key.delete, args=())
+ thr.start()
+ client_threads.append(thr)
+ [thr.join() for thr in client_threads]
+
+ time_diff = time.time() - start_time
+ print 'average time for deletion + amqp notification is: ' + str(time_diff*1000/number_of_objects) + ' milliseconds'
+
+ print 'wait for 5sec for the messages...'
+ time.sleep(5)
+
+ # check amqp receiver 1 for deletions
+ receiver1.verify_s3_events(keys, exact_match=True, deletions=True)
+ # check amqp receiver 2 has no deletions (it is subscribed only to ObjectCreated events)
+ try:
+ receiver2.verify_s3_events(keys, exact_match=False, deletions=True)
+ except:
+ pass
+ else:
+ err = 'amqp receiver 2 should have no deletions'
+ assert False, err
+
+
+ # cleanup
+ stop_amqp_receiver(receiver1, task1)
+ stop_amqp_receiver(receiver2, task2)
+ s3_notification_conf.del_config()
+ topic_conf1.del_config()
+ topic_conf2.del_config()
+ # delete the bucket
+ zones[0].delete_bucket(bucket_name)
+ clean_rabbitmq(proc)
+
+
+def test_ps_s3_notification_push_http_on_master():
+ """ test pushing http s3 notification on master """
+ if skip_push_tests:
+ return SkipTest("PubSub push tests don't run in teuthology")
+ hostname = get_ip()
+ zones, _ = init_env(require_ps=False)
+ realm = get_realm()
+ zonegroup = realm.master_zonegroup()
+
+ # create random port for the http server
+ host = get_ip()
+ port = random.randint(10000, 20000)
+ # start an http server in a separate thread
+ number_of_objects = 100
+ http_server = StreamingHTTPServer(host, port, num_workers=number_of_objects)
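+ # one worker per expected object so that, presumably, all concurrent notification
+ # pushes can be served without blocking (assumption based on the 'num_workers' name)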
+
+ # create bucket
+ bucket_name = gen_bucket_name()
+ bucket = zones[0].create_bucket(bucket_name)
+ topic_name = bucket_name + TOPIC_SUFFIX
+
+ # create s3 topic
+ endpoint_address = 'http://'+host+':'+str(port)
+ endpoint_args = 'push-endpoint='+endpoint_address
+ topic_conf = PSTopicS3(zones[0].conn, topic_name, zonegroup.name, endpoint_args=endpoint_args)
+ topic_arn = topic_conf.set_config()
+ # create s3 notification
+ notification_name = bucket_name + NOTIFICATION_SUFFIX
+ topic_conf_list = [{'Id': notification_name,
+ 'TopicArn': topic_arn,
+ 'Events': []
+ }]
+ s3_notification_conf = PSNotificationS3(zones[0].conn, bucket_name, topic_conf_list)
+ response, status = s3_notification_conf.set_config()
+ assert_equal(status/100, 2)
+
+ # create objects in the bucket
+ client_threads = []
+ start_time = time.time()
+ content = 'bar'
+ for i in range(number_of_objects):
+ key = bucket.new_key(str(i))
+ thr = threading.Thread(target = set_contents_from_string, args=(key, content,))
+ thr.start()
+ client_threads.append(thr)
+ [thr.join() for thr in client_threads]
+
+ time_diff = time.time() - start_time
+ print 'average time for creation + http notification is: ' + str(time_diff*1000/number_of_objects) + ' milliseconds'
+
+ print 'wait for 5sec for the messages...'
+ time.sleep(5)
+
+ # check http receiver
+ keys = list(bucket.list())
+ print 'total number of objects: ' + str(len(keys))
+ http_server.verify_s3_events(keys, exact_match=True)
+
+ # delete objects from the bucket
+ client_threads = []
+ start_time = time.time()
+ for key in bucket.list():
+ thr = threading.Thread(target = key.delete, args=())
+ thr.start()
+ client_threads.append(thr)
+ [thr.join() for thr in client_threads]
+
+ time_diff = time.time() - start_time
+ print 'average time for deletion + http notification is: ' + str(time_diff*1000/number_of_objects) + ' milliseconds'
+
+ print 'wait for 5sec for the messages...'
+ time.sleep(5)
+
+ # check http receiver
+ http_server.verify_s3_events(keys, exact_match=True, deletions=True)
+
+ # cleanup
+ topic_conf.del_config()
+ s3_notification_conf.del_config(notification=notification_name)
+ # delete the bucket
+ zones[0].delete_bucket(bucket_name)
+ http_server.close()
+
+
+def test_ps_topic():
+ """ test set/get/delete of topic """
+ _, ps_zones = init_env()
+ realm = get_realm()
+ zonegroup = realm.master_zonegroup()
+ bucket_name = gen_bucket_name()
+ topic_name = bucket_name+TOPIC_SUFFIX
+
+ # create topic
+ topic_conf = PSTopic(ps_zones[0].conn, topic_name)
+ _, status = topic_conf.set_config()
+ assert_equal(status/100, 2)
+ # get topic
+ result, _ = topic_conf.get_config()
+ # verify topic content
+ parsed_result = json.loads(result)
+ assert_equal(parsed_result['topic']['name'], topic_name)
+ assert_equal(len(parsed_result['subs']), 0)
+ assert_equal(parsed_result['topic']['arn'],
+ 'arn:aws:sns:' + zonegroup.name + ':' + get_tenant() + ':' + topic_name)
+ # delete topic
+ _, status = topic_conf.del_config()
+ assert_equal(status/100, 2)
+ # verify topic is deleted
+ result, status = topic_conf.get_config()
+ assert_equal(status, 404)
+ parsed_result = json.loads(result)
+ assert_equal(parsed_result['Code'], 'NoSuchKey')
+
+
+def test_ps_topic_with_endpoint():
+ """ test set topic with endpoint"""
+ _, ps_zones = init_env()
+ bucket_name = gen_bucket_name()
+ topic_name = bucket_name+TOPIC_SUFFIX
+
+ # create topic
+ dest_endpoint = 'amqp://localhost:7001'
+ dest_args = 'amqp-exchange=amqp.direct&amqp-ack-level=none'
+ topic_conf = PSTopic(ps_zones[0].conn, topic_name,
+ endpoint=dest_endpoint,
+ endpoint_args=dest_args)
+ _, status = topic_conf.set_config()
+ assert_equal(status/100, 2)
+ # get topic
+ result, _ = topic_conf.get_config()
+ # verify topic content
+ parsed_result = json.loads(result)
+ assert_equal(parsed_result['topic']['name'], topic_name)
+ assert_equal(parsed_result['topic']['dest']['push_endpoint'], dest_endpoint)
+ # cleanup
+ topic_conf.del_config()
+
+
+def test_ps_notification():
+ """ test set/get/delete of notification """
+ zones, ps_zones = init_env()
+ bucket_name = gen_bucket_name()
+ topic_name = bucket_name+TOPIC_SUFFIX
+
+ # create topic
+ topic_conf = PSTopic(ps_zones[0].conn, topic_name)
+ topic_conf.set_config()
+ # create bucket on the first of the rados zones
+ zones[0].create_bucket(bucket_name)
+ # wait for sync
+ zone_meta_checkpoint(ps_zones[0].zone)
+ # create notifications
+ notification_conf = PSNotification(ps_zones[0].conn, bucket_name,
+ topic_name)
+ _, status = notification_conf.set_config()
+ assert_equal(status/100, 2)
+ # get notification
+ result, _ = notification_conf.get_config()
+ parsed_result = json.loads(result)
+ assert_equal(len(parsed_result['topics']), 1)
+ assert_equal(parsed_result['topics'][0]['topic']['name'],
+ topic_name)
+ # delete notification
+ _, status = notification_conf.del_config()
+ assert_equal(status/100, 2)
+ result, status = notification_conf.get_config()
+ parsed_result = json.loads(result)
+ assert_equal(len(parsed_result['topics']), 0)
+ # TODO should return 404
+ # assert_equal(status, 404)
+
+ # cleanup
+ topic_conf.del_config()
+ zones[0].delete_bucket(bucket_name)
+
+
+def test_ps_notification_events():
+ """ test set/get/delete of notification on specific events"""
+ zones, ps_zones = init_env()
+ bucket_name = gen_bucket_name()
+ topic_name = bucket_name+TOPIC_SUFFIX
+
+ # create topic
+ topic_conf = PSTopic(ps_zones[0].conn, topic_name)
+ topic_conf.set_config()
+ # create bucket on the first of the rados zones
+ zones[0].create_bucket(bucket_name)
+ # wait for sync
+ zone_meta_checkpoint(ps_zones[0].zone)
+ # create notifications
+ events = "OBJECT_CREATE,OBJECT_DELETE"
+ notification_conf = PSNotification(ps_zones[0].conn, bucket_name,
+ topic_name,
+ events)
+ _, status = notification_conf.set_config()
+ assert_equal(status/100, 2)
+ # get notification
+ result, _ = notification_conf.get_config()
+ parsed_result = json.loads(result)
+ assert_equal(len(parsed_result['topics']), 1)
+ assert_equal(parsed_result['topics'][0]['topic']['name'],
+ topic_name)
+ assert_not_equal(len(parsed_result['topics'][0]['events']), 0)
+ # TODO add test for invalid event name
+
+ # cleanup
+ notification_conf.del_config()
+ topic_conf.del_config()
+ zones[0].delete_bucket(bucket_name)
+
+
+def test_ps_subscription():
+ """ test set/get/delete of subscription """
+ zones, ps_zones = init_env()
+ bucket_name = gen_bucket_name()
+ topic_name = bucket_name+TOPIC_SUFFIX
+
+ # create topic
+ topic_conf = PSTopic(ps_zones[0].conn, topic_name)
+ topic_conf.set_config()
+ # create bucket on the first of the rados zones
+ bucket = zones[0].create_bucket(bucket_name)
+ # wait for sync
+ zone_meta_checkpoint(ps_zones[0].zone)
+ # create notifications
+ notification_conf = PSNotification(ps_zones[0].conn, bucket_name,
+ topic_name)
+ _, status = notification_conf.set_config()
+ assert_equal(status/100, 2)
+ # create subscription
+ sub_conf = PSSubscription(ps_zones[0].conn, bucket_name+SUB_SUFFIX,
+ topic_name)
+ _, status = sub_conf.set_config()
+ assert_equal(status/100, 2)
+ # get the subscription
+ result, _ = sub_conf.get_config()
+ parsed_result = json.loads(result)
+ assert_equal(parsed_result['topic'], topic_name)
+ # create objects in the bucket
+ number_of_objects = 10
+ for i in range(number_of_objects):
+ key = bucket.new_key(str(i))
+ key.set_contents_from_string('bar')
+ # wait for sync
+ zone_bucket_checkpoint(ps_zones[0].zone, zones[0].zone, bucket_name)
+
+ # get the create events from the subscription
+ result, _ = sub_conf.get_events()
+ parsed_result = json.loads(result)
+ for event in parsed_result['events']:
+ log.debug('Event: objname: "' + str(event['info']['key']['name']) + '" type: "' + str(event['event']) + '"')
+ keys = list(bucket.list())
+ # TODO: use exact match
+ verify_events_by_elements(parsed_result['events'], keys, exact_match=False)
+ # delete objects from the bucket
+ for key in bucket.list():
+ key.delete()
+ # wait for sync
+ zone_meta_checkpoint(ps_zones[0].zone)
+ zone_bucket_checkpoint(ps_zones[0].zone, zones[0].zone, bucket_name)
+
+ # get the delete events from the subscriptions
+ result, _ = sub_conf.get_events()
+ for event in parsed_result['events']:
+ log.debug('Event: objname: "' + str(event['info']['key']['name']) + '" type: "' + str(event['event']) + '"')
+ # TODO: check deletions
+ # TODO: use exact match
+ # verify_events_by_elements(parsed_result['events'], keys, exact_match=False, deletions=True)
+ # we should see the creations as well as the deletions
+ # delete subscription
+ _, status = sub_conf.del_config()
+ assert_equal(status/100, 2)
+ result, status = sub_conf.get_config()
+ parsed_result = json.loads(result)
+ assert_equal(parsed_result['topic'], '')
+ # TODO should return 404
+ # assert_equal(status, 404)
+
+ # cleanup
+ notification_conf.del_config()
+ topic_conf.del_config()
+ zones[0].delete_bucket(bucket_name)
+
+
+def test_ps_event_type_subscription():
+ """ test subscriptions for different events """
+ zones, ps_zones = init_env()
+ bucket_name = gen_bucket_name()
+
+ # create topic for objects creation
+ topic_create_name = bucket_name+TOPIC_SUFFIX+'_create'
+ topic_create_conf = PSTopic(ps_zones[0].conn, topic_create_name)
+ topic_create_conf.set_config()
+ # create topic for objects deletion
+ topic_delete_name = bucket_name+TOPIC_SUFFIX+'_delete'
+ topic_delete_conf = PSTopic(ps_zones[0].conn, topic_delete_name)
+ topic_delete_conf.set_config()
+ # create topic for all events
+ topic_name = bucket_name+TOPIC_SUFFIX+'_all'
+ topic_conf = PSTopic(ps_zones[0].conn, topic_name)
+ topic_conf.set_config()
+ # create bucket on the first of the rados zones
+ bucket = zones[0].create_bucket(bucket_name)
+ # wait for sync
+ zone_meta_checkpoint(ps_zones[0].zone)
+ zone_bucket_checkpoint(ps_zones[0].zone, zones[0].zone, bucket_name)
+ # create notifications for objects creation
+ notification_create_conf = PSNotification(ps_zones[0].conn, bucket_name,
+ topic_create_name, "OBJECT_CREATE")
+ _, status = notification_create_conf.set_config()
+ assert_equal(status/100, 2)
+ # create notifications for objects deletion
+ notification_delete_conf = PSNotification(ps_zones[0].conn, bucket_name,
+ topic_delete_name, "OBJECT_DELETE")
+ _, status = notification_delete_conf.set_config()
+ assert_equal(status/100, 2)
+ # create notifications for all events
+ notification_conf = PSNotification(ps_zones[0].conn, bucket_name,
+ topic_name, "OBJECT_DELETE,OBJECT_CREATE")
+ _, status = notification_conf.set_config()
+ assert_equal(status/100, 2)
+ # create subscription for objects creation
+ sub_create_conf = PSSubscription(ps_zones[0].conn, bucket_name+SUB_SUFFIX+'_create',
+ topic_create_name)
+ _, status = sub_create_conf.set_config()
+ assert_equal(status/100, 2)
+ # create subscription for objects deletion
+ sub_delete_conf = PSSubscription(ps_zones[0].conn, bucket_name+SUB_SUFFIX+'_delete',
+ topic_delete_name)
+ _, status = sub_delete_conf.set_config()
+ assert_equal(status/100, 2)
+ # create subscription for all events
+ sub_conf = PSSubscription(ps_zones[0].conn, bucket_name+SUB_SUFFIX+'_all',
+ topic_name)
+ _, status = sub_conf.set_config()
+ assert_equal(status/100, 2)
+ # create objects in the bucket
+ number_of_objects = 10
+ for i in range(number_of_objects):
+ key = bucket.new_key(str(i))
+ key.set_contents_from_string('bar')
+ # wait for sync
+ zone_bucket_checkpoint(ps_zones[0].zone, zones[0].zone, bucket_name)
# get the events from the creation subscription
result, _ = sub_create_conf.get_events()
parsed_result = json.loads(result)
for event in parsed_result['events']:
- log.debug('Event (OBJECT_CREATE): objname: "' + str(event['info']['key']['name']) + \
- '" type: "' + str(event['event']) + '"')
+ log.debug('Event (OBJECT_CREATE): objname: "' + str(event['info']['key']['name']) +
+ '" type: "' + str(event['event']) + '"')
+ keys = list(bucket.list())
+ # TODO: use exact match
+ verify_events_by_elements(parsed_result['events'], keys, exact_match=False)
+ # get the events from the deletions subscription
+ result, _ = sub_delete_conf.get_events()
+ parsed_result = json.loads(result)
+ for event in parsed_result['events']:
+ log.debug('Event (OBJECT_DELETE): objname: "' + str(event['info']['key']['name']) +
+ '" type: "' + str(event['event']) + '"')
+ assert_equal(len(parsed_result['events']), 0)
+ # get the events from the all events subscription
+ result, _ = sub_conf.get_events()
+ parsed_result = json.loads(result)
+ for event in parsed_result['events']:
+ log.debug('Event (OBJECT_CREATE,OBJECT_DELETE): objname: "' +
+ str(event['info']['key']['name']) + '" type: "' + str(event['event']) + '"')
+ # TODO: use exact match
+ verify_events_by_elements(parsed_result['events'], keys, exact_match=False)
+ # delete objects from the bucket
+ for key in bucket.list():
+ key.delete()
+ # wait for sync
+ zone_bucket_checkpoint(ps_zones[0].zone, zones[0].zone, bucket_name)
+ log.debug("Event (OBJECT_DELETE) synced")
+
+ # get the events from the creations subscription
+ result, _ = sub_create_conf.get_events()
+ parsed_result = json.loads(result)
+ for event in parsed_result['events']:
+ log.debug('Event (OBJECT_CREATE): objname: "' + str(event['info']['key']['name']) +
+ '" type: "' + str(event['event']) + '"')
+ # deletions should not change the creation events
+ # TODO: use exact match
+ verify_events_by_elements(parsed_result['events'], keys, exact_match=False)
+ # get the events from the deletions subscription
+ result, _ = sub_delete_conf.get_events()
+ parsed_result = json.loads(result)
+ for event in parsed_result['events']:
+ log.debug('Event (OBJECT_DELETE): objname: "' + str(event['info']['key']['name']) +
+ '" type: "' + str(event['event']) + '"')
+ # only deletions should be listed here
+ # TODO: use exact match
+ verify_events_by_elements(parsed_result['events'], keys, exact_match=False, deletions=True)
+ # get the events from the all events subscription
+ result, _ = sub_conf.get_events()
+ parsed_result = json.loads(result)
+ for event in parsed_result['events']:
+ log.debug('Event (OBJECT_CREATE,OBJECT_DELETE): objname: "' + str(event['info']['key']['name']) +
+ '" type: "' + str(event['event']) + '"')
+ # both deletions and creations should be here
+ # TODO: use exact match
+ verify_events_by_elements(parsed_result['events'], keys, exact_match=False, deletions=False)
+ # verify_events_by_elements(parsed_result['events'], keys, exact_match=False, deletions=True)
+ # TODO: (1) test deletions (2) test overall number of events
+
+ # test subscription deletion when topic is specified
+ _, status = sub_create_conf.del_config(topic=True)
+ assert_equal(status/100, 2)
+ _, status = sub_delete_conf.del_config(topic=True)
+ assert_equal(status/100, 2)
+ _, status = sub_conf.del_config(topic=True)
+ assert_equal(status/100, 2)
+
+ # cleanup
+ notification_create_conf.del_config()
+ notification_delete_conf.del_config()
+ notification_conf.del_config()
+ topic_create_conf.del_config()
+ topic_delete_conf.del_config()
+ topic_conf.del_config()
+ zones[0].delete_bucket(bucket_name)
+
+
+def test_ps_event_fetching():
+ """ test incremental fetching of events from a subscription """
+ zones, ps_zones = init_env()
+ bucket_name = gen_bucket_name()
+ topic_name = bucket_name+TOPIC_SUFFIX
+
+ # create topic
+ topic_conf = PSTopic(ps_zones[0].conn, topic_name)
+ topic_conf.set_config()
+ # create bucket on the first of the rados zones
+ bucket = zones[0].create_bucket(bucket_name)
+ # wait for sync
+ zone_meta_checkpoint(ps_zones[0].zone)
+ # create notifications
+ notification_conf = PSNotification(ps_zones[0].conn, bucket_name,
+ topic_name)
+ _, status = notification_conf.set_config()
+ assert_equal(status/100, 2)
+ # create subscription
+ sub_conf = PSSubscription(ps_zones[0].conn, bucket_name+SUB_SUFFIX,
+ topic_name)
+ _, status = sub_conf.set_config()
+ assert_equal(status/100, 2)
+ # create objects in the bucket
+ number_of_objects = 100
+ for i in range(number_of_objects):
+ key = bucket.new_key(str(i))
+ key.set_contents_from_string('bar')
+ # wait for sync
+ zone_bucket_checkpoint(ps_zones[0].zone, zones[0].zone, bucket_name)
+ max_events = 15
+ total_events_count = 0
+ next_marker = None
+ all_events = []
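+ # fetch the events in pages of 'max_events', passing the marker returned by the
+ # previous call; an empty 'next_marker' indicates that the last page was reached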
+ while True:
+ # get the events from the subscription
+ result, _ = sub_conf.get_events(max_events, next_marker)
+ parsed_result = json.loads(result)
+ events = parsed_result['events']
+ total_events_count += len(events)
+ all_events.extend(events)
+ next_marker = parsed_result['next_marker']
+ for event in events:
+ log.debug('Event: objname: "' + str(event['info']['key']['name']) + '" type: "' + str(event['event']) + '"')
+ if next_marker == '':
+ break
+ keys = list(bucket.list())
+ # TODO: use exact match
+ verify_events_by_elements(all_events, keys, exact_match=False)
+
+ # cleanup
+ sub_conf.del_config()
+ notification_conf.del_config()
+ topic_conf.del_config()
+ for key in bucket.list():
+ key.delete()
+ zones[0].delete_bucket(bucket_name)
+
+
+def test_ps_event_acking():
+ """ test acking of some events in a subscription """
+ zones, ps_zones = init_env()
+ bucket_name = gen_bucket_name()
+ topic_name = bucket_name+TOPIC_SUFFIX
+
+ # create topic
+ topic_conf = PSTopic(ps_zones[0].conn, topic_name)
+ topic_conf.set_config()
+ # create bucket on the first of the rados zones
+ bucket = zones[0].create_bucket(bucket_name)
+ # wait for sync
+ zone_meta_checkpoint(ps_zones[0].zone)
+ # create notifications
+ notification_conf = PSNotification(ps_zones[0].conn, bucket_name,
+ topic_name)
+ _, status = notification_conf.set_config()
+ assert_equal(status/100, 2)
+ # create subscription
+ sub_conf = PSSubscription(ps_zones[0].conn, bucket_name+SUB_SUFFIX,
+ topic_name)
+ _, status = sub_conf.set_config()
+ assert_equal(status/100, 2)
+ # create objects in the bucket
+ number_of_objects = 10
+ for i in range(number_of_objects):
+ key = bucket.new_key(str(i))
+ key.set_contents_from_string('bar')
+ # wait for sync
+ zone_bucket_checkpoint(ps_zones[0].zone, zones[0].zone, bucket_name)
+
+ # get the create events from the subscription
+ result, _ = sub_conf.get_events()
+ parsed_result = json.loads(result)
+ events = parsed_result['events']
+ original_number_of_events = len(events)
+ for event in events:
+ log.debug('Event (before ack) id: "' + str(event['id']) + '"')
+ keys = list(bucket.list())
+ # TODO: use exact match
+ verify_events_by_elements(events, keys, exact_match=False)
+ # ack half of the events
+ events_to_ack = number_of_objects/2
+ for event in events:
+ if events_to_ack == 0:
+ break
+ _, status = sub_conf.ack_events(event['id'])
+ assert_equal(status/100, 2)
+ events_to_ack -= 1
+
+ # verify that acked events are gone
+ result, _ = sub_conf.get_events()
+ parsed_result = json.loads(result)
+ for event in parsed_result['events']:
+ log.debug('Event (after ack) id: "' + str(event['id']) + '"')
+ assert len(parsed_result['events']) >= (original_number_of_events - number_of_objects/2)
+
+ # cleanup
+ sub_conf.del_config()
+ notification_conf.del_config()
+ topic_conf.del_config()
+ for key in bucket.list():
+ key.delete()
+ zones[0].delete_bucket(bucket_name)
+
+
+def test_ps_creation_triggers():
+ """ test object creation notifications in using put/copy/post """
+ zones, ps_zones = init_env()
+ bucket_name = gen_bucket_name()
+ topic_name = bucket_name+TOPIC_SUFFIX
+
+ # create topic
+ topic_conf = PSTopic(ps_zones[0].conn, topic_name)
+ topic_conf.set_config()
+ # create bucket on the first of the rados zones
+ bucket = zones[0].create_bucket(bucket_name)
+ # wait for sync
+ zone_meta_checkpoint(ps_zones[0].zone)
+ # create notifications
+ notification_conf = PSNotification(ps_zones[0].conn, bucket_name,
+ topic_name)
+ _, status = notification_conf.set_config()
+ assert_equal(status/100, 2)
+ # create subscription
+ sub_conf = PSSubscription(ps_zones[0].conn, bucket_name+SUB_SUFFIX,
+ topic_name)
+ _, status = sub_conf.set_config()
+ assert_equal(status/100, 2)
+ # create objects in the bucket using PUT
+ key = bucket.new_key('put')
+ key.set_contents_from_string('bar')
+ # create objects in the bucket using COPY
+ bucket.copy_key('copy', bucket.name, key.name)
+ # create objects in the bucket using multi-part upload
+ fp = tempfile.TemporaryFile(mode='w+b')
+ fp.write('bar')
+ fp.flush()
+ fp.seek(0)
+ uploader = bucket.initiate_multipart_upload('multipart')
+ uploader.upload_part_from_file(fp, 1)
+ uploader.complete_upload()
+ fp.close()
+ # wait for sync
+ zone_bucket_checkpoint(ps_zones[0].zone, zones[0].zone, bucket_name)
+
+ # get the create events from the subscription
+ result, _ = sub_conf.get_events()
+ parsed_result = json.loads(result)
+ for event in parsed_result['events']:
+ log.debug('Event key: "' + str(event['info']['key']['name']) + '" type: "' + str(event['event']) + '"')
+
+ # TODO: verify the specific 3 keys: 'put', 'copy' and 'multipart'
+ assert len(parsed_result['events']) >= 3
+ # cleanup
+ sub_conf.del_config()
+ notification_conf.del_config()
+ topic_conf.del_config()
+ for key in bucket.list():
+ key.delete()
+ zones[0].delete_bucket(bucket_name)
+
+
+def test_ps_s3_creation_triggers_on_master():
+ """ test object creation s3 notifications in using put/copy/post on master"""
+ if skip_push_tests:
+ return SkipTest("PubSub push tests don't run in teuthology")
+ hostname = get_ip()
+ proc = init_rabbitmq()
+ if proc is None:
+ return SkipTest('end2end amqp tests require rabbitmq-server installed')
+ zones, _ = init_env(require_ps=False)
+ realm = get_realm()
+ zonegroup = realm.master_zonegroup()
+
+ # create bucket
+ bucket_name = gen_bucket_name()
+ bucket = zones[0].create_bucket(bucket_name)
+ topic_name = bucket_name + TOPIC_SUFFIX
+
+ # start amqp receiver
+ exchange = 'ex1'
+ task, receiver = create_amqp_receiver_thread(exchange, topic_name)
+ task.start()
+
+ # create s3 topic
+ endpoint_address = 'amqp://' + hostname
+ endpoint_args = 'push-endpoint='+endpoint_address+'&amqp-exchange=' + exchange +'&amqp-ack-level=broker'
+ topic_conf = PSTopicS3(zones[0].conn, topic_name, zonegroup.name, endpoint_args=endpoint_args)
+ topic_arn = topic_conf.set_config()
+ # create s3 notification
+ notification_name = bucket_name + NOTIFICATION_SUFFIX
+ topic_conf_list = [{'Id': notification_name, 'TopicArn': topic_arn,
+ 'Events': ['s3:ObjectCreated:Put', 's3:ObjectCreated:Copy']
+ }]
+
+ s3_notification_conf = PSNotificationS3(zones[0].conn, bucket_name, topic_conf_list)
+ response, status = s3_notification_conf.set_config()
+ assert_equal(status/100, 2)
+
+ # create objects in the bucket using PUT
+ key = bucket.new_key('put')
+ key.set_contents_from_string('bar')
+ # create objects in the bucket using COPY
+ bucket.copy_key('copy', bucket.name, key.name)
+ # create objects in the bucket using multi-part upload
+ fp = tempfile.TemporaryFile(mode='w+b')
+ fp.write('bar')
+ fp.flush()
+ fp.seek(0)
+ uploader = bucket.initiate_multipart_upload('multipart')
+ uploader.upload_part_from_file(fp, 1)
+ uploader.complete_upload()
+ fp.close()
+
+ print 'wait for 5sec for the messages...'
+ time.sleep(5)
+
+ # check amqp receiver
keys = list(bucket.list())
- # TODO: set exact_match to true
- verify_events_by_elements(parsed_result['events'], keys, exact_match=False)
- # get the events from the deletions subscription
- result, _ = sub_delete_conf.get_events()
+ receiver.verify_s3_events(keys, exact_match=True)
+
+ # cleanup
+ stop_amqp_receiver(receiver, task)
+ s3_notification_conf.del_config()
+ topic_conf.del_config()
+ for key in bucket.list():
+ key.delete()
+ # delete the bucket
+ zones[0].delete_bucket(bucket_name)
+ clean_rabbitmq(proc)
+
+
+def test_ps_s3_multipart_on_master():
+ """ test multipart object upload on master"""
+ if skip_push_tests:
+ return SkipTest("PubSub push tests don't run in teuthology")
+ hostname = get_ip()
+ proc = init_rabbitmq()
+ if proc is None:
+ return SkipTest('end2end amqp tests require rabbitmq-server installed')
+ zones, _ = init_env(require_ps=False)
+ realm = get_realm()
+ zonegroup = realm.master_zonegroup()
+
+ # create bucket
+ bucket_name = gen_bucket_name()
+ bucket = zones[0].create_bucket(bucket_name)
+ topic_name = bucket_name + TOPIC_SUFFIX
+
+ # start amqp receivers
+ exchange = 'ex1'
+ task1, receiver1 = create_amqp_receiver_thread(exchange, topic_name+'_1')
+ task1.start()
+ task2, receiver2 = create_amqp_receiver_thread(exchange, topic_name+'_2')
+ task2.start()
+ task3, receiver3 = create_amqp_receiver_thread(exchange, topic_name+'_3')
+ task3.start()
+
+ # create s3 topics
+ endpoint_address = 'amqp://' + hostname
+ endpoint_args = 'push-endpoint=' + endpoint_address + '&amqp-exchange=' + exchange + '&amqp-ack-level=broker'
+ topic_conf1 = PSTopicS3(zones[0].conn, topic_name+'_1', zonegroup.name, endpoint_args=endpoint_args)
+ topic_arn1 = topic_conf1.set_config()
+ topic_conf2 = PSTopicS3(zones[0].conn, topic_name+'_2', zonegroup.name, endpoint_args=endpoint_args)
+ topic_arn2 = topic_conf2.set_config()
+ topic_conf3 = PSTopicS3(zones[0].conn, topic_name+'_3', zonegroup.name, endpoint_args=endpoint_args)
+ topic_arn3 = topic_conf3.set_config()
+
+ # create s3 notifications
+ notification_name = bucket_name + NOTIFICATION_SUFFIX
+ topic_conf_list = [{'Id': notification_name+'_1', 'TopicArn': topic_arn1,
+ 'Events': ['s3:ObjectCreated:*']
+ },
+ {'Id': notification_name+'_2', 'TopicArn': topic_arn2,
+ 'Events': ['s3:ObjectCreated:Post']
+ },
+ {'Id': notification_name+'_3', 'TopicArn': topic_arn3,
+ 'Events': ['s3:ObjectCreated:CompleteMultipartUpload']
+ }]
+ s3_notification_conf = PSNotificationS3(zones[0].conn, bucket_name, topic_conf_list)
+ response, status = s3_notification_conf.set_config()
+ assert_equal(status/100, 2)
+
+ # create objects in the bucket using multi-part upload
+ fp = tempfile.TemporaryFile(mode='w+b')
+ content = bytearray(os.urandom(1024*1024))
+ fp.write(content)
+ fp.flush()
+ fp.seek(0)
+ uploader = bucket.initiate_multipart_upload('multipart')
+ uploader.upload_part_from_file(fp, 1)
+ uploader.complete_upload()
+ fp.close()
+
+ print 'wait for 5sec for the messages...'
+ time.sleep(5)
+
+ # check amqp receiver
+ events = receiver1.get_and_reset_events()
+ assert_equal(len(events), 3)
+
+ events = receiver2.get_and_reset_events()
+ assert_equal(len(events), 1)
+ assert_equal(events[0]['eventName'], 's3:ObjectCreated:Post')
+ assert_equal(events[0]['s3']['configurationId'], notification_name+'_2')
+
+ events = receiver3.get_and_reset_events()
+ assert_equal(len(events), 1)
+ assert_equal(events[0]['eventName'], 's3:ObjectCreated:CompleteMultipartUpload')
+ assert_equal(events[0]['s3']['configurationId'], notification_name+'_3')
+
+ # cleanup
+ stop_amqp_receiver(receiver1, task1)
+ stop_amqp_receiver(receiver2, task2)
+ stop_amqp_receiver(receiver3, task3)
+ s3_notification_conf.del_config()
+ topic_conf1.del_config()
+ topic_conf2.del_config()
+ topic_conf3.del_config()
+ for key in bucket.list():
+ key.delete()
+ # delete the bucket
+ zones[0].delete_bucket(bucket_name)
+ clean_rabbitmq(proc)
+
+
+def test_ps_versioned_deletion():
+ """ test notification of deletion markers """
+ zones, ps_zones = init_env()
+ bucket_name = gen_bucket_name()
+ topic_name = bucket_name+TOPIC_SUFFIX
+
+ # create topics
+ topic_conf1 = PSTopic(ps_zones[0].conn, topic_name+'_1')
+ _, status = topic_conf1.set_config()
+ assert_equal(status/100, 2)
+ topic_conf2 = PSTopic(ps_zones[0].conn, topic_name+'_2')
+ _, status = topic_conf2.set_config()
+ assert_equal(status/100, 2)
+
+ # create bucket on the first of the rados zones
+ bucket = zones[0].create_bucket(bucket_name)
+ bucket.configure_versioning(True)
+
+ # wait for sync
+ zone_meta_checkpoint(ps_zones[0].zone)
+
+ # create notifications
+ event_type1 = 'OBJECT_DELETE'
+ notification_conf1 = PSNotification(ps_zones[0].conn, bucket_name,
+ topic_name+'_1',
+ event_type1)
+ _, status = notification_conf1.set_config()
+ assert_equal(status/100, 2)
+ event_type2 = 'DELETE_MARKER_CREATE'
+ notification_conf2 = PSNotification(ps_zones[0].conn, bucket_name,
+ topic_name+'_2',
+ event_type2)
+ _, status = notification_conf2.set_config()
+ assert_equal(status/100, 2)
+
+ # create subscriptions
+ sub_conf1 = PSSubscription(ps_zones[0].conn, bucket_name+SUB_SUFFIX+'_1',
+ topic_name+'_1')
+ _, status = sub_conf1.set_config()
+ assert_equal(status/100, 2)
+ sub_conf2 = PSSubscription(ps_zones[0].conn, bucket_name+SUB_SUFFIX+'_2',
+ topic_name+'_2')
+ _, status = sub_conf2.set_config()
+ assert_equal(status/100, 2)
+
+ # create objects in the bucket
+ key = bucket.new_key('foo')
+ key.set_contents_from_string('bar')
+ v1 = key.version_id
+ key.set_contents_from_string('kaboom')
+ v2 = key.version_id
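+ # note: on a versioned bucket, deleting a key without a version id creates a
+ # delete marker rather than removing any object version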
+ # create deletion marker
+ delete_marker_key = bucket.delete_key(key.name)
+
+ # wait for sync
+ zone_bucket_checkpoint(ps_zones[0].zone, zones[0].zone, bucket_name)
+
+ # delete the deletion marker
+ delete_marker_key.delete()
+ # delete versions
+ bucket.delete_key(key.name, version_id=v2)
+ bucket.delete_key(key.name, version_id=v1)
+
+ # wait for sync
+ zone_bucket_checkpoint(ps_zones[0].zone, zones[0].zone, bucket_name)
+
+ # get the delete events from the subscription
+ result, _ = sub_conf1.get_events()
parsed_result = json.loads(result)
for event in parsed_result['events']:
- log.debug('Event (OBJECT_DELETE): objname: "' + str(event['info']['key']['name']) + \
- '" type: "' + str(event['event']) + '"')
- assert_equal(len(parsed_result['events']), 0)
- # get the events from the all events subscription
- result, _ = sub_conf.get_events()
+ log.debug('Event key: "' + str(event['info']['key']['name']) + '" type: "' + str(event['event']) + '"')
+ assert_equal(str(event['event']), event_type1)
+
+ result, _ = sub_conf2.get_events()
parsed_result = json.loads(result)
for event in parsed_result['events']:
- log.debug('Event (OBJECT_CREATE,OBJECT_DELETE): objname: "' + \
- str(event['info']['key']['name']) + '" type: "' + str(event['event']) + '"')
- # TODO: set exact_match to true
- verify_events_by_elements(parsed_result['events'], keys, exact_match=False)
+ log.debug('Event key: "' + str(event['info']['key']['name']) + '" type: "' + str(event['event']) + '"')
+ assert_equal(str(event['event']), event_type2)
+
+ # cleanup
+ # following is needed for the cleanup in the case of 3-zones
+ # see: http://tracker.ceph.com/issues/39142
+ realm = get_realm()
+ zonegroup = realm.master_zonegroup()
+ zonegroup_conns = ZonegroupConns(zonegroup)
+ try:
+ zonegroup_bucket_checkpoint(zonegroup_conns, bucket_name)
+ zones[0].delete_bucket(bucket_name)
+ except:
+ log.debug('zonegroup_bucket_checkpoint failed, cannot delete bucket')
+ sub_conf1.del_config()
+ sub_conf2.del_config()
+ notification_conf1.del_config()
+ notification_conf2.del_config()
+ topic_conf1.del_config()
+ topic_conf2.del_config()
+
+
+def test_ps_s3_metadata_on_master():
+ """ test s3 notification of metadata on master """
+ if skip_push_tests:
+ return SkipTest("PubSub push tests don't run in teuthology")
+ hostname = get_ip()
+ proc = init_rabbitmq()
+ if proc is None:
+ return SkipTest('end2end amqp tests require rabbitmq-server installed')
+ zones, _ = init_env(require_ps=False)
+ realm = get_realm()
+ zonegroup = realm.master_zonegroup()
+
+ # create bucket
+ bucket_name = gen_bucket_name()
+ bucket = zones[0].create_bucket(bucket_name)
+ topic_name = bucket_name + TOPIC_SUFFIX
+
+ # start amqp receiver
+ exchange = 'ex1'
+ task, receiver = create_amqp_receiver_thread(exchange, topic_name)
+ task.start()
+
+ # create s3 topic
+ endpoint_address = 'amqp://' + hostname
+ endpoint_args = 'push-endpoint='+endpoint_address+'&amqp-exchange=' + exchange +'&amqp-ack-level=broker'
+ topic_conf = PSTopicS3(zones[0].conn, topic_name, zonegroup.name, endpoint_args=endpoint_args)
+ topic_arn = topic_conf.set_config()
+ # create s3 notification
+ notification_name = bucket_name + NOTIFICATION_SUFFIX
+ topic_conf_list = [{'Id': notification_name, 'TopicArn': topic_arn,
+ 'Events': ['s3:ObjectCreated:*']
+ }]
+
+ s3_notification_conf = PSNotificationS3(zones[0].conn, bucket_name, topic_conf_list)
+ response, status = s3_notification_conf.set_config()
+ assert_equal(status/100, 2)
+
+ # create objects in the bucket
+ key = bucket.new_key('foo')
+ key.set_metadata('meta1', 'This is my metadata value')
+ key.set_contents_from_string('aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa')
+ keys = list(bucket.list())
+ print 'wait for 5sec for the messages...'
+ time.sleep(5)
+ # check amqp receiver
+ receiver.verify_s3_events(keys, exact_match=True)
+
+ # cleanup
+ stop_amqp_receiver(receiver, task)
+ s3_notification_conf.del_config()
+ topic_conf.del_config()
+ for key in bucket.list():
+ key.delete()
+ # delete the bucket
+ zones[0].delete_bucket(bucket_name)
+ clean_rabbitmq(proc)
+
+
+def test_ps_s3_versioned_deletion_on_master():
+ """ test s3 notification of deletion markers on master """
+ if skip_push_tests:
+ return SkipTest("PubSub push tests don't run in teuthology")
+ hostname = get_ip()
+ proc = init_rabbitmq()
+ if proc is None:
+ return SkipTest('end2end amqp tests require rabbitmq-server installed')
+ zones, _ = init_env(require_ps=False)
+ realm = get_realm()
+ zonegroup = realm.master_zonegroup()
+
+ # create bucket
+ bucket_name = gen_bucket_name()
+ bucket = zones[0].create_bucket(bucket_name)
+ bucket.configure_versioning(True)
+ topic_name = bucket_name + TOPIC_SUFFIX
+
+ # start amqp receiver
+ exchange = 'ex1'
+ task, receiver = create_amqp_receiver_thread(exchange, topic_name)
+ task.start()
+
+ # create s3 topic
+ endpoint_address = 'amqp://' + hostname
+ endpoint_args = 'push-endpoint='+endpoint_address+'&amqp-exchange=' + exchange +'&amqp-ack-level=broker'
+ topic_conf = PSTopicS3(zones[0].conn, topic_name, zonegroup.name, endpoint_args=endpoint_args)
+ topic_arn = topic_conf.set_config()
+ # create s3 notification
+ notification_name = bucket_name + NOTIFICATION_SUFFIX
+ # TODO use s3:ObjectRemoved:DeleteMarkerCreated once supported in the code
+ topic_conf_list = [{'Id': notification_name+'_1', 'TopicArn': topic_arn,
+ 'Events': ['s3:ObjectRemoved:*']
+ },
+ {'Id': notification_name+'_2', 'TopicArn': topic_arn,
+ 'Events': ['s3:ObjectRemoved:DeleteMarkerCreated']
+ },
+ {'Id': notification_name+'_3', 'TopicArn': topic_arn,
+ 'Events': ['s3:ObjectRemoved:Delete']
+ }]
+ s3_notification_conf = PSNotificationS3(zones[0].conn, bucket_name, topic_conf_list)
+ response, status = s3_notification_conf.set_config()
+ assert_equal(status/100, 2)
+
+ # create objects in the bucket
+ key = bucket.new_key('foo')
+ key.set_contents_from_string('bar')
+ v1 = key.version_id
+ key.set_contents_from_string('kaboom')
+ v2 = key.version_id
+ # create delete marker (non versioned deletion)
+ delete_marker_key = bucket.delete_key(key.name)
+
+ time.sleep(1)
+
+ # versioned deletion
+ bucket.delete_key(key.name, version_id=v2)
+ bucket.delete_key(key.name, version_id=v1)
+ delete_marker_key.delete()
+
+ print 'wait for 5sec for the messages...'
+ time.sleep(5)
+
+ # check amqp receiver
+ events = receiver.get_and_reset_events()
+ delete_events = 0
+ delete_marker_create_events = 0
+ for event in events:
+ if event['eventName'] == 's3:ObjectRemoved:Delete':
+ delete_events += 1
+ assert event['s3']['configurationId'] in [notification_name+'_1', notification_name+'_3']
+ if event['eventName'] == 's3:ObjectRemoved:DeleteMarkerCreated':
+ delete_marker_create_events += 1
+ assert event['s3']['configurationId'] in [notification_name+'_1', notification_name+'_2']
+
+ # 3 key versions were deleted (v1, v2 and the deletion marker)
+ # notified over the same topic via 2 notifications (1,3)
+ assert_equal(delete_events, 3*2)
+ # 1 deletion marker was created
+ # notified over the same topic over 2 notifications (1,2)
+ assert_equal(delete_marker_create_events, 1*2)
+
+ # cleanup
+ stop_amqp_receiver(receiver, task)
+ s3_notification_conf.del_config()
+ topic_conf.del_config()
+ # delete the bucket
+ zones[0].delete_bucket(bucket_name)
+ clean_rabbitmq(proc)
+
+
+def test_ps_push_http():
+ """ test pushing to http endpoint """
+ if skip_push_tests:
+ return SkipTest("PubSub push tests don't run in teuthology")
+ zones, ps_zones = init_env()
+ bucket_name = gen_bucket_name()
+ topic_name = bucket_name+TOPIC_SUFFIX
+
+ # create random port for the http server
+ host = get_ip()
+ port = random.randint(10000, 20000)
+ # start an http server in a separate thread
+ http_server = StreamingHTTPServer(host, port)
+
+ # create topic
+ topic_conf = PSTopic(ps_zones[0].conn, topic_name)
+ _, status = topic_conf.set_config()
+ assert_equal(status/100, 2)
+ # create bucket on the first of the rados zones
+ bucket = zones[0].create_bucket(bucket_name)
+ # wait for sync
+ zone_meta_checkpoint(ps_zones[0].zone)
+ # create notifications
+ notification_conf = PSNotification(ps_zones[0].conn, bucket_name,
+ topic_name)
+ _, status = notification_conf.set_config()
+ assert_equal(status/100, 2)
+ # create subscription
+ sub_conf = PSSubscription(ps_zones[0].conn, bucket_name+SUB_SUFFIX,
+ topic_name, endpoint='http://'+host+':'+str(port))
+ _, status = sub_conf.set_config()
+ assert_equal(status/100, 2)
+ # create objects in the bucket
+ number_of_objects = 10
+ for i in range(number_of_objects):
+ key = bucket.new_key(str(i))
+ key.set_contents_from_string('bar')
+ # wait for sync
+ zone_bucket_checkpoint(ps_zones[0].zone, zones[0].zone, bucket_name)
+ # check http server
+ keys = list(bucket.list())
+ # TODO: use exact match
+ http_server.verify_events(keys, exact_match=False)
+
# delete objects from the bucket
for key in bucket.list():
key.delete()
# wait for sync
+ zone_meta_checkpoint(ps_zones[0].zone)
zone_bucket_checkpoint(ps_zones[0].zone, zones[0].zone, bucket_name)
- log.debug("Event (OBJECT_DELETE) synced")
-
- # get the events from the creations subscription
- result, _ = sub_create_conf.get_events()
- parsed_result = json.loads(result)
- for event in parsed_result['events']:
- log.debug('Event (OBJECT_CREATE): objname: "' + str(event['info']['key']['name']) + \
- '" type: "' + str(event['event']) + '"')
- # deletions should not change the creation events
- # TODO: set exact_match to true
- verify_events_by_elements(parsed_result['events'], keys, exact_match=False)
- # get the events from the deletions subscription
- result, _ = sub_delete_conf.get_events()
- parsed_result = json.loads(result)
- for event in parsed_result['events']:
- log.debug('Event (OBJECT_DELETE): objname: "' + str(event['info']['key']['name']) + \
- '" type: "' + str(event['event']) + '"')
- # only deletions should be listed here
- # TODO: set exact_match to true
- verify_events_by_elements(parsed_result['events'], keys, exact_match=False, deletions=True)
- # get the events from the all events subscription
- result, _ = sub_create_conf.get_events()
+ # check http server
+ # TODO: use exact match
+ http_server.verify_events(keys, deletions=True, exact_match=False)
+
+ # cleanup
+ sub_conf.del_config()
+ notification_conf.del_config()
+ topic_conf.del_config()
+ zones[0].delete_bucket(bucket_name)
+ http_server.close()
+
+
+def test_ps_s3_push_http():
+ """ test pushing to http endpoint s3 record format"""
+ if skip_push_tests:
+ return SkipTest("PubSub push tests don't run in teuthology")
+ zones, ps_zones = init_env()
+ bucket_name = gen_bucket_name()
+ topic_name = bucket_name+TOPIC_SUFFIX
+
+ # create random port for the http server
+ host = get_ip()
+ port = random.randint(10000, 20000)
+ # start an http server in a separate thread
+ http_server = StreamingHTTPServer(host, port)
+
+ # create topic
+ topic_conf = PSTopic(ps_zones[0].conn, topic_name,
+ endpoint='http://'+host+':'+str(port))
+ result, status = topic_conf.set_config()
+ assert_equal(status/100, 2)
parsed_result = json.loads(result)
- for event in parsed_result['events']:
- log.debug('Event (OBJECT_CREATE,OBJECT_DELETE): objname: "' + str(event['info']['key']['name']) + \
- '" type: "' + str(event['event']) + '"')
- # both deletions and creations should be here
- verify_events_by_elements(parsed_result['events'], keys, exact_match=False, deletions=False)
- # verify_events_by_elements(parsed_result['events'], keys, exact_match=False, deletions=True)
- # TODO: (1) test deletions (2) test overall number of events
+ topic_arn = parsed_result['arn']
+ # create bucket on the first of the rados zones
+ bucket = zones[0].create_bucket(bucket_name)
+ # wait for sync
+ zone_meta_checkpoint(ps_zones[0].zone)
+ # create s3 notification
+ notification_name = bucket_name + NOTIFICATION_SUFFIX
+ topic_conf_list = [{'Id': notification_name,
+ 'TopicArn': topic_arn,
+ 'Events': ['s3:ObjectCreated:*', 's3:ObjectRemoved:*']
+ }]
+ s3_notification_conf = PSNotificationS3(ps_zones[0].conn, bucket_name, topic_conf_list)
+ _, status = s3_notification_conf.set_config()
+ assert_equal(status/100, 2)
+ # create objects in the bucket
+ number_of_objects = 10
+ for i in range(number_of_objects):
+ key = bucket.new_key(str(i))
+ key.set_contents_from_string('bar')
+ # wait for sync
+ zone_bucket_checkpoint(ps_zones[0].zone, zones[0].zone, bucket_name)
+ # check http server
+ keys = list(bucket.list())
+ # TODO: use exact match
+ http_server.verify_s3_events(keys, exact_match=False)
+
+ # delete objects from the bucket
+ for key in bucket.list():
+ key.delete()
+ # wait for sync
+ zone_meta_checkpoint(ps_zones[0].zone)
+ zone_bucket_checkpoint(ps_zones[0].zone, zones[0].zone, bucket_name)
+ # check http server
+ # TODO: use exact match
+ http_server.verify_s3_events(keys, deletions=True, exact_match=False)
+
+ # cleanup
+ s3_notification_conf.del_config()
+ topic_conf.del_config()
+ zones[0].delete_bucket(bucket_name)
+ http_server.close()
+
+
+def test_ps_push_amqp():
+ """ test pushing to amqp endpoint """
+ if skip_push_tests:
+ return SkipTest("PubSub push tests don't run in teuthology")
+ hostname = get_ip()
+ proc = init_rabbitmq()
+ if proc is None:
+ return SkipTest('end2end amqp tests require rabbitmq-server installed')
+ zones, ps_zones = init_env()
+ bucket_name = gen_bucket_name()
+ topic_name = bucket_name+TOPIC_SUFFIX
+
+ # create topic
+ exchange = 'ex1'
+ task, receiver = create_amqp_receiver_thread(exchange, topic_name)
+ task.start()
+ topic_conf = PSTopic(ps_zones[0].conn, topic_name)
+ _, status = topic_conf.set_config()
+ assert_equal(status/100, 2)
+ # create bucket on the first of the rados zones
+ bucket = zones[0].create_bucket(bucket_name)
+ # wait for sync
+ zone_meta_checkpoint(ps_zones[0].zone)
+ # create notifications
+ notification_conf = PSNotification(ps_zones[0].conn, bucket_name,
+ topic_name)
+ _, status = notification_conf.set_config()
+ assert_equal(status/100, 2)
+ # create subscription
+ sub_conf = PSSubscription(ps_zones[0].conn, bucket_name+SUB_SUFFIX,
+ topic_name, endpoint='amqp://'+hostname,
+ endpoint_args='amqp-exchange='+exchange+'&amqp-ack-level=broker')
+ _, status = sub_conf.set_config()
+ assert_equal(status/100, 2)
+ # create objects in the bucket
+ number_of_objects = 10
+ for i in range(number_of_objects):
+ key = bucket.new_key(str(i))
+ key.set_contents_from_string('bar')
+ # wait for sync
+ zone_bucket_checkpoint(ps_zones[0].zone, zones[0].zone, bucket_name)
+ # check amqp receiver
+ keys = list(bucket.list())
+ # TODO: use exact match
+ receiver.verify_events(keys, exact_match=False)
+
+ # delete objects from the bucket
+ for key in bucket.list():
+ key.delete()
+ # wait for sync
+ zone_meta_checkpoint(ps_zones[0].zone)
+ zone_bucket_checkpoint(ps_zones[0].zone, zones[0].zone, bucket_name)
+ # check amqp receiver
+ # TODO: use exact match
+ receiver.verify_events(keys, deletions=True, exact_match=False)
# cleanup
- sub_create_conf.del_config()
- sub_delete_conf.del_config()
+ stop_amqp_receiver(receiver, task)
sub_conf.del_config()
- notification_create_conf.del_config()
- notification_delete_conf.del_config()
notification_conf.del_config()
- topic_create_conf.del_config()
- topic_delete_conf.del_config()
topic_conf.del_config()
zones[0].delete_bucket(bucket_name)
+ clean_rabbitmq(proc)
+
+
+def test_ps_s3_push_amqp():
+ """ test pushing to amqp endpoint s3 record format"""
+ if skip_push_tests:
+ return SkipTest("PubSub push tests don't run in teuthology")
+ hostname = get_ip()
+ proc = init_rabbitmq()
+ if proc is None:
+ return SkipTest('end2end amqp tests require rabbitmq-server installed')
+ zones, ps_zones = init_env()
+ bucket_name = gen_bucket_name()
+ topic_name = bucket_name+TOPIC_SUFFIX
+
+ # create topic
+ exchange = 'ex1'
+ task, receiver = create_amqp_receiver_thread(exchange, topic_name)
+ task.start()
+ topic_conf = PSTopic(ps_zones[0].conn, topic_name,
+ endpoint='amqp://' + hostname,
+ endpoint_args='amqp-exchange=' + exchange + '&amqp-ack-level=none')
+ result, status = topic_conf.set_config()
+ assert_equal(status/100, 2)
+ parsed_result = json.loads(result)
+ topic_arn = parsed_result['arn']
+ # create bucket on the first of the rados zones
+ bucket = zones[0].create_bucket(bucket_name)
+ # wait for sync
+ zone_meta_checkpoint(ps_zones[0].zone)
+ # create s3 notification
+ notification_name = bucket_name + NOTIFICATION_SUFFIX
+ topic_conf_list = [{'Id': notification_name,
+ 'TopicArn': topic_arn,
+ 'Events': ['s3:ObjectCreated:*', 's3:ObjectRemoved:*']
+ }]
+ s3_notification_conf = PSNotificationS3(ps_zones[0].conn, bucket_name, topic_conf_list)
+ _, status = s3_notification_conf.set_config()
+ assert_equal(status/100, 2)
+ # create objects in the bucket
+ number_of_objects = 10
+ for i in range(number_of_objects):
+ key = bucket.new_key(str(i))
+ key.set_contents_from_string('bar')
+ # wait for sync
+ zone_bucket_checkpoint(ps_zones[0].zone, zones[0].zone, bucket_name)
+ # check amqp receiver
+ keys = list(bucket.list())
+ # TODO: use exact match
+ receiver.verify_s3_events(keys, exact_match=False)
+
+ # delete objects from the bucket
+ for key in bucket.list():
+ key.delete()
+ # wait for sync
+ zone_meta_checkpoint(ps_zones[0].zone)
+ zone_bucket_checkpoint(ps_zones[0].zone, zones[0].zone, bucket_name)
+ # check amqp receiver
+ # TODO: use exact match
+ receiver.verify_s3_events(keys, deletions=True, exact_match=False)
+
+ # cleanup
+ stop_amqp_receiver(receiver, task)
+ s3_notification_conf.del_config()
+ topic_conf.del_config()
+ zones[0].delete_bucket(bucket_name)
+ clean_rabbitmq(proc)
-def test_ps_event_fetching():
- """ test incremental fetching of events from a subscription """
+def test_ps_delete_bucket():
+ """ test notification status upon bucket deletion """
zones, ps_zones = init_env()
bucket_name = gen_bucket_name()
- topic_name = bucket_name+TOPIC_SUFFIX
-
- # create topic
- topic_conf = PSTopic(ps_zones[0].conn, topic_name)
- topic_conf.set_config()
# create bucket on the first of the rados zones
bucket = zones[0].create_bucket(bucket_name)
# wait for sync
zone_meta_checkpoint(ps_zones[0].zone)
- # create notifications
+ # create topic
+ topic_name = bucket_name + TOPIC_SUFFIX
+ topic_conf = PSTopic(ps_zones[0].conn, topic_name)
+ response, status = topic_conf.set_config()
+ assert_equal(status/100, 2)
+ parsed_result = json.loads(response)
+ topic_arn = parsed_result['arn']
+ # create one s3 notification
+ notification_name = bucket_name + NOTIFICATION_SUFFIX
+ topic_conf_list = [{'Id': notification_name,
+ 'TopicArn': topic_arn,
+ 'Events': ['s3:ObjectCreated:*']
+ }]
+ s3_notification_conf = PSNotificationS3(ps_zones[0].conn, bucket_name, topic_conf_list)
+ response, status = s3_notification_conf.set_config()
+ assert_equal(status/100, 2)
+
+ # create non-s3 notification
notification_conf = PSNotification(ps_zones[0].conn, bucket_name,
topic_name)
_, status = notification_conf.set_config()
assert_equal(status/100, 2)
- # create subscription
- sub_conf = PSSubscription(ps_zones[0].conn, bucket_name+SUB_SUFFIX,
- topic_name)
- _, status = sub_conf.set_config()
- assert_equal(status/100, 2)
+
# create objects in the bucket
- number_of_objects = 100
+ number_of_objects = 10
for i in range(number_of_objects):
key = bucket.new_key(str(i))
key.set_contents_from_string('bar')
- # wait for sync
+ # wait for bucket sync
zone_bucket_checkpoint(ps_zones[0].zone, zones[0].zone, bucket_name)
- max_events = 15
- total_events_count = 0
- next_marker = None
- all_events = []
- while True:
- # get the events from the subscription
- result, _ = sub_conf.get_events(max_events, next_marker)
- parsed_result = json.loads(result)
- events = parsed_result['events']
- total_events_count += len(events)
- all_events.extend(events)
- next_marker = parsed_result['next_marker']
- for event in events:
- log.debug('Event: objname: "' + str(event['info']['key']['name']) + '" type: "' + str(event['event']) + '"')
- if next_marker == '':
- break
keys = list(bucket.list())
- # TODO: set exact_match to true
- verify_events_by_elements(all_events, keys, exact_match=False)
+ # delete objects from the bucket
+ for key in bucket.list():
+ key.delete()
+ # wait for bucket sync
+ zone_bucket_checkpoint(ps_zones[0].zone, zones[0].zone, bucket_name)
+ # delete the bucket
+ zones[0].delete_bucket(bucket_name)
+ # wait for meta sync
+ zone_meta_checkpoint(ps_zones[0].zone)
+
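+ # events generated before the bucket deletion should still be readable
+ # from the auto-generated subscription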
+ # get the events from the auto-generated subscription
+ sub_conf = PSSubscription(ps_zones[0].conn, notification_name,
+ topic_name)
+ result, _ = sub_conf.get_events()
+ parsed_result = json.loads(result)
+ # TODO: use exact match
+ verify_s3_records_by_elements(parsed_result['Records'], keys, exact_match=False)
+ # s3 notification is deleted with bucket
+ _, status = s3_notification_conf.get_config(notification=notification_name)
+ assert_equal(status, 404)
+ # non-s3 notification is deleted with bucket
+ _, status = notification_conf.get_config()
+ assert_equal(status, 404)
# cleanup
sub_conf.del_config()
- notification_conf.del_config()
topic_conf.del_config()
- for key in bucket.list():
- key.delete()
+
+
+def test_ps_missing_topic():
+ """ test creating a subscription when no topic info exists"""
+ zones, ps_zones = init_env()
+ bucket_name = gen_bucket_name()
+ topic_name = bucket_name+TOPIC_SUFFIX
+
+ # create bucket on the first of the rados zones
+ zones[0].create_bucket(bucket_name)
+ # wait for sync
+ zone_meta_checkpoint(ps_zones[0].zone)
+ # create s3 notification
+ notification_name = bucket_name + NOTIFICATION_SUFFIX
+ topic_arn = 'arn:aws:sns:::' + topic_name
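+ # the ARN points to a topic that was never created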
+ topic_conf_list = [{'Id': notification_name,
+ 'TopicArn': topic_arn,
+ 'Events': ['s3:ObjectCreated:*']
+ }]
+ s3_notification_conf = PSNotificationS3(ps_zones[0].conn, bucket_name, topic_conf_list)
+ try:
+ s3_notification_conf.set_config()
+ except:
+ log.info('missing topic is expected')
+ else:
+ assert False, 'missing topic is expected to fail'
+
+ # cleanup
zones[0].delete_bucket(bucket_name)
-def test_ps_event_acking():
- """ test acking of some events in a subscription """
+def test_ps_s3_topic_update():
+ """ test updating topic associated with a notification"""
+ if skip_push_tests:
+ return SkipTest("PubSub push tests don't run in teuthology")
+ rabbit_proc = init_rabbitmq()
+ if rabbit_proc is None:
+ return SkipTest('end2end amqp tests require rabbitmq-server installed')
zones, ps_zones = init_env()
bucket_name = gen_bucket_name()
topic_name = bucket_name+TOPIC_SUFFIX
- # create topic
- topic_conf = PSTopic(ps_zones[0].conn, topic_name)
- topic_conf.set_config()
+ # create amqp topic
+ hostname = get_ip()
+ exchange = 'ex1'
+ amqp_task, receiver = create_amqp_receiver_thread(exchange, topic_name)
+ amqp_task.start()
+ topic_conf = PSTopic(ps_zones[0].conn, topic_name,
+ endpoint='amqp://' + hostname,
+ endpoint_args='amqp-exchange=' + exchange + '&amqp-ack-level=none')
+ result, status = topic_conf.set_config()
+ assert_equal(status/100, 2)
+ parsed_result = json.loads(result)
+ topic_arn = parsed_result['arn']
+ # get topic
+ result, _ = topic_conf.get_config()
+ # verify topic content
+ parsed_result = json.loads(result)
+ assert_equal(parsed_result['topic']['name'], topic_name)
+ assert_equal(parsed_result['topic']['dest']['push_endpoint'], topic_conf.parameters['push-endpoint'])
+
+ # create http server
+ port = random.randint(10000, 20000)
+ # start an http server in a separate thread
+ http_server = StreamingHTTPServer(hostname, port)
+
# create bucket on the first of the rados zones
bucket = zones[0].create_bucket(bucket_name)
# wait for sync
zone_meta_checkpoint(ps_zones[0].zone)
- # create notifications
- notification_conf = PSNotification(ps_zones[0].conn, bucket_name,
- topic_name)
- _, status = notification_conf.set_config()
- assert_equal(status/100, 2)
- # create subscription
- sub_conf = PSSubscription(ps_zones[0].conn, bucket_name+SUB_SUFFIX,
- topic_name)
- _, status = sub_conf.set_config()
+ # create s3 notification
+ notification_name = bucket_name + NOTIFICATION_SUFFIX
+ topic_conf_list = [{'Id': notification_name,
+ 'TopicArn': topic_arn,
+ 'Events': ['s3:ObjectCreated:*']
+ }]
+ s3_notification_conf = PSNotificationS3(ps_zones[0].conn, bucket_name, topic_conf_list)
+ _, status = s3_notification_conf.set_config()
assert_equal(status/100, 2)
# create objects in the bucket
number_of_objects = 10
# wait for sync
zone_bucket_checkpoint(ps_zones[0].zone, zones[0].zone, bucket_name)
- # get the create events from the subscription
- result, _ = sub_conf.get_events()
- parsed_result = json.loads(result)
- events = parsed_result['events']
- original_number_of_events = len(events)
- for event in events:
- log.debug('Event (before ack) id: "' + str(event['id']) + '"')
keys = list(bucket.list())
- # TODO: set exact_match to true
- verify_events_by_elements(events, keys, exact_match=False)
- # ack half of the events
- events_to_ack = number_of_objects/2
- for event in events:
- if events_to_ack == 0:
- break
- _, status = sub_conf.ack_events(event['id'])
- assert_equal(status/100, 2)
- events_to_ack -= 1
+ # TODO: use exact match
+ receiver.verify_s3_events(keys, exact_match=False)
- # verify that acked events are gone
- result, _ = sub_conf.get_events()
+ # update the same topic with new endpoint
+ topic_conf = PSTopic(ps_zones[0].conn, topic_name,
+ endpoint='http://'+ hostname + ':' + str(port))
+ _, status = topic_conf.set_config()
+ assert_equal(status/100, 2)
+ # get topic
+ result, _ = topic_conf.get_config()
+ # verify topic content
parsed_result = json.loads(result)
- for event in parsed_result['events']:
- log.debug('Event (after ack) id: "' + str(event['id']) + '"')
- assert_equal(len(parsed_result['events']), original_number_of_events - number_of_objects/2)
+ assert_equal(parsed_result['topic']['name'], topic_name)
+ assert_equal(parsed_result['topic']['dest']['push_endpoint'], topic_conf.parameters['push-endpoint'])
+
+ # delete current objects and create new objects in the bucket
+ for key in bucket.list():
+ key.delete()
+ for i in range(number_of_objects):
+ key = bucket.new_key(str(i+100))
+ key.set_contents_from_string('bar')
+ # wait for sync
+ zone_meta_checkpoint(ps_zones[0].zone)
+ zone_bucket_checkpoint(ps_zones[0].zone, zones[0].zone, bucket_name)
+
+ keys = list(bucket.list())
+ # verify that notifications are still sent to amqp
+ # TODO: use exact match
+ receiver.verify_s3_events(keys, exact_match=False)
+
+ # re-create the notification so that it picks up the updated endpoint from the topic
+ topic_conf_list = [{'Id': notification_name,
+ 'TopicArn': topic_arn,
+ 'Events': ['s3:ObjectCreated:*']
+ }]
+ s3_notification_conf = PSNotificationS3(ps_zones[0].conn, bucket_name, topic_conf_list)
+ _, status = s3_notification_conf.set_config()
+ assert_equal(status/100, 2)
+
+ # delete current objects and create new objects in the bucket
+ for key in bucket.list():
+ key.delete()
+ for i in range(number_of_objects):
+ key = bucket.new_key(str(i+200))
+ key.set_contents_from_string('bar')
+ # wait for sync
+ zone_meta_checkpoint(ps_zones[0].zone)
+ zone_bucket_checkpoint(ps_zones[0].zone, zones[0].zone, bucket_name)
+
+ keys = list(bucket.list())
+ # check that updates switched to http
+ # TODO: use exact match
+ http_server.verify_s3_events(keys, exact_match=False)
# cleanup
- sub_conf.del_config()
- notification_conf.del_config()
- topic_conf.del_config()
+ # delete objects from the bucket
+ stop_amqp_receiver(receiver, amqp_task)
for key in bucket.list():
key.delete()
+ s3_notification_conf.del_config()
+ topic_conf.del_config()
zones[0].delete_bucket(bucket_name)
+ http_server.close()
+ clean_rabbitmq(rabbit_proc)
+
+
+def test_ps_s3_notification_update():
+ """ test updating the topic of a notification"""
+ if skip_push_tests:
+ return SkipTest("PubSub push tests don't run in teuthology")
+ hostname = get_ip()
+ rabbit_proc = init_rabbitmq()
+ if rabbit_proc is None:
+ return SkipTest('end2end amqp tests require rabbitmq-server installed')
-def test_ps_creation_triggers():
- """ test object creation notifications in using put/copy/post """
zones, ps_zones = init_env()
bucket_name = gen_bucket_name()
- topic_name = bucket_name+TOPIC_SUFFIX
+ topic_name1 = bucket_name+'amqp'+TOPIC_SUFFIX
+ topic_name2 = bucket_name+'http'+TOPIC_SUFFIX
+
+ # create topics
+ # start amqp receiver in a separate thread
+ exchange = 'ex1'
+ amqp_task, receiver = create_amqp_receiver_thread(exchange, topic_name1)
+ amqp_task.start()
+ # create random port for the http server
+ http_port = random.randint(10000, 20000)
+ # start an http server in a separate thread
+ http_server = StreamingHTTPServer(hostname, http_port)
+
+ topic_conf1 = PSTopic(ps_zones[0].conn, topic_name1,
+ endpoint='amqp://' + hostname,
+ endpoint_args='amqp-exchange=' + exchange + '&amqp-ack-level=none')
+ result, status = topic_conf1.set_config()
+ parsed_result = json.loads(result)
+ topic_arn1 = parsed_result['arn']
+ assert_equal(status/100, 2)
+ topic_conf2 = PSTopic(ps_zones[0].conn, topic_name2,
+ endpoint='http://'+hostname+':'+str(http_port))
+ result, status = topic_conf2.set_config()
+ parsed_result = json.loads(result)
+ topic_arn2 = parsed_result['arn']
+ assert_equal(status/100, 2)
- # create topic
- topic_conf = PSTopic(ps_zones[0].conn, topic_name)
- topic_conf.set_config()
# create bucket on the first of the rados zones
bucket = zones[0].create_bucket(bucket_name)
# wait for sync
zone_meta_checkpoint(ps_zones[0].zone)
- # create notifications
- notification_conf = PSNotification(ps_zones[0].conn, bucket_name,
- topic_name)
- _, status = notification_conf.set_config()
+ # create s3 notification with topic1
+ notification_name = bucket_name + NOTIFICATION_SUFFIX
+ topic_conf_list = [{'Id': notification_name,
+ 'TopicArn': topic_arn1,
+ 'Events': ['s3:ObjectCreated:*']
+ }]
+ s3_notification_conf = PSNotificationS3(ps_zones[0].conn, bucket_name, topic_conf_list)
+ _, status = s3_notification_conf.set_config()
assert_equal(status/100, 2)
- # create subscription
- sub_conf = PSSubscription(ps_zones[0].conn, bucket_name+SUB_SUFFIX,
- topic_name)
- _, status = sub_conf.set_config()
+ # create objects in the bucket
+ number_of_objects = 10
+ for i in range(number_of_objects):
+ key = bucket.new_key(str(i))
+ key.set_contents_from_string('bar')
+ # wait for sync
+ zone_bucket_checkpoint(ps_zones[0].zone, zones[0].zone, bucket_name)
+
+ keys = list(bucket.list())
+ # TODO: use exact match
+ receiver.verify_s3_events(keys, exact_match=False)
+
+ # update notification to use topic2
+ topic_conf_list = [{'Id': notification_name,
+ 'TopicArn': topic_arn2,
+ 'Events': ['s3:ObjectCreated:*']
+ }]
+ s3_notification_conf = PSNotificationS3(ps_zones[0].conn, bucket_name, topic_conf_list)
+ _, status = s3_notification_conf.set_config()
assert_equal(status/100, 2)
- # create objects in the bucket using PUT
- key = bucket.new_key('put')
- key.set_contents_from_string('bar')
- # create objects in the bucket using COPY
- bucket.copy_key('copy', bucket.name, key.name)
- # create objects in the bucket using multi-part upload
- fp = tempfile.TemporaryFile(mode='w')
- fp.write('bar')
- fp.close()
- uploader = bucket.initiate_multipart_upload('multipart')
- fp = tempfile.TemporaryFile(mode='r')
- uploader.upload_part_from_file(fp, 1)
- uploader.complete_upload()
- fp.close()
+
+ # delete current objects and create new objects in the bucket
+ for key in bucket.list():
+ key.delete()
+ for i in range(number_of_objects):
+ key = bucket.new_key(str(i+100))
+ key.set_contents_from_string('bar')
# wait for sync
+ zone_meta_checkpoint(ps_zones[0].zone)
zone_bucket_checkpoint(ps_zones[0].zone, zones[0].zone, bucket_name)
- # get the create events from the subscription
- result, _ = sub_conf.get_events()
- parsed_result = json.loads(result)
- for event in parsed_result['events']:
- log.debug('Event key: "' + str(event['info']['key']['name']) + '" type: "' + str(event['event']) + '"')
+ keys = list(bucket.list())
+ # check that updates switched to http
+ # TODO: use exact match
+ http_server.verify_s3_events(keys, exact_match=False)
- # TODO: verify the specific 3 keys: 'put', 'copy' and 'multipart'
- assert len(parsed_result['events']) >= 3
# cleanup
- sub_conf.del_config()
- notification_conf.del_config()
- topic_conf.del_config()
+ # delete objects from the bucket
+ stop_amqp_receiver(receiver, amqp_task)
for key in bucket.list():
key.delete()
+ s3_notification_conf.del_config()
+ topic_conf1.del_config()
+ topic_conf2.del_config()
zones[0].delete_bucket(bucket_name)
+ http_server.close()
+ clean_rabbitmq(rabbit_proc)
-def test_ps_versioned_deletion():
- """ test notification of deletion markers """
+def test_ps_s3_multiple_topics_notification():
+ """ test notification creation with multiple topics"""
+ if skip_push_tests:
+ return SkipTest("PubSub push tests don't run in teuthology")
+ hostname = get_ip()
+ rabbit_proc = init_rabbitmq()
+ if rabbit_proc is None:
+ return SkipTest('end2end amqp tests require rabbitmq-server installed')
+
zones, ps_zones = init_env()
bucket_name = gen_bucket_name()
- topic_name = bucket_name+TOPIC_SUFFIX
+ topic_name1 = bucket_name+'amqp'+TOPIC_SUFFIX
+ topic_name2 = bucket_name+'http'+TOPIC_SUFFIX
+
+ # create topics
+ # start amqp receiver in a separate thread
+ exchange = 'ex1'
+ amqp_task, receiver = create_amqp_receiver_thread(exchange, topic_name1)
+ amqp_task.start()
+ # create random port for the http server
+ http_port = random.randint(10000, 20000)
+ # start an http server in a separate thread
+ http_server = StreamingHTTPServer(hostname, http_port)
+
+ topic_conf1 = PSTopic(ps_zones[0].conn, topic_name1,
+ endpoint='amqp://' + hostname,
+ endpoint_args='amqp-exchange=' + exchange + '&amqp-ack-level=none')
+ result, status = topic_conf1.set_config()
+ parsed_result = json.loads(result)
+ topic_arn1 = parsed_result['arn']
+ assert_equal(status/100, 2)
+ topic_conf2 = PSTopic(ps_zones[0].conn, topic_name2,
+ endpoint='http://'+hostname+':'+str(http_port))
+ result, status = topic_conf2.set_config()
+ parsed_result = json.loads(result)
+ topic_arn2 = parsed_result['arn']
+ assert_equal(status/100, 2)
- # create topic
- topic_conf = PSTopic(ps_zones[0].conn, topic_name)
- topic_conf.set_config()
# create bucket on the first of the rados zones
bucket = zones[0].create_bucket(bucket_name)
- bucket.configure_versioning(True)
# wait for sync
zone_meta_checkpoint(ps_zones[0].zone)
- # create notifications
- notification_conf = PSNotification(ps_zones[0].conn, bucket_name,
- topic_name, "OBJECT_DELETE")
- _, status = notification_conf.set_config()
+ # create s3 notification
+ notification_name1 = bucket_name + NOTIFICATION_SUFFIX + '_1'
+ notification_name2 = bucket_name + NOTIFICATION_SUFFIX + '_2'
+ topic_conf_list = [
+ {
+ 'Id': notification_name1,
+ 'TopicArn': topic_arn1,
+ 'Events': ['s3:ObjectCreated:*']
+ },
+ {
+ 'Id': notification_name2,
+ 'TopicArn': topic_arn2,
+ 'Events': ['s3:ObjectCreated:*']
+ }]
+ s3_notification_conf = PSNotificationS3(ps_zones[0].conn, bucket_name, topic_conf_list)
+ _, status = s3_notification_conf.set_config()
assert_equal(status/100, 2)
- # create subscription
- sub_conf = PSSubscription(ps_zones[0].conn, bucket_name+SUB_SUFFIX,
- topic_name)
- _, status = sub_conf.set_config()
+ result, _ = s3_notification_conf.get_config()
+ assert_equal(len(result['TopicConfigurations']), 2)
+ assert_equal(result['TopicConfigurations'][0]['Id'], notification_name1)
+ assert_equal(result['TopicConfigurations'][1]['Id'], notification_name2)
+
+ # get auto-generated subscriptions
+ sub_conf1 = PSSubscription(ps_zones[0].conn, notification_name1,
+ topic_name1)
+ _, status = sub_conf1.get_config()
assert_equal(status/100, 2)
+ sub_conf2 = PSSubscription(ps_zones[0].conn, notification_name2,
+ topic_name2)
+ _, status = sub_conf2.get_config()
+ assert_equal(status/100, 2)
+
# create objects in the bucket
- key = bucket.new_key('foo')
- key.set_contents_from_string('bar')
- v1 = key.version_id
- key.set_contents_from_string('kaboom')
- v2 = key.version_id
- # wait for sync
- zone_bucket_checkpoint(ps_zones[0].zone, zones[0].zone, bucket_name)
- # set delete markers
- bucket.delete_key(key.name, version_id=v2)
- bucket.delete_key(key.name, version_id=v1)
+ number_of_objects = 10
+ for i in range(number_of_objects):
+ key = bucket.new_key(str(i))
+ key.set_contents_from_string('bar')
# wait for sync
zone_bucket_checkpoint(ps_zones[0].zone, zones[0].zone, bucket_name)
- # get the create events from the subscription
- result, _ = sub_conf.get_events()
+ # get the events from both of the subscriptions
+ result, _ = sub_conf1.get_events()
parsed_result = json.loads(result)
- for event in parsed_result['events']:
- log.debug('Event key: "' + str(event['info']['key']['name']) + '" type: "' + str(event['event']) + '"')
+ for record in parsed_result['Records']:
+ log.debug(record)
+ keys = list(bucket.list())
+ # TODO: use exact match
+ verify_s3_records_by_elements(parsed_result['Records'], keys, exact_match=False)
+ receiver.verify_s3_events(keys, exact_match=False)
+
+ result, _ = sub_conf2.get_events()
+ parsed_result = json.loads(result)
+ for record in parsed_result['Records']:
+ log.debug(record)
+ keys = list(bucket.list())
+ # TODO: use exact match
+ verify_s3_records_by_elements(parsed_result['Records'], keys, exact_match=False)
+ http_server.verify_s3_events(keys, exact_match=False)
- # TODO: verify the specific events
- assert len(parsed_result['events']) >= 2
-
# cleanup
- sub_conf.del_config()
- notification_conf.del_config()
- topic_conf.del_config()
+ stop_amqp_receiver(receiver, amqp_task)
+ s3_notification_conf.del_config()
+ topic_conf1.del_config()
+ topic_conf2.del_config()
+ # delete objects from the bucket
+ for key in bucket.list():
+ key.delete()
zones[0].delete_bucket(bucket_name)
+ http_server.close()
+ clean_rabbitmq(rabbit_proc)
import logging
import httplib
import urllib
+import urlparse
import hmac
import hashlib
import base64
+import xmltodict
from time import gmtime, strftime
-from multisite import Zone
+from .multisite import Zone
+import boto3
+from botocore.client import Config
log = logging.getLogger('rgw_multi.tests')
class PSZone(Zone): # pylint: disable=too-many-ancestors
""" PubSub zone class """
+ def __init__(self, name, zonegroup=None, cluster=None, data=None, zone_id=None, gateways=None, full_sync='false', retention_days ='7'):
+ self.full_sync = full_sync
+ self.retention_days = retention_days
+ self.master_zone = zonegroup.master_zone
+ super(PSZone, self).__init__(name, zonegroup, cluster, data, zone_id, gateways)
+
def is_read_only(self):
return True
def create(self, cluster, args=None, **kwargs):
if args is None:
args = ''
- args += ['--tier-type', self.tier_type()]
+ tier_config = ','.join(['start_with_full_sync=' + self.full_sync, 'event_retention_days=' + self.retention_days])
+ args += ['--tier-type', self.tier_type(), '--sync-from-all=0', '--sync-from', self.master_zone.name, '--tier-config', tier_config]
return self.json_command(cluster, 'create', args)
def has_buckets(self):
NO_HTTP_BODY = ''
-def make_request(conn, method, resource, parameters=None):
+def print_connection_info(conn):
+ """print connection details"""
+ print('Endpoint: ' + conn.host + ':' + str(conn.port))
+ print('AWS Access Key: ' + conn.aws_access_key_id)
+ print('AWS Secret Key: ' + conn.aws_secret_access_key)
+
+
+def make_request(conn, method, resource, parameters=None, sign_parameters=False, extra_parameters=None):
"""generic request sending to pubsub radogw
should cover: topics, notificatios and subscriptions
"""
# remove 'None' from keys with no values
url_params = url_params.replace('=None', '')
url_params = '?' + url_params
+ if extra_parameters is not None:
+ url_params = url_params + '&' + extra_parameters
string_date = strftime("%a, %d %b %Y %H:%M:%S +0000", gmtime())
string_to_sign = method + '\n\n\n' + string_date + '\n' + resource
+ if sign_parameters:
+ string_to_sign += url_params
signature = base64.b64encode(hmac.new(conn.aws_secret_access_key,
string_to_sign.encode('utf-8'),
hashlib.sha1).digest())
return data, status
+def print_connection_info(conn):
+ """print info of connection"""
+ print("Host: " + conn.host+':'+str(conn.port))
+ print("AWS Secret Key: " + conn.aws_secret_access_key)
+ print("AWS Access Key: " + conn.aws_access_key_id)
+
+
class PSTopic:
"""class to set/get/delete a topic
- PUT /topics/<topic name>
+ PUT /topics/<topic name>[?push-endpoint=<endpoint>&[<arg1>=<value1>...]]
GET /topics/<topic name>
DELETE /topics/<topic name>
"""
- def __init__(self, conn, topic_name):
+ def __init__(self, conn, topic_name, endpoint=None, endpoint_args=None):
self.conn = conn
assert topic_name.strip()
self.resource = '/topics/'+topic_name
+ if endpoint is not None:
+ self.parameters = {'push-endpoint': endpoint}
+ self.extra_parameters = endpoint_args
+ else:
+ self.parameters = None
+ self.extra_parameters = None
- def send_request(self, method):
+ def send_request(self, method, get_list=False, parameters=None, extra_parameters=None):
"""send request to radosgw"""
- return make_request(self.conn, method, self.resource)
+ if get_list:
+ return make_request(self.conn, method, '/topics')
+ return make_request(self.conn, method, self.resource,
+ parameters=parameters, extra_parameters=extra_parameters)
def get_config(self):
"""get topic info"""
def set_config(self):
"""set topic"""
- return self.send_request('PUT')
+ return self.send_request('PUT', parameters=self.parameters, extra_parameters=self.extra_parameters)
def del_config(self):
"""delete topic"""
return self.send_request('DELETE')
+
+ def get_list(self):
+ """list all topics"""
+ return self.send_request('GET', get_list=True)
+
+
+def delete_all_s3_topics(conn, region):
+ try:
+ client = boto3.client('sns',
+ endpoint_url='http://'+conn.host+':'+str(conn.port),
+ aws_access_key_id=conn.aws_access_key_id,
+ aws_secret_access_key=conn.aws_secret_access_key,
+ region_name=region,
+ config=Config(signature_version='s3'))
+
+ topics = client.list_topics()['Topics']
+ for topic in topics:
+ print 'topic cleanup, deleting: ' + topic['TopicArn']
+ assert client.delete_topic(TopicArn=topic['TopicArn'])['ResponseMetadata']['HTTPStatusCode'] == 200
+ except:
+ print 'failed to do topic cleanup. if there are topics they may need to be manually deleted'
+
+
+class PSTopicS3:
+ """class to set/list/get/delete a topic
+ POST ?Action=CreateTopic&Name=<topic name>&push-endpoint=<endpoint>&[<arg1>=<value1>...]]
+ POST ?Action=ListTopics
+ POST ?Action=GetTopic&TopicArn=<topic-arn>
+ POST ?Action=DeleteTopic&TopicArn=<topic-arn>
+ """
+ def __init__(self, conn, topic_name, region, endpoint_args=None):
+ self.conn = conn
+ self.topic_name = topic_name.strip()
+ assert self.topic_name
+ self.topic_arn = ''
+ self.attributes = {}
+ if endpoint_args is not None:
+ self.attributes = {nvp[0] : nvp[1] for nvp in urlparse.parse_qsl(endpoint_args, keep_blank_values=True)}
+ self.client = boto3.client('sns',
+ endpoint_url='http://'+conn.host+':'+str(conn.port),
+ aws_access_key_id=conn.aws_access_key_id,
+ aws_secret_access_key=conn.aws_secret_access_key,
+ region_name=region,
+ config=Config(signature_version='s3'))
+
+
+ def get_config(self):
+ """get topic info"""
+ parameters = {'Action': 'GetTopic', 'TopicArn': self.topic_arn}
+ body = urllib.urlencode(parameters)
+ string_date = strftime("%a, %d %b %Y %H:%M:%S +0000", gmtime())
+ content_type = 'application/x-www-form-urlencoded; charset=utf-8'
+ resource = '/'
+ method = 'POST'
+ string_to_sign = method + '\n\n' + content_type + '\n' + string_date + '\n' + resource
+ log.debug('StringToSign: %s', string_to_sign)
+ signature = base64.b64encode(hmac.new(self.conn.aws_secret_access_key,
+ string_to_sign.encode('utf-8'),
+ hashlib.sha1).digest())
+ headers = {'Authorization': 'AWS '+self.conn.aws_access_key_id+':'+signature,
+ 'Date': string_date,
+ 'Host': self.conn.host+':'+str(self.conn.port),
+ 'Content-Type': content_type}
+ http_conn = httplib.HTTPConnection(self.conn.host, self.conn.port)
+ if log.getEffectiveLevel() <= 10:
+ http_conn.set_debuglevel(5)
+ http_conn.request(method, resource, body, headers)
+ response = http_conn.getresponse()
+ data = response.read()
+ status = response.status
+ http_conn.close()
+ dict_response = xmltodict.parse(data)
+ return dict_response, status
+
+ def set_config(self):
+ """set topic"""
+ result = self.client.create_topic(Name=self.topic_name, Attributes=self.attributes)
+ self.topic_arn = result['TopicArn']
+ return self.topic_arn
+
+ def del_config(self):
+ """delete topic"""
+ result = self.client.delete_topic(TopicArn=self.topic_arn)
+ return result['ResponseMetadata']['HTTPStatusCode']
+
+ def get_list(self):
+ """list all topics"""
+ return self.client.list_topics()
class PSNotification:
return self.send_request('GET')
def set_config(self):
- """setnotification"""
+ """set notification"""
return self.send_request('PUT', self.parameters)
def del_config(self):
return self.send_request('DELETE', self.parameters)
+class PSNotificationS3:
+ """class to set/get/delete an S3 notification
+ PUT /<bucket>?notification
+ GET /<bucket>?notification[=<notification>]
+ DELETE /<bucket>?notification[=<notification>]
+ """
+ def __init__(self, conn, bucket_name, topic_conf_list):
+ self.conn = conn
+ assert bucket_name.strip()
+ self.bucket_name = bucket_name
+ self.resource = '/'+bucket_name
+ self.topic_conf_list = topic_conf_list
+ self.client = boto3.client('s3',
+ endpoint_url='http://'+conn.host+':'+str(conn.port),
+ aws_access_key_id=conn.aws_access_key_id,
+ aws_secret_access_key=conn.aws_secret_access_key,
+ config=Config(signature_version='s3'))
+
+ def send_request(self, method, parameters=None):
+ """send request to radosgw"""
+ return make_request(self.conn, method, self.resource,
+ parameters=parameters, sign_parameters=True)
+
+ def get_config(self, notification=None):
+ """get notification info"""
+ parameters = None
+ if notification is None:
+ response = self.client.get_bucket_notification_configuration(Bucket=self.bucket_name)
+ status = response['ResponseMetadata']['HTTPStatusCode']
+ return response, status
+ parameters = {'notification': notification}
+ response, status = self.send_request('GET', parameters=parameters)
+ dict_response = xmltodict.parse(response)
+ return dict_response, status
+
+ def set_config(self):
+ """set notification"""
+ response = self.client.put_bucket_notification_configuration(Bucket=self.bucket_name,
+ NotificationConfiguration={
+ 'TopicConfigurations': self.topic_conf_list
+ })
+ status = response['ResponseMetadata']['HTTPStatusCode']
+ return response, status
+
+ def del_config(self, notification=None):
+ """delete notification"""
+ parameters = {'notification': notification}
+
+ return self.send_request('DELETE', parameters)
+
+
class PSSubscription:
"""class to set/get/delete a subscription:
- PUT /subscriptions/<sub-name>?topic=<topic-name>
+ PUT /subscriptions/<sub-name>?topic=<topic-name>[&push-endpoint=<endpoint>&[<arg1>=<value1>...]]
GET /subscriptions/<sub-name>
DELETE /subscriptions/<sub-name>
also to get list of events, and ack them:
GET /subscriptions/<sub-name>?events[&max-entries=<max-entries>][&marker=<marker>]
POST /subscriptions/<sub-name>?ack&event-id=<event-id>
"""
- def __init__(self, conn, sub_name, topic_name):
+ def __init__(self, conn, sub_name, topic_name, endpoint=None, endpoint_args=None):
self.conn = conn
assert topic_name.strip()
self.resource = '/subscriptions/'+sub_name
- self.parameters = {'topic': topic_name}
+ if endpoint is not None:
+ self.parameters = {'topic': topic_name, 'push-endpoint': endpoint}
+ self.extra_parameters = endpoint_args
+ else:
+ self.parameters = {'topic': topic_name}
+ self.extra_parameters = None
- def send_request(self, method, parameters=None):
+ def send_request(self, method, parameters=None, extra_parameters=None):
"""send request to radosgw"""
- return make_request(self.conn, method, self.resource, parameters)
+ return make_request(self.conn, method, self.resource,
+ parameters=parameters,
+ extra_parameters=extra_parameters)
def get_config(self):
"""get subscription info"""
def set_config(self):
"""set subscription"""
- return self.send_request('PUT', self.parameters)
+ return self.send_request('PUT', parameters=self.parameters, extra_parameters=self.extra_parameters)
- def del_config(self):
+ def del_config(self, topic=False):
"""delete subscription"""
+ if topic:
+ return self.send_request('DELETE', self.parameters)
return self.send_request('DELETE')
def get_events(self, max_entries=None, marker=None):
""" ack events in a subscription """
parameters = {'ack': None, 'event-id': event_id}
return self.send_request('POST', parameters)
+
+
+class PSZoneConfig:
+ """ pubsub zone configuration """
+ def __init__(self, cfg, section):
+ self.full_sync = cfg.get(section, 'start_with_full_sync')
+ self.retention_days = cfg.get(section, 'retention_days')
```
$ nosetests test_multi.py:<specific_test_name>
```
+To run multiple tests based on a wildcard string, use the following format:
+```
+$ nosetests test_multi.py -m "<wildcard string>"
+```
Note that the test to run does not have to be inside the `test_multi.py` file.
Note that different options for running specific and multiple tests exist in the [nose documentation](https://nose.readthedocs.io/en/latest/usage.html#options), as well as other options to control the execution of the tests.
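+For example, one of the pubsub tests added here can be invoked directly (a hypothetical invocation; it assumes the configured cluster includes at least one pubsub zone):
+```
+$ nosetests test_multi.py:test_ps_s3_topic_update
+```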
## Configuration
+### Environment Variables
+The following RGW environment variables are taken into account when running the tests:
+ - `RGW_FRONTEND`: used to change the frontend to 'civetweb' or 'beast' (default)
+ - `RGW_VALGRIND`: used to run the radosgw under valgrind, e.g. RGW_VALGRIND=yes
+Other environment variables, used to configure elements other than RGW, can also be used as they are in vstart.sh, e.g. MON, OSD, MGR, MDS
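+For instance, both variables can be combined with a regular test run (a sketch only; the test name is a placeholder):
+```
+$ RGW_FRONTEND=civetweb RGW_VALGRIND=yes nosetests test_multi.py:<specific_test_name>
+```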
The configuration file for the run has the following sections:
### Default
This section holds the following parameters:
- `num_zonegroups`: number of zone groups (integer, default 1)
- `num_zones`: number of regular zones in each group (integer, default 3)
- `num_ps_zones`: number of pubsub zones in each group (integer, default 0)
+ - `num_az_zones`: number of archive zones (integer, default 0, max value 1)
- `gateways_per_zone`: number of RADOS gateways per zone (integer, default 2)
- `no_bootstrap`: whether to assume that the cluster is already up and does not need to be set up again. If set to "false", the test will try to re-run the cluster, so `mstop.sh` must be called beforehand. It should be set to "false" anytime the configuration is changed; otherwise, assuming the cluster is already up, it should be set to "true" to save on execution time (boolean, default false)
- `log_level`: console log level used in the tests; note that any program invoked from the test may emit logs regardless of that setting (integer, default 20)
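+As an illustration only, a minimal default section could look like the sketch below; the `[DEFAULT]` section name and the value of `num_ps_zones` are assumptions, the rest are the defaults listed above:
+```
+[DEFAULT]
+num_zonegroups = 1
+num_zones = 3
+num_ps_zones = 1
+gateways_per_zone = 2
+no_bootstrap = false
+log_level = 20
+```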
*TODO*
### Cloud
*TODO*
+### PubSub
+*TODO*
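+Although this section is not documented yet, the keys read by `PSZoneConfig` suggest that a pubsub zone section (any configuration section whose name starts with `pubsub`) could look like the sketch below; the section name and values are illustrative only:
+```
+[pubsub]
+start_with_full_sync = false
+retention_days = 7
+```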
## Writing Tests
New tests should be added into the `/path/to/ceph/src/test/rgw/rgw_multi` subdirectory.
- Base classes are in: `/path/to/ceph/src/test/rgw/rgw_multi/multisite.py`
from rgw_multi.zone_cloud import CloudZone as CloudZone
from rgw_multi.zone_cloud import CloudZoneConfig as CloudZoneConfig
from rgw_multi.zone_ps import PSZone as PSZone
+from rgw_multi.zone_ps import PSZoneConfig as PSZoneConfig
# make tests from rgw_multi.tests available to nose
from rgw_multi.tests import *
env = os.environ.copy()
env['CEPH_NUM_MDS'] = '0'
cmd += ['-n']
+ # cmd += ['-o']
+ # cmd += ['rgw_cache_enabled=false']
bash(cmd, env=env)
self.needs_reset = False
def start(self, args = None):
""" start the gateway """
assert(self.cluster)
+ env = os.environ.copy()
+ # to change frontend, set RGW_FRONTEND env variable
+ # e.g. RGW_FRONTEND=civetweb
+ # to run test under valgrind memcheck, set RGW_VALGRIND to 'yes'
+ # e.g. RGW_VALGRIND=yes
cmd = [mstart_path + 'mrgw.sh', self.cluster.cluster_id, str(self.port)]
if self.id:
cmd += ['-i', self.id]
cmd += ['--debug-rgw=20', '--debug-ms=1']
if args:
cmd += args
- bash(cmd)
+ bash(cmd, env=env)
def stop(self):
""" stop the gateway """
'num_zonegroups': 1,
'num_zones': 3,
'num_ps_zones': 0,
+ 'num_az_zones': 0,
'gateways_per_zone': 2,
'no_bootstrap': 'false',
'log_level': 20,
parser.add_argument('--reconfigure-delay', type=int, default=cfg.getint(section, 'reconfigure_delay'))
parser.add_argument('--num-ps-zones', type=int, default=cfg.getint(section, 'num_ps_zones'))
+
es_cfg = []
cloud_cfg = []
+ ps_cfg = []
+ az_cfg = []
for s in cfg.sections():
if s.startswith('elasticsearch'):
es_cfg.append(ESZoneConfig(cfg, s))
elif s.startswith('cloud'):
cloud_cfg.append(CloudZoneConfig(cfg, s))
+ elif s.startswith('pubsub'):
+ ps_cfg.append(PSZoneConfig(cfg, s))
argv = []
admin_user = multisite.User('zone.user')
user_creds = gen_credentials()
- user = multisite.User('tester')
+ user = multisite.User('tester', tenant=args.tenant)
realm = multisite.Realm('r')
if bootstrap:
num_es_zones = len(es_cfg)
num_cloud_zones = len(cloud_cfg)
+ num_ps_zones_from_conf = len(ps_cfg)
+ num_ps_zones = args.num_ps_zones if num_ps_zones_from_conf == 0 else num_ps_zones_from_conf
+ print 'num_ps_zones = ' + str(num_ps_zones)
num_zones = args.num_zones + num_es_zones + num_cloud_zones + args.num_ps_zones
ccfg.target_path, zonegroup, cluster)
elif ps_zone:
zone_index = z - args.num_zones - num_es_zones - num_cloud_zones
- zone = PSZone(zone_name(zg, z), zonegroup, cluster)
+ if num_ps_zones_from_conf == 0:
+ zone = PSZone(zone_name(zg, z), zonegroup, cluster)
+ else:
+ pscfg = ps_cfg[zone_index]
+ zone = PSZone(zone_name(zg, z), zonegroup, cluster,
+ full_sync=pscfg.full_sync, retention_days=pscfg.retention_days)
else:
zone = RadosZone(zone_name(zg, z), zonegroup, cluster)
# create test user
arg = ['--display-name', '"Test User"']
arg += user_creds.credential_args()
- if args.tenant:
- cmd += ['--tenant', args.tenant]
user.create(zone, arg)
else:
# read users and update keys
admin_user.info(zone)
admin_creds = admin_user.credentials[0]
- user.info(zone)
+ arg = []
+ user.info(zone, arg)
user_creds = user.credentials[0]
if not bootstrap:
config = Config(checkpoint_retries=args.checkpoint_retries,
checkpoint_delay=args.checkpoint_delay,
- reconfigure_delay=args.reconfigure_delay)
+ reconfigure_delay=args.reconfigure_delay,
+ tenant=args.tenant)
init_multi(realm, user, config)
def setup_module():
// vim: ts=8 sw=2 smarttab
#include "rgw/rgw_amqp.h"
+#include "common/ceph_context.h"
#include "amqp_mock.h"
#include <gtest/gtest.h>
#include <chrono>
const std::chrono::milliseconds wait_time(300);
-TEST(AMQP_Connection, ConnectionOK)
+class CctCleaner {
+ CephContext* cct;
+public:
+ CctCleaner(CephContext* _cct) : cct(_cct) {}
+ ~CctCleaner() {
+#ifdef WITH_SEASTAR
+ delete cct;
+#else
+ cct->put();
+#endif
+ }
+};
+
+auto cct = new CephContext(CEPH_ENTITY_TYPE_CLIENT);
+
+CctCleaner cleaner(cct);
+
+class TestAMQP : public ::testing::Test {
+protected:
+ void SetUp() override {
+ ASSERT_TRUE(amqp::init(cct));
+ }
+
+ void TearDown() override {
+ amqp::shutdown();
+ }
+};
+
+TEST_F(TestAMQP, ConnectionOK)
{
const auto connection_number = amqp::get_connection_count();
amqp::connection_ptr_t conn = amqp::connect("amqp://localhost", "ex1");
EXPECT_EQ(rc, 0);
}
-TEST(AMQP_Connection, ConnectionReuse)
+TEST_F(TestAMQP, ConnectionReuse)
{
+ amqp::connection_ptr_t conn1 = amqp::connect("amqp://localhost", "ex1");
+ EXPECT_TRUE(conn1);
const auto connection_number = amqp::get_connection_count();
- amqp::connection_ptr_t conn = amqp::connect("amqp://localhost", "ex1");
- EXPECT_TRUE(conn);
+ amqp::connection_ptr_t conn2 = amqp::connect("amqp://localhost", "ex1");
+ EXPECT_TRUE(conn2);
EXPECT_EQ(amqp::get_connection_count(), connection_number);
- auto rc = amqp::publish(conn, "topic", "message");
+ auto rc = amqp::publish(conn1, "topic", "message");
EXPECT_EQ(rc, 0);
}
-TEST(AMQP_Connection, NameResolutionFail)
+TEST_F(TestAMQP, NameResolutionFail)
{
const auto connection_number = amqp::get_connection_count();
amqp::connection_ptr_t conn = amqp::connect("amqp://kaboom", "ex1");
EXPECT_LT(rc, 0);
}
-TEST(AMQP_Connection, InvalidPort)
+TEST_F(TestAMQP, InvalidPort)
{
const auto connection_number = amqp::get_connection_count();
amqp::connection_ptr_t conn = amqp::connect("amqp://localhost:1234", "ex1");
EXPECT_LT(rc, 0);
}
-TEST(AMQP_Connection, InvalidHost)
+TEST_F(TestAMQP, InvalidHost)
{
const auto connection_number = amqp::get_connection_count();
amqp::connection_ptr_t conn = amqp::connect("amqp://0.0.0.1", "ex1");
EXPECT_LT(rc, 0);
}
-TEST(AMQP_Connection, InvalidVhost)
+TEST_F(TestAMQP, InvalidVhost)
{
const auto connection_number = amqp::get_connection_count();
amqp::connection_ptr_t conn = amqp::connect("amqp://localhost/kaboom", "ex1");
EXPECT_LT(rc, 0);
}
-TEST(AMQP_Connection, UserPassword)
+TEST_F(TestAMQP, UserPassword)
{
amqp_mock::set_valid_host("127.0.0.1");
{
amqp_mock::set_valid_host("localhost");
}
-TEST(AMQP_Connection, URLParseError)
+TEST_F(TestAMQP, URLParseError)
{
const auto connection_number = amqp::get_connection_count();
amqp::connection_ptr_t conn = amqp::connect("http://localhost", "ex1");
EXPECT_LT(rc, 0);
}
-TEST(AMQP_Connection, ExchangeMismatch)
+TEST_F(TestAMQP, ExchangeMismatch)
{
const auto connection_number = amqp::get_connection_count();
amqp::connection_ptr_t conn = amqp::connect("http://localhost", "ex2");
EXPECT_LT(rc, 0);
}
-TEST(AMQP_Connection, MaxConnections)
+TEST_F(TestAMQP, MaxConnections)
{
// fill up all connections
std::vector<amqp::connection_ptr_t> connections;
std::atomic<bool> callback_invoked = false;
+std::atomic<int> callbacks_invoked = 0;
+
// note: because these callbacks are shared among different "publish" calls
// they should be used on different connections
callback_invoked = true;
}
-TEST(AMQP_PublishAndWait, ReceiveAck)
+void my_callback_expect_multiple_acks(int rc) {
+ EXPECT_EQ(0, rc);
+ ++callbacks_invoked;
+}
+
+class dynamic_callback_wrapper {
+ dynamic_callback_wrapper() = default;
+public:
+ static dynamic_callback_wrapper* create() {
+ return new dynamic_callback_wrapper;
+ }
+ void callback(int rc) {
+ EXPECT_EQ(0, rc);
+ ++callbacks_invoked;
+ delete this;
+ }
+};
+
+
+TEST_F(TestAMQP, ReceiveAck)
{
callback_invoked = false;
const std::string host("localhost1");
amqp_mock::set_valid_host("localhost");
}
-TEST(AMQP_PublishAndWait, ReceiveNack)
+TEST_F(TestAMQP, ReceiveMultipleAck)
+{
+ callbacks_invoked = 0;
+ const std::string host("localhost1");
+ amqp_mock::set_valid_host(host);
+ amqp::connection_ptr_t conn = amqp::connect("amqp://" + host, "ex1");
+ EXPECT_TRUE(conn);
+ const auto NUMBER_OF_CALLS = 100;
+ for (auto i=0; i < NUMBER_OF_CALLS; ++i) {
+ auto rc = publish_with_confirm(conn, "topic", "message", my_callback_expect_multiple_acks);
+ EXPECT_EQ(rc, 0);
+ }
+ std::this_thread::sleep_for(wait_time);
+ EXPECT_EQ(callbacks_invoked, NUMBER_OF_CALLS);
+ callbacks_invoked = 0;
+ amqp_mock::set_valid_host("localhost");
+}
+
+TEST_F(TestAMQP, ReceiveAckForMultiple)
+{
+ callbacks_invoked = 0;
+ const std::string host("localhost1");
+ amqp_mock::set_valid_host(host);
+ amqp::connection_ptr_t conn = amqp::connect("amqp://" + host, "ex1");
+ EXPECT_TRUE(conn);
+ amqp_mock::set_multiple(59);
+ const auto NUMBER_OF_CALLS = 100;
+ for (auto i=0; i < NUMBER_OF_CALLS; ++i) {
+ auto rc = publish_with_confirm(conn, "topic", "message", my_callback_expect_multiple_acks);
+ EXPECT_EQ(rc, 0);
+ }
+ std::this_thread::sleep_for(wait_time);
+ EXPECT_EQ(callbacks_invoked, NUMBER_OF_CALLS);
+ callbacks_invoked = 0;
+ amqp_mock::set_valid_host("localhost");
+}
+
+TEST_F(TestAMQP, DynamicCallback)
+{
+ callbacks_invoked = 0;
+ const std::string host("localhost1");
+ amqp_mock::set_valid_host(host);
+ amqp::connection_ptr_t conn = amqp::connect("amqp://" + host, "ex1");
+ EXPECT_TRUE(conn);
+ amqp_mock::set_multiple(59);
+ const auto NUMBER_OF_CALLS = 100;
+ for (auto i=0; i < NUMBER_OF_CALLS; ++i) {
+ auto rc = publish_with_confirm(conn, "topic", "message",
+ std::bind(&dynamic_callback_wrapper::callback, dynamic_callback_wrapper::create(), std::placeholders::_1));
+ EXPECT_EQ(rc, 0);
+ }
+ std::this_thread::sleep_for(wait_time);
+ EXPECT_EQ(callbacks_invoked, NUMBER_OF_CALLS);
+ callbacks_invoked = 0;
+ amqp_mock::set_valid_host("localhost");
+}
+
+TEST_F(TestAMQP, ReceiveNack)
{
callback_invoked = false;
amqp_mock::REPLY_ACK = false;
amqp_mock::set_valid_host("localhost");
}
-TEST(AMQP_PublishAndWait, FailWrite)
+TEST_F(TestAMQP, FailWrite)
{
callback_invoked = false;
amqp_mock::FAIL_NEXT_WRITE = true;
amqp_mock::set_valid_host("localhost");
}
-TEST(AMQP_PublishAndWait, ClosedConnection)
+TEST_F(TestAMQP, ClosedConnection)
{
callback_invoked = false;
const auto current_connections = amqp::get_connection_count();
amqp_mock::set_valid_host("localhost");
}
-TEST(AMQP_ConnectionRetry, InvalidHost)
+TEST_F(TestAMQP, RetryInvalidHost)
{
const std::string host = "192.168.0.1";
const auto connection_number = amqp::get_connection_count();
amqp_mock::set_valid_host("localhost");
}
-TEST(AMQP_ConnectionRetry, InvalidPort)
+TEST_F(TestAMQP, RetryInvalidPort)
{
const int port = 9999;
const auto connection_number = amqp::get_connection_count();
amqp_mock::set_valid_port(5672);
}
-TEST(AMQP_ConnectionRetry, FailWrite)
+TEST_F(TestAMQP, RetryFailWrite)
{
callback_invoked = false;
amqp_mock::FAIL_NEXT_WRITE = true;
amqp_mock::set_valid_host("localhost");
}
+int fail_after = -1;
+int recover_after = -1;
+bool expect_zero_rc = true;
+
+void my_callback_triggering_failure(int rc) {
+ if (expect_zero_rc) {
+ EXPECT_EQ(rc, 0);
+ } else {
+ EXPECT_NE(rc, 0);
+ }
+ ++callbacks_invoked;
+ if (fail_after == callbacks_invoked) {
+ amqp_mock::FAIL_NEXT_READ = true;
+ expect_zero_rc = false;
+
+ }
+ if (recover_after == callbacks_invoked) {
+ amqp_mock::FAIL_NEXT_READ = false;
+ }
+}
+
+TEST_F(TestAMQP, AcksWithReconnect)
+{
+ callbacks_invoked = 0;
+ const std::string host("localhost1");
+ amqp_mock::set_valid_host(host);
+ amqp::connection_ptr_t conn = amqp::connect("amqp://" + host, "ex1");
+ EXPECT_TRUE(conn);
+ amqp_mock::set_multiple(59);
+ // failure will take effect after: max(59, 70)
+ fail_after = 70;
+ // all callbacks are flushed during the failure, so recovery will take effect after: max(90, 100)
+ recover_after = 90;
+ const auto NUMBER_OF_CALLS = 100;
+ for (auto i = 0; i < NUMBER_OF_CALLS; ++i) {
+ auto rc = publish_with_confirm(conn, "topic", "message", my_callback_triggering_failure);
+ EXPECT_EQ(rc, 0);
+ }
+ // connection fails before multiple acks
+ std::this_thread::sleep_for(wait_time);
+ EXPECT_EQ(callbacks_invoked, NUMBER_OF_CALLS);
+ // publish more messages
+ expect_zero_rc = true;
+ for (auto i = 0; i < NUMBER_OF_CALLS; ++i) {
+ auto rc = publish_with_confirm(conn, "topic", "message", my_callback_triggering_failure);
+ EXPECT_EQ(rc, 0);
+ }
+ std::this_thread::sleep_for(wait_time);
+ EXPECT_EQ(callbacks_invoked, 2*NUMBER_OF_CALLS);
+ callbacks_invoked = 0;
+ amqp_mock::set_valid_host("localhost");
+ fail_after = -1;
+}
+
--- /dev/null
+// -*- mode:C++; tab-width:8; c-basic-offset:2; indent-tabs-mode:t -*-
+// vim: ts=8 sw=2 smarttab
+
+#include "rgw/rgw_arn.h"
+#include <gtest/gtest.h>
+
+using namespace rgw;
+
+const int BASIC_ENTRIES = 6;
+
+const std::string basic_str[BASIC_ENTRIES] = {"arn:aws:s3:us-east-1:12345:resource",
+ "arn:aws:s3:us-east-1:12345:resourceType/resource",
+ "arn:aws:s3:us-east-1:12345:resourceType/resource/qualifier",
+ "arn:aws:s3:us-east-1:12345:resourceType/resource:qualifier",
+ "arn:aws:s3:us-east-1:12345:resourceType:resource",
+ "arn:aws:s3:us-east-1:12345:resourceType:resource/qualifier"};
+
+const std::string expected_basic_resource[BASIC_ENTRIES] = {"resource",
+ "resourceType/resource",
+ "resourceType/resource/qualifier",
+ "resourceType/resource:qualifier",
+ "resourceType:resource",
+ "resourceType:resource/qualifier"};
+TEST(TestARN, Basic)
+{
+ for (auto i = 0; i < BASIC_ENTRIES; ++i) {
+ boost::optional<ARN> arn = ARN::parse(basic_str[i]);
+ ASSERT_TRUE(arn);
+ EXPECT_EQ(arn->partition, Partition::aws);
+ EXPECT_EQ(arn->service, Service::s3);
+ EXPECT_STREQ(arn->region.c_str(), "us-east-1");
+ EXPECT_STREQ(arn->account.c_str(), "12345");
+ EXPECT_STREQ(arn->resource.c_str(), expected_basic_resource[i].c_str());
+ }
+}
+
+TEST(TestARN, ToString)
+{
+ for (auto i = 0; i < BASIC_ENTRIES; ++i) {
+ boost::optional<ARN> arn = ARN::parse(basic_str[i]);
+ ASSERT_TRUE(arn);
+ EXPECT_STREQ(to_string(*arn).c_str(), basic_str[i].c_str());
+ }
+}
+
+const std::string expected_basic_resource_type[BASIC_ENTRIES] =
+ {"", "resourceType", "resourceType", "resourceType", "resourceType", "resourceType"};
+const std::string expected_basic_qualifier[BASIC_ENTRIES] =
+ {"", "", "qualifier", "qualifier", "", "qualifier"};
+
+TEST(TestARNResource, Basic)
+{
+ for (auto i = 0; i < BASIC_ENTRIES; ++i) {
+ boost::optional<ARN> arn = ARN::parse(basic_str[i]);
+ ASSERT_TRUE(arn);
+ ASSERT_FALSE(arn->resource.empty());
+ boost::optional<ARNResource> resource = ARNResource::parse(arn->resource);
+ ASSERT_TRUE(resource);
+ EXPECT_STREQ(resource->resource.c_str(), "resource");
+ EXPECT_STREQ(resource->resource_type.c_str(), expected_basic_resource_type[i].c_str());
+ EXPECT_STREQ(resource->qualifier.c_str(), expected_basic_qualifier[i].c_str());
+ }
+}
+
+const int EMPTY_ENTRIES = 4;
+
+const std::string empty_str[EMPTY_ENTRIES] = {"arn:aws:s3:::resource",
+ "arn:aws:s3::12345:resource",
+ "arn:aws:s3:us-east-1::resource",
+ "arn:aws:s3:us-east-1:12345:"};
+
+TEST(TestARN, Empty)
+{
+ for (auto i = 0; i < EMPTY_ENTRIES; ++i) {
+ boost::optional<ARN> arn = ARN::parse(empty_str[i]);
+ ASSERT_TRUE(arn);
+ EXPECT_EQ(arn->partition, Partition::aws);
+ EXPECT_EQ(arn->service, Service::s3);
+ EXPECT_TRUE(arn->region.empty() || arn->region == "us-east-1");
+ EXPECT_TRUE(arn->account.empty() || arn->account == "12345");
+ EXPECT_TRUE(arn->resource.empty() || arn->resource == "resource");
+ }
+}
+
+const int WILDCARD_ENTRIES = 3;
+
+const std::string wildcard_str[WILDCARD_ENTRIES] = {"arn:aws:s3:*:*:resource",
+ "arn:aws:s3:*:12345:resource",
+ "arn:aws:s3:us-east-1:*:resource"};
+
+// FIXME: currently the following: "arn:aws:s3:us-east-1:12345:*"
+// does not fail, even if "wildcard" is not set to "true"
+
+TEST(TestARN, Wildcard)
+{
+ for (auto i = 0; i < WILDCARD_ENTRIES; ++i) {
+ EXPECT_FALSE(ARN::parse(wildcard_str[i]));
+ boost::optional<ARN> arn = ARN::parse(wildcard_str[i], true);
+ ASSERT_TRUE(arn);
+ EXPECT_EQ(arn->partition, Partition::aws);
+ EXPECT_EQ(arn->service, Service::s3);
+ EXPECT_TRUE(arn->region == "*" || arn->region == "us-east-1");
+ EXPECT_TRUE(arn->account == "*" || arn->account == "12345");
+ EXPECT_TRUE(arn->resource == "*" || arn->resource == "resource");
+ }
+}
+
using rgw::auth::Identity;
using rgw::auth::Principal;
-using rgw::IAM::ARN;
+using rgw::ARN;
using rgw::IAM::Effect;
using rgw::IAM::Environment;
-using rgw::IAM::Partition;
+using rgw::Partition;
using rgw::IAM::Policy;
using rgw::IAM::s3All;
using rgw::IAM::s3Count;
using rgw::IAM::None;
using rgw::IAM::s3PutBucketAcl;
using rgw::IAM::s3PutBucketPolicy;
-using rgw::IAM::Service;
+using rgw::Service;
using rgw::IAM::TokenID;
using rgw::IAM::Version;
using rgw::IAM::Action_t;
expected_output_with_attributes);
}
+static const char* expected_xml_output = "<Items xmlns=\"https://www.ceph.com/doc/\">"
+ "<Item Order=\"0\"><NameAndStatus><Name>hello</Name><Status>True</Status></NameAndStatus><Value>0</Value></Item>"
+ "<Item Order=\"1\"><NameAndStatus><Name>hello</Name><Status>False</Status></NameAndStatus><Value>1</Value></Item>"
+ "<Item Order=\"2\"><NameAndStatus><Name>hello</Name><Status>True</Status></NameAndStatus><Value>2</Value></Item>"
+ "<Item Order=\"3\"><NameAndStatus><Name>hello</Name><Status>False</Status></NameAndStatus><Value>3</Value></Item>"
+ "<Item Order=\"4\"><NameAndStatus><Name>hello</Name><Status>True</Status></NameAndStatus><Value>4</Value></Item>"
+ "</Items>";
+TEST(TestEncoder, ListWithAttrsAndNS)
+{
+ XMLFormatter f;
+ const auto array_size = 5;
+ f.open_array_section_in_ns("Items", "https://www.ceph.com/doc/");
+ for (auto i = 0; i < array_size; ++i) {
+ FormatterAttrs item_attrs("Order", std::to_string(i).c_str(), NULL);
+ f.open_object_section_with_attrs("Item", item_attrs);
+ f.open_object_section("NameAndStatus");
+ encode_xml("Name", "hello", &f);
+ encode_xml("Status", (i%2 == 0), &f);
+ f.close_section();
+ encode_xml("Value", i, &f);
+ f.close_section();
+ }
+ f.close_section();
+ std::stringstream ss;
+ f.flush(ss);
+ ASSERT_STREQ(ss.str().c_str(), expected_xml_output);
+}
+