-14.2.4
-------
-
-* In the Zabbix Mgr Module there was a typo in the key being sent
- to Zabbix for PGs in backfill_wait state. The key that was sent
- was 'wait_backfill' and the correct name is 'backfill_wait'.
- Update your Zabbix template accordingly so that it accepts the
- new key being sent to Zabbix.
-
-14.2.3
---------
-
-* Nautilus-based librbd clients can now open images on Jewel clusters.
-
-* The RGW "num_rados_handles" option has been removed.
- If you were using a value of "num_rados_handles" greater than 1,
- multiply your current "objecter_inflight_ops" and
- "objecter_inflight_op_bytes" parameters by the old
- "num_rados_handles" to get the same throttle behavior.
-
-* The ``bluestore_no_per_pool_stats_tolerance`` config option has been
- replaced with ``bluestore_fsck_error_on_no_per_pool_stats``
- (default: false). The overall default behavior has not changed:
- fsck will warn but not fail on legacy stores, and repair will
- convert to per-pool stats.
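-
- For example, to make fsck treat missing per-pool stats as an error instead
- of a warning, the new option could be enabled with::
-
-   ceph config set osd bluestore_fsck_error_on_no_per_pool_stats true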
-
-14.2.2
-------
-
-* The no{up,down,in,out} related commands have been revamped.
- There are now two ways to set the no{up,down,in,out} flags:
- the old 'ceph osd [un]set <flag>' command, which sets cluster-wide flags;
- and the new 'ceph osd [un]set-group <flags> <who>' command,
- which sets flags in batch at the granularity of any crush node
- or device class.
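-
- For example (hypothetical crush node and device class names)::
-
-   ceph osd set-group noup,noout osd.0 osd.1   # two specific OSDs
-   ceph osd set-group noout rack1              # all OSDs under the crush node 'rack1'
-   ceph osd unset-group noout hdd              # all OSDs of device class 'hdd'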
-
-* RGW: radosgw-admin introduces two subcommands that allow managing
- expire-stale objects that might be left behind after a
- bucket reshard in earlier versions of RGW. One subcommand lists such
- objects and the other deletes them. Read the troubleshooting section
- of the dynamic resharding docs for details.
-
-14.2.5
-------
-
-* The telemetry module now has a 'device' channel, enabled by default, that
- will report anonymized hard disk and SSD health metrics to telemetry.ceph.com
- in order to build and improve device failure prediction algorithms. Because
- the content of telemetry reports has changed, you will need to re-opt-in
- with::
-
- ceph telemetry on
-
- You can view exactly what information will be reported first with::
-
- ceph telemetry show
- ceph telemetry show device # specifically show the device channel
-
- If you are not comfortable sharing device metrics, you can disable that
- channel first before re-opting-in::
-
- ceph config set mgr mgr/telemetry/channel_device false
- ceph telemetry on
-
-* The telemetry module now reports more information about CephFS file systems,
- including:
-
- - how many MDS daemons (in total and per file system)
- - which features are (or have been) enabled
- - how many data pools
- - approximate file system age (year + month of creation)
- - how many files, bytes, and snapshots
- - how much metadata is being cached
-
- We have also added:
-
- - which Ceph release the monitors are running
- - whether msgr v1 or v2 addresses are used for the monitors
- - whether IPv4 or IPv6 addresses are used for the monitors
- - whether RADOS cache tiering is enabled (and which mode)
- - whether pools are replicated or erasure coded, and
- which erasure code profile plugin and parameters are in use
- - how many hosts are in the cluster, and how many hosts have each type of daemon
- - whether a separate OSD cluster network is being used
- - how many RBD pools and images are in the cluster, and how many pools have RBD mirroring enabled
- - how many RGW daemons, zones, and zonegroups are present; which RGW frontends are in use
- - aggregate stats about the CRUSH map, like which algorithms are used, how big buckets are, how many rules are defined, and what tunables are in use
-
- If you had telemetry enabled, you will need to re-opt-in with::
-
- ceph telemetry on
-
- You can view exactly what information will be reported first with::
-
- ceph telemetry show # see everything
- ceph telemetry show basic # basic cluster info (including all of the new info)
-
-* A health warning is now generated if the average osd heartbeat ping
- time exceeds a configurable threshold for any of the intervals
- computed. The OSD computes 1 minute, 5 minute and 15 minute
- intervals with average, minimum and maximum values. New configuration
- option ``mon_warn_on_slow_ping_ratio`` specifies a percentage of
- ``osd_heartbeat_grace`` to determine the threshold. A value of zero
- disables the warning. New configuration option
- ``mon_warn_on_slow_ping_time``, specified in milliseconds, overrides the
- computed value and causes a warning
- when OSD heartbeat pings take longer than the specified amount.
- New admin command ``ceph daemon mgr.# dump_osd_network [threshold]`` will
- list all connections with a ping time longer than the specified threshold or
- the value determined by the config options, for the average of any of the 3 intervals.
- New admin command ``ceph daemon osd.# dump_osd_network [threshold]`` will
- do the same, but only include heartbeats initiated by the specified OSD.
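-
- For example (hypothetical daemon names; the threshold is in milliseconds)::
-
-   ceph daemon mgr.a dump_osd_network        # use the threshold derived from the config options
-   ceph daemon osd.0 dump_osd_network 100    # only list pings slower than 100 ms seen by osd.0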
-
-* A new OSD daemon command, ``dump_recovery_reservations``, reveals the
- recovery locks held (in_progress) and waiting in priority queues.
-
-* A new OSD daemon command, ``dump_scrub_reservations``, reveals the
- scrub reservations that are held for local (primary) and remote (replica) PGs.
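-
- For example (hypothetical OSD id)::
-
-   ceph daemon osd.0 dump_recovery_reservations
-   ceph daemon osd.0 dump_scrub_reservations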
-
-14.2.6
+14.2.7
------
* The following OSD memory config options related to bluestore cache autotuning can now
ceph config set global <option> <value>
-14.2.7
-------
-
* The MGR now accepts 'profile rbd' and 'profile rbd-read-only' user caps.
These caps can be used to provide users access to MGR-based RBD functionality
such as 'rbd perf image iostat' and 'rbd perf image iotop'.
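
For example, a monitoring user that can run these commands could be created
with (hypothetical client name and pool)::

  ceph auth get-or-create client.rbd-monitor mon 'profile rbd' mgr 'profile rbd-read-only' osd 'profile rbd-read-only pool=rbd'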
would not allow a pool to ever have completely balanced PGs. For example, if
crush requires 1 replica on each of 3 racks but there are fewer OSDs in 1 of
the racks, perfect balance is not achievable. In those cases, the configuration value can be increased.
+
+* RGW: a mismatch between the bucket notification documentation and the actual
+ message format was fixed. This means that any endpoint receiving bucket
+ notifications will now receive them inside a JSON array
+ named 'Records'. Note that this does not affect pulling bucket notifications
+ from a subscription in a 'pubsub' zone, as those are already wrapped inside
+ that array.
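+
+ A minimal sketch of the wrapped payload (most fields elided) would be::
+
+   {"Records": [{"eventVersion": "2.2", "eventSource": "ceph:s3", ...}]}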
"eTag":"",
"versionId":"",
"sequencer": "",
- "metadata":""
+ "metadata":[]
}
},
"eventId":"",
- s3.object.version: object version in case of versioned bucket
- s3.object.sequencer: monotonically increasing identifier of the change per object (hexadecimal format)
- s3.object.metadata: any metadata set on the object sent as: ``x-amz-meta-`` (an extension to the S3 notification API)
-- s3.eventId: not supported (an extension to the S3 notification API)
+- s3.eventId: unique ID of the event, which can be used for acking (an extension to the S3 notification API)
.. _PubSub Module : ../pubsub-module
.. _S3 Notification Compatibility: ../s3-notification-compatibility
"eTag":"",
"versionId":"",
"sequencer":"",
- "metadata":""
+ "metadata":[]
}
},
"eventId":"",
- requestParameters: not supported
- responseElements: not supported
- s3.configurationId: notification ID that created the subscription for the event
-- s3.eventId: unique ID of the event, that could be used for acking (an extension to the S3 notification API)
- s3.bucket.name: name of the bucket
- s3.bucket.ownerIdentity.principalId: owner of the bucket
- s3.bucket.arn: ARN of the bucket
const std::string& etag,
EventType event_type,
rgw_pubsub_s3_record& record) {
- record.eventVersion = "2.1";
- record.eventSource = "aws:s3";
record.eventTime = mtime;
record.eventName = to_string(event_type);
record.userIdentity = s->user->user_id.id; // user that triggered the change
- record.sourceIPAddress = ""; // IP address of client that triggered the change: TODO
record.x_amz_request_id = s->req_id; // request ID of the original change
record.x_amz_id_2 = s->host_id; // RGW on which the change was made
- record.s3SchemaVersion = "1.0";
// configurationId is filled from subscription configuration
record.bucket_name = s->bucket_name;
record.bucket_ownerIdentity = s->bucket_owner.get_id().id;
const utime_t ts(real_clock::now());
boost::algorithm::hex((const char*)&ts, (const char*)&ts + sizeof(utime_t),
std::back_inserter(record.object_sequencer));
- // event ID is rgw extension (not in the S3 spec), used for acking the event
- // same format is used in both S3 compliant and Ceph specific events
- // not used in case of push-only mode
- record.id = "";
+ set_event_id(record.id, etag, ts);
record.bucket_id = s->bucket.bucket_id;
// pass meta data
record.x_meta_map = s->info.x_meta_map;
#define dout_subsys ceph_subsys_rgw
+// generate a unique event ID of the form "<seconds>.<microseconds>.<hash>",
+// where the hash is typically the object's etag
+void set_event_id(std::string& id, const std::string& hash, const utime_t& ts) {
+ char buf[64];
+ const auto len = snprintf(buf, sizeof(buf), "%010ld.%06ld.%s", (long)ts.sec(), (long)ts.usec(), hash.c_str());
+ if (len > 0) {
+ id.assign(buf, len);
+ }
+}
+
bool rgw_s3_key_filter::decode_xml(XMLObj* obj) {
XMLObjIter iter = obj->find("FilterRule");
XMLObj *o;
<Name></Name>
<Value></Value>
</FilterRule>
- </s3Metadata>
+ </S3Metadata>
</Filter>
<Id>notification1</Id>
<Topic>arn:aws:sns:<region>:<account>:<topic></Topic>
struct rgw_pubsub_s3_record {
constexpr static const char* const json_type_plural = "Records";
- // 2.2
- std::string eventVersion;
+ std::string eventVersion = "2.2";
// ceph:s3
- std::string eventSource;
+ std::string eventSource = "ceph:s3";
// zonegroup
std::string awsRegion;
// time of the request
ceph::real_time eventTime;
// type of the event
std::string eventName;
- // user that sent the requet (not implemented)
+ // user that sent the request
std::string userIdentity;
// IP address of source of the request (not implemented)
std::string sourceIPAddress;
std::string x_amz_request_id;
// radosgw that received the request
std::string x_amz_id_2;
- // 1.0
- std::string s3SchemaVersion;
+ std::string s3SchemaVersion = "1.0";
// ID received in the notification request
std::string configurationId;
// bucket name
std::string bucket_name;
- // bucket owner (not implemented)
+ // bucket owner
std::string bucket_ownerIdentity;
// bucket ARN
std::string bucket_arn;
// object key
std::string object_key;
- // object size (not implemented)
- uint64_t object_size;
+ // object size
+ uint64_t object_size = 0;
// object etag
std::string object_etag;
// object version id if the bucket is versioned
std::string object_sequencer;
// this is an rgw extension (not S3 standard)
// used to store a globally unique identifier of the event
- // that could be used for acking
+ // that could be used for acking or any other identification of the event
std::string id;
// this is an rgw extension holding the internal bucket id
std::string bucket_id;
};
WRITE_CLASS_ENCODER(rgw_pubsub_event)
+// setting a unique ID for an event/record based on the object hash and timestamp
+void set_event_id(std::string& id, const std::string& hash, const utime_t& ts);
+
struct rgw_pubsub_sub_dest {
std::string bucket_name;
std::string oid_prefix;
}
};
-static void set_event_id(std::string& id, const std::string& hash, const utime_t& ts) {
- char buf[64];
- const auto len = snprintf(buf, sizeof(buf), "%010ld.%06ld.%s", (long)ts.sec(), (long)ts.usec(), hash.c_str());
- if (len > 0) {
- id.assign(buf, len);
- }
-}
-
static void make_event_ref(CephContext *cct, const rgw_bucket& bucket,
const rgw_obj_key& key,
const ceph::real_time& mtime,
*record = std::make_shared<rgw_pubsub_s3_record>();
EventRef<rgw_pubsub_s3_record>& r = *record;
- r->eventVersion = "2.1";
- r->eventSource = "aws:s3";
r->eventTime = mtime;
r->eventName = rgw::notify::to_string(event_type);
- r->userIdentity = ""; // user that triggered the change: not supported in sync module
- r->sourceIPAddress = ""; // IP address of client that triggered the change: not supported in sync module
- r->x_amz_request_id = ""; // request ID of the original change: not supported in sync module
- r->x_amz_id_2 = ""; // RGW on which the change was made: not supported in sync module
- r->s3SchemaVersion = "1.0";
+ // userIdentity: not supported in sync module
+ // x_amz_request_id: not supported in sync module
+ // x_amz_id_2: not supported in sync module
// configurationId is filled from subscription configuration
r->bucket_name = bucket.name;
r->bucket_ownerIdentity = owner.to_str();
r->bucket_arn = to_string(rgw::ARN(bucket));
r->bucket_id = bucket.bucket_id; // rgw extension
r->object_key = key.name;
- r->object_size = 0; // not supported in sync module
+ // object_size not supported in sync module
objstore_event oevent(bucket, key, mtime, attrs);
r->object_etag = oevent.get_hash();
r->object_versionId = key.instance;
boost::algorithm::hex((const char*)&ts, (const char*)&ts + sizeof(utime_t),
std::back_inserter(r->object_sequencer));
- // event ID is rgw extension (not in the S3 spec), used for acking the event
- // same format is used in both S3 compliant and Ceph specific events
set_event_id(r->id, r->object_etag, ts);
}