--- /dev/null
+v16.1.0 Pacific
+===============
+
+This is a release candidate of the Pacific series.
+
+
+Notable Changes
+---------------
+
+* New ``bluestore_rocksdb_options_annex`` config parameter. Complements
+ ``bluestore_rocksdb_options`` and allows setting rocksdb options without
+ repeating the existing defaults.
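+
+  For example, an additional RocksDB option can be appended without restating
+  the defaults (a minimal sketch; the option string shown is illustrative)::
+
+    ceph config set osd bluestore_rocksdb_options_annex "max_background_jobs=4"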
+
+* CephFS adds two new CDentry tags, 'I' --> 'i' and 'L' --> 'l', so the
+  on-RADOS metadata is no longer backwards compatible after an upgrade to
+  Pacific or a later release.
+
+* ``$pid`` expansion in config paths like ``admin_socket`` will now properly
+  expand to the daemon pid for daemons like ``ceph-mds`` or ``ceph-osd``.
+  Previously only ``ceph-fuse``/``rbd-nbd`` expanded ``$pid`` with the actual
+  daemon pid.
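+
+  For example, a per-daemon socket path can be set in ``ceph.conf`` (a
+  minimal sketch; the path shown is illustrative)::
+
+    [osd]
+    admin_socket = /var/run/ceph/$cluster-$name.$pid.asok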
+
+* The allowable options for some ``radosgw-admin`` commands have been changed.
+
+  * ``mdlog-list``, ``datalog-list``, and ``sync-error-list`` no longer accept
+    start and end dates, but do accept a single optional start marker.
+  * ``mdlog-trim``, ``datalog-trim``, and ``sync-error-trim`` accept only a
+    single marker giving the end of the trimmed range.
+  * Similarly, the date ranges and marker ranges have been removed from the
+    RESTful DATALog and MDLog list and trim operations. A usage sketch
+    follows this list.
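+
+  For example (a sketch only; it assumes ``--marker`` is the flag used to
+  pass the marker)::
+
+    radosgw-admin mdlog list --marker <start-marker>
+    radosgw-admin mdlog trim --marker <end-marker>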
+
+* ceph-volume: The ``lvm batch`` subcommand received a major rewrite. This
+  closes a number of bugs and improves usability in terms of size specification
+  and calculation, as well as idempotency behaviour and the disk replacement
+  process.
+  Please refer to https://docs.ceph.com/en/latest/ceph-volume/lvm/batch/ for
+  more detailed information.
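+
+  For example, the planned layout can be previewed before any LVs are created
+  (a sketch; the device names are illustrative)::
+
+    ceph-volume lvm batch --report /dev/sdb /dev/sdc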
+
+* Configuration variables for permitted scrub times have changed. The legal
+ values for ``osd_scrub_begin_hour`` and ``osd_scrub_end_hour`` are 0 - 23.
+ The use of 24 is now illegal. Specifying ``0`` for both values causes every
+ hour to be allowed. The legal values for ``osd_scrub_begin_week_day`` and
+ ``osd_scrub_end_week_day`` are 0 - 6. The use of 7 is now illegal.
+ Specifying ``0`` for both values causes every day of the week to be allowed.
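+
+  For example, to restrict scrubbing to the hours between 01:00 and 07:00
+  (a minimal sketch; the values are illustrative)::
+
+    ceph config set osd osd_scrub_begin_hour 1
+    ceph config set osd osd_scrub_end_hour 7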
+
+* Support for multiple file systems in a single Ceph cluster is now stable.
+  New Ceph clusters enable support for multiple file systems by default.
+  Existing clusters must still set the "enable_multiple" flag on the fs.
+  Please see the CephFS documentation for more information.
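+
+  For example, on an existing cluster (a sketch; the file system and pool
+  names are illustrative and the pools must already exist)::
+
+    ceph fs flag set enable_multiple true
+    ceph fs new cephfs2 cephfs2_metadata cephfs2_data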
+
+* volume/nfs: The "ganesha-" prefix has been removed from the cluster id and
+  the nfs-ganesha common config object to ensure a consistent namespace across
+  different orchestrator backends. Please delete any existing nfs-ganesha
+  clusters prior to upgrading and redeploy new clusters after upgrading to
+  Pacific.
+
+* A new health check, DAEMON_OLD_VERSION, will warn if different versions of
+  Ceph are running on daemons. It will generate a health error if multiple
+  versions are detected. This condition must persist for longer than
+  ``mon_warn_older_version_delay`` (set to one week by default) for the health
+  condition to be triggered. This allows most upgrades to proceed without
+  falsely raising the warning. If an upgrade is paused for an extended period,
+  the health check can be muted with
+  ``ceph health mute DAEMON_OLD_VERSION --sticky``. After the upgrade has
+  finished, use ``ceph health unmute DAEMON_OLD_VERSION``.
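+
+  For example (``ceph versions`` reports which release each running daemon is
+  on; the mute commands are as shown above)::
+
+    ceph versions
+    ceph health mute DAEMON_OLD_VERSION --sticky
+    # ... after the upgrade has finished ...
+    ceph health unmute DAEMON_OLD_VERSION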
+
+* MGR: The progress module can now be turned on/off using the commands
+  ``ceph progress on`` and ``ceph progress off``.
+
+* An AWS-compliant API, "GetTopicAttributes", was added to replace the existing
+  "GetTopic" API. The new API should be used to fetch information about topics
+  used for bucket notifications.
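+
+  For example, using the AWS CLI against the RGW endpoint (a sketch; the
+  endpoint URL and topic ARN shown are illustrative)::
+
+    aws --endpoint-url=http://rgw.example.com:8000 sns get-topic-attributes \
+        --topic-arn arn:aws:sns:default::mytopic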
+
+* librbd: The shared, read-only parent cache's config option
+  ``immutable_object_cache_watermark`` has been updated to properly reflect
+  the upper cache utilization before space is reclaimed. The default
+  ``immutable_object_cache_watermark`` is now ``0.9``. If the capacity reaches
+  90%, the daemon will evict cold cache entries.
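+
+  For example, to begin reclaiming space at 85% utilization instead (a
+  sketch; applying the option in the global section is an assumption and the
+  value is illustrative)::
+
+    ceph config set global immutable_object_cache_watermark 0.85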
+
+* Scrubs are more aggressive in trying to schedule as many simultaneous PG
+  scrubs as possible within the ``osd_max_scrubs`` limit. It is possible that
+  increasing ``osd_scrub_sleep`` may be necessary to maintain client
+  responsiveness.
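+
+  For example (a sketch; the sleep value is illustrative and should be tuned
+  per cluster)::
+
+    ceph config set osd osd_scrub_sleep 0.1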
+
+* OSD: the option ``osd_fast_shutdown_notify_mon`` has been introduced to allow
+  the OSD to notify the monitor that it is shutting down even if
+  ``osd_fast_shutdown`` is enabled. This helps with the monitor logs on larger
+  clusters, which may otherwise get many 'osd.X reported immediately failed by
+  osd.Y' messages and confuse tools.
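+
+  For example (a minimal sketch)::
+
+    ceph config set osd osd_fast_shutdown_notify_mon true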
+
+