* The 'ceph osd rm' command has been deprecated. Users should use
  'ceph osd destroy' or 'ceph osd purge' (but only after first confirming it is
  safe to do so via the 'ceph osd safe-to-destroy' command).

* The MDS now supports dropping its cache for the purposes of benchmarking::

    ceph tell mds.* cache drop <timeout>

  Note that the MDS cache is cooperatively managed by the clients. It is
  necessary for clients to give up capabilities in order for the MDS to fully
  drop its cache. This is accomplished by asking all clients to trim as many
  caps as possible. The timeout argument to the `cache drop` command controls
  how long the MDS waits for clients to complete trimming caps. This is
  optional and is 0 by default (no timeout). Keep in mind that clients may
  still retain caps for open files, which will prevent the metadata for those
  files from being dropped by both the client and the MDS. (This is an
  equivalent scenario to dropping the Linux page/buffer/inode/dentry caches
  with some processes pinning some inodes/dentries/pages in cache.)

* The mon_health_preluminous_compat and mon_health_preluminous_compat_warning
  config options are removed, as the related functionality is more
  than two versions old. Any legacy monitoring system expecting Jewel-style
  health output will need to be updated to work with Nautilus.

* Nautilus is not supported on any distros still running upstart, so
  upstart-specific files and references have been removed.

* The 'ceph pg <pgid> list_missing' command has been renamed to
  'ceph pg <pgid> list_unfound' to better match its behaviour.

* The 'rbd-mirror' daemon can now retrieve remote peer cluster configuration
  secrets from the monitor. To use this feature, the 'rbd-mirror' daemon
  CephX user for the local cluster must use the 'profile rbd-mirror' mon cap.
  The secrets can be set using the 'rbd mirror pool peer add' and
  'rbd mirror pool peer set' actions.

* The `ceph mds deactivate` command is fully obsolete and references to it in
  the docs have been removed or clarified.

* The libcephfs bindings added the ceph_select_filesystem function
  for use with multiple filesystems.

* The cephfs python bindings now include mount_root and filesystem_name
  options in the mount() function.

* erasure-code: add experimental *Coupled LAYer (CLAY)* erasure codes
  support. It features less network traffic and disk I/O when performing
  recovery.

* The 'cache drop' OSD command has been added to drop an OSD's caches:

  - ``ceph tell osd.x cache drop``

* The 'cache status' OSD command has been added to get the cache stats of an
  OSD:

  - ``ceph tell osd.x cache status``

* The libcephfs library added several functions that allow a restarted client
  to destroy or reclaim state held by a previous incarnation. These functions
  are for NFS servers.

* The `ceph` command line tool now accepts keyword arguments in
  the format "--arg=value" or "--arg value".

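As an illustration of the two accepted forms, the sketch below implements a minimal keyword-argument parser in the same spirit. It is not the actual `ceph` tool's parser, and the `parse_keyword_args` helper is hypothetical.

```python
# Illustration only: a minimal parser that accepts both the
# "--arg=value" and "--arg value" keyword-argument forms.
def parse_keyword_args(argv):
    """Collect --arg=value / --arg value pairs into a dict."""
    parsed = {}
    i = 0
    while i < len(argv):
        token = argv[i]
        if token.startswith("--"):
            name = token[2:]
            if "=" in name:
                # "--arg=value" form: split on the first '='
                name, _, value = name.partition("=")
                i += 1
            else:
                # "--arg value" form: consume the next token as the value
                value = argv[i + 1]
                i += 2
            parsed[name] = value
        else:
            # positional tokens (subcommands etc.) are skipped here
            i += 1
    return parsed
```

Both invocations below resolve to the same mapping, which is the point of accepting either spelling.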
* The Telegraf module for the Manager allows for sending statistics to
  a Telegraf agent over TCP, UDP or a UNIX socket. Telegraf can then
  send the statistics to databases like InfluxDB, ElasticSearch, Graphite
  and many others.

* The graylog fields naming the originator of a log event have
  changed: the string-form name is now included (e.g., ``"name":
  "mgr.foo"``), and the rank-form name is now in a nested section
  (e.g., ``"rank": {"type": "mgr", "num": 43243}``).

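A minimal sketch of consuming the new event shape; the sample payload below is invented for illustration and only mirrors the fields described above.

```python
import json

# Hypothetical sample of the new graylog event shape; the field
# values are invented for illustration.
sample = json.loads("""
{
  "name": "mgr.foo",
  "rank": {"type": "mgr", "num": 43243}
}
""")

# The string-form name and the nested rank-form name can now be
# read separately from the same event.
string_name = sample["name"]
rank_name = "{type}.{num}".format(**sample["rank"])
```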
* If the cluster log is directed at syslog, the entries are now
  prefixed by both the string-form name and the rank-form name (e.g.,
  ``mgr.x mgr.12345 ...`` instead of just ``mgr.12345 ...``).

* The JSON output of the ``osd find`` command has replaced the ``ip``
  field with an ``addrs`` section to reflect that OSDs may bind to
  multiple addresses.

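A sketch of reading the new ``addrs`` section follows; the sample JSON is an assumption modeled on the description above, not verbatim ``osd find`` output.

```python
import json

# Hypothetical `ceph osd find` output illustrating the new "addrs"
# section; the exact field names are assumptions for this sketch.
sample = json.loads("""
{
  "osd": 0,
  "addrs": {
    "addrvec": [
      {"type": "v2", "addr": "10.0.0.1:6802", "nonce": 1234},
      {"type": "v1", "addr": "10.0.0.1:6803", "nonce": 1234}
    ]
  }
}
""")

# An OSD may now report several addresses instead of a single "ip" field,
# so consumers should iterate rather than read one scalar.
addresses = [entry["addr"] for entry in sample["addrs"]["addrvec"]]
```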
* CephFS clients without the 's' flag in their authentication capability
  string will no longer be able to create/delete snapshots. To allow
  ``client.foo`` to create/delete snapshots in the ``bar`` directory of
  filesystem ``cephfs_a``, use the command:

  - ``ceph auth caps client.foo mon 'allow r' osd 'allow rw tag cephfs data=cephfs_a' mds 'allow rw, allow rws path=/bar'``

* The ``osd_heartbeat_addr`` option has been removed as it served no
  (good) purpose: the OSD should always check heartbeats on both the
  public and cluster networks.

* The ``rados`` tool's ``mkpool`` and ``rmpool`` commands have been
  removed because they are redundant; please use the ``ceph osd pool
  create`` and ``ceph osd pool rm`` commands instead.

* The ``auid`` property for cephx users and RADOS pools has been
  removed. This was an undocumented and partially implemented
  capability that allowed cephx users to map capabilities to RADOS
  pools that they "owned". Because there are no users, we have removed
  this support. If any cephx capabilities exist in the cluster that
  restrict based on auid then they will no longer parse, and the
  cluster will report a health warning like::

    AUTH_BAD_CAPS 1 auth entities have invalid capabilities
        client.bad osd capability parse failed, stopped at 'allow rwx auid 123' of 'allow rwx auid 123'

  The capability can be adjusted with the ``ceph auth caps`` command. For
  example::

    ceph auth caps client.bad osd 'allow rwx pool foo'

* The ``ceph-kvstore-tool`` ``repair`` command has been renamed
  ``destructive-repair`` since we have discovered it can corrupt an
  otherwise healthy rocksdb database. It should be used only as a last-ditch
  attempt to recover data from an otherwise corrupted store.

* The default memory utilization for the mons has been increased
  somewhat. Rocksdb now uses 512 MB of RAM by default, which should
  be sufficient for small to medium-sized clusters; large clusters
  should tune this up. Also, the ``mon_osd_cache_size`` has been
  increased from 10 OSDMaps to 500, which will translate to an
  additional 500 MB to 1 GB of RAM for large clusters, and much less
  for small clusters.

* The ``mgr/balancer/max_misplaced`` option has been replaced by a new
  global ``target_max_misplaced_ratio`` option that throttles both
  balancer activity and automated adjustments to ``pgp_num`` (normally as a
  result of ``pg_num`` changes). If you have customized the balancer module
  option, you will need to adjust your config to set the new global option
  or revert to the default of 0.05 (5%).

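To illustrate what a ratio-based throttle means in practice, here is a small sketch; the `within_misplaced_budget` helper and the numbers are invented for illustration, and this is not the balancer's actual implementation.

```python
# Sketch of a misplaced-ratio throttle: data movement pauses once
# misplaced objects exceed the configured fraction of all objects.
TARGET_MAX_MISPLACED_RATIO = 0.05  # the default (5%)

def within_misplaced_budget(misplaced_objects, total_objects,
                            ratio=TARGET_MAX_MISPLACED_RATIO):
    """Return True if more data movement may be scheduled."""
    if total_objects == 0:
        return True
    return misplaced_objects / total_objects < ratio
```

With the default, 40 misplaced objects out of 1000 (4%) leave room for more movement, while 60 out of 1000 (6%) exceed the cap and would throttle further adjustments.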
* By default, Ceph no longer issues a health warning when there are
  misplaced objects (objects that are fully replicated but not stored
  on the intended OSDs). You can re-enable the old warning by setting
  ``mon_warn_on_misplaced`` to ``true``.

Upgrading from Luminous
-----------------------

* During the upgrade from Luminous to Nautilus, it will not be possible to
  create a new OSD using a Luminous ceph-osd daemon after the monitors have
  been upgraded to Nautilus.