From 145c37c04854e2484d0370ada5238aa142c1ccde Mon Sep 17 00:00:00 2001
From: Patrick Donnelly
Date: Tue, 12 Mar 2019 13:54:06 -0700
Subject: [PATCH] doc: add CephFS notes for nautilus

Signed-off-by: Patrick Donnelly
---
 doc/cephfs/standby.rst    |  2 ++
 doc/releases/nautilus.rst | 39 +++++++++++++++++++++++++++++++++------
 2 files changed, 35 insertions(+), 6 deletions(-)

diff --git a/doc/cephfs/standby.rst b/doc/cephfs/standby.rst
index d4be67d0a3b..8983415dbbf 100644
--- a/doc/cephfs/standby.rst
+++ b/doc/cephfs/standby.rst
@@ -77,6 +77,8 @@ Each file system may set the number of standby daemons wanted using:
 
 Setting ``count`` to 0 will disable the health check.
 
+.. _mds-standby-replay:
+
 Configuring standby-replay
 --------------------------
 
diff --git a/doc/releases/nautilus.rst b/doc/releases/nautilus.rst
index c2892afb431..ac5c270f473 100644
--- a/doc/releases/nautilus.rst
+++ b/doc/releases/nautilus.rst
@@ -84,6 +84,39 @@ Major Changes from Mimic
 
 - *CephFS*:
 
+  * MDS stability has been greatly improved for large caches and
+    long-running clients with a lot of RAM. Cache trimming and client
+    capability recall are now throttled to prevent overloading the MDS.
+  * CephFS may now be exported via NFS-Ganesha clusters in environments managed
+    by Rook. Ceph manages the clusters and ensures high-availability and
+    scalability. An `introductory demo
+    `_
+    is available. More automation of this feature is expected to be forthcoming
+    in future minor releases of Nautilus.
+  * The MDS ``mds_standby_for_*``, ``mon_force_standby_active``, and
+    ``mds_standby_replay`` configuration options have been obsoleted. Instead,
+    the operator :ref:`may now set ` the new
+    ``allow_standby_replay`` flag on the CephFS file system. This setting
+    causes standbys to become standby-replay for any available rank in the
+    file system.
+  * The MDS now supports dropping its cache, which concurrently asks clients
+    to trim their caches. This is done using the MDS admin socket
+    ``cache drop`` command.
+  * It is now possible to check the progress of an ongoing scrub in the MDS.
+    Additionally, a scrub may be paused or aborted. See :ref:`the disaster
+    recovery documentation ` for more information.
+  * A new interface for creating volumes is provided via the ``ceph volume``
+    command-line interface.
+  * A new cephfs-shell tool is available for manipulating a CephFS file
+    system without mounting.
+  * CephFS-related output from ``ceph status`` has been reformatted for
+    brevity, clarity, and usefulness.
+  * Lazy IO has been revamped. It can be turned on by the client using the
+    new CEPH_O_LAZY flag to the ``ceph_open`` C/C++ API or via the config
+    option ``client_force_lazyio``.
+  * A CephFS file system can now be brought down rapidly via the
+    ``ceph fs fail`` command. See :ref:`the administration page ` for more
+    information.
 
 - *RBD*:
 
@@ -540,12 +573,6 @@ These changes occurred between the Mimic and Nautilus releases.
   ``mds_recall_warning_decay_rate`` (default: 60s) sets the threshold for
   this warning.
 
-* The MDS mds_standby_for_*, mon_force_standby_active, and mds_standby_replay
-  configuration options have been obsoleted. Instead, the operator may now set
-  the new "allow_standby_replay" flag on the CephFS file system. This setting
-  causes standbys to become standby-replay for any available rank in the file
-  system.
-
 * The Telegraf module for the Manager allows for sending statistics to an
   Telegraf Agent over TCP, UDP or a UNIX Socket. Telegraf can then
   send the statistics to databases like InfluxDB, ElasticSearch, Graphite
-- 
2.39.5
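For reviewers, the operator-facing commands these notes describe can be sketched roughly as follows. This is an illustrative snippet, not part of the patch: it must run against a live Nautilus cluster, and the file system name ``cephfs`` and MDS daemon name ``mds.a`` are placeholder assumptions.

```console
# Enable standby-replay for a file system; replaces the obsoleted
# mds_standby_replay / mds_standby_for_* configuration options:
ceph fs set cephfs allow_standby_replay true

# Ask an MDS to drop its cache (this concurrently asks clients to
# trim their caches), via the MDS admin socket:
ceph daemon mds.a cache drop

# Rapidly bring down a file system:
ceph fs fail cephfs
```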