The MDS and clients both try to enforce a cache size. The mechanism for
specifying the MDS cache size is described below. Note that the MDS cache size
is not a hard limit. The MDS always allows clients to lookup new metadata
which is loaded into the cache. This is an essential policy as it avoids
deadlock in client requests (some requests may rely on held capabilities before
capabilities are released).
When the MDS cache is too large, the MDS will **recall** client state so cache
items become unpinned and eligible to be dropped. The MDS can only drop cache
state when no clients refer to the metadata to be dropped. Also described below
is how to configure the MDS recall settings for your workload's needs. This is
necessary if the internal throttles on the MDS recall cannot keep up with the
it will reach a steady state of ``-ln(0.5)/rate*threshold`` items removed per
second.
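As a rough illustration, the steady-state recall rate implied by the formula above can be computed directly. This is a sketch only; the parameter values below are illustrative placeholders, not asserted defaults for any Ceph release:

```python
import math

def steady_state_recall_rate(half_life, threshold):
    # Items removed per second at steady state:
    # -ln(0.5) / half_life * threshold (the formula quoted above).
    return -math.log(0.5) / half_life * threshold

# Illustrative values only; consult your cluster's actual recall settings.
print(round(steady_state_recall_rate(2.5, 16384)))
```

Note that halving the decay half-life doubles the steady-state rate, so tuning the decay rate and the threshold trade off against each other.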
The defaults are conservative and may need to be changed for production MDS with
large cache sizes.
The MDS also keeps track of whether sessions are quiescent. If a client session
is not utilizing its capabilities or is otherwise quiet, the MDS will begin
recalling state from the session even if it's not under cache pressure. This
helps the MDS avoid future work when the cluster workload is hot and cache
pressure is forcing the MDS to recall state. The expectation is that a client
not utilizing its capabilities is unlikely to use those capabilities anytime soon.
The configuration ``mds_session_cache_liveness_decay_rate`` indicates the
half-life for the decay counter tracking the use of capabilities by the client.
Each time a client manipulates or acquires a capability, the MDS will increment
the counter. This is a rough but effective way to monitor the utilization of the
client cache.
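The half-life behaviour described above can be sketched as a counter whose value halves every ``mds_session_cache_liveness_decay_rate`` seconds. This is a minimal model for illustration, not the MDS's actual implementation; the explicit ``now`` parameter simply keeps the sketch deterministic:

```python
import math

class DecayCounter:
    """Counter whose value halves every `half_life` seconds."""
    def __init__(self, half_life):
        self.k = math.log(0.5) / half_life  # negative decay constant
        self.value = 0.0
        self.last = 0.0

    def _decay(self, now):
        # Apply exponential decay for the time elapsed since the last update.
        self.value *= math.exp(self.k * (now - self.last))
        self.last = now

    def hit(self, now, amount=1.0):
        # Each capability acquisition or manipulation bumps the counter.
        self._decay(now)
        self.value += amount

    def get(self, now):
        self._decay(now)
        return self.value

c = DecayCounter(half_life=10.0)
c.hit(now=0.0, amount=100.0)
print(c.get(now=10.0))  # one half-life later: ~50.0
```

A session whose counter stays near zero is "quiescent" in the sense described above and becomes a candidate for early recall.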
The ``mds_session_cache_liveness_magnitude`` is a base-2 magnitude difference
Ceph Dashboard Design Goals
===========================
.. note:: This document is intended to provide a focal point for discussing the overall design
   principles for mgr/dashboard
Introduction
============
Most distributed storage architectures are inherently complex and can present a management challenge
to Operations teams who are typically stretched across multiple product and platform disciplines. In
general terms, the complexity of any solution can have a direct bearing on the operational costs
incurred to manage it. The answer is simple...make it simple :)
Ceph has historically been administered from the CLI. The CLI has always and will always offer the
richest, most flexible way to install and manage a Ceph cluster. Administrators who require and
demand this level of control are unlikely to adopt a UI for anything more than a technical curiosity.
The relevance of the UI is therefore more critical for a new SysAdmin, where it can help technology
adoption and reduce the operational friction that is normally experienced when implementing a new
different views
#. **Data timeliness**. Data displayed in the UI must be timely. State information **must** be reasonably
recent for it to be relevant and acted upon with confidence. In addition, the age of the data should
   be shown as age (e.g. 20s ago) rather than UTC timestamps to make it more immediately consumable by
the Administrator.
#. **Automate through workflows**. If the admin has to follow a 'recipe' to perform a task, the goal of
the dashboard UI should be to implement the flow.
#. **Provide a natural next step**. The UI **is** the *expert system*, so instead of expecting the user
   to know where they go next, the UI should lead them. This means linking components together to
   establish a flow and deeper integration between the alertmanager implementation and the dashboard
elements enabling an Admin to efficiently step from alert to affected component.
#. **Platform visibility**. The platform (OS and hardware configuration) is a fundamental component of the
solution, so providing platform level insights can help deliver a more holistic view of the Ceph cluster.
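The "age rather than timestamp" convention from the data-timeliness principle above can be illustrated with a tiny formatter. This is a hypothetical helper for illustration only; the dashboard's actual rendering code is not shown here:

```python
def age_str(seconds):
    """Render an elapsed time as a short relative age, e.g. '20s ago'."""
    for unit, size in (("d", 86400), ("h", 3600), ("m", 60)):
        if seconds >= size:
            return f"{seconds // size}{unit} ago"
    return f"{seconds}s ago"

print(age_str(20))    # 20s ago
print(age_str(3700))  # 1h ago
```

The point of the convention is that a relative age is meaningful at a glance, whereas a UTC timestamp forces the Administrator to do mental arithmetic before deciding whether the data is fresh enough to act on.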
Focus On User Experience
========================
Ultimately, the goal must be to move away from pushing complexity onto the GUI user through multi-step
workflows like iSCSI configuration or setting specific cluster flags in defined sequences. Simplicity
should be the goal for the UI...let's leave the complexity to the CLI.