ceph osd reweight-by-pg {<int[100-]>} {<poolname> [<poolname>...]}
{--no-increasing}
-Subcommand ``reweight-by-utilization`` reweight OSDs by utilization
-[overload-percentage-for-consideration, default 120].
+Subcommand ``reweight-by-utilization`` reweights OSDs by utilization. It only reweights
+outlier OSDs whose utilization exceeds the average: for example, the default threshold
+of 120% limits reweighting to OSDs that are more than 20% above the average.
+[overload-threshold, default 120 [max_weight_change, default 0.05 [max_osds_to_adjust, default 4]]]
Usage::
- ceph osd reweight-by-utilization {<int[100-]>}
+ ceph osd reweight-by-utilization {<int[100-]> {<float[0.0-]> {<int[0-]>}}}
{--no-increasing}
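+For example, the following invocation (an illustrative sketch: the 110%
+threshold and the cap of 8 OSDs are arbitrary values chosen here, not
+defaults) adjusts the weight of the most overloaded OSDs by at most 0.05 each::
+
+	ceph osd reweight-by-utilization 110 0.05 8
+
+The companion subcommand ``test-reweight-by-utilization`` accepts the same
+arguments and reports the proposed changes without applying them.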
Subcommand ``rm`` removes osd(s) <id> [<id>...] from the OSD map.
---------------------
This example shows a single node configuration running ceph-mgr and
-node_exporter on a server called ``senta04``. Note that this requires to add the
-appropriate instance label to every ``node_exporter`` target individually.
+node_exporter on a server called ``senta04``. Note that you must add an
+appropriate and unique ``instance`` label to each ``node_exporter`` target.
This is just an example: there are other ways to configure prometheus
scrape targets and label rewrite rules.
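+A minimal ``prometheus.yml`` fragment along these lines might look as follows;
+the hostname is a placeholder, while the ports (9100 for node_exporter, 9283
+for the ceph-mgr prometheus module) are the usual defaults::
+
+    scrape_configs:
+      - job_name: 'node'
+        static_configs:
+          - targets: ['senta04.mydomain.com:9100']
+            labels:
+              instance: "senta04"
+      - job_name: 'ceph'
+        honor_labels: true
+        static_configs:
+          - targets: ['senta04.mydomain.com:9283']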
Automatic Cache Sizing
======================
-Bluestore can be configured to automatically resize it's caches when TCMalloc
+Bluestore can be configured to automatically resize its caches when TCMalloc
is configured as the memory allocator and the ``bluestore_cache_autotune``
setting is enabled. This option is currently enabled by default. Bluestore
will attempt to keep OSD heap memory usage under a designated target size via
the ``osd_memory_target`` configuration option.
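+For example, to cap each OSD at roughly 4 GiB of heap (the value below is an
+illustration, not a recommendation), set the target in ``ceph.conf``::
+
+    [osd]
+    # Target heap size per OSD daemon, in bytes (here: 4 GiB)
+    osd_memory_target = 4294967296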
It is possible to run a Ceph Storage Cluster with two networks: a public
(front-side) network and a cluster (back-side) network. However, this approach
-complicates network configuration (both hardware and software) and does not usually have a significant impact on overall performance. For this reason, we generally recommend that dual-NIC systems either be configured with two IPs on the same network, or bonded.
+complicates network configuration (both hardware and software) and does not usually
+have a significant impact on overall performance. For this reason, we recommend
+that, for resilience and capacity, dual-NIC systems either bond these interfaces
+active/active or implement a layer 3 multipath strategy with, e.g., FRR.
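+For reference, such a split is declared in ``ceph.conf`` along these lines (the
+subnets below are placeholder assumptions)::
+
+    [global]
+    # Client-facing (front-side) traffic
+    public_network = 10.0.0.0/24
+    # Replication and recovery (back-side) traffic
+    cluster_network = 10.1.0.0/24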
If, despite the complexity, one still wishes to use two networks, each
:term:`Ceph Node` will need to have more than one NIC. See `Hardware