}
Finally, inject the CRUSH map to make the rule available to the cluster::

  $ crushtool -c crush.map.txt -o crush2.map.bin
  $ ceph osd setcrushmap -i crush2.map.bin
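If you want to check that the rule was picked up, you can list the cluster's
CRUSH rules; ``stretch_rule`` should appear in the output::

  $ ceph osd crush rule ls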
And lastly, tell the cluster to enter stretch mode. Here, ``mon.e`` is the
tiebreaker monitor and we are splitting across data centers. ``mon.e`` should
also be assigned a datacenter that differs from ``site1`` and ``site2``; for
this purpose you can create another datacenter bucket named ``site3`` in your
CRUSH map and place ``mon.e`` there.
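If a ``site3`` datacenter bucket does not already exist in your CRUSH
hierarchy, you can create one and attach it under the root. This is a minimal
sketch; the bucket name ``site3`` is just the example name used above::

  $ ceph osd crush add-bucket site3 datacenter
  $ ceph osd crush move site3 root=default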
Then set the tiebreaker's location and enable stretch mode::

  $ ceph mon set_location e datacenter=site3
  $ ceph mon enable_stretch_mode e stretch_rule datacenter
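To double-check the monitor side, you can dump the monitor map; on releases
with stretch mode support it records each monitor's location (the exact
output formatting varies by release)::

  $ ceph mon dump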
When stretch mode is enabled, the OSDs will only take PGs active when
they peer across data centers (or whatever other CRUSH bucket type you
specified), assuming both are alive. If one site becomes inaccessible, the
surviving site enters a degraded stretch mode; when the lost site comes back,
the cluster enters a recovery stretch mode, and once all PGs have recovered
it returns to regular stretch mode and stops requiring the always-alive site
when peering (so that you can fail over to the other site, if necessary).
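If you want to confirm that the cluster is actually in stretch mode, the OSD
map reports the stretch fields on releases that support it (field names may
vary by release)::

  $ ceph osd dump | grep stretch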

Stretch Mode Limitations
========================

As implied by the setup, stretch mode only handles 2 sites with OSDs.
When in recovery mode, the cluster should go back into normal stretch mode
when the PGs are healthy. If this does not happen, or you want to force the
cross-data-center peering early and are willing to risk data downtime (or have
verified separately that all the PGs can peer, even if they aren't fully
recovered), you can invoke::

  $ ceph osd force_healthy_stretch_mode --yes-i-really-mean-it
This command should not be necessary; it is included to deal with
unanticipated situations. But you might wish to invoke it to remove the
HEALTH_WARN state which recovery mode generates.
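A counterpart exists for the earlier transition: if the cluster does not enter
recovery mode on its own when the failed data center comes back, recovery can
be forced. Like the command above, it is intended only for unanticipated
situations::

  $ ceph osd force_recovery_stretch_mode --yes-i-really-mean-it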