they peer across data centers (or whatever other CRUSH bucket type
you specified), assuming both are alive. Pools will increase in size
from the default 3 to 4, expecting 2 copies in each site. OSDs will only
-be allowed to connect to monitors in the same data center.
+be allowed to connect to monitors in the same data center. New monitors
+will not be allowed to join the cluster if they do not specify a location.
If all the OSDs and monitors from a data center become inaccessible
at once, the surviving data center will enter a degraded stretch mode. This
Other commands
==============
+If your tiebreaker monitor fails for some reason, you can replace it. Turn on
+a new monitor and run ::
+
+ $ ceph mon set_new_tiebreaker mon.<new_mon_name>
+
+This command will protest if the new monitor is in the same location as existing
+non-tiebreaker monitors. This command WILL NOT remove the previous tiebreaker
+monitor; you should do so yourself.
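+
+For example, if the failed tiebreaker was named ``mon.d`` (a hypothetical name),
+you might remove it once the replacement is in place by running ::
+
+ $ ceph mon remove d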
+
+If you are writing your own tooling for deploying Ceph, you can use a new
+``--set-crush-location`` option when booting monitors, instead of running
+``ceph mon set_location``. This option accepts only a single "bucket=loc" pair, e.g.
+``ceph-mon --set-crush-location 'datacenter=a'``, which must match the
+bucket type you specified when running ``enable_stretch_mode``.
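+
+As a sketch (the monitor id ``e``, the address placeholder, and the bucket name
+``site2`` are all illustrative), such tooling might start a new monitor with its
+location in one step ::
+
+ $ ceph-mon -i e --public-addr <ip:port> --set-crush-location 'datacenter=site2'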
+
+
When in degraded stretch mode, the cluster will go into "recovery" mode automatically
when the disconnected data center comes back. If that doesn't work, or if you want to
enable recovery mode early, you can invoke ::