* You can pass any initial Ceph configuration options to the new
cluster by putting them in a standard ini-style configuration file
- and using the ``--config *<config-file>*`` option.
+ and using the ``--config *<config-file>*`` option. For example::
+
+ $ cat <<EOF > initial-ceph.conf
+ [global]
+ osd crush chooseleaf type = 0
+ EOF
+ $ ./cephadm bootstrap --config initial-ceph.conf ...
* The ``--ssh-user *<user>*`` option makes it possible to choose which ssh
user cephadm will use to connect to hosts. The associated ssh key will be
The OS kernel version (maj.min) is checked for consistency across the hosts. Once again, the
majority of the hosts is used as the basis for identifying anomalies.
-/etc/ceph/ceph.conf
-===================
+Client keyrings and configs
+===========================
+
+Cephadm can distribute copies of the ``ceph.conf`` and client keyring
+files to hosts. For example, it is usually a good idea to store a
+copy of the config and ``client.admin`` keyring on any hosts that will
+be used to administer the cluster via the CLI. By default, cephadm will do
+this for any nodes with the ``admin`` label (which normally includes the bootstrap
+host).
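+
+For example, to have cephadm maintain these files on an additional host, the
+``admin`` label can be added to it (``<hostname>`` below is a placeholder)::
+
+ ceph orch host label add <hostname> admin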
+
+When a client keyring is placed under management, cephadm will:
+
+ - build a list of target hosts based on the specified placement spec (see :ref:`orchestrator-cli-placement-spec`)
+ - store a copy of the ``/etc/ceph/ceph.conf`` file on the specified host(s)
+ - store a copy of the keyring file on the specified host(s)
+ - update the ``ceph.conf`` file as needed (e.g., due to a change in the cluster monitors)
+ - update the keyring file if the entity's key is changed (e.g., via ``ceph auth ...`` commands)
+ - ensure the keyring file has the specified ownership and mode
+ - remove the keyring file when client keyring management is disabled
+ - remove the keyring file from old hosts if the keyring placement spec is updated (as needed)
+
+To view which client keyrings are currently under management::
+
+ ceph orch client-keyring ls
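+
+The output might look something like this (illustrative only; the entries reflect
+whatever keyrings are currently under management)::
+
+ ENTITY      PLACEMENT         MODE       OWNER    PATH
+ client.rbd  label:rbd-client  rw-r-----  107:107  /etc/ceph/ceph.client.rbd.keyring
+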
-Cephadm distributes a minimized ``ceph.conf`` that only contains
-a minimal set of information to connect to the Ceph cluster.
+To place a keyring under management::
-To update the configuration settings, instead of manually editing
-the ``ceph.conf`` file, use the config database instead::
+ ceph orch client-keyring set <entity> <placement> [--mode=<mode>] [--owner=<uid>:<gid>] [--path=<path>]
- ceph config set ...
+- By default, the *path* will be ``/etc/ceph/ceph.{entity}.keyring``, which is where
+  Ceph looks by default. Be careful when specifying alternate locations, as existing
+  files may be overwritten.
+- A placement of ``*`` (all hosts) is common.
+- The mode defaults to ``0600`` and ownership to ``0:0`` (user root, group root).
-See :ref:`ceph-conf-database` for details.
+For example, to create and deploy a ``client.rbd`` key to hosts with the ``rbd-client`` label, readable by uid/gid 107 (qemu)::
-By default, cephadm does not deploy that minimized ``ceph.conf`` across the
-cluster. To enable the management of ``/etc/ceph/ceph.conf`` files on all
-hosts, please enable this by running::
+ ceph auth get-or-create-key client.rbd mon 'profile rbd' mgr 'profile rbd' osd 'profile rbd pool=my_rbd_pool'
+ ceph orch client-keyring set client.rbd label:rbd-client --owner 107:107 --mode 640
- ceph config set mgr mgr/cephadm/manage_etc_ceph_ceph_conf true
+The resulting keyring file is::
-If enabled, by default cephadm will update ``ceph.conf`` on all cluster hosts. To
-change the set of hosts that get a managed config file, you can update the
-``mgr/cephadm/manage_etc_ceph_ceph_conf_hosts`` setting to a different placement
-spec (see :ref:`orchestrator-cli-placement-spec`). For example, to limit config
-file updates to hosts with the ``foo`` label::
+ -rw-r-----. 1 qemu qemu 156 Apr 21 08:47 /etc/ceph/ceph.client.rbd.keyring
+
+To disable management of a keyring file::
+
+ ceph orch client-keyring rm <entity>
+
+Note that this will delete any keyring files for this entity that were previously written
+to cluster nodes.
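+
+For example, to stop managing the ``client.rbd`` keyring created above::
+
+ ceph orch client-keyring rm client.rbd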
+
+
+/etc/ceph/ceph.conf
+===================
- ceph config set mgr mgr/cephadm/manage_etc_ceph_ceph_conf_host label:foo
+It may also be useful to distribute ``ceph.conf`` files to hosts without an associated
+client keyring file. By default, cephadm only deploys a ``ceph.conf`` file to hosts where a client keyring
+is also distributed (see above). To write config files to hosts without client keyrings::
-To set up an initial configuration before bootstrapping
-the cluster, create an initial ``ceph.conf`` file. For example::
+ ceph config set mgr mgr/cephadm/manage_etc_ceph_ceph_conf true
- cat <<EOF > /etc/ceph/ceph.conf
- [global]
- osd crush chooseleaf type = 0
- EOF
+By default, the configs are written to all hosts (i.e., those listed
+by ``ceph orch host ls``). To specify which hosts get a ``ceph.conf``::
-Then, run bootstrap referencing this file::
+ ceph config set mgr mgr/cephadm/manage_etc_ceph_ceph_conf_hosts <placement spec>
- cephadm bootstrap -c /root/ceph.conf ...
+For example, to distribute configs to hosts with the ``bare_config`` label::
+
+ ceph config set mgr mgr/cephadm/manage_etc_ceph_ceph_conf_hosts label:bare_config
+
+(See :ref:`orchestrator-cli-placement-spec` for more information about placement specs.)
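+
+To go back to maintaining the config file on every host, point the placement back
+at ``*`` (all hosts)::
+
+ ceph config set mgr mgr/cephadm/manage_etc_ceph_ceph_conf_hosts '*'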
import json
import logging
from typing import TYPE_CHECKING, Dict, List, Iterator, Optional, Any, Tuple, Set, Mapping, cast, \
- NamedTuple
+ NamedTuple, Type
import orchestrator
from ceph.deployment import inventory
-from ceph.deployment.service_spec import ServiceSpec
+from ceph.deployment.service_spec import ServiceSpec, PlacementSpec
from ceph.utils import str_to_datetime, datetime_to_str, datetime_now
from orchestrator import OrchestratorError, HostSpec, OrchestratorEvent, service_to_daemon_types
return self.spec_created.get(spec.service_name())
+class ClientKeyringSpec(object):
+ """
+ A client keyring file that we should maintain
+ """
+ def __init__(
+ self,
+ entity: str,
+ placement: PlacementSpec,
+ mode: Optional[int] = None,
+ uid: Optional[int] = None,
+ gid: Optional[int] = None,
+ ) -> None:
+ self.entity = entity
+ self.placement = placement
+ self.mode = mode or 0o600
+ self.uid = uid or 0
+ self.gid = gid or 0
+
+ def validate(self) -> None:
+ pass
+
+ def to_json(self) -> Dict[str, Any]:
+ return {
+ 'entity': self.entity,
+ 'placement': self.placement.to_json(),
+ 'mode': self.mode,
+ 'uid': self.uid,
+ 'gid': self.gid,
+ }
+
+ @property
+ def path(self) -> str:
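+ # matches Ceph's default keyring search location, /etc/ceph/$cluster.$name.keyring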
+ return f'/etc/ceph/ceph.{self.entity}.keyring'
+
+ @classmethod
+ def from_json(cls: Type, data: dict) -> 'ClientKeyringSpec':
+ c = data.copy()
+ if 'placement' in c:
+ c['placement'] = PlacementSpec.from_json(c['placement'])
+ _cls = cls(**c)
+ _cls.validate()
+ return _cls
+
+
+class ClientKeyringStore():
+ """
+ Track client keyring files that we are supposed to maintain
+ """
+
+ def __init__(self, mgr):
+ # type: (CephadmOrchestrator) -> None
+ self.mgr: CephadmOrchestrator = mgr
+ self.keys: Dict[str, ClientKeyringSpec] = {}
+
+ def load(self) -> None:
+ c = self.mgr.get_store('client_keyrings') or b'{}'
+ j = json.loads(c)
+ for e, d in j.items():
+ self.keys[e] = ClientKeyringSpec.from_json(d)
+
+ def save(self) -> None:
+ data = {
+ k: v.to_json() for k, v in self.keys.items()
+ }
+ self.mgr.set_store('client_keyrings', json.dumps(data))
+
+ def update(self, ks: ClientKeyringSpec) -> None:
+ self.keys[ks.entity] = ks
+ self.save()
+
+ def rm(self, entity: str) -> None:
+ if entity in self.keys:
+ del self.keys[entity]
+ self.save()
+
+
class HostCache():
"""
HostCache stores different things:
NodeExporterService
from .services.exporter import CephadmExporter, CephadmExporterConfig
from .schedule import HostAssignment
-from .inventory import Inventory, SpecStore, HostCache, EventStore
+from .inventory import Inventory, SpecStore, HostCache, EventStore, ClientKeyringStore, ClientKeyringSpec
from .upgrade import CephadmUpgrade
from .template import TemplateMgr
from .utils import CEPH_TYPES, GATEWAY_TYPES, forall_hosts, cephadmNoImage
self.spec_store = SpecStore(self)
self.spec_store.load()
+ self.keys = ClientKeyringStore(self)
+ self.keys.load()
+
# ensure the host lists are in sync
for h in self.inventory.keys():
if h not in self.cache.daemons:
return HandleCommandResult(stdout='\n'.join(run(host)))
+ @orchestrator._cli_read_command('orch client-keyring ls')
+ def _client_keyring_ls(self, format: Format = Format.plain) -> HandleCommandResult:
+ if format != Format.plain:
+ output = to_format(self.keys.keys.values(), format, many=True, cls=ClientKeyringSpec)
+ else:
+ table = PrettyTable(
+ ['ENTITY', 'PLACEMENT', 'MODE', 'OWNER', 'PATH'],
+ border=False)
+ table.align = 'l'
+ table.left_padding_width = 0
+ table.right_padding_width = 2
+ for ks in sorted(self.keys.keys.values(), key=lambda ks: ks.entity):
+ table.add_row((
+ ks.entity, ks.placement.pretty_str(),
+ utils.file_mode_to_str(ks.mode),
+ f'{ks.uid}:{ks.gid}',
+ ks.path,
+ ))
+ output = table.get_string()
+ return HandleCommandResult(stdout=output)
+
+ @orchestrator._cli_write_command('orch client-keyring set')
+ def _client_keyring_set(
+ self,
+ entity: str,
+ placement: str,
+ owner: Optional[str] = None,
+ mode: Optional[str] = None,
+ ) -> HandleCommandResult:
+ if not entity.startswith('client.'):
+ raise OrchestratorError('entity must start with client.')
+ if owner:
+ try:
+ uid, gid = map(int, owner.split(':'))
+ except Exception:
+ raise OrchestratorError('owner must look like "<uid>:<gid>", e.g., "0:0"')
+ else:
+ uid = 0
+ gid = 0
+ if mode:
+ try:
+ imode = int(mode, 8)
+ except Exception:
+ raise OrchestratorError('mode must be an octal mode, e.g. "600"')
+ else:
+ imode = 0o600
+ pspec = PlacementSpec.from_string(placement)
+ ks = ClientKeyringSpec(entity, pspec, mode=imode, uid=uid, gid=gid)
+ self.keys.update(ks)
+ self._kick_serve_loop()
+ return HandleCommandResult()
+
+ @orchestrator._cli_write_command('orch client-keyring rm')
+ def _client_keyring_rm(
+ self,
+ entity: str,
+ ) -> HandleCommandResult:
+ self.keys.rm(entity)
+ self._kick_serve_loop()
+ return HandleCommandResult()
+
def _get_connection(self, host: str) -> Tuple['remoto.backends.BaseConnection',
'remoto.backends.LegacyModuleExecute']:
"""
+import hashlib
import json
import logging
from collections import defaultdict
client_files: Dict[str, Dict[str, Tuple[int, int, int, bytes, str]]] = {}
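+ # host -> path -> (mode, uid, gid, content, digest)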
# ceph.conf
- if self.mgr.manage_etc_ceph_ceph_conf:
+ if self.mgr.manage_etc_ceph_ceph_conf or self.mgr.keys.keys:
config = self.mgr.get_minimal_ceph_conf().encode('utf-8')
config_digest = ''.join('%02x' % c for c in hashlib.sha256(config).digest())
except Exception as e:
self.mgr.log.warning(f'unable to calc conf hosts: {self.mgr.manage_etc_ceph_ceph_conf_hosts}: {e}')
+ # client keyrings
+ for ks in self.mgr.keys.keys.values():
+ assert config
+ assert config_digest
+ try:
+ ret, keyring, err = self.mgr.mon_command({
+ 'prefix': 'auth get',
+ 'entity': ks.entity,
+ })
+ if ret:
+ self.log.warning(f'unable to fetch keyring for {ks.entity}')
+ continue
+ digest = ''.join('%02x' % c for c in hashlib.sha256(keyring.encode('utf-8')).digest())
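+ # the ServiceSpec type ('mon') is arbitrary here; HostAssignment is only
+ # used to expand the keyring's placement spec into a concrete host list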
+ ha = HostAssignment(
+ spec=ServiceSpec('mon', placement=ks.placement),
+ hosts=self.mgr._schedulable_hosts(),
+ daemons=[],
+ networks=self.mgr.cache.networks,
+ )
+ all_slots, _, _ = ha.place()
+ for host in {s.hostname for s in all_slots}:
+ if host not in client_files:
+ client_files[host] = {}
+ client_files[host]['/etc/ceph/ceph.conf'] = (
+ 0o644, 0, 0, bytes(config), str(config_digest)
+ )
+ client_files[host][ks.path] = (
+ ks.mode, ks.uid, ks.gid, keyring.encode('utf-8'), digest
+ )
+ except Exception as e:
+ self.log.warning(f'unable to calc client keyring {ks.entity} placement {ks.placement}: {e}')
+
@forall_hosts
def refresh(host: str) -> None:
def ceph_release_to_major(release: str) -> int:
return ord(release[0]) - ord('a') + 1
+
+
+def file_mode_to_str(mode: int) -> str:
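+ """Render a numeric file mode (e.g. 0o640) as an ls-style string ('rw-r-----')."""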
+ r = ''
+ for shift in range(0, 9, 3):
+ r = (
+ f'{"r" if (mode >> shift) & 4 else "-"}'
+ f'{"w" if (mode >> shift) & 2 else "-"}'
+ f'{"x" if (mode >> shift) & 1 else "-"}'
+ ) + r
+ return r