mkdir /tmp/rpm-repo
../ceph-build/push_to_rpm_repo.sh /tmp/release /tmp/rpm-repo 0.xx
-Next add any additional rpms to the repo that are needed such as leveldb and
-and ceph-deploy. See RPM Backports section
+Next, add to the repo any additional rpms that are needed, such as leveldb.
+See the RPM Backports section.
Finally, sign the rpms and build the repo indexes::
../ceph-build/push_to_deb_repo.sh /tmp/release /tmp/debian-repo 0.xx main
-Next add any addition debian packages that are needed such as leveldb and
-ceph-deploy. See the Debian Backports section below.
+Next, add any additional Debian packages that are needed, such as leveldb.
+See the Debian Backports section below.
Debian packages are signed when added to the repo, so no further action is
needed.
+++ /dev/null
-.. _ceph-deploy-index:
-
-============================
- Installation (ceph-deploy)
-============================
-
-.. raw:: html
-
- <style type="text/css">div.body h3{margin:5px 0px 0px 0px;}</style>
- <table cellpadding="10"><colgroup><col width="33%"><col width="33%"><col width="33%"></colgroup><tbody valign="top"><tr><td><h3>Step 1: Preflight</h3>
-
-A :term:`Ceph Client` and a :term:`Ceph Node` may require some basic
-configuration work prior to deploying a Ceph Storage Cluster. You can also
-avail yourself of help by getting involved in the Ceph community.
-
-.. toctree::
-
- Preflight <quick-start-preflight>
-
-.. raw:: html
-
- </td><td><h3>Step 2: Storage Cluster</h3>
-
-Once you have completed your preflight checklist, you should be able to begin
-deploying a Ceph Storage Cluster.
-
-.. toctree::
-
- Storage Cluster Quick Start <quick-ceph-deploy>
-
-
-.. raw:: html
-
- </td><td><h3>Step 3: Ceph Client(s)</h3>
-
-Most Ceph users don't store objects directly in the Ceph Storage Cluster. They typically use at least one of
-Ceph Block Devices, the Ceph File System, and Ceph Object Storage.
-
-.. toctree::
-
- Block Device Quick Start <../../start/quick-rbd>
- Filesystem Quick Start <quick-cephfs>
- Object Storage Quick Start <quick-rgw>
-
-.. raw:: html
-
- </td></tr></tbody></table>
-
-
-.. toctree::
- :hidden:
-
- Upgrading Ceph <upgrading-ceph>
- Install Ceph Object Gateway <install-ceph-gateway>
-
-
+++ /dev/null
-===========================
-Install Ceph Object Gateway
-===========================
-
-As of `firefly` (v0.80), Ceph Object Gateway is running on Civetweb (embedded
-into the ``ceph-radosgw`` daemon) instead of Apache and FastCGI. Using Civetweb
-simplifies the Ceph Object Gateway installation and configuration.
-
-.. note:: To run the Ceph Object Gateway service, you should have a running
- Ceph storage cluster, and the gateway host should have access to the
- public network.
-
-.. note:: In version 0.80, the Ceph Object Gateway does not support SSL. You
- may setup a reverse proxy server with SSL to dispatch HTTPS requests
- as HTTP requests to CivetWeb.
-
-Execute the Pre-Installation Procedure
---------------------------------------
-
-See Preflight_ and execute the pre-installation procedures on your Ceph Object
-Gateway node. Specifically, you should disable ``requiretty`` on your Ceph
-Deploy user, set SELinux to ``Permissive`` and set up a Ceph Deploy user with
-password-less ``sudo``. For Ceph Object Gateways, you will need to open the
-port that Civetweb will use in production.
-
-.. note:: Civetweb runs on port ``7480`` by default.
-
-Install Ceph Object Gateway
----------------------------
-
-From the working directory of your administration server, install the Ceph
-Object Gateway package on the Ceph Object Gateway node. For example::
-
- ceph-deploy install --rgw <gateway-node1> [<gateway-node2> ...]
-
-The ``ceph-common`` package is a dependency, so ``ceph-deploy`` will install
-this too. The ``ceph`` CLI tools are intended for administrators. To make your
-Ceph Object Gateway node an administrator node, execute the following from the
-working directory of your administration server::
-
- ceph-deploy admin <node-name>
-
-Create a Gateway Instance
--------------------------
-
-From the working directory of your administration server, create an instance of
-the Ceph Object Gateway on the Ceph Object Gateway node. For example::
-
- ceph-deploy rgw create <gateway-node1>
-
-Once the gateway is running, you should be able to access it on port ``7480``
-with an unauthenticated request like this::
-
- http://client-node:7480
-
-If the gateway instance is working properly, you should receive a response like
-this::
-
- <?xml version="1.0" encoding="UTF-8"?>
- <ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
- <Owner>
- <ID>anonymous</ID>
- <DisplayName></DisplayName>
- </Owner>
- <Buckets>
- </Buckets>
- </ListAllMyBucketsResult>
-
-If at any point you run into trouble and you want to start over, execute the
-following to purge the configuration::
-
- ceph-deploy purge <gateway-node1> [<gateway-node2>]
- ceph-deploy purgedata <gateway-node1> [<gateway-node2>]
-
-If you execute ``purge``, you must re-install Ceph.
-
-Change the Default Port
------------------------
-
-Civetweb runs on port ``7480`` by default. To change the default port (e.g., to
-port ``80``), modify your Ceph configuration file in the working directory of
-your administration server. Add a section entitled
-``[client.rgw.<gateway-node>]``, replacing ``<gateway-node>`` with the short
-node name of your Ceph Object Gateway node (i.e., ``hostname -s``).
-
-.. note:: As of version 11.0.1, the Ceph Object Gateway **does** support SSL.
- See `Using SSL with Civetweb`_ for information on how to set that up.
-
-For example, if your node name is ``gateway-node1``, add a section like this
-after the ``[global]`` section::
-
- [client.rgw.gateway-node1]
- rgw_frontends = "civetweb port=80"
-
-.. note:: Ensure that you leave no whitespace between ``port=`` and the port
- number in the ``rgw_frontends`` key/value pair. The ``[client.rgw.gateway-node1]``
- heading identifies this portion of the Ceph configuration file as
- configuring a Ceph Storage Cluster client where the client type is a Ceph
- Object Gateway (i.e., ``rgw``), and the name of the instance is
- ``gateway-node1``.
-
-Push the updated configuration file to your Ceph Object Gateway node
-(and other Ceph nodes)::
-
- ceph-deploy --overwrite-conf config push <gateway-node> [<other-nodes>]
-
-To make the new port setting take effect, restart the Ceph Object
-Gateway::
-
- sudo systemctl restart ceph-radosgw.service
-
-Finally, check to ensure that the port you selected is open on the node's
-firewall (e.g., port ``80``). If it is not open, add the port and reload the
-firewall configuration. If you use the ``firewalld`` daemon, execute::
-
- sudo firewall-cmd --list-all
- sudo firewall-cmd --zone=public --add-port 80/tcp --permanent
- sudo firewall-cmd --reload
-
-If you use ``iptables``, execute::
-
- sudo iptables --list
- sudo iptables -I INPUT 1 -i <iface> -p tcp -s <ip-address>/<netmask> --dport 80 -j ACCEPT
-
-Replace ``<iface>`` and ``<ip-address>/<netmask>`` with the relevant values for
-your Ceph Object Gateway node.
-
-Once you have finished configuring ``iptables``, ensure that you make the
-change persistent so that it will be in effect when your Ceph Object Gateway
-node reboots. Execute::
-
- sudo apt-get install iptables-persistent
-
-A terminal UI will open up. Select ``yes`` for the prompts to save current
-``IPv4`` iptables rules to ``/etc/iptables/rules.v4`` and current ``IPv6``
-iptables rules to ``/etc/iptables/rules.v6``.
-
-The ``IPv4`` iptables rule that you set in the earlier step will be loaded in
-``/etc/iptables/rules.v4`` and will be persistent across reboots.
-
-If you add a new ``IPv4`` iptables rule after installing
-``iptables-persistent`` you will have to add it to the rule file. In such case,
-execute the following as the ``root`` user::
-
- iptables-save > /etc/iptables/rules.v4
-
-.. _Using SSL with Civetweb:
-
-Using SSL with Civetweb
------------------------
-
-Before using SSL with civetweb, you will need a certificate that matches
-the host name that will be used to access the Ceph Object Gateway.
-You may wish to obtain one that has `subject alternate name` fields for
-more flexibility. If you intend to use S3-style subdomains
-(`Add Wildcard to DNS`_), you will need a `wildcard` certificate.
-
-Civetweb requires that the server key, server certificate, and any other
-CA or intermediate certificates be supplied in one file. Each of these
-items must be in `pem` form. Because the combined file contains the
-secret key, it should be protected from unauthorized access.
-
-To configure SSL operation, append ``s`` to the port number. For example::
-
- [client.rgw.gateway-node1]
- rgw_frontends = civetweb port=443s ssl_certificate=/etc/ceph/private/keyandcert.pem
-
-.. versionadded:: Luminous
-
-Furthermore, civetweb can be made to bind to multiple ports by separating them
-with ``+`` in the configuration. This allows for use cases where both SSL and
-non-SSL connections are hosted by a single rgw instance. For example::
-
- [client.rgw.gateway-node1]
- rgw_frontends = civetweb port=80+443s ssl_certificate=/etc/ceph/private/keyandcert.pem
-
-Additional Civetweb Configuration Options
------------------------------------------
-Some additional configuration options can be adjusted for the embedded Civetweb web server
-in the **Ceph Object Gateway** section of the ``ceph.conf`` file.
-A list of supported options, including an example, can be found in the `HTTP Frontends`_ documentation.
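-
-For example, one such option, ``num_threads``, can be appended to the
-``rgw_frontends`` line; the value shown here is only illustrative::
-
- [client.rgw.gateway-node1]
- rgw_frontends = civetweb port=80 num_threads=100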
-
-Migrating from Apache to Civetweb
----------------------------------
-
-If you are running the Ceph Object Gateway on Apache and FastCGI with Ceph
-Storage v0.80 or above, you are already running Civetweb--it starts with the
-``ceph-radosgw`` daemon and it's running on port 7480 by default so that it
-doesn't conflict with your Apache and FastCGI installation and other commonly
-used web service ports. Migrating to use Civetweb basically involves removing
-your Apache installation. Then, you must remove Apache and FastCGI settings
-from your Ceph configuration file and reset ``rgw_frontends`` to Civetweb.
-
-Referring back to the description for installing a Ceph Object Gateway with
-``ceph-deploy``, notice that the configuration file only has one setting
-``rgw_frontends`` (and that's assuming you elected to change the default port).
-The ``ceph-deploy`` utility generates the data directory and the keyring for
-you--placing the keyring in ``/var/lib/ceph/radosgw/{rgw-instance}``. The daemon
-looks in default locations, whereas you may have specified different settings
-in your Ceph configuration file. Since you already have keys and a data
-directory, you will want to maintain those paths in your Ceph configuration
-file if you used something other than default paths.
-
-A typical Ceph Object Gateway configuration file for an Apache-based deployment
-looks something like the following:
-
-On Red Hat Enterprise Linux::
-
- [client.radosgw.gateway-node1]
- host = {hostname}
- keyring = /etc/ceph/ceph.client.radosgw.keyring
- rgw socket path = ""
- log file = /var/log/radosgw/client.radosgw.gateway-node1.log
- rgw frontends = fastcgi socket_port=9000 socket_host=0.0.0.0
- rgw print continue = false
-
-On Ubuntu::
-
- [client.radosgw.gateway-node]
- host = {hostname}
- keyring = /etc/ceph/ceph.client.radosgw.keyring
- rgw socket path = /var/run/ceph/ceph.radosgw.gateway.fastcgi.sock
- log file = /var/log/radosgw/client.radosgw.gateway-node1.log
-
-To modify it for use with Civetweb, simply remove the Apache-specific settings
-such as ``rgw_socket_path`` and ``rgw_print_continue``. Then, change the
-``rgw_frontends`` setting to reflect Civetweb rather than the Apache FastCGI
-front end and specify the port number you intend to use. For example::
-
- [client.radosgw.gateway-node1]
- host = {hostname}
- keyring = /etc/ceph/ceph.client.radosgw.keyring
- log file = /var/log/radosgw/client.radosgw.gateway-node1.log
- rgw_frontends = civetweb port=80
-
-Finally, restart the Ceph Object Gateway. On Red Hat Enterprise Linux execute::
-
- sudo systemctl restart ceph-radosgw.service
-
-On Ubuntu execute::
-
- sudo service radosgw restart id=rgw.<short-hostname>
-
-If you used a port number that is not open, you will also need to open that
-port on your firewall.
-
-Configure Bucket Sharding
--------------------------
-
-A Ceph Object Gateway stores bucket index data in the ``index_pool``, which
-defaults to ``.rgw.buckets.index``. Sometimes users like to put many objects
-(hundreds of thousands to millions of objects) in a single bucket. If you do
-not use the gateway administration interface to set quotas for the maximum
-number of objects per bucket, the bucket index can suffer significant
-performance degradation when users place large numbers of objects into a
-bucket.
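-
-For example, a per-bucket object quota can be set and enabled through the
-gateway administration interface; the user name and limit below are only
-illustrative::
-
- radosgw-admin quota set --quota-scope=bucket --uid=testuser --max-objects=100000
- radosgw-admin quota enable --quota-scope=bucket --uid=testuser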
-
-In Ceph 0.94, you may shard bucket indices to help prevent performance
-bottlenecks when you allow a high number of objects per bucket. The
-``rgw_override_bucket_index_max_shards`` setting allows you to set a maximum
-number of shards per bucket. The default value is ``0``, which means bucket
-index sharding is off by default.
-
-To turn bucket index sharding on, set ``rgw_override_bucket_index_max_shards``
-to a value greater than ``0``.
-
-For simple configurations, you may add ``rgw_override_bucket_index_max_shards``
-to your Ceph configuration file. Add it under ``[global]`` to create a
-system-wide value. You can also set it for each instance in your Ceph
-configuration file.
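-
-For example, a minimal ``ceph.conf`` snippet (the shard count is only
-illustrative) would look like this::
-
- [global]
- rgw_override_bucket_index_max_shards = 16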
-
-Once you have changed your bucket sharding configuration in your Ceph
-configuration file, restart your gateway. On Red Hat Enterprise Linux execute::
-
- sudo systemctl restart ceph-radosgw.service
-
-On Ubuntu execute::
-
- sudo service radosgw restart id=rgw.<short-hostname>
-
-For federated configurations, each zone may have a different ``index_pool``
-setting for failover. To make the value consistent for a zonegroup's zones, you
-may set ``rgw_override_bucket_index_max_shards`` in a gateway's zonegroup
-configuration. For example::
-
- radosgw-admin zonegroup get > zonegroup.json
-
-Open the ``zonegroup.json`` file and edit the ``bucket_index_max_shards`` setting
-for each named zone. Save the ``zonegroup.json`` file and reset the zonegroup.
-For example::
-
- radosgw-admin zonegroup set < zonegroup.json
-
-Once you have updated your zonegroup, update and commit the period.
-For example::
-
- radosgw-admin period update --commit
-
-.. note:: Mapping the index pool (for each zone, if applicable) to a CRUSH
- rule of SSD-based OSDs may also help with bucket index performance.
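-
-A minimal sketch of that approach, assuming OSDs with an ``ssd`` device class
-and the default index pool name (the rule name ``rgw-index-ssd`` is arbitrary)::
-
- ceph osd crush rule create-replicated rgw-index-ssd default host ssd
- ceph osd pool set .rgw.buckets.index crush_rule rgw-index-ssd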
-
-.. _Add Wildcard to DNS:
-
-Add Wildcard to DNS
--------------------
-
-To use Ceph with S3-style subdomains (e.g., bucket-name.domain-name.com), you
-need to add a wildcard to the DNS record of the DNS server you use with the
-``ceph-radosgw`` daemon.
-
-The address of the DNS must also be specified in the Ceph configuration file
-with the ``rgw dns name = {hostname}`` setting.
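-
-For example, assuming the ``gateway-node1`` host name used elsewhere in this
-guide::
-
- [client.rgw.gateway-node1]
- rgw dns name = gateway-node1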
-
-For ``dnsmasq``, add the following address setting with a dot (.) prepended to
-the host name::
-
- address=/.{hostname-or-fqdn}/{host-ip-address}
-
-For example::
-
- address=/.gateway-node1/192.168.122.75
-
-
-For ``bind``, add a wildcard to the DNS record. For example::
-
- $TTL 604800
- @ IN SOA gateway-node1. root.gateway-node1. (
- 2 ; Serial
- 604800 ; Refresh
- 86400 ; Retry
- 2419200 ; Expire
- 604800 ) ; Negative Cache TTL
- ;
- @ IN NS gateway-node1.
- @ IN A 192.168.122.113
- * IN CNAME @
-
-Restart your DNS server and ping your server with a subdomain to ensure that
-your DNS configuration works as expected::
-
- ping mybucket.{hostname}
-
-For example::
-
- ping mybucket.gateway-node1
-
-Add Debugging (if needed)
--------------------------
-
-Once you finish the setup procedure, if you encounter issues with your
-configuration, you can add debugging to the ``[global]`` section of your Ceph
-configuration file and restart the gateway(s) to help troubleshoot any
-configuration issues. For example::
-
- [global]
- #append the following in the global section.
- debug ms = 1
- debug rgw = 20
-
-Using the Gateway
------------------
-
-To use the REST interfaces, first create an initial Ceph Object Gateway user
-for the S3 interface. Then, create a subuser for the Swift interface. You then
-need to verify if the created users are able to access the gateway.
-
-Create a RADOSGW User for S3 Access
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-A ``radosgw`` user needs to be created and granted access. The command ``man
-radosgw-admin`` will provide information on additional command options.
-
-To create the user, execute the following on the ``gateway host``::
-
- sudo radosgw-admin user create --uid="testuser" --display-name="First User"
-
-The output of the command will be something like the following::
-
- {
- "user_id": "testuser",
- "display_name": "First User",
- "email": "",
- "suspended": 0,
- "max_buckets": 1000,
- "subusers": [],
- "keys": [{
- "user": "testuser",
- "access_key": "I0PJDPCIYZ665MW88W9R",
- "secret_key": "dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA"
- }],
- "swift_keys": [],
- "caps": [],
- "op_mask": "read, write, delete",
- "default_placement": "",
- "placement_tags": [],
- "bucket_quota": {
- "enabled": false,
- "max_size_kb": -1,
- "max_objects": -1
- },
- "user_quota": {
- "enabled": false,
- "max_size_kb": -1,
- "max_objects": -1
- },
- "temp_url_keys": []
- }
-
-.. note:: The values of ``keys->access_key`` and ``keys->secret_key`` are
- needed for access validation.
-
-.. important:: Check the key output. Sometimes ``radosgw-admin`` generates a
- JSON escape character ``\`` in ``access_key`` or ``secret_key``
- and some clients do not know how to handle JSON escape
- characters. Remedies include removing the JSON escape character
- ``\``, encapsulating the string in quotes, regenerating the key
- and ensuring that it does not have a JSON escape character, or
- specifying the key and secret manually. Also, if ``radosgw-admin``
- generates a JSON escape character ``\`` and a forward slash ``/``
- together in a key, like ``\/``, only remove the JSON escape
- character ``\``. Do not remove the forward slash ``/`` as it is
- a valid character in the key.
-
-Create a Swift User
-^^^^^^^^^^^^^^^^^^^
-
-A Swift subuser needs to be created if this kind of access is needed. Creating
-a Swift user is a two step process. The first step is to create the user. The
-second is to create the secret key.
-
-Execute the following steps on the ``gateway host``:
-
-Create the Swift user::
-
- sudo radosgw-admin subuser create --uid=testuser --subuser=testuser:swift --access=full
-
-The output will be something like the following::
-
- {
- "user_id": "testuser",
- "display_name": "First User",
- "email": "",
- "suspended": 0,
- "max_buckets": 1000,
- "subusers": [{
- "id": "testuser:swift",
- "permissions": "full-control"
- }],
- "keys": [{
- "user": "testuser:swift",
- "access_key": "3Y1LNW4Q6X0Y53A52DET",
- "secret_key": ""
- }, {
- "user": "testuser",
- "access_key": "I0PJDPCIYZ665MW88W9R",
- "secret_key": "dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA"
- }],
- "swift_keys": [],
- "caps": [],
- "op_mask": "read, write, delete",
- "default_placement": "",
- "placement_tags": [],
- "bucket_quota": {
- "enabled": false,
- "max_size_kb": -1,
- "max_objects": -1
- },
- "user_quota": {
- "enabled": false,
- "max_size_kb": -1,
- "max_objects": -1
- },
- "temp_url_keys": []
- }
-
-Create the secret key::
-
- sudo radosgw-admin key create --subuser=testuser:swift --key-type=swift --gen-secret
-
-The output will be something like the following::
-
- {
- "user_id": "testuser",
- "display_name": "First User",
- "email": "",
- "suspended": 0,
- "max_buckets": 1000,
- "subusers": [{
- "id": "testuser:swift",
- "permissions": "full-control"
- }],
- "keys": [{
- "user": "testuser:swift",
- "access_key": "3Y1LNW4Q6X0Y53A52DET",
- "secret_key": ""
- }, {
- "user": "testuser",
- "access_key": "I0PJDPCIYZ665MW88W9R",
- "secret_key": "dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA"
- }],
- "swift_keys": [{
- "user": "testuser:swift",
- "secret_key": "244+fz2gSqoHwR3lYtSbIyomyPHf3i7rgSJrF\/IA"
- }],
- "caps": [],
- "op_mask": "read, write, delete",
- "default_placement": "",
- "placement_tags": [],
- "bucket_quota": {
- "enabled": false,
- "max_size_kb": -1,
- "max_objects": -1
- },
- "user_quota": {
- "enabled": false,
- "max_size_kb": -1,
- "max_objects": -1
- },
- "temp_url_keys": []
- }
-
-Access Verification
-^^^^^^^^^^^^^^^^^^^
-
-Test S3 Access
-""""""""""""""
-
-You need to write and run a Python test script for verifying S3 access. The S3
-access test script will connect to the ``radosgw``, create a new bucket and
-list all buckets. The values for ``aws_access_key_id`` and
-``aws_secret_access_key`` are taken from the values of ``access_key`` and
-``secret_key`` returned by the ``radosgw-admin`` command.
-
-Execute the following steps:
-
-#. You will need to install the ``python-boto`` package::
-
- sudo yum install python-boto
-
-#. Create the Python script::
-
- vi s3test.py
-
-#. Add the following contents to the file::
-
- import boto.s3.connection
-
- access_key = 'I0PJDPCIYZ665MW88W9R'
- secret_key = 'dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA'
- conn = boto.connect_s3(
- aws_access_key_id=access_key,
- aws_secret_access_key=secret_key,
- host='{hostname}', port={port},
- is_secure=False, calling_format=boto.s3.connection.OrdinaryCallingFormat(),
- )
-
- bucket = conn.create_bucket('my-new-bucket')
- for bucket in conn.get_all_buckets():
- print "{name} {created}".format(
- name=bucket.name,
- created=bucket.creation_date,
- )
-
-
- Replace ``{hostname}`` with the hostname of the host where you have
- configured the gateway service i.e., the ``gateway host``. Replace ``{port}``
- with the port number you are using with Civetweb.
-
-#. Run the script::
-
- python s3test.py
-
- The output will be something like the following::
-
- my-new-bucket 2015-02-16T17:09:10.000Z
-
-Test Swift Access
-"""""""""""""""""
-
-Swift access can be verified via the ``swift`` command line client. The command
-``man swift`` will provide more information on available command line options.
-
-To install the ``swift`` client, execute the following commands. On Red Hat
-Enterprise Linux::
-
- sudo yum install python-setuptools
- sudo easy_install pip
- sudo pip install --upgrade setuptools
- sudo pip install --upgrade python-swiftclient
-
-On Debian-based distributions::
-
- sudo apt-get install python-setuptools
- sudo easy_install pip
- sudo pip install --upgrade setuptools
- sudo pip install --upgrade python-swiftclient
-
-To test swift access, execute the following::
-
- swift -V 1 -A http://{IP ADDRESS}:{port}/auth -U testuser:swift -K '{swift_secret_key}' list
-
-Replace ``{IP ADDRESS}`` with the public IP address of the gateway server and
-``{swift_secret_key}`` with its value from the output of the ``radosgw-admin key
-create`` command executed for the ``swift`` user. Replace ``{port}`` with the port
-number you are using with Civetweb (e.g., ``7480`` is the default). If you
-don't replace the port, it will default to port ``80``.
-
-For example::
-
- swift -V 1 -A http://10.19.143.116:7480/auth -U testuser:swift -K '244+fz2gSqoHwR3lYtSbIyomyPHf3i7rgSJrF/IA' list
-
-The output should be::
-
- my-new-bucket
-
-.. _Preflight: ../quick-start-preflight
-.. _HTTP Frontends: ../../../radosgw/frontends
+++ /dev/null
-.. _storage-cluster-quick-start:
-
-===========================
-Storage Cluster Quick Start
-===========================
-
-If you haven't completed your `Preflight Checklist`_, do that first. This
-**Quick Start** sets up a :term:`Ceph Storage Cluster` using ``ceph-deploy``
-on your admin node. Create a three Ceph Node cluster so you can
-explore Ceph functionality.
-
-.. include:: quick-common.rst
-
-As a first exercise, create a Ceph Storage Cluster with one Ceph Monitor and three
-Ceph OSD Daemons. Once the cluster reaches an ``active + clean`` state, expand it
-by adding a fourth Ceph OSD Daemon and two more Ceph Monitors.
-For best results, create a directory on your admin node for maintaining the
-configuration files and keys that ``ceph-deploy`` generates for your cluster. ::
-
- mkdir my-cluster
- cd my-cluster
-
-The ``ceph-deploy`` utility will output files to the current directory. Ensure you
-are in this directory when executing ``ceph-deploy``.
-
-.. important:: Do not call ``ceph-deploy`` with ``sudo`` or run it as ``root``
- if you are logged in as a different user, because it will not issue ``sudo``
- commands needed on the remote host.
-
-
-Starting over
-=============
-
-If at any point you run into trouble and you want to start over, execute
-the following to purge the Ceph packages, and erase all its data and configuration::
-
- ceph-deploy purge {ceph-node} [{ceph-node}]
- ceph-deploy purgedata {ceph-node} [{ceph-node}]
- ceph-deploy forgetkeys
- rm ceph.*
-
-If you execute ``purge``, you must re-install Ceph. The last ``rm``
-command removes any files that were written out by ceph-deploy locally
-during a previous installation.
-
-
-Create a Cluster
-================
-
-On your admin node from the directory you created for holding your
-configuration details, perform the following steps using ``ceph-deploy``.
-
-#. Create the cluster. ::
-
- ceph-deploy new {initial-monitor-node(s)}
-
- Specify node(s) as hostname, fqdn or hostname:fqdn. For example::
-
- ceph-deploy new node1
-
- Check the output of ``ceph-deploy`` with ``ls`` and ``cat`` in the
- current directory. You should see a Ceph configuration file
- (``ceph.conf``), a monitor secret keyring (``ceph.mon.keyring``),
- and a log file for the new cluster. See `ceph-deploy new -h`_ for
- additional details.
-
- Note for users of Ubuntu 18.04: Python 2 is a prerequisite of Ceph.
- Install the ``python-minimal`` package on Ubuntu 18.04 to provide
- Python 2::
-
- [Ubuntu 18.04] $ sudo apt install python-minimal
-
-#. If you have more than one network interface, add the ``public network``
- setting under the ``[global]`` section of your Ceph configuration file.
- See the `Network Configuration Reference`_ for details. ::
-
- public network = {ip-address}/{bits}
-
- For example,::
-
- public network = 10.1.2.0/24
-
- to use IPs in the 10.1.2.0/24 (or 10.1.2.0/255.255.255.0) network.
-
-#. If you are deploying in an IPv6 environment, run the following command to
- add ``ms bind ipv6 = true`` to ``ceph.conf`` in the local directory::
-
- echo ms bind ipv6 = true >> ceph.conf
-
-#. Install Ceph packages::
-
- ceph-deploy install {ceph-node} [...]
-
- For example::
-
- ceph-deploy install node1 node2 node3
-
- The ``ceph-deploy`` utility will install Ceph on each node.
-
-#. Deploy the initial monitor(s) and gather the keys::
-
- ceph-deploy mon create-initial
-
- Once you complete the process, your local directory should have the following
- keyrings:
-
- - ``ceph.client.admin.keyring``
- - ``ceph.bootstrap-mgr.keyring``
- - ``ceph.bootstrap-osd.keyring``
- - ``ceph.bootstrap-mds.keyring``
- - ``ceph.bootstrap-rgw.keyring``
- - ``ceph.bootstrap-rbd.keyring``
- - ``ceph.bootstrap-rbd-mirror.keyring``
-
- .. note:: If this process fails with a message similar to "Unable to
- find /etc/ceph/ceph.client.admin.keyring", please ensure that the
- IP listed for the monitor node in ceph.conf is the Public IP, not
- the Private IP.
-
-#. Use ``ceph-deploy`` to copy the configuration file and admin key to
- your admin node and your Ceph Nodes so that you can use the ``ceph``
- CLI without having to specify the monitor address and
- ``ceph.client.admin.keyring`` each time you execute a command. ::
-
- ceph-deploy admin {ceph-node(s)}
-
- For example::
-
- ceph-deploy admin node1 node2 node3
-
-#. Deploy a manager daemon (required only for luminous+ builds, i.e., >= 12.x)::
-
- ceph-deploy mgr create node1
-
-#. Add three OSDs. For the purposes of these instructions, we assume you have an
- unused disk in each node called ``/dev/vdb``. *Be sure that the device is not currently in use and does not contain any important data.* ::
-
- ceph-deploy osd create --data {device} {ceph-node}
-
- For example::
-
- ceph-deploy osd create --data /dev/vdb node1
- ceph-deploy osd create --data /dev/vdb node2
- ceph-deploy osd create --data /dev/vdb node3
-
- .. note:: If you are creating an OSD on an LVM volume, the argument to
- ``--data`` *must* be ``volume_group/lv_name``, rather than the path to
- the volume's block device.
-
-#. Check your cluster's health. ::
-
- ssh node1 sudo ceph health
-
- Your cluster should report ``HEALTH_OK``. You can view a more complete
- cluster status with::
-
- ssh node1 sudo ceph -s
-
-
-Expanding Your Cluster
-======================
-
-Once you have a basic cluster up and running, the next step is to expand the
-cluster. Add a Ceph Monitor and a Ceph Manager to ``node2`` and ``node3``
-to improve reliability and availability.
-
-.. ditaa::
-
- /------------------\ /----------------\
- | ceph-deploy | | node1 |
- | Admin Node | | cCCC |
- | +-------->+ |
- | | | mon.node1 |
- | | | osd.0 |
- | | | mgr.node1 |
- \---------+--------/ \----------------/
- |
- | /----------------\
- | | node2 |
- | | cCCC |
- +----------------->+ |
- | | osd.1 |
- | | mon.node2 |
- | \----------------/
- |
- | /----------------\
- | | node3 |
- | | cCCC |
- +----------------->+ |
- | osd.2 |
- | mon.node3 |
- \----------------/
-
-Adding Monitors
----------------
-
-A Ceph Storage Cluster requires at least one Ceph Monitor and Ceph
-Manager to run. For high availability, Ceph Storage Clusters typically
-run multiple Ceph Monitors so that the failure of a single Ceph
-Monitor will not bring down the Ceph Storage Cluster. Ceph uses the
-Paxos algorithm, which requires a majority of monitors (i.e., greater
-than *N/2* where *N* is the number of monitors) to form a quorum.
-Odd numbers of monitors tend to be better, although this is not required.
-
-.. tip:: If you did not define the ``public network`` option above then
- the new monitor will not know which IP address to bind to on the
- new hosts. You can add this line to your ``ceph.conf`` by editing
- it now and then push it out to each node with
- ``ceph-deploy --overwrite-conf config push {ceph-nodes}``.
-
-Add two Ceph Monitors to your cluster::
-
- ceph-deploy mon add {ceph-nodes}
-
-For example::
-
- ceph-deploy mon add node2 node3
-
-Once you have added your new Ceph Monitors, Ceph will begin synchronizing
-the monitors and form a quorum. You can check the quorum status by executing
-the following::
-
- ceph quorum_status --format json-pretty
-
-
-.. tip:: When you run Ceph with multiple monitors, you SHOULD install and
- configure NTP on each monitor host. Ensure that the
- monitors are NTP peers.
-
-Adding Managers
----------------
-
-The Ceph Manager daemons operate in an active/standby pattern. Deploying
-additional manager daemons ensures that if one daemon or host fails, another
-one can take over without interrupting service.
-
-To deploy additional manager daemons::
-
- ceph-deploy mgr create node2 node3
-
-You should see the standby managers in the output from::
-
- ssh node1 sudo ceph -s
-
-
-Add an RGW Instance
--------------------
-
-To use the :term:`Ceph Object Gateway` component of Ceph, you must deploy an
-instance of :term:`RGW`. Execute the following to create a new instance of
-RGW::
-
- ceph-deploy rgw create {gateway-node}
-
-For example::
-
- ceph-deploy rgw create node1
-
-By default, the :term:`RGW` instance will listen on port 7480. This can be
-changed by editing ceph.conf on the node running the :term:`RGW` as follows:
-
-.. code-block:: ini
-
- [client]
- rgw frontends = civetweb port=80
-
-To use an IPv6 address, use:
-
-.. code-block:: ini
-
- [client]
- rgw frontends = civetweb port=[::]:80
-
-
-
-Storing/Retrieving Object Data
-==============================
-
-To store object data in the Ceph Storage Cluster, a Ceph client must:
-
-#. Set an object name
-#. Specify a `pool`_
-
-The Ceph Client retrieves the latest cluster map and the CRUSH algorithm
-calculates how to map the object to a `placement group`_, and then calculates
-how to assign the placement group to a Ceph OSD Daemon dynamically. To find the
-object location, all you need is the object name and the pool name. For
-example::
-
- ceph osd map {poolname} {object-name}
-
-.. topic:: Exercise: Locate an Object
-
- As an exercise, let's create an object. Specify an object name, a path to
- a test file containing some object data and a pool name using the
- ``rados put`` command on the command line. For example::
-
- echo {Test-data} > testfile.txt
- ceph osd pool create mytest
- rados put {object-name} {file-path} --pool=mytest
- rados put test-object-1 testfile.txt --pool=mytest
-
- To verify that the Ceph Storage Cluster stored the object, execute
- the following::
-
- rados -p mytest ls
-
- Now, identify the object location::
-
- ceph osd map {pool-name} {object-name}
- ceph osd map mytest test-object-1
-
- Ceph should output the object's location. For example::
-
- osdmap e537 pool 'mytest' (1) object 'test-object-1' -> pg 1.d1743484 (1.4) -> up [1,0] acting [1,0]
-
- To remove the test object, simply delete it using the ``rados rm``
- command.
-
- For example::
-
- rados rm test-object-1 --pool=mytest
-
- To delete the ``mytest`` pool::
-
- ceph osd pool rm mytest
-
- (For safety reasons you will need to supply additional arguments as
- prompted; deleting pools destroys data.)
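-
- A sketch of the full form (the pool name is passed twice and a confirmation
- flag is required; ``mon_allow_pool_delete`` must also be enabled on the
- monitors)::
-
- ceph osd pool rm mytest mytest --yes-i-really-really-mean-it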
-
-As the cluster evolves, the object location may change dynamically. One benefit
-of Ceph's dynamic rebalancing is that Ceph relieves you from having to perform
-data migration or balancing manually.
-
-
-.. _Preflight Checklist: ../quick-start-preflight
-.. _Ceph Deploy: ../../../rados/deployment
-.. _ceph-deploy install -h: ../../../rados/deployment/ceph-deploy-install
-.. _ceph-deploy new -h: ../../../rados/deployment/ceph-deploy-new
-.. _ceph-deploy osd: ../../../rados/deployment/ceph-deploy-osd
-.. _Running Ceph with Upstart: ../../../rados/operations/operating#running-ceph-with-upstart
-.. _Running Ceph with sysvinit: ../../../rados/operations/operating#running-ceph-with-sysvinit
-.. _CRUSH Map: ../../../rados/operations/crush-map
-.. _pool: ../../../rados/operations/pools
-.. _placement group: ../../../rados/operations/placement-groups
-.. _Monitoring a Cluster: ../../../rados/operations/monitoring
-.. _Monitoring OSDs and PGs: ../../../rados/operations/monitoring-osd-pg
-.. _Network Configuration Reference: ../../../rados/configuration/network-config-ref
-.. _User Management: ../../../rados/operations/user-management
+++ /dev/null
-===================
- CephFS Quick Start
-===================
-
-To use the :term:`CephFS` Quick Start guide, you must have executed the
-procedures in the `Storage Cluster Quick Start`_ guide first. Execute this
-quick start on the admin host.
-
-Prerequisites
-=============
-
-#. Verify that you have an appropriate version of the Linux kernel.
- See `OS Recommendations`_ for details. ::
-
- lsb_release -a
- uname -r
-
-#. On the admin node, use ``ceph-deploy`` to install Ceph on your
- ``ceph-client`` node. ::
-
- ceph-deploy install ceph-client
-
-#. Optionally, if you want a FUSE-mounted file system, you will need to
- install the ``ceph-fuse`` package as well.
-
-#. Ensure that the :term:`Ceph Storage Cluster` is running and in an ``active +
- clean`` state. ::
-
- ceph -s [-m {monitor-ip-address}] [-k {path/to/ceph.client.admin.keyring}]
-
-
-Deploy Metadata Server
-======================
-
-All metadata operations in CephFS happen via a metadata server, so you need at
-least one metadata server. Execute the following to create a metadata server::
-
- ceph-deploy mds create {ceph-node}
-
-For example::
-
- ceph-deploy mds create node1
-
-Now, your Ceph cluster would look like this:
-
-.. ditaa::
-
- /------------------\ /----------------\
- | ceph-deploy | | node1 |
- | Admin Node | | cCCC |
- | +-------->+ mon.node1 |
- | | | osd.0 |
- | | | mgr.node1 |
- | | | mds.node1 |
- \---------+--------/ \----------------/
- |
- | /----------------\
- | | node2 |
- | | cCCC |
- +----------------->+ |
- | | osd.1 |
- | | mon.node2 |
- | \----------------/
- |
- | /----------------\
- | | node3 |
- | | cCCC |
- +----------------->+ |
- | osd.2 |
- | mon.node3 |
- \----------------/
-
-Create a File System
-====================
-
-You have already created an MDS (`Storage Cluster Quick Start`_) but it will not
-become active until you create some pools and a file system. See
-:doc:`/cephfs/createfs`. ::
-
- ceph osd pool create cephfs_data 32
- ceph osd pool create cephfs_meta 32
- ceph fs new mycephfs cephfs_meta cephfs_data
-
-.. note:: In case you have multiple Ceph applications and/or have multiple
- CephFSs on the same cluster, it would be easier to name your pools as
- <application>.<fs-name>.<pool-name>. In that case, the above pools would
- be named as cephfs.mycephfs.data and cephfs.mycephfs.meta.
-
-Quick word about Pools and PGs
-------------------------------
-
-Replication Number/Pool Size
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-Since the default replication number/size is 3, you'd need 3 OSDs to get
-``active+clean`` for all PGs. Alternatively, you may change the replication
-number for the pool to match the number of OSDs::
-
- ceph osd pool set cephfs_data size {number-of-osds}
- ceph osd pool set cephfs_meta size {number-of-osds}
-
-Usually, setting ``pg_num`` to 32 gives a perfectly healthy cluster. To pick an
-appropriate value for ``pg_num``, refer to `Placement Group`_. You can also use
-the pg_autoscaler plugin instead. Introduced in the Nautilus release, it can
-automatically increase/decrease the value of ``pg_num``; refer to
-`Placement Group`_ to find out more about it.
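-
-For example, a minimal sketch of enabling the autoscaler for the pools created
-above would be::
-
- ceph mgr module enable pg_autoscaler
- ceph osd pool set cephfs_data pg_autoscale_mode on
- ceph osd pool set cephfs_meta pg_autoscale_mode on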
-
-When all OSDs are on the same node...
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-In case you have deployed all of the OSDs on the same node, you will need
-to create a new CRUSH rule to replicate data across OSDs and set the rule on the
-CephFS pools, since the default CRUSH rule is to replicate data across
-different nodes::
-
- ceph osd crush rule create-replicated rule_foo default osd
- ceph osd pool set cephfs_data crush_rule rule_foo
- ceph osd pool set cephfs_meta crush_rule rule_foo
-
-Using Erasure Coded pools
-^^^^^^^^^^^^^^^^^^^^^^^^^
-You may also use Erasure Coded pools, which can be more efficient and
-cost-saving since they stripe object data across OSDs and store encoded
-redundancy information alongside the stripes. The data is striped across `k`
-OSDs, and `m` additional coding chunks provide the redundancy. You'll need to
-pick these values before creating CephFS pools. The following commands create
-an erasure code profile, create a pool that uses it, enable overwrites on the
-pool, and add it as a data pool to the file system::
-
- ceph osd erasure-code-profile set ec-42-profile k=4 m=2 crush-failure-domain=host crush-device-class=ssd
- ceph osd pool create cephfs_data_ec42 64 erasure ec-42-profile
- ceph osd pool set cephfs_data_ec42 allow_ec_overwrites true
- ceph fs add_data_pool mycephfs cephfs_data_ec42
-
-You can also mark directories so that they are only stored on certain pools::
-
- setfattr -n ceph.dir.layout -v pool=cephfs_data_ec42 /mnt/mycephfs/logs
-
-This way you can choose the replication strategy for each directory on your
-Ceph file system.
-
-.. note:: Erasure Coded pools can not be used for CephFS metadata pools.
-
-Erasure coded pools were introduced in Firefly and can be used directly by
-CephFS from Luminous onwards. Refer to `this article <https://ceph.io/community/new-luminous-erasure-coding-rbd-cephfs/>`_
-by Sage Weil to understand EC, its background, limitations and other details
-in Ceph's context. Read more about `Erasure Code`_ here.
-
-Mounting the File System
-========================
-
-Using Kernel Driver
--------------------
-
-The command to mount CephFS using the kernel driver looks like this::
-
- sudo mount -t ceph :{path-to-be-mounted} {mount-point} -o name={user-name}
- sudo mount -t ceph :/ /mnt/mycephfs -o name=admin # usable version
-
-``{path-to-be-mounted}`` is the path within CephFS that will be mounted,
-``{mount-point}`` is the point in your file system upon which CephFS will be
-mounted and ``{user-name}`` is the name of the CephX user that has the
-authorization to mount CephFS on the machine. The following command is the
-extended form; however, these extra details are automatically figured out
-by the ``mount.ceph`` helper program::
-
- sudo mount -t ceph {ip-address-of-MON}:{port-number-of-MON}:{path-to-be-mounted} -o name={user-name},secret={secret-key} {mount-point}
-
-If you have multiple file systems on your cluster, you need to pass the
-``fs={fs-name}`` option via the ``-o`` option of the ``mount`` command::
-
- sudo mount -t ceph :/ /mnt/kcephfs2 -o name=admin,fs=mycephfs2
-
-Refer to the `mount.ceph man page`_ and `Mount CephFS using Kernel Driver`_ to
-read more about this.
-
-
-Using FUSE
-----------
-
-To mount CephFS using FUSE (Filesystem in User Space) run::
-
- sudo ceph-fuse /mnt/mycephfs
-
-To mount a particular directory within CephFS you can use ``-r``::
-
- sudo ceph-fuse -r {path-to-be-mounted} /mnt/mycephfs
-
-If you have multiple file systems on your cluster, you need to pass
-``--client_fs {fs-name}`` to the ``ceph-fuse`` command::
-
- sudo ceph-fuse /mnt/mycephfs2 --client_fs mycephfs2
-
-Refer to the `ceph-fuse man page`_ and `Mount CephFS using FUSE`_ to read more
-about this.
-
-.. note:: Mount the CephFS file system on the admin node, not the server node.
-
-
-Additional Information
-======================
-
-See `CephFS`_ for additional information. See `Troubleshooting`_ if you
-encounter trouble.
-
-.. _Storage Cluster Quick Start: ../quick-ceph-deploy
-.. _CephFS: ../../../cephfs/
-.. _Troubleshooting: ../../../cephfs/troubleshooting
-.. _OS Recommendations: ../../../start/os-recommendations
-.. _Placement Group: ../../../rados/operations/placement-groups
-.. _mount.ceph man page: ../../../man/8/mount.ceph
-.. _Mount CephFS using Kernel Driver: ../../../cephfs/mount-using-kernel-driver
-.. _ceph-fuse man page: ../../../man/8/ceph-fuse
-.. _Mount CephFS using FUSE: ../../../cephfs/mount-using-fuse
-.. _Erasure Code: ../../../rados/operations/erasure-code
+++ /dev/null
-.. ditaa::
-
- /------------------\ /-----------------\
- | admin-node | | node1 |
- | +-------->+ cCCC |
- | ceph-deploy | | mon.node1 |
- | | | osd.0 |
- \---------+--------/ \-----------------/
- |
- | /----------------\
- | | node2 |
- +----------------->+ cCCC |
- | | osd.1 |
- | \----------------/
- |
- | /----------------\
- | | node3 |
- +----------------->| cCCC |
- | osd.2 |
- \----------------/
-
+++ /dev/null
-:orphan:
-
-===========================
- Quick Ceph Object Storage
-===========================
-
-To use the :term:`Ceph Object Storage` Quick Start guide, you must have executed the
-procedures in the `Storage Cluster Quick Start`_ guide first. Make sure that you
-have at least one :term:`RGW` instance running.
-
-Configure new RGW instance
-==========================
-
-The :term:`RGW` instance created by the `Storage Cluster Quick Start`_ will run using
-the embedded CivetWeb webserver. ``ceph-deploy`` will create the instance and start
-it automatically with default parameters.
-
-To administer the :term:`RGW` instance, see details in the
-`RGW Admin Guide`_.
-
-Additional details may be found in the `Configuring Ceph Object Gateway`_ guide, but
-the steps specific to Apache are no longer needed.
-
-.. note:: Deploying RGW using ``ceph-deploy`` and using the CivetWeb webserver instead
- of Apache is new functionality as of the **Hammer** release.
-
-
-.. _Storage Cluster Quick Start: ../quick-ceph-deploy
-.. _RGW Admin Guide: ../../../radosgw/admin
-.. _Configuring Ceph Object Gateway: ../../../radosgw/config-fcgi
+++ /dev/null
-===============================
-Ceph Object Gateway Quick Start
-===============================
-
-As of `firefly` (v0.80), Ceph Storage dramatically simplifies installing and
-configuring a Ceph Object Gateway. The Gateway daemon embeds Civetweb, so you
-do not have to install a web server or configure FastCGI. Additionally,
-``ceph-deploy`` can install the gateway package, generate a key, configure a
-data directory and create a gateway instance for you.
-
-.. tip:: Civetweb uses port ``7480`` by default. You must either open port
- ``7480``, or set the port to a preferred port (e.g., port ``80``) in your Ceph
- configuration file.
-
-To start a Ceph Object Gateway, follow the steps below:
-
-Installing Ceph Object Gateway
-==============================
-
-#. Execute the pre-installation steps on your ``client-node``. If you intend to
- use Civetweb's default port ``7480``, you must open it using either
- ``firewall-cmd`` or ``iptables``. See `Preflight Checklist`_ for more
- information.
-
-#. From the working directory of your administration server, install the Ceph
- Object Gateway package on the ``client-node`` node. For example::
-
- ceph-deploy install --rgw <client-node> [<client-node> ...]
-
-Creating the Ceph Object Gateway Instance
-=========================================
-
-From the working directory of your administration server, create an instance of
-the Ceph Object Gateway on the ``client-node``. For example::
-
- ceph-deploy rgw create <client-node>
-
-Once the gateway is running, you should be able to access it on port ``7480``.
-(e.g., ``http://client-node:7480``).
-
-Configuring the Ceph Object Gateway Instance
-============================================
-
-#. To change the default port (e.g., to port ``80``), modify your Ceph
- configuration file. Add a section entitled ``[client.rgw.<client-node>]``,
- replacing ``<client-node>`` with the short node name of your Ceph client
- node (i.e., ``hostname -s``). For example, if your node name is
- ``client-node``, add a section like this after the ``[global]`` section::
-
- [client.rgw.client-node]
- rgw_frontends = "civetweb port=80"
-
- .. note:: Ensure that you leave no whitespace between ``port=<port-number>``
- in the ``rgw_frontends`` key/value pair.
-
- .. important:: If you intend to use port 80, make sure that the Apache
- server is not running; otherwise, it will conflict with Civetweb. We recommend
- removing Apache in this case.
-
-#. To make the new port setting take effect, restart the Ceph Object Gateway.
- On Red Hat Enterprise Linux 7 and Fedora, run the following command::
-
- sudo systemctl restart ceph-radosgw.service
-
- On Red Hat Enterprise Linux 6 and Ubuntu, run the following command::
-
- sudo service radosgw restart id=rgw.<short-hostname>
-
-#. Finally, check to ensure that the port you selected is open on the node's
- firewall (e.g., port ``80``). If it is not open, add the port and reload the
- firewall configuration. For example::
-
- sudo firewall-cmd --list-all
- sudo firewall-cmd --zone=public --add-port 80/tcp --permanent
- sudo firewall-cmd --reload
-
- See `Preflight Checklist`_ for more information on configuring firewall with
- ``firewall-cmd`` or ``iptables``.
-
- You should be able to make an unauthenticated request, and receive a
- response. For example, a request with no parameters like this::
-
- http://<client-node>:80
-
- Should result in a response like this::
-
- <?xml version="1.0" encoding="UTF-8"?>
- <ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
- <Owner>
- <ID>anonymous</ID>
- <DisplayName></DisplayName>
- </Owner>
- <Buckets>
- </Buckets>
- </ListAllMyBucketsResult>
-
-See the `Configuring Ceph Object Gateway`_ guide for additional administration
-and API details.
-
-.. _Configuring Ceph Object Gateway: ../../../radosgw/config-ref
-.. _Preflight Checklist: ../quick-start-preflight
+++ /dev/null
-=====================
- Preflight Checklist
-=====================
-
-The ``ceph-deploy`` tool operates out of a directory on an admin
-:term:`Node`. Any host with network connectivity, a modern Python
-environment, and SSH (such as Linux) should work.
-
-In the descriptions below, :term:`Node` refers to a single machine.
-
-.. include:: quick-common.rst
-
-
-Ceph-deploy Setup
-=================
-
-Add Ceph repositories to the ``ceph-deploy`` admin node. Then, install
-``ceph-deploy``.
-
-Debian/Ubuntu
--------------
-
-For Debian and Ubuntu distributions, perform the following steps:
-
-#. Add the release key::
-
- wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
-
-#. Add the Ceph packages to your repository. Use the command below and
- replace ``{ceph-stable-release}`` with a stable Ceph release (e.g.,
- ``luminous``.) For example::
-
- echo deb https://download.ceph.com/debian-{ceph-stable-release}/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
-
-#. Update your repository and install ``ceph-deploy``::
-
- sudo apt update
- sudo apt install ceph-deploy
-
-.. note:: You can also use the EU mirror eu.ceph.com for downloading your packages by replacing ``https://download.ceph.com/`` by ``http://eu.ceph.com/``
-
-
-RHEL/CentOS
------------
-
-For CentOS 7, perform the following steps:
-
-#. On Red Hat Enterprise Linux 7, register the target machine with
- ``subscription-manager``, verify your subscriptions, and enable the
- "Extras" repository for package dependencies. For example::
-
- sudo subscription-manager repos --enable=rhel-7-server-extras-rpms
-
-#. Install and enable the Extra Packages for Enterprise Linux (EPEL)
- repository::
-
- sudo yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
-
- Please see the `EPEL wiki`_ page for more information.
-
-#. Add the Ceph repository to your yum configuration file at ``/etc/yum.repos.d/ceph.repo`` with the following command. Replace ``{ceph-stable-release}`` with a stable Ceph release (e.g.,
- ``luminous``.) For example::
-
- cat << EOM > /etc/yum.repos.d/ceph.repo
- [ceph-noarch]
- name=Ceph noarch packages
- baseurl=https://download.ceph.com/rpm-{ceph-stable-release}/el7/noarch
- enabled=1
- gpgcheck=1
- type=rpm-md
- gpgkey=https://download.ceph.com/keys/release.asc
- EOM
-
-#. You may need to install python-setuptools, which is required by ``ceph-deploy``::
-
- sudo yum install python-setuptools
-
-#. Update your repository and install ``ceph-deploy``::
-
- sudo yum update
- sudo yum install ceph-deploy
-
-.. note:: You can also use the EU mirror eu.ceph.com for downloading your packages by replacing ``https://download.ceph.com/`` by ``http://eu.ceph.com/``
-
-
-openSUSE
---------
-
-The Ceph project does not currently publish release RPMs for openSUSE, but
-a stable version of Ceph is included in the default update repository, so
-installing it is just a matter of::
-
- sudo zypper install ceph
- sudo zypper install ceph-deploy
-
-If the distro version is out-of-date, open a bug at
-https://bugzilla.opensuse.org/index.cgi and possibly try your luck with one of
-the following repositories:
-
-#. Hammer::
-
- https://software.opensuse.org/download.html?project=filesystems%3Aceph%3Ahammer&package=ceph
-
-#. Jewel::
-
- https://software.opensuse.org/download.html?project=filesystems%3Aceph%3Ajewel&package=ceph
-
-
-Ceph Node Setup
-===============
-
-The admin node must have password-less SSH access to Ceph nodes.
-When ceph-deploy logs in to a Ceph node as a user, that particular
-user must have passwordless ``sudo`` privileges.
-
-
-Install NTP
------------
-
-We recommend installing NTP on Ceph nodes (especially on Ceph Monitor nodes) to
-prevent issues arising from clock drift. See `Clock`_ for details.
-
-On CentOS / RHEL, execute::
-
- sudo yum install ntp ntpdate ntp-doc
-
-On Debian / Ubuntu, execute::
-
- sudo apt install ntpsec
-
-or::
-
- sudo apt install chrony
-
-Ensure that you enable the NTP service. Ensure that each Ceph Node uses the
-same NTP time server. See `NTP`_ for details.
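-
-For example, with ``systemd`` the time service can be enabled and started as
-follows (the unit name depends on the daemon you installed)::
-
- sudo systemctl enable --now chronyd    # or ntpd / ntpsec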
-
-
-Install SSH Server
-------------------
-
-For **ALL** Ceph Nodes perform the following steps:
-
-#. Install an SSH server (if necessary) on each Ceph Node::
-
- sudo apt install openssh-server
-
- or::
-
- sudo yum install openssh-server
-
-
-#. Ensure the SSH server is running on **ALL** Ceph Nodes.
-
-
-Create a Ceph Deploy User
--------------------------
-
-The ``ceph-deploy`` utility must log in to a Ceph node as a user
-that has passwordless ``sudo`` privileges, because it needs to install
-software and configuration files without prompting for passwords.
-
-Recent versions of ``ceph-deploy`` support a ``--username`` option so you can
-specify any user that has password-less ``sudo`` (including ``root``, although
-this is **NOT** recommended). To use ``ceph-deploy --username {username}``, the
-user you specify must have password-less SSH access to the Ceph node, as
-``ceph-deploy`` will not prompt you for a password.
-
-We recommend creating a specific user for ``ceph-deploy`` on **ALL** Ceph nodes
-in the cluster. Please do **NOT** use "ceph" as the user name. A uniform user
-name across the cluster may improve ease of use (not required), but you should
-avoid obvious user names, because hackers typically use them with brute force
-hacks (e.g., ``root``, ``admin``, ``{productname}``). The following procedure,
-substituting ``{username}`` for the user name you define, describes how to
-create a user with passwordless ``sudo``.
-
-.. note:: Starting with the :ref:`Infernalis release <infernalis-release-notes>`, the "ceph" user name is reserved
- for the Ceph daemons. If the "ceph" user already exists on the Ceph nodes,
- removing the user must be done before attempting an upgrade.
-
-#. Create a new user on each Ceph Node. ::
-
- ssh user@ceph-server
- sudo useradd -d /home/{username} -m {username}
- sudo passwd {username}
-
-#. For the new user you added to each Ceph node, ensure that the user has
- ``sudo`` privileges. ::
-
- echo "{username} ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/{username}
- sudo chmod 0440 /etc/sudoers.d/{username}
-
-
-Enable Password-less SSH
-------------------------
-
-Since ``ceph-deploy`` will not prompt for a password, you must generate
-SSH keys on the admin node and distribute the public key to each Ceph
-node. ``ceph-deploy`` will attempt to generate the SSH keys for initial
-monitors.
-
-#. Generate the SSH keys, but do not use ``sudo`` or the
- ``root`` user. Leave the passphrase empty::
-
- ssh-keygen
-
- Generating public/private key pair.
- Enter file in which to save the key (/ceph-admin/.ssh/id_rsa):
- Enter passphrase (empty for no passphrase):
- Enter same passphrase again:
- Your identification has been saved in /ceph-admin/.ssh/id_rsa.
- Your public key has been saved in /ceph-admin/.ssh/id_rsa.pub.
-
-#. Copy the key to each Ceph Node, replacing ``{username}`` with the user name
- you created with `Create a Ceph Deploy User`_. ::
-
- ssh-copy-id {username}@node1
- ssh-copy-id {username}@node2
- ssh-copy-id {username}@node3
-
-#. (Recommended) Modify the ``~/.ssh/config`` file of your ``ceph-deploy``
- admin node so that ``ceph-deploy`` can log in to Ceph nodes as the user you
- created without requiring you to specify ``--username {username}`` each
- time you execute ``ceph-deploy``. This has the added benefit of streamlining
- ``ssh`` and ``scp`` usage. Replace ``{username}`` with the user name you
- created::
-
- Host node1
- Hostname node1
- User {username}
- Host node2
- Hostname node2
- User {username}
- Host node3
- Hostname node3
- User {username}
-
-
-Enable Networking On Bootup
----------------------------
-
-Ceph OSDs peer with each other and report to Ceph Monitors over the network.
-If networking is ``off`` by default, the Ceph cluster cannot come online
-during bootup until you enable networking.
-
-The default configuration on some distributions (e.g., CentOS) has the
-networking interface(s) off by default. Ensure that, during boot up, your
-network interface(s) turn(s) on so that your Ceph daemons can communicate over
-the network. For example, on Red Hat and CentOS, navigate to
-``/etc/sysconfig/network-scripts`` and ensure that the ``ifcfg-{iface}`` file
-has ``ONBOOT`` set to ``yes``.
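-
-For example, a hypothetical ``ifcfg-eth0`` would contain a line like::
-
- ONBOOT=yes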
-
-
-Ensure Connectivity
--------------------
-
-Ensure connectivity using ``ping`` with short hostnames (``hostname -s``).
-Address hostname resolution issues as necessary.
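-
-For example, using the node names from this guide::
-
- ping -c 3 node1
- ping -c 3 node2
- ping -c 3 node3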
-
-.. note:: Hostnames should resolve to a network IP address, not to the
- loopback IP address (e.g., hostnames should resolve to an IP address other
- than ``127.0.0.1``). If you use your admin node as a Ceph node, you
- should also ensure that it resolves to its hostname and IP address
- (i.e., not its loopback IP address).
-
-
-Open Required Ports
--------------------
-
-Ceph Monitors communicate using port ``6789`` by default. Ceph OSDs communicate
-in a port range of ``6800:7300`` by default. See the `Network Configuration
-Reference`_ for details. Ceph OSDs can use multiple network connections to
-communicate with clients, monitors, other OSDs for replication, and other OSDs
-for heartbeats.
-
-On some distributions (e.g., RHEL), the default firewall configuration is fairly
-strict. You may need to adjust your firewall settings allow inbound requests so
-that clients in your network can communicate with daemons on your Ceph nodes.
-
-For ``firewalld`` on RHEL 7, add the ``ceph-mon`` service for Ceph Monitor
-nodes and the ``ceph`` service for Ceph OSDs and MDSs to the public zone and
-ensure that you make the settings permanent so that they are enabled on reboot.
-
-For example, on monitors::
-
- sudo firewall-cmd --zone=public --add-service=ceph-mon --permanent
-
-and on OSDs and MDSs::
-
- sudo firewall-cmd --zone=public --add-service=ceph --permanent
-
-Once you have finished configuring firewalld with the ``--permanent`` flag, you can make the changes live immediately without rebooting::
-
- sudo firewall-cmd --reload
-
-For ``iptables``, add port ``6789`` for Ceph Monitors and ports ``6800:7300``
-for Ceph OSDs. For example::
-
- sudo iptables -A INPUT -i {iface} -p tcp -s {ip-address}/{netmask} --dport 6789 -j ACCEPT
-
-Once you have finished configuring ``iptables``, ensure that you make the
-changes persistent on each node so that they will be in effect when your nodes
-reboot. For example::
-
- /sbin/service iptables save
-
-TTY
----
-
-On CentOS and RHEL, you may receive an error while trying to execute
-``ceph-deploy`` commands. If ``requiretty`` is set by default on your Ceph
-nodes, disable it by executing ``sudo visudo`` and locate the ``Defaults
-requiretty`` setting. Change it to ``Defaults:ceph !requiretty`` or comment it
-out to ensure that ``ceph-deploy`` can connect using the user you created with
-`Create a Ceph Deploy User`_.
-
-.. note:: If editing, ``/etc/sudoers``, ensure that you use
- ``sudo visudo`` rather than a text editor.
-
-
-SELinux
--------
-
-On CentOS and RHEL, SELinux is set to ``Enforcing`` by default. To streamline your
-installation, we recommend setting SELinux to ``Permissive`` or disabling it
-entirely and ensuring that your installation and cluster are working properly
-before hardening your configuration. To set SELinux to ``Permissive``, execute the
-following::
-
- sudo setenforce 0
-
-To configure SELinux persistently (recommended if SELinux is an issue), modify
-the configuration file at ``/etc/selinux/config``.
-
-
-Priorities/Preferences
-----------------------
-
-Ensure that your package manager has priority/preferences packages installed and
-enabled. On CentOS, you may need to install EPEL. On RHEL, you may need to
-enable optional repositories. ::
-
- sudo yum install yum-plugin-priorities
-
-For example, on RHEL 7 server, execute the following to install
-``yum-plugin-priorities`` and enable the ``rhel-7-server-optional-rpms``
-repository::
-
- sudo yum install yum-plugin-priorities --enablerepo=rhel-7-server-optional-rpms
-
-
-Summary
-=======
-
-This completes the Quick Start Preflight. Proceed to the `Storage Cluster
-Quick Start`_.
-
-.. _Storage Cluster Quick Start: ../quick-ceph-deploy
-.. _OS Recommendations: ../../../start/os-recommendations
-.. _Network Configuration Reference: ../../../rados/configuration/network-config-ref
-.. _Clock: ../../../rados/configuration/mon-config-ref#clock
-.. _NTP: http://www.ntp.org/
-.. _Infernalis release: ../../../release-notes/#v9-1-0-infernalis-release-candidate
-.. _EPEL wiki: https://fedoraproject.org/wiki/EPEL
+++ /dev/null
-================
- Upgrading Ceph
-================
-
-Each release of Ceph may have additional steps. Refer to the `release notes
-document of your release`_ to identify release-specific procedures for your
-cluster before using the upgrade procedures.
-
-
-Summary
-=======
-
-You can upgrade daemons in your Ceph cluster while the cluster is online and in
-service! Certain types of daemons depend upon others. For example, Ceph Metadata
-Servers and Ceph Object Gateways depend upon Ceph Monitors and Ceph OSD Daemons.
-We recommend upgrading in this order:
-
-#. `Ceph Deploy`_
-#. Ceph Monitors
-#. Ceph OSD Daemons
-#. Ceph Metadata Servers
-#. Ceph Object Gateways
-
-As a general rule, we recommend upgrading all the daemons of a specific type
-(e.g., all ``ceph-mon`` daemons, all ``ceph-osd`` daemons, etc.) to ensure that
-they are all on the same release. We also recommend that you upgrade all the
-daemons in your cluster before you try to exercise new functionality in a
-release.
-
-The `Upgrade Procedures`_ are relatively simple, but do look at the `release
-notes document of your release`_ before upgrading. The basic process involves
-three steps:
-
-#. Use ``ceph-deploy`` on your admin node to upgrade the packages for
- multiple hosts (using the ``ceph-deploy install`` command), or login to each
- host and upgrade the Ceph package `using your distro's package manager`_.
- For example, when `Upgrading Monitors`_, the ``ceph-deploy`` syntax might
- look like this::
-
- ceph-deploy install --release {release-name} ceph-node1[ ceph-node2]
- ceph-deploy install --release firefly mon1 mon2 mon3
-
- **Note:** The ``ceph-deploy install`` command will upgrade the packages
- in the specified node(s) from the old release to the release you specify.
- There is no ``ceph-deploy upgrade`` command.
-
-#. Login in to each Ceph node and restart each Ceph daemon.
- See `Operating a Cluster`_ for details.
-
-#. Ensure your cluster is healthy. See `Monitoring a Cluster`_ for details.
-
-.. important:: Once you upgrade a daemon, you cannot downgrade it.
-
-
-Ceph Deploy
-===========
-
-Before upgrading Ceph daemons, upgrade the ``ceph-deploy`` tool. ::
-
- sudo pip install -U ceph-deploy
-
-Or::
-
- sudo apt-get install ceph-deploy
-
-Or::
-
- sudo yum install ceph-deploy python-pushy
-
-
-Upgrade Procedures
-==================
-
-The following sections describe the upgrade process.
-
-.. important:: Each release of Ceph may have some additional steps. Refer to
- the `release notes document of your release`_ for details **BEFORE** you
- begin upgrading daemons.
-
-
-Upgrading Monitors
-------------------
-
-To upgrade monitors, perform the following steps:
-
-#. Upgrade the Ceph package for each daemon instance.
-
- You may use ``ceph-deploy`` to address all monitor nodes at once.
- For example::
-
- ceph-deploy install --release {release-name} ceph-node1[ ceph-node2]
- ceph-deploy install --release hammer mon1 mon2 mon3
-
- You may also use the package manager for your Linux distribution on
- each individual node. To upgrade packages manually on each Debian/Ubuntu
- host, perform the following steps::
-
- ssh {mon-host}
- sudo apt-get update && sudo apt-get install ceph
-
- On CentOS/Red Hat hosts, perform the following steps::
-
- ssh {mon-host}
- sudo yum update && sudo yum install ceph
-
-
-#. Restart each monitor. For Ubuntu distributions, use::
-
- sudo systemctl restart ceph-mon@{hostname}.service
-
- For CentOS/Red Hat/Debian distributions, use::
-
- sudo /etc/init.d/ceph restart {mon-id}
-
- For CentOS/Red Hat distributions deployed with ``ceph-deploy``,
- the monitor ID is usually ``mon.{hostname}``.
-
-#. Ensure each monitor has rejoined the quorum::
-
- ceph mon stat
-
-Ensure that you have completed the upgrade cycle for all of your Ceph Monitors.
-
-
-Upgrading an OSD
-----------------
-
-To upgrade a Ceph OSD Daemon, perform the following steps:
-
-#. Upgrade the Ceph OSD Daemon package.
-
- You may use ``ceph-deploy`` to address all Ceph OSD Daemon nodes at
- once. For example::
-
- ceph-deploy install --release {release-name} ceph-node1[ ceph-node2]
- ceph-deploy install --release hammer osd1 osd2 osd3
-
- You may also use the package manager on each node to upgrade packages
- `using your distro's package manager`_. For Debian/Ubuntu hosts, perform the
- following steps on each host::
-
- ssh {osd-host}
- sudo apt-get update && sudo apt-get install ceph
-
- For CentOS/Red Hat hosts, perform the following steps::
-
- ssh {osd-host}
- sudo yum update && sudo yum install ceph
-
-
-#. Restart the OSD, where ``N`` is the OSD number. For Ubuntu, use::
-
- sudo systemctl restart ceph-osd@{N}.service
-
- For multiple OSDs on a host, you may restart all of them with systemd. ::
-
- sudo systemctl restart ceph-osd
-
- For CentOS/Red Hat/Debian distributions, use::
-
- sudo /etc/init.d/ceph restart N
-
-
-#. Ensure each upgraded Ceph OSD Daemon has rejoined the cluster::
-
- ceph osd stat
-
-Ensure that you have completed the upgrade cycle for all of your
-Ceph OSD Daemons.
-
-
-Upgrading a Metadata Server
----------------------------
-
-To upgrade a Ceph Metadata Server, perform the following steps:
-
-#. Upgrade the Ceph Metadata Server package. You may use ``ceph-deploy`` to
- address all Ceph Metadata Server nodes at once, or use the package manager
- on each node. For example::
-
- ceph-deploy install --release {release-name} ceph-node1
- ceph-deploy install --release hammer mds1
-
- To upgrade packages manually, perform the following steps on each
- Debian/Ubuntu host::
-
- ssh {mon-host}
- sudo apt-get update && sudo apt-get install ceph-mds
-
- Or the following steps on CentOS/Red Hat hosts::
-
- ssh {mon-host}
- sudo yum update && sudo yum install ceph-mds
-
-
-#. Restart the metadata server. For Ubuntu, use::
-
- sudo systemctl restart ceph-mds@{hostname}.service
-
- For CentOS/Red Hat/Debian distributions, use::
-
- sudo /etc/init.d/ceph restart mds.{hostname}
-
- For clusters deployed with ``ceph-deploy``, the name is usually either
- the name you specified on creation or the hostname.
-
-#. Ensure the metadata server is up and running::
-
- ceph mds stat
-
-
-Upgrading a Client
-------------------
-
-Once you have upgraded the packages and restarted daemons on your Ceph
-cluster, we recommend upgrading ``ceph-common`` and client libraries
-(``librbd1`` and ``librados2``) on your client nodes too.
-
-#. Upgrade the package::
-
- ssh {client-host}
- apt-get update && sudo apt-get install ceph-common librados2 librbd1 python-rados python-rbd
-
-#. Ensure that you have the latest version::
-
- ceph --version
-
-If you do not have the latest version, you may need to uninstall, auto remove
-dependencies and reinstall.
-
-
-.. _using your distro's package manager: ../../install-storage-cluster/
-.. _Operating a Cluster: ../../../rados/operations/operating
-.. _Monitoring a Cluster: ../../../rados/operations/monitoring
-.. _release notes document of your release: ../../../releases
management features and dashboard integration are not available.
-:ref:`ceph-deploy <ceph-deploy-index>` is a tool for quickly deploying clusters.
+``ceph-deploy`` is a tool for quickly deploying clusters.
.. IMPORTANT::
:hidden:
index_manual
- ceph-deploy/index
Once you have the Ceph software (or added repositories), installing the software
-is easy. To install packages on each :term:`Ceph Node` in your cluster. You may
-use ``ceph-deploy`` to install Ceph for your storage cluster, or use package
+is easy. To install packages on each :term:`Ceph Node` in your cluster, you may
+use ``cephadm`` to install Ceph for your storage cluster, or use package
management tools. You should install Yum Priorities for RHEL/CentOS and other
distributions that use Yum if you intend to install the Ceph Object Gateway or
QEMU.
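+For example, on a CentOS/RHEL node the plugin would typically be installed as
+shown below (a minimal sketch; package names can vary by release)::
+
+    sudo yum install yum-plugin-priorities
+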
.. toctree::
:maxdepth: 1
- Install ceph-deploy <install-ceph-deploy>
+ Install cephadm <../cephadm/install>
Install Ceph Storage Cluster <install-storage-cluster>
Install Virtualization for Block <install-vm-cloud>
+++ /dev/null
-=====================
- Install Ceph Deploy
-=====================
-
-The ``ceph-deploy`` tool enables you to set up and tear down Ceph clusters
-for development, testing and proof-of-concept projects.
-
-
-APT
----
-
-To install ``ceph-deploy`` with ``apt``, execute the following::
-
- sudo apt-get update && sudo apt-get install ceph-deploy
-
-
-RPM
----
-
-To install ``ceph-deploy`` with ``yum``, execute the following::
-
- sudo yum install ceph-deploy
-
This guide describes installing Ceph packages manually. This procedure
is only for users who are not installing with a deployment tool such as
-``ceph-deploy``, ``chef``, ``juju``, etc.
-
-.. tip:: You can also use ``ceph-deploy`` to install Ceph packages, which may
- be more convenient since you can install ``ceph`` on multiple hosts with
- a single command.
+``cephadm``, ``chef``, ``juju``, etc.
Installing with APT
default, so it's useful to know about them when setting up your cluster for
production.
-Following the same configuration as `Installation (ceph-deploy)`_, we will set up a
-cluster with ``node1`` as the monitor node, and ``node2`` and ``node3`` for
-OSD nodes.
+We will set up a cluster with ``node1`` as the monitor node, and ``node2`` and
+``node3`` for OSD nodes.
ls /etc/ceph
- **Note:** Deployment tools may remove this directory when purging a
- cluster (e.g., ``ceph-deploy purgedata {node-name}``, ``ceph-deploy purge
- {node-name}``).
#. Create a Ceph configuration file. By default, Ceph uses
``ceph.conf``, where ``ceph`` reflects the cluster name. ::
To add (or remove) additional Ceph OSD Daemons, see `Add/Remove OSDs`_.
-.. _Installation (ceph-deploy): ../ceph-deploy
.. _Add/Remove Monitors: ../../rados/operations/add-or-rm-mons
.. _Add/Remove OSDs: ../../rados/operations/add-or-rm-osds
.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
default, so it's useful to know about them when setting up your cluster for
production.
-Following the same configuration as `Installation (ceph-deploy)`_, we will set up a
-cluster with ``node1`` as the monitor node, and ``node2`` and ``node3`` for
-OSD nodes.
+We will set up a cluster with ``node1`` as the monitor node, and ``node2`` and
+``node3`` for OSD nodes.
ls /etc/ceph
- **Note:** Deployment tools may remove this directory when purging a
- cluster (e.g., ``ceph-deploy purgedata {node-name}``, ``ceph-deploy purge
- {node-name}``).
-
#. Create a Ceph configuration file. By default, Ceph uses
``ceph.conf``, where ``ceph`` reflects the cluster name. ::
To add (or remove) additional Ceph OSD Daemons, see `Add/Remove OSDs`_.
-.. _Installation (ceph-deploy): ../ceph-deploy
.. _Add/Remove Monitors: ../../rados/operations/add-or-rm-mons
.. _Add/Remove OSDs: ../../rados/operations/add-or-rm-osds
.. _Network Configuration Reference: ../../rados/configuration/network-config-ref
man/8/ceph-create-keys
man/8/ceph-debugpack
man/8/ceph-dencoder
- man/8/ceph-deploy
man/8/ceph-volume
man/8/ceph-volume-systemd
man/8/ceph-fuse
see a health warning to that effect, and some of the other information
in the output of `ceph status` will be missing or stale until a mgr is started.
-Use your normal deployment tools, such as ceph-ansible or ceph-deploy, to
+Use your normal deployment tools, such as ceph-ansible or cephadm, to
set up ceph-mgr daemons on each of your mon nodes. It is not mandatory
to place mgr daemons on the same nodes as mons, but it is almost always
sensible.
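+With the ``cephadm`` orchestrator, for example, mgr daemons can be placed on
+the monitor hosts with something like the following sketch (the hostnames are
+placeholders)::
+
+    ceph orch apply mgr --placement="mon1 mon2 mon3"
+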
the location of the keyring. The ``python-ceph`` module doesn't have the default
location, so you need to specify the keyring path. The easiest way to specify
the keyring is to add it to the Ceph configuration file. The following Ceph
-configuration file example uses the ``client.admin`` keyring you generated with
-``ceph-deploy``.
+configuration file example uses the ``client.admin`` keyring.
.. code-block:: ini
:linenos:
There are two main scenarios for deploying a Ceph cluster, which impact
how you initially configure Cephx. Most first time Ceph users use
-``ceph-deploy`` to create a cluster (easiest). For clusters using
+``cephadm`` to create a cluster (easiest). For clusters using
other deployment tools (e.g., Chef, Juju, Puppet, etc.), you will need
to use the manual procedures or configure your deployment tool to
bootstrap your monitor(s).
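+In the ``cephadm`` case, the first monitor (and a cephx-enabled initial
+configuration) is bootstrapped with a single command on the first host; the IP
+address below is a placeholder::
+
+    cephadm bootstrap --mon-ip 192.168.0.1
+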
-ceph-deploy
------------
-
-When you deploy a cluster with ``ceph-deploy``, you do not have to bootstrap the
-monitor manually or create the ``client.admin`` user or keyring. The steps you
-execute in the `Storage Cluster Quick Start`_ will invoke ``ceph-deploy`` to do
-that for you.
-
-When you execute ``ceph-deploy new {initial-monitor(s)}``, Ceph will create a
-monitor keyring for you (only used to bootstrap monitors), and it will generate
-an initial Ceph configuration file for you, which contains the following
-authentication settings, indicating that Ceph enables authentication by
-default::
-
- auth_cluster_required = cephx
- auth_service_required = cephx
- auth_client_required = cephx
-
-When you execute ``ceph-deploy mon create-initial``, Ceph will bootstrap the
-initial monitor(s), retrieve a ``ceph.client.admin.keyring`` file containing the
-key for the ``client.admin`` user. Additionally, it will also retrieve keyrings
-that give ``ceph-deploy`` and ``ceph-volume`` utilities the ability to prepare and
-activate OSDs and metadata servers.
-
-When you execute ``ceph-deploy admin {node-name}`` (**note:** Ceph must be
-installed first), you are pushing a Ceph configuration file and the
-``ceph.client.admin.keyring`` to the ``/etc/ceph`` directory of the node. You
-will be able to execute Ceph administrative functions as ``root`` on the command
-line of that node.
-
-
Manual Deployment
-----------------
The most common way to provide these keys to the ``ceph`` administrative
commands and clients is to include a Ceph keyring under the ``/etc/ceph``
-directory. For Cuttlefish and later releases using ``ceph-deploy``, the filename
+directory. For Octopus and later releases using ``cephadm``, the filename
is usually ``ceph.client.admin.keyring`` (or ``$cluster.client.admin.keyring``).
If you include the keyring under the ``/etc/ceph`` directory, you don't need to
specify a ``keyring`` entry in your Ceph configuration file.
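+If the keyring lives outside ``/etc/ceph``, a ``keyring`` entry along these
+lines would be needed instead (the path is illustrative)::
+
+    [global]
+        keyring = /path/to/ceph.client.admin.keyring
+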
We recommend copying the Ceph Storage Cluster's keyring file to nodes where you
will run administrative commands, because it contains the ``client.admin`` key.
-You may use ``ceph-deploy admin`` to perform this task. See `Create an Admin
-Host`_ for details. To perform this step manually, execute the following::
+To perform this step manually, execute the following::
sudo scp {user}@{ceph-cluster-host}:/etc/ceph/ceph.client.admin.keyring /etc/ceph/ceph.client.admin.keyring
Daemon Keyrings
---------------
-Administrative users or deployment tools (e.g., ``ceph-deploy``) may generate
+Administrative users or deployment tools (e.g., ``cephadm``) may generate
daemon keyrings in the same way as generating user keyrings. By default, Ceph
-stores daemons keyrings inside their data directory. The default keyring
+stores daemon keyrings inside their data directory. The default keyring
locations, and the capabilities necessary for the daemon to function, are shown
:Default: ``60*60``
-.. _Storage Cluster Quick Start: ../../../start/quick-ceph-deploy/
.. _Monitor Bootstrapping: ../../../install/manual-deployment#monitor-bootstrapping
.. _Operating a Cluster: ../../operations/operating
.. _Manual Deployment: ../../../install/manual-deployment
.. _Ceph configuration: ../ceph-conf
-.. _Create an Admin Host: ../../deployment/ceph-deploy-admin
.. _Architecture - High Availability Authentication: ../../../architecture#high-availability-authentication
.. _User Management: ../../operations/user-management
the command line to retrieve the name of the node. Do not use ``host``
settings for anything other than initial monitors unless you are deploying
Ceph manually. You **MUST NOT** specify ``host`` under individual daemons
- when using deployment tools like ``chef`` or ``ceph-deploy``, as those tools
+ when using deployment tools like ``chef`` or ``cephadm``, as those tools
will enter the appropriate values for you in the cluster map.
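+For reference, a manually deployed initial monitor entry might look like the
+following sketch (the monitor name, hostname and address are placeholders)::
+
+    [mon.a]
+        host = node1
+        mon addr = 192.168.0.101:6789
+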
/var/lib/ceph/mon/$cluster-$id
-You or a deployment tool (e.g., ``ceph-deploy``) must create the corresponding
+You or a deployment tool (e.g., ``cephadm``) must create the corresponding
directory. With metavariables fully expressed and a cluster named "ceph", the
foregoing directory would evaluate to::
/var/lib/ceph/osd/$cluster-$id
-You or a deployment tool (e.g., ``ceph-deploy``) must create the corresponding
+You or a deployment tool (e.g., ``cephadm``) must create the corresponding
directory. With metavariables fully expressed and a cluster named "ceph", the
foregoing directory would evaluate to::
building a reliable :term:`Ceph Storage Cluster`. **All Ceph Storage Clusters
have at least one monitor**. A monitor configuration usually remains fairly
consistent, but you can add, remove or replace a monitor in a cluster. See
-`Adding/Removing a Monitor`_ and `Add/Remove a Monitor (ceph-deploy)`_ for
-details.
+`Adding/Removing a Monitor`_ for details.
.. index:: Ceph Monitor; Paxos
In most configuration and deployment cases, tools that deploy Ceph may help
bootstrap the Ceph Monitors by generating a monitor map for you (e.g.,
-``ceph-deploy``, etc). A Ceph Monitor requires a few explicit
+``cephadm``, etc). A Ceph Monitor requires a few explicit
settings:
- **Filesystem ID**: The ``fsid`` is the unique identifier for your
object store. Since you can run multiple clusters on the same
hardware, you must specify the unique ID of the object store when
bootstrapping a monitor. Deployment tools usually do this for you
- (e.g., ``ceph-deploy`` can call a tool like ``uuidgen``), but you
+ (e.g., ``cephadm`` can call a tool like ``uuidgen``), but you
may specify the ``fsid`` manually too.
- **Monitor ID**: A monitor ID is a unique ID assigned to each monitor within
by a deployment tool, or using the ``ceph`` commandline.
- **Keys**: The monitor must have secret keys. A deployment tool such as
- ``ceph-deploy`` usually does this for you, but you may
+ ``cephadm`` usually does this for you, but you may
perform this step manually too. See `Monitor Keyrings`_ for details.
For additional details on bootstrapping, see `Bootstrapping a Monitor`_.
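+As a rough sketch of the manual path, a monitor map that captures these
+settings can be generated with ``monmaptool``; the hostname, IP address and
+output path below are placeholders::
+
+    fsid=$(uuidgen)
+    monmaptool --create --add node1 192.168.0.1 --fsid $fsid /tmp/monmap
+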
.. _Monitor lookup through DNS: ../mon-lookup-dns
.. _ACID: https://en.wikipedia.org/wiki/ACID
.. _Adding/Removing a Monitor: ../../operations/add-or-rm-mons
-.. _Add/Remove a Monitor (ceph-deploy): ../../deployment/ceph-deploy-mon
.. _Monitoring a Cluster: ../../operations/monitoring
.. _Monitoring OSDs and PGs: ../../operations/monitoring-osd-pg
.. _Bootstrapping a Monitor: ../../../dev/mon-bootstrap
on behalf of Ceph Clients, which means replication and other factors impose
additional loads on Ceph Storage Cluster networks.
-Our Quick Start configurations provide a trivial `Ceph configuration file`_ that
+Our Quick Start configurations provide a trivial Ceph configuration file that
sets monitor IP addresses and daemon host names only. Unless you specify a
cluster network, Ceph assumes a single "public" network. Ceph functions just
fine with a public network only, but you may see significant performance
To configure Ceph networks, you must add a network configuration to the
``[global]`` section of the configuration file. Our 5-minute Quick Start
-provides a trivial `Ceph configuration file`_ that assumes one public network
+provides a trivial Ceph configuration file that assumes one public network
with client and server on the same network and subnet. Ceph functions just fine
with a public network only. However, Ceph allows you to establish much more
specific criteria, including multiple IP network and subnet masks for your
.. _Scalability and High Availability: ../../../architecture#scalability-and-high-availability
.. _Hardware Recommendations - Networks: ../../../start/hardware-recommendations#networks
-.. _Ceph configuration file: ../../../start/quick-ceph-deploy/#create-a-cluster
.. _hardware recommendations: ../../../start/hardware-recommendations
.. _Monitor / OSD Interaction: ../mon-osd-interaction
.. _Message Signatures: ../auth-config-ref#signatures
+++ /dev/null
-=============
- Admin Tasks
-=============
-
-Once you have set up a cluster with ``ceph-deploy``, you may
-provide the client admin key and the Ceph configuration file
-to another host so that a user on the host may use the ``ceph``
-command line as an administrative user.
-
-
-Create an Admin Host
-====================
-
-To enable a host to execute ceph commands with administrator
-privileges, use the ``admin`` command. ::
-
- ceph-deploy admin {host-name [host-name]...}
-
-
-Deploy Config File
-==================
-
-To send an updated copy of the Ceph configuration file to hosts
-in your cluster, use the ``config push`` command. ::
-
- ceph-deploy config push {host-name [host-name]...}
-
-.. tip:: With a base name and increment host-naming convention,
- it is easy to deploy configuration files via simple scripts
- (e.g., ``ceph-deploy config hostname{1,2,3,4,5}``).
-
-Retrieve Config File
-====================
-
-To retrieve a copy of the Ceph configuration file from a host
-in your cluster, use the ``config pull`` command. ::
-
- ceph-deploy config pull {host-name [host-name]...}
+++ /dev/null
-====================
- Package Management
-====================
-
-Install
-=======
-
-To install Ceph packages on your cluster hosts, open a command line on your
-client machine and type the following::
-
- ceph-deploy install {hostname [hostname] ...}
-
-Without additional arguments, ``ceph-deploy`` will install the most recent
-major release of Ceph to the cluster host(s). To specify a particular package,
-you may select from the following:
-
-- ``--release <code-name>``
-- ``--testing``
-- ``--dev <branch-or-tag>``
-
-For example::
-
- ceph-deploy install --release cuttlefish hostname1
- ceph-deploy install --testing hostname2
- ceph-deploy install --dev wip-some-branch hostname{1,2,3,4,5}
-
-For additional usage, execute::
-
- ceph-deploy install -h
-
-
-Uninstall
-=========
-
-To uninstall Ceph packages from your cluster hosts, open a terminal on
-your admin host and type the following::
-
- ceph-deploy uninstall {hostname [hostname] ...}
-
-On a Debian or Ubuntu system, you may also::
-
- ceph-deploy purge {hostname [hostname] ...}
-
-The tool will uninstall ``ceph`` packages from the specified hosts. Purge
-additionally removes configuration files.
-
+++ /dev/null
-=================
- Keys Management
-=================
-
-
-Gather Keys
-===========
-
-Before you can provision a host to run OSDs or metadata servers, you must gather
-monitor keys and the OSD and MDS bootstrap keyrings. To gather keys, enter the
-following::
-
- ceph-deploy gatherkeys {monitor-host}
-
-
-.. note:: To retrieve the keys, you specify a host that has a
- Ceph monitor.
-
-.. note:: If you have specified multiple monitors in the setup of the cluster,
- make sure, that all monitors are up and running. If the monitors haven't
- formed quorum, ``ceph-create-keys`` will not finish and the keys are not
- generated.
-
-Forget Keys
-===========
-
-When you are no longer using ``ceph-deploy`` (or if you are recreating a
-cluster), you should delete the keys in the local directory of your admin host.
-To delete keys, enter the following::
-
- ceph-deploy forgetkeys
-
+++ /dev/null
-============================
- Add/Remove Metadata Server
-============================
-
-With ``ceph-deploy``, adding and removing metadata servers is a simple task. You
-just add or remove one or more metadata servers on the command line with one
-command.
-
-See `MDS Config Reference`_ for details on configuring metadata servers.
-
-
-Add a Metadata Server
-=====================
-
-Once you deploy monitors and OSDs you may deploy the metadata server(s). ::
-
- ceph-deploy mds create {host-name}[:{daemon-name}] [{host-name}[:{daemon-name}] ...]
-
-You may specify a daemon instance a name (optional) if you would like to run
-multiple daemons on a single server.
-
-
-Remove a Metadata Server
-========================
-
-Coming soon...
-
-.. If you have a metadata server in your cluster that you'd like to remove, you may use
-.. the ``destroy`` option. ::
-
-.. ceph-deploy mds destroy {host-name}[:{daemon-name}] [{host-name}[:{daemon-name}] ...]
-
-.. You may specify a daemon instance a name (optional) if you would like to destroy
-.. a particular daemon that runs on a single server with multiple MDS daemons.
-
-.. .. note:: Ensure that if you remove a metadata server, the remaining metadata
- servers will be able to service requests from CephFS clients. If that is not
- possible, consider adding a metadata server before destroying the metadata
- server you would like to take offline.
-
-
-.. _MDS Config Reference: ../../../cephfs/mds-config-ref
+++ /dev/null
-=====================
- Add/Remove Monitors
-=====================
-
-With ``ceph-deploy``, adding and removing monitors is a simple task. You just
-add or remove one or more monitors on the command line with one command. Before
-``ceph-deploy``, the process of `adding and removing monitors`_ involved
-numerous manual steps. Using ``ceph-deploy`` imposes a restriction: **you may
-only install one monitor per host.**
-
-.. note:: We do not recommend commingling monitors and OSDs on
- the same host.
-
-For high availability, you should run a production Ceph cluster with **AT
-LEAST** three monitors. Ceph uses the Paxos algorithm, which requires a
-consensus among the majority of monitors in a quorum. With Paxos, the monitors
-cannot determine a majority for establishing a quorum with only two monitors. A
-majority of monitors must be counted as such: 1:1, 2:3, 3:4, 3:5, 4:6, etc.
-
-See `Monitor Config Reference`_ for details on configuring monitors.
-
-
-Add a Monitor
-=============
-
-Once you create a cluster and install Ceph packages to the monitor host(s), you
-may deploy the monitor(s) to the monitor host(s). When using ``ceph-deploy``,
-the tool enforces a single monitor per host. ::
-
- ceph-deploy mon create {host-name [host-name]...}
-
-
-.. note:: Ensure that you add monitors such that they may arrive at a consensus
- among a majority of monitors, otherwise other steps (like ``ceph-deploy gatherkeys``)
- will fail.
-
-.. note:: When adding a monitor on a host that was not in hosts initially defined
- with the ``ceph-deploy new`` command, a ``public network`` statement needs
- to be added to the ceph.conf file.
-
-Remove a Monitor
-================
-
-If you have a monitor in your cluster that you'd like to remove, you may use
-the ``destroy`` option. ::
-
- ceph-deploy mon destroy {host-name [host-name]...}
-
-
-.. note:: Ensure that if you remove a monitor, the remaining monitors will be
- able to establish a consensus. If that is not possible, consider adding a
- monitor before removing the monitor you would like to take offline.
-
-
-.. _adding and removing monitors: ../../operations/add-or-rm-mons
-.. _Monitor Config Reference: ../../configuration/mon-config-ref
+++ /dev/null
-==================
- Create a Cluster
-==================
-
-The first step in using Ceph with ``ceph-deploy`` is to create a new Ceph
-cluster. A new Ceph cluster has:
-
-- A Ceph configuration file, and
-- A monitor keyring.
-
-The Ceph configuration file consists of at least:
-
-- Its own file system ID (``fsid``)
-- The initial monitor(s) hostname(s), and
-- The initial monitor(s) and IP address(es).
-
-For additional details, see the `Monitor Configuration Reference`_.
-
-The ``ceph-deploy`` tool also creates a monitor keyring and populates it with a
-``[mon.]`` key. For additional details, see the `Cephx Guide`_.
-
-
-Usage
------
-
-To create a cluster with ``ceph-deploy``, use the ``new`` command and specify
-the host(s) that will be initial members of the monitor quorum. ::
-
- ceph-deploy new {host [host], ...}
-
-For example::
-
- ceph-deploy new mon1.foo.com
- ceph-deploy new mon{1,2,3}
-
-The ``ceph-deploy`` utility will use DNS to resolve hostnames to IP
-addresses. The monitors will be named using the first component of
-the name (e.g., ``mon1`` above). It will add the specified host names
-to the Ceph configuration file. For additional details, execute::
-
- ceph-deploy new -h
-
-
-
-.. _Monitor Configuration Reference: ../../configuration/mon-config-ref
-.. _Cephx Guide: ../../../dev/mon-bootstrap#secret-keys
+++ /dev/null
-=================
- Add/Remove OSDs
-=================
-
-Adding and removing Ceph OSD Daemons to your cluster may involve a few more
-steps when compared to adding and removing other Ceph daemons. Ceph OSD Daemons
-write data to the disk and to journals. So you need to provide a disk for the
-OSD and a path to the journal partition (i.e., this is the most common
-configuration, but you may configure your system to your own needs).
-
-In Ceph v0.60 and later releases, Ceph supports ``dm-crypt`` on disk encryption.
-You may specify the ``--dmcrypt`` argument when preparing an OSD to tell
-``ceph-deploy`` that you want to use encryption. You may also specify the
-``--dmcrypt-key-dir`` argument to specify the location of ``dm-crypt``
-encryption keys.
-
-You should test various drive configurations to gauge their throughput before
-building out a large cluster. See `Data Storage`_ for additional details.
-
-
-List Disks
-==========
-
-To list the disks on a node, execute the following command::
-
- ceph-deploy disk list {node-name [node-name]...}
-
-
-Zap Disks
-=========
-
-To zap a disk (delete its partition table) in preparation for use with Ceph,
-execute the following::
-
- ceph-deploy disk zap {osd-server-name} {disk-name}
- ceph-deploy disk zap osdserver1 /dev/sdb /dev/sdc
-
-.. important:: This will delete all data.
-
-
-Create OSDs
-===========
-
-Once you create a cluster, install Ceph packages, and gather keys, you
-may create the OSDs and deploy them to the OSD node(s). If you need to
-identify a disk or zap it prior to preparing it for use as an OSD,
-see `List Disks`_ and `Zap Disks`_. ::
-
- ceph-deploy osd create --data {data-disk} {node-name}
-
-For example::
-
- ceph-deploy osd create --data /dev/ssd osd-server1
-
-For bluestore (the default) the example assumes a disk dedicated to one Ceph
-OSD Daemon. Filestore is also supported, in which case a ``--journal`` flag in
-addition to ``--filestore`` needs to be used to define the Journal device on
-the remote host.
-
-.. note:: When running multiple Ceph OSD daemons on a single node, and
- sharing a partitioned journal with each OSD daemon, you should consider
- the entire node the minimum failure domain for CRUSH purposes, because
- if the SSD drive fails, all of the Ceph OSD daemons that journal to it
- will fail too.
-
-
-List OSDs
-=========
-
-To list the OSDs deployed on a node(s), execute the following command::
-
- ceph-deploy osd list {node-name}
-
-
-Destroy OSDs
-============
-
-.. note:: Coming soon. See `Remove OSDs`_ for manual procedures.
-
-.. To destroy an OSD, execute the following command::
-
-.. ceph-deploy osd destroy {node-name}:{path-to-disk}[:{path/to/journal}]
-
-.. Destroying an OSD will take it ``down`` and ``out`` of the cluster.
-
-.. _Data Storage: ../../../start/hardware-recommendations#data-storage
-.. _Remove OSDs: ../../operations/add-or-rm-osds#removing-osds-manual
+++ /dev/null
-==============
- Purge a Host
-==============
-
-When you remove Ceph daemons and uninstall Ceph, there may still be extraneous
-data from the cluster on your server. The ``purge`` and ``purgedata`` commands
-provide a convenient means of cleaning up a host.
-
-
-Purge Data
-==========
-
-To remove all data from ``/var/lib/ceph`` (but leave Ceph packages intact),
-execute the ``purgedata`` command.
-
- ceph-deploy purgedata {hostname} [{hostname} ...]
-
-
-Purge
-=====
-
-To remove all data from ``/var/lib/ceph`` and uninstall Ceph packages, execute
-the ``purge`` command.
-
- ceph-deploy purge {hostname} [{hostname} ...]
\ No newline at end of file
+++ /dev/null
-=================
- Ceph Deployment
-=================
-
-The ``ceph-deploy`` tool is a way to deploy Ceph relying only upon SSH access to
-the servers, ``sudo``, and some Python. It runs on your workstation, and does
-not require servers, databases, or any other tools. If you set up and
-tear down Ceph clusters a lot, and want minimal extra bureaucracy,
-``ceph-deploy`` is an ideal tool. The ``ceph-deploy`` tool is not a generic
-deployment system. It was designed exclusively for Ceph users who want to get
-Ceph up and running quickly with sensible initial configuration settings without
-the overhead of installing Chef, Puppet or Juju. Users who want fine-control
-over security settings, partitions or directory locations should use a tool
-such as Juju, Puppet, `Chef`_ or Crowbar.
-
-
-With ``ceph-deploy``, you can develop scripts to install Ceph packages on remote
-hosts, create a cluster, add monitors, gather (or forget) keys, add OSDs and
-metadata servers, configure admin hosts, and tear down the clusters.
-
-.. raw:: html
-
- <table cellpadding="10"><tbody valign="top"><tr><td>
-
-.. toctree::
-
- Preflight Checklist <preflight-checklist>
- Install Ceph <ceph-deploy-install>
-
-.. raw:: html
-
- </td><td>
-
-.. toctree::
-
- Create a Cluster <ceph-deploy-new>
- Add/Remove Monitor(s) <ceph-deploy-mon>
- Key Management <ceph-deploy-keys>
- Add/Remove OSD(s) <ceph-deploy-osd>
- Add/Remove MDS(s) <ceph-deploy-mds>
-
-
-.. raw:: html
-
- </td><td>
-
-.. toctree::
-
- Purge Hosts <ceph-deploy-purge>
- Admin Tasks <ceph-deploy-admin>
-
-
-.. raw:: html
-
- </td></tr></tbody></table>
-
-
-.. _Chef: http://tracker.ceph.com/projects/ceph/wiki/Deploying_Ceph_with_Chef
+++ /dev/null
-=====================
- Preflight Checklist
-=====================
-
-.. versionadded:: 0.60
-
-This **Preflight Checklist** will help you prepare an admin node for use with
-``ceph-deploy``, and server nodes for use with passwordless ``ssh`` and
-``sudo``.
-
-Before you can deploy Ceph using ``ceph-deploy``, you need to ensure that you
-have a few things set up first on your admin node and on nodes running Ceph
-daemons.
-
-
-Install an Operating System
-===========================
-
-Install a recent release of Debian or Ubuntu (e.g., 16.04 LTS) on
-your nodes. For additional details on operating systems or to use other
-operating systems other than Debian or Ubuntu, see `OS Recommendations`_.
-
-
-Install an SSH Server
-=====================
-
-The ``ceph-deploy`` utility requires ``ssh``, so your server node(s) require an
-SSH server. ::
-
- sudo apt-get install openssh-server
-
-
-Create a User
-=============
-
-Create a user on nodes running Ceph daemons.
-
-.. tip:: We recommend a username that brute force attackers won't
- guess easily (e.g., something other than ``root``, ``ceph``, etc).
-
-::
-
- ssh user@ceph-server
- sudo useradd -d /home/ceph -m ceph
- sudo passwd ceph
-
-
-``ceph-deploy`` installs packages onto your nodes. This means that
-the user you create requires passwordless ``sudo`` privileges.
-
-.. note:: We **DO NOT** recommend enabling the ``root`` password
- for security reasons.
-
-To provide full privileges to the user, add the following to
-``/etc/sudoers.d/ceph``. ::
-
- echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
- sudo chmod 0440 /etc/sudoers.d/ceph
-
-
-Configure SSH
-=============
-
-Configure your admin machine with password-less SSH access to each node
-running Ceph daemons (leave the passphrase empty). ::
-
- ssh-keygen
- Generating public/private key pair.
- Enter file in which to save the key (/ceph-client/.ssh/id_rsa):
- Enter passphrase (empty for no passphrase):
- Enter same passphrase again:
- Your identification has been saved in /ceph-client/.ssh/id_rsa.
- Your public key has been saved in /ceph-client/.ssh/id_rsa.pub.
-
-Copy the key to each node running Ceph daemons::
-
- ssh-copy-id ceph@ceph-server
-
-Modify your ~/.ssh/config file of your admin node so that it defaults
-to logging in as the user you created when no username is specified. ::
-
- Host ceph-server
- Hostname ceph-server.fqdn-or-ip-address.com
- User ceph
-
-
-Install ceph-deploy
-===================
-
-To install ``ceph-deploy``, execute the following::
-
- wget -q -O- 'https://download.ceph.com/keys/release.asc' | sudo apt-key add -
- echo deb https://download.ceph.com/debian-nautilus/ $(lsb_release -sc) main | sudo tee /etc/apt/sources.list.d/ceph.list
- sudo apt-get update
- sudo apt-get install ceph-deploy
-
-
-Ensure Connectivity
-===================
-
-Ensure that your Admin node has connectivity to the network and to your Server
-node (e.g., ensure ``iptables``, ``ufw`` or other tools that may prevent
-connections, traffic forwarding, etc. to allow what you need).
-
-
-Once you have completed this pre-flight checklist, you are ready to begin using
-``ceph-deploy``.
-
-.. _OS Recommendations: ../../../start/os-recommendations
Ceph Storage Clusters have a few required settings, but most configuration
settings have default values. A typical deployment uses a deployment tool
to define a cluster and bootstrap a monitor. See `Deployment`_ for details
-on ``ceph-deploy.``
+on ``cephadm``.
.. toctree::
:maxdepth: 2
Configuration <configuration/index>
- Deployment <deployment/index>
+ Deployment <../cephadm/index>
.. raw:: html
.. _Ceph Block Devices: ../rbd/
.. _Ceph File System: ../cephfs/
.. _Ceph Object Storage: ../radosgw/
-.. _Deployment: ../rados/deployment/
+.. _Deployment: ../cephadm/
../../man/8/ceph-volume.rst
../../man/8/ceph-volume-systemd.rst
../../man/8/ceph.rst
- ../../man/8/ceph-deploy.rst
../../man/8/ceph-authtool.rst
../../man/8/ceph-clsinfo.rst
../../man/8/ceph-conf.rst
``profile bootstrap-osd`` (Monitor only)
:Description: Gives a user permissions to bootstrap an OSD. Conferred on
- deployment tools such as ``ceph-volume``, ``ceph-deploy``, etc.
+ deployment tools such as ``ceph-volume``, ``cephadm``, etc.
so that they have permissions to add keys, etc. when
bootstrapping an OSD.
``profile bootstrap-mds`` (Monitor only)
:Description: Gives a user permissions to bootstrap a metadata server.
- Conferred on deployment tools such as ``ceph-deploy``, etc.
+ Conferred on deployment tools such as ``cephadm``, etc.
so they have permissions to add keys, etc. when bootstrapping
a metadata server.
``profile bootstrap-rbd`` (Monitor only)
:Description: Gives a user permissions to bootstrap an RBD user.
- Conferred on deployment tools such as ``ceph-deploy``, etc.
+ Conferred on deployment tools such as ``cephadm``, etc.
so they have permissions to add keys, etc. when bootstrapping
an RBD user.
``profile bootstrap-rbd-mirror`` (Monitor only)
:Description: Gives a user permissions to bootstrap an ``rbd-mirror`` daemon
- user. Conferred on deployment tools such as ``ceph-deploy``, etc.
+ user. Conferred on deployment tools such as ``cephadm``, etc.
so they have permissions to add keys, etc. when bootstrapping
an ``rbd-mirror`` daemon.
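+As a hedged sketch, a bootstrap key carrying one of these profiles could be
+created as follows (the entity name follows the usual ``client.bootstrap-*``
+convention)::
+
+    ceph auth get-or-create client.bootstrap-osd mon 'profile bootstrap-osd'
+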
can mount kernel clients within virtual machines (VMs) on a single node.
If you are creating OSDs using a single disk, you must create directories
-for the data manually first. For example::
-
- ceph-deploy osd create --data {disk} {host}
+for the data manually first.
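+A minimal sketch of creating such a directory by hand, assuming the default
+data path and an OSD id of ``0``::
+
+    sudo mkdir -p /var/lib/ceph/osd/ceph-0
+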
Fewer OSDs than Replicas
.. toctree::
:maxdepth: 1
- Manual Install w/Civetweb <../../install/ceph-deploy/install-ceph-gateway>
HTTP Frontends <frontends>
Pool Placement and Storage Classes <placement>
Multisite Configuration <multisite>
Block Device Quick Start
==========================
-To use this guide, you must have executed the procedures in the
-:ref:`storage-cluster-quick-start` guide first. Ensure your
-:term:`Ceph Storage Cluster` is in an ``active + clean`` state
+Ensure your :term:`Ceph Storage Cluster` is in an ``active + clean`` state
before working with the :term:`Ceph Block Device`.
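+One way to verify this before proceeding is to check the cluster status::
+
+    ceph -s
+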
.. note:: The Ceph Block Device is also known as :term:`RBD` or :term:`RADOS`