From d79f2a81541cfa9e905aef00ad517278c2692c9c Mon Sep 17 00:00:00 2001 From: Nitzan Mordechai Date: Sun, 19 Feb 2023 11:33:51 +0000 Subject: [PATCH] docs: warning and remove few docs section for Filestore Update docs after filestore removal. Signed-off-by: Nitzan Mordechai --- doc/architecture.rst | 6 +- doc/ceph-volume/lvm/activate.rst | 15 +-- doc/ceph-volume/lvm/batch.rst | 4 +- doc/ceph-volume/lvm/create.rst | 1 - doc/ceph-volume/lvm/encryption.rst | 4 +- doc/ceph-volume/lvm/prepare.rst | 28 +----- doc/ceph-volume/simple/activate.rst | 5 +- doc/cephadm/adoption.rst | 3 +- doc/dev/dev_cluster_deployment.rst | 4 - doc/dev/object-store.rst | 5 - doc/dev/osd_internals/osd_throttles.rst | 93 ------------------- doc/dev/osd_internals/osd_throttles.txt | 21 ----- doc/glossary.rst | 12 +-- doc/install/manual-deployment.rst | 39 -------- doc/man/8/ceph-volume.rst | 28 +----- doc/man/8/rbd.rst | 3 +- doc/rados/configuration/common.rst | 14 --- .../configuration/filestore-config-ref.rst | 1 + doc/rados/configuration/journal-ref.rst | 2 +- doc/rados/configuration/mclock-config-ref.rst | 4 - doc/rados/configuration/osd-config-ref.rst | 2 +- doc/rados/configuration/storage-devices.rst | 2 + doc/rados/operations/bluestore-migration.rst | 2 + 23 files changed, 33 insertions(+), 265 deletions(-) delete mode 100644 doc/dev/osd_internals/osd_throttles.rst delete mode 100644 doc/dev/osd_internals/osd_throttles.txt diff --git a/doc/architecture.rst b/doc/architecture.rst index 3a59fb436b8..f2b99a56647 100644 --- a/doc/architecture.rst +++ b/doc/architecture.rst @@ -65,10 +65,8 @@ comes through a :term:`Ceph Block Device`, :term:`Ceph Object Storage`, the :term:`Ceph File System` or a custom implementation you create using ``librados``-- which is stored as RADOS objects. Each object is stored on an :term:`Object Storage Device`. Ceph OSD Daemons handle read, write, and -replication operations on storage drives. With the older Filestore back end, -each RADOS object was stored as a separate file on a conventional filesystem -(usually XFS). With the new and default BlueStore back end, objects are -stored in a monolithic database-like fashion. +replication operations on storage drives. With the default BlueStore back end, +objects are stored in a monolithic database-like fashion. .. ditaa:: diff --git a/doc/ceph-volume/lvm/activate.rst b/doc/ceph-volume/lvm/activate.rst index 787b599d2fe..d5129def11d 100644 --- a/doc/ceph-volume/lvm/activate.rst +++ b/doc/ceph-volume/lvm/activate.rst @@ -81,15 +81,11 @@ The systemd unit will look for the matching OSD device, and by looking at its #. Mount the device in the corresponding location (by convention this is ``/var/lib/ceph/osd/-/``) -#. Ensure that all required devices are ready for that OSD. In the case of -a journal (when ``--filestore`` is selected) the device will be queried (with -``blkid`` for partitions, and lvm for logical volumes) to ensure that the -correct device is being linked. The symbolic link will *always* be re-done to -ensure that the correct device is linked. +#. Ensure that all required devices are ready for that OSD. #. Start the ``ceph-osd@0`` systemd unit -.. note:: The system infers the objectstore type (filestore or bluestore) by +.. note:: The system infers the objectstore type by inspecting the LVM tags applied to the OSD devices Existing OSDs @@ -112,10 +108,3 @@ To recap the ``activate`` process for :term:`bluestore`: pointing it to the OSD ``block`` device. #. The systemd unit will ensure all devices are ready and linked #. 
The matching ``ceph-osd`` systemd unit will get started
-
-And for :term:`filestore`:
-
-#. Require both :term:`OSD id` and :term:`OSD uuid`
-#. Enable the system unit with matching id and uuid
-#. The systemd unit will ensure all devices are ready and mounted (if needed)
-#. The matching ``ceph-osd`` systemd unit will get started
diff --git a/doc/ceph-volume/lvm/batch.rst b/doc/ceph-volume/lvm/batch.rst
index a636e31ec22..2114518bf56 100644
--- a/doc/ceph-volume/lvm/batch.rst
+++ b/doc/ceph-volume/lvm/batch.rst
@@ -12,8 +12,8 @@ same code path. All ``batch`` does is to calculate the appropriate sizes of all
 volumes and skip over already created volumes.
 
 All the features that ``ceph-volume lvm create`` supports, like ``dmcrypt``,
-avoiding ``systemd`` units from starting, defining bluestore or filestore,
-are supported.
+avoiding ``systemd`` units from starting, and defining bluestore,
+are supported.
 
 .. _ceph-volume-lvm-batch_auto:
 
diff --git a/doc/ceph-volume/lvm/create.rst b/doc/ceph-volume/lvm/create.rst
index c90d1f6fa52..17fe9fa5ab2 100644
--- a/doc/ceph-volume/lvm/create.rst
+++ b/doc/ceph-volume/lvm/create.rst
@@ -17,7 +17,6 @@ immediately after completion.
 
 The backing objectstore can be specified with:
 
-* :ref:`--filestore `
 * :ref:`--bluestore `
 
 All command line flags and options are the same as ``ceph-volume lvm prepare``.
diff --git a/doc/ceph-volume/lvm/encryption.rst b/doc/ceph-volume/lvm/encryption.rst
index 66f4ee182b1..4564a7ffed8 100644
--- a/doc/ceph-volume/lvm/encryption.rst
+++ b/doc/ceph-volume/lvm/encryption.rst
@@ -62,8 +62,8 @@ compatibility and prevent ceph-disk from breaking, ceph-volume uses the same
 naming convention *although it does not make sense for the new encryption
 workflow*.
 
-After the common steps of setting up the OSD during the "prepare stage" (either
-with :term:`filestore` or :term:`bluestore`), the logical volume is left ready
+After the common steps of setting up the OSD during the "prepare stage"
+(with :term:`bluestore`), the logical volume is left ready
 to be activated, regardless of the state of the device (encrypted or decrypted).
 
diff --git a/doc/ceph-volume/lvm/prepare.rst b/doc/ceph-volume/lvm/prepare.rst
index ae6aac414cd..2faf12a4e1f 100644
--- a/doc/ceph-volume/lvm/prepare.rst
+++ b/doc/ceph-volume/lvm/prepare.rst
@@ -19,15 +19,13 @@ play in the Ceph cluster (for example: BlueStore data or BlueStore WAL+DB).
 :term:`BlueStore` is the default backend. Ceph permits changing
 the backend, which can be done by using the following flags and arguments:
 
-* :ref:`--filestore `
 * :ref:`--bluestore `
 
 .. _ceph-volume-lvm-prepare_bluestore:
 
 ``bluestore``
 -------------
-:term:`Bluestore` is the default backend for new OSDs. It
-offers more flexibility for devices than :term:`filestore` does. Bluestore
+:term:`Bluestore` is the default backend for new OSDs. Bluestore
 supports the following configurations:
 
 * a block device, a block.wal device, and a block.db device
@@ -103,8 +101,10 @@ a volume group and a logical volume using the following conventions:
 
 ``filestore``
 -------------
+.. warning:: Filestore has been deprecated in the Reef release and is no longer supported.
+
 ``Filestore`` is the OSD backend that prepares logical volumes for a
-:term:`filestore`-backed object-store OSD.
+``filestore``-backed object-store OSD.
``Filestore`` uses a logical volume to store OSD data and it uses
@@ -270,8 +270,7 @@ can be started later (for detailed metadata description see
 Crush device class
 ------------------
 
-To set the crush device class for the OSD, use the ``--crush-device-class`` flag. This will
-work for both bluestore and filestore OSDs::
+To set the crush device class for the OSD, use the ``--crush-device-class`` flag. For example::
 
     ceph-volume lvm prepare --bluestore --data vg/lv --crush-device-class foo
 
@@ -306,11 +305,6 @@ regardless of the type of volume (journal or data) or OSD objectstore:
 * ``osd_id``
 * ``crush_device_class``
 
-For :term:`filestore` these tags will be added:
-
-* ``journal_device``
-* ``journal_uuid``
-
 For :term:`bluestore` these tags will be added:
 
 * ``block_device``
@@ -336,15 +330,3 @@ To recap the ``prepare`` process for :term:`bluestore`:
 #. monmap is fetched for activation
 #. Data directory is populated by ``ceph-osd``
 #. Logical Volumes are assigned all the Ceph metadata using lvm tags
-
-
-And the ``prepare`` process for :term:`filestore`:
-
-#. Accepts raw physical devices, partitions on physical devices or logical volumes as arguments.
-#. Generate a UUID for the OSD
-#. Ask the monitor get an OSD ID reusing the generated UUID
-#. OSD data directory is created and data volume mounted
-#. Journal is symlinked from data volume to journal location
-#. monmap is fetched for activation
-#. devices is mounted and data directory is populated by ``ceph-osd``
-#. data and journal volumes are assigned all the Ceph metadata using lvm tags
diff --git a/doc/ceph-volume/simple/activate.rst b/doc/ceph-volume/simple/activate.rst
index 2b2795d0be7..8c7737162e8 100644
--- a/doc/ceph-volume/simple/activate.rst
+++ b/doc/ceph-volume/simple/activate.rst
@@ -73,8 +73,7 @@ identify the OSD and its devices, and it will proceed to:
 # mount the device in the corresponding location (by convention this is
   ``/var/lib/ceph/osd/-/``)
 
-# ensure that all required devices are ready for that OSD and properly linked,
-regardless of objectstore used (filestore or bluestore). The symbolic link will
-**always** be re-done to ensure that the correct device is linked.
+# ensure that all required devices are ready for that OSD and properly linked.
+The symbolic link will **always** be re-done to ensure that the correct device is linked.
 
 # start the ``ceph-osd@0`` systemd unit
diff --git a/doc/cephadm/adoption.rst b/doc/cephadm/adoption.rst
index 2b38d42d279..86254a16cd4 100644
--- a/doc/cephadm/adoption.rst
+++ b/doc/cephadm/adoption.rst
@@ -14,8 +14,7 @@ clusters can be converted to a state in which they can be managed by
 Limitations
 -----------
 
-* Cephadm works only with BlueStore OSDs. FileStore OSDs that are in your
-  cluster cannot be managed with ``cephadm``.
+* Cephadm works only with BlueStore OSDs.
 
 Preparation
 -----------
diff --git a/doc/dev/dev_cluster_deployment.rst b/doc/dev/dev_cluster_deployment.rst
index 526d7b7eb19..f62ea34c331 100644
--- a/doc/dev/dev_cluster_deployment.rst
+++ b/doc/dev/dev_cluster_deployment.rst
@@ -37,10 +37,6 @@ Options
 
    Create an erasure pool.
 
-.. option:: -f, --filestore
-
-   Use filestore as the osd objectstore backend.
-
 .. option:: --hitset
 
    Enable hitset tracking.
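With the ``-f, --filestore`` option removed, a development cluster brought up with
``vstart.sh`` simply uses the default BlueStore backend. As a rough sketch (assuming a
compiled tree and that the command is run from the ``build`` directory), a minimal
invocation needs no objectstore flag at all::

    # BlueStore is the default objectstore; no -f/--filestore flag is needed
    MON=1 OSD=3 MDS=0 ../src/vstart.sh -d -n -x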
diff --git a/doc/dev/object-store.rst b/doc/dev/object-store.rst index 2d9a7d8fafb..73ea148bbe9 100644 --- a/doc/dev/object-store.rst +++ b/doc/dev/object-store.rst @@ -52,14 +52,9 @@ "PrimaryLogPG" -> "ObjectStore" "PrimaryLogPG" -> "OSDMap" - "ObjectStore" -> "FileStore" "ObjectStore" -> "BlueStore" "BlueStore" -> "rocksdb" - - "FileStore" -> "xfs" - "FileStore" -> "btrfs" - "FileStore" -> "ext4" } diff --git a/doc/dev/osd_internals/osd_throttles.rst b/doc/dev/osd_internals/osd_throttles.rst deleted file mode 100644 index 1d6fb8f73f9..00000000000 --- a/doc/dev/osd_internals/osd_throttles.rst +++ /dev/null @@ -1,93 +0,0 @@ -============= -OSD Throttles -============= - -There are three significant throttles in the FileStore OSD back end: -wbthrottle, op_queue_throttle, and a throttle based on journal usage. - -WBThrottle ----------- -The WBThrottle is defined in src/os/filestore/WBThrottle.[h,cc] and -included in FileStore as FileStore::wbthrottle. The intention is to -bound the amount of outstanding IO we need to do to flush the journal. -At the same time, we don't want to necessarily do it inline in case we -might be able to combine several IOs on the same object close together -in time. Thus, in FileStore::_write, we queue the fd for asynchronous -flushing and block in FileStore::_do_op if we have exceeded any hard -limits until the background flusher catches up. - -The relevant config options are filestore_wbthrottle*. There are -different defaults for XFS and Btrfs. Each set has hard and soft -limits on bytes (total dirty bytes), ios (total dirty ios), and -inodes (total dirty fds). The WBThrottle will begin flushing -when any of these hits the soft limit and will block in throttle() -while any has exceeded the hard limit. - -Tighter soft limits will cause writeback to happen more quickly, -but may cause the OSD to miss opportunities for write coalescing. -Tighter hard limits may cause a reduction in latency variance by -reducing time spent flushing the journal, but may reduce writeback -parallelism. - -op_queue_throttle ------------------ -The op queue throttle is intended to bound the amount of queued but -uncompleted work in the filestore by delaying threads calling -queue_transactions more and more based on how many ops and bytes are -currently queued. The throttle is taken in queue_transactions and -released when the op is applied to the file system. This period -includes time spent in the journal queue, time spent writing to the -journal, time spent in the actual op queue, time spent waiting for the -wbthrottle to open up (thus, the wbthrottle can push back indirectly -on the queue_transactions caller), and time spent actually applying -the op to the file system. A BackoffThrottle is used to gradually -delay the queueing thread after each throttle becomes more than -filestore_queue_low_threshhold full (a ratio of -filestore_queue_max_(bytes|ops)). The throttles will block once the -max value is reached (filestore_queue_max_(bytes|ops)). - -The significant config options are: -filestore_queue_low_threshhold -filestore_queue_high_threshhold -filestore_expected_throughput_ops -filestore_expected_throughput_bytes -filestore_queue_high_delay_multiple -filestore_queue_max_delay_multiple - -While each throttle is at less than low_threshold of the max, -no delay happens. Between low and high, the throttle will -inject a per-op delay (per op or byte) ramping from 0 at low to -high_delay_multiple/expected_throughput at high. 
From high to -1, the delay will ramp from high_delay_multiple/expected_throughput -to max_delay_multiple/expected_throughput. - -filestore_queue_high_delay_multiple and -filestore_queue_max_delay_multiple probably do not need to be -changed. - -Setting these properly should help to smooth out op latencies by -mostly avoiding the hard limit. - -See FileStore::throttle_ops and FileSTore::throttle_bytes. - -journal usage throttle ----------------------- -See src/os/filestore/JournalThrottle.h/cc - -The intention of the journal usage throttle is to gradually slow -down queue_transactions callers as the journal fills up in order -to smooth out hiccup during filestore syncs. JournalThrottle -wraps a BackoffThrottle and tracks journaled but not flushed -journal entries so that the throttle can be released when the -journal is flushed. The configs work very similarly to the -op_queue_throttle. - -The significant config options are: -journal_throttle_low_threshhold -journal_throttle_high_threshhold -filestore_expected_throughput_ops -filestore_expected_throughput_bytes -journal_throttle_high_multiple -journal_throttle_max_multiple - -.. literalinclude:: osd_throttles.txt diff --git a/doc/dev/osd_internals/osd_throttles.txt b/doc/dev/osd_internals/osd_throttles.txt deleted file mode 100644 index 0332377efb6..00000000000 --- a/doc/dev/osd_internals/osd_throttles.txt +++ /dev/null @@ -1,21 +0,0 @@ - Messenger throttle (number and size) - |-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| - FileStore op_queue throttle (number and size, includes a soft throttle based on filestore_expected_throughput_(ops|bytes)) - |--------------------------------------------------------| - WBThrottle - |---------------------------------------------------------------------------------------------------------| - Journal (size, includes a soft throttle based on filestore_expected_throughput_bytes) - |-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| - |----------------------------------------------------------------------------------------------------> flushed ----------------> synced - | -Op: Read Header --DispatchQ--> OSD::_dispatch --OpWQ--> PG::do_request --journalq--> Journal --FileStore::OpWQ--> Apply Thread --Finisher--> op_applied -------------------------------------------------------------> Complete - | | -SubOp: --Messenger--> ReadHeader --DispatchQ--> OSD::_dispatch --OpWQ--> PG::do_request --journalq--> Journal --FileStore::OpWQ--> Apply Thread --Finisher--> sub_op_applied - - | - |-----------------------------> flushed ----------------> synced - |------------------------------------------------------------------------------------------| - Journal (size) - |---------------------------------| - WBThrottle - |-----------------------------------------------------| - FileStore op_queue throttle (number and size) diff --git a/doc/glossary.rst b/doc/glossary.rst index ea72ae8c268..f5d1d627203 100644 --- a/doc/glossary.rst +++ b/doc/glossary.rst @@ -14,10 +14,10 @@ was designed specifically for use with Ceph. BlueStore was introduced in the Ceph Kraken release. In the Ceph Luminous release, BlueStore became Ceph's default storage back end, - supplanting FileStore. 
Unlike :term:`filestore`, BlueStore - stores objects directly on Ceph block devices without any file - system interface. Since Luminous (12.2), BlueStore has been - Ceph's default and recommended storage back end. + supplanting FileStore. BlueStore stores objects directly on + Ceph block devices without any file system interface. + Since Luminous (12.2), BlueStore has been Ceph's default + and recommended storage back end. Bucket In the context of :term:`RGW`, a bucket is a group of objects. @@ -234,10 +234,6 @@ Another name for :term:`Dashboard`. Dashboard Plugin - filestore - A back end for OSD daemons, where a Journal is needed and files - are written to the filesystem. - FQDN **F**\ully **Q**\ualified **D**\omain **N**\ame. A domain name that is applied to a node in a network and that specifies the diff --git a/doc/install/manual-deployment.rst b/doc/install/manual-deployment.rst index 95232fce2aa..cf8bf050d40 100644 --- a/doc/install/manual-deployment.rst +++ b/doc/install/manual-deployment.rst @@ -353,45 +353,6 @@ activate): sudo ceph-volume lvm activate 0 a7f64266-0894-4f1e-a635-d0aeaca0e993 -filestore -^^^^^^^^^ -#. Create the OSD. :: - - ssh {osd node} - sudo ceph-volume lvm create --filestore --data {data-path} --journal {journal-path} - - For example:: - - ssh osd-node1 - sudo ceph-volume lvm create --filestore --data /dev/hdd1 --journal /dev/hdd2 - -Alternatively, the creation process can be split in two phases (prepare, and -activate): - -#. Prepare the OSD. :: - - ssh {node-name} - sudo ceph-volume lvm prepare --filestore --data {data-path} --journal {journal-path} - - For example:: - - ssh osd-node1 - sudo ceph-volume lvm prepare --filestore --data /dev/hdd1 --journal /dev/hdd2 - - Once prepared, the ``ID`` and ``FSID`` of the prepared OSD are required for - activation. These can be obtained by listing OSDs in the current server:: - - sudo ceph-volume lvm list - -#. Activate the OSD:: - - sudo ceph-volume lvm activate --filestore {ID} {FSID} - - For example:: - - sudo ceph-volume lvm activate --filestore 0 a7f64266-0894-4f1e-a635-d0aeaca0e993 - - Long Form --------- diff --git a/doc/man/8/ceph-volume.rst b/doc/man/8/ceph-volume.rst index 216e272708d..8254cf400b2 100644 --- a/doc/man/8/ceph-volume.rst +++ b/doc/man/8/ceph-volume.rst @@ -80,9 +80,8 @@ batch .. program:: ceph-volume lvm batch -Creates OSDs from a list of devices using a ``filestore`` -or ``bluestore`` (default) setup. It will create all necessary volume groups -and logical volumes required to have a working OSD. +Creates OSDs from a list of devices using a ``bluestore`` (default) setup. +It will create all necessary volume groups and logical volumes required to have a working OSD. Example usage with three devices:: @@ -98,10 +97,6 @@ Optional arguments: Use the bluestore objectstore (default) -.. option:: --filestore - - Use the filestore objectstore - .. option:: --yes Skip the report and prompt to continue provisioning @@ -179,10 +174,6 @@ Optional Arguments: bluestore objectstore (default) -.. option:: --filestore - - filestore objectstore - .. option:: --all Activate all OSDs found in the system @@ -202,13 +193,12 @@ prepare .. program:: ceph-volume lvm prepare -Prepares a logical volume to be used as an OSD and journal using a ``filestore`` -or ``bluestore`` (default) setup. It will not create or modify the logical volumes -except for adding extra metadata. +Prepares a logical volume to be used as an OSD and journal using a ``bluestore`` (default) setup. 
+It will not create or modify the logical volumes except for adding extra metadata.
 
 Usage::
 
-    ceph-volume lvm prepare --filestore --data  --journal 
+    ceph-volume lvm prepare --bluestore --data 
 
 Optional arguments:
 
@@ -232,10 +222,6 @@ Optional arguments:
 
     Path to a bluestore block.db logical volume or partition
 
-.. option:: --filestore
-
-    Use the filestore objectstore
-
 .. option:: --dmcrypt
 
     Enable encryption for the underlying OSD devices
@@ -493,10 +479,6 @@ Optional Arguments:
 
    bluestore objectstore (default)
 
-.. option:: --filestore
-
-   filestore objectstore
-
 .. note:: It requires a matching JSON file with the following format::
 
diff --git a/doc/man/8/rbd.rst b/doc/man/8/rbd.rst
index 59e34620975..a46429d9b2f 100644
--- a/doc/man/8/rbd.rst
+++ b/doc/man/8/rbd.rst
@@ -831,8 +831,7 @@ Per mapping (block device) `rbd device map` options:
   drop discards that are too small. For bluestore, the recommended setting is
   bluestore_min_alloc_size (currently set to 4K for all types of drives,
   previously used to be set to 64K for hard disk drives and 16K for
-  solid-state drives). For filestore with filestore_punch_hole = false, the
-  recommended setting is image object size (typically 4M).
+  solid-state drives).
 
 * crush_location=x - Specify the location of the client in terms of CRUSH
   hierarchy (since 5.8). This is a set of key-value pairs separated from
diff --git a/doc/rados/configuration/common.rst b/doc/rados/configuration/common.rst
index 1d218a6f11a..06b3e9160bc 100644
--- a/doc/rados/configuration/common.rst
+++ b/doc/rados/configuration/common.rst
@@ -103,20 +103,6 @@ Reference`_.
 OSDs
 ====
 
-When Ceph production clusters deploy :term:`Ceph OSD Daemons`, the typical
-arrangement is that one node has one OSD daemon running Filestore on one
-storage device. BlueStore is now the default back end, but when using Filestore
-you must specify a journal size. For example:
-
-.. code-block:: ini
-
-    [osd]
-    osd_journal_size = 10000
-
-    [osd.0]
-    host = {hostname} #manual deployments only.
-
-
 By default, Ceph expects to store a Ceph OSD Daemon's data on the following
 path::
 
diff --git a/doc/rados/configuration/filestore-config-ref.rst b/doc/rados/configuration/filestore-config-ref.rst
index 8c7e9ab8444..c9a3cb285f4 100644
--- a/doc/rados/configuration/filestore-config-ref.rst
+++ b/doc/rados/configuration/filestore-config-ref.rst
@@ -1,6 +1,7 @@
 ============================
  Filestore Config Reference
 ============================
+.. warning:: Filestore has been deprecated in the Reef release and is no longer supported.
 
 The Filestore back end is no longer the default when creating new OSDs,
 though Filestore OSDs are still supported.
diff --git a/doc/rados/configuration/journal-ref.rst b/doc/rados/configuration/journal-ref.rst
index ccf7e09f158..5ce5a5e2dd1 100644
--- a/doc/rados/configuration/journal-ref.rst
+++ b/doc/rados/configuration/journal-ref.rst
@@ -1,7 +1,8 @@
 ==========================
  Journal Config Reference
 ==========================
-
+.. warning:: Filestore has been deprecated in the Reef release and is no longer supported.
+
 .. index:: journal; journal configuration
 
 Filestore OSDs use a journal for two reasons: speed and consistency.
Note diff --git a/doc/rados/configuration/mclock-config-ref.rst b/doc/rados/configuration/mclock-config-ref.rst index f2a1603b5dc..dccf6ef251c 100644 --- a/doc/rados/configuration/mclock-config-ref.rst +++ b/doc/rados/configuration/mclock-config-ref.rst @@ -7,10 +7,6 @@ QoS support in Ceph is implemented using a queuing scheduler based on `the dmClock algorithm`_. See :ref:`dmclock-qos` section for more details. -.. note:: The *mclock_scheduler* is supported for BlueStore OSDs. For Filestore - OSDs the *osd_op_queue* is set to *wpq* and is enforced even if you - attempt to change it. - To make the usage of mclock more user-friendly and intuitive, mclock config profiles are introduced. The mclock profiles mask the low level details from users, making it easier to configure and use mclock. diff --git a/doc/rados/configuration/osd-config-ref.rst b/doc/rados/configuration/osd-config-ref.rst index deffd6103f3..3c3b378e7b4 100644 --- a/doc/rados/configuration/osd-config-ref.rst +++ b/doc/rados/configuration/osd-config-ref.rst @@ -7,7 +7,7 @@ You can configure Ceph OSD Daemons in the Ceph configuration file (or in recent releases, the central config store), but Ceph OSD Daemons can use the default values and a very minimal configuration. A minimal -Ceph OSD Daemon configuration sets ``osd journal size`` (for Filestore), ``host``, and +Ceph OSD Daemon configuration sets ``host`` and uses default values for nearly everything else. Ceph OSD Daemons are numerically identified in incremental fashion, beginning diff --git a/doc/rados/configuration/storage-devices.rst b/doc/rados/configuration/storage-devices.rst index 660a2906a75..3b9d45a3d58 100644 --- a/doc/rados/configuration/storage-devices.rst +++ b/doc/rados/configuration/storage-devices.rst @@ -71,6 +71,8 @@ For more information, see :doc:`bluestore-config-ref` and :doc:`/rados/operation FileStore --------- +.. warning:: Filestore has been deprecated in the Reef release and is no longer supported. + FileStore is the legacy approach to storing objects in Ceph. It relies on a standard file system (normally XFS) in combination with a diff --git a/doc/rados/operations/bluestore-migration.rst b/doc/rados/operations/bluestore-migration.rst index 7cee07156ff..ecd7a4c5e4c 100644 --- a/doc/rados/operations/bluestore-migration.rst +++ b/doc/rados/operations/bluestore-migration.rst @@ -1,6 +1,8 @@ ===================== BlueStore Migration ===================== +.. warning:: Filestore has been deprecated in the Reef release and is no longer supported. + Please migrate to BlueStore. Each OSD must be formatted as either Filestore or BlueStore. However, a Ceph cluster can operate with a mixture of both Filestore OSDs and BlueStore OSDs. -- 2.39.5
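Clusters that still contain Filestore OSDs should be migrated to BlueStore. As a quick
check (a sketch; the exact output format varies by release), the standard OSD metadata
commands show which back end each OSD is currently using::

    # summarize OSDs by objectstore backend
    ceph osd count-metadata osd_objectstore

    # inspect a single OSD; osd.0 is only an example
    ceph osd metadata 0 | grep osd_objectstore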