.. _ceph-dokan:
+
=======================
Mount CephFS on Windows
=======================
the file system. If you have created more than one file system, you can
choose which one to use when mounting.
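For instance, a minimal sketch of pinning a client to a particular file system
through the generic ``client_fs`` client option in ``ceph.conf`` (the name
``mycephfs2`` is only a placeholder)::

    [client]
    client_fs = mycephfs2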
- - `Mount CephFS`_
- - `Mount CephFS as FUSE`_
- - `Mount CephFS on Windows`_
-
-.. _Mount CephFS: ../../cephfs/mount-using-kernel-driver
-.. _Mount CephFS as FUSE: ../../cephfs/mount-using-fuse
-.. _Mount CephFS on Windows: ../../cephfs/ceph-dokan
+ - :ref:`cephfs_mount_using_kernel_driver`
+ - :ref:`cephfs_mount_using_fuse`
+ - :ref:`ceph-dokan`
If you have created more than one file system, and a client does not
specify a file system when mounting, you can control which file system
it will see by using the ``ceph fs set-default`` command.
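For example, a minimal sketch that makes one of the file systems the default
(``cephfs2`` is only a placeholder name):

.. prompt:: bash #

   ceph fs set-default cephfs2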
-Adding a Data Pool to the File System
+Adding a Data Pool to the File System
-------------------------------------
See :ref:`adding-data-pool-to-file-system`.
.. code:: bash
ceph osd pool set my_ec_pool allow_ec_overwrites true
-
+
Note that EC overwrites are only supported when using OSDs with the BlueStore backend.
If you are storing lots of small files or are frequently modifying files, you can improve performance by enabling EC optimizations, which is done as follows:
Finally, to mount CephFS on your client nodes, see the `Mount CephFS:
Prerequisites`_ page. Additionally, a command-line shell utility is available
-for interactive access or scripting via the `cephfs-shell`_.
+for interactive access or scripting via the :ref:`cephfs-shell <cephfs-shell>`.
.. _Orchestrator: ../mgr/orchestrator
.. _deploy MDS manually as needed: add-remove-mds
.. _create other CephFS volumes: fs-volumes
.. _Orchestrator deployment table: ../mgr/orchestrator/#current-implementation-status
.. _Mount CephFS\: Prerequisites: mount-prerequisites
-.. _cephfs-shell: ../man/8/cephfs-shell
.. raw:: html
--->
-.. toctree::
+.. toctree::
:maxdepth: 1
:hidden:
--->
-.. toctree::
+.. toctree::
:maxdepth: 1
:hidden:
--->
-.. toctree::
+.. toctree::
:maxdepth: 1
:hidden:
--->
-.. toctree::
+.. toctree::
:hidden:
Client eviction <eviction>
--->
-.. toctree::
+.. toctree::
:maxdepth: 1
:hidden:
.. _MDS Config Reference:
+
======================
MDS Config Reference
======================
===========================
You can use CephFS by mounting the file system on a machine or by using
-:ref:`cephfs-shell <cephfs-shell>`. A system mount can be performed using `the
-kernel driver`_ as well as `the FUSE driver`_. Both have their own advantages
-and disadvantages. Read the following section to understand more about both of
-these ways to mount CephFS.
+:ref:`cephfs-shell <cephfs-shell>`. A system mount can be performed using
+:ref:`the kernel driver <cephfs_mount_using_kernel_driver>` as well as
+:ref:`the FUSE driver <cephfs_mount_using_fuse>`. Both have their own
+advantages and disadvantages. Read the following section to understand
+more about both of these ways to mount CephFS.
-For Windows CephFS mounts, please check the `ceph-dokan`_ page.
+For Windows CephFS mounts, please check the :ref:`ceph-dokan <ceph-dokan>`
+page.
Which CephFS Client?
--------------------
individually, please check the respective mount documents.
.. _Client Authentication: ../client-auth
-.. _cephfs-shell: ..cephfs-shell
-.. _the kernel driver: ../mount-using-kernel-driver
-.. _the FUSE driver: ../mount-using-fuse
-.. _ceph-dokan: ../ceph-dokan
========================
`ceph-fuse`_ can be used as an alternative to the :ref:`CephFS kernel
-driver<cephfs-mount-using-kernel-driver>` to mount CephFS file systems.
+driver<cephfs_mount_using_kernel_driver>` to mount CephFS file systems.
`ceph-fuse`_ mounts are made in userspace. This means that `ceph-fuse`_ mounts
are less performant than kernel driver mounts, but they are easier to manage
and easier to upgrade.
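For instance, a minimal sketch of a FUSE mount, assuming a CephX user named
``client.foo`` and an existing mount point ``/mnt/mycephfs``:

.. prompt:: bash #

   ceph-fuse --id foo /mnt/mycephfs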
Synopsis
========
-This is the general form of the command for mounting CephFS via FUSE:
+This is the general form of the command for mounting CephFS via FUSE:
.. prompt:: bash #
-.. _cephfs-mount-using-kernel-driver:
+.. _cephfs_mount_using_kernel_driver:
=================================
Mount CephFS using Kernel Driver
./do_cmake -DWITH_LTTNG=ON
-If your Ceph deployment is package-based (YUM, DNF, APT) vs containerized, install the required software packages according to the module which you want to track, otherwise, it may cause a coredump due to missing *tp.solibrary files::
+If your Ceph deployment is package-based (YUM, DNF, APT) rather than containerized, install the required software packages for the module that you want to trace; otherwise, tracing may cause a coredump due to missing ``*tp.so`` library files::
librbd-devel
librgw-devel
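As a hedged example, on an RPM-based deployment the packages shown above could
be installed with:

.. prompt:: bash #

   dnf install librbd-devel librgw-devel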
For example, to deploy a node with eight CPU cores per OSD:
-.. code-block:: bash #
+.. prompt:: bash #
ceph config set osd crimson_cpu_num 8
CyanStore **does not store data** and should be used only for measuring OSD overhead, without the cost of actually storing data.
Non-Native Backends
-------------------
+-------------------
Non-native backends operate through a **thread pool proxy**, which interfaces with object stores running in **alien threads**—worker threads not managed by Seastar.
These backends allow Crimson to interact with legacy or external object store implementations:
(as determined by `nproc`) will be assigned to the object store.
``--bluestore``
- Use the alienized BlueStore as the object store backend. This is the default (see below section on the `object store backend`_ for more details)
+ Use the alienized BlueStore as the object store backend. This is the default (see the `object store backends`_ section above for more details).
``--cyanstore``
Use CyanStore as the object store backend.
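For local testing these flags are typically passed to ``vstart.sh``. A hedged
sketch of starting a small development cluster with the CyanStore backend (the
environment variables and the ``--crimson`` flag are assumptions based on the
usual vstart workflow):

.. prompt:: bash $

   MGR=1 MON=1 OSD=3 MDS=0 RGW=0 ../src/vstart.sh -n -x --crimson --cyanstore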
===============
* `RBD Windows documentation`_
-* `CephFS Windows documentation`_
+* :ref:`CephFS Windows documentation <ceph-dokan>`
* `Windows troubleshooting`_
-.. _CephFS Windows documentation: ../../cephfs/ceph-dokan
.. _Windows configuration sample: ../windows-basic-config
.. _RBD Windows documentation: ../../rbd/rbd-windows/
.. _Windows troubleshooting: ../windows-troubleshooting
:orphan:
-.. _man-ceph-fuse:
+.. _man-ceph-fuse:
=========================================
ceph-fuse -- FUSE-based client for ceph
| | | between different source buckets writing log records to the same log bucket. | |
+-------------------------------+-----------+--------------------------------------------------------------------------------------+----------+
| ``LoggingType`` | String | The type of logging. Valid values are: | No |
-| | | ``Standard`` (default) all bucket operations are logged after being performed. | |
+| | | ``Standard`` (default) all bucket operations are logged after being performed. | |
| | | The log record will contain all fields. | |
| | | ``Journal`` only operations that modify an object are logged. | |
| | | Will record the minimum subset of fields in the log record that is needed | |
- https://tracker.ceph.com/issues/67179
- https://tracker.ceph.com/issues/66867
+
* RBD: Moving an image that is a member of a group to trash is no longer
allowed. `rbd trash mv` command now behaves the same way as `rbd rm` in this
scenario.