From: Alfredo Deza
Date: Thu, 17 Aug 2017 13:27:12 +0000 (-0400)
Subject: doc/glossary add terminology used by ceph-volume
X-Git-Tag: v13.0.0~49^2~7
X-Git-Url: http://git.apps.os.sepia.ceph.com/?a=commitdiff_plain;h=302d79d772a05984b5ec26a4751642430817bd42;p=ceph.git

doc/glossary add terminology used by ceph-volume

Signed-off-by: Alfredo Deza
---

diff --git a/doc/glossary.rst b/doc/glossary.rst
index 9c2d4e81f97db..382b922658938 100644
--- a/doc/glossary.rst
+++ b/doc/glossary.rst
@@ -4,7 +4,7 @@ Ceph is growing rapidly. As firms deploy Ceph, the technical terms such as
 "RADOS", "RBD," "RGW" and so forth require corresponding marketing terms
-that explain what each component does. The terms in this glossary are 
+that explain what each component does. The terms in this glossary are
 intended to complement the existing technical terminology.
 
 Sometimes more than one term applies to a definition. Generally, the first
@@ -12,21 +12,21 @@ term reflects a term consistent with Ceph's marketing, and secondary terms
 reflect either technical terms or legacy ways of referring to Ceph systems.
 
-.. glossary:: 
+.. glossary::
 
     Ceph Project
-        The aggregate term for the people, software, mission and infrastructure 
+        The aggregate term for the people, software, mission and infrastructure
         of Ceph.
-        
+
     cephx
         The Ceph authentication protocol. Cephx operates like Kerberos, but it
         has no single point of failure.
 
     Ceph
     Ceph Platform
-        All Ceph software, which includes any piece of code hosted at 
+        All Ceph software, which includes any piece of code hosted at
         `http://github.com/ceph`_.
-        
+
     Ceph System
     Ceph Stack
         A collection of two or more components of Ceph.
@@ -35,7 +35,7 @@ reflect either technical terms or legacy ways of referring to Ceph systems.
     Node
     Host
         Any single machine or server in a Ceph System.
-        
+
     Ceph Storage Cluster
     Ceph Object Store
     RADOS
@@ -45,7 +45,7 @@ reflect either technical terms or legacy ways of referring to Ceph systems.
 
     Ceph Cluster Map
     cluster map
-        The set of maps comprising the monitor map, OSD map, PG map, MDS map and 
+        The set of maps comprising the monitor map, OSD map, PG map, MDS map and
         CRUSH map. See `Cluster Map`_ for details.
 
     Ceph Object Storage
@@ -56,13 +56,13 @@ reflect either technical terms or legacy ways of referring to Ceph systems.
 
     RADOS Gateway
     RGW
         The S3/Swift gateway component of Ceph.
-        
+
     Ceph Block Device
     RBD
         The block storage component of Ceph.
-        
+
     Ceph Block Storage
-        The block storage "product," service or capabilities when used in 
+        The block storage "product," service or capabilities when used in
         conjunction with ``librbd``, a hypervisor such as QEMU or Xen, and a
         hypervisor abstraction layer such as ``libvirt``.
@@ -73,7 +73,7 @@ reflect either technical terms or legacy ways of referring to Ceph systems.
 
     Cloud Platforms
     Cloud Stacks
-        Third party cloud provisioning platforms such as OpenStack, CloudStack, 
+        Third party cloud provisioning platforms such as OpenStack, CloudStack,
         OpenNebula, ProxMox, etc.
 
     Object Storage Device
@@ -82,7 +82,7 @@ reflect either technical terms or legacy ways of referring to Ceph systems.
         Sometimes, Ceph users use the
         term "OSD" to refer to :term:`Ceph OSD Daemon`, though the proper
         term is "Ceph OSD".
-        
+
     Ceph OSD Daemon
     Ceph OSD Daemons
     Ceph OSD
@@ -90,7 +90,29 @@ reflect either technical terms or legacy ways of referring to Ceph systems.
         disk (:term:`OSD`). Sometimes, Ceph users use the
         term "OSD" to refer to "Ceph OSD Daemon", though the proper term is
         "Ceph OSD".
-        
+
+    OSD id
+        The integer that identifies an OSD. It is generated by the monitors
+        as part of the creation of a new OSD.
+
+    OSD fsid
+        A unique identifier used to further improve the uniqueness of an OSD;
+        it is found in the OSD path in a file called ``osd_fsid``. The term
+        ``fsid`` is used interchangeably with ``uuid``.
+
+    OSD uuid
+        Just like the OSD fsid, this is the OSD's unique identifier, used
+        interchangeably with ``fsid``.
+
+    bluestore
+        OSD BlueStore is a new back end for OSD daemons (kraken and newer
+        versions). Unlike :term:`filestore`, it stores objects directly on
+        Ceph block devices without any file system interface.
+
+    filestore
+        A back end for OSD daemons that requires a journal and writes
+        objects to the filesystem as files.
+
     Ceph Monitor
     MON
         The Ceph monitor software.
@@ -106,22 +128,22 @@ reflect either technical terms or legacy ways of referring to Ceph systems.
 
     Ceph Clients
     Ceph Client
-        The collection of Ceph components which can access a Ceph Storage 
-        Cluster. These include the Ceph Object Gateway, the Ceph Block Device, 
-        the Ceph Filesystem, and their corresponding libraries, kernel modules, 
+        The collection of Ceph components which can access a Ceph Storage
+        Cluster. These include the Ceph Object Gateway, the Ceph Block Device,
+        the Ceph Filesystem, and their corresponding libraries, kernel modules,
         and FUSEs.
 
     Ceph Kernel Modules
-        The collection of kernel modules which can be used to interact with the 
+        The collection of kernel modules which can be used to interact with the
         Ceph System (e.g,. ``ceph.ko``, ``rbd.ko``).
 
     Ceph Client Libraries
-        The collection of libraries that can be used to interact with components 
+        The collection of libraries that can be used to interact with components
         of the Ceph System.
 
     Ceph Release
         Any distinct numbered version of Ceph.
-        
+
     Ceph Point Release
         Any ad-hoc release that includes only bug or security fixes.
@@ -130,11 +152,11 @@ reflect either technical terms or legacy ways of referring to Ceph systems.
         testing, but may contain new features.
 
     Ceph Release Candidate
-        A major version of Ceph that has undergone initial quality assurance 
+        A major version of Ceph that has undergone initial quality assurance
         testing and is ready for beta testers.
 
     Ceph Stable Release
-        A major version of Ceph where all features from the preceding interim 
+        A major version of Ceph where all features from the preceding interim
         releases have been put through quality assurance testing successfully.
 
     Ceph Test Framework
@@ -144,7 +166,7 @@ reflect either technical terms or legacy ways of referring to Ceph systems.
 
     CRUSH
         Controlled Replication Under Scalable Hashing. It is the algorithm
         Ceph uses to compute object storage locations.
-        
+
     ruleset
         A set of CRUSH data placement rules that applies to a particular pool(s).
@@ -152,5 +174,9 @@ reflect either technical terms or legacy ways of referring to Ceph systems.
     Pool
     Pools
         Pools are logical partitions for storing objects.
 
+    systemd oneshot
+        A systemd ``Type`` where the command defined in ``ExecStart`` exits
+        upon completion (it is not intended to daemonize).
+
 .. _http://github.com/ceph: http://github.com/ceph
 .. _Cluster Map: ../architecture#cluster-map
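
As a minimal sketch of the OSD id and OSD fsid terms defined above: the
snippet below reads the ``osd_fsid`` file from an OSD's data path. The
``/var/lib/ceph/osd/ceph-<id>`` directory layout and the helper name are
illustrative assumptions, not something this patch specifies:

    #!/usr/bin/env python
    # Hypothetical helper: read the uuid recorded in an OSD's ``osd_fsid``
    # file, per the "OSD fsid" glossary entry added by this patch.
    import os


    def read_osd_fsid(osd_id, base='/var/lib/ceph/osd'):
        """Return the fsid (uuid) stored for the OSD with this integer id.

        Assumes the conventional ``ceph-<id>`` data directory layout; the
        ``osd_fsid`` file name follows the glossary entry above.
        """
        path = os.path.join(base, 'ceph-%d' % osd_id, 'osd_fsid')
        with open(path) as f:
            return f.read().strip()


    if __name__ == '__main__':
        # On a host carrying OSD 0 this prints its uuid, a term the
        # glossary notes is used interchangeably with fsid.
        print(read_osd_fsid(0))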