Thin-provisioned snapshottable Ceph Block Devices are an attractive option for
virtualization and cloud computing. In virtual machine scenarios, people
typically deploy a Ceph Block Device with the ``rbd`` network storage driver in
QEMU/KVM, where the host machine uses ``librbd`` to provide a block device
service to the guest. Many cloud computing stacks use ``libvirt`` to integrate
with hypervisors. You can use thin-provisioned Ceph Block Devices with QEMU and
``libvirt`` to support OpenStack and CloudStack among other solutions.
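As an illustration, a ``libvirt`` guest definition can reference an RBD image with a network disk element along the following lines. The pool name, image name, and monitor address here are placeholders; a cluster with cephx authentication enabled would also need an ``<auth>`` element referencing a libvirt secret:

```xml
<!-- Hypothetical libvirt <disk> element: QEMU opens the image
     "guest-disk" in pool "libvirt-pool" through librbd. -->
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='rbd' name='libvirt-pool/guest-disk'>
    <host name='ceph-mon.example.com' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```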
While we do not provide ``librbd`` support with other hypervisors at this time,
you may also use Ceph Block Device kernel objects to provide a block device to a
client.
Description
===========
**rbd** is a utility for manipulating rados block device (RBD) images,
used by the Linux rbd driver and the rbd storage driver for QEMU/KVM.
RBD images are simple block devices that are striped over objects and
stored in a RADOS object store. The size of the objects the image is
striped over must be a power of two.
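The mapping from image offsets to backing objects can be sketched as follows, assuming the default simple striping (stripe unit equal to object size, stripe count of one) and the default 4 MiB object size. The helpers ``locate`` and ``is_power_of_two`` are illustrative only, not part of the ``rbd`` tool or the librbd API:

```python
def is_power_of_two(n: int) -> bool:
    """True if n is a positive power of two."""
    return n > 0 and (n & (n - 1)) == 0


def locate(offset: int, object_size: int = 4 * 1024 * 1024):
    """Return (object_number, offset_within_object) for a byte offset
    into the image, under simple striping."""
    if not is_power_of_two(object_size):
        raise ValueError("object size must be a power of two")
    return offset // object_size, offset % object_size


# A write at 10 MiB into the image lands 2 MiB into object number 2.
print(locate(10 * 1024 * 1024))  # (2, 2097152)
```

Because the object size is a power of two, the division and modulo above reduce to cheap shift and mask operations, which is one reason for the power-of-two requirement.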
For ``librbd``-based applications, Ceph supports `RBD Caching`_.
Ceph's block devices deliver high performance with infinite scalability to
`kernel modules`_, or to :abbr:`KVMs (kernel virtual machines)` such as `QEMU`_, and
cloud-based computing systems like `OpenStack`_ and `CloudStack`_ that rely on
libvirt and QEMU to integrate with Ceph block devices. You can use the same cluster
to operate the `Ceph RADOS Gateway`_, the `Ceph FS filesystem`_, and Ceph block
devices simultaneously.
.. _RBD Caching: ../rbd-config-ref/
.. _kernel modules: ../rbd-ko/
.. _QEMU: ../qemu-rbd/
.. _OpenStack: ../rbd-openstack
.. _CloudStack: ../rbd-cloudstack
.. _Ceph RADOS Gateway: ../../radosgw/
.. _Ceph FS filesystem: ../../cephfs/
v0.94.5 Hammer
==============
This Hammer point release fixes a critical regression in librbd that can cause
QEMU/KVM to crash when caching is enabled on images that have been cloned.
All v0.94.4 Hammer users are strongly encouraged to upgrade.
* rbd: avoid FIEMAP ioctl on import (it is broken on some kernels)
* librbd: fixes for several request/reply ordering bugs
* librbd: only set STRIPINGV2 feature on new images when needed
* librbd: new async flush method to resolve QEMU hangs (requires QEMU update as well)
* librbd: a few fixes to flatten
* ceph-disk: support for dm-crypt
* ceph-disk: many backports to allow bobtail deployments with ceph-deploy, chef