Myoungwon Oh [Fri, 25 Aug 2023 08:55:36 +0000 (17:55 +0900)]
crimson/os/seastore/object_data_handler: consider a RBM case when checking if write can be merged
RBM's paddr always indicates a physical address, which means it is never a
delayed paddr. So, this commit adds a condition that checks whether the given
paddr is used by an ongoing write.
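A minimal sketch of the idea, using hypothetical names rather than the actual
crimson/seastore types:

    #include <cstdint>
    #include <set>

    // Hypothetical stand-in for seastore's paddr type, for illustration only.
    using paddr_t = std::uint64_t;

    // An RBM paddr is already a final physical address (never delayed), so a
    // pending write can only be merged with it if the paddr does not belong
    // to a write that is still in flight.
    bool can_merge_rbm_write(paddr_t paddr,
                             const std::set<paddr_t>& ongoing_writes) {
      return ongoing_writes.count(paddr) == 0;
    }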
Laura Flores [Fri, 8 Sep 2023 17:49:01 +0000 (17:49 +0000)]
src/common: add context to tracing::Tracer::init
Follow-up to https://github.com/ceph/ceph/pull/50948. This wasn't originally
caught because the CentOS 8 default build passed, but it did fail the
crimson build: https://shaman.ceph.com/builds/ceph/wip-yuri3-testing-2023-08-15-0955/b5259484dbc61e8573f27e692560c6c107b9d4c1/
Fixes: https://tracker.ceph.com/issues/62761
Signed-off-by: Laura Flores <lflores@ibm.com>
Rishabh Dave [Mon, 11 Sep 2023 09:55:46 +0000 (15:25 +0530)]
doc/cephfs: write cephfs commands fully in docs
We write CephFS commands incompletely in docs. For example, "ceph tell
mds.a help" is simply written as "tell mds.a help". This might confuse
the reader, and it does no harm to write the command in full.
Fixes: https://tracker.ceph.com/issues/62791
Signed-off-by: Rishabh Dave <ridave@redhat.com>
Adds subvolume groups to the subvolume tabs in order to select the subvolumes
from the appropriate group.
Also adds the ability to manage the subvolume groups of a subvolume in the
different actions: create, edit, and remove.
Fixes: https://tracker.ceph.com/issues/62675
Signed-off-by: Pedro Gonzalez Gomez <pegonzal@redhat.com>
Link to the "Ceph Clients" section of doc/architecture.rst from the
"Ceph Clients" entry in the glossary. A glossary entry should be a short
summary of the topic with which it deals, and it should direct the
reader to further and more detailed reading if the reader is interested.
This does that.
crimson/os/seastore/onode_manager: populate value recorders of onodes to
be erased
Otherwise, the following modification sequence within the same transaction
might lead to onode extents' crc inconsistency during journal replay:
1. modify the last mapping in an onode extent;
2. erase the last mapping in that onode extent.
During journal replay, if the first modification is not recorded in the
delta, the onode extent's content would be inconsistent with its content
before the system reboot.
Restructure a sentence because the verb "experience" looked like the
abstract noun "experience" when I read it with fresh eyes. I chose the
perhaps TESOL-unfriendly verb "incur", but I believe it is right.
Since then, we've received feedback from users testing compression in the
field with extremely positive results. Clyso has also worked with a customer
running a large RGW deployment that has seen similarly positive results.
Advantages of using compression
===============================
1) Significantly lower write amplification and space amplification.
In the article above, we saw a 4X reduction in space usage in RocksDB when
writing very small (4KB) objects to RGW. On a real production cluster with
1.3 billion objects, Clyso observed a space usage reduction closer to 2.2X
which was still a substantial improvement. This win is important in
multiple cluster configurations:
1A) Pure HDD
Pure HDD clusters are often seek limited under load. This directly impacts
how quickly RocksDB can write data out, which can increase compaction times.
1B) Hybrid Clusters (HDD Block + Flash DB/WAL)
In this configuration, spillover to the HDD can become a concern when
there isn't enough space on the flash devices to hold the RocksDB
SST files for all of the associated OSDs. Compression has a
dramatic effect on being able to keep all SST files on flash and avoid
spillover.
1C) Pure Flash based clusters
A primary concern for pure flash based clusters is write amplification
and the eventual wear-out of the flash under write-intensive scenarios.
RocksDB compression not only reduces space amplification but also
write amplification. That means lower wear on the flash cells and
longer flash life.
2) Reduced Compaction Times
The customer cluster that Clyso worked with utilized an HDD-only
configuration. Prior to utilizing RocksDB compression, this cluster
could take up to several days to complete a manual compaction of a given
OSD during live operation. Enabling LZ4 compression in RocksDB reduced
manual compaction time to roughly 25-30 minutes, with ~2 hours being
the longest manual compaction time observed.
Potential Disadvantages of RocksDB compression
===============================================
1) Increased CPU usage
While there is CPU usage overhead associated with utilizing compression,
the effect appeared to be negligible, even on an NVMe backed cluster.
Despite restricting NVMe OSDs to 2 cores so that they were extremely
CPU bound during PUT operations, enabling compression had no notable
effect on PUT performance.
2) Lower GET throughput on NVMe
We noticed a very slight performance hit for GETs on NVMe backed
clusters, though the effect was primarily observed when using Snappy
compression and not LZ4 compression. LZ4 GET performance was very close
to the performance of uncompressed RocksDB.
3) Other performance impact
Other potential concerns might include lower performance during
iteration or other actions; however, I expect this to be unlikely.
RocksDB typically performs best when it can read data from SST files in
large chunks and then work from the block cache. Large readahead values
tend to be a win, either to read data into the block cache or so that
data can be read quickly from the kernel page cache. As far as I can
tell, compression is not having a negative impact here and in fact may be
helping in cases where the disk is already quite busy. In general, we
are already completely dependent on our own in-memory caches for things like
bluestore onodes to achieve high performance on NVMe backed OSDs.
More importantly, the goal on 16.2.13+ should be to reduce the overhead
of iterating over tombstones, and our primary method to do this right
now is to issue compactions on iteration when too many tombstones are
encountered. Reducing the impact of compaction directly benefits this
goal.
Why LZ4 Compression?
====================
Snappy and LZ4 compression are both potential default options. Ceph
previously had a bug related to LZ4 compression that could corrupt data,
so on the surface it might be tempting to default to Snappy
compression. However, there are several reasons why I believe we should
use LZ4 compression by default.
1) The LZ4 bug is fixed, and there have been no reports of issues since
the fix was put in place.
2) The Google developers have made changes to Snappy's build system that
impact Ceph. Many distributions are working around these changes, but
the Google developers have explicitly stated that they plan to support
only Google-specific use cases:
"We are unlikely to accept contributions to the build configuration
files, such as CMakeLists.txt. We are focused on maintaining a build
configuration that allows us to test that the project works in a few
supported configurations inside Google. We are not currently interested
in supporting other requirements, such as different operating systems,
compilers, or build systems."
3) LZ4 compression showed less of a performance impact than Snappy during
RGW 4KB object GETs. Snappy showed no performance gains versus LZ4 in
any of the other tests, nor did it appear to show a meaningful
compression advantage.
Impact on existing clusters
===========================
Enabling/Disabling compression in RocksDB will require an OSD restart,
but otherwise does not require user action. SST files will gradually be
compressed over time as part of the compaction process. A manual
compaction can be issued to help accelerate this process. The same
applies when users would like to disable compression: new uncompressed
SST files will be written over time as part of the compaction process,
and a manual compaction can be issued to accelerate this process.
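As a rough illustration only (the option names and values below are
assumptions and should be checked against your release, not details taken
from this change), enabling LZ4 compression and accelerating the rewrite of
SST files might look like this:

    # Append an LZ4 compression setting to the OSDs' RocksDB options
    # (assumes bluestore_rocksdb_options_annex is available; otherwise the
    # setting can be folded into bluestore_rocksdb_options).
    ceph config set osd bluestore_rocksdb_options_annex 'compression=kLZ4Compression'

    # Restart the OSDs, then optionally trigger a manual compaction per OSD:
    ceph tell osd.<id> compact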
Conclusion
==========
In general, enabling RocksDB compression in bluestore appears to be a
dramatic win. I would like to make this our default behavior for Squid
going forward, assuming no issues are uncovered during teuthology testing.
Signed-off-by: Mark Nelson <mark.nelson@clyso.com>
NitzanMordhai [Sun, 25 Jun 2023 08:41:55 +0000 (08:41 +0000)]
ceph-dencoder: Add missing rgw types to ceph-dencoder for accurate encode-decode comparison
Currently, ceph-dencoder lacks certain rgw types, preventing us from accurately checking the ceph corpus for encode-decode mismatches.
This pull request aims to address this issue by adding the missing types to ceph-dencoder.
To successfully incorporate these types into ceph-dencoder, we need to
introduce the `dump` and `generate_test_instances` functions that were
missing from some types. These functions are essential for properly
encoding and decoding the added types.
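A minimal sketch of the pattern, using a hypothetical type rather than one of
the actual rgw types touched here (the real types already provide their own
encode()/decode(), so only the two helpers ceph-dencoder relies on are shown):

    #include <cstdint>
    #include <list>
    #include <string>

    #include "common/Formatter.h"

    // Hypothetical example type, for illustration only.
    struct example_rgw_info_t {
      std::string name;
      uint64_t count = 0;

      // Dump the members so ceph-dencoder can print a decoded instance.
      void dump(ceph::Formatter *f) const {
        f->dump_string("name", name);
        f->dump_unsigned("count", count);
      }

      // Provide instances used for encode-decode comparison in the corpus.
      static void generate_test_instances(std::list<example_rgw_info_t*>& o) {
        o.push_back(new example_rgw_info_t);   // default-constructed instance
        o.push_back(new example_rgw_info_t);   // populated instance
        o.back()->name = "foo";
        o.back()->count = 42;
      }
    };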
This PR enhances ceph-dencoder by including the missing types, enabling a
comprehensive assessment of encode-decode consistency and helping to ensure
the robustness and correctness of the ceph corpus, which in turn improves
data integrity and stability within the ceph ecosystem.
Rishabh Dave [Wed, 6 Sep 2023 22:16:39 +0000 (03:46 +0530)]
mon/FSCommands: fix variable names and function names in calls
PR #51942 was merged today and PR #52409 was merged last week. The latter
PR changed "fs" to "fsp", made mds_map and fscid private, and introduced
get_mds_map() and get_fscid() methods instead.
The combination of the former and the latter PR introduced compilation
errors and made it impossible to build the binaries on the latest main
branch. This commit attempts to fix this issue.
Fixes: https://tracker.ceph.com/issues/62729
Signed-off-by: Rishabh Dave <ridave@redhat.com>
Rishabh Dave [Sun, 30 Jul 2023 17:27:37 +0000 (22:57 +0530)]
qa/cephfs: CephFSTestCase.create_client() must return keyring
Replace the call to run_ceph_cmd() with a call to get_ceph_cmd_stdout() in
method qa.tasks.cephfs.cephfs_test_case.CephFSTestCase.create_client().
run_ceph_cmd() does not return the keyring, which is wrong.
get_ceph_cmd_stdout() returns the stdout of the "ceph auth add"
command, which is the keyring that CephFSTestCase.create_client() is
expected to return.
Fixes: https://tracker.ceph.com/issues/62246
Signed-off-by: Rishabh Dave <ridave@redhat.com>
The Windows build script uses static linking by default, the
reason being that some tests were failing to build otherwise,
mostly due to unspecified dependencies.
Now that the issue has been addressed, we can enable dynamic linking
by default.
It is worth mentioning that the Ceph MSI build script already uses
dynamic linking.
While at it, we'll drop some duplicate defaults from
"win32_deps_build.sh". For better clarity, we'll avoid exporting
some "win32_build.sh" variables, instead passing them explicitly
to "win32_deps_build.sh".
doc/man: remove docs about support for unix domain sockets
doc/man: support for unix domain sockets is not implemented, hence we
removed documentation about it.
(Note: the changes in this commit were the work of Rok Jaklič in
https://github.com/ceph/ceph/pull/48537. This pull request has been
raised because that pull request was for some mysterious reason causing
merge conflicts that were never resolved.)
Co-authored-by: Rok Jaklič <rjaklic@gmail.com>
Signed-off-by: Zac Dover <zac.dover@proton.me>