Add Jinja2 and MarkupSafe dependencies to a requirements.txt style file.
This file tracks the dependencies needed to run the cephadm libs
in the unit test framework. The actual dependencies that get added
to the zipapp are managed by build.py but mirrored here.
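For reference, assuming unpinned entries (the actual file may pin
versions differently), the file amounts to:

    Jinja2
    MarkupSafe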
Signed-off-by: John Mulligan <jmulligan@redhat.com>
John Mulligan [Mon, 17 Jul 2023 13:24:14 +0000 (09:24 -0400)]
cephadm: add more thorough test coverage to unit file generation
Add tests that check the generation of the standard systemd unit
for cephadm services. These tests ensure that non-trivial changes
to the content of these files are noticed.
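A minimal sketch of the kind of check this adds (get_unit_file() and
the pinned strings are hypothetical stand-ins, not the actual cephadm
API):

    # Hedged sketch: assumes a hypothetical get_unit_file() helper
    # that renders the systemd unit text for a given fsid.
    def test_unit_file_contents():
        fsid = "00000000-0000-0000-0000-000000000000"
        unit = get_unit_file(fsid)
        # Pin the parts of the unit whose changes we want noticed.
        assert "[Unit]" in unit
        assert "[Service]" in unit
        assert fsid in unit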
Signed-off-by: John Mulligan <jmulligan@redhat.com>
John Mulligan [Wed, 1 Nov 2023 22:14:34 +0000 (18:14 -0400)]
cephadm: add tests for build.py script
Add tests that cover the four main distros that ceph is built on (in
the ceph infra). These tests should not be run by automation, as they
are slow and have special requirements such as a working podman.
Instead, they are provided for a developer to run manually when
build.py is updated.
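One common way to keep such tests out of automation is an opt-in
pytest marker; a hedged sketch (the environment variable and the test
name are illustrative, not necessarily what the cephadm suite uses):

    import os
    import pytest

    # Skip unless the developer opts in explicitly.
    requires_podman = pytest.mark.skipif(
        not os.environ.get("CEPHADM_BUILD_TESTS"),
        reason="slow; needs a working podman; run manually",
    )

    @requires_podman
    def test_build_on_one_distro():
        ...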
Signed-off-by: John Mulligan <jmulligan@redhat.com>
John Mulligan [Wed, 1 Nov 2023 22:14:34 +0000 (18:14 -0400)]
cephadm: update the build.py script to work on multiple distros
Unfortunately, a single simple call to pip does not work on all the
distributions that ceph is built on. In particular, Ubuntu 20.04 and
Ubuntu 22.04 ship pip versions that cannot correctly handle disabling
wheels while installing Jinja2 (pip tries to use the MarkupSafe
dependency before it is installed). This can be worked around by
using a virtualenv and updating pip before proceeding. However, that
is not enough: CentOS/RHEL 8 uses python 3.6, and no version of pip
that still supports 3.6 is new enough to fix the issue with disabling
wheels. The workaround in that case is to install each dependency one
at a time through multiple calls to pip. Because of this extra
complexity it is simpler to eschew the use of a requirements.txt file
in build.py entirely. Thus the zipapp is built using build.py only;
requirements files for cephadm are for setting up the tox
environments *only*.
For completeness, a new option is added that gives the caller control
over whether build.py uses a virtualenv. The build.py script
therefore requires at least one of: a working pip that handles
disabling wheels; or a virtualenv (venv) and the ability to update to
a working version of pip. If the list of distros ceph supports (and
the python versions they use) ever becomes simpler/newer, some of
this complexity could be removed from build.py.
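A hedged sketch of the resulting install strategy (simplified; the
real build.py differs in details, and the target path handling here
is hypothetical):

    import subprocess
    import sys

    # Order matters on old pip: MarkupSafe must land before Jinja2.
    DEPS = ["MarkupSafe", "Jinja2"]

    def pip_install(pip_cmd, target, pkgs):
        subprocess.check_call(
            pip_cmd
            + ["install", "--no-binary", ":all:", "--target", target]
            + pkgs
        )

    def install_deps(target, one_at_a_time=False):
        # One pip call where pip is new enough; otherwise (py3.6-era
        # pip that mishandles --no-binary ordering) one call per dep.
        pip_cmd = [sys.executable, "-m", "pip"]
        if one_at_a_time:
            for dep in DEPS:  # CentOS/RHEL 8 workaround
                pip_install(pip_cmd, target, [dep])
        else:
            pip_install(pip_cmd, target, DEPS)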
Signed-off-by: John Mulligan <jmulligan@redhat.com>
John Mulligan [Fri, 14 Jul 2023 19:41:54 +0000 (15:41 -0400)]
cephadm: disable wheels and C compilers when building cephadm zipapp
We cannot rely on any particular python version (py 3.6+ is
supported) and cannot assume any particular architecture, so using
wheels built for the build system is pointless. Installing binary .so
files compiled from C/C++ is similarly pointless. Attempt to block
both behaviors when adding dependencies to the zipapp.
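A hedged sketch of what blocking wheels and compilers can look like
(the exact mechanism in build.py may differ; pointing CC at
/bin/false is one common trick to make any compile attempt fail fast,
and the target directory name is hypothetical):

    import os
    import subprocess
    import sys

    env = dict(os.environ)
    # Make any attempt to compile a C extension fail immediately.
    env["CC"] = "/bin/false"

    subprocess.check_call(
        [sys.executable, "-m", "pip", "install",
         "--no-binary", ":all:",  # refuse prebuilt wheels
         "--target", "cephadm_deps",
         "Jinja2", "MarkupSafe"],
        env=env,
    )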
Signed-off-by: John Mulligan <jmulligan@redhat.com>
John Mulligan [Thu, 19 Oct 2023 13:42:34 +0000 (09:42 -0400)]
cephadm: move extract_uid_gid func to container_types module
While extract_uid_gid isn't a perfect fit for container_types, it is
a fairly fundamental function for working with containers in cephadm
and requires nothing beyond the types in container_types and that
module's existing imports. Moving extract_uid_gid should allow us to
more easily move other functions in the future.
Signed-off-by: John Mulligan <jmulligan@redhat.com>
Ville Ojamo [Fri, 3 Nov 2023 05:44:00 +0000 (12:44 +0700)]
doc/cephadm/services: remove excess rendered indentation in osd.rst
Start bash command blocks at the left margin, removing excessive
padding/indentation that would render the block too far to the right.
At the same time, indent the source consistently:
- Two spaces for command blocks and output blocks.
- Four spaces for notes and code blocks.
There seems to be no uniform style for this; sometimes commands are
indented with three spaces, but two spaces appears to be the most
common. In the end it all renders the same.
Signed-off-by: Ville Ojamo <14869000+bluikko@users.noreply.github.com>
osd/SnapMapper: maintain the prefix_itr between calls to SnapMapper::get_next_objects_to_trim()
Maintain the prefix_itr between calls to SnapMapper::get_next_objects_to_trim() to prevent searching depleted prefixes.
There are 8 distinct hash prefixes used for searching objects owned
by a given PG.
On each call to SnapMapper::get_next_objects_to_trim() we start from
the first prefix even after all objects mapped to it have been
depleted.
This means we search 1 non-existent prefix after the first prefix is
depleted, 2 after the first two are depleted, and so on, up to 7
non-existent prefixes after the first 7 are depleted.
This is a performance improvement PR only!
It maintains the existing behavior and does not try to fix/change any of the TRIM logic.
An extra step was added after the last object is trimmed: a full scan
of the DB, returning ENOENT only if no object is found.
This makes the new code no worse than the existing code, which
returns ENOENT after a full scan finds no object.
It should not impact performance of real-life snap trimming, as it
should only happen once per snap.
Added snap-mapper tests to the rados test suite.
Disabled osd_debug_trim_objects when running (SnapMapperTest,
prefix_itr) to prevent asserts (that test deliberately performs
illegal inserts into deleted snaps).
Code cleanup.
Disabled the assert because of a corner case when we retrieve the
last valid object(s) in a snap:
The prefix_itr is advanced past the last valid value (as we completed
a full scan).
If the OSD calls get_next_objects_to_trim() before the retrieved
object(s) are processed and removed from the SnapMapper DB, they
won't be found by the next call (as the prefix_itr is invalid).
The object(s) will instead be found in the second pass, which makes
it seem as if they were added after the trim started (which is
illegal) and would trigger the assert.
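A hedged Python sketch of the iterator-maintenance idea (the real
code is C++ inside SnapMapper; the db helpers here are hypothetical):

    class PrefixScanner:
        # Sketch: remember which hash prefix we are on between calls
        # instead of rescanning depleted prefixes from the start.
        def __init__(self, prefixes, db):
            self.prefixes = list(prefixes)  # the 8 prefixes for a PG
            self.db = db
            self.idx = 0  # persisted between calls: the 'prefix_itr'

        def get_next_objects_to_trim(self, max_objs):
            objs = []
            while self.idx < len(self.prefixes) and len(objs) < max_objs:
                found = self.db.scan(self.prefixes[self.idx],
                                     max_objs - len(objs))
                objs.extend(found)
                if len(objs) < max_objs:
                    self.idx += 1  # prefix depleted; don't revisit
            if not objs:
                # Safety net: one full scan before reporting ENOENT,
                # matching the pre-existing behavior.
                objs = self.db.full_scan(max_objs)
            return objs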
Signed-off-by: Gabriel BenHanokh <gbenhano@redhat.com>
Ionut Balutoiu [Wed, 1 Nov 2023 16:07:58 +0000 (18:07 +0200)]
rgw: fix cloud-sync multi-tenancy scenario
At the moment, we cannot set buckets prefixed with a tenant ID in the
`source_bucket` field of cloud-sync profiles (non-trivial config):
https://docs.ceph.com/en/latest/radosgw/cloud-sync-module/#non-trivial-configuration
This is because the `do_find_profile` function only searches in the
profiles configured using `bucket.name`, and it ignores `bucket.tenant`.
This is problematic in the RGW multi-tenancy scenario:
https://docs.ceph.com/en/latest/radosgw/multitenancy/#rgw-multi-tenancy
At the moment, we can only configure a bucket name in the profile's
`source_bucket` field. In the multi-tenancy scenario, this matches
buckets with that name from all the tenants.
Without this fix, we cannot configure a cloud-sync profile that syncs
all the buckets from a tenant to a particular S3 target.
For example, we cannot do this:
* `tenantA/test-bucket` -> S3 target A
* `tenantB/test-bucket` -> S3 target B
* `tenantC/test-bucket` -> S3 target C
We can only do this at the moment:
* `test-bucket` -> S3 target A
If `test-bucket` is present in both `tenantA` and `tenantB`, both
buckets will be synced to S3 target A.
The idea is to be able to do this:
* `tenantA/*` -> S3 target A
* `tenantB/*` -> S3 target B
* `tenantC/*` -> S3 target C
If `test-bucket` is present in all tenants, each tenant's bucket is
synced to its own S3 target.
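A hedged sketch of the matching change (the real do_find_profile is
C++ inside RGW; this only illustrates tenant-aware lookup, with '*'
as a per-tenant wildcard):

    def find_profile(profiles, tenant, name):
        for p in profiles:
            p_tenant, _, p_name = p["source_bucket"].rpartition("/")
            if p_tenant and p_tenant != tenant:
                continue  # profile pinned to another tenant
            if p_name in ("*", name):
                return p
        return None

    profiles = [
        {"source_bucket": "tenantA/*", "target": "S3 target A"},
        {"source_bucket": "tenantB/*", "target": "S3 target B"},
    ]
    assert find_profile(profiles, "tenantA", "x")["target"] == "S3 target A"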