limit the number of concurrent RGWMetaSyncSingleEntryCRs that each mdlog
shard is allowed to spawn. use META_SYNC_SPAWN_WINDOW=20 to match data
and bucket sync
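the mechanism is just a bounded spawn window; a minimal sketch of the
idea in C++ (illustrative names, not the actual RGWMetaSyncShardCR
code):

    // sketch: cap in-flight child operations at a fixed window
    #include <cstddef>
    #include <deque>
    #include <functional>

    static constexpr std::size_t META_SYNC_SPAWN_WINDOW = 20;

    struct SpawnWindow {
      std::size_t in_flight = 0;
      std::deque<std::function<void()>> pending;

      // spawn immediately while under the window, otherwise queue
      void submit(std::function<void()> entry) {
        if (in_flight < META_SYNC_SPAWN_WINDOW) {
          ++in_flight;
          entry();  // stands in for spawning an RGWMetaSyncSingleEntryCR
        } else {
          pending.push_back(std::move(entry));
        }
      }

      // a child finished; admit the next queued entry, if any
      void on_child_done() {
        --in_flight;
        if (!pending.empty()) {
          auto next = std::move(pending.front());
          pending.pop_front();
          ++in_flight;
          next();
        }
      }
    };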
rgw: metadata sync treats all errors as 'transient'
collect_children() had a special case for EAGAIN, which it treated as
a 'transient' error: it set can_adjust_marker = false to bail out
of RGWMetaSyncShardCR and retry from the previous marker
but the http client doesn't return EAGAIN - rgw_http_error_to_errno()
defaults to EIO - so this retry logic based on can_adjust_marker never
runs. on any other error, RGWMetaSyncSingleEntryCR would not call
marker_tracker->finish() to advance the sync status marker, and
RGWMetaSyncShardCR would continue on with full- or incremental sync
without ever attempting to retry the failed entries
a detailed comment in collect_children() describes a different strategy
for handling 'permanent' errors, but that was never fully elaborated.
i also don't think there's a reasonable way to differentiate between
transient and permanent errors, so this treats all errors as transient
to be retried
if an error really is permanent for a given metadata key, metadata sync
will get stuck there and require manual intervention
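a sketch of the resulting behavior in collect_children() (hypothetical
shape; the real coroutine plumbing differs):

    // sketch: any child error is now treated as transient, so the shard
    // bails out and retries from the previous marker
    struct ShardSketch {
      bool can_adjust_marker = true;

      void collect_children(const int* child_rets, int n) {
        for (int i = 0; i < n; ++i) {
          if (child_rets[i] < 0) {
            // previously only -EAGAIN took this path; the http client maps
            // failures to -EIO via rgw_http_error_to_errno(), so the
            // EAGAIN-only check never fired
            can_adjust_marker = false;  // forces a retry of this marker range
          }
        }
      }
    };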
luo rixin [Tue, 29 Dec 2020 06:39:21 +0000 (14:39 +0800)]
rgw/rgw_file: Fix the return value of read() and readlink()
Fixes: https://tracker.ceph.com/issues/49189 Signed-off-by: Dai zhiwei <daizhiwei3@huawei.com> Signed-off-by: luo rixin <luorixin@huawei.com>
(cherry picked from commit bfd83e8fa142873a0bdf09a4d1ad1b04127f5885)
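The contract being fixed, as a hedged sketch (a plain POSIX-style
wrapper, not the actual rgw_file code):

    #include <cerrno>
    #include <unistd.h>

    // sketch: a read wrapper must propagate the real byte count on success
    // and a negative errno on failure, rather than collapsing both cases
    ssize_t read_sketch(int fd, void* buf, size_t len) {
      ssize_t ret = ::read(fd, buf, len);
      if (ret < 0) {
        return -errno;  // report the failure
      }
      return ret;       // bytes actually read (may be short)
    }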
test/rgw: fix use of poll() with timers in unittest_rgw_dmclock_scheduler
the AsyncScheduler uses an asio timer to dispatch work to its executor
with an optional delay. when no delay is requested, it waits on the
timer with an expiration time in the past (crimson::dmclock::TimeZero)
tests are failing here because poll() is returning without executing the
handlers of those expired timers
asio implements these timers with timerfd and epoll. debugging with
strace, i see that these timers armed with timerfd_settime() are not
always immediately ready according to epoll_wait().
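a minimal reproduction of the pattern (assuming boost::asio; an
illustration of the symptom, not the test's actual fix):

    #include <boost/asio/io_context.hpp>
    #include <boost/asio/steady_timer.hpp>
    #include <chrono>
    #include <iostream>

    int main() {
      boost::asio::io_context ctx;
      boost::asio::steady_timer timer(ctx);

      // expiry in the past: the timer is logically ready immediately
      timer.expires_at(std::chrono::steady_clock::now() - std::chrono::seconds(1));
      timer.async_wait([](const boost::system::error_code& ec) {
        std::cout << "timer fired: " << ec.message() << '\n';
      });

      // with the timerfd/epoll backend, the fd may not be readable yet, so
      // poll() can run zero handlers even though the timer has expired
      std::cout << "poll() ran " << ctx.poll() << " handlers\n";

      ctx.run();  // blocks until the handler is actually dispatched
    }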
rgw: read_obj_policy() consults iam_user_policies on ENOENT
when the head object doesn't exist, read_obj_policy() has to decide
whether to return ENOENT or EACCES
when there's a bucket policy, we check whether it grants s3:ListBucket
permission. when there's an assumed role, we also need to check
against the role's policies in s->iam_user_policies
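a sketch of the decision (hypothetical types; the real code evaluates
rgw::IAM policies):

    #include <cerrno>
    #include <optional>
    #include <vector>

    enum class Effect { Allow, Deny, Pass };

    // sketch: only callers who could list the bucket learn ENOENT for a
    // missing head object; everyone else gets EACCES
    int enoent_or_eacces(std::optional<Effect> bucket_policy_listing,
                         const std::vector<Effect>& iam_user_policy_listing) {
      if (bucket_policy_listing == Effect::Allow)
        return -ENOENT;
      // new: an assumed role's policies in s->iam_user_policies count too
      for (auto e : iam_user_policy_listing) {
        if (e == Effect::Allow)
          return -ENOENT;
      }
      return -EACCES;
    }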
J. Eric Ivancich [Tue, 15 Jun 2021 19:20:33 +0000 (15:20 -0400)]
rgw: when deleted obj removed in versioned bucket, extra del-marker added
After the initial checks are complete, this reads the OLH earlier than
previously in order to check the delete-marker flag, and under the
bug's conditions returns -ENOENT rather than creating a spurious delete
marker.
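A sketch of the reordered check (names hypothetical):

    #include <cerrno>

    struct OlhSketch {
      bool exists;
      bool delete_marker;
    };

    // sketch: read the OLH first; if the current version is already a
    // delete marker, report ENOENT instead of stacking another marker
    int delete_versioned_obj(const OlhSketch& olh) {
      if (!olh.exists || olh.delete_marker) {
        return -ENOENT;  // no spurious extra delete marker
      }
      // ... proceed to create a delete marker for the live version ...
      return 0;
    }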
J. Eric Ivancich [Wed, 14 Apr 2021 17:55:22 +0000 (13:55 -0400)]
rgw: during reshard lock contention, adjust logging
When RGW fails to get a lock on a reshard log, we log it in such a way
that it looks like an error. Instead we'll make sure that the log
message is informational.
Adam C. Emerson [Fri, 16 Jul 2021 15:20:39 +0000 (11:20 -0400)]
rgw: radosgw-admin errors if marker not specified on data/mdlog trim
Check that a marker was specified and error out if we don't have one.
Also: In a world where we're parsing for generation, it doesn't really
make sense to treat 'no marker specified' as separate from a marker
that is just an empty string.
Also: Successful datalog trim returns zero, not
-ENODATA, and radosgw-admin should expect this.
Fixes: https://tracker.ceph.com/issues/51712 Signed-off-by: Adam C. Emerson <aemerson@redhat.com>
(cherry picked from commit 4cb6fcf7f4e9548a8a0dd017b49a6ea23bedffd6)
Conflicts:
src/rgw/rgw_admin.cc
Cherry-pick notes:
- static_cast for RadosStore not needed in Pacific
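The shape of the check, as a hedged sketch (not the exact radosgw-admin
code):

    #include <cerrno>
    #include <iostream>
    #include <string>

    // sketch: an absent --marker and an empty-string marker are the same
    // case, and both are an error for mdlog/datalog trim
    int check_trim_marker(const std::string& marker) {
      if (marker.empty()) {
        std::cerr << "ERROR: requires a --marker to trim to" << std::endl;
        return -EINVAL;
      }
      return 0;
    }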
Since inject_facts_as_vars is set to false in the ansible.cfg file, we
have to update the references to use ansible_facts[<thing>] instead of
ansible_<thing>.
We already install the dependency from ceph-ansible's requirements.txt,
and to avoid false positives (like after rebooting a node) we can retry
failing tests.
Without loading the ansible.cfg file from the ceph-ansible project we
don't have pipelining enabled, which can provide a significant
performance improvement.
This removes the ANSIBLE_ACTION_PLUGINS, ANSIBLE_RETRY_FILES_ENABLED and
ANSIBLE_SSH_RETRIES environment variables, as they are already included
in the ansible.cfg file.
ceph-volume/tests: update ansible ssh_args env var
The ansible ssh_args parameter is usually defined in the ansible.cfg file.
Currently this variable is overridden in tox to manage the vagrant ssh
file, but we lose all the default values.
rpm: drop use of $FIRST_ARG in ceph-immutable-object-cache
The use of $FIRST_ARG was probably required because the SUSE-specific
%service_* rpm macros were playing tricks on the shell positional
parameters. This is bad practice and error-prone, so let's assume that
no macro does that anymore and hence that positional parameters remain
unchanged after any rpm macro call.
Kefu Chai [Sat, 1 May 2021 15:30:18 +0000 (23:30 +0800)]
common/pick_address: define in_addr_t if it is not defined
neither mingw nor MSVC has in_addr_t defined, see
https://docs.microsoft.com/en-us/windows/win32/api/winsock2/ns-winsock2-in_addr,
so define it if it is not defined.
Conflicts:
src/common/options/global.yaml.in: global.yaml.in was introduced
in master only, so in this change src/common/options.cc is updated
instead.
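the shape of the guard (the feature-test macro here is hypothetical;
the point is that winsock2's in_addr holds a 32-bit address, so a
32-bit unsigned integer is the right stand-in):

    #include <cstdint>

    // sketch: provide in_addr_t where the platform headers don't define it
    #ifndef HAVE_IN_ADDR_T  // hypothetical feature-test macro
    typedef std::uint32_t in_addr_t;
    #endif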
because fmt is packaged in EPEL while librados is packaged in RHEL, we
cannot have fmt as a runtime dependency of librados.
to address this issue, we should compile librados either with the static
library or with the header-only library of fmt. but because the fedora
packaging guidelines do not encourage packaging static libraries, and it
would be complicated to package both static and dynamic libraries for
fmt, the simpler solution is to compile Ceph with the header-only
version of fmt.
in this change, we compile ceph with the header-only version of fmt
on RHEL to address the runtime dependency issue.
* an interface library named "fmt-header-only" is introduced. it brings
in support for the header-only fmt library.
* fmt::fmt is renamed to fmt
* an option named "WITH_FMT_HEADER_ONLY" is introduced
* fmt::fmt is an alias of "fmt-header-only" if "WITH_FMT_HEADER_ONLY"
is "ON", and an alias of "fmt" otherwise.
because fmt is packaged in EPEL while librados is packaged in RHEL, we
cannot have fmt as a runtime dependency of librados.
to address this issue an option "WITH_FMT_HEADER_ONLY" is introduced, so
that we can enable it to build Ceph with the header-only version of fmt,
and the built packages won't have a runtime dependency on fmt.
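what header-only consumption means for a translation unit (a sketch; in
the tree the switch is made in CMake through the fmt-header-only
interface library and WITH_FMT_HEADER_ONLY, not with this macro in
source files):

    // FMT_HEADER_ONLY makes fmt's functions inline in each translation
    // unit, so no libfmt shared object is needed at runtime
    #define FMT_HEADER_ONLY
    #include <fmt/format.h>

    int main() {
      fmt::print("no runtime fmt dependency: {}\n", 42);
    }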
cephfs-mirror: record directory path cancel in DirRegistry
When removing a directory path from mirroring, cephfs-mirror records
this state in thread-local storage. The replayer thread backs off
in the midst of mirroring the directory snapshots for this directory
path. However, the canceled state is never cleared, causing the thread
to incorrectly assume that other directory paths (which are picked up
by this thread) need backing off, hence marking these directory paths
as failed (to synchronize snapshots).
The fix is to store this state in the directory-specific store which is
allocated when a thread picks up a directory path for synchronization.
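A sketch of the before/after ownership of the flag (names illustrative):

    #include <atomic>
    #include <memory>

    // sketch: the canceled flag lives in the per-directory registry that is
    // allocated when a thread picks the directory up, instead of in
    // thread-local storage that outlives the directory and bleeds the state
    // into unrelated directory paths handled by the same thread
    struct DirRegistry {
      std::atomic<bool> canceled{false};
    };

    struct ReplayerSketch {
      std::shared_ptr<DirRegistry> registry;  // set per picked-up directory

      bool should_backoff() const {
        // consult only this directory's state, never a thread-wide flag
        return registry && registry->canceled.load();
      }
    };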
cephfs-mirror: complete context when a mirror instance is not failed or blocklisted
Without this, the updater thread can start processing other queued
contexts when a mirror instance is failed or blocklisted, resulting
in unexpected behavior.
qa/standalone: fixing the timings when waiting for deep-scrub to start
initiate_and_fetch_state() initiates a scrub, then polls the published
PG state looking for 'scrubbing'. Calling flush_pg_stats() as part of
the polling process might cause the scrub and the following recovery to
be missed altogether.
Note: this polling mechanism is definitely not robust. Will be
redesigned in the future.
Ronen Friedman [Sat, 15 May 2021 19:14:38 +0000 (22:14 +0300)]
test: recovery_scrub: do not display 'repair' status on auto-repair deep-scrub
A new test: auto_repair_bluestore_tag.
Based on auto_repair_bluestore_basic. Sets auto-repair, starts a periodic
deep-scrub, then verifies that the PG state, while scrubbing, is 'scrubbing+deep'
and not 'scrubbing+deep+repair'.
Ronen Friedman [Mon, 10 May 2021 13:15:16 +0000 (16:15 +0300)]
osd/scrub: separate between PG state flags and internal scrubber operation
Modify the scrubber to rely on internal flags for 'should we repair' and
'is this a deep scrub', instead of using PG_STATE_REPAIR & PG_STATE_DEEP_SCRUB.
This enables us to implement the 'fix-as-you-go deepscrub' functionality
of 'osd_scrub_auto_repair', without displaying REPAIR status to the user.
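A sketch of the separation (member names hypothetical):

    // sketch: scrub behavior keys off internal flags, while the PG_STATE_*
    // flags only carry what should be displayed to the user
    struct ScrubFlagsSketch {
      bool internal_deep = false;    // is this a deep scrub?
      bool internal_repair = false;  // fix errors as we go?
      bool publish_repair = false;   // only this drives PG_STATE_REPAIR

      void start_auto_repair_deep_scrub() {
        internal_deep = true;
        internal_repair = true;   // repair as part of the scrub...
        publish_repair = false;   // ...without showing 'repair' to the user
      }
    };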
Adam Kupczyk [Mon, 24 May 2021 12:27:05 +0000 (14:27 +0200)]
os/bluestore/bluefs: Add test that detects bluefs inconsistency
Add a test that detects a possible scenario that causes BlueFS to have
a file containing data that was never written. This is done by tricking
the replay log into accepting the file metadata (size, allocations)
while the actual data stored in those allocations is not yet synced to
disk.
Scenario:
1) write to file h1 on SLOW device
2) flush h1 (and trigger h1 mark to be added to bluefs replay log)
3) write to file h2
4) fsync h2 (forces replay log to be written)
The result is:
- bluefs log now has stable state of h1
- SLOW device is not yet flushed (no fdatasync())
Adam Kupczyk [Mon, 24 May 2021 12:49:51 +0000 (14:49 +0200)]
os/bluestore/bluefs: Remove possibility of bluefs replay log containing files without data
It had been possible for the bluefs replay log to serialize file
metadata (size, allocations) while the actual data stored in those
allocations was not yet synced to disk.
This could happen if _flush_range(h1) allocated space for file h1 on a
device (like SLOW) that will not be used when flushing the replay log
later. Such a thing can happen when we have an h2 that wrote to WAL
while our replay log is on DB. After fsync(h2) we write to the replay
log and wait for fdatasync on WAL and DB. There is no waiting on SLOW,
but h1 was dirty and has been serialized to the replay log.
The solution is to delay notifying the replay log that it has to include
h1 until after fdatasync finishes.
Fixes: https://tracker.ceph.com/issues/50965 Signed-off-by: Adam Kupczyk <akupczyk@redhat.com>
(cherry picked from commit 03ac53f7d4c83e56f664ad371ffe3bc2d40e1837)
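A sketch of the ordering fix (shapes hypothetical; the real code tracks
dirty BlueFS file handles):

    #include <vector>

    // sketch: a flushed file is parked, and only enters the replay log's
    // dirty set once the device-level fdatasync has completed, so the log
    // can never describe data that is not yet on disk
    struct ReplayLogSketch {
      std::vector<int> pending_dirty;  // flushed, data not yet durable
      std::vector<int> log_dirty;      // safe for the replay log to serialize

      void on_flush(int file) { pending_dirty.push_back(file); }

      void on_fdatasync_complete() {
        log_dirty.insert(log_dirty.end(),
                         pending_dirty.begin(), pending_dirty.end());
        pending_dirty.clear();
      }
    };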
Kefu Chai [Thu, 10 Jun 2021 12:19:09 +0000 (20:19 +0800)]
tasks/ceph_manager: ignore EACCES when waiting for quorum
mon_tick_interval is 5 seconds by default. monitors update their
rotating keys every mon_tick_interval. before monitors form a
quorum, the auth requests from clients are put into the wait list.
these requests are re-enqueued once the monitors form a quorum. but
there is a small window of mon_tick_interval before they are able
to serve the auth requests, even after they claim to be able to
serve requests. if these re-enqueued requests happen to be served
in this window, and if cephx is enabled, they will be greeted with
errors like
handle_auth_bad_method server allowed_methods [2] but i only support [2]
in the case of ceph cli, the error would look like:
[errno 13] RADOS permission denied (error connecting to the cluster)
so, to address this issue, the EACCES error is ignored when waiting
for a quorum.
ceph-monstore-tool: use a large enough paxos/{first,last}_committed
so the rebuilt paxos transaction won't be overwritten by the ones
created before recovery completes.
when the quorum is recovering, the leader will collect the paxos
transactions from peons. if the quorum accepts the proposal for setting
the fingerprint, the peon will update the monitor with a paxos
transaction with a newer "last_committed" than the one created using
update_paxos() in ceph_monstore_tool.cc. the latter "last_committed" is
always 0.
so, to avoid this extra paxos proposal obsoleting the "rebuilding" paxos
transaction, we use a large enough number for {first,last}_committed.
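the idea in miniature (the value and key layout are illustrative, not
the tool's exact transaction):

    #include <cstdint>
    #include <map>
    #include <string>

    // sketch: pick first/last_committed far above anything the recovering
    // quorum could have produced, so its own proposals cannot overwrite
    // the rebuilt transaction
    void rebuild_paxos_keys(std::map<std::string, std::uint64_t>& paxos) {
      constexpr std::uint64_t kLargeEnough = 1000000000;  // hypothetical
      paxos["first_committed"] = kLargeEnough;
      paxos["last_committed"]  = kLargeEnough;
    }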