Igor Fedotov [Mon, 3 Feb 2020 15:50:50 +0000 (18:50 +0300)]
os/bluestore: do not use 'unused' bitmap if it makes no sense.
The processing logic that relies on the 'unused' bitmap only makes sense for
a BlueStore setup where the min alloc size differs from the device block
size. It is now skipped when that is not the case.
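A minimal sketch of the guard this implies, with hypothetical names (not the actual BlueStore code):

    // Only consult the per-blob 'unused' bitmap when an allocation unit
    // spans multiple device blocks; if min_alloc_size == block_size the
    // bitmap cannot mark anything a block-granularity write can't.
    if (min_alloc_size != block_size) {
      reuse_unused_blob_regions(blob, b_off, b_len);  // hypothetical helper
    }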
Igor Fedotov [Mon, 3 Feb 2020 15:36:21 +0000 (18:36 +0300)]
os/bluestore: fix unused 'tail' calculation.
Fixes: https://tracker.ceph.com/issues/41901
Signed-off-by: Igor Fedotov <ifedotov@suse.com>
(cherry picked from commit c91cc3a8d689995e8554c41c9b0f652d9a3458da)
Conflicts:
src/test/objectstore/store_test.cc
- omitted test case "TEST_P(StoreTestSpecificAUSize, ReproBug41901Test)"
from the backport, because nautilus does not have the
"bluestore_debug_enforce_settings" option
Patrick Donnelly [Mon, 20 Jan 2020 19:23:09 +0000 (11:23 -0800)]
qa: log warning on scrub error
Instead of printing the (useless) traceback, just print a warning about
ignoring the failure. The traceback makes it harder to search for the
real problem in the teuthology log.
Fixes: https://tracker.ceph.com/issues/43718
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
(cherry picked from commit b7454e423620e829e7417cdfca1faf5cd91dec3f)
Conflicts:
qa/tasks/mon_thrash.py
- whereas master has "self.manager.raw_cluster_cmd('mon', 'scrub')" in
the try block, in nautilus it is only "self.manager.raw_cluster_cmd('scrub')"
Sage Weil [Tue, 28 Jan 2020 19:33:49 +0000 (13:33 -0600)]
osd: dispatch_context and queue split finish on early bail-out
If we bail out of advance_pg early because there is an upcoming merge, we
still need to dispatch_context() on rctx before we drop the PG lock. And
the rctx that we submit needs to include the on_applied finisher commit
to call _finish_splits.
This is noticeable (at least) when there is a split and merge that are
both known. When we process the split, the new child is added to new_pgs.
When we get to the merge epoch, we stop early and take the bail-out
path.
Fix by adding a dispatch_context call for this path, and further make
sure that both dispatch_context callers in this function queue up the
new_pgs event.
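A rough sketch of the shape of the fix, with simplified and partly hypothetical names:

    // advance_pg(): early bail-out when the next map epoch starts a merge.
    if (upcoming_merge(next_epoch)) {          // hypothetical predicate
      // Flush rctx before dropping the PG lock so the on_applied finisher
      // (which runs _finish_splits) is not lost, and queue any children
      // recorded in new_pgs while processing the earlier split.
      dispatch_context(rctx, pg, get_osdmap());
      if (!new_pgs.empty())
        queue_new_pgs_event(new_pgs);          // hypothetical helper
      return false;                            // resume once merge is ready
    }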
Matthew Oliver [Wed, 26 Feb 2020 06:15:22 +0000 (06:15 +0000)]
rgw: anonymous swift to obj that don't exist should 401
Currently, if you attempt to GET an object in the Swift API that
doesn't exist and you don't pass an `X-Auth-Token`, it will 404 instead
of 401.
This is actually a rather big problem, as it means someone can leak
data out of the cluster: not object data itself, but whether or not an
object exists.
This is caused by the SwiftAnonymousEngine's frankly wide-open
is_applicable acceptance. When we get to checking the bucket or object
for user acceptance we deal with it properly, but if the object doesn't
exist, then because the user has been "authorised", rgw returns a 404.
Why? Because we always override the user with the Swift account,
meaning that as far as the checks are concerned the auth user is the
user, not an anonymous user.
I assume this is because a Swift container could have world-readable
reads or writes, and slight divergences between the S3 and Swift APIs
can make these interesting edge cases leak in.
This patch stops changing the user to the Swift account when the
request is anonymous, so we can do some anonymous checks later in the
request processing path where it suits.
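As a sketch of the behavioural change (hypothetical code, not the actual rgw_swift_auth logic):

    // Previously the request's user was always overridden with the Swift
    // account, so later ACL checks never saw an anonymous identity and a
    // GET on a missing object 404'd even without an X-Auth-Token.
    if (!is_anonymous(s)) {          // hypothetical predicate
      user = s->account_name;        // authenticated: act as the account user
    }
    // Anonymous requests keep the anonymous user; the bucket/object
    // permission checks later can then return 401 instead of 404.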
Fixes: https://tracker.ceph.com/issues/43617
Signed-off-by: Matthew Oliver <moliver@suse.com>
(cherry picked from commit b03d9754e113d24221f1ce0bac17556ab0017a8a)
Conflicts:
src/rgw/rgw_swift_auth.h
- where master has "rgw_user(s->account_name)", nautilus has
"s->account_name" only
Laura Paduano [Wed, 13 May 2020 12:16:57 +0000 (14:16 +0200)]
Merge pull request #34450 from rhcs-dashboard/wip-44980-nautilus
nautilus: monitoring: Fix pool capacity incorrect
Reviewed-by: Alfonso Martínez <almartin@redhat.com>
Reviewed-by: Patrick Seidensal <pseidensal@suse.com>
Reviewed-by: Laura Paduano <lpaduano@suse.com>
Removed because those files are not available in Nautilus:
src/pybind/mgr/dashboard/frontend/src/app/shared/services/password-policy.service.ts
src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/telemetry/telemetry.component.ts
src/pybind/mgr/dashboard/frontend/src/app/ceph/shared/smart-list/smart-list.component.ts
Sage Weil [Thu, 27 Feb 2020 15:30:27 +0000 (09:30 -0600)]
compressor/lz4: rebuild if buffer is not contiguous
In older versions of lz4 (specifically < 1.8.2) bit errors
can be introduced when compressing from fragmented memory. The lz4
bug was fixed by this lz4 commit:
    The error can be reproduced using the following command:
        ./frametest -v -i100000000 -s1659 -t31096808
    It's actually a bug in the stream LZ4 API: when starting a new
    stream and providing a first chunk to complete with size < MINMATCH,
    the chunk becomes a dictionary. No hash was generated and stored,
    but the chunk is accessible, as default position 0 points to
    dictStart and position 0 is still within MAX_DISTANCE. Then the
    next attempt to read 32 bits from position 0 fails. The issue would
    have been mitigated by starting from index 64 KB, effectively
    eliminating position 0 as too far away. The proper fix is to
    eliminate such a "dictionary" as too small, which is what this
    patch does.
This is a workaround to rebuild our input buffer into a contiguous buffer
if it is not already contiguous.
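In bufferlist terms the workaround amounts to something like:

    // Coalesce a fragmented input bufferlist into one contiguous buffer
    // before handing it to the LZ4 streaming API, sidestepping the
    // lz4 < 1.8.2 fragmented-memory bug described above.
    if (!src.is_contiguous()) {
      src.rebuild();   // copies all fragments into a single buffer
    }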
Dan van der Ster [Wed, 26 Feb 2020 20:50:07 +0000 (21:50 +0100)]
test/compressor: test round trip of an osdmap
Check if the compressors can compress/decompress a bufferlist which is not word
aligned, such as a freshly-encoded osdmap.
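A condensed sketch of such a round-trip check, assuming a Compressor-style compress/decompress over bufferlists:

    bufferlist in, comp, out;
    osdmap.encode(in, CEPH_FEATURES_SUPPORTED_DEFAULT);  // not word aligned
    ASSERT_EQ(0, compressor->compress(in, comp));
    ASSERT_EQ(0, compressor->decompress(comp, out));
    ASSERT_TRUE(in.contents_equal(out));                 // round trip is lossless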
Related-to: https://tracker.ceph.com/issues/39525
Signed-off-by: Dan van der Ster <daniel.vanderster@cern.ch>
(cherry picked from commit 1b1c71a2c28c38d3e28f006b1cb164435a653c02)
Conflicts:
qa/suites/rbd/openstack/workloads/devstack-tempest-gate.yaml
- some difference compared to master, but the entire test is being deleted so
I didn't examine it further
Or Friedmann [Wed, 4 Sep 2019 13:34:52 +0000 (16:34 +0300)]
rgw: fix lc failing to delete objects whose tags are not exactly the same as the rule's
An object may have more tags than the rule that applies to it.
Previously such an object was not deleted unless all of its tags
exactly matched those in the rule.
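The corrected matching can be sketched as a subset check (hypothetical helper, not the actual lifecycle code):

    #include <map>
    #include <string>

    // A rule applies when every rule tag appears on the object; extra
    // object tags no longer prevent the rule (and deletion) from applying.
    bool rule_tags_match(const std::map<std::string, std::string>& rule_tags,
                         const std::map<std::string, std::string>& obj_tags) {
      for (const auto& t : rule_tags) {
        auto it = obj_tags.find(t.first);
        if (it == obj_tags.end() || it->second != t.second)
          return false;
      }
      return true;
    }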
S3-tests: ceph/s3-tests#303
Fixes: https://tracker.ceph.com/issues/41652
Signed-off-by: Or Friedmann <ofriedma@redhat.com>
(cherry picked from commit ebb806ba83fa9d68f14194b1f9886f21f7195a3d)
Xiubo Li [Tue, 31 Mar 2020 09:09:45 +0000 (05:09 -0400)]
Client: make sure the Finisher's mutex is not held while it is being destructed
The objecter_finisher is already started in Client::Client(), but in
the failure path when initializing and starting the Client object, we
may not get a chance to call Client::shutdown() to stop the Finisher
thread, which may still be holding its mutex lock. Then, when
destructing the Finisher object, pthread_mutex_destroy() will fail.
This fix delays starting the objecter_finisher thread until ::init(),
at which point we are ready to call Client::shutdown() on any errors
instead.
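Schematically (simplified, not the exact Client code):

    // Before: objecter_finisher.start() ran in Client::Client(), so an
    // error before init() could destroy a Finisher whose thread still
    // held its mutex, and pthread_mutex_destroy() would then fail.
    // After: start it in init(), whose error paths always reach shutdown().
    int Client::init() {
      objecter_finisher.start();   // moved here from the constructor
      // ... remaining initialization; shutdown() stops the finisher ...
      return 0;
    }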
mon/OSDMonitor: ensure lec only accounts for up osds
If we also consider down osds, we may very well be in a healthy state
but keeping maps as far back as the last epoch when a given osd went
down. If said osd stays down for eons, we will be keeping bajillions of
maps that we shouldn't.
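Illustratively (hypothetical names, not the actual OSDMonitor code):

    // Compute the last_epoch_clean floor over up osds only, so an osd
    // that stays down for eons no longer pins ancient maps.
    epoch_t floor = osdmap.get_epoch();
    for (const auto& p : last_epoch_clean) {   // map: osd id -> lec
      if (!osdmap.is_up(p.first))
        continue;                              // down osds don't hold back trimming
      floor = std::min(floor, p.second);
    }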
Conflicts:
src/osd/PeeringState.h
- not introducing this function since it is just a getter and it's not
clear where it should go in nautilus
src/osd/PrimaryLogPG.h
- use last_update_ondisk directly instead of via getter function
xie xingguo [Fri, 13 Mar 2020 00:45:52 +0000 (08:45 +0800)]
qa/osd-recovery: pass osd_pg_log_trim_min = 0 to exercise short pg logs
we set osd_min_pg_log_entries to 2 (good) but not osd_pg_log_trim_min,
which defaults to 100. Thus, even on those tests we're only rarely
vulnerable. Reset osd_pg_log_trim_min to 0 to make sure we really
would keep a minimal pg log in hand.
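An illustrative override in the style of the qa suites (assumed shape, not the exact yaml from these tests):

    overrides:
      ceph:
        conf:
          osd:
            osd min pg log entries: 2
            osd pg log trim min: 0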
xie xingguo [Thu, 12 Mar 2020 23:59:07 +0000 (07:59 +0800)]
qa/short_pg_log: pass osd_pg_log_trim_min = 0 to exercise short pg logs
we set osd_min_pg_log_entries to 2 (good) but not osd_pg_log_trim_min,
which defaults to 100. Thus, even on those tests we're only rarely
vulnerable. Reset osd_pg_log_trim_min to 0 to make sure we really
keep a minimal pg log in hand.
xie xingguo [Thu, 12 Mar 2020 10:01:45 +0000 (18:01 +0800)]
osd/PeeringState: do not trim pg log past last_update_ondisk
Trimming past last_update_ondisk would be really bad: e.g., a new
interval change would cancel & redo a previous op, and if we trim past
last_update_ondisk there could be object inconsistencies, as log
merging won't necessarily be able to find all divergent entries later
(we would have lost track of the unfinished op that should really be
reverted).
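The guard can be sketched as a simple clamp (hypothetical names):

    // Never trim past what has been persisted to disk; divergent entries
    // beyond last_update_ondisk must remain findable by log merging.
    eversion_t trim_target = std::min(trim_to, last_update_ondisk);
    pg_log.trim(trim_target, info);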
J. Eric Ivancich [Fri, 10 Jan 2020 19:12:35 +0000 (14:12 -0500)]
rgw: clean up address 0-length listing results...
Some minor clean-ups to the previous commit, including adjusting logging
messages, renaming a variable, and converting a #define to a constexpr
(and adjusting its scope).
J. Eric Ivancich [Thu, 13 Feb 2020 01:38:44 +0000 (20:38 -0500)]
rgw: address 0-length listing results when non-vis entries dominate
This change advances the marker in RGWRados::cls_bucket_list_ordered
to the last entry visited, rather than the final entry in the list, to
push progress as far as possible.
Since non-visible entries tend to cluster on the same shard, such as
during incomplete multipart uploads, this can severely limit the
number of entries returned by a call to
RGWRados::cls_bucket_list_ordered, since once that shard has provided
all its members we must stop. This interacts with a recent
optimization that reduced the number of entries requested from each
shard. To address this, the attempt number is sent as a parameter, so
the number of entries requested from each shard can grow with each
attempt. Currently the growth is linear, but perhaps exponential
growth (capped at the number of entries requested) should be
considered.
Previously RGWRados::Bucket::List::list_objects_ordered was capped at
2 attempts, but now we keep attempting, to ensure we make forward
progress and return entries when some exist. If we fail to make
forward progress, we log the error condition and stop looping.
Additional logging, mostly at level 20, is added to the two key
functions involved in ordered bucket listing to make it easier to
follow the logic and address potential future issues that might arise.
Additionally, modify the attempt number based on how many results were
received, and change the per-shard request number so it grows
exponentially rather than linearly as the attempts go up.
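A toy sketch of the per-attempt growth (hypothetical; capped at the client's requested count):

    #include <algorithm>
    #include <cstdint>

    // Ask each shard for exponentially more entries on each retry, so a
    // listing dominated by non-visible entries still makes progress.
    // Attempt numbering starts at 1.
    uint32_t per_shard_ask(uint32_t requested, uint32_t shards, int attempt) {
      uint32_t base = std::max(requested / std::max(shards, 1u), 8u); // 8: arbitrary floor
      uint64_t ask = uint64_t(base) << (attempt - 1);                 // doubles per attempt
      return static_cast<uint32_t>(std::min<uint64_t>(ask, requested));
    }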
J. Eric Ivancich [Mon, 14 Oct 2019 20:21:35 +0000 (16:21 -0400)]
rgw: reduce per-shard entry count during ordered bucket listing
Currently, if a client requests the 1000 next entries from a bucket,
each bucket index shard will receive a request for the 1000 next
entries. When there are hundreds, thousands, or tens of thousands of
bucket index shards, this results in a huge amplification of the
request, even though only 1000 entries will be returned.
These changes reduce the per-bucket-index-shard requests. They also
allow re-requests in edge cases where all of one shard's returned
entries are consumed. Finally, these changes improve the determination
of whether the resulting list is truncated.
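To put numbers on the amplification: if a client asks for 1000 entries from a bucket with, say, 128 index shards (an illustrative figure), requesting 1000 entries from every shard can fetch up to 128,000 entries just to return 1000. Requesting roughly 1000/128 ≈ 8 entries (plus some headroom) from each shard bounds the work, with the re-request path covering the edge case where one shard's results are fully consumed.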