The `ceph-volume lvm batch --auto` behavior introduced by [1] breaks backward
compatibility when using only non-rotational devices (SSD and/or NVMe).
Those devices are reassigned as bluestore db or filestore journal
devices while we want them as data devices.
mgr/cephadm: do not configure Dashboard Ganesha settings
The Dashboard can get cluster information from the Orchestrator.
For settings that were set by previous revisions, the Dashboard will
check them and ask the user to remove them.
mgr/dashboard: support Orchestrator and user-defined Ganesha clusters
This change makes the Dashboard support two types of Ganesha clusters:
- Orchestrator clusters (since Octopus)
  - Deployed by the Orchestrator.
  - The Dashboard gets the pool/namespace that stores Ganesha
    configuration objects from the Orchestrator.
  - The Dashboard gets the daemons in a cluster from the Orchestrator.
- User-defined clusters (since Nautilus)
  - Clusters defined by using the `ceph dashboard
    set-ganesha-clusters-rados-pool-namespace` command are treated as
    user-defined clusters.
  - Each daemon has its own RADOS configuration objects. The
    Dashboard uses these objects to deduce daemons.
Conflicts:
src/pybind/mgr/dashboard/openapi.yaml
- We don't have the openapi-check feature in Octopus, so the file
is removed in the backport.
src/pybind/mgr/dashboard/services/ganesha.py
src/pybind/mgr/dashboard/tests/test_ganesha.py
- The conflicts are mainly caused by code reformatting in master.
Nathan Cutler [Tue, 27 Oct 2020 20:40:38 +0000 (21:40 +0100)]
doc/PendingReleaseNotes: clean up for 15.2.6
This commit drops release notes that have already been published and
organizes the remaining release notes under a heading so it is clear
they are targeting the 15.2.6 release.
Conflicts:
src/test/mon/MonMap.cc
- do not attempt to introduce boost::intrusive_ptr into Nautilus
- monmap.build_initial takes bare cct in nautilus (master: cct.get())
Dan van der Ster [Tue, 13 Oct 2020 07:08:12 +0000 (09:08 +0200)]
mds: account for closing sessions in hit_session
While stopping an MDS, we can reply to a request while all client
sessions are closing. We shouldn't assert in this case.
Fixes: https://tracker.ceph.com/issues/47833
Signed-off-by: Dan van der Ster <daniel.vanderster@cern.ch>
(cherry picked from commit 6823d8fb619c07b4e749ae564df565eadc59c187)
Jason Dillaman [Tue, 13 Oct 2020 01:34:25 +0000 (21:34 -0400)]
librbd: update AioCompletion return value before evaluating pending count
If the pending count is decremented before the return value is updated,
there is a possibility of two ASIO threads concurrently decrementing the
pending count down from 2 -> 1 -> 0. In the second thread (the one that
performs the final decrement from 1 -> 0), it can finalize the completion
before the first thread has had a chance to update the return value.
Fixes: https://tracker.ceph.com/issues/47847
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
(cherry picked from commit 94f3bce53c39017028ce44a80697f55af2a82e68)
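The ordering is easier to see in a toy model. A minimal Python sketch (not the librbd code), assuming a completion finished by two worker threads, where whichever thread performs the final decrement finalizes the result:

    import threading

    class Completion:
        """Toy stand-in for an AIO completion finished by several worker threads."""
        def __init__(self, pending):
            self._pending = pending            # outstanding sub-requests
            self._lock = threading.Lock()      # stands in for the atomic decrement
            self.rval = 0                      # aggregated return value
            self.finished = threading.Event()

        def complete_request(self, rval):
            # Fold this sub-request's result into the completion *before*
            # dropping the pending count; once the count hits zero either
            # thread may finalize the completion and read self.rval.
            if rval < 0 and self.rval == 0:
                self.rval = rval
            with self._lock:
                self._pending -= 1
                last = self._pending == 0
            if last:
                self.finished.set()            # rval is already up to date here

    comp = Completion(pending=2)
    threads = [threading.Thread(target=comp.complete_request, args=(rv,))
               for rv in (0, -5)]
    for t in threads:
        t.start()
    comp.finished.wait()
    assert comp.rval == -5                     # the error result is never lost

If the decrement happened first, the thread that sees the count reach zero could finalize while rval is still 0, and the error from the slower thread would be dropped.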
Jason Dillaman [Fri, 16 Oct 2020 15:25:39 +0000 (11:25 -0400)]
journal: possible race condition between flush and append callback
When notifying the journal recorder of an overflow, or when the object
close request has completed because there is no more in-flight IO,
a race between a flush request and the processing of an append
completion could attempt to kick off duplicate notifications.
Since the overflowed and closed callbacks are properly protected from
duplicates, use a counter instead of a boolean to track possible
in-flight handler callbacks.
Fixes: https://tracker.ceph.com/issues/47880
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
(cherry picked from commit 458ab997fe77ea78803a34c6c9715225aa3413ba)
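A rough Python sketch of the counter idea (illustrative only, not the journal ObjectRecorder code): each potentially-notifying path bumps an in-flight counter, and only the path that drops it back to zero fires the notification, so overlapping flush and append handlers cannot fire it twice.

    import threading

    class NotifyGate:
        """Only the last in-flight handler triggers the pending notification."""
        def __init__(self, notify):
            self._lock = threading.Lock()
            self._in_flight = 0        # a counter, not a boolean flag
            self._pending = False
            self._notify = notify

        def enter(self):
            with self._lock:
                self._in_flight += 1

        def exit(self, wants_notify=False):
            with self._lock:
                self._in_flight -= 1
                self._pending = self._pending or wants_notify
                fire = self._pending and self._in_flight == 0
                if fire:
                    self._pending = False
            if fire:
                self._notify()         # fires exactly once, outside the lock

With a plain boolean, two overlapping handlers can each decide they are "the" notifier; the counter makes that decision unambiguous.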
KeyCount should return the object count plus the common prefix count.
See the S3 example: https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html#API_ListObjectsV2_Example_5
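The expected semantics can be checked against any S3-compatible endpoint with boto3 (the bucket, prefix and delimiter below are placeholders):

    import boto3

    s3 = boto3.client("s3")  # endpoint/credentials for the RGW under test assumed configured
    resp = s3.list_objects_v2(Bucket="example-bucket", Prefix="photos/", Delimiter="/")

    # With a delimiter, KeyCount covers both the returned objects and the
    # rolled-up common prefixes.
    keys = len(resp.get("Contents", []))
    prefixes = len(resp.get("CommonPrefixes", []))
    assert resp["KeyCount"] == keys + prefixes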
rgw/gc: fix for incrementing the perf counter 'gc_retire_object'
in the new gc queue code for omap offload, when gc objects are deleted
from the queue. This was initially missed.
rgw/gc: fix the condition in which the marker for a queue is
always reset to empty, which causes RGWGC::list to get stuck in
a loop that is only broken out of when the queue's truncated
flag is false.
1. Also check the entries size while evaluating whether the objects cache for
a gc object should be marked as 'transitioned' in the case of cls_rgw_gc_list.
When there are no entries, we get back a return value of 0 and the
object cache is not marked as 'transitioned'.
2. For the last gc object, we also need to check whether the queue is still
under process and set the correct flag.
Missing the two conditions above causes RGWGC::list to loop continuously
over the same gc object.
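The intended loop shape, as a hedged Python sketch (names and return values are illustrative; the real logic lives in cls_rgw and RGWGC::list):

    def list_all_gc_entries(gc_objects, list_queue, mark_transitioned):
        """Illustrative pagination over per-queue gc entries.

        list_queue(obj, marker) stands in for cls_rgw_gc_list and is assumed to
        return (ret, entries, truncated, next_marker, still_processing);
        mark_transitioned(obj) records that a queue's cache is fully read."""
        out = []
        for i, obj in enumerate(gc_objects):
            marker = ""
            while True:
                ret, entries, truncated, marker, processing = list_queue(obj, marker)
                # Condition 1: ret == 0 with no entries still means this queue is
                # drained, so mark it transitioned instead of re-listing it forever.
                if ret == 0 and not entries:
                    mark_transitioned(obj)
                    break
                out.extend(entries)
                if not truncated:
                    # Condition 2: for the last gc object, only mark it
                    # transitioned once the queue is no longer under process.
                    if i < len(gc_objects) - 1 or not processing:
                        mark_transitioned(obj)
                    break
        return out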
Xiubo Li [Tue, 20 Oct 2020 05:26:33 +0000 (01:26 -0400)]
qa/cephfs: add session_timeout option support
When the MDS is revoking the Fwbl caps, the clients need to flush
the dirty data back to the OSDs, but the flush may overload the OSDs
and make them slow, so it may take more than 60 seconds to finish.
The MDS daemons will then report WRN messages.
For the teuthology test cases, let's just increase the timeout
value to make it work.
jinmyeonglee [Thu, 22 Oct 2020 08:25:51 +0000 (17:25 +0900)]
vstart.sh: fix fs set max_mds bug
Fix a bug where the name used when creating a volume and the name used when setting max_mds were different.
Fixes: https://tracker.ceph.com/issues/47946
Signed-off-by: Jinmyeong Lee <jinmyeong.lee@linecorp.com>
(cherry picked from commit 6a9445c2cbe6c0c7045bfaed007cc1920ad132ed)
qa/workunits/rbd: yet another attempt to improve rbd-nbd unmap
Previously it could still race: unmap_device returned success because
the device was not found in `rbd-nbd list-mapped` (the nbd device had
been removed), but the test failed because the process was still
found in the ps table.
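The race-free check amounts to waiting for both conditions, as in this Python sketch (the real test is a shell workunit; the commands and timeout here are illustrative):

    import subprocess
    import time

    def wait_unmapped(dev, pid, timeout=30.0):
        """Return once the nbd device is gone *and* the rbd-nbd process exited."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            mapped = subprocess.run(["rbd-nbd", "list-mapped"],
                                    capture_output=True, text=True).stdout
            alive = subprocess.run(["ps", "-p", str(pid)],
                                   capture_output=True).returncode == 0
            if dev not in mapped and not alive:
                return True
            time.sleep(0.5)
        return False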
rgw: radosgw-admin should paginate internally when listing bucket
Currently `radosgw-admin bucket list ...`, when listing a bucket, asks
for the value of "--max-entries" internally. To list a large bucket
entirely the user would have to set "--max-entries" to a large value
(e.g., 10000000). Internally this doesn't paginate, so it will try to
produce the entire list at once. This can consume a lot of memory, and
there are known cases where this induces an out-of-memory crash.
So now we set a maximum pagination size of 10,000. Even with
large values of "--max-entries" it will still be able to produce the
full listing without stressing memory, because it will ask for at most
10,000 entries at a time.
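The effect, reduced to a generic pagination loop in Python (the constant and the fetch callback are illustrative, not the radosgw-admin code):

    MAX_CHUNK = 10_000   # per-request cap, regardless of --max-entries

    def list_bucket(fetch_page, max_entries):
        """fetch_page(marker, count) is assumed to return
        (entries, next_marker, truncated)."""
        listed, marker = [], ""
        while len(listed) < max_entries:
            want = min(MAX_CHUNK, max_entries - len(listed))
            entries, marker, truncated = fetch_page(marker, want)
            listed.extend(entries)
            if not truncated:
                break
        return listed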
Or Friedmann [Thu, 23 Jul 2020 15:36:07 +0000 (18:36 +0300)]
rgw: fix expiration header returned even if there is only one tag in the object the same as the rule
The expiration header was returned even if there was only one tag in the object the same as the rule.
Signed-off-by: Or Friedmann <ofriedma@redhat.com>
Reported-by: Avi Mor <avmor@redhat.com>
Fixes: https://tracker.ceph.com/issues/46614
(cherry picked from commit bf7c7e59f390afb53cb1e30a440ab26bb093c11c)
rgw: rgw-orphan-list should use "plain" formatted `rados ls` output
The previous version that used "json-pretty" output for `rados ls`
added complications due to json's escaping of special characters. So
this version returns to the "plain" output for `rados ls` but deals
with entries (oids) that might have namespaces and/or locators as
well.
rgw: allow rgw-orphan-list to note when rados objects are in namespace
Currently namespaces and locators are ignored when `rados ls` is run
by rgw-orphan-list to record RADOS's known objects.
However, there have been cases where RADOS objects have a locator, and
when one is included in the listing, the script does not handle it
correctly. Now, when objects have locators, we will prevent their
output from entering the .intermediate file.
Additionally we do not expect RGW data objects to be in RADOS
namespaces, so when a namespaced object is detected, we'll error out
with a message.
Conflicts:
src/pybind/mgr/dashboard/frontend/src/app/ceph/shared/smart-list/smart-list.component.html
src/pybind/mgr/dashboard/frontend/src/app/ceph/shared/smart-list/smart-list.component.ts
- Use ngx-bootstrap tabset for tabs.
Patrick Donnelly [Tue, 13 Oct 2020 17:09:41 +0000 (10:09 -0700)]
qa: set rados op timeouts for mds/ceph-fuse
Now that the osdc Objecter obeys updates to these configs, let's use
them to avoid having them block forever on operations that may never
complete (or should complete in a timely manner).
Fixes: https://tracker.ceph.com/issues/47734
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
(cherry picked from commit d060c9a768c6974d3b68e4d408edf78bba9e0e85)
Have the Objecter track the rados_(mon|osd)_op_timeout configs so that
they can be configured at runtime and at startup. This is useful for the
MDS/ceph-fuse so that we can avoid waiting forever for a response from
the Monitors that will never come (e.g. statfs on a deleted file system's
pools).
Also: make these configs take a time value rather than a double. This is
simpler to deal with in the code and allows time units to be used (e.g.
"5m" for 5 minutes).
Fixes: https://tracker.ceph.com/issues/47734
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
(cherry picked from commit a8a23747aa081d938c9b277ab42507dd506bf6c2)
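For illustration only, a tiny Python parser showing the kind of values a time-typed option accepts (this is not Ceph's option parsing, and the unit suffixes it accepts may differ):

    import re

    _UNITS = {"": 1, "s": 1, "m": 60, "h": 3600, "d": 86400}

    def parse_timespan(text):
        """Convert strings like "30", "5m" or "2h" into seconds."""
        m = re.fullmatch(r"(\d+(?:\.\d+)?)\s*([smhd]?)", text.strip())
        if not m:
            raise ValueError(f"invalid time value: {text!r}")
        return float(m.group(1)) * _UNITS[m.group(2)]

    assert parse_timespan("5m") == 300.0    # e.g. rados_osd_op_timeout = 5m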
Kotresh HR [Tue, 4 Aug 2020 07:57:53 +0000 (13:27 +0530)]
mgr/volumes: Make number of cloner threads configurable
The number of cloner threads is set to 4 and it can't be
configured. This patch makes the number of cloner threads
configurable via the mgr config option "max_concurrent_clones".
On an increase in the number of cloner threads, it will just
spawn the difference between the existing number of
cloner threads and the new configuration. It will not cancel
the running cloner threads.
On a decrease in the number of cloner threads, the cases are as follows.
1. If all cloner threads are waiting for a job:
In this case, all threads are notified and the required number of
threads are terminated.
2. If all the cloner threads are processing a job:
In this case, the condition is validated for each thread after
the current job is finished, and the thread is terminated if the
condition for the required number of cloner threads is not satisfied.
3. If a few cloner threads are processing and others are waiting:
The threads which are waiting are notified to validate the
number of threads required. If terminating those doesn't satisfy the
required number of threads, the remaining threads are terminated
upon completion of their current job.
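A condensed Python sketch of the resize behaviour described above (illustrative only; the names and structure do not match the mgr/volumes cloner code):

    import queue
    import threading

    class ClonerPool:
        """Worker pool whose thread count can be raised or lowered at runtime."""
        def __init__(self, nthreads):
            self.jobs = queue.Queue()
            self.cv = threading.Condition()
            self.target = 0
            self.workers = []
            self.resize(nthreads)

        def resize(self, nthreads):
            with self.cv:
                spawn = max(0, nthreads - len(self.workers))
                self.target = nthreads
                self.cv.notify_all()            # wake idle workers so extras can exit
            for _ in range(spawn):              # growing only spawns the difference
                t = threading.Thread(target=self._run, daemon=True)
                with self.cv:
                    self.workers.append(t)
                t.start()

        def _over_target(self):
            return len(self.workers) > self.target

        def _run(self):
            me = threading.current_thread()
            while True:
                with self.cv:
                    while True:
                        if self._over_target():
                            self.workers.remove(me)
                            return              # case 1: idle worker exits now
                        try:
                            job = self.jobs.get_nowait()
                            break
                        except queue.Empty:
                            self.cv.wait(timeout=1)
                job()                           # running jobs are never cancelled
                with self.cv:
                    if self._over_target():     # cases 2 and 3: re-check after the job
                        self.workers.remove(me)
                        return

        def submit(self, job):
            self.jobs.put(job)
            with self.cv:
                self.cv.notify()

Shrinking never interrupts a running clone; a busy worker only re-evaluates the target after its current job completes.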