Dan Mick [Thu, 6 Aug 2020 02:05:10 +0000 (02:05 +0000)]
cmake: don't include tags for Python imports, .tox, build/ dirs
For things like cephadm, where there is a lot of "from X import Y",
the import tags become cumbersome. The .tox dirs and
python-common/build are just repeats of source files found elsewhere,
so they result in duplicate tags.
Patrick Donnelly [Mon, 10 Aug 2020 20:40:36 +0000 (13:40 -0700)]
Merge PR #36221 into master
* refs/pull/36221/head:
client: switch lock_guard to scoped_lock
client: remove useless unsafe_sync_write
client: make the root member under the client_lock
client: add mount/initialize states support and convert to RWRef
client: add RWRef support
Reviewed-by: Jeff Layton <jlayton@redhat.com>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Kefu Chai [Fri, 7 Aug 2020 08:49:58 +0000 (16:49 +0800)]
common/ConfUtils: expose parse_buffer()
so it can be used by crimson for reading settings from a conf file.
crimson reads the file with an async read. Since we already have a
specialized ConfigProxy for crimson, it is simpler to expose the shared
facility from ConfUtils than to add a crimson-specific specialization
inside ConfUtils.
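As a rough, hedged sketch of the idea (an ini-style buffer parsed entirely
in memory), the stand-in below illustrates what such a shared facility does;
the name parse_buffer_sketch and its signature are assumptions for
illustration, not the real ConfUtils API:

// Simplified stand-in for the exposed facility: parse an already-read,
// in-memory buffer of "[section]" / "key = value" lines into settings, so
// an async reader (crimson) can reuse the parser instead of re-reading the
// file itself. Names and signature are illustrative, not the ConfUtils ones.
#include <iostream>
#include <map>
#include <sstream>
#include <string>
#include <string_view>

using section_map_t = std::map<std::string, std::map<std::string, std::string>>;

section_map_t parse_buffer_sketch(std::string_view buf) {
  section_map_t sections;
  std::string section = "global";
  std::istringstream in{std::string{buf}};
  for (std::string line; std::getline(in, line); ) {
    if (line.empty() || line.front() == '#' || line.front() == ';')
      continue;
    if (line.front() == '[' && line.back() == ']') {
      section = line.substr(1, line.size() - 2);
    } else if (auto eq = line.find('='); eq != std::string::npos) {
      auto trim = [](std::string s) {
        s.erase(0, s.find_first_not_of(" \t"));
        s.erase(s.find_last_not_of(" \t") + 1);
        return s;
      };
      sections[section][trim(line.substr(0, eq))] = trim(line.substr(eq + 1));
    }
  }
  return sections;
}

int main() {
  // the buffer would come from crimson's async file read
  auto settings = parse_buffer_sketch("[global]\nfsid = 1234\n");
  std::cout << settings["global"]["fsid"] << "\n";   // prints "1234"
}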
Kefu Chai [Fri, 7 Aug 2020 08:28:30 +0000 (16:28 +0800)]
crimson/osd: use "ceph" for default cluster_name
for a couple of reasons:
* to avoid the pain of guessing / updating the cluster name when loading
the conf file. In the past, we used "ceph" as a fallback for the cluster
name, which is in turn used as a meta name when expanding setting
values containing "$cluster". So to ensure those settings continue
working, we have to at least set cluster_name to a safe value, and
"ceph" is what we've been using.
* in
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2017-June/018519.html,
we decided to drop the cluster_name support in deploy tools, and to
keep this feature in code. So in the long run, we can assume that
new clusters will all be named "ceph".
* it's difficult to properly implement the feature of using "ceph"
when no --conf option is specified, as, in Ceph, even the path pointing
to the conf file is allowed to contain "$cluster". So, to get it
right, we would need to update the cluster name stored in the options
before reading the option files, and this forces us to populate the
setting twice when reading the settings from a conf file. Or we could
specialize the process, but I don't think it is worth the effort.
So I think we can just use "ceph" for the cluster name in crimson.
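To make the "$cluster" dependency concrete, here is a small illustrative
sketch of expanding the meta variable in a setting value such as the default
conf path (not the actual md_config/ConfigValues code):

// Illustrative only: expand "$cluster" in a setting value, e.g. the default
// conf path, using the cluster name. With the name fixed to "ceph", the
// chicken-and-egg problem of needing the name before reading the conf file
// (whose path may itself contain "$cluster") goes away.
#include <iostream>
#include <string>

std::string expand_cluster(std::string value, const std::string& cluster) {
  const std::string meta = "$cluster";
  for (auto pos = value.find(meta); pos != std::string::npos;
       pos = value.find(meta, pos + cluster.size())) {
    value.replace(pos, meta.size(), cluster);
  }
  return value;
}

int main() {
  // crimson fixes the cluster name to "ceph"
  std::cout << expand_cluster("/etc/ceph/$cluster.conf", "ceph") << "\n";
  // prints: /etc/ceph/ceph.conf
}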
* refs/pull/36501/head:
qa: add tests for mds_min_caps_working_set
mds: add working set minimum for caps
qa: use config_set/config_get
qa: do not append file names to dirname
qa: add exception for test timeouts
A client may hold many inodes pinned in its cache for open files. That
client may be unable to release those caps to respond to cache pressure
from the MDS (or quiescent client cap recall). We should not complain if
that number of capabilities is reasonable (< 10k by default).
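As a rough sketch of the check described above (illustrative only, not the
actual MDS code, and assuming the 10k default comes from
mds_min_caps_working_set):

// Rough illustration of the described behaviour: only flag a client that
// fails to respond to cap recall if its cap count exceeds a working-set
// minimum (assumed default 10000, per the "< 10k by default" note above).
#include <cstdint>

bool should_warn_about_client(uint64_t num_caps,
                              uint64_t min_caps_working_set = 10000) {
  return num_caps > min_caps_working_set;
}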
Fixes: https://tracker.ceph.com/issues/46830
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
Otherwise the files generated are not actually under the sub-directory!
This is correcting a confusing aspect of the test infrastructure but
doesn't actually require any changes to the tests.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
librbd/cache: Fix scoping issue with lambda capture renaming
Clang complains:
/home/jenkins/workspace/ceph-master-compile/src/librbd/cache/ParentCacheObjectDispatch.cc:97:66: error: 'object_off' in capture list does not name a variable
([this, read_data, dispatch_result, on_dispatched, object_no, object_off,
^
/home/jenkins/workspace/ceph-master-compile/src/librbd/cache/ParentCacheObjectDispatch.cc:98:6: error: 'object_len' in capture list does not name a variable
object_len, snap_id, &parent_trace](ObjectCacheRequest* ack) {
^
/home/jenkins/workspace/ceph-master-compile/src/librbd/cache/ParentCacheObjectDispatch.cc:99:41: error: reference to local binding 'object_off' declared in enclosing function 'librbd::cache::ParentCacheObjectDispatch::read'
handle_read_cache(ack, object_no, object_off, object_len, snap_id,
^
/home/jenkins/workspace/ceph-master-compile/src/librbd/cache/ParentCacheObjectDispatch.cc:81:9: note: 'object_off' declared here
auto [object_off, object_len] = extents.front();
^
/home/jenkins/workspace/ceph-master-compile/src/librbd/cache/ParentCacheObjectDispatch.cc:99:53: error: reference to local binding 'object_len' declared in enclosing function 'librbd::cache::ParentCacheObjectDispatch::read'
handle_read_cache(ack, object_no, object_off, object_len, snap_id,
^
/home/jenkins/workspace/ceph-master-compile/src/librbd/cache/ParentCacheObjectDispatch.cc:81:21: note: 'object_len' declared here
auto [object_off, object_len] = extents.front();
^
/home/jenkins/workspace/ceph-master-compile/src/librbd/cache/ParentCacheObjectDispatch.cc:99:41: error: reference to local binding 'object_off' declared in enclosing function 'librbd::cache::ParentCacheObjectDispatch<librbd::ImageCtx>::read'
handle_read_cache(ack, object_no, object_off, object_len, snap_id,
^
/home/jenkins/workspace/ceph-master-compile/src/librbd/cache/ParentCacheObjectDispatch.cc:242:31: note: in instantiation of member function 'librbd::cache::ParentCacheObjectDispatch<librbd::ImageCtx>::read' requested here
template class librbd::cache::ParentCacheObjectDispatch<librbd::ImageCtx>;
^
/home/jenkins/workspace/ceph-master-compile/src/librbd/cache/ParentCacheObjectDispatch.cc:81:9: note: 'object_off' declared here
auto [object_off, object_len] = extents.front();
^
/home/jenkins/workspace/ceph-master-compile/src/librbd/cache/ParentCacheObjectDispatch.cc:99:53: error: reference to local binding 'object_len' declared in enclosing function 'librbd::cache::ParentCacheObjectDispatch<librbd::ImageCtx>::read'
handle_read_cache(ack, object_no, object_off, object_len, snap_id,
^
/home/jenkins/workspace/ceph-master-compile/src/librbd/cache/ParentCacheObjectDispatch.cc:81:21: note: 'object_len' declared here
auto [object_off, object_len] = extents.front();
^
6 errors generated.
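Below is a minimal reproduction of the diagnostic and one common workaround
(copying the structured bindings into plain locals before capturing); it is
an illustration, not necessarily the exact change applied in this commit:

// Minimal reproduction: in C++17 a structured binding cannot be named in a
// lambda capture list, which Clang enforces. Copying the bindings into plain
// locals (or using init-captures) before capturing avoids the error.
#include <cstdint>
#include <utility>
#include <vector>

int main() {
  std::vector<std::pair<uint64_t, uint64_t>> extents = {{0, 4096}};
  auto [object_off, object_len] = extents.front();

  // Clang/C++17: error: 'object_off' in capture list does not name a variable
  // auto bad = [object_off, object_len] { return object_off + object_len; };

  uint64_t off = object_off;
  uint64_t len = object_len;
  auto good = [off, len] { return off + len; };
  return static_cast<int>(good() != 4096);   // 0 on success
}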
Fixes: https://github.com/ceph/ceph/pull/36145
Signed-off-by: Willem Jan Withagen <wjw@digiware.nl>
Xiubo Li [Sun, 2 Aug 2020 00:14:36 +0000 (08:14 +0800)]
client: add RWRef support
This is a common read/write reference framework, which works like a
rw lock, except that readers and writers do not hold the lock but
increase a reference count instead.
With this we can get rid of big locks in some use cases, such as in
Client.cc, where for example the file read()/write() must hold the
client_lock from the beginning until it finishes, to make sure
_unmount() won't release the resources the read()/write() are still
using. With this it becomes possible to break up the big client_lock.
The usage, for example in Client.cc:
Readers:
For read()/write(), etc., which act as "readers": at the beginning
they just check whether the state satisfies what the "readers" need;
if not, they return directly, otherwise they increase the reference
and continue.
Writers:
For _unmount(), as the "writer": at the beginning it just updates
the state to the next stage and then waits for all the "readers" to
finish. After the "writer" has updated the state, all newly arriving
"readers" will fail and return directly.
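A minimal sketch of that pattern (state check plus reference count guarded
by a mutex and condition variable); this is only an illustration of the idea,
not the actual RWRef implementation:

// Minimal sketch of the reader/writer-reference pattern described above.
#include <condition_variable>
#include <cstdint>
#include <mutex>

enum class State { MOUNTED, UNMOUNTING, UNMOUNTED };

struct RWRefSketch {
  std::mutex lock;
  std::condition_variable cond;
  State state = State::MOUNTED;
  uint64_t readers = 0;

  // "reader" side, e.g. read()/write(): bail out unless the state is still
  // acceptable, otherwise take a reference instead of holding a big lock.
  bool get_read_ref() {
    std::scoped_lock l{lock};
    if (state != State::MOUNTED)
      return false;            // caller returns directly
    ++readers;
    return true;
  }
  void put_read_ref() {
    std::scoped_lock l{lock};
    if (--readers == 0)
      cond.notify_all();
  }

  // "writer" side, e.g. _unmount(): advance the state first so new readers
  // fail fast, then wait for the existing readers to finish.
  void begin_write(State next) {
    std::unique_lock l{lock};
    state = next;
    cond.wait(l, [this] { return readers == 0; });
  }
};

int main() {
  RWRefSketch ref;
  if (ref.get_read_ref()) {              // a read() starts
    ref.put_read_ref();                  // ...and finishes
  }
  ref.begin_write(State::UNMOUNTING);    // _unmount() can now proceed
}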
Fixes: https://tracker.ceph.com/issues/46649
Signed-off-by: Xiubo Li <xiubli@redhat.com>
crimson/osd: spawn osd_op_params in do_write_op(). Fix overriding.
This commit deduplicates `OpsExecuter::do_osd_op()` by moving the
`std::optional`-typed `osd_op_params` instantiation into `do_write_op()`.
It also fixes an issue where the `clean_regions` of `osd_op_params` was
reset on every write, writefull or truncate.
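A hedged sketch of the shape of that change (names simplified, not matching
the crimson sources exactly):

// Illustrative only: instantiate the optional osd_op_params once in the
// write path, so per-op handlers update the shared instance instead of
// re-creating it (which would wipe accumulated state like clean_regions).
#include <optional>

struct osd_op_params_sketch_t {
  unsigned dirty_extents = 0;   // stand-in for the real clean_regions state
};

struct OpsExecuterSketch {
  std::optional<osd_op_params_sketch_t> osd_op_params;

  template <typename WriteOp>
  void do_write_op(WriteOp&& op) {
    if (!osd_op_params) {
      osd_op_params.emplace();   // created here, once per request
    }
    op(*osd_op_params);          // each write op accumulates into it
  }
};

int main() {
  OpsExecuterSketch exec;
  exec.do_write_op([](auto& p) { ++p.dirty_extents; });   // write
  exec.do_write_op([](auto& p) { ++p.dirty_extents; });   // writefull
  // dirty_extents == 2: the second op did not reset the first op's state
}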