Kefu Chai [Tue, 30 Mar 2021 18:32:38 +0000 (02:32 +0800)]
mgr/PyModule: put mgr_module_path before Py_GetPath()
pip ships with _vendor/progress, so there is a chance that the vendored
"progress" module is imported instead of the "progress" mgr module, and
the latter then fails to import.
in this change, the order of paths is rearranged so that the configured
`mgr_module_path` is put before the return value of `Py_GetPath()`.
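for illustration, a minimal sketch of the intended ordering; the function and
parameter names below are stand-ins, not the actual PyModule.cc code:

    // illustrative sketch only: build_module_paths(), mgr_module_path and
    // py_get_path are stand-in names, not the actual PyModule.cc code
    #include <string>

    std::string build_module_paths(const std::string& mgr_module_path,
                                   const std::string& py_get_path) {
      // put the configured mgr module directory first so that "import progress"
      // resolves to the mgr module rather than pip's vendored copy found on the
      // default interpreter path
      return mgr_module_path + ":" + py_get_path;
    }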
Conflicts:
src/mgr/PyModule.cc
- nautilus has a preprocessor directive "#if PY_MAJOR_VERSION >= 3"
which is not there in master
- since we still need to support python2, apply the same change to
the #else branch at line 351
Conflicts:
src/pybind/mgr/dashboard/services/access_control.py
- Some of the changes are not backported because those features are
not implemented in nautilus, so they were left as is
mgr/dashboard: filesystem pool size should use stored stat
Fixes: https://tracker.ceph.com/issues/50195
Signed-off-by: Avan Thakkar <athakkar@redhat.com>
Replace 'bytes_used' with the 'stored' stat so that CephFS pool stats
show the correct results.
Aashish Sharma [Thu, 25 Mar 2021 05:55:37 +0000 (11:25 +0530)]
mgr/dashboard: Simplify some complex calculations in test_alerts.yml
run-promtool-unittests is failing because of differences in floating-point values in some complex calculations. This PR simplifies those calculations to fix the issue.
Kefu Chai [Fri, 19 Mar 2021 02:32:16 +0000 (10:32 +0800)]
test: run promtool test without docker on ubuntu/focal
before this change, we used docker to run promtool from a docker image,
but this is not efficient, and quite a few developers do not want to use
docker for running "make check". that approach was introduced by #39246
because, in Ceph's CI process, we run "make check" jobs on Ubuntu/Bionic,
and the prometheus packaged by Bionic does not offer the "test rules"
command. so, to address that problem, we were using the
"dnanexus/promtool:2.9.2" docker image to verify
monitoring/prometheus/alerts/test_alerts.yml.
after this change, we use prometheus as packaged by debian derivatives
instead of pulling a docker image.
* debian/control: add prometheus as a "make check" dependency
* install-deps.sh: partially revert 53a5816deda0874a3a37e131e9bc22d88bb2a588, as we no longer need to
pull docker or start the docker service to use promtool.
* cmake: check if promtool is capable of running the "test rules"
command, and bail out if it is not.
Conflicts:
debian/control (python-cherrypy3 conflicts with python-cherrypy3 | python3-cherrypy3 in nautilus)
install-deps.sh (preload_wheels_for_tox method not in nautilus, so it was removed)
src/test/CMakeLists.txt (new changes at #561-594 overrode these lines; merged correctly now)
Conflicts:
install-deps.sh (changed dnf to yumdnf; preload_wheels method not in nautilus, so it was removed)
src/test/CMakeLists.txt (new changes at #561-594 overrode these lines; merged correctly now)
Aashish Sharma [Mon, 8 Mar 2021 09:44:00 +0000 (15:14 +0530)]
mgr/dashboard: Remove username, password fields from Cluster/Manager Modules/dashboard
The username and password fields are empty in Cluster/Manager Modules/dashboard. This functionality dates from when the dashboard supported a single user/password, so these fields need to be removed now.
Conflicts:
src/pybind/mgr/dashboard/services/access_control.py (no check_migrate_v0_to_current and check_migrate_v1_to_current methods in nautilus)
src/pybind/mgr/dashboard/tests/test_access_control.py (no test_load_v2 method in nautilus; the 'time' import is no longer needed with the new changes)
to silence the health warning "mons are allowing insecure global_id
reclaim", which prevents the cluster from becoming active+clean. a couple
of tests expect a warning-free cluster before they start.
this option is enabled by default to appease old clients, but for most
upstream testing we can simply disable it.
auth/cephx: make KeyServer::build_session_auth_info() less confusing
The second KeyServer::build_session_auth_info() overload is used only
by the monitor, for mon <-> mon authentication. The monitor passes in
service_secret (mon secret) and secret_id (-1). The TTL is irrelevant
because there is no rotation.
However, the signature doesn't make this obvious. Clarify that
service_secret and secret_id are input parameters and that info is the
only output parameter.
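A hypothetical sketch of the clarified shape of the overload; the types and
parameter order below are assumptions, not the exact Ceph declaration:

    // hypothetical sketch: types are simplified placeholders, not Ceph's real ones
    #include <cstdint>

    struct CryptoKey {};
    struct CephXSessionAuthInfo {};

    class KeyServer {
    public:
      // inputs passed as const/by value, the single output parameter last; the
      // monitor calls this with its own secret and secret_id = (uint64_t)-1,
      // so no rotating-key TTL is involved
      int build_session_auth_info(uint32_t service_id,
                                  const CryptoKey& service_secret,   // input
                                  uint64_t secret_id,                // input
                                  CephXSessionAuthInfo& info);       // output
    };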
auth/cephx: cap ticket validity by expiration of "next" key
If auth_mon_ticket_ttl is increased by several times as done in
commit 522a52e6c258 ("auth/cephx: rotate auth tickets less often"),
active clients eventually get stuck because the monitor sends out an
auth ticket with a bogus validity. The ticket is secured with the
"current" secret that is scheduled to expire according to the old TTL,
but the validity of the ticket is set to the new TTL. As a result,
the client simply doesn't attempt to renew, letting the secrets rotate
potentially more than once. When that happens, the client first hits
auth authorizer errors as it tries to renew service tickets and when
it finally gets to renewing the auth ticket, it hits the insecure
global_id reclaim wall.
Cap TTL by expiration of "next" key -- the "current" key may be
milliseconds away from expiration and still be used, legitimately.
Do it in KeyServerData alongside key rotation code and propagate the
capped TTL to the upper layer.
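A minimal sketch of the capping idea, using assumed names and plain doubles
instead of Ceph's time types:

    // minimal sketch with assumed names; the real change lives in KeyServerData
    // alongside the key rotation code
    #include <algorithm>

    // never promise a ticket lifetime that extends past the expiration of the
    // "next" rotating secret; the "current" secret may be milliseconds from
    // expiring and still be legitimately in use
    double cap_ttl_by_next_key(double configured_ttl,
                               double now,
                               double next_key_expiration) {
      return std::min(configured_ttl, next_key_expiration - now);
    }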
Conflicts:
src/pybind/cephfs/mock_cephfs.pxi : Not present in octopus
src/pybind/cephfs/c_cephfs.pxd : Not present in octopus
src/pybind/cephfs/cephfs.pyx : a few of the fops pulled in as part of
this backport are not part of octopus
src/test/pybind/test_cephfs.py : a few of the fops pulled in as part of
this backport are not part of octopus. Added missing
stat import.
- AUTH_INSECURE_GLOBAL_ID_RENEWAL_ALLOWED if we are allowing clients to reclaim
global_ids in an insecure manner (for backwards compatibility until
clients are upgraded)
- AUTH_INSECURE_GLOBAL_ID_RENEWAL if there are currently clients connected that
do not know how to securely renew their global_id, as exposed by
auth_expose_insecure_global_id_reclaim=true. The client auth names and IPs
are listed in the alert details (up to a limit).
The docs recommend that operators mute these alerts instead of silencing them,
but we still include options that allow the alerts to be disabled entirely.
Conflicts:
doc/rados/operations/health-checks.rst [ MON_DISK_* alerts
present but not documented in nautilus; "ceph health mute"
not in nautilus -- silencing temporarily is not possible ]
src/mon/HealthMonitor.cc [ commits e4bf716bfa07 ("mon: store
a reference as member variable") and d0eb22f3ba55
("mon/health_checks: associate a count with health_alert_t")
not in nautilus ]
Ilya Dryomov [Tue, 2 Mar 2021 14:09:26 +0000 (15:09 +0100)]
auth/cephx: ignore CEPH_ENTITY_TYPE_AUTH in requested keys
When handling CEPHX_GET_AUTH_SESSION_KEY requests from nautilus+
clients, ignore CEPH_ENTITY_TYPE_AUTH in CephXAuthenticate::other_keys.
Similarly, when handling CEPHX_GET_PRINCIPAL_SESSION_KEY requests,
ignore CEPH_ENTITY_TYPE_AUTH in CephXServiceTicketRequest::keys.
These fields are intended for requesting service tickets; the auth
ticket (which is really a ticket granting ticket) must not be shared
this way.
Otherwise we end up sharing an auth ticket that a) isn't encrypted
with the old session key even if needed (should_enc_ticket == true)
and b) has the wrong validity, namely auth_service_ticket_ttl instead
of auth_mon_ticket_ttl. In the CEPHX_GET_AUTH_SESSION_KEY case, this
undue ticket immediately supersedes the actual auth ticket already
encoded in the same reply (the reply frame ends up containing two auth
tickets).
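Conceptually, the fix amounts to masking the auth entity out of the requested
key set; a rough sketch, with the constant's value assumed rather than taken
from Ceph's headers:

    // rough sketch; the constant's value is assumed, not taken from ceph headers
    #include <cstdint>

    constexpr uint32_t CEPH_ENTITY_TYPE_AUTH = 0x20;  // assumed for this sketch

    uint32_t sanitize_requested_keys(uint32_t requested_keys) {
      // service-ticket requests must never hand out the auth (ticket granting)
      // ticket, so drop that bit even if the client asked for it
      return requested_keys & ~CEPH_ENTITY_TYPE_AUTH;
    }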
Ilya Dryomov [Mon, 22 Mar 2021 18:16:32 +0000 (19:16 +0100)]
auth/cephx: rotate auth tickets less often
If unauthorized global_id (re)use is disallowed, a client that has
been disconnected from the network long enough for keys to rotate
and its auth ticket to expire (i.e. become invalid/unverifiable)
would not be able to reconnect.
The default TTL is 12 hours, resulting in a 12-24 hour reconnect
window (the previous key is kept around, so the actual window can be
up to double the TTL). The setting has stayed the same since 2009,
but it also hasn't been enforced. Bump it to get a 72 hour reconnect
window to cover for something breaking on Friday and not getting fixed
until Monday.
Ilya Dryomov [Thu, 25 Mar 2021 19:59:13 +0000 (20:59 +0100)]
mon: fail fast when unauthorized global_id (re)use is disallowed
When unauthorized global_id (re)use is disallowed, we don't want to
let unpatched clients in because they wouldn't be able to reestablish
their monitor session later, resulting in subtle hangs and disrupted
user workloads.
Denying the initial connect for all legacy (CephXAuthenticate < v3)
clients is not feasible because a large subset of them never stopped
presenting their ticket on reconnects and are therefore compatible with
enforcing mode: most notably all kernel clients but also pre-luminous
userspace clients. They don't need to be patched and excluding them
would significantly hamper the adoption of enforcing mode.
Instead, force clients that we are not sure about to reconnect shortly
after they go through authentication and obtain global_id. This is
done in Monitor::dispatch_op() to capture both msgr1 and msgr2, most
likely instead of dispatching mon_subscribe.
We need to let mon_getmap through for "ceph ping" and "ceph tell" to
work. This does mean that we share the monmap, which lets the client
return from MonClient::authenticate() considering authentication to be
finished and causing the potential reconnect error to not propagate to
the user -- the client would hang waiting for remaining cluster maps.
For msgr1, this is unavoidable because the monmap is sent immediately
after the final MAuthReply. But for msgr2 this is rare: most of the
time we get to their mon_subscribe and cut the connection before they
process the monmap!
Regardless, the user doesn't get a chance to start a workload since
there is no proper higher-level session at that point.
To help with identifying clients that need patching, add global_id and
global_id_status to "sessions" output.
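A rough sketch of the fail-fast decision, with made-up names and values; the
real check sits in Monitor::dispatch_op() and consults the connection's auth
state:

    // rough sketch with made-up names and values; not the actual Monitor code

    // whether this session's global_id (re)use has been proven safe by an auth ticket
    enum class reclaim_safety { safe, unknown };

    constexpr int MSG_MON_GET_MAP = 1;  // placeholder message type for the sketch

    struct op_info {
      int msg_type;
      reclaim_safety safety;
    };

    // true if the op should be rejected and the connection cut, forcing the
    // possibly-unpatched client to reconnect now instead of hanging much later
    // when it tries to renew its tickets
    bool should_fail_fast(const op_info& op, bool enforcing) {
      if (!enforcing || op.safety == reclaim_safety::safe)
        return false;
      if (op.msg_type == MSG_MON_GET_MAP)
        return false;  // let "ceph ping" and "ceph tell" fetch the monmap
      return true;     // e.g. mon_subscribe from a client we are not sure about
    }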
Ilya Dryomov [Sat, 13 Mar 2021 13:53:52 +0000 (14:53 +0100)]
auth/cephx: option to disallow unauthorized global_id (re)use
global_id is a cluster-wide unique id that must remain stable for the
lifetime of the client instance. The cephx protocol has a facility to
allow clients to preserve their global_id across reconnects:
(1) the client should provide its global_id in the initial handshake
message/frame and later include its auth ticket proving previous
possession of that global_id in CEPHX_GET_AUTH_SESSION_KEY request
(2) the monitor should verify that the included auth ticket is valid
and has the same global_id and, if so, allow the reclaim
(3) if the reclaim is allowed, the new auth ticket should be
encrypted with the session key of the included auth ticket to
ensure authenticity of the client performing reclaim. (The
included auth ticket could have been snooped when the monitor
originally shared it with the client or any time the client
provided it back to the monitor as part of requesting service
tickets, but only the genuine client would have its session key
and be able to decrypt.)
Unfortunately, all (1), (2) and (3) have been broken for a while:
- (1) was broken in 2016 by commit a2eb6ae3fb57 ("mon/monclient:
hunt for multiple monitor in parallel") and is addressed in patch
"mon/MonClient: preserve auth state on reconnects"
- it turns out that (2) has never been enforced. When cephx was
being designed and implemented in 2009, two changes to the protocol
raced with each other pulling it in different directions: commits 0669ca21f4f7 ("auth: reuse global_id when requesting tickets")
and fec31964a12b ("auth: when renewing session, encrypt ticket")
added the reclaim mechanism based strictly on auth tickets, while
commit 5eeb711b6b2b ("auth: change server side negotiation a bit")
allowed the client to provide global_id in the initial handshake.
These changes didn't get reconciled and as a result a malicious
client can assign itself any global_id of its choosing by simply
passing something other than 0 in MAuth message or AUTH_REQUEST
frame and not even bother supplying any ticket. This includes
getting a global_id that is being used by another client.
- (3) was broken in 2019 with addition of support for msgr2, where
the new auth ticket ends up being shared unencrypted. However the
root cause is deeper and a malicious client can coerce msgr1 into
the same. This also goes back to 2009 and is addressed in patch
"auth/cephx: ignore CEPH_ENTITY_TYPE_AUTH in requested keys".
Because (2) has never been enforced, no one noticed when (1) got
broken and we began to rely on this flaw for normal operation in
the face of reconnects due to network hiccups or otherwise. As of
today, only pre-luminous userspace clients and kernel clients are
not exercising it on a daily basis.
Bump CephXAuthenticate version and use a dummy v3 to distinguish
between legacy clients that don't (may not) include their auth ticket
and new clients. For new clients, unconditionally disallow claiming
global_id without a corresponding auth ticket. For legacy clients,
introduce a choice between permissive (current behavior, default for
the foreseeable future) and enforcing mode.
If the reclaim is disallowed, return EACCES. While MonClient does
have some provision for global_id changes and we could conceivably
implement enforcement by handing out a fresh global_id instead of
the provided one, those code paths have never been tested and there
are too many ways a sudden global_id change could go wrong.
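A simplified sketch of the verification described in (2), with hypothetical
types; the real code also has to handle the dummy v3 version bump and the
EACCES return described above:

    // simplified sketch with hypothetical types; not the actual cephx code
    #include <cstdint>

    struct auth_ticket {
      uint64_t global_id = 0;
      bool valid = false;   // decrypted and verified against a current/next key
    };

    bool allow_global_id_reclaim(uint64_t claimed_global_id,
                                 const auth_ticket& old_ticket,
                                 bool legacy_client,
                                 bool enforcing) {
      // (2): a valid auth ticket carrying the same global_id proves prior ownership
      if (old_ticket.valid && old_ticket.global_id == claimed_global_id)
        return true;
      // legacy (CephXAuthenticate < v3) clients are only let through in permissive
      // mode; new clients without a matching ticket are refused (EACCES)
      return legacy_client && !enforcing;
    }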
Ilya Dryomov [Tue, 9 Mar 2021 15:33:55 +0000 (16:33 +0100)]
auth/AuthServiceHandler: keep track of global_id and whether it is new
AuthServiceHandler already has global_id field, but it is unused.
Revive it and let the handler know whether global_id is newly assigned
by the monitor or provided by the client.
Lift the setting of entity_name into AuthServiceHandler.
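A small sketch of the bookkeeping, with assumed member and method names:

    // sketch only; member and method names are assumed, not Ceph's exact ones
    #include <cstdint>
    #include <string>

    class AuthServiceHandler {
    protected:
      std::string entity_name;        // setting of entity_name lifted into the handler
      uint64_t global_id = 0;
      bool global_id_is_new = false;  // true if the monitor just assigned it

    public:
      void start_session(const std::string& name, uint64_t gid, bool newly_assigned) {
        entity_name = name;
        global_id = gid;
        global_id_is_new = newly_assigned;
      }
    };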
Conflicts:
src/mon/MonClient.cc [ commit 1e9b18008c5e ("mon: set
MonClient::_add_conn return type to void") not in nautilus ]
src/mon/MonClient.h [ ditto ]
Destroying AuthClientHandler and not resetting global_id is another
way to get MonClient to send CEPHX_GET_AUTH_SESSION_KEY requests with
CephXAuthenticate::old_ticket not populated. This is particularly
pertinent to get_monmap_and_config() which shuts down the bootstrap
MonClient between retry attempts.
Ilya Dryomov [Mon, 8 Mar 2021 14:37:02 +0000 (15:37 +0100)]
mon/MonClient: preserve auth state on reconnects
Commit a2eb6ae3fb57 ("mon/monclient: hunt for multiple monitor in
parallel") introduced a regression where auth state (global_id and
AuthClientHandler) was no longer preserved on reconnects. The ensuing
breakage was quickly noticed and prompted a follow-on fix 8bb6193c8f53
("mon/MonClient: persist global_id across re-connecting").
However, as evident from the subject, the follow-on fix only took
care of the global_id part. AuthClientHandler is still destroyed
and all cephx tickets are discarded. A new from-scratch instance
is created for each MonConnection and CEPHX_GET_AUTH_SESSION_KEY
requests end up with CephXAuthenticate::old_ticket not populated.
The bug is in MonClient, so both msgr1 and msgr2 are affected.
This should have resulted in a similar sort of breakage but didn't
because of a much larger bug. The monitor should have denied the
attempt to reclaim global_id with no valid ticket proving previous
possession of that global_id presented. Alas, it appears that this
aspect of the cephx protocol has never been enforced. This is dealt
with in the next patch.
To fix the issue at hand, clone AuthClientHandler into each
MonConnection so that each respective CEPHX_GET_AUTH_SESSION_KEY
request gets a copy of the current auth ticket.
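A simplified sketch of the fix with assumed types; the point is the clone, so
each connection keeps a copy of the current tickets instead of a blank handler:

    // simplified sketch with assumed types; not the actual MonClient code
    #include <memory>

    struct AuthClientHandler {
      // holds global_id and the current cephx tickets
      virtual std::unique_ptr<AuthClientHandler> clone() const = 0;
      virtual ~AuthClientHandler() = default;
    };

    struct MonConnection {
      std::unique_ptr<AuthClientHandler> auth;
    };

    void prepare_reconnect(MonConnection& conn, const AuthClientHandler* current_auth) {
      if (current_auth)
        conn.auth = current_auth->clone();  // old_ticket stays populated on reconnect
    }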
Ilya Dryomov [Sat, 6 Mar 2021 10:15:40 +0000 (11:15 +0100)]
mon/MonClient: claim active_con's auth explicitly
Eliminate confusion by moving auth from active_con into MonClient
instead of swapping them.
The existing MonClient::auth can be destroyed right away -- I don't
see why active_con would need it or a reason to delay its destruction
(which is what stashing in active_con effectively does).
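A toy illustration of the intent, with assumed member names: move the handler
out of the winning connection rather than swapping handlers between the two.

    // toy illustration with assumed member names: move, don't swap
    #include <memory>
    #include <utility>

    struct AuthClientHandler {};

    struct MonConnection {
      std::unique_ptr<AuthClientHandler> auth;
    };

    struct MonClient {
      std::unique_ptr<AuthClientHandler> auth;

      void claim_auth_from(MonConnection& active_con) {
        // the previous MonClient::auth is destroyed right away; nothing keeps it
        // stashed inside active_con anymore
        auth = std::move(active_con.auth);
      }
    };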
Sage Weil [Sat, 20 Feb 2021 16:58:47 +0000 (11:58 -0500)]
ceph: print command output to stdout even on error
Currently, in the case where the mon returns a command error code, we print
the error stream and the Error ... message but not the command output. Usually
there isn't any, so we haven't noticed until now, but there is no reason
why we shouldn't return both an error code and some output.
Restructure the code so that the error message goes *after* the JSON output,
where it will be a bit more obvious to the user (if the stdout scrolled
the terminal, for instance). (This is not a change in behavior since
previously we weren't seeing the stdout at all.)
Xuehan Xu [Sun, 7 Feb 2021 04:40:36 +0000 (12:40 +0800)]
mgr: relax osd ok-to-stop condition on degraded pgs
Right now, the "ok-to-stop" condition is relatively rigorous: it allows
stopping an osd only if no PG on it is non-active or degraded. But there
are situations in which an OSD is part of a degraded pg and the pg
still has > min_size complete replicas after the OSD is stopped.
In 9750061d5d4236aaba156d60790e0b8bcd7cfb64, we changed from considering
just acting to using avail_no_missing (OSDs that have no missing objects).
When the projected pg_acting is constructed this way, we can safely compare
to min_size... even for a PG marked degraded.
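A simplified sketch of the relaxed per-PG check, with assumed names (the exact
comparison against min_size in Ceph may differ):

    // simplified sketch (assumed names) of the relaxed per-PG check
    #include <algorithm>
    #include <iterator>
    #include <set>
    #include <vector>

    bool pg_ok_to_stop(const std::set<int>& avail_no_missing,  // OSDs with no missing objects
                       const std::set<int>& stopping_osds,
                       unsigned min_size) {
      // project the acting set after the OSDs are stopped
      std::vector<int> projected;
      std::set_difference(avail_no_missing.begin(), avail_no_missing.end(),
                          stopping_osds.begin(), stopping_osds.end(),
                          std::back_inserter(projected));
      // even a degraded PG is ok to lose these OSDs if enough complete replicas remain
      return projected.size() >= min_size;
    }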
Casey Bodley [Mon, 15 Jun 2020 15:45:11 +0000 (11:45 -0400)]
test/rgw: test_datalog_autotrim filters out new entries
if other sync activity is racing with test_datalog_autotrim, it can
create new datalog entries after the 'datalog autotrim' command runs
instead of asserting that the datalog is empty after trim, assert that
any entries have a marker larger than the max-marker reported by
'datalog status' before the trim
rgw: return ERR_NO_SUCH_BUCKET early while evaluating bucket policy
Right now we create an ERR_NO_SUCH_BUCKET ret code but continue further
processing. Since this ret code isn't returned at any stage, we end up creating
a bucket instance anyway (which shouldn't happen) and then succeeding the client
call in cases like put bucket versioning. Return an error code early in these
cases.
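A sketch of the early-return idea with made-up names and a placeholder
errno-style code, not the actual RGW op path:

    // sketch with made-up names; ERR_NO_SUCH_BUCKET value is a placeholder
    constexpr int ERR_NO_SUCH_BUCKET = 1;  // placeholder value for this sketch

    int verify_bucket_exists_before_policy(bool bucket_exists) {
      if (!bucket_exists) {
        // bail out here instead of carrying the code along: continuing would end
        // up instantiating a bucket and succeeding calls like put bucket versioning
        return -ERR_NO_SUCH_BUCKET;
      }
      // ... evaluate the bucket policy and continue processing ...
      return 0;
    }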
Kamoltat [Mon, 8 Feb 2021 15:45:06 +0000 (15:45 +0000)]
qa/tasks/mgr/test_progress: fix wait_until_equal
Octopus ceph_test_case doesn't have a period arg,
so remove it in wait_until_equal. Also increase the
time to wait for complete events by using RECOVERY_PERIOD
instead of EVENT_CREATION_PERIOD.
Not needed in master because only octopus and nautilus
lack a period argument in the wait_until_equal()
function of qa/tasks/mgr/test_progress.py.
luo rixin [Thu, 20 Feb 2020 12:07:39 +0000 (20:07 +0800)]
test/TestOSDScrub: fix mktime() error
The variable tm isn't initialized; when tm.tm_isdst holds a positive
garbage value, mktime(&tm) returns -1, which makes the test fail
on Ubuntu 19.10 for aarch64 with GLIBC 2.30.
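A minimal standalone illustration of the general fix (zero-initialize struct tm,
or set tm_isdst explicitly), not the test's actual code:

    // minimal illustration: zero-initialize struct tm (or set tm_isdst explicitly)
    // so mktime() doesn't see a garbage positive tm_isdst and return -1
    #include <cstdio>
    #include <ctime>

    int main() {
      struct tm tm = {};            // every field zeroed, including tm_isdst
      tm.tm_year = 2021 - 1900;     // years since 1900
      tm.tm_mon = 0;                // January
      tm.tm_mday = 1;
      // tm.tm_isdst = -1;          // alternative: let mktime() determine DST itself
      time_t t = mktime(&tm);
      std::printf("mktime -> %lld\n", static_cast<long long>(t));
      return t == static_cast<time_t>(-1) ? 1 : 0;
    }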
Kefu Chai [Fri, 30 Aug 2019 08:21:03 +0000 (16:21 +0800)]
cmake: SKIP_RPATH if RPATH is not necessary
some executables, like ceph_test_mon_memory_target, do not link against
libraries built from the source tree, like librados and libceph-common,
so cmake does not set an RPATH for them.
before this change, `CMAKE_INSTALL_RPATH` was set globally, so cmake was
asked to rewrite the RPATH for all installed targets. but this is not
needed, as some executables do not link against libceph-common, and
cmake complains when installing them, like:
CMake Error at src/test/mon/cmake_install.cmake:90 (file):
file RPATH_CHANGE could not write new RPATH:
/usr/lib64/ceph
to the file:
/home/abuild/rpmbuild/BUILDROOT/ceph-15.0.0-4347.g85a07b9.x86_64/usr/bin/ceph_test_log_rss_usage
No valid ELF RPATH or RUNPATH entry exists in the file;
after this change, `SKIP_RPATH` is set for those executables which do
not link against any libraries created from the ceph source tree, so we
can avoid setting the RPATH for these executables at `make install` time.