Ernesto Puerta [Fri, 14 Jan 2022 11:56:55 +0000 (12:56 +0100)]
Merge pull request #44507 from votdev/issue_53813_nfs_page_not_found
mgr/dashboard: NFS pages show 'Page not found'
Reviewed-by: Alfonso Martínez <almartin@redhat.com>
Reviewed-by: Laura Paduano <lpaduano@suse.com>
Reviewed-by: Nizamudeen A <nia@redhat.com>
Reviewed-by: Tatjana Dehler <tdehler@suse.com>
Reviewed-by: Volker Theile <vtheile@suse.com>
mgr/prometheus: Fix regression with OSD/host details/overview dashboards
Fix issues with PromQL expressions and vector matching with the
`ceph_disk_occupation` metric.
As it turns out, `ceph_disk_occupation` cannot simply be used as
expected, as there are edge cases for users who run several OSDs on a
single disk. This leads to issues that cannot be solved by PromQL alone
(many-to-many PromQL errors); the data we expected is simply different
in some rare cases.
I have not found a PromQL-only solution to this issue. What we basically
need is the following:
1. Match on the labels `host` and `instance` to get one or more OSD names
from a metadata metric (`ceph_disk_occupation`) to let a user know
which OSDs belong to which disk.
2. Match on the label `ceph_daemon` of the `ceph_disk_occupation` metric,
in which case the value of `ceph_daemon` must not refer to more than
a single OSD. This is the exact opposite of requirement 1.
As both operations are currently performed on a single metric, and there
is no way to satisfy both requirements with a single metric, the intention
of this commit is to extend the metric by providing a similar metric
that satisfies one of the requirements. This enables the queries to
differentiate between a vector-matching operation whose result is shown
to the user (where `ceph_daemon` could be `osd.1` or `osd.1+osd.2`) and
one that matches on a single `ceph_daemon` value in the matching
condition.
Although the `ceph_daemon` label is used on a variety of daemons, only
OSDs seem to be affected by this issue (only if more than one OSD is run
on a single disk). This means that only the `ceph_disk_occupation`
metadata metric seems to need to be extended and provided as two
metrics.
`ceph_disk_occupation` is supposed to be used for matching on the
`ceph_daemon` label value.
`ceph_disk_occupation_human` is supposed to be used for anything where
the resulting data is displayed for human consumption (graphs, alert
messages, etc.).
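Below is a minimal sketch (not the actual mgr/prometheus code) of how an exporter could emit both metric variants from the same disk-occupation data; the data layout and helper names are illustrative assumptions.

```python
# Sketch only: emit a one-sample-per-OSD metric for vector matching and a
# combined-label metric for human-facing output. Data below is made up.
occupation = [
    # (host, instance, device, OSD daemons sharing that device)
    ("node1", "node1:9283", "/dev/sdb", ["osd.1", "osd.2"]),
    ("node2", "node2:9283", "/dev/sdc", ["osd.3"]),
]

def render_metrics(occupation):
    lines = []
    for host, instance, device, daemons in occupation:
        # One sample per OSD: safe for one-to-one matching on ceph_daemon.
        for daemon in daemons:
            lines.append(
                f'ceph_disk_occupation{{ceph_daemon="{daemon}",device="{device}",'
                f'host="{host}",instance="{instance}"}} 1.0'
            )
        # One sample per device with a combined label: for display only.
        combined = "+".join(daemons)
        lines.append(
            f'ceph_disk_occupation_human{{ceph_daemon="{combined}",device="{device}",'
            f'host="{host}",instance="{instance}"}} 1.0'
        )
    return "\n".join(lines)

print(render_metrics(occupation))
```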
Josh Salomon [Thu, 13 Jan 2022 02:23:07 +0000 (02:23 +0000)]
osd, tools: refactor OSDMap::calc_pg_upmaps (simplify the code)
This is the first commit in a series that aims to add a primary balancer to Ceph and improve the current upmap balancer functionality. It focuses on simplifying (refactoring) the code of `calc_pg_upmaps` so it is easier to change in the future. This PR keeps the existing functionality as-is and does not change anything but the code structure.
Since this work involves major refactoring of OSDMap::calc_pg_upmaps, the first step is adding an --upmap-seed param to osdmaptool so that test results can be compared without the random factor.
Other changes made:
- Divided sections of `OSDMap::calc_pg_upmaps` into their own separate functions
- Renamed `tmp` to `tmp_osd_map`
- Changed all the occurrences of 'first' and 'second' in the function to more meaningful names.
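A minimal Python sketch of the idea behind --upmap-seed (the real implementation is C++ in osdmaptool / OSDMap::calc_pg_upmaps); the function and variable names are illustrative:

```python
import random

# Fixing the seed removes the random factor, so two runs over the same
# osdmap produce identical results and test output can be diffed.
def pick_candidate_pgs(pgs, count, seed=None):
    rng = random.Random(seed)          # fixed seed => reproducible choices
    return rng.sample(sorted(pgs), count)

pgs = {f"1.{i:x}" for i in range(32)}
run_a = pick_candidate_pgs(pgs, 5, seed=42)
run_b = pick_candidate_pgs(pgs, 5, seed=42)
assert run_a == run_b                  # comparable across test runs
```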
gal salomon [Mon, 12 Apr 2021 05:54:37 +0000 (08:54 +0300)]
parquet implementation:
(1) adding arrow/parquet to make (install is missing)
(2) s3select-operation contains 2 flows: CSV and Parquet
(3) on the parquet flow, the s3select processing engine calls (via callback) get-size and range-request; the range requests are async, thus the caller waits until notification
(4) flow: execute --> s3select --(arrow layer)--> range-request --> GetObj::execute --> send_response_data --> notify-range-request --> (back to) s3select
(5) on the parquet flow, s3select handles the response (using callbacks) because of the AWS response limitation (16 MB)
add unique pointer (rgw_api); verify magic number for parquet objects; s3select module update
fix buffer overflow (copy range request)
change the range-request flow: it now needs to use the callback parameters (ofs & len) and not the element length
refactoring: separate the CSV flow from the parquet flow, a phase before adding a conditional build (depending on arrow package installation)
adding arrow/parquet installation to debian/control
align s3select repo with RGW (missing APIs, such as get_error_description)
undefined reference to arrow symbol
fix comment: using optional_yield by value
fix comments; remove future/promise
s3select: a leak fix
s3select: fixing result production
s3select, s3tests: parquet alignments
typo: git-remote --> git_remote
s3select: remove redundant comma (end of projections); bug fix in parquet flow for aggregation queries
adding arrow/parquet
editorial: remove blank lines
s3select: merged with master (output serialization, presto alignments)
merging (not rebasing) master functionalities into the parquet branch
(*) dedicated source files for the s3select operation.
(*) s3select-engine: fix leaks on parquet flows, enabling csv_object and parquet_object to be allocated on the stack
(*) the csv_object and parquet_object are allocated on the stack (no heap allocation)
move data members from heap to stack allocation; refactoring; separate flows for CSV and parquet. s3select: bug fix
conditional build: when the arrow package is installed, the parquet flow becomes visible, enabling processing of parquet objects; if the package is not installed, only CSV is usable
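A minimal Python sketch of the async range-request pattern described in (3) and (4) above (the real code is C++ in RGW); class and method names are illustrative:

```python
import threading

# The caller issues an async range request and blocks until it is
# notified that the response data has arrived.
class RangeRequester:
    def __init__(self):
        self._done = threading.Event()
        self._buf = None

    def range_request(self, ofs, length, fetch_async):
        """Issue an async range request and wait for the notification."""
        self._done.clear()
        fetch_async(ofs, length, self.notify_range_request)
        self._done.wait()              # caller waits until notified
        return self._buf

    def notify_range_request(self, data):
        self._buf = data               # e.g. invoked after send_response_data
        self._done.set()

def fake_fetch_async(ofs, length, cb):
    # Simulates the GetObj::execute --> send_response_data leg on a thread.
    threading.Thread(target=lambda: cb(b"x" * length)).start()

r = RangeRequester()
print(len(r.range_request(0, 16, fake_fetch_async)))   # -> 16
```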
Ilya Dryomov [Tue, 11 Jan 2022 12:13:01 +0000 (13:13 +0100)]
qa/run_xfstests_qemu.sh: harden against wget failures
If wget fails (e.g. due to a certificate issue), it still creates
an empty file. Then this file is marked executable, ./"${SCRIPT}"
immediately returns 0, and run_xfstests_qemu.sh exits successfully
without running a single xfstest.
This started on Sep 30, 2021 with the expiration of the Let's Encrypt
root certificate -- all qemu jobs with "test: qa/run_xfstests_qemu.sh"
just booted the VM for a couple of seconds and reported success.
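A sketch of the hardening idea, written in Python for illustration (the actual change is in the shell script qa/run_xfstests_qemu.sh): fail loudly instead of executing an empty file when the download did not succeed.

```python
import os
import stat
import urllib.request

def fetch_script(url, path):
    # Raises on HTTP/TLS errors instead of silently leaving an empty file.
    urllib.request.urlretrieve(url, path)
    if os.path.getsize(path) == 0:
        # Never chmod+exec an empty script.
        raise RuntimeError(f"downloaded script {path} is empty")
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)
    return path
```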
RGW Zipper - don't load stats for every bucket load
This was a side-effect of consolidating the Zipper API, and resulted in
a large performance hit. Stats are only needed if they are requested,
so don't load them every time.
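A sketch in Python (the actual Zipper code is C++) of the "load stats only when requested" idea; the names are illustrative:

```python
class Bucket:
    def __init__(self, name, stats_loader):
        self.name = name
        self._stats_loader = stats_loader
        self._stats = None

    @property
    def stats(self):
        # Loaded lazily: loading or listing a bucket no longer pays this cost.
        if self._stats is None:
            self._stats = self._stats_loader(self.name)
        return self._stats
```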
Signed-off-by: Daniel Gryniewicz <dang@redhat.com>
Laura Flores [Tue, 4 Jan 2022 22:54:33 +0000 (22:54 +0000)]
mgr/telemetry: add the rocksdb version number to telemetry
Capturing the RocksDB version number in Telemetry would allow us to check that users are using the appropriate RocksDB version for their Ceph cluster. For instance, if a user is working in a Pacific cluster, but their RocksDB version is meant for Nautilus, that might be a problem.
It is structured as "rocksdb_stats" --> "version" in anticipation of more stats that will be added under "rocksdb_stats".
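A sketch (not the actual mgr/telemetry code) of how the version could be reported under "rocksdb_stats"; get_rocksdb_version() is a hypothetical stand-in for however the module obtains the version string.

```python
def get_rocksdb_version():
    return "6.8.1"          # illustrative value

def build_report(report):
    report["rocksdb_stats"] = {
        "version": get_rocksdb_version(),
        # more RocksDB stats can be added here later
    }
    return report

print(build_report({}))     # {'rocksdb_stats': {'version': '6.8.1'}}
```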
Kalpesh Pandya [Thu, 11 Nov 2021 06:46:16 +0000 (12:16 +0530)]
qa/tasks: Checking for kafka cleanup
Add a sleep after running the ./kafka-server-stop.sh and ./zookeeper-server-stop.sh
scripts so that nothing gets logged into the kafka logs after the sleep time,
and finally kill the process.
This resolves: https://tracker.ceph.com/issues/53220
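A sketch of the cleanup sequence described above; the actual qa task differs in details such as paths and how the leftover process is found.

```python
import subprocess
import time

def stop_kafka(kafka_dir, sleep_secs=30):
    # Run the stop scripts named in the commit message.
    subprocess.run(["./kafka-server-stop.sh"], cwd=kafka_dir, check=False)
    subprocess.run(["./zookeeper-server-stop.sh"], cwd=kafka_dir, check=False)
    # Wait so nothing more gets logged into the kafka logs.
    time.sleep(sleep_secs)
    # Finally kill anything still running (illustrative pattern match).
    subprocess.run(["pkill", "-f", "kafka"], check=False)
```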
osd: Display scheduler-specific info when dumping an OpSchedulerItem
Implement logic to dump information relevant to the scheduler type being
employed when dumping details about an OpSchedulerItem. For example, the
'priority' field is relevant for the 'wpq' scheduler, but for the
'mclock_scheduler', the 'qos_cost' field gives more information during debugging.
A couple of additional fields called 'qos_cost' and 'is_qos_request' are
introduced in the OpSchedulerItem class. These are mainly used to facilitate
dumping of relevant information depending on the scheduler type. The
interesting points are when an item is enqueued and dequeued.
For the 'mclock_scheduler', the 'class_id' and the 'qos_cost' fields are
dumped during the enqueue and dequeue operations, respectively. For the 'wpq'
scheduler, things remain the same as before.
An additional benefit of this change is that it helps immediately identify the
type of scheduler being used for a given shard, based on what is dumped
in the debug messages while debugging.
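A conceptual sketch in Python (the real change is C++ in OpSchedulerItem); the field names mirror the description above, everything else is illustrative:

```python
def dump_item(out, scheduler_type, phase, item):
    """Add scheduler-specific fields to a dump dict for one queued item."""
    if scheduler_type == "mclock_scheduler" and item.get("is_qos_request"):
        if phase == "enqueue":
            out["class_id"] = item["class_id"]   # dumped when the item is enqueued
        else:
            out["qos_cost"] = item["qos_cost"]   # dumped when the item is dequeued
    else:
        out["priority"] = item["priority"]       # wpq: unchanged behaviour
    return out

print(dump_item({}, "mclock_scheduler", "dequeue",
                {"is_qos_request": True, "class_id": 2, "qos_cost": 123}))
```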