Varsha Rao [Tue, 31 Mar 2020 11:48:03 +0000 (17:18 +0530)]
mgr/volumes: Create multiple CephFS exports
Multiple exports can be created using the following fs nfs interface:
ceph fs nfs export create <fsname> <binding> --readonly --path=<pathname> --attach=<clusterid>
Varsha Rao [Mon, 30 Mar 2020 10:17:15 +0000 (15:47 +0530)]
mgr/volumes: Move ganesha common config to vstart
This is a preparatory patch before calling the orchestrator for nfs cluster
deployment. All the ganesha config is moved to vstart. Keyring creation is also
taken care of by vstart.
The volumes fs nfs cluster create interface does the following things:
1) Create a common recovery pool for all ganesha clusters. Each cluster will
have its own namespace.
2) Create an empty rados conf object named 'conf-nfs' for saving all export
urls.
The call to the orchestrator interface will be added in a future patch.
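As a minimal sketch (not the actual mgr/volumes code) of what the cluster
create step described above could look like with the rados Python bindings;
the pool and object names come from this message, while the namespace value
and ceph.conf path are assumptions:

    import rados

    POOL_NAME = 'nfs-ganesha'   # assumed name for the common recovery pool
    CONF_OBJ = 'conf-nfs'       # empty rados conf object named in this message

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        if POOL_NAME not in cluster.list_pools():
            cluster.create_pool(POOL_NAME)        # one recovery pool shared by all clusters
        ioctx = cluster.open_ioctx(POOL_NAME)
        try:
            ioctx.set_namespace('mycluster')      # each cluster gets its own namespace
            ioctx.write_full(CONF_OBJ, b'')       # create the empty conf object
        finally:
            ioctx.close()
    finally:
        cluster.shutdown()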
Varsha Rao [Tue, 24 Mar 2020 18:56:56 +0000 (00:26 +0530)]
mgr/volumes/nfs: Fix mypy errors
This patch fixes the following mypy errors:
volumes/module.py:9: note: In module imported here,
volumes/__init__.py:2: note: ... from here:
volumes/fs/nfs.py: note: In member "validate_path" of class "CephFSFSal":
volumes/fs/nfs.py:80: error: Name 're' is not defined
volumes/fs/nfs.py: note: In member "create_path" of class "CephFSFSal":
volumes/fs/nfs.py:83: error: Name 'CephFS' is not defined
volumes/fs/nfs.py: note: In member "_persist_daemon_configuration" of class "GaneshaConf":
volumes/fs/nfs.py:259: error: Need type annotation for 'daemon_map' (hint: "daemon_map: Dict[<type>, <type>] = ...")
volumes/fs/nfs.py: note: In member "create_instance" of class "NFSConfig":
volumes/fs/nfs.py:386: error: Incompatible types in assignment (expression has type "GaneshaConf", variable has type "str")
volumes/fs/nfs.py: note: In member "create_export" of class "NFSConfig":
volumes/fs/nfs.py:389: error: "str" has no attribute "create_export"
volumes/fs/nfs.py: note: In member "delete_export" of class "NFSConfig":
volumes/fs/nfs.py:404: error: "str" has no attribute "has_export"
volumes/fs/nfs.py:407: error: "str" has no attribute "remove_export"
volumes/fs/nfs.py:408: error: "str" has no attribute "reload_daemons"
volumes/__init__.py:2: note: In module imported here:
volumes/module.py: note: In member "_cmd_fs_nfs_export_create" of class "Module":
volumes/module.py:406: error: "str" has no attribute "check_fsal_valid"
volumes/module.py:407: error: "str" has no attribute "create_instance"
volumes/module.py:408: error: "str" has no attribute "create_export"
volumes/module.py: note: In member "_cmd_fs_nfs_export_delete" of class "Module":
volumes/module.py:412: error: "str" has no attribute "delete_export"
volumes/module.py: note: In member "_cmd_fs_nfs_cluster_create" of class "Module":
volumes/module.py:415: error: Incompatible types in assignment (expression has type "NFSConfig", variable has type "str")
volumes/module.py:416: error: "str" has no attribute "create_nfs_cluster"
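For illustration, a minimal sketch of the kind of annotations that address
errors like these; the names mirror the messages above, the concrete types
are assumptions, and the actual patch may fix them differently:

    from typing import Dict, List, Optional

    class GaneshaConf:
        """Stub standing in for the real class, for illustration only."""
        def create_export(self, path: str) -> None:
            pass

    # Satisfies the "Need type annotation for 'daemon_map'" hint:
    daemon_map: Dict[str, List[str]] = {}

    class NFSConfig:
        def __init__(self) -> None:
            # Typing the attribute as Optional[GaneshaConf] instead of
            # initializing it to a placeholder str avoids the
            # '"str" has no attribute ...' errors above.
            self.ganeshaconf: Optional[GaneshaConf] = None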
* refs/pull/35759/head:
qa: fix flake8 warnings
doc: add documentation for new ephemeral pinning feature
pybind/mgr/volumes: wire up pinning subvolumes/subvolumegroups
qa: adapt tests for empty pinned dir export
qa: break export pin tests into discrete tests
qa: add more ephemeral pin tests
qa: add tests for ephemeral pinning
mds: add maximum random ephemeral pin percentage
mds: replicate random pin state
mds: finish implementation of ephemeral pins
mds: do string equality comparison
mds: add ephemeral pinning for subtrees
mds: trim pinned and empty subtrees
mds: refactor remove_subtree
mds: allow export of pinned directory if empty
mds: reduce subtree processing verbosity
mds: skip export of empty directories
mds: remove frozen export pin from queue
mds: simplify for loop construction
mds: add debug messages for export queue processing
qa: refactor _wait_subtree and _get_subtree
qa: use status from wait_for_daemons
qa: quietly print json output from asok commands
mgr/dashboard: Prometheus query error in the metrics of Pools, OSDs and RBD images
Fixes: https://tracker.ceph.com/issues/45068
Signed-off-by: Avan Thakkar <athakkar@redhat.com>
(cherry picked from commit 47b515c09496da8fc326300bab6618250466effe)
This is slightly evil in its current form. The MDS should use locks to
transmit state changes, but right now the state is just set when the CInode is
replicated. Replication of this state marker is necessary for
failover situations where we want the randomly pinned subtree to remain
pinned across failovers.
Note: this problem does not exist for the ephemeral distributed pins
because simple knowledge of the immediate parent's setting (which is
replicated normally) is sufficient to determine if the CInode is
ephemerally distributed. Ditto for regular export pins.
The string::find check would match ceph.dir.pin even for the other
ephemeral pin xattr names, since it does a prefix match rather than an
equality comparison. For this reason, it was never possible to actually
turn the ephemeral pins on!
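A small Python illustration of the pitfall (the MDS code is C++ using
std::string::find; the ephemeral xattr name here is the documented
ceph.dir.pin.random, an assumption as far as this message goes):

    name = 'ceph.dir.pin.random'
    print(name.find('ceph.dir.pin') == 0)   # True: the prefix check also
                                            # matches the ephemeral xattr names
    print(name == 'ceph.dir.pin')           # False: equality comparison is the fix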
This PR introduces the inode xattrs export_ephemeral_random and
export_ephemeral_distributed, which enable two different metadata
distribution strategies: the first is suitable for more depthwise
scaling of metadata (the height of the tree keeps increasing) and the latter
for horizontal scaling (many subtrees under a single parent).
export_ephemeral_distributed is not hierarchical: any direct
descendant directory (i.e. a child directory) has an ephemeral export
pin applied to it according to a consistent hash of the child directory's
inode number. export_ephemeral_random is hierarchical like
"export_pin": any CDir loaded into the cache may be ephemerally pinned
to a random rank. Like "export_ephemeral_distributed", the random rank
is determined by a consistent hash.
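For instance, a hedged sketch of enabling the two policies from userspace,
assuming the user-facing xattr names ceph.dir.pin.distributed and
ceph.dir.pin.random and a CephFS mount at /mnt/cephfs:

    import os

    # Pin each immediate child of 'parent' by a consistent hash of its inode:
    os.setxattr('/mnt/cephfs/parent', 'ceph.dir.pin.distributed', b'1')

    # Ephemerally pin roughly 1% of the descendant directories of 'tree':
    os.setxattr('/mnt/cephfs/tree', 'ceph.dir.pin.random', b'0.01')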
The metadata distribution strategies are facilitated by using John
Lamping and Eric Veach's Jump Consistent Hashing as the consistent hash
algorithm. This hashing algorithm eliminates the need to store the data
structures representing the consistent hash cluster state and performs
as well as Akamai's original implementation, providing a fairly uniform
distribution. The algorithm only works for distributed systems with
numbered buckets (nodes) arranged in ascending order, where cluster resizes
do not produce any holes in the arrangement of nodes, i.e. (0, 1, 2, 3)
--[removing node 1]--> (0, 1, 2). CephFS satisfies these conditions, as
the MDSs are arranged as numbered ranks and cluster modifications do not
produce any holes in the resulting arrangement of ranks.
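A Python transcription of the algorithm from Lamping and Veach's paper, for
reference (the MDS implementation is C++ and may differ in detail):

    def jump_consistent_hash(key: int, num_buckets: int) -> int:
        # Maps a 64-bit key to a bucket in [0, num_buckets); when the bucket
        # count changes, only roughly 1/num_buckets of the keys move.
        b, j = -1, 0
        while j < num_buckets:
            b = j
            key = (key * 2862933555777941757 + 1) & 0xFFFFFFFFFFFFFFFF
            j = int((b + 1) * ((1 << 31) / ((key >> 33) + 1)))
        return b

    # e.g. an inode number hashed onto 3 active MDS ranks:
    print(jump_consistent_hash(0x10000000000, 3))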
Fixes: https://tracker.ceph.com/issues/41302
Signed-off-by: Sidharth Anupkrishnan <sanupkri@redhat.com>
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
(cherry picked from commit ced15ed7ef70ff832d4bebedecb89944276b0395)
Patrick Donnelly [Tue, 30 Jun 2020 21:22:15 +0000 (14:22 -0700)]
Merge PR #35809 into octopus
* refs/pull/35809/head:
qa/tasks/vstart_runner.py: be python3 compatible
doc/dev/developer_guide: use python3 to launch vstart_runner.py
pybind/mgr/dashboard/run-backend-api-tests.sh: use python3 by default
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
This commit adds dmcrypt support in `ceph-volume raw` mode.
Note about `ceph-volume raw list` change:
Given that the `lsblk -J` (JSON output) option isn't available on all OSes, I came
up with adding the '--inverse' option to the existing command, which allows us to
get the list of mapper devices in that command's output. Not listing root devices
containing partitions shouldn't have side effects since we are in the
`ceph-volume raw` context.
example:
running `lsblk --paths --nodeps --output=NAME --noheadings` doesn't allow us to
get the mapper list because the output is like the following:
adding `--inverse` is a trick to get around this issue; the counterpart is that
we can't list root devices if they contain at least one partition, but this
shouldn't be an issue in the `ceph-volume raw` context given we only deal with
raw devices.
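As a rough Python sketch of the listing described above (the flags are the
ones quoted in this message plus the added --inverse; ceph-volume's actual
code may differ):

    import subprocess

    out = subprocess.check_output(
        ['lsblk', '--paths', '--nodeps', '--inverse',
         '--output=NAME', '--noheadings'],
        encoding='utf-8')
    # With --inverse the device tree is flipped, so mapper devices become
    # the top-level entries that --nodeps keeps.
    print(out.split())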
Casey Bodley [Tue, 26 May 2020 19:03:03 +0000 (15:03 -0400)]
rgw: sanitize newlines in s3 CORSConfiguration's ExposeHeader
the values in the <ExposeHeader> element are sent back to clients in an
Access-Control-Expose-Headers response header. if the values are allowed
to contain newlines, they can be used to inject arbitrary response
headers.
this issue only affects s3, which gets these values from an xml document.
in swift, they're given in the request header
X-Container-Meta-Access-Control-Expose-Headers, so the value itself
cannot contain newlines.
Signed-off-by: Casey Bodley <cbodley@redhat.com>
Reported-by: Adam Mohammed <amohammed@linode.com>
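In the spirit of the fix above (the actual rgw change is C++), a small Python
illustration of stripping newlines so a stored value cannot smuggle an extra
response header:

    def sanitize_expose_header(value: str) -> str:
        # replace CR/LF so the value stays a single header line
        return value.replace('\r', ' ').replace('\n', ' ')

    print(sanitize_expose_header('x-custom\r\nSet-Cookie: evil=1'))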
Nathan Cutler [Wed, 24 Jun 2020 19:08:40 +0000 (21:08 +0200)]
doc: PendingReleaseNotes: clean slate for 15.2.5
All of these Pending Release Notes have been included in the official
15.2.4 Release Notes, so keeping them in this file any longer would be
counterproductive.
Ernesto Puerta [Mon, 11 May 2020 18:33:25 +0000 (20:33 +0200)]
mgr/dashboard: work with RBD images v1
Add support for RBD Image Format v1:
- This format lacks an ID field, which the dashboard requires. Instead, the
RBD image `block_name_prefix` is used as a unique ID (together with the pool
ID and namespace).
- Additionally, `image_format` is now exposed.
- On the front-end side:
  - Copy action on a v1 image will cause the image to be copied to v2
    format.
  - List doesn't allow Move to Trash on v1 images.
  - Details section now shows `image_format` for images.
  - Edit Form disables flags not supported for v1 (`deep-flatten`,
    `layering`, `exclusive-lock`).
  - Protect does not work on v1 images or on v2 images created from v1
    ones.
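A hedged sketch of composing a unique ID from the fields this message names
(pool ID, namespace, block_name_prefix); the separator and encoding here are
assumptions, and the dashboard's actual scheme may differ:

    def rbd_v1_unique_id(pool_id: int, namespace: str, block_name_prefix: str) -> str:
        # v1 images have no ID field, so combine the three identifying fields
        return f'{pool_id}/{namespace}/{block_name_prefix}'

    print(rbd_v1_unique_id(2, '', 'rb.0.1032.74b0dc51'))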
Conflicts:
src/pybind/mgr/dashboard/frontend/src/app/ceph/block/rbd-details/rbd-details.component.html: keep only new row
src/pybind/mgr/dashboard/services/ceph_service.py: keep only 1 method, add import Union