From: Alfonso Martínez
Date: Thu, 26 Aug 2021 10:05:54 +0000 (+0200)
Subject: mgr/dashboard: NFS exports: API + UI: integration with mgr/nfs; cleanups
X-Git-Tag: v16.2.7~52^2~6
X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=6e0903053e10e1a071cfca4c95ffe7ff15edbece;p=ceph.git

mgr/dashboard: NFS exports: API + UI: integration with mgr/nfs; cleanups

mgr/dashboard: move NFS_GANESHA_SUPPORTED_FSALS to mgr_module.py
Importing from the nfs module throws an AttributeError because, as a side effect, the dashboard module is impersonating the nfs module.
https://gist.github.com/varshar16/61ac26426bbe5f5f562ebb14bcd0f548

mgr/dashboard: 'Create NFS export' form: list clusters from nfs module

mgr/dashboard: frontend+backend cleanups for NFS export
Removed all code and references related to daemons. UI cleanup and adapted unit tests for the nfs-export create form for the CEPHFS backend. Cleanup of the export list/get/create/set/delete endpoints.

mgr/dashboard: rm set-ganesha ref + update docs
Remove existing set-ganesha-clusters-rados-pool-namespace references as they are no longer required. Moreover, the nfs documentation within the dashboard docs is updated to reflect the current nfs status.

mgr/dashboard: add nfs-export e2e test coverage

mgr/dashboard: 'Create NFS export' form: remove RGW user id field.
- Improve bucket typeahead behavior.
- Increase version for bucket list endpoint.
- Some refactoring.

mgr/dashboard: 'Create NFS export' form: allow RGW backend only when default realm is selected.
When RGW multisite is configured, the NFS module can only handle buckets in the default realm.

mgr/dashboard: 'Create service' form: fix NFS service creation.
After https://github.com/ceph/ceph/pull/42073, NFS pool and namespace are not customizable.

mgr/dashboard: 'Create NFS export' form: add bucket validation.
- Allow only existing buckets.
- Refactoring: - Moved bucket validator from bucket form to cd-validators.ts - Split bucket validator into 2: bucket name validator and bucket existence (that checks either existence or non-existence). mgr/dashboard: 'Create NFS export' form: path validation refactor: allow only existing paths. Fixes: https://tracker.ceph.com/issues/46493 Fixes: https://tracker.ceph.com/issues/51479 Signed-off-by: Alfonso Martínez Signed-off-by: Avan Thakkar Signed-off-by: Pere Diaz Bou (cherry picked from commit 58a6ab2147c34d5b3f14bf48f9b47b03bea8a672) Conflicts: doc/mgr/dashboard.rst - Resolved conflicts. src/pybind/mgr/dashboard/services/cephx.py - Deleted as in master. --- diff --git a/doc/mgr/dashboard.rst b/doc/mgr/dashboard.rst index 3ac0e0333b0a..7acd0695e973 100644 --- a/doc/mgr/dashboard.rst +++ b/doc/mgr/dashboard.rst @@ -1179,97 +1179,8 @@ A log entry may look like this:: NFS-Ganesha Management ---------------------- -Support for NFS-Ganesha Clusters Deployed by the Orchestrator -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -The Ceph Dashboard can be used to manage NFS-Ganesha clusters deployed by the -Orchestrator and will detect them automatically. For more details -on deploying NFS-Ganesha clusters with the Orchestrator, please see: - -- Cephadm backend: :ref:`orchestrator-cli-stateless-services`. Or particularly, see - :ref:`deploy-cephadm-nfs-ganesha`. -- Rook backend: `Ceph NFS Gateway CRD `_. - -Support for NFS-Ganesha Clusters Defined by the User -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -.. note:: - - This configuration only applies for user-defined clusters, - NOT for Orchestrator-deployed clusters. - -The Ceph Dashboard can manage `NFS Ganesha `_ exports that use -CephFS or RGW as their backstore. - -To enable this feature in Ceph Dashboard there are some assumptions that need -to be met regarding the way NFS-Ganesha services are configured. 
- -The dashboard manages NFS-Ganesha config files stored in RADOS objects on the Ceph Cluster. -NFS-Ganesha must store part of their configuration in the Ceph cluster. - -These configuration files follow the below conventions. -Each export block must be stored in its own RADOS object named -``export-``, where ```` must match the ``Export_ID`` attribute of the -export configuration. Then, for each NFS-Ganesha service daemon there should -exist a RADOS object named ``conf-``, where ```` is an -arbitrary string that should uniquely identify the daemon instance (e.g., the -hostname where the daemon is running). -Each ``conf-`` object contains the RADOS URLs to the exports that -the NFS-Ganesha daemon should serve. These URLs are of the form:: - - %url rados://[/]/export- - -Both the ``conf-`` and ``export-`` objects must be stored in the -same RADOS pool/namespace. - - -Configuring NFS-Ganesha in the Dashboard -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -To enable management of NFS-Ganesha exports in the Ceph Dashboard, we -need to tell the Dashboard the RADOS pool and namespace in which -configuration objects are stored. The Ceph Dashboard can then access them -by following the naming convention described above. - -The Dashboard command to configure the NFS-Ganesha configuration objects -location is:: - - $ ceph dashboard set-ganesha-clusters-rados-pool-namespace [/] - -After running the above command, the Ceph Dashboard is able to find the NFS-Ganesha -configuration objects and we can manage exports through the Web UI. - -.. note:: - - A dedicated pool for the NFS shares should be used. Otherwise it can cause the - `known issue `_ with listing of shares - if the NFS objects are stored together with a lot of other objects in a single - pool. - - -Support for Multiple NFS-Ganesha Clusters -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -The Ceph Dashboard also supports management of NFS-Ganesha exports belonging -to other NFS-Ganesha clusters. 
An NFS-Ganesha cluster is a group of -NFS-Ganesha service daemons sharing the same exports. NFS-Ganesha -clusters are independent and don't share the exports configuration among each -other. - -Each NFS-Ganesha cluster should store its configuration objects in a -unique RADOS pool/namespace to isolate the configuration. - -To specify the the configuration location of each NFS-Ganesha cluster we -can use the same command as above but with a different value pattern:: - - $ ceph dashboard set-ganesha-clusters-rados-pool-namespace :[/](,:[/])* - -The ```` is an arbitrary string that should uniquely identify the -NFS-Ganesha cluster. - -When configuring the Ceph Dashboard with multiple NFS-Ganesha clusters, the -Web UI will allow you to choose to which cluster an export belongs. - +The dashboard requires enabling the NFS module which will be used to manage +NFS clusters and NFS exports. For more information check :ref:`mgr-nfs`. Plug-ins -------- diff --git a/qa/tasks/mgr/dashboard/test_ganesha.py b/qa/tasks/mgr/dashboard/test_ganesha.py deleted file mode 100644 index 6868e0cb3249..000000000000 --- a/qa/tasks/mgr/dashboard/test_ganesha.py +++ /dev/null @@ -1,208 +0,0 @@ -# -*- coding: utf-8 -*- -# pylint: disable=too-many-public-methods - -from __future__ import absolute_import - -from .helper import DashboardTestCase, JList, JObj - - -class GaneshaTest(DashboardTestCase): - CEPHFS = True - AUTH_ROLES = ['pool-manager', 'ganesha-manager'] - - @classmethod - def setUpClass(cls): - super(GaneshaTest, cls).setUpClass() - cls.create_pool('ganesha', 2**2, 'replicated') - cls._rados_cmd(['-p', 'ganesha', '-N', 'ganesha1', 'create', 'conf-node1']) - cls._rados_cmd(['-p', 'ganesha', '-N', 'ganesha1', 'create', 'conf-node2']) - cls._rados_cmd(['-p', 'ganesha', '-N', 'ganesha1', 'create', 'conf-node3']) - cls._rados_cmd(['-p', 'ganesha', '-N', 'ganesha2', 'create', 'conf-node1']) - cls._rados_cmd(['-p', 'ganesha', '-N', 'ganesha2', 'create', 'conf-node2']) - cls._rados_cmd(['-p', 
'ganesha', '-N', 'ganesha2', 'create', 'conf-node3']) - cls._ceph_cmd(['dashboard', 'set-ganesha-clusters-rados-pool-namespace', - 'cluster1:ganesha/ganesha1,cluster2:ganesha/ganesha2']) - - # RGW setup - cls._radosgw_admin_cmd([ - 'user', 'create', '--uid', 'admin', '--display-name', 'admin', - '--system', '--access-key', 'admin', '--secret', 'admin' - ]) - cls._ceph_cmd_with_secret(['dashboard', 'set-rgw-api-secret-key'], 'admin') - cls._ceph_cmd_with_secret(['dashboard', 'set-rgw-api-access-key'], 'admin') - - @classmethod - def tearDownClass(cls): - super(GaneshaTest, cls).tearDownClass() - cls._radosgw_admin_cmd(['user', 'rm', '--uid', 'admin', '--purge-data']) - cls._ceph_cmd(['osd', 'pool', 'delete', 'ganesha', 'ganesha', - '--yes-i-really-really-mean-it']) - - @DashboardTestCase.RunAs('test', 'test', [{'rbd-image': ['create', 'update', 'delete']}]) - def test_read_access_permissions(self): - self._get('/api/nfs-ganesha/export') - self.assertStatus(403) - - def test_list_daemons(self): - daemons = self._get("/api/nfs-ganesha/daemon") - self.assertEqual(len(daemons), 6) - daemons = [(d['daemon_id'], d['cluster_id']) for d in daemons] - self.assertIn(('node1', 'cluster1'), daemons) - self.assertIn(('node2', 'cluster1'), daemons) - self.assertIn(('node3', 'cluster1'), daemons) - self.assertIn(('node1', 'cluster2'), daemons) - self.assertIn(('node2', 'cluster2'), daemons) - self.assertIn(('node3', 'cluster2'), daemons) - - @classmethod - def create_export(cls, path, cluster_id, daemons, fsal, sec_label_xattr=None): - if fsal == 'CEPH': - fsal = {"name": "CEPH", "user_id": "admin", "fs_name": None, - "sec_label_xattr": sec_label_xattr} - pseudo = "/cephfs{}".format(path) - else: - fsal = {"name": "RGW", "rgw_user_id": "admin"} - pseudo = "/rgw/{}".format(path if path[0] != '/' else "") - ex_json = { - "path": path, - "fsal": fsal, - "cluster_id": cluster_id, - "daemons": daemons, - "pseudo": pseudo, - "tag": None, - "access_type": "RW", - "squash": 
"no_root_squash", - "security_label": sec_label_xattr is not None, - "protocols": [4], - "transports": ["TCP"], - "clients": [{ - "addresses": ["10.0.0.0/8"], - "access_type": "RO", - "squash": "root" - }] - } - return cls._task_post('/api/nfs-ganesha/export', ex_json) - - def tearDown(self): - super(GaneshaTest, self).tearDown() - exports = self._get("/api/nfs-ganesha/export") - if self._resp.status_code != 200: - return - self.assertIsInstance(exports, list) - for exp in exports: - self._task_delete("/api/nfs-ganesha/export/{}/{}" - .format(exp['cluster_id'], exp['export_id'])) - - def _test_create_export(self, cephfs_path): - exports = self._get("/api/nfs-ganesha/export") - self.assertEqual(len(exports), 0) - - data = self.create_export(cephfs_path, 'cluster1', ['node1', 'node2'], 'CEPH', - "security.selinux") - - exports = self._get("/api/nfs-ganesha/export") - self.assertEqual(len(exports), 1) - self.assertDictEqual(exports[0], data) - return data - - def test_create_export(self): - self._test_create_export('/foo') - - def test_create_export_for_cephfs_root(self): - self._test_create_export('/') - - def test_update_export(self): - export = self._test_create_export('/foo') - export['access_type'] = 'RO' - export['daemons'] = ['node1', 'node3'] - export['security_label'] = True - data = self._task_put('/api/nfs-ganesha/export/{}/{}' - .format(export['cluster_id'], export['export_id']), - export) - exports = self._get("/api/nfs-ganesha/export") - self.assertEqual(len(exports), 1) - self.assertDictEqual(exports[0], data) - self.assertEqual(exports[0]['daemons'], ['node1', 'node3']) - self.assertEqual(exports[0]['security_label'], True) - - def test_delete_export(self): - export = self._test_create_export('/foo') - self._task_delete("/api/nfs-ganesha/export/{}/{}" - .format(export['cluster_id'], export['export_id'])) - self.assertStatus(204) - - def test_get_export(self): - exports = self._get("/api/nfs-ganesha/export") - self.assertEqual(len(exports), 0) - - data1 
= self.create_export("/foo", 'cluster2', ['node1', 'node2'], 'CEPH') - data2 = self.create_export("mybucket", 'cluster2', ['node2', 'node3'], 'RGW') - - export1 = self._get("/api/nfs-ganesha/export/cluster2/1") - self.assertDictEqual(export1, data1) - - export2 = self._get("/api/nfs-ganesha/export/cluster2/2") - self.assertDictEqual(export2, data2) - - def test_invalid_status(self): - self._ceph_cmd(['dashboard', 'set-ganesha-clusters-rados-pool-namespace', '']) - - data = self._get('/api/nfs-ganesha/status') - self.assertStatus(200) - self.assertIn('available', data) - self.assertIn('message', data) - self.assertFalse(data['available']) - self.assertIn(("NFS-Ganesha cluster is not detected. " - "Please set the GANESHA_RADOS_POOL_NAMESPACE " - "setting or deploy an NFS-Ganesha cluster with the Orchestrator."), - data['message']) - - self._ceph_cmd(['dashboard', 'set-ganesha-clusters-rados-pool-namespace', - 'cluster1:ganesha/ganesha1,cluster2:ganesha/ganesha2']) - - def test_valid_status(self): - data = self._get('/api/nfs-ganesha/status') - self.assertStatus(200) - self.assertIn('available', data) - self.assertIn('message', data) - self.assertTrue(data['available']) - - def test_ganesha_fsals(self): - data = self._get('/ui-api/nfs-ganesha/fsals') - self.assertStatus(200) - self.assertIn('CEPH', data) - - def test_ganesha_filesystems(self): - data = self._get('/ui-api/nfs-ganesha/cephfs/filesystems') - self.assertStatus(200) - self.assertSchema(data, JList(JObj({ - 'id': int, - 'name': str - }))) - - def test_ganesha_lsdir(self): - fss = self._get('/ui-api/nfs-ganesha/cephfs/filesystems') - self.assertStatus(200) - for fs in fss: - data = self._get('/ui-api/nfs-ganesha/lsdir/{}'.format(fs['name'])) - self.assertStatus(200) - self.assertSchema(data, JObj({'paths': JList(str)})) - self.assertEqual(data['paths'][0], '/') - - def test_ganesha_buckets(self): - data = self._get('/ui-api/nfs-ganesha/rgw/buckets') - self.assertStatus(200) - schema = JList(str) - 
self.assertSchema(data, schema) - - def test_ganesha_clusters(self): - data = self._get('/ui-api/nfs-ganesha/clusters') - self.assertStatus(200) - schema = JList(str) - self.assertSchema(data, schema) - - def test_ganesha_cephx_clients(self): - data = self._get('/ui-api/nfs-ganesha/cephx/clients') - self.assertStatus(200) - schema = JList(str) - self.assertSchema(data, schema) diff --git a/qa/tasks/mgr/dashboard/test_rgw.py b/qa/tasks/mgr/dashboard/test_rgw.py index 1bfb99506596..dc972d3ed0a4 100644 --- a/qa/tasks/mgr/dashboard/test_rgw.py +++ b/qa/tasks/mgr/dashboard/test_rgw.py @@ -183,13 +183,13 @@ class RgwBucketTest(RgwTestCase): self.assertEqual(data['tenant'], '') # List all buckets. - data = self._get('/api/rgw/bucket') + data = self._get('/api/rgw/bucket', version='1.1') self.assertStatus(200) self.assertEqual(len(data), 1) self.assertIn('teuth-test-bucket', data) # List all buckets with stats. - data = self._get('/api/rgw/bucket?stats=true') + data = self._get('/api/rgw/bucket?stats=true', version='1.1') self.assertStatus(200) self.assertEqual(len(data), 1) self.assertSchema(data[0], JObj(sub_elems={ @@ -203,7 +203,7 @@ class RgwBucketTest(RgwTestCase): }, allow_unknown=True)) # List all buckets names without stats. - data = self._get('/api/rgw/bucket?stats=false') + data = self._get('/api/rgw/bucket?stats=false', version='1.1') self.assertStatus(200) self.assertEqual(data, ['teuth-test-bucket']) @@ -283,7 +283,7 @@ class RgwBucketTest(RgwTestCase): # Delete the bucket. self._delete('/api/rgw/bucket/teuth-test-bucket') self.assertStatus(204) - data = self._get('/api/rgw/bucket') + data = self._get('/api/rgw/bucket', version='1.1') self.assertStatus(200) self.assertEqual(len(data), 0) @@ -306,7 +306,7 @@ class RgwBucketTest(RgwTestCase): self.assertIsNone(data) # List all buckets. 
- data = self._get('/api/rgw/bucket') + data = self._get('/api/rgw/bucket', version='1.1') self.assertStatus(200) self.assertEqual(len(data), 1) self.assertIn('testx/teuth-test-bucket', data) @@ -379,7 +379,7 @@ class RgwBucketTest(RgwTestCase): self._delete('/api/rgw/bucket/{}'.format( parse.quote_plus('testx/teuth-test-bucket'))) self.assertStatus(204) - data = self._get('/api/rgw/bucket') + data = self._get('/api/rgw/bucket', version='1.1') self.assertStatus(200) self.assertEqual(len(data), 0) diff --git a/src/pybind/mgr/dashboard/ci/cephadm/bootstrap-cluster.sh b/src/pybind/mgr/dashboard/ci/cephadm/bootstrap-cluster.sh index 10a060a9bceb..2c451f7864c1 100755 --- a/src/pybind/mgr/dashboard/ci/cephadm/bootstrap-cluster.sh +++ b/src/pybind/mgr/dashboard/ci/cephadm/bootstrap-cluster.sh @@ -11,14 +11,15 @@ mon_ip=$(ifconfig eth0 | grep 'inet ' | awk '{ print $2}') cephadm bootstrap --mon-ip $mon_ip --initial-dashboard-password {{ admin_password }} --allow-fqdn-hostname --skip-monitoring-stack --dashboard-password-noupdate --shared_ceph_folder /mnt/{{ ceph_dev_folder }} fsid=$(cat /etc/ceph/ceph.conf | grep fsid | awk '{ print $3}') +cephadm_shell="cephadm shell --fsid ${fsid} -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring" {% for number in range(1, nodes) %} ssh-copy-id -f -i /etc/ceph/ceph.pub -o StrictHostKeyChecking=no root@{{ prefix }}-node-0{{ number }}.{{ domain }} {% if expanded_cluster is defined %} - cephadm shell --fsid $fsid -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring ceph orch host add {{ prefix }}-node-0{{ number }}.{{ domain }} + ${cephadm_shell} ceph orch host add {{ prefix }}-node-0{{ number }}.{{ domain }} {% endif %} {% endfor %} {% if expanded_cluster is defined %} - cephadm shell --fsid $fsid -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring ceph orch apply osd --all-available-devices + ${cephadm_shell} ceph orch apply osd --all-available-devices {% endif %} diff --git 
a/src/pybind/mgr/dashboard/controllers/nfsganesha.py b/src/pybind/mgr/dashboard/controllers/nfsganesha.py index 1f5d738cf252..7d16b91432fb 100644 --- a/src/pybind/mgr/dashboard/controllers/nfsganesha.py +++ b/src/pybind/mgr/dashboard/controllers/nfsganesha.py @@ -1,26 +1,23 @@ # -*- coding: utf-8 -*- from __future__ import absolute_import +import json import logging import os -import json from functools import partial +from typing import Any, Dict, List, Optional import cephfs import cherrypy -# Importing from nfs module throws Attribute Error -# https://gist.github.com/varshar16/61ac26426bbe5f5f562ebb14bcd0f548 -#from nfs.export_utils import NFS_GANESHA_SUPPORTED_FSALS -#from nfs.utils import available_clusters +from mgr_module import NFS_GANESHA_SUPPORTED_FSALS from .. import mgr from ..security import Scope from ..services.cephfs import CephFS from ..services.exception import DashboardException, serialize_dashboard_exception -from ..services.rgw_client import NoCredentialsException, \ - NoRgwDaemonsException, RequestException, RgwClient from . 
import APIDoc, APIRouter, BaseController, Endpoint, EndpointDoc, \ ReadPermission, RESTController, Task, UIRouter +from ._version import APIVersion logger = logging.getLogger('controllers.nfs') @@ -29,15 +26,12 @@ class NFSException(DashboardException): def __init__(self, msg): super(NFSException, self).__init__(component="nfs", msg=msg) -# Remove this once attribute error is fixed -NFS_GANESHA_SUPPORTED_FSALS = ['CEPH', 'RGW'] # documentation helpers EXPORT_SCHEMA = { 'export_id': (int, 'Export ID'), 'path': (str, 'Export path'), 'cluster_id': (str, 'Cluster identifier'), - 'daemons': ([str], 'List of NFS Ganesha daemons identifiers'), 'pseudo': (str, 'Pseudo FS path'), 'access_type': (str, 'Export access type'), 'squash': (str, 'Export squash policy'), @@ -46,10 +40,9 @@ EXPORT_SCHEMA = { 'transports': ([str], 'List of transport types'), 'fsal': ({ 'name': (str, 'name of FSAL'), - 'user_id': (str, 'CephX user id', True), - 'filesystem': (str, 'CephFS filesystem ID', True), + 'fs_name': (str, 'CephFS filesystem name', True), 'sec_label_xattr': (str, 'Name of xattr for security label', True), - 'rgw_user_id': (str, 'RGW user id', True) + 'user_id': (str, 'User id', True) }, 'FSAL configuration'), 'clients': ([{ 'addresses': ([str], 'list of IP addresses'), @@ -62,7 +55,6 @@ EXPORT_SCHEMA = { CREATE_EXPORT_SCHEMA = { 'path': (str, 'Export path'), 'cluster_id': (str, 'Cluster identifier'), - 'daemons': ([str], 'List of NFS Ganesha daemons identifiers'), 'pseudo': (str, 'Pseudo FS path'), 'access_type': (str, 'Export access type'), 'squash': (str, 'Export squash policy'), @@ -71,19 +63,14 @@ CREATE_EXPORT_SCHEMA = { 'transports': ([str], 'List of transport types'), 'fsal': ({ 'name': (str, 'name of FSAL'), - 'user_id': (str, 'CephX user id', True), - 'filesystem': (str, 'CephFS filesystem ID', True), - 'sec_label_xattr': (str, 'Name of xattr for security label', True), - 'rgw_user_id': (str, 'RGW user id', True) + 'fs_name': (str, 'CephFS filesystem name', True), + 
'sec_label_xattr': (str, 'Name of xattr for security label', True) }, 'FSAL configuration'), 'clients': ([{ 'addresses': ([str], 'list of IP addresses'), 'access_type': (str, 'Client access type'), 'squash': (str, 'Client squash policy') - }], 'List of client configurations'), - 'reload_daemons': (bool, - 'Trigger reload of NFS-Ganesha daemons configuration', - True) + }], 'List of client configurations') } @@ -97,7 +84,7 @@ def NfsTask(name, metadata, wait_for): # noqa: N802 @APIRouter('/nfs-ganesha', Scope.NFS_GANESHA) -@APIDoc("NFS-Ganesha Management API", "NFS-Ganesha") +@APIDoc("NFS-Ganesha Cluster Management API", "NFS-Ganesha") class NFSGanesha(RESTController): @EndpointDoc("Status of NFS-Ganesha management feature", @@ -108,19 +95,24 @@ class NFSGanesha(RESTController): @Endpoint() @ReadPermission def status(self): - ''' - FIXME: update this to check if any nfs cluster is available. Otherwise this endpoint can be safely removed too. - As it was introduced to check dashboard pool and namespace configuration. + status = {'available': True, 'message': None} try: - cluster_ls = available_clusters(mgr) - if not cluster_ls: - raise NFSException('Please deploy a cluster using `nfs cluster create ... 
or orch apply nfs ..') - except (NameError, ImportError) as e: - status['message'] = str(e) # type: ignore + mgr.remote('nfs', 'cluster_ls') + except ImportError as error: + logger.exception(error) status['available'] = False + status['message'] = str(error) # type: ignore + return status - ''' - return {'available': True, 'message': None} + + +@APIRouter('/nfs-ganesha/cluster', Scope.NFS_GANESHA) +@APIDoc(group="NFS-Ganesha") +class NFSGaneshaCluster(RESTController): + @ReadPermission + @RESTController.MethodMap(version=APIVersion.EXPERIMENTAL) + def list(self): + return mgr.remote('nfs', 'cluster_ls') @APIRouter('/nfs-ganesha/export', Scope.NFS_GANESHA) @@ -128,33 +120,43 @@ class NFSGanesha(RESTController): class NFSGaneshaExports(RESTController): RESOURCE_ID = "cluster_id/export_id" + @staticmethod + def _get_schema_export(export: Dict[str, Any]) -> Dict[str, Any]: + """ + Method that avoids returning export info not exposed in the export schema + e.g., rgw user access/secret keys. + """ + schema_fsal_info = {} + for key in export['fsal'].keys(): + if key in EXPORT_SCHEMA['fsal'][0].keys(): # type: ignore + schema_fsal_info[key] = export['fsal'][key] + export['fsal'] = schema_fsal_info + return export + @EndpointDoc("List all NFS-Ganesha exports", responses={200: [EXPORT_SCHEMA]}) - def list(self): - ''' - list exports based on cluster_id ? 
- export_mgr = mgr.remote('nfs', 'fetch_nfs_export_obj') - ret, out, err = export_mgr.list_exports(cluster_id=cluster_id, detailed=True) - if ret == 0: - return json.loads(out) - raise NFSException(f"Failed to list exports: {err}") - ''' - return mgr.remote('nfs', 'export_ls') + def list(self) -> List[Dict[str, Any]]: + exports = [] + for export in mgr.remote('nfs', 'export_ls'): + exports.append(self._get_schema_export(export)) + + return exports @NfsTask('create', {'path': '{path}', 'fsal': '{fsal.name}', 'cluster_id': '{cluster_id}'}, 2.0) @EndpointDoc("Creates a new NFS-Ganesha export", parameters=CREATE_EXPORT_SCHEMA, responses={201: EXPORT_SCHEMA}) - def create(self, path, cluster_id, daemons, pseudo, access_type, - squash, security_label, protocols, transports, fsal, clients, - reload_daemons=True): - fsal.pop('user_id') # mgr/nfs does not let you customize user_id + @RESTController.MethodMap(version=APIVersion(2, 0)) # type: ignore + def create(self, path, cluster_id, pseudo, access_type, + squash, security_label, protocols, transports, fsal, clients) -> Dict[str, Any]: + + if hasattr(fsal, 'user_id'): + fsal.pop('user_id') # mgr/nfs does not let you customize user_id raw_ex = { 'path': path, 'pseudo': pseudo, 'cluster_id': cluster_id, - 'daemons': daemons, 'access_type': access_type, 'squash': squash, 'security_label': security_label, @@ -164,28 +166,25 @@ class NFSGaneshaExports(RESTController): 'clients': clients } export_mgr = mgr.remote('nfs', 'fetch_nfs_export_obj') - ret, out, err = export_mgr.apply_export(cluster_id, json.dumps(raw_ex)) + ret, _, err = export_mgr.apply_export(cluster_id, json.dumps(raw_ex)) if ret == 0: - return export_mgr._get_export_dict(cluster_id, pseudo) + return self._get_schema_export( + export_mgr._get_export_dict(cluster_id, pseudo)) # pylint: disable=W0212 raise NFSException(f"Export creation failed {err}") @EndpointDoc("Get an NFS-Ganesha export", parameters={ 'cluster_id': (str, 'Cluster identifier'), - 'export_id': 
(int, "Export ID") + 'export_id': (str, "Export ID") }, responses={200: EXPORT_SCHEMA}) - def get(self, cluster_id, export_id): - ''' - Get export by pseudo path? - export_mgr = mgr.remote('nfs', 'fetch_nfs_export_obj') - return export_mgr._get_export_dict(cluster_id, pseudo) - - Get export by id - export_mgr = mgr.remote('nfs', 'fetch_nfs_export_obj') - return export_mgr.get_export_by_id(cluster_id, export_id) - ''' - return mgr.remote('nfs', 'export_get', cluster_id, export_id) + def get(self, cluster_id, export_id) -> Optional[Dict[str, Any]]: + export_id = int(export_id) + export = mgr.remote('nfs', 'export_get', cluster_id, export_id) + if export: + export = self._get_schema_export(export) + + return export @NfsTask('edit', {'cluster_id': '{cluster_id}', 'export_id': '{export_id}'}, 2.0) @@ -193,16 +192,17 @@ class NFSGaneshaExports(RESTController): parameters=dict(export_id=(int, "Export ID"), **CREATE_EXPORT_SCHEMA), responses={200: EXPORT_SCHEMA}) - def set(self, cluster_id, export_id, path, daemons, pseudo, access_type, - squash, security_label, protocols, transports, fsal, clients, - reload_daemons=True): + @RESTController.MethodMap(version=APIVersion(2, 0)) # type: ignore + def set(self, cluster_id, export_id, path, pseudo, access_type, + squash, security_label, protocols, transports, fsal, clients) -> Dict[str, Any]: - fsal.pop('user_id') # mgr/nfs does not let you customize user_id + if hasattr(fsal, 'user_id'): + fsal.pop('user_id') # mgr/nfs does not let you customize user_id raw_ex = { 'path': path, 'pseudo': pseudo, 'cluster_id': cluster_id, - 'daemons': daemons, + 'export_id': export_id, 'access_type': access_type, 'squash': squash, 'security_label': security_label, @@ -213,9 +213,10 @@ class NFSGaneshaExports(RESTController): } export_mgr = mgr.remote('nfs', 'fetch_nfs_export_obj') - ret, out, err = export_mgr.apply_export(cluster_id, json.dumps(raw_ex)) + ret, _, err = export_mgr.apply_export(cluster_id, json.dumps(raw_ex)) if ret == 0: - return 
export_mgr._get_export_dict(cluster_id, pseudo) + return self._get_schema_export( + export_mgr._get_export_dict(cluster_id, pseudo)) # pylint: disable=W0212 raise NFSException(f"Failed to update export: {err}") @NfsTask('delete', {'cluster_id': '{cluster_id}', @@ -223,25 +224,10 @@ class NFSGaneshaExports(RESTController): @EndpointDoc("Deletes an NFS-Ganesha export", parameters={ 'cluster_id': (str, 'Cluster identifier'), - 'export_id': (int, "Export ID"), - 'reload_daemons': (bool, - 'Trigger reload of NFS-Ganesha daemons' - ' configuration', - True) + 'export_id': (int, "Export ID") }) - def delete(self, cluster_id, export_id, reload_daemons=True): - ''' - Delete by pseudo path - export_mgr = mgr.remote('nfs', 'fetch_nfs_export_obj') - export_mgr.delete_export(cluster_id, pseudo) - - if deleting by export id - export_mgr = mgr.remote('nfs', 'fetch_nfs_export_obj') - export = export_mgr.get_export_by_id(cluster_id, export_id) - ret, out, err = export_mgr.delete_export(cluster_id=cluster_id, pseudo_path=export['pseudo']) - if ret != 0: - raise NFSException(err) - ''' + @RESTController.MethodMap(version=APIVersion(2, 0)) # type: ignore + def delete(self, cluster_id, export_id): export_id = int(export_id) export = mgr.remote('nfs', 'export_get', cluster_id, export_id) @@ -250,31 +236,8 @@ class NFSGaneshaExports(RESTController): mgr.remote('nfs', 'export_rm', cluster_id, export['pseudo']) -# FIXME: remove this; dashboard should only care about clusters. 
-@APIRouter('/nfs-ganesha/daemon', Scope.NFS_GANESHA) -@APIDoc(group="NFS-Ganesha") -class NFSGaneshaService(RESTController): - - @EndpointDoc("List NFS-Ganesha daemons information", - responses={200: [{ - 'daemon_id': (str, 'Daemon identifier'), - 'cluster_id': (str, 'Cluster identifier'), - 'cluster_type': (str, 'Cluster type'), # FIXME: remove this property - 'status': (int, 'Status of daemon', True), - 'desc': (str, 'Status description', True) - }]}) - def list(self): - return mgr.remote('nfs', 'daemon_ls') - - @UIRouter('/nfs-ganesha', Scope.NFS_GANESHA) class NFSGaneshaUi(BaseController): - @Endpoint('GET', '/cephx/clients') - @ReadPermission - def cephx_clients(self): - # FIXME: remove this; cephx users/creds are managed by mgr/nfs - return ['admin'] - @Endpoint('GET', '/fsals') @ReadPermission def fsals(self): @@ -319,31 +282,3 @@ class NFSGaneshaUi(BaseController): @ReadPermission def filesystems(self): return CephFS.list_filesystems() - - @Endpoint('GET', '/rgw/buckets') - @ReadPermission - def buckets(self, user_id=None): - try: - return RgwClient.instance(user_id).get_buckets() - except (DashboardException, NoCredentialsException, RequestException, - NoRgwDaemonsException): - return [] - - @Endpoint('GET', '/clusters') - @ReadPermission - def clusters(self): - ''' - Remove this remote call instead directly use available_cluster() method. 
It returns list of cluster names: ['vstart'] - The current dashboard api needs to changed from following to simply list of strings - [ - { - 'pool': 'nfs-ganesha', - 'namespace': cluster_id, - 'type': 'orchestrator', - 'daemon_conf': None - } for cluster_id in available_clusters() - ] - As pool, namespace, cluster type and daemon_conf are not required for listing cluster by mgr/nfs module - return available_cluster(mgr) - ''' - return mgr.remote('nfs', 'cluster_ls') diff --git a/src/pybind/mgr/dashboard/controllers/rgw.py b/src/pybind/mgr/dashboard/controllers/rgw.py index 32885dc537f1..5f599b96c941 100644 --- a/src/pybind/mgr/dashboard/controllers/rgw.py +++ b/src/pybind/mgr/dashboard/controllers/rgw.py @@ -15,9 +15,10 @@ from ..services.rgw_client import NoRgwDaemonsException, RgwClient from ..tools import json_str_to_object, str_to_bool from . import APIDoc, APIRouter, BaseController, Endpoint, EndpointDoc, \ ReadPermission, RESTController, allow_empty_body +from ._version import APIVersion try: - from typing import Any, List, Optional + from typing import Any, Dict, List, Optional, Union except ImportError: # pragma: no cover pass # Just for type checking @@ -101,6 +102,7 @@ class RgwDaemon(RESTController): 'service_map_id': service['id'], 'version': metadata['ceph_version'], 'server_hostname': hostname, + 'realm_name': metadata['realm_name'], 'zonegroup_name': metadata['zonegroup_name'], 'zone_name': metadata['zone_name'], 'default': instance.daemon.name == metadata['id'] @@ -158,6 +160,8 @@ class RgwSite(RgwRESTController): return RgwClient.admin_instance(daemon_name=daemon_name).get_placement_targets() if query == 'realms': return RgwClient.admin_instance(daemon_name=daemon_name).get_realms() + if query == 'default-realm': + return RgwClient.admin_instance(daemon_name=daemon_name).get_default_realm() # @TODO: for multisite: by default, retrieve cluster topology/map. 
raise DashboardException(http_status_code=501, component='rgw', msg='Not Implemented') @@ -232,9 +236,12 @@ class RgwBucket(RgwRESTController): bucket_name = '{}:{}'.format(tenant, bucket_name) return bucket_name - def list(self, stats=False, daemon_name=None): - # type: (bool, Optional[str]) -> List[Any] - query_params = '?stats' if str_to_bool(stats) else '' + @RESTController.MethodMap(version=APIVersion(1, 1)) # type: ignore + def list(self, stats: bool = False, daemon_name: Optional[str] = None, + uid: Optional[str] = None) -> List[Union[str, Dict[str, Any]]]: + query_params = f'?stats={str_to_bool(stats)}' + if uid and uid.strip(): + query_params = f'{query_params}&uid={uid.strip()}' result = self.proxy(daemon_name, 'GET', 'bucket{}'.format(query_params)) if stats: diff --git a/src/pybind/mgr/dashboard/frontend/cypress/integration/block/images.e2e-spec.ts b/src/pybind/mgr/dashboard/frontend/cypress/integration/block/images.e2e-spec.ts index cf8832bb9bbc..5c89359db790 100644 --- a/src/pybind/mgr/dashboard/frontend/cypress/integration/block/images.e2e-spec.ts +++ b/src/pybind/mgr/dashboard/frontend/cypress/integration/block/images.e2e-spec.ts @@ -13,7 +13,7 @@ describe('Images page', () => { // Need pool for image testing pools.navigateTo('create'); pools.create(poolName, 8, 'rbd'); - pools.exist(poolName, true); + pools.existTableCell(poolName); }); after(() => { @@ -21,7 +21,7 @@ describe('Images page', () => { pools.navigateTo(); pools.delete(poolName); pools.navigateTo(); - pools.exist(poolName, false); + pools.existTableCell(poolName, false); }); beforeEach(() => { diff --git a/src/pybind/mgr/dashboard/frontend/cypress/integration/block/mirroring.e2e-spec.ts b/src/pybind/mgr/dashboard/frontend/cypress/integration/block/mirroring.e2e-spec.ts index ddee817e18ef..120956579d8d 100644 --- a/src/pybind/mgr/dashboard/frontend/cypress/integration/block/mirroring.e2e-spec.ts +++ b/src/pybind/mgr/dashboard/frontend/cypress/integration/block/mirroring.e2e-spec.ts @@ 
-32,7 +32,7 @@ describe('Mirroring page', () => { pools.navigateTo('create'); // Need pool for mirroring testing pools.create(poolName, 8, 'rbd'); pools.navigateTo(); - pools.exist(poolName, true); + pools.existTableCell(poolName, true); }); it('tests editing mode for pools', () => { diff --git a/src/pybind/mgr/dashboard/frontend/cypress/integration/cluster/logs.e2e-spec.ts b/src/pybind/mgr/dashboard/frontend/cypress/integration/cluster/logs.e2e-spec.ts index 731275e26d1c..9868b89aedbc 100644 --- a/src/pybind/mgr/dashboard/frontend/cypress/integration/cluster/logs.e2e-spec.ts +++ b/src/pybind/mgr/dashboard/frontend/cypress/integration/cluster/logs.e2e-spec.ts @@ -45,7 +45,7 @@ describe('Logs page', () => { pools.navigateTo('create'); pools.create(poolname, 8); pools.navigateTo(); - pools.exist(poolname, true); + pools.existTableCell(poolname, true); logs.checkAuditForPoolFunction(poolname, 'create', hour, minute); }); diff --git a/src/pybind/mgr/dashboard/frontend/cypress/integration/cluster/services.po.ts b/src/pybind/mgr/dashboard/frontend/cypress/integration/cluster/services.po.ts index 4265329db042..457b759ead39 100644 --- a/src/pybind/mgr/dashboard/frontend/cypress/integration/cluster/services.po.ts +++ b/src/pybind/mgr/dashboard/frontend/cypress/integration/cluster/services.po.ts @@ -40,15 +40,24 @@ export class ServicesPageHelper extends PageHelper { addService(serviceType: string, exist?: boolean, count = '1') { cy.get(`${this.pages.create.id}`).within(() => { this.selectServiceType(serviceType); - if (serviceType === 'rgw') { - cy.get('#service_id').type('foo'); - cy.get('#count').type(count); - } else if (serviceType === 'ingress') { - this.selectOption('backend_service', 'rgw.foo'); - cy.get('#service_id').should('have.value', 'rgw.foo'); - cy.get('#virtual_ip').type('192.168.20.1/24'); - cy.get('#frontend_port').type('8081'); - cy.get('#monitor_port').type('8082'); + switch (serviceType) { + case 'rgw': + cy.get('#service_id').type('foo'); + 
cy.get('#count').type(count); + break; + + case 'ingress': + this.selectOption('backend_service', 'rgw.foo'); + cy.get('#service_id').should('have.value', 'rgw.foo'); + cy.get('#virtual_ip').type('192.168.20.1/24'); + cy.get('#frontend_port').type('8081'); + cy.get('#monitor_port').type('8082'); + break; + + case 'nfs': + cy.get('#service_id').type('testnfs'); + cy.get('#count').type(count); + break; } cy.get('cd-submit-button').click(); diff --git a/src/pybind/mgr/dashboard/frontend/cypress/integration/orchestrator/workflow/07-nfs-exports.e2e-spec.ts b/src/pybind/mgr/dashboard/frontend/cypress/integration/orchestrator/workflow/07-nfs-exports.e2e-spec.ts new file mode 100644 index 000000000000..2d92075298e1 --- /dev/null +++ b/src/pybind/mgr/dashboard/frontend/cypress/integration/orchestrator/workflow/07-nfs-exports.e2e-spec.ts @@ -0,0 +1,81 @@ +import { ServicesPageHelper } from 'cypress/integration/cluster/services.po'; +import { NFSPageHelper } from 'cypress/integration/orchestrator/workflow/nfs/nfs-export.po'; +import { BucketsPageHelper } from 'cypress/integration/rgw/buckets.po'; + +describe('nfsExport page', () => { + const nfsExport = new NFSPageHelper(); + const services = new ServicesPageHelper(); + const buckets = new BucketsPageHelper(); + const bucketName = 'e2e.nfs.bucket'; + // @TODO: uncomment this when a CephFS volume can be created through Dashboard. 
+ // const fsPseudo = '/fsPseudo'; + const rgwPseudo = '/rgwPseudo'; + const editPseudo = '/editPseudo'; + const backends = ['CephFS', 'Object Gateway']; + const squash = 'no_root_squash'; + const client: object = { addresses: '192.168.0.10' }; + + beforeEach(() => { + cy.login(); + Cypress.Cookies.preserveOnce('token'); + nfsExport.navigateTo(); + }); + + describe('breadcrumb test', () => { + it('should open and show breadcrumb', () => { + nfsExport.expectBreadcrumbText('NFS'); + }); + }); + + describe('Create, edit and delete', () => { + it('should create an NFS cluster', () => { + services.navigateTo('create'); + + services.addService('nfs'); + + services.checkExist('nfs.testnfs', true); + services.getExpandCollapseElement().click(); + services.checkServiceStatus('nfs'); + }); + + it('should create an nfs-export with RGW backend', () => { + buckets.navigateTo('create'); + buckets.create(bucketName, 'dashboard', 'default-placement'); + + nfsExport.navigateTo(); + nfsExport.existTableCell(rgwPseudo, false); + nfsExport.navigateTo('create'); + nfsExport.create(backends[1], squash, client, rgwPseudo, bucketName); + nfsExport.existTableCell(rgwPseudo); + }); + + // @TODO: uncomment this when a CephFS volume can be created through Dashboard.
+ // it('should create a nfs-export with CephFS backend', () => { + // nfsExport.navigateTo(); + // nfsExport.existTableCell(fsPseudo, false); + // nfsExport.navigateTo('create'); + // nfsExport.create(backends[0], squash, client, fsPseudo); + // nfsExport.existTableCell(fsPseudo); + // }); + + it('should show Clients', () => { + nfsExport.clickTab('cd-nfs-details', rgwPseudo, 'Clients (1)'); + cy.get('cd-nfs-details').within(() => { + nfsExport.getTableCount('total').should('be.gte', 0); + }); + }); + + it('should edit an export', () => { + nfsExport.editExport(rgwPseudo, editPseudo); + + nfsExport.existTableCell(editPseudo); + }); + + it('should delete exports and bucket', () => { + nfsExport.delete(editPseudo); + + buckets.navigateTo(); + buckets.delete(bucketName); + }); + }); +}); diff --git a/src/pybind/mgr/dashboard/frontend/cypress/integration/orchestrator/workflow/nfs/nfs-export.po.ts b/src/pybind/mgr/dashboard/frontend/cypress/integration/orchestrator/workflow/nfs/nfs-export.po.ts new file mode 100644 index 000000000000..91dfdf48d101 --- /dev/null +++ b/src/pybind/mgr/dashboard/frontend/cypress/integration/orchestrator/workflow/nfs/nfs-export.po.ts @@ -0,0 +1,45 @@ +import { PageHelper } from 'cypress/integration/page-helper.po'; + +const pages = { + index: { url: '#/nfs', id: 'cd-nfs-list' }, + create: { url: '#/nfs/create', id: 'cd-nfs-form' } +}; + +export class NFSPageHelper extends PageHelper { + pages = pages; + + @PageHelper.restrictTo(pages.create.url) + create(backend: string, squash: string, client: object, pseudo: string, rgwPath?: string) { + this.selectOption('cluster_id', 'testnfs'); + // select a storage backend + this.selectOption('name', backend); + if (backend === 'CephFS') { + this.selectOption('fs_name', 'myfs'); + + cy.get('#security_label').click({ force: true }); + } else { + cy.get('input[data-testid=rgw_path]').type(rgwPath); + } + + cy.get('input[name=pseudo]').type(pseudo); + this.selectOption('squash', squash); + + // Add 
clients + cy.get('button[name=add_client]').click({ force: true }); + cy.get('input[name=addresses]').type(client['addresses']); + + cy.get('cd-submit-button').click(); + } + + editExport(pseudo: string, editPseudo: string) { + this.navigateEdit(pseudo); + + cy.get('input[name=pseudo]').clear().type(editPseudo); + + cy.get('cd-submit-button').click(); + + // Click the export and check its details table for updated content + this.getExpandCollapseElement(editPseudo).click(); + cy.get('.active.tab-pane').should('contain.text', editPseudo); + } +} diff --git a/src/pybind/mgr/dashboard/frontend/cypress/integration/page-helper.po.ts b/src/pybind/mgr/dashboard/frontend/cypress/integration/page-helper.po.ts index 6395128c9473..176bca5a1477 100644 --- a/src/pybind/mgr/dashboard/frontend/cypress/integration/page-helper.po.ts +++ b/src/pybind/mgr/dashboard/frontend/cypress/integration/page-helper.po.ts @@ -74,7 +74,7 @@ export abstract class PageHelper { } getTab(tabName: string) { - return cy.contains('.nav.nav-tabs li', new RegExp(`^${tabName}$`)); + return cy.contains('.nav.nav-tabs li', tabName); } getTabText(index: number) { @@ -203,6 +203,11 @@ export abstract class PageHelper { ); } + existTableCell(name: string, oughtToBePresent = true) { + const waitRule = oughtToBePresent ? 
'be.visible' : 'not.exist'; + this.getFirstTableCell(name).should(waitRule); + } + getExpandCollapseElement(content?: string) { this.waitDataTableToLoad(); diff --git a/src/pybind/mgr/dashboard/frontend/cypress/integration/pools/pools.e2e-spec.ts b/src/pybind/mgr/dashboard/frontend/cypress/integration/pools/pools.e2e-spec.ts index e5a28bfd4e20..b4c3c75ac5b8 100644 --- a/src/pybind/mgr/dashboard/frontend/cypress/integration/pools/pools.e2e-spec.ts +++ b/src/pybind/mgr/dashboard/frontend/cypress/integration/pools/pools.e2e-spec.ts @@ -30,19 +30,19 @@ describe('Pools page', () => { describe('Create, update and destroy', () => { it('should create a pool', () => { - pools.exist(poolName, false); + pools.existTableCell(poolName, false); pools.navigateTo('create'); pools.create(poolName, 8, 'rbd'); - pools.exist(poolName, true); + pools.existTableCell(poolName); }); it('should edit a pools placement group', () => { - pools.exist(poolName, true); + pools.existTableCell(poolName); pools.edit_pool_pg(poolName, 32); }); it('should show updated configuration field values', () => { - pools.exist(poolName, true); + pools.existTableCell(poolName); const bpsLimit = '4 B/s'; pools.edit_pool_configuration(poolName, bpsLimit); }); diff --git a/src/pybind/mgr/dashboard/frontend/cypress/integration/pools/pools.po.ts b/src/pybind/mgr/dashboard/frontend/cypress/integration/pools/pools.po.ts index ccf858b41206..98cee470eda9 100644 --- a/src/pybind/mgr/dashboard/frontend/cypress/integration/pools/pools.po.ts +++ b/src/pybind/mgr/dashboard/frontend/cypress/integration/pools/pools.po.ts @@ -13,12 +13,6 @@ export class PoolPageHelper extends PageHelper { return expect((n & (n - 1)) === 0, `Placement groups ${n} are not a power of 2`).to.be.true; } - @PageHelper.restrictTo(pages.index.url) - exist(name: string, oughtToBePresent = true) { - const waitRule = oughtToBePresent ? 
'be.visible' : 'not.exist'; - this.getFirstTableCell(name).should(waitRule); - } - @PageHelper.restrictTo(pages.create.url) create(name: string, placement_groups: number, ...apps: string[]) { cy.get('input[name=name]').clear().type(name); diff --git a/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/services/service-form/service-form.component.html b/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/services/service-form/service-form.component.html index e8e340a7dc8d..99c0903dacf4 100644 --- a/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/services/service-form/service-form.component.html +++ b/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/services/service-form/service-form.component.html @@ -173,50 +173,6 @@ - - - -
- -
- - This field is required. -
-
- - -
- -
- -
-
-
- diff --git a/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/services/service-form/service-form.component.spec.ts b/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/services/service-form/service-form.component.spec.ts index 78863435ea32..fd3bc8025dbe 100644 --- a/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/services/service-form/service-form.component.spec.ts +++ b/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/services/service-form/service-form.component.spec.ts @@ -144,28 +144,14 @@ describe('ServiceFormComponent', () => { describe('should test service nfs', () => { beforeEach(() => { formHelper.setValue('service_type', 'nfs'); - formHelper.setValue('pool', 'foo'); }); - it('should submit nfs with namespace', () => { - formHelper.setValue('namespace', 'bar'); + it('should submit nfs', () => { component.onSubmit(); expect(cephServiceService.create).toHaveBeenCalledWith({ service_type: 'nfs', placement: {}, - unmanaged: false, - pool: 'foo', - namespace: 'bar' - }); - }); - - it('should submit nfs w/o namespace', () => { - component.onSubmit(); - expect(cephServiceService.create).toHaveBeenCalledWith({ - service_type: 'nfs', - placement: {}, - unmanaged: false, - pool: 'foo' + unmanaged: false }); }); }); diff --git a/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/services/service-form/service-form.component.ts b/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/services/service-form/service-form.component.ts index 2b424d7f26a3..da4daf9c1f5f 100644 --- a/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/services/service-form/service-form.component.ts +++ b/src/pybind/mgr/dashboard/frontend/src/app/ceph/cluster/services/service-form/service-form.component.ts @@ -115,22 +115,16 @@ export class ServiceFormComponent extends CdForm implements OnInit { hosts: [[]], count: [null, [CdValidators.number(false), Validators.min(1)]], unmanaged: [false], - // NFS & iSCSI + // iSCSI pool: [ null, [ - CdValidators.requiredIf({ - 
service_type: 'nfs', - unmanaged: false - }), CdValidators.requiredIf({ service_type: 'iscsi', unmanaged: false }) ] ], - // NFS - namespace: [null], // RGW rgw_frontend_port: [ null, @@ -327,12 +321,6 @@ export class ServiceFormComponent extends CdForm implements OnInit { serviceSpec['placement']['count'] = values['count']; } switch (serviceType) { - case 'nfs': - serviceSpec['pool'] = values['pool']; - if (_.isString(values['namespace']) && !_.isEmpty(values['namespace'])) { - serviceSpec['namespace'] = values['namespace']; - } - break; case 'rgw': if (_.isNumber(values['rgw_frontend_port']) && values['rgw_frontend_port'] > 0) { serviceSpec['rgw_frontend_port'] = values['rgw_frontend_port']; diff --git a/src/pybind/mgr/dashboard/frontend/src/app/ceph/nfs/models/nfs.fsal.ts b/src/pybind/mgr/dashboard/frontend/src/app/ceph/nfs/models/nfs.fsal.ts new file mode 100644 index 000000000000..f204ac6d8b6b --- /dev/null +++ b/src/pybind/mgr/dashboard/frontend/src/app/ceph/nfs/models/nfs.fsal.ts @@ -0,0 +1,5 @@ +export interface NfsFSAbstractionLayer { + value: string; + descr: string; + disabled: boolean; +} diff --git a/src/pybind/mgr/dashboard/frontend/src/app/ceph/nfs/nfs-cluster-type.enum.ts b/src/pybind/mgr/dashboard/frontend/src/app/ceph/nfs/nfs-cluster-type.enum.ts deleted file mode 100644 index 7a775e5ab2db..000000000000 --- a/src/pybind/mgr/dashboard/frontend/src/app/ceph/nfs/nfs-cluster-type.enum.ts +++ /dev/null @@ -1,4 +0,0 @@ -export enum NFSClusterType { - user = 'user', - orchestrator = 'orchestrator' -} diff --git a/src/pybind/mgr/dashboard/frontend/src/app/ceph/nfs/nfs-details/nfs-details.component.spec.ts b/src/pybind/mgr/dashboard/frontend/src/app/ceph/nfs/nfs-details/nfs-details.component.spec.ts index 3abae2ee88e9..fcf5305393cb 100644 --- a/src/pybind/mgr/dashboard/frontend/src/app/ceph/nfs/nfs-details/nfs-details.component.spec.ts +++ b/src/pybind/mgr/dashboard/frontend/src/app/ceph/nfs/nfs-details/nfs-details.component.spec.ts @@ -25,18 +25,15 @@ 
describe('NfsDetailsComponent', () => { fixture = TestBed.createComponent(NfsDetailsComponent); component = fixture.componentInstance; - component.selection = undefined; component.selection = { export_id: 1, path: '/qwe', fsal: { name: 'CEPH', user_id: 'fs', fs_name: 1 }, cluster_id: 'cluster1', - daemons: ['node1', 'node2'], pseudo: '/qwe', - tag: 'asd', access_type: 'RW', squash: 'no_root_squash', - protocols: [3, 4], + protocols: [4], transports: ['TCP', 'UDP'], clients: [ { @@ -44,9 +41,7 @@ describe('NfsDetailsComponent', () => { access_type: 'RW', squash: 'root_id_squash' } - ], - id: 'cluster1:1', - state: 'LOADING' + ] }; component.ngOnChanges(); fixture.detectChanges(); @@ -62,8 +57,7 @@ describe('NfsDetailsComponent', () => { 'CephFS Filesystem': 1, 'CephFS User': 'fs', Cluster: 'cluster1', - Daemons: ['node1', 'node2'], - 'NFS Protocol': ['NFSv3', 'NFSv4'], + 'NFS Protocol': ['NFSv4'], Path: '/qwe', Pseudo: '/qwe', 'Security Label': undefined, @@ -77,7 +71,7 @@ describe('NfsDetailsComponent', () => { const newData = _.assignIn(component.selection, { fsal: { name: 'RGW', - rgw_user_id: 'rgw_user_id' + user_id: 'user-id' } }); component.selection = newData; @@ -85,9 +79,8 @@ describe('NfsDetailsComponent', () => { expect(component.data).toEqual({ 'Access Type': 'RW', Cluster: 'cluster1', - Daemons: ['node1', 'node2'], - 'NFS Protocol': ['NFSv3', 'NFSv4'], - 'Object Gateway User': 'rgw_user_id', + 'NFS Protocol': ['NFSv4'], + 'Object Gateway User': 'user-id', Path: '/qwe', Pseudo: '/qwe', Squash: 'no_root_squash', diff --git a/src/pybind/mgr/dashboard/frontend/src/app/ceph/nfs/nfs-details/nfs-details.component.ts b/src/pybind/mgr/dashboard/frontend/src/app/ceph/nfs/nfs-details/nfs-details.component.ts index 25a42416f7e3..5a84bd52e9da 100644 --- a/src/pybind/mgr/dashboard/frontend/src/app/ceph/nfs/nfs-details/nfs-details.component.ts +++ b/src/pybind/mgr/dashboard/frontend/src/app/ceph/nfs/nfs-details/nfs-details.component.ts @@ -45,7 +45,6 @@ export class 
NfsDetailsComponent implements OnChanges { this.data = {}; this.data[$localize`Cluster`] = this.selectedItem.cluster_id; - this.data[$localize`Daemons`] = this.selectedItem.daemons; this.data[$localize`NFS Protocol`] = this.selectedItem.protocols.map( (protocol: string) => 'NFSv' + protocol ); @@ -62,7 +61,7 @@ export class NfsDetailsComponent implements OnChanges { this.data[$localize`Security Label`] = this.selectedItem.fsal.sec_label_xattr; } else { this.data[$localize`Storage Backend`] = $localize`Object Gateway`; - this.data[$localize`Object Gateway User`] = this.selectedItem.fsal.rgw_user_id; + this.data[$localize`Object Gateway User`] = this.selectedItem.fsal.user_id; } } } diff --git a/src/pybind/mgr/dashboard/frontend/src/app/ceph/nfs/nfs-form-client/nfs-form-client.component.html b/src/pybind/mgr/dashboard/frontend/src/app/ceph/nfs/nfs-form-client/nfs-form-client.component.html index 4f84f8e03b75..137cc43fa4b4 100644 --- a/src/pybind/mgr/dashboard/frontend/src/app/ceph/nfs/nfs-form-client/nfs-form-client.component.html +++ b/src/pybind/mgr/dashboard/frontend/src/app/ceph/nfs/nfs-form-client/nfs-form-client.component.html @@ -26,7 +26,7 @@
- +
+ id="cluster_id"> @@ -34,66 +37,10 @@ [value]="cluster.cluster_id">{{ cluster.cluster_id }} This field is required. -
-
- - -
- -
- -
- - - - -
-
- -
-
- - - Add daemon - -
-
- -
-
- -
-
+ *ngIf="nfsForm.showError('cluster_id', formDir, 'required') || allClusters?.length === 0" + i18n>This field is required. + To create a new NFS cluster, add a new NFS Service.
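Because `cluster_ls` now returns plain strings, the form only has to wrap each name into an option object for the `cluster_id` select. A hedged sketch of that step — `fetchClusters` and `ClusterOption` are illustrative names, not the dashboard's actual service API:

```typescript
interface ClusterOption {
  cluster_id: string;
}

// Wraps the backend's plain name list (e.g. ['vstart']) into the shape
// the <select> options bind to via [value]="cluster.cluster_id".
async function loadClusters(
  fetchClusters: () => Promise<string[]>
): Promise<ClusterOption[]> {
  const names = await fetchClusters();
  return names.map((cluster_id) => ({ cluster_id }));
}
```

When the resulting array is empty, the template above treats the field as unfillable and points the user at creating an NFS service first.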
@@ -120,70 +67,15 @@ value="" i18n>-- Select the storage backend -- + [value]="fsal.value" + [disabled]="fsal.disabled">{{ fsal.descr }} This field is required. -
-
- - -
- -
- This field is required. -
-
- - -
- -
- - This field is required. + *ngIf="fsalAvailabilityError" + i18n>{{ fsalAvailabilityError }}
@@ -192,13 +84,13 @@ *ngIf="nfsForm.getValue('name') === 'CEPH'"> + i18n>Volume
Path need to start with a '/' and can be followed by a word New directory will be created + *ngIf="nfsForm.showError('path', formDir, 'pathNameNotAllowed')" + i18n>The path does not exist.
- +
+ [ngbTypeahead]="bucketDataSource"> This field is required. - Path can only be a single '/' or a word - - New bucket will be created + *ngIf="nfsForm.showError('path', formDir, 'bucketNameNotAllowed')" + i18n>The bucket does not exist or is not in the default realm (if multiple realms are configured). + To continue, create a new bucket.
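The commit message describes splitting the bucket validator in cd-validators.ts into a name check and an existence check. A rough sketch of that split as plain functions — the function names, the simplified bucket-name regex, and the injected `listBuckets` callback are assumptions for illustration, not the dashboard's actual validator API:

```typescript
type ValidationErrors = Record<string, boolean> | null;

// Name check: a simplified subset of the RGW/S3 bucket-name rules
// (lowercase letters, digits, '.' and '-', 3-63 characters).
function bucketNameValidator(name: string): ValidationErrors {
  return /^[0-9a-z.-]{3,63}$/.test(name) ? null : { bucketNameInvalid: true };
}

// Existence check: can assert either existence (the NFS export form only
// allows existing buckets) or non-existence (the bucket creation form).
async function bucketExistenceValidator(
  name: string,
  requiredExistenceResult: boolean,
  listBuckets: () => Promise<string[]>
): Promise<ValidationErrors> {
  const buckets = await listBuckets();
  const exists = buckets.includes(name);
  return exists === requiredExistenceResult ? null : { bucketNameNotAllowed: true };
}
```

Splitting the two concerns lets the synchronous name check run on every keystroke while the asynchronous existence check only fires against the RGW bucket list.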
@@ -317,55 +213,23 @@ for="protocols" i18n>NFS Protocol
-
- - -
+ id="protocolNfsv4" + disabled>
This field is required.
- -
- -
- -
-
-
@@ -435,9 +299,12 @@
- +