NFS-Ganesha Management
----------------------
-Support for NFS-Ganesha Clusters Deployed by the Orchestrator
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The Ceph Dashboard can be used to manage NFS-Ganesha clusters deployed by the
-Orchestrator and will detect them automatically. For more details
-on deploying NFS-Ganesha clusters with the Orchestrator, please see:
-
-- Cephadm backend: :ref:`orchestrator-cli-stateless-services`. Or particularly, see
- :ref:`deploy-cephadm-nfs-ganesha`.
-- Rook backend: `Ceph NFS Gateway CRD <https://rook.github.io/docs/rook/master/ceph-nfs-crd.html>`_.
-
-Support for NFS-Ganesha Clusters Defined by the User
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-.. note::
-
- This configuration only applies for user-defined clusters,
- NOT for Orchestrator-deployed clusters.
-
-The Ceph Dashboard can manage `NFS Ganesha <https://nfs-ganesha.github.io/>`_ exports that use
-CephFS or RGW as their backstore.
-
-To enable this feature in Ceph Dashboard there are some assumptions that need
-to be met regarding the way NFS-Ganesha services are configured.
-
-The dashboard manages NFS-Ganesha config files stored in RADOS objects on the Ceph Cluster.
-NFS-Ganesha must store part of their configuration in the Ceph cluster.
-
-These configuration files follow the below conventions.
-Each export block must be stored in its own RADOS object named
-``export-<id>``, where ``<id>`` must match the ``Export_ID`` attribute of the
-export configuration. Then, for each NFS-Ganesha service daemon there should
-exist a RADOS object named ``conf-<daemon_id>``, where ``<daemon_id>`` is an
-arbitrary string that should uniquely identify the daemon instance (e.g., the
-hostname where the daemon is running).
-Each ``conf-<daemon_id>`` object contains the RADOS URLs to the exports that
-the NFS-Ganesha daemon should serve. These URLs are of the form::
-
- %url rados://<pool_name>[/<namespace>]/export-<id>
-
-Both the ``conf-<daemon_id>`` and ``export-<id>`` objects must be stored in the
-same RADOS pool/namespace.
-
-
-Configuring NFS-Ganesha in the Dashboard
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-To enable management of NFS-Ganesha exports in the Ceph Dashboard, we
-need to tell the Dashboard the RADOS pool and namespace in which
-configuration objects are stored. The Ceph Dashboard can then access them
-by following the naming convention described above.
-
-The Dashboard command to configure the NFS-Ganesha configuration objects
-location is::
-
- $ ceph dashboard set-ganesha-clusters-rados-pool-namespace <pool_name>[/<namespace>]
-
-After running the above command, the Ceph Dashboard is able to find the NFS-Ganesha
-configuration objects and we can manage exports through the Web UI.
-
-.. note::
-
- A dedicated pool for the NFS shares should be used. Otherwise it can cause the
- `known issue <https://tracker.ceph.com/issues/46176>`_ with listing of shares
- if the NFS objects are stored together with a lot of other objects in a single
- pool.
-
-
-Support for Multiple NFS-Ganesha Clusters
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The Ceph Dashboard also supports management of NFS-Ganesha exports belonging
-to other NFS-Ganesha clusters. An NFS-Ganesha cluster is a group of
-NFS-Ganesha service daemons sharing the same exports. NFS-Ganesha
-clusters are independent and don't share the exports configuration among each
-other.
-
-Each NFS-Ganesha cluster should store its configuration objects in a
-unique RADOS pool/namespace to isolate the configuration.
-
-To specify the the configuration location of each NFS-Ganesha cluster we
-can use the same command as above but with a different value pattern::
-
- $ ceph dashboard set-ganesha-clusters-rados-pool-namespace <cluster_id>:<pool_name>[/<namespace>](,<cluster_id>:<pool_name>[/<namespace>])*
-
-The ``<cluster_id>`` is an arbitrary string that should uniquely identify the
-NFS-Ganesha cluster.
-
-When configuring the Ceph Dashboard with multiple NFS-Ganesha clusters, the
-Web UI will allow you to choose to which cluster an export belongs.
-
+The dashboard requires the NFS module to be enabled, as it is used to manage
+NFS clusters and NFS exports. For more information, see :ref:`mgr-nfs`.
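+
+As a minimal sketch (the cluster name ``mynfs`` is only an example, and the
+exact ``nfs cluster create`` arguments may vary between Ceph releases), the
+module can be enabled and a first cluster created from the CLI with::
+
+    $ ceph mgr module enable nfs
+    $ ceph nfs cluster create mynfs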
Plug-ins
--------
Reporting issues from Dashboard
"""""""""""""""""""""""""""""""
-Ceph-Dashboard provides two ways to create an issue in the Ceph Issue Tracker,
+Ceph-Dashboard provides two ways to create an issue in the Ceph Issue Tracker,
either using the Ceph command line interface or by using the Ceph Dashboard
-user interface.
+user interface.
To create an issue in the Ceph Issue Tracker, a user needs to have an account
on the issue tracker. Under the ``my account`` tab in the Ceph Issue Tracker,
the user can find the API access key that is required for authentication.
The available projects to create an issue on are:
#. dashboard
#. block
-#. object
-#. file_system
+#. object
+#. file_system
#. ceph_manager
#. orchestrator
#. ceph_volume
#. bug
#. feature
-The subject and description are then set by the user.
+The subject and description are then set by the user.
The user can also create an issue using the Dashboard user interface. The settings
icon drop down menu on the top right of the navigation bar has the option to
``Raise an issue``. On clicking it, a modal dialog opens that has the option to
-select the project and tracker from their respective drop down menus. The subject
+select the project and tracker from their respective drop down menus. The subject
and multiline description are added by the user. The user can then submit the issue.
-
+++ /dev/null
-# -*- coding: utf-8 -*-
-# pylint: disable=too-many-public-methods
-
-from __future__ import absolute_import
-
-from .helper import DashboardTestCase, JList, JObj
-
-
-class GaneshaTest(DashboardTestCase):
- CEPHFS = True
- AUTH_ROLES = ['pool-manager', 'ganesha-manager']
-
- @classmethod
- def setUpClass(cls):
- super(GaneshaTest, cls).setUpClass()
- cls.create_pool('ganesha', 2**2, 'replicated')
- cls._rados_cmd(['-p', 'ganesha', '-N', 'ganesha1', 'create', 'conf-node1'])
- cls._rados_cmd(['-p', 'ganesha', '-N', 'ganesha1', 'create', 'conf-node2'])
- cls._rados_cmd(['-p', 'ganesha', '-N', 'ganesha1', 'create', 'conf-node3'])
- cls._rados_cmd(['-p', 'ganesha', '-N', 'ganesha2', 'create', 'conf-node1'])
- cls._rados_cmd(['-p', 'ganesha', '-N', 'ganesha2', 'create', 'conf-node2'])
- cls._rados_cmd(['-p', 'ganesha', '-N', 'ganesha2', 'create', 'conf-node3'])
- cls._ceph_cmd(['dashboard', 'set-ganesha-clusters-rados-pool-namespace',
- 'cluster1:ganesha/ganesha1,cluster2:ganesha/ganesha2'])
-
- # RGW setup
- cls._radosgw_admin_cmd([
- 'user', 'create', '--uid', 'admin', '--display-name', 'admin',
- '--system', '--access-key', 'admin', '--secret', 'admin'
- ])
- cls._ceph_cmd_with_secret(['dashboard', 'set-rgw-api-secret-key'], 'admin')
- cls._ceph_cmd_with_secret(['dashboard', 'set-rgw-api-access-key'], 'admin')
-
- @classmethod
- def tearDownClass(cls):
- super(GaneshaTest, cls).tearDownClass()
- cls._radosgw_admin_cmd(['user', 'rm', '--uid', 'admin', '--purge-data'])
- cls._ceph_cmd(['osd', 'pool', 'delete', 'ganesha', 'ganesha',
- '--yes-i-really-really-mean-it'])
-
- @DashboardTestCase.RunAs('test', 'test', [{'rbd-image': ['create', 'update', 'delete']}])
- def test_read_access_permissions(self):
- self._get('/api/nfs-ganesha/export')
- self.assertStatus(403)
-
- def test_list_daemons(self):
- daemons = self._get("/api/nfs-ganesha/daemon")
- self.assertEqual(len(daemons), 6)
- daemons = [(d['daemon_id'], d['cluster_id']) for d in daemons]
- self.assertIn(('node1', 'cluster1'), daemons)
- self.assertIn(('node2', 'cluster1'), daemons)
- self.assertIn(('node3', 'cluster1'), daemons)
- self.assertIn(('node1', 'cluster2'), daemons)
- self.assertIn(('node2', 'cluster2'), daemons)
- self.assertIn(('node3', 'cluster2'), daemons)
-
- @classmethod
- def create_export(cls, path, cluster_id, daemons, fsal, sec_label_xattr=None):
- if fsal == 'CEPH':
- fsal = {"name": "CEPH", "user_id": "admin", "fs_name": None,
- "sec_label_xattr": sec_label_xattr}
- pseudo = "/cephfs{}".format(path)
- else:
- fsal = {"name": "RGW", "rgw_user_id": "admin"}
- pseudo = "/rgw/{}".format(path if path[0] != '/' else "")
- ex_json = {
- "path": path,
- "fsal": fsal,
- "cluster_id": cluster_id,
- "daemons": daemons,
- "pseudo": pseudo,
- "tag": None,
- "access_type": "RW",
- "squash": "no_root_squash",
- "security_label": sec_label_xattr is not None,
- "protocols": [4],
- "transports": ["TCP"],
- "clients": [{
- "addresses": ["10.0.0.0/8"],
- "access_type": "RO",
- "squash": "root"
- }]
- }
- return cls._task_post('/api/nfs-ganesha/export', ex_json)
-
- def tearDown(self):
- super(GaneshaTest, self).tearDown()
- exports = self._get("/api/nfs-ganesha/export")
- if self._resp.status_code != 200:
- return
- self.assertIsInstance(exports, list)
- for exp in exports:
- self._task_delete("/api/nfs-ganesha/export/{}/{}"
- .format(exp['cluster_id'], exp['export_id']))
-
- def _test_create_export(self, cephfs_path):
- exports = self._get("/api/nfs-ganesha/export")
- self.assertEqual(len(exports), 0)
-
- data = self.create_export(cephfs_path, 'cluster1', ['node1', 'node2'], 'CEPH',
- "security.selinux")
-
- exports = self._get("/api/nfs-ganesha/export")
- self.assertEqual(len(exports), 1)
- self.assertDictEqual(exports[0], data)
- return data
-
- def test_create_export(self):
- self._test_create_export('/foo')
-
- def test_create_export_for_cephfs_root(self):
- self._test_create_export('/')
-
- def test_update_export(self):
- export = self._test_create_export('/foo')
- export['access_type'] = 'RO'
- export['daemons'] = ['node1', 'node3']
- export['security_label'] = True
- data = self._task_put('/api/nfs-ganesha/export/{}/{}'
- .format(export['cluster_id'], export['export_id']),
- export)
- exports = self._get("/api/nfs-ganesha/export")
- self.assertEqual(len(exports), 1)
- self.assertDictEqual(exports[0], data)
- self.assertEqual(exports[0]['daemons'], ['node1', 'node3'])
- self.assertEqual(exports[0]['security_label'], True)
-
- def test_delete_export(self):
- export = self._test_create_export('/foo')
- self._task_delete("/api/nfs-ganesha/export/{}/{}"
- .format(export['cluster_id'], export['export_id']))
- self.assertStatus(204)
-
- def test_get_export(self):
- exports = self._get("/api/nfs-ganesha/export")
- self.assertEqual(len(exports), 0)
-
- data1 = self.create_export("/foo", 'cluster2', ['node1', 'node2'], 'CEPH')
- data2 = self.create_export("mybucket", 'cluster2', ['node2', 'node3'], 'RGW')
-
- export1 = self._get("/api/nfs-ganesha/export/cluster2/1")
- self.assertDictEqual(export1, data1)
-
- export2 = self._get("/api/nfs-ganesha/export/cluster2/2")
- self.assertDictEqual(export2, data2)
-
- def test_invalid_status(self):
- self._ceph_cmd(['dashboard', 'set-ganesha-clusters-rados-pool-namespace', ''])
-
- data = self._get('/api/nfs-ganesha/status')
- self.assertStatus(200)
- self.assertIn('available', data)
- self.assertIn('message', data)
- self.assertFalse(data['available'])
- self.assertIn(("NFS-Ganesha cluster is not detected. "
- "Please set the GANESHA_RADOS_POOL_NAMESPACE "
- "setting or deploy an NFS-Ganesha cluster with the Orchestrator."),
- data['message'])
-
- self._ceph_cmd(['dashboard', 'set-ganesha-clusters-rados-pool-namespace',
- 'cluster1:ganesha/ganesha1,cluster2:ganesha/ganesha2'])
-
- def test_valid_status(self):
- data = self._get('/api/nfs-ganesha/status')
- self.assertStatus(200)
- self.assertIn('available', data)
- self.assertIn('message', data)
- self.assertTrue(data['available'])
-
- def test_ganesha_fsals(self):
- data = self._get('/ui-api/nfs-ganesha/fsals')
- self.assertStatus(200)
- self.assertIn('CEPH', data)
-
- def test_ganesha_filesystems(self):
- data = self._get('/ui-api/nfs-ganesha/cephfs/filesystems')
- self.assertStatus(200)
- self.assertSchema(data, JList(JObj({
- 'id': int,
- 'name': str
- })))
-
- def test_ganesha_lsdir(self):
- fss = self._get('/ui-api/nfs-ganesha/cephfs/filesystems')
- self.assertStatus(200)
- for fs in fss:
- data = self._get('/ui-api/nfs-ganesha/lsdir/{}'.format(fs['name']))
- self.assertStatus(200)
- self.assertSchema(data, JObj({'paths': JList(str)}))
- self.assertEqual(data['paths'][0], '/')
-
- def test_ganesha_buckets(self):
- data = self._get('/ui-api/nfs-ganesha/rgw/buckets')
- self.assertStatus(200)
- schema = JList(str)
- self.assertSchema(data, schema)
-
- def test_ganesha_clusters(self):
- data = self._get('/ui-api/nfs-ganesha/clusters')
- self.assertStatus(200)
- schema = JList(str)
- self.assertSchema(data, schema)
-
- def test_ganesha_cephx_clients(self):
- data = self._get('/ui-api/nfs-ganesha/cephx/clients')
- self.assertStatus(200)
- schema = JList(str)
- self.assertSchema(data, schema)
self.assertEqual(data['tenant'], '')
# List all buckets.
- data = self._get('/api/rgw/bucket')
+ data = self._get('/api/rgw/bucket', version='1.1')
self.assertStatus(200)
self.assertEqual(len(data), 1)
self.assertIn('teuth-test-bucket', data)
# List all buckets with stats.
- data = self._get('/api/rgw/bucket?stats=true')
+ data = self._get('/api/rgw/bucket?stats=true', version='1.1')
self.assertStatus(200)
self.assertEqual(len(data), 1)
self.assertSchema(data[0], JObj(sub_elems={
}, allow_unknown=True))
# List all buckets names without stats.
- data = self._get('/api/rgw/bucket?stats=false')
+ data = self._get('/api/rgw/bucket?stats=false', version='1.1')
self.assertStatus(200)
self.assertEqual(data, ['teuth-test-bucket'])
# Delete the bucket.
self._delete('/api/rgw/bucket/teuth-test-bucket')
self.assertStatus(204)
- data = self._get('/api/rgw/bucket')
+ data = self._get('/api/rgw/bucket', version='1.1')
self.assertStatus(200)
self.assertEqual(len(data), 0)
self.assertIsNone(data)
# List all buckets.
- data = self._get('/api/rgw/bucket')
+ data = self._get('/api/rgw/bucket', version='1.1')
self.assertStatus(200)
self.assertEqual(len(data), 1)
self.assertIn('testx/teuth-test-bucket', data)
self._delete('/api/rgw/bucket/{}'.format(
parse.quote_plus('testx/teuth-test-bucket')))
self.assertStatus(204)
- data = self._get('/api/rgw/bucket')
+ data = self._get('/api/rgw/bucket', version='1.1')
self.assertStatus(200)
self.assertEqual(len(data), 0)
cephadm bootstrap --mon-ip $mon_ip --initial-dashboard-password {{ admin_password }} --allow-fqdn-hostname --skip-monitoring-stack --dashboard-password-noupdate --shared_ceph_folder /mnt/{{ ceph_dev_folder }}
fsid=$(cat /etc/ceph/ceph.conf | grep fsid | awk '{ print $3}')
+cephadm_shell="cephadm shell --fsid ${fsid} -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring"
{% for number in range(1, nodes) %}
ssh-copy-id -f -i /etc/ceph/ceph.pub -o StrictHostKeyChecking=no root@{{ prefix }}-node-0{{ number }}.{{ domain }}
{% if expanded_cluster is defined %}
- cephadm shell --fsid $fsid -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring ceph orch host add {{ prefix }}-node-0{{ number }}.{{ domain }}
+ ${cephadm_shell} ceph orch host add {{ prefix }}-node-0{{ number }}.{{ domain }}
{% endif %}
{% endfor %}
{% if expanded_cluster is defined %}
- cephadm shell --fsid $fsid -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring ceph orch apply osd --all-available-devices
+ ${cephadm_shell} ceph orch apply osd --all-available-devices
{% endif %}
# -*- coding: utf-8 -*-
+import json
import logging
import os
-import json
from functools import partial
+from typing import Any, Dict, List, Optional
import cephfs
import cherrypy
-# Importing from nfs module throws Attribute Error
-# https://gist.github.com/varshar16/61ac26426bbe5f5f562ebb14bcd0f548
-#from nfs.export_utils import NFS_GANESHA_SUPPORTED_FSALS
-#from nfs.utils import available_clusters
+from mgr_module import NFS_GANESHA_SUPPORTED_FSALS
from .. import mgr
from ..security import Scope
from ..services.cephfs import CephFS
from ..services.exception import DashboardException, serialize_dashboard_exception
-from ..services.rgw_client import NoCredentialsException, \
- NoRgwDaemonsException, RequestException, RgwClient
from . import APIDoc, APIRouter, BaseController, Endpoint, EndpointDoc, \
ReadPermission, RESTController, Task, UIRouter
+from ._version import APIVersion
logger = logging.getLogger('controllers.nfs')
def __init__(self, msg):
super(NFSException, self).__init__(component="nfs", msg=msg)
-# Remove this once attribute error is fixed
-NFS_GANESHA_SUPPORTED_FSALS = ['CEPH', 'RGW']
# documentation helpers
EXPORT_SCHEMA = {
'export_id': (int, 'Export ID'),
'path': (str, 'Export path'),
'cluster_id': (str, 'Cluster identifier'),
- 'daemons': ([str], 'List of NFS Ganesha daemons identifiers'),
'pseudo': (str, 'Pseudo FS path'),
'access_type': (str, 'Export access type'),
'squash': (str, 'Export squash policy'),
'transports': ([str], 'List of transport types'),
'fsal': ({
'name': (str, 'name of FSAL'),
- 'user_id': (str, 'CephX user id', True),
- 'filesystem': (str, 'CephFS filesystem ID', True),
+ 'fs_name': (str, 'CephFS filesystem name', True),
'sec_label_xattr': (str, 'Name of xattr for security label', True),
- 'rgw_user_id': (str, 'RGW user id', True)
+ 'user_id': (str, 'User id', True)
}, 'FSAL configuration'),
'clients': ([{
'addresses': ([str], 'list of IP addresses'),
CREATE_EXPORT_SCHEMA = {
'path': (str, 'Export path'),
'cluster_id': (str, 'Cluster identifier'),
- 'daemons': ([str], 'List of NFS Ganesha daemons identifiers'),
'pseudo': (str, 'Pseudo FS path'),
'access_type': (str, 'Export access type'),
'squash': (str, 'Export squash policy'),
'transports': ([str], 'List of transport types'),
'fsal': ({
'name': (str, 'name of FSAL'),
- 'user_id': (str, 'CephX user id', True),
- 'filesystem': (str, 'CephFS filesystem ID', True),
- 'sec_label_xattr': (str, 'Name of xattr for security label', True),
- 'rgw_user_id': (str, 'RGW user id', True)
+ 'fs_name': (str, 'CephFS filesystem name', True),
+ 'sec_label_xattr': (str, 'Name of xattr for security label', True)
}, 'FSAL configuration'),
'clients': ([{
'addresses': ([str], 'list of IP addresses'),
'access_type': (str, 'Client access type'),
'squash': (str, 'Client squash policy')
- }], 'List of client configurations'),
- 'reload_daemons': (bool,
- 'Trigger reload of NFS-Ganesha daemons configuration',
- True)
+ }], 'List of client configurations')
}
@APIRouter('/nfs-ganesha', Scope.NFS_GANESHA)
-@APIDoc("NFS-Ganesha Management API", "NFS-Ganesha")
+@APIDoc("NFS-Ganesha Cluster Management API", "NFS-Ganesha")
class NFSGanesha(RESTController):
@EndpointDoc("Status of NFS-Ganesha management feature",
@Endpoint()
@ReadPermission
def status(self):
- '''
- FIXME: update this to check if any nfs cluster is available. Otherwise this endpoint can be safely removed too.
- As it was introduced to check dashboard pool and namespace configuration.
+ status = {'available': True, 'message': None}
try:
- cluster_ls = available_clusters(mgr)
- if not cluster_ls:
- raise NFSException('Please deploy a cluster using `nfs cluster create ... or orch apply nfs ..')
- except (NameError, ImportError) as e:
- status['message'] = str(e) # type: ignore
+ mgr.remote('nfs', 'cluster_ls')
+ except ImportError as error:
+ logger.exception(error)
status['available'] = False
+ status['message'] = str(error) # type: ignore
+
return status
- '''
- return {'available': True, 'message': None}
+
+
+@APIRouter('/nfs-ganesha/cluster', Scope.NFS_GANESHA)
+@APIDoc(group="NFS-Ganesha")
+class NFSGaneshaCluster(RESTController):
+ @ReadPermission
+ @RESTController.MethodMap(version=APIVersion.EXPERIMENTAL)
+ def list(self):
+ return mgr.remote('nfs', 'cluster_ls')
@APIRouter('/nfs-ganesha/export', Scope.NFS_GANESHA)
class NFSGaneshaExports(RESTController):
RESOURCE_ID = "cluster_id/export_id"
+ @staticmethod
+ def _get_schema_export(export: Dict[str, Any]) -> Dict[str, Any]:
+ """
+        Filter out export attributes that are not exposed in the export schema,
+        e.g. RGW user access/secret keys.
+ """
+ schema_fsal_info = {}
+ for key in export['fsal'].keys():
+ if key in EXPORT_SCHEMA['fsal'][0].keys(): # type: ignore
+ schema_fsal_info[key] = export['fsal'][key]
+ export['fsal'] = schema_fsal_info
+ return export
+
@EndpointDoc("List all NFS-Ganesha exports",
responses={200: [EXPORT_SCHEMA]})
- def list(self):
- '''
- list exports based on cluster_id ?
- export_mgr = mgr.remote('nfs', 'fetch_nfs_export_obj')
- ret, out, err = export_mgr.list_exports(cluster_id=cluster_id, detailed=True)
- if ret == 0:
- return json.loads(out)
- raise NFSException(f"Failed to list exports: {err}")
- '''
- return mgr.remote('nfs', 'export_ls')
+ def list(self) -> List[Dict[str, Any]]:
+ exports = []
+ for export in mgr.remote('nfs', 'export_ls'):
+ exports.append(self._get_schema_export(export))
+
+ return exports
@NfsTask('create', {'path': '{path}', 'fsal': '{fsal.name}',
'cluster_id': '{cluster_id}'}, 2.0)
@EndpointDoc("Creates a new NFS-Ganesha export",
parameters=CREATE_EXPORT_SCHEMA,
responses={201: EXPORT_SCHEMA})
- def create(self, path, cluster_id, daemons, pseudo, access_type,
- squash, security_label, protocols, transports, fsal, clients,
- reload_daemons=True):
- fsal.pop('user_id') # mgr/nfs does not let you customize user_id
+ @RESTController.MethodMap(version=APIVersion(2, 0)) # type: ignore
+ def create(self, path, cluster_id, pseudo, access_type,
+ squash, security_label, protocols, transports, fsal, clients) -> Dict[str, Any]:
+
+        if 'user_id' in fsal:  # fsal is a dict, so test membership rather than hasattr
+            fsal.pop('user_id')  # mgr/nfs does not let you customize user_id
raw_ex = {
'path': path,
'pseudo': pseudo,
'cluster_id': cluster_id,
- 'daemons': daemons,
'access_type': access_type,
'squash': squash,
'security_label': security_label,
'clients': clients
}
export_mgr = mgr.remote('nfs', 'fetch_nfs_export_obj')
- ret, out, err = export_mgr.apply_export(cluster_id, json.dumps(raw_ex))
+ ret, _, err = export_mgr.apply_export(cluster_id, json.dumps(raw_ex))
if ret == 0:
- return export_mgr._get_export_dict(cluster_id, pseudo)
+ return self._get_schema_export(
+ export_mgr._get_export_dict(cluster_id, pseudo)) # pylint: disable=W0212
raise NFSException(f"Export creation failed {err}")
@EndpointDoc("Get an NFS-Ganesha export",
parameters={
'cluster_id': (str, 'Cluster identifier'),
- 'export_id': (int, "Export ID")
+ 'export_id': (str, "Export ID")
},
responses={200: EXPORT_SCHEMA})
- def get(self, cluster_id, export_id):
- '''
- Get export by pseudo path?
- export_mgr = mgr.remote('nfs', 'fetch_nfs_export_obj')
- return export_mgr._get_export_dict(cluster_id, pseudo)
-
- Get export by id
- export_mgr = mgr.remote('nfs', 'fetch_nfs_export_obj')
- return export_mgr.get_export_by_id(cluster_id, export_id)
- '''
- return mgr.remote('nfs', 'export_get', cluster_id, export_id)
+ def get(self, cluster_id, export_id) -> Optional[Dict[str, Any]]:
+ export_id = int(export_id)
+ export = mgr.remote('nfs', 'export_get', cluster_id, export_id)
+ if export:
+ export = self._get_schema_export(export)
+
+ return export
@NfsTask('edit', {'cluster_id': '{cluster_id}', 'export_id': '{export_id}'},
2.0)
parameters=dict(export_id=(int, "Export ID"),
**CREATE_EXPORT_SCHEMA),
responses={200: EXPORT_SCHEMA})
- def set(self, cluster_id, export_id, path, daemons, pseudo, access_type,
- squash, security_label, protocols, transports, fsal, clients,
- reload_daemons=True):
+ @RESTController.MethodMap(version=APIVersion(2, 0)) # type: ignore
+ def set(self, cluster_id, export_id, path, pseudo, access_type,
+ squash, security_label, protocols, transports, fsal, clients) -> Dict[str, Any]:
- fsal.pop('user_id') # mgr/nfs does not let you customize user_id
+        if 'user_id' in fsal:  # fsal is a dict, so test membership rather than hasattr
+            fsal.pop('user_id')  # mgr/nfs does not let you customize user_id
raw_ex = {
'path': path,
'pseudo': pseudo,
'cluster_id': cluster_id,
- 'daemons': daemons,
+ 'export_id': export_id,
'access_type': access_type,
'squash': squash,
'security_label': security_label,
}
export_mgr = mgr.remote('nfs', 'fetch_nfs_export_obj')
- ret, out, err = export_mgr.apply_export(cluster_id, json.dumps(raw_ex))
+ ret, _, err = export_mgr.apply_export(cluster_id, json.dumps(raw_ex))
if ret == 0:
- return export_mgr._get_export_dict(cluster_id, pseudo)
+ return self._get_schema_export(
+ export_mgr._get_export_dict(cluster_id, pseudo)) # pylint: disable=W0212
raise NFSException(f"Failed to update export: {err}")
@NfsTask('delete', {'cluster_id': '{cluster_id}',
@EndpointDoc("Deletes an NFS-Ganesha export",
parameters={
'cluster_id': (str, 'Cluster identifier'),
- 'export_id': (int, "Export ID"),
- 'reload_daemons': (bool,
- 'Trigger reload of NFS-Ganesha daemons'
- ' configuration',
- True)
+ 'export_id': (int, "Export ID")
})
- def delete(self, cluster_id, export_id, reload_daemons=True):
- '''
- Delete by pseudo path
- export_mgr = mgr.remote('nfs', 'fetch_nfs_export_obj')
- export_mgr.delete_export(cluster_id, pseudo)
-
- if deleting by export id
- export_mgr = mgr.remote('nfs', 'fetch_nfs_export_obj')
- export = export_mgr.get_export_by_id(cluster_id, export_id)
- ret, out, err = export_mgr.delete_export(cluster_id=cluster_id, pseudo_path=export['pseudo'])
- if ret != 0:
- raise NFSException(err)
- '''
+ @RESTController.MethodMap(version=APIVersion(2, 0)) # type: ignore
+ def delete(self, cluster_id, export_id):
export_id = int(export_id)
export = mgr.remote('nfs', 'export_get', cluster_id, export_id)
mgr.remote('nfs', 'export_rm', cluster_id, export['pseudo'])
-# FIXME: remove this; dashboard should only care about clusters.
-@APIRouter('/nfs-ganesha/daemon', Scope.NFS_GANESHA)
-@APIDoc(group="NFS-Ganesha")
-class NFSGaneshaService(RESTController):
-
- @EndpointDoc("List NFS-Ganesha daemons information",
- responses={200: [{
- 'daemon_id': (str, 'Daemon identifier'),
- 'cluster_id': (str, 'Cluster identifier'),
- 'cluster_type': (str, 'Cluster type'), # FIXME: remove this property
- 'status': (int, 'Status of daemon', True),
- 'desc': (str, 'Status description', True)
- }]})
- def list(self):
- return mgr.remote('nfs', 'daemon_ls')
-
-
@UIRouter('/nfs-ganesha', Scope.NFS_GANESHA)
class NFSGaneshaUi(BaseController):
- @Endpoint('GET', '/cephx/clients')
- @ReadPermission
- def cephx_clients(self):
- # FIXME: remove this; cephx users/creds are managed by mgr/nfs
- return ['admin']
-
@Endpoint('GET', '/fsals')
@ReadPermission
def fsals(self):
@ReadPermission
def filesystems(self):
return CephFS.list_filesystems()
-
- @Endpoint('GET', '/rgw/buckets')
- @ReadPermission
- def buckets(self, user_id=None):
- try:
- return RgwClient.instance(user_id).get_buckets()
- except (DashboardException, NoCredentialsException, RequestException,
- NoRgwDaemonsException):
- return []
-
- @Endpoint('GET', '/clusters')
- @ReadPermission
- def clusters(self):
- '''
- Remove this remote call instead directly use available_cluster() method. It returns list of cluster names: ['vstart']
- The current dashboard api needs to changed from following to simply list of strings
- [
- {
- 'pool': 'nfs-ganesha',
- 'namespace': cluster_id,
- 'type': 'orchestrator',
- 'daemon_conf': None
- } for cluster_id in available_clusters()
- ]
- As pool, namespace, cluster type and daemon_conf are not required for listing cluster by mgr/nfs module
- return available_cluster(mgr)
- '''
- return mgr.remote('nfs', 'cluster_ls')
from ..tools import json_str_to_object, str_to_bool
from . import APIDoc, APIRouter, BaseController, Endpoint, EndpointDoc, \
ReadPermission, RESTController, allow_empty_body
+from ._version import APIVersion
try:
- from typing import Any, List, Optional
+ from typing import Any, Dict, List, Optional, Union
except ImportError: # pragma: no cover
pass # Just for type checking
'service_map_id': service['id'],
'version': metadata['ceph_version'],
'server_hostname': hostname,
+ 'realm_name': metadata['realm_name'],
'zonegroup_name': metadata['zonegroup_name'],
'zone_name': metadata['zone_name'],
'default': instance.daemon.name == metadata['id']
return RgwClient.admin_instance(daemon_name=daemon_name).get_placement_targets()
if query == 'realms':
return RgwClient.admin_instance(daemon_name=daemon_name).get_realms()
+ if query == 'default-realm':
+ return RgwClient.admin_instance(daemon_name=daemon_name).get_default_realm()
# @TODO: for multisite: by default, retrieve cluster topology/map.
raise DashboardException(http_status_code=501, component='rgw', msg='Not Implemented')
bucket_name = '{}:{}'.format(tenant, bucket_name)
return bucket_name
- def list(self, stats=False, daemon_name=None):
- # type: (bool, Optional[str]) -> List[Any]
- query_params = '?stats' if str_to_bool(stats) else ''
+ @RESTController.MethodMap(version=APIVersion(1, 1)) # type: ignore
+ def list(self, stats: bool = False, daemon_name: Optional[str] = None,
+ uid: Optional[str] = None) -> List[Union[str, Dict[str, Any]]]:
+ query_params = f'?stats={str_to_bool(stats)}'
+ if uid and uid.strip():
+ query_params = f'{query_params}&uid={uid.strip()}'
result = self.proxy(daemon_name, 'GET', 'bucket{}'.format(query_params))
if stats:
// Need pool for image testing
pools.navigateTo('create');
pools.create(poolName, 8, 'rbd');
- pools.exist(poolName, true);
+ pools.existTableCell(poolName);
});
after(() => {
pools.navigateTo();
pools.delete(poolName);
pools.navigateTo();
- pools.exist(poolName, false);
+ pools.existTableCell(poolName, false);
});
beforeEach(() => {
pools.navigateTo('create'); // Need pool for mirroring testing
pools.create(poolName, 8, 'rbd');
pools.navigateTo();
- pools.exist(poolName, true);
+ pools.existTableCell(poolName, true);
});
it('tests editing mode for pools', () => {
pools.navigateTo('create');
pools.create(poolname, 8);
pools.navigateTo();
- pools.exist(poolname, true);
+ pools.existTableCell(poolname, true);
logs.checkAuditForPoolFunction(poolname, 'create', hour, minute);
});
addService(serviceType: string, exist?: boolean, count = '1') {
cy.get(`${this.pages.create.id}`).within(() => {
this.selectServiceType(serviceType);
- if (serviceType === 'rgw') {
- cy.get('#service_id').type('foo');
- cy.get('#count').type(count);
- } else if (serviceType === 'ingress') {
- this.selectOption('backend_service', 'rgw.foo');
- cy.get('#service_id').should('have.value', 'rgw.foo');
- cy.get('#virtual_ip').type('192.168.20.1/24');
- cy.get('#frontend_port').type('8081');
- cy.get('#monitor_port').type('8082');
+ switch (serviceType) {
+ case 'rgw':
+ cy.get('#service_id').type('foo');
+ cy.get('#count').type(count);
+ break;
+
+ case 'ingress':
+ this.selectOption('backend_service', 'rgw.foo');
+ cy.get('#service_id').should('have.value', 'rgw.foo');
+ cy.get('#virtual_ip').type('192.168.20.1/24');
+ cy.get('#frontend_port').type('8081');
+ cy.get('#monitor_port').type('8082');
+ break;
+
+ case 'nfs':
+ cy.get('#service_id').type('testnfs');
+ cy.get('#count').type(count);
+ break;
}
cy.get('cd-submit-button').click();
--- /dev/null
+import { ServicesPageHelper } from 'cypress/integration/cluster/services.po';
+import { NFSPageHelper } from 'cypress/integration/orchestrator/workflow/nfs/nfs-export.po';
+import { BucketsPageHelper } from 'cypress/integration/rgw/buckets.po';
+
+describe('nfsExport page', () => {
+ const nfsExport = new NFSPageHelper();
+ const services = new ServicesPageHelper();
+ const buckets = new BucketsPageHelper();
+ const bucketName = 'e2e.nfs.bucket';
+ // @TODO: uncomment this when a CephFS volume can be created through Dashboard.
+ // const fsPseudo = '/fsPseudo';
+ const rgwPseudo = '/rgwPseudo';
+ const editPseudo = '/editPseudo';
+ const backends = ['CephFS', 'Object Gateway'];
+ const squash = 'no_root_squash';
+ const client: object = { addresses: '192.168.0.10' };
+
+ beforeEach(() => {
+ cy.login();
+ Cypress.Cookies.preserveOnce('token');
+ nfsExport.navigateTo();
+ });
+
+ describe('breadcrumb test', () => {
+ it('should open and show breadcrumb', () => {
+ nfsExport.expectBreadcrumbText('NFS');
+ });
+ });
+
+ describe('Create, edit and delete', () => {
+ it('should create an NFS cluster', () => {
+ services.navigateTo('create');
+
+ services.addService('nfs');
+
+ services.checkExist('nfs.testnfs', true);
+ services.getExpandCollapseElement().click();
+ services.checkServiceStatus('nfs');
+ });
+
+    it('should create an nfs-export with RGW backend', () => {
+ buckets.navigateTo('create');
+ buckets.create(bucketName, 'dashboard', 'default-placement');
+
+ nfsExport.navigateTo();
+ nfsExport.existTableCell(rgwPseudo, false);
+ nfsExport.navigateTo('create');
+ nfsExport.create(backends[1], squash, client, rgwPseudo, bucketName);
+ nfsExport.existTableCell(rgwPseudo);
+ });
+
+    // @TODO: uncomment this when a CephFS volume can be created through the Dashboard.
+    // it('should create an nfs-export with CephFS backend', () => {
+ // nfsExport.navigateTo();
+ // nfsExport.existTableCell(fsPseudo, false);
+ // nfsExport.navigateTo('create');
+ // nfsExport.create(backends[0], squash, client, fsPseudo);
+ // nfsExport.existTableCell(fsPseudo);
+ // });
+
+ it('should show Clients', () => {
+ nfsExport.clickTab('cd-nfs-details', rgwPseudo, 'Clients (1)');
+ cy.get('cd-nfs-details').within(() => {
+ nfsExport.getTableCount('total').should('be.gte', 0);
+ });
+ });
+
+ it('should edit an export', () => {
+ nfsExport.editExport(rgwPseudo, editPseudo);
+
+ nfsExport.existTableCell(editPseudo);
+ });
+
+ it('should delete exports and bucket', () => {
+ nfsExport.delete(editPseudo);
+
+ buckets.navigateTo();
+ buckets.delete(bucketName);
+ });
+ });
+});
--- /dev/null
+import { PageHelper } from 'cypress/integration/page-helper.po';
+
+const pages = {
+ index: { url: '#/nfs', id: 'cd-nfs-list' },
+ create: { url: '#/nfs/create', id: 'cd-nfs-form' }
+};
+
+export class NFSPageHelper extends PageHelper {
+ pages = pages;
+
+ @PageHelper.restrictTo(pages.create.url)
+ create(backend: string, squash: string, client: object, pseudo: string, rgwPath?: string) {
+ this.selectOption('cluster_id', 'testnfs');
+ // select a storage backend
+ this.selectOption('name', backend);
+ if (backend === 'CephFS') {
+ this.selectOption('fs_name', 'myfs');
+
+ cy.get('#security_label').click({ force: true });
+ } else {
+ cy.get('input[data-testid=rgw_path]').type(rgwPath);
+ }
+
+ cy.get('input[name=pseudo]').type(pseudo);
+ this.selectOption('squash', squash);
+
+ // Add clients
+ cy.get('button[name=add_client]').click({ force: true });
+ cy.get('input[name=addresses]').type(client['addresses']);
+
+ cy.get('cd-submit-button').click();
+ }
+
+ editExport(pseudo: string, editPseudo: string) {
+ this.navigateEdit(pseudo);
+
+ cy.get('input[name=pseudo]').clear().type(editPseudo);
+
+ cy.get('cd-submit-button').click();
+
+ // Click the export and check its details table for updated content
+ this.getExpandCollapseElement(editPseudo).click();
+ cy.get('.active.tab-pane').should('contain.text', editPseudo);
+ }
+}
}
getTab(tabName: string) {
- return cy.contains('.nav.nav-tabs li', new RegExp(`^${tabName}$`));
+ return cy.contains('.nav.nav-tabs li', tabName);
}
getTabText(index: number) {
);
}
+ existTableCell(name: string, oughtToBePresent = true) {
+ const waitRule = oughtToBePresent ? 'be.visible' : 'not.exist';
+ this.getFirstTableCell(name).should(waitRule);
+ }
+
getExpandCollapseElement(content?: string) {
this.waitDataTableToLoad();
describe('Create, update and destroy', () => {
it('should create a pool', () => {
- pools.exist(poolName, false);
+ pools.existTableCell(poolName, false);
pools.navigateTo('create');
pools.create(poolName, 8, 'rbd');
- pools.exist(poolName, true);
+ pools.existTableCell(poolName);
});
it('should edit a pools placement group', () => {
- pools.exist(poolName, true);
+ pools.existTableCell(poolName);
pools.edit_pool_pg(poolName, 32);
});
it('should show updated configuration field values', () => {
- pools.exist(poolName, true);
+ pools.existTableCell(poolName);
const bpsLimit = '4 B/s';
pools.edit_pool_configuration(poolName, bpsLimit);
});
return expect((n & (n - 1)) === 0, `Placement groups ${n} are not a power of 2`).to.be.true;
}
- @PageHelper.restrictTo(pages.index.url)
- exist(name: string, oughtToBePresent = true) {
- const waitRule = oughtToBePresent ? 'be.visible' : 'not.exist';
- this.getFirstTableCell(name).should(waitRule);
- }
-
@PageHelper.restrictTo(pages.create.url)
create(name: string, placement_groups: number, ...apps: string[]) {
cy.get('input[name=name]').clear().type(name);
</div>
</div>
- <!-- NFS -->
- <ng-container *ngIf="!serviceForm.controls.unmanaged.value && serviceForm.controls.service_type.value === 'nfs'">
- <!-- pool -->
- <div class="form-group row">
- <label i18n
- class="cd-col-form-label required"
- for="pool">Pool</label>
- <div class="cd-col-form-input">
- <select id="pool"
- name="pool"
- class="form-control custom-select"
- formControlName="pool">
- <option *ngIf="pools === null"
- [ngValue]="null"
- i18n>Loading...</option>
- <option *ngIf="pools !== null && pools.length === 0"
- [ngValue]="null"
- i18n>-- No pools available --</option>
- <option *ngIf="pools !== null && pools.length > 0"
- [ngValue]="null"
- i18n>-- Select a pool --</option>
- <option *ngFor="let pool of pools"
- [value]="pool.pool_name">{{ pool.pool_name }}</option>
- </select>
- <span class="invalid-feedback"
- *ngIf="serviceForm.showError('pool', frm, 'required')"
- i18n>This field is required.</span>
- </div>
- </div>
-
- <!-- namespace -->
- <div class="form-group row">
- <label i18n
- class="cd-col-form-label"
- for="namespace">Namespace</label>
- <div class="cd-col-form-input">
- <input id="namespace"
- class="form-control"
- type="text"
- formControlName="namespace">
- </div>
- </div>
- </ng-container>
-
<!-- RGW -->
<ng-container *ngIf="!serviceForm.controls.unmanaged.value && serviceForm.controls.service_type.value === 'rgw'">
<!-- rgw_frontend_port -->
describe('should test service nfs', () => {
beforeEach(() => {
formHelper.setValue('service_type', 'nfs');
- formHelper.setValue('pool', 'foo');
});
- it('should submit nfs with namespace', () => {
- formHelper.setValue('namespace', 'bar');
+ it('should submit nfs', () => {
component.onSubmit();
expect(cephServiceService.create).toHaveBeenCalledWith({
service_type: 'nfs',
placement: {},
- unmanaged: false,
- pool: 'foo',
- namespace: 'bar'
- });
- });
-
- it('should submit nfs w/o namespace', () => {
- component.onSubmit();
- expect(cephServiceService.create).toHaveBeenCalledWith({
- service_type: 'nfs',
- placement: {},
- unmanaged: false,
- pool: 'foo'
+ unmanaged: false
});
});
});
hosts: [[]],
count: [null, [CdValidators.number(false), Validators.min(1)]],
unmanaged: [false],
- // NFS & iSCSI
+ // iSCSI
pool: [
null,
[
- CdValidators.requiredIf({
- service_type: 'nfs',
- unmanaged: false
- }),
CdValidators.requiredIf({
service_type: 'iscsi',
unmanaged: false
})
]
],
- // NFS
- namespace: [null],
// RGW
rgw_frontend_port: [
null,
serviceSpec['placement']['count'] = values['count'];
}
switch (serviceType) {
- case 'nfs':
- serviceSpec['pool'] = values['pool'];
- if (_.isString(values['namespace']) && !_.isEmpty(values['namespace'])) {
- serviceSpec['namespace'] = values['namespace'];
- }
- break;
case 'rgw':
if (_.isNumber(values['rgw_frontend_port']) && values['rgw_frontend_port'] > 0) {
serviceSpec['rgw_frontend_port'] = values['rgw_frontend_port'];
--- /dev/null
+export interface NfsFSAbstractionLayer {
+ value: string;
+ descr: string;
+ disabled: boolean;
+}
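The new `NfsFSAbstractionLayer` model above drives the storage-backend dropdown in the export form. A minimal sketch of how entries are shaped and consumed, with values mirroring the updated unit-test expectations elsewhere in this change (the `selectable` helper is illustrative only, not part of the patch):

```typescript
// Illustrative sketch; mirrors the NfsFSAbstractionLayer interface added above.
interface NfsFSAbstractionLayer {
  value: string; // FSAL identifier sent to the backend, e.g. 'CEPH' or 'RGW'
  descr: string; // human-readable label shown in the storage-backend select
  disabled: boolean; // true when this backend is unavailable on the cluster
}

// Values as expected by the updated NfsFormComponent unit tests.
const allFsals: NfsFSAbstractionLayer[] = [
  { value: 'CEPH', descr: 'CephFS', disabled: false },
  { value: 'RGW', descr: 'Object Gateway', disabled: false }
];

// A disabled entry is rendered but not selectable; the template binds it
// via the new [disabled]="fsal.disabled" attribute on each option.
const selectable = allFsals.filter((fsal) => !fsal.disabled);
```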
+++ /dev/null
-export enum NFSClusterType {
- user = 'user',
- orchestrator = 'orchestrator'
-}
fixture = TestBed.createComponent(NfsDetailsComponent);
component = fixture.componentInstance;
- component.selection = undefined;
component.selection = {
export_id: 1,
path: '/qwe',
fsal: { name: 'CEPH', user_id: 'fs', fs_name: 1 },
cluster_id: 'cluster1',
- daemons: ['node1', 'node2'],
pseudo: '/qwe',
- tag: 'asd',
access_type: 'RW',
squash: 'no_root_squash',
- protocols: [3, 4],
+ protocols: [4],
transports: ['TCP', 'UDP'],
clients: [
{
access_type: 'RW',
squash: 'root_id_squash'
}
- ],
- id: 'cluster1:1',
- state: 'LOADING'
+ ]
};
component.ngOnChanges();
fixture.detectChanges();
'CephFS Filesystem': 1,
'CephFS User': 'fs',
Cluster: 'cluster1',
- Daemons: ['node1', 'node2'],
- 'NFS Protocol': ['NFSv3', 'NFSv4'],
+ 'NFS Protocol': ['NFSv4'],
Path: '/qwe',
Pseudo: '/qwe',
'Security Label': undefined,
const newData = _.assignIn(component.selection, {
fsal: {
name: 'RGW',
- rgw_user_id: 'rgw_user_id'
+ user_id: 'user-id'
}
});
component.selection = newData;
expect(component.data).toEqual({
'Access Type': 'RW',
Cluster: 'cluster1',
- Daemons: ['node1', 'node2'],
- 'NFS Protocol': ['NFSv3', 'NFSv4'],
- 'Object Gateway User': 'rgw_user_id',
+ 'NFS Protocol': ['NFSv4'],
+ 'Object Gateway User': 'user-id',
Path: '/qwe',
Pseudo: '/qwe',
Squash: 'no_root_squash',
this.data = {};
this.data[$localize`Cluster`] = this.selectedItem.cluster_id;
- this.data[$localize`Daemons`] = this.selectedItem.daemons;
this.data[$localize`NFS Protocol`] = this.selectedItem.protocols.map(
(protocol: string) => 'NFSv' + protocol
);
this.data[$localize`Security Label`] = this.selectedItem.fsal.sec_label_xattr;
} else {
this.data[$localize`Storage Backend`] = $localize`Object Gateway`;
- this.data[$localize`Object Gateway User`] = this.selectedItem.fsal.rgw_user_id;
+ this.data[$localize`Object Gateway User`] = this.selectedItem.fsal.user_id;
}
}
}
<!-- Addresses -->
<div class="form-group row">
<label i18n
- class="cd-col-form-label"
+ class="cd-col-form-label required"
for="addresses">Addresses</label>
<div class="cd-col-form-input">
<input type="text"
<!-- Squash -->
<div class="form-group row">
- <label i18n
- class="cd-col-form-label"
- for="squash">Squash</label>
+ <label class="cd-col-form-label"
+ for="squash">
+ <span i18n>Squash</span>
+ <ng-container *ngTemplateOutlet="squashHelperTpl"></ng-container>
+ </label>
<div class="cd-col-form-input">
<select class="form-control custom-select"
name="squash"
<div class="col-12">
<div class="float-right">
<button class="btn btn-light "
- (click)="addClient()">
+ (click)="addClient()"
+ name="add_client">
<i [ngClass]="[icons.add]"></i>
<ng-container i18n>Add clients</ng-container>
</button>
-import { Component, Input, OnInit } from '@angular/core';
+import { Component, ContentChild, Input, OnInit, TemplateRef } from '@angular/core';
import { FormArray, FormControl, NgForm, Validators } from '@angular/forms';
import _ from 'lodash';
@Input()
clients: any[];
+ @ContentChild('squashHelper', { static: true }) squashHelperTpl: TemplateRef<any>;
+
nfsSquash: any[] = this.nfsService.nfsSquash;
nfsAccessType: any[] = this.nfsService.nfsAccessType;
icons = Icons;
<div class="card-body">
<!-- cluster_id -->
- <div class="form-group row"
- *ngIf="!isDefaultCluster">
- <label class="cd-col-form-label required"
- for="cluster_id"
- i18n>Cluster</label>
+ <div class="form-group row">
+ <label class="cd-col-form-label"
+ for="cluster_id">
+ <span class="required"
+ i18n>Cluster</span>
+ <cd-helper>
+ <p i18n>This is the ID of an NFS Service.</p>
+ </cd-helper>
+ </label>
<div class="cd-col-form-input">
<select class="form-control custom-select"
formControlName="cluster_id"
name="cluster_id"
- id="cluster_id"
- (change)="onClusterChange()">
+ id="cluster_id">
<option *ngIf="allClusters === null"
value=""
i18n>Loading...</option>
[value]="cluster.cluster_id">{{ cluster.cluster_id }}</option>
</select>
<span class="invalid-feedback"
- *ngIf="nfsForm.showError('cluster_id', formDir, 'required')"
- i18n>This field is required.</span>
- </div>
- </div>
-
- <!-- daemons -->
- <div class="form-group row"
- *ngIf="clusterType">
- <label class="cd-col-form-label"
- for="daemons">
- <ng-container i18n>Daemons</ng-container>
- </label>
- <div class="cd-col-form-input">
- <ng-container *ngFor="let daemon of nfsForm.getValue('daemons'); let i = index">
- <div class="input-group cd-mb">
- <input class="cd-form-control"
- type="text"
- [value]="daemon"
- disabled />
- <span *ngIf="clusterType === 'user'"
- class="input-group-append">
- <button class="btn btn-light"
- type="button"
- (click)="removeDaemon(i, daemon)">
- <i [ngClass]="[icons.destroy]"
- aria-hidden="true"></i>
- </button>
- </span>
- </div>
- </ng-container>
-
- <div *ngIf="clusterType === 'user'"
- class="row">
- <div class="col-md-12">
- <cd-select [data]="nfsForm.get('daemons').value"
- [options]="daemonsSelections"
- [messages]="daemonsMessages"
- (selection)="onDaemonSelection()"
- elemClass="btn btn-light float-right">
- <i [ngClass]="[icons.add]"></i>
- <ng-container i18n>Add daemon</ng-container>
- </cd-select>
- </div>
- </div>
-
- <div *ngIf="clusterType === 'orchestrator'"
- class="row">
- <div class="col-md-12">
- <button type="button"
- class="btn btn-light float-right"
- (click)="onToggleAllDaemonsSelection()">
- <i [ngClass]="[icons.add]"></i>
- <ng-container *ngIf="nfsForm.getValue('daemons').length === 0; else hasDaemons"
- i18n>Add all daemons</ng-container>
- <ng-template #hasDaemons>
- <ng-container i18n>Remove all daemons</ng-container>
- </ng-template>
- </button>
- </div>
- </div>
+ *ngIf="nfsForm.showError('cluster_id', formDir, 'required') || allClusters?.length === 0"
+ i18n>This field is required.
+ To create a new NFS cluster, <a routerLink="/services/create"
+ class="btn-link">add a new NFS Service</a>.</span>
</div>
</div>
value=""
i18n>-- Select the storage backend --</option>
<option *ngFor="let fsal of allFsals"
- [value]="fsal.value">{{ fsal.descr }}</option>
+ [value]="fsal.value"
+ [disabled]="fsal.disabled">{{ fsal.descr }}</option>
</select>
<span class="invalid-feedback"
*ngIf="nfsForm.showError('name', formDir, 'required')"
i18n>This field is required.</span>
- </div>
- </div>
-
- <!-- RGW user -->
- <div class="form-group row"
- *ngIf="nfsForm.getValue('name') === 'RGW'">
- <label class="cd-col-form-label required"
- for="rgw_user_id"
- i18n>Object Gateway User</label>
- <div class="cd-col-form-input">
- <select class="form-control custom-select"
- formControlName="rgw_user_id"
- name="rgw_user_id"
- id="rgw_user_id"
- (change)="rgwUserIdChangeHandler()">
- <option *ngIf="allRgwUsers === null"
- value=""
- i18n>Loading...</option>
- <option *ngIf="allRgwUsers !== null && allRgwUsers.length === 0"
- value=""
- i18n>-- No users available --</option>
- <option *ngIf="allRgwUsers !== null && allRgwUsers.length > 0"
- value=""
- i18n>-- Select the object gateway user --</option>
- <option *ngFor="let rgwUserId of allRgwUsers"
- [value]="rgwUserId">{{ rgwUserId }}</option>
- </select>
<span class="invalid-feedback"
- *ngIf="nfsForm.showError('rgw_user_id', formDir, 'required')"
- i18n>This field is required.</span>
- </div>
- </div>
-
- <!-- CephFS user_id -->
- <div class="form-group row"
- *ngIf="nfsForm.getValue('name') === 'CEPH'">
- <label class="cd-col-form-label required"
- for="user_id"
- i18n>CephFS User ID</label>
- <div class="cd-col-form-input">
- <select class="form-control custom-select"
- formControlName="user_id"
- name="user_id"
- id="user_id">
- <option *ngIf="allCephxClients === null"
- value=""
- i18n>Loading...</option>
- <option *ngIf="allCephxClients !== null && allCephxClients.length === 0"
- value=""
- i18n>-- No clients available --</option>
- <option *ngIf="allCephxClients !== null && allCephxClients.length > 0"
- value=""
- i18n>-- Select the cephx client --</option>
- <option *ngFor="let client of allCephxClients"
- [value]="client">{{ client }}</option>
- </select>
- <span class="invalid-feedback"
- *ngIf="nfsForm.showError('user_id', formDir, 'required')"
- i18n>This field is required.</span>
+ *ngIf="fsalAvailabilityError"
+ i18n>{{ fsalAvailabilityError }}</span>
</div>
</div>
*ngIf="nfsForm.getValue('name') === 'CEPH'">
<label class="cd-col-form-label required"
for="fs_name"
- i18n>CephFS Name</label>
+ i18n>Volume</label>
<div class="cd-col-form-input">
<select class="form-control custom-select"
formControlName="fs_name"
name="fs_name"
id="fs_name"
- (change)="rgwUserIdChangeHandler()">
+ (change)="pathChangeHandler()">
<option *ngIf="allFsNames === null"
value=""
i18n>Loading...</option>
<!-- Path -->
<div class="form-group row"
*ngIf="nfsForm.getValue('name') === 'CEPH'">
- <label class="cd-col-form-label required"
- for="path"
- i18n>CephFS Path</label>
+ <label class="cd-col-form-label"
+ for="path">
+ <span class="required"
+ i18n>CephFS Path</span>
+ <cd-helper>
+ <p i18n>A path in a CephFS file system.</p>
+ </cd-helper>
+ </label>
<div class="cd-col-form-input">
<input type="text"
class="form-control"
name="path"
id="path"
+ data-testid="fs_path"
formControlName="path"
[ngbTypeahead]="pathDataSource"
(selectItem)="pathChangeHandler()"
*ngIf="nfsForm.showError('path', formDir, 'pattern')"
            i18n>Path needs to start with a '/' and can be followed by a word</span>
<span class="form-text text-muted"
- *ngIf="isNewDirectory && !nfsForm.showError('path', formDir)"
- i18n>New directory will be created</span>
+ *ngIf="nfsForm.showError('path', formDir, 'pathNameNotAllowed')"
+ i18n>The path does not exist.</span>
</div>
</div>
<!-- Bucket -->
<div class="form-group row"
*ngIf="nfsForm.getValue('name') === 'RGW'">
- <label class="cd-col-form-label required"
- for="path"
- i18n>Path</label>
+ <label class="cd-col-form-label"
+ for="path">
+ <span class="required"
+ i18n>Bucket</span>
+ </label>
<div class="cd-col-form-input">
<input type="text"
class="form-control"
name="path"
id="path"
+ data-testid="rgw_path"
formControlName="path"
- [ngbTypeahead]="bucketDataSource"
- (selectItem)="bucketChangeHandler()"
- (blur)="bucketChangeHandler()">
+ [ngbTypeahead]="bucketDataSource">
<span class="invalid-feedback"
*ngIf="nfsForm.showError('path', formDir, 'required')"
i18n>This field is required.</span>
-
<span class="invalid-feedback"
- *ngIf="nfsForm.showError('path', formDir, 'pattern')"
- i18n>Path can only be a single '/' or a word</span>
-
- <span class="form-text text-muted"
- *ngIf="isNewBucket && !nfsForm.showError('path', formDir)"
- i18n>New bucket will be created</span>
+ *ngIf="nfsForm.showError('path', formDir, 'bucketNameNotAllowed')"
+ i18n>The bucket does not exist or is not in the default realm (if multiple realms are configured).
+ To continue, <a routerLink="/rgw/bucket/create"
+ class="btn-link">create a new bucket</a>.</span>
</div>
</div>
for="protocols"
i18n>NFS Protocol</label>
<div class="cd-col-form-input">
- <div class="custom-control custom-checkbox">
- <input type="checkbox"
- class="custom-control-input"
- id="protocolNfsv3"
- name="protocolNfsv3"
- formControlName="protocolNfsv3"
- disabled>
- <label i18n
- class="custom-control-label"
- for="protocolNfsv3">NFSv3</label>
- </div>
<div class="custom-control custom-checkbox">
<input type="checkbox"
class="custom-control-input"
formControlName="protocolNfsv4"
name="protocolNfsv4"
- id="protocolNfsv4">
+ id="protocolNfsv4"
+ disabled>
<label i18n
class="custom-control-label"
for="protocolNfsv4">NFSv4</label>
</div>
<span class="invalid-feedback"
- *ngIf="nfsForm.showError('protocolNfsv3', formDir, 'required') ||
- nfsForm.showError('protocolNfsv4', formDir, 'required')"
+ *ngIf="nfsForm.showError('protocolNfsv4', formDir, 'required')"
i18n>This field is required.</span>
</div>
</div>
- <!-- Tag -->
- <div class="form-group row"
- *ngIf="nfsForm.getValue('protocolNfsv3')">
- <label class="cd-col-form-label"
- for="tag">
- <ng-container i18n>NFS Tag</ng-container>
- <cd-helper>
- <p i18n>Alternative access for <strong>NFS v3</strong> mounts (it must not have a leading /).</p>
- <p i18n>Clients may not mount subdirectories (i.e. if Tag = foo, the client may not mount foo/baz).</p>
- <p i18n>By using different Tag options, the same Path may be exported multiple times.</p>
- </cd-helper>
- </label>
- <div class="cd-col-form-input">
- <input type="text"
- class="form-control"
- name="tag"
- id="tag"
- formControlName="tag">
- </div>
- </div>
-
<!-- Pseudo -->
<div class="form-group row"
*ngIf="nfsForm.getValue('protocolNfsv4')">
<!-- Squash -->
<div class="form-group row">
- <label class="cd-col-form-label required"
- for="squash"
- i18n>Squash</label>
+ <label class="cd-col-form-label"
+ for="squash">
+ <span class="required"
+ i18n>Squash</span>
+ <ng-container *ngTemplateOutlet="squashHelper"></ng-container>
+ </label>
<div class="cd-col-form-input">
<select class="form-control custom-select"
name="squash"
<cd-nfs-form-client [form]="nfsForm"
[clients]="clients"
#nfsClients>
+ <ng-template #squashHelper>
+ <cd-helper>
+ <ul class="squash-helper">
+ <li>
+ <span class="squash-helper-item-value">no_root_squash: </span>
+ <span i18n>No user id squashing is performed.</span>
+ </li>
+ <li>
+ <span class="squash-helper-item-value">root_id_squash: </span>
+            <span i18n>uid 0 and gid 0 are squashed to the Anonymous_Uid and Anonymous_Gid; gid 0 in alt_groups lists is also squashed.</span>
+ </li>
+ <li>
+ <span class="squash-helper-item-value">root_squash: </span>
+            <span i18n>uid 0 and gid of any value are squashed to the Anonymous_Uid and Anonymous_Gid; alt_groups lists are discarded.</span>
+ </li>
+ <li>
+ <span class="squash-helper-item-value">all_squash: </span>
+ <span i18n>All users are squashed.</span>
+ </li>
+ </ul>
+ </cd-helper>
+ </ng-template>
</cd-nfs-form-client>
</div>
.cd-mb {
margin-bottom: 10px;
}
+
+.squash-helper {
+ padding-left: 1rem;
+}
+
+.squash-helper-item-value {
+ font-weight: bold;
+}
import { NgbTypeaheadModule } from '@ng-bootstrap/ng-bootstrap';
import { ToastrModule } from 'ngx-toastr';
+import { Observable, of } from 'rxjs';
+import { NfsFormClientComponent } from '~/app/ceph/nfs/nfs-form-client/nfs-form-client.component';
+import { NfsFormComponent } from '~/app/ceph/nfs/nfs-form/nfs-form.component';
+import { Directory } from '~/app/shared/api/nfs.service';
import { LoadingPanelComponent } from '~/app/shared/components/loading-panel/loading-panel.component';
import { SharedModule } from '~/app/shared/shared.module';
import { ActivatedRouteStub } from '~/testing/activated-route-stub';
import { configureTestBed, RgwHelper } from '~/testing/unit-test-helper';
-import { NFSClusterType } from '../nfs-cluster-type.enum';
-import { NfsFormClientComponent } from '../nfs-form-client/nfs-form-client.component';
-import { NfsFormComponent } from './nfs-form.component';
describe('NfsFormComponent', () => {
let component: NfsFormComponent;
RgwHelper.selectDaemon();
fixture.detectChanges();
- httpTesting.expectOne('api/nfs-ganesha/daemon').flush([
- { daemon_id: 'node1', cluster_id: 'cluster1', cluster_type: NFSClusterType.user },
- { daemon_id: 'node2', cluster_id: 'cluster1', cluster_type: NFSClusterType.user },
- { daemon_id: 'node5', cluster_id: 'cluster2', cluster_type: NFSClusterType.orchestrator }
- ]);
httpTesting.expectOne('ui-api/nfs-ganesha/fsals').flush(['CEPH', 'RGW']);
- httpTesting.expectOne('ui-api/nfs-ganesha/cephx/clients').flush(['admin', 'fs', 'rgw']);
httpTesting.expectOne('ui-api/nfs-ganesha/cephfs/filesystems').flush([{ id: 1, name: 'a' }]);
- httpTesting
- .expectOne(`api/rgw/user?${RgwHelper.DAEMON_QUERY_PARAM}`)
- .flush(['test', 'dev', 'tenant$user']);
- const user_dev = {
- suspended: 0,
- user_id: 'dev',
- keys: ['a']
- };
- httpTesting.expectOne(`api/rgw/user/dev?${RgwHelper.DAEMON_QUERY_PARAM}`).flush(user_dev);
- const user_test = {
- suspended: 1,
- user_id: 'test',
- keys: ['a']
- };
- httpTesting.expectOne(`api/rgw/user/test?${RgwHelper.DAEMON_QUERY_PARAM}`).flush(user_test);
- const tenantUser = {
- suspended: 0,
- tenant: 'tenant',
- user_id: 'user',
- keys: ['a']
- };
- httpTesting
- .expectOne(`api/rgw/user/tenant%24user?${RgwHelper.DAEMON_QUERY_PARAM}`)
- .flush(tenantUser);
+ httpTesting.expectOne('api/nfs-ganesha/cluster').flush(['mynfs']);
httpTesting.verify();
});
});
it('should process all data', () => {
- expect(component.allDaemons).toEqual({ cluster1: ['node1', 'node2'], cluster2: ['node5'] });
- expect(component.isDefaultCluster).toEqual(false);
expect(component.allFsals).toEqual([
- { descr: 'CephFS', value: 'CEPH' },
- { descr: 'Object Gateway', value: 'RGW' }
+ { descr: 'CephFS', value: 'CEPH', disabled: false },
+ { descr: 'Object Gateway', value: 'RGW', disabled: false }
]);
- expect(component.allCephxClients).toEqual(['admin', 'fs', 'rgw']);
expect(component.allFsNames).toEqual([{ id: 1, name: 'a' }]);
- expect(component.allRgwUsers).toEqual(['dev', 'tenant$user']);
+ expect(component.allClusters).toEqual([{ cluster_id: 'mynfs' }]);
});
it('should create the form', () => {
access_type: 'RW',
clients: [],
cluster_id: '',
- daemons: [],
- fsal: { fs_name: 'a', name: '', rgw_user_id: '', user_id: '' },
- path: '',
- protocolNfsv3: false,
+ fsal: { fs_name: 'a', name: '' },
+ path: '/',
protocolNfsv4: true,
pseudo: '',
sec_label_xattr: 'security.selinux',
security_label: false,
squash: '',
- tag: '',
transportTCP: true,
transportUDP: true
});
});
  it('should prepare data when selecting a cluster', () => {
- expect(component.allDaemons).toEqual({ cluster1: ['node1', 'node2'], cluster2: ['node5'] });
- expect(component.daemonsSelections).toEqual([]);
- expect(component.clusterType).toBeNull();
-
component.nfsForm.patchValue({ cluster_id: 'cluster1' });
- component.onClusterChange();
-
- expect(component.daemonsSelections).toEqual([
- { description: '', name: 'node1', selected: false, enabled: true },
- { description: '', name: 'node2', selected: false, enabled: true }
- ]);
- expect(component.clusterType).toBe(NFSClusterType.user);
component.nfsForm.patchValue({ cluster_id: 'cluster2' });
- component.onClusterChange();
- expect(component.clusterType).toBe(NFSClusterType.orchestrator);
- expect(component.daemonsSelections).toEqual([]);
- });
-
- it('should clean data when changing cluster', () => {
- component.nfsForm.patchValue({ cluster_id: 'cluster1', daemons: ['node1'] });
- component.nfsForm.patchValue({ cluster_id: 'node2' });
- component.onClusterChange();
-
- expect(component.nfsForm.getValue('daemons')).toEqual([]);
});
it('should not allow changing cluster in edit mode', () => {
expect(component.nfsForm.get('cluster_id').disabled).toBeTruthy();
});
- it('should mark NFSv4 protocol as required', () => {
- component.nfsForm.patchValue({
- protocolNfsv4: false
- });
- component.nfsForm.updateValueAndValidity({ emitEvent: false });
- expect(component.nfsForm.valid).toBeFalsy();
- expect(component.nfsForm.get('protocolNfsv4').hasError('required')).toBeTruthy();
+  it('should mark NFSv4 protocol as always enabled', () => {
+ expect(component.nfsForm.get('protocolNfsv4')).toBeTruthy();
});
describe('should submit request', () => {
access_type: 'RW',
clients: [],
cluster_id: 'cluster1',
- daemons: ['node2'],
- fsal: { name: 'CEPH', user_id: 'fs', fs_name: 1, rgw_user_id: '' },
+ fsal: { name: 'CEPH', fs_name: 1 },
path: '/foo',
- protocolNfsv3: true,
protocolNfsv4: true,
pseudo: '/baz',
squash: 'no_root_squash',
- tag: 'bar',
transportTCP: true,
transportUDP: true
});
});
- it('should remove "pseudo" requirement when NFS v4 disabled', () => {
- component.nfsForm.patchValue({
- protocolNfsv4: false,
- pseudo: ''
- });
-
- component.nfsForm.updateValueAndValidity({ emitEvent: false });
- expect(component.nfsForm.valid).toBeTruthy();
- });
-
it('should call update', () => {
activatedRoute.setParams({ cluster_id: 'cluster1', export_id: '1' });
component.isEdit = true;
access_type: 'RW',
clients: [],
cluster_id: 'cluster1',
- daemons: ['node2'],
- export_id: '1',
- fsal: { fs_name: 1, name: 'CEPH', sec_label_xattr: null, user_id: 'fs' },
+ export_id: 1,
+ fsal: { fs_name: 1, name: 'CEPH', sec_label_xattr: null },
path: '/foo',
- protocols: [3, 4],
+ protocols: [4],
pseudo: '/baz',
security_label: false,
squash: 'no_root_squash',
- tag: 'bar',
transports: ['TCP', 'UDP']
});
});
access_type: 'RW',
clients: [],
cluster_id: 'cluster1',
- daemons: ['node2'],
fsal: {
fs_name: 1,
name: 'CEPH',
- sec_label_xattr: null,
- user_id: 'fs'
+ sec_label_xattr: null
},
path: '/foo',
- protocols: [3, 4],
+ protocols: [4],
pseudo: '/baz',
security_label: false,
squash: 'no_root_squash',
- tag: 'bar',
transports: ['TCP', 'UDP']
});
});
});
+
+ describe('pathExistence', () => {
+ beforeEach(() => {
+ component['nfsService']['lsDir'] = jest.fn(
+ (): Observable<Directory> => of({ paths: ['/path1'] })
+ );
+ component.nfsForm.get('name').setValue('CEPH');
+ component.setPathValidation();
+ });
+
+ const testValidator = (pathName: string, valid: boolean, expectedError?: string) => {
+ const path = component.nfsForm.get('path');
+ path.setValue(pathName);
+ path.markAsDirty();
+ path.updateValueAndValidity();
+
+ if (valid) {
+ expect(path.errors).toBe(null);
+ } else {
+ expect(path.hasError(expectedError)).toBeTruthy();
+ }
+ };
+
+ it('path cannot be empty', () => {
+ testValidator('', false, 'required');
+ });
+
+ it('path that does not exist should be invalid', () => {
+ testValidator('/path2', false, 'pathNameNotAllowed');
+ expect(component['nfsService']['lsDir']).toHaveBeenCalledTimes(1);
+ });
+
+ it('path that exists should be valid', () => {
+ testValidator('/path1', true);
+ expect(component['nfsService']['lsDir']).toHaveBeenCalledTimes(1);
+ });
+ });
});
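The `pathExistence` tests above exercise an asynchronous check against the backend's directory listing: the CephFS path field is valid only when the typed path appears in the paths returned by `lsDir`. A framework-free sketch of that rule, assuming only the error keys the tests assert on (`required`, `pathNameNotAllowed`); the real implementation is an Angular async validator installed by `setPathValidation()`:

```typescript
// Framework-free sketch of the path-existence rule exercised by the
// pathExistence tests; names and error keys mirror the test expectations,
// the function itself is illustrative and not part of this change.
type ValidationErrors = { [key: string]: boolean } | null;

function validatePath(path: string, knownPaths: string[]): ValidationErrors {
  if (!path) {
    // handled by the synchronous 'required' validator in the real form
    return { required: true };
  }
  // a path not present in the backend listing is rejected
  return knownPaths.includes(path) ? null : { pathNameNotAllowed: true };
}
```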
import { ChangeDetectorRef, Component, OnInit, ViewChild } from '@angular/core';
-import { FormControl, Validators } from '@angular/forms';
+import {
+ AbstractControl,
+ AsyncValidatorFn,
+ FormControl,
+ ValidationErrors,
+ Validators
+} from '@angular/forms';
import { ActivatedRoute, Router } from '@angular/router';
import _ from 'lodash';
import { forkJoin, Observable, of } from 'rxjs';
-import { debounceTime, distinctUntilChanged, map, mergeMap } from 'rxjs/operators';
+import { catchError, debounceTime, distinctUntilChanged, map, mergeMap } from 'rxjs/operators';
-import { NfsService } from '~/app/shared/api/nfs.service';
-import { RgwUserService } from '~/app/shared/api/rgw-user.service';
-import { SelectMessages } from '~/app/shared/components/select/select-messages.model';
-import { SelectOption } from '~/app/shared/components/select/select-option.model';
+import { NfsFSAbstractionLayer } from '~/app/ceph/nfs/models/nfs.fsal';
+import { Directory, NfsService } from '~/app/shared/api/nfs.service';
+import { RgwBucketService } from '~/app/shared/api/rgw-bucket.service';
+import { RgwSiteService } from '~/app/shared/api/rgw-site.service';
import { ActionLabelsI18n } from '~/app/shared/constants/app.constants';
import { Icons } from '~/app/shared/enum/icons.enum';
import { CdForm } from '~/app/shared/forms/cd-form';
import { Permission } from '~/app/shared/models/permissions';
import { AuthStorageService } from '~/app/shared/services/auth-storage.service';
import { TaskWrapperService } from '~/app/shared/services/task-wrapper.service';
-import { NFSClusterType } from '../nfs-cluster-type.enum';
import { NfsFormClientComponent } from '../nfs-form-client/nfs-form-client.component';
@Component({
isEdit = false;
cluster_id: string = null;
- clusterType: string = null;
export_id: string = null;
- isNewDirectory = false;
- isNewBucket = false;
- isDefaultCluster = false;
-
- allClusters: { cluster_id: string; cluster_type: string }[] = null;
- allDaemons = {};
+ allClusters: { cluster_id: string }[] = null;
icons = Icons;
allFsals: any[] = [];
- allRgwUsers: any[] = [];
- allCephxClients: any[] = null;
allFsNames: any[] = null;
+ fsalAvailabilityError: string = null;
defaultAccessType = { RGW: 'RO' };
nfsAccessType: any[] = this.nfsService.nfsAccessType;
action: string;
resource: string;
- daemonsSelections: SelectOption[] = [];
- daemonsMessages = new SelectMessages({ noOptions: $localize`There are no daemons available.` });
-
pathDataSource = (text$: Observable<string>) => {
return text$.pipe(
debounceTime(200),
distinctUntilChanged(),
mergeMap((token: string) => this.getPathTypeahead(token)),
- map((val: any) => val.paths)
+ map((val: string[]) => val)
);
};
private nfsService: NfsService,
private route: ActivatedRoute,
private router: Router,
- private rgwUserService: RgwUserService,
+ private rgwBucketService: RgwBucketService,
+ private rgwSiteService: RgwSiteService,
private formBuilder: CdFormBuilder,
private taskWrapper: TaskWrapperService,
private cdRef: ChangeDetectorRef,
ngOnInit() {
const promises: Observable<any>[] = [
- this.nfsService.daemon(),
+ this.nfsService.listClusters(),
this.nfsService.fsals(),
- this.nfsService.clients(),
this.nfsService.filesystems()
];
getData(promises: Observable<any>[]) {
forkJoin(promises).subscribe((data: any[]) => {
- this.resolveDaemons(data[0]);
+ this.resolveClusters(data[0]);
this.resolveFsals(data[1]);
- this.resolveClients(data[2]);
- this.resolveFilesystems(data[3]);
- if (data[4]) {
- this.resolveModel(data[4]);
+ this.resolveFilesystems(data[2]);
+ if (data[3]) {
+ this.resolveModel(data[3]);
}
this.loadingReady();
cluster_id: new FormControl('', {
validators: [Validators.required]
}),
- daemons: new FormControl([]),
fsal: new CdFormGroup({
name: new FormControl('', {
validators: [Validators.required]
}),
- user_id: new FormControl('', {
- validators: [
- CdValidators.requiredIf({
- name: 'CEPH'
- })
- ]
- }),
fs_name: new FormControl('', {
validators: [
CdValidators.requiredIf({
name: 'CEPH'
})
]
- }),
- rgw_user_id: new FormControl('', {
- validators: [
- CdValidators.requiredIf({
- name: 'RGW'
- })
- ]
})
}),
- path: new FormControl(''),
- protocolNfsv3: new FormControl(false, {
- validators: [
- CdValidators.requiredIf({ protocolNfsv4: false }, (value: boolean) => {
- return !value;
- })
- ]
- }),
- protocolNfsv4: new FormControl(true, {
- validators: [
- CdValidators.requiredIf({ protocolNfsv3: false }, (value: boolean) => {
- return !value;
- })
- ]
- }),
- tag: new FormControl(''),
+ path: new FormControl('/'),
+ protocolNfsv4: new FormControl(true),
pseudo: new FormControl('', {
validators: [
CdValidators.requiredIf({ protocolNfsv4: true }),
res.sec_label_xattr = res.fsal.sec_label_xattr;
}
- if (this.clusterType === NFSClusterType.user) {
- this.daemonsSelections = _.map(
- this.allDaemons[res.cluster_id],
- (daemon) => new SelectOption(res.daemons.indexOf(daemon) !== -1, daemon, '')
- );
- this.daemonsSelections = [...this.daemonsSelections];
- }
-
- res.protocolNfsv3 = res.protocols.indexOf(3) !== -1;
res.protocolNfsv4 = res.protocols.indexOf(4) !== -1;
delete res.protocols;
this.clients = res.clients;
}
- resolveDaemons(daemons: Record<string, any>) {
- daemons = _.sortBy(daemons, ['daemon_id']);
- const clusters = _.groupBy(daemons, 'cluster_id');
-
+ resolveClusters(clusters: string[]) {
this.allClusters = [];
- _.forIn(clusters, (cluster, cluster_id) => {
- this.allClusters.push({ cluster_id: cluster_id, cluster_type: cluster[0].cluster_type });
- this.allDaemons[cluster_id] = [];
- });
-
- _.forEach(daemons, (daemon) => {
- this.allDaemons[daemon.cluster_id].push(daemon.daemon_id);
- });
-
- if (this.isEdit) {
- this.clusterType = _.find(this.allClusters, { cluster_id: this.cluster_id })?.cluster_type;
- }
-
- const hasOneCluster = _.isArray(this.allClusters) && this.allClusters.length === 1;
- this.isDefaultCluster = hasOneCluster && this.allClusters[0].cluster_id === '_default_';
- if (hasOneCluster) {
- this.nfsForm.patchValue({
- cluster_id: this.allClusters[0].cluster_id
- });
- this.onClusterChange();
+ for (const cluster of clusters) {
+ this.allClusters.push({ cluster_id: cluster });
}
}
if (_.isObjectLike(fsalItem)) {
this.allFsals.push(fsalItem);
- if (fsalItem.value === 'RGW') {
- this.rgwUserService.list().subscribe((result: any) => {
- result.forEach((user: Record<string, any>) => {
- if (user.suspended === 0 && user.keys.length > 0) {
- const userId = user.tenant ? `${user.tenant}$${user.user_id}` : user.user_id;
- this.allRgwUsers.push(userId);
- }
- });
- });
- }
}
});
}
}
- resolveClients(clients: any[]) {
- this.allCephxClients = clients;
- }
-
resolveFilesystems(filesystems: any[]) {
this.allFsNames = filesystems;
if (filesystems.length === 1) {
}
fsalChangeHandler() {
- this.nfsForm.patchValue({
- tag: this._generateTag(),
- pseudo: this._generatePseudo(),
- access_type: this._updateAccessType()
+ const fsalValue = this.nfsForm.getValue('name');
+ // Probe the selected backend before enabling it: RGW is only usable when
+ // no realm is configured or the default realm is selected; CephFS just
+ // needs to be reachable.
+ const checkAvailability =
+ fsalValue === 'RGW'
+ ? this.rgwSiteService.get('realms').pipe(
+ mergeMap((realms: string[]) =>
+ realms.length === 0
+ ? of(true)
+ : this.rgwSiteService.isDefaultRealm().pipe(
+ mergeMap((isDefaultRealm) => {
+ if (!isDefaultRealm) {
+ throw new Error('Selected realm is not the default.');
+ }
+ return of(true);
+ })
+ )
+ )
+ )
+ : this.nfsService.filesystems();
+
+ checkAvailability.subscribe({
+ next: () => {
+ this.setFsalAvailability(fsalValue, true);
+ this.nfsForm.patchValue({
+ path: fsalValue === 'RGW' ? '' : '/',
+ pseudo: this.generatePseudo(),
+ access_type: this.updateAccessType()
+ });
+
+ this.setPathValidation();
+
+ this.cdRef.detectChanges();
+ },
+ error: (error) => {
+ // `error` may be an Error instance or an HTTP error response; extract a
+ // readable message for the availability hint.
+ this.setFsalAvailability(fsalValue, false, error?.message ?? error);
+ this.nfsForm.get('name').setValue('');
+ }
});
+ }
- this.setPathValidation();
+ // Enable or disable the given FSAL in the selector and record a
+ // user-facing message when its backend is unavailable.
+ private setFsalAvailability(fsalValue: string, available: boolean, errorMessage: string = '') {
+ this.allFsals = this.allFsals.map((fsalItem: NfsFSAbstractionLayer) => {
+ if (fsalItem.value === fsalValue) {
+ fsalItem.disabled = !available;
- this.cdRef.detectChanges();
+ this.fsalAvailabilityError = fsalItem.disabled
+ ? $localize`${fsalItem.descr} backend is not available. ${errorMessage}`
+ : null;
+ }
+ return fsalItem;
+ });
}
accessTypeChangeHandler() {
}
setPathValidation() {
+ const path = this.nfsForm.get('path');
+ path.setValidators([Validators.required]);
if (this.nfsForm.getValue('name') === 'RGW') {
- this.nfsForm
- .get('path')
- .setValidators([Validators.required, Validators.pattern('^(/|[^/><|&()#?]+)$')]);
+ path.setAsyncValidators([CdValidators.bucketExistence(true, this.rgwBucketService)]);
} else {
- this.nfsForm
- .get('path')
- .setValidators([Validators.required, Validators.pattern('^/[^><|&()?]*$')]);
+ path.setAsyncValidators([this.pathExistence(true)]);
}
- }
- rgwUserIdChangeHandler() {
- this.nfsForm.patchValue({
- pseudo: this._generatePseudo()
- });
+ if (this.isEdit) {
+ path.markAsDirty();
+ }
}
getAccessTypeHelp(accessType: string) {
return '';
}
- getPathTypeahead(path: any) {
+ private getPathTypeahead(path: any) {
if (!_.isString(path) || path === '/') {
return of([]);
}
const fsName = this.nfsForm.getValue('fsal').fs_name;
- return this.nfsService.lsDir(fsName, path);
+ return this.nfsService.lsDir(fsName, path).pipe(
+ map((result: Directory) =>
+ result.paths.filter((dirName: string) => dirName.toLowerCase().includes(path.toLowerCase())).slice(0, 15)
+ ),
+ catchError(() => of([$localize`Error while retrieving paths.`]))
+ );
}
pathChangeHandler() {
this.nfsForm.patchValue({
- pseudo: this._generatePseudo()
- });
-
- const path = this.nfsForm.getValue('path');
- this.getPathTypeahead(path).subscribe((res: any) => {
- this.isNewDirectory = path !== '/' && res.paths.indexOf(path) === -1;
- });
- }
-
- bucketChangeHandler() {
- this.nfsForm.patchValue({
- tag: this._generateTag(),
- pseudo: this._generatePseudo()
- });
-
- const bucket = this.nfsForm.getValue('path');
- this.getBucketTypeahead(bucket).subscribe((res: any) => {
- this.isNewBucket = bucket !== '' && res.indexOf(bucket) === -1;
+ pseudo: this.generatePseudo()
});
}
- getBucketTypeahead(path: string): Observable<any> {
- const rgwUserId = this.nfsForm.getValue('rgw_user_id');
-
- if (_.isString(rgwUserId) && _.isString(path) && path !== '/' && path !== '') {
- return this.nfsService.buckets(rgwUserId);
+ private getBucketTypeahead(path: string): Observable<any> {
+ if (_.isString(path) && path !== '/' && path !== '') {
+ return this.rgwBucketService.list().pipe(
+ map((bucketList) =>
+ bucketList
+ .filter((bucketName: string) => bucketName.toLowerCase().includes(path.toLowerCase()))
+ .slice(0, 15)
+ ),
+ catchError(() => of([$localize`Error while retrieving bucket names.`]))
+ );
} else {
return of([]);
}
}
- _generateTag() {
- let newTag = this.nfsForm.getValue('tag');
- if (!this.nfsForm.get('tag').dirty) {
- newTag = undefined;
- if (this.nfsForm.getValue('fsal') === 'RGW') {
- newTag = this.nfsForm.getValue('path');
- }
- }
- return newTag;
- }
-
- _generatePseudo() {
+ private generatePseudo() {
let newPseudo = this.nfsForm.getValue('pseudo');
if (this.nfsForm.get('pseudo') && !this.nfsForm.get('pseudo').dirty) {
newPseudo = undefined;
if (_.isString(this.nfsForm.getValue('path'))) {
newPseudo += this.nfsForm.getValue('path');
}
- } else if (this.nfsForm.getValue('fsal') === 'RGW') {
- if (_.isString(this.nfsForm.getValue('rgw_user_id'))) {
- newPseudo = '/' + this.nfsForm.getValue('rgw_user_id');
- if (_.isString(this.nfsForm.getValue('path'))) {
- newPseudo += '/' + this.nfsForm.getValue('path');
- }
- }
}
}
return newPseudo;
}
- _updateAccessType() {
+ private updateAccessType() {
const name = this.nfsForm.getValue('name');
let accessType = this.defaultAccessType[name];
return accessType;
}
- onClusterChange() {
- const cluster_id = this.nfsForm.getValue('cluster_id');
- this.clusterType = _.find(this.allClusters, { cluster_id: cluster_id })?.cluster_type;
- if (this.clusterType === NFSClusterType.user) {
- this.daemonsSelections = _.map(
- this.allDaemons[cluster_id],
- (daemon) => new SelectOption(false, daemon, '')
- );
- this.daemonsSelections = [...this.daemonsSelections];
- } else {
- this.daemonsSelections = [];
- }
- this.nfsForm.patchValue({ daemons: [] });
- }
-
- removeDaemon(index: number, daemon: string) {
- this.daemonsSelections.forEach((value) => {
- if (value.name === daemon) {
- value.selected = false;
- }
- });
-
- const daemons = this.nfsForm.get('daemons');
- daemons.value.splice(index, 1);
- daemons.setValue(daemons.value);
-
- return false;
- }
-
- onDaemonSelection() {
- this.nfsForm.get('daemons').setValue(this.nfsForm.getValue('daemons'));
- }
-
- onToggleAllDaemonsSelection() {
- const cluster_id = this.nfsForm.getValue('cluster_id');
- const daemons =
- this.nfsForm.getValue('daemons').length === 0 ? this.allDaemons[cluster_id] : [];
- this.nfsForm.patchValue({ daemons: daemons });
- }
-
submitAction() {
let action: Observable<any>;
- const requestModel = this._buildRequest();
+ const requestModel = this.buildRequest();
if (this.isEdit) {
action = this.taskWrapper.wrapTaskAroundCall({
task: new FinishedTask('nfs/edit', {
cluster_id: this.cluster_id,
- export_id: this.export_id
+ export_id: _.parseInt(this.export_id)
}),
- call: this.nfsService.update(this.cluster_id, this.export_id, requestModel)
+ call: this.nfsService.update(this.cluster_id, _.parseInt(this.export_id), requestModel)
});
} else {
// Create
});
}
- _buildRequest() {
+ private buildRequest() {
const requestModel: any = _.cloneDeep(this.nfsForm.value);
- if (_.isUndefined(requestModel.tag) || requestModel.tag === '') {
- requestModel.tag = null;
- }
-
if (this.isEdit) {
- requestModel.export_id = this.export_id;
+ requestModel.export_id = _.parseInt(this.export_id);
}
- if (requestModel.fsal.name === 'CEPH') {
- delete requestModel.fsal.rgw_user_id;
- } else {
+ if (requestModel.fsal.name === 'RGW') {
delete requestModel.fsal.fs_name;
- delete requestModel.fsal.user_id;
}
requestModel.protocols = [];
- if (requestModel.protocolNfsv3) {
- requestModel.protocols.push(3);
- } else {
- requestModel.tag = null;
- }
- delete requestModel.protocolNfsv3;
if (requestModel.protocolNfsv4) {
requestModel.protocols.push(4);
} else {
return requestModel;
}
+
+ // Async validator: errors unless the existence of the given CephFS path
+ // matches `requiredExistenceResult`.
+ private pathExistence(requiredExistenceResult: boolean): AsyncValidatorFn {
+ return (control: AbstractControl): Observable<ValidationErrors | null> => {
+ if (control.pristine || !control.value) {
+ return of({ required: true });
+ }
+ const fsName = this.nfsForm.getValue('fsal').fs_name;
+ return this.nfsService
+ .lsDir(fsName, control.value)
+ .pipe(
+ map((directory: Directory) =>
+ directory.paths.includes(control.value) === requiredExistenceResult
+ ? null
+ : { pathNameNotAllowed: true }
+ )
+ );
+ };
+ }
}
beforeEach(() => {
fixture.detectChanges();
spyOn(nfsService, 'list').and.callThrough();
- httpTesting.expectOne('api/nfs-ganesha/daemon').flush([]);
});
afterEach(() => {
refresh(new Summary());
spyOn(nfsService, 'list').and.callFake(() => of(exports));
fixture.detectChanges();
-
- const req = httpTesting.expectOne('api/nfs-ganesha/daemon');
- req.flush([]);
});
it('should gets all exports without tasks', () => {
prop: 'cluster_id',
flexGrow: 2
},
- {
- name: $localize`Daemons`,
- prop: 'daemons',
- flexGrow: 2
- },
{
name: $localize`Storage Backend`,
prop: 'fsal',
}
];
- this.nfsService.daemon().subscribe(
- (daemons: any) => {
- const clusters = _(daemons)
- .map((daemon) => daemon.cluster_id)
- .uniq()
- .value();
-
- this.isDefaultCluster = clusters.length === 1 && clusters[0] === '_default_';
- this.columns[2].isHidden = this.isDefaultCluster;
- if (this.table) {
- this.table.updateColumns();
- }
-
- this.taskListService.init(
- () => this.nfsService.list(),
- (resp) => this.prepareResponse(resp),
- (exports) => (this.exports = exports),
- () => this.onFetchError(),
- this.taskFilter,
- this.itemFilter,
- this.builders
- );
- },
- () => {
- this.onFetchError();
- }
+ this.taskListService.init(
+ () => this.nfsService.list(),
+ (resp) => this.prepareResponse(resp),
+ (exports) => (this.exports = exports),
+ () => this.onFetchError(),
+ this.taskFilter,
+ this.itemFilter,
+ this.builders
);
}
service_map_id: string;
version: string;
server_hostname: string;
+ realm_name: string;
zonegroup_name: string;
zone_name: string;
default: boolean;
i18n>This field is required.</span>
<span class="invalid-feedback"
*ngIf="bucketForm.showError('bid', frm, 'bucketNameInvalid')"
- i18n>The value is not valid.</span>
+ i18n>Bucket names can only contain lowercase letters, numbers, periods, and hyphens.</span>
<span class="invalid-feedback"
- *ngIf="bucketForm.showError('bid', frm, 'bucketNameExists')"
+ *ngIf="bucketForm.showError('bid', frm, 'bucketNameNotAllowed')"
i18n>The chosen name is already in use.</span>
<span class="invalid-feedback"
*ngIf="bucketForm.showError('bid', frm, 'containsUpperCase')"
i18n>Bucket names cannot be formatted as IP address.</span>
<span class="invalid-feedback"
*ngIf="bucketForm.showError('bid', frm, 'onlyLowerCaseAndNumbers')"
- i18n>Bucket names can only contain lowercase letters, numbers, and hyphens.</span>
+ i18n>Bucket labels cannot be empty and can only contain lowercase letters, numbers, and hyphens.</span>
<span class="invalid-feedback"
*ngIf="bucketForm.showError('bid', frm, 'shouldBeInRange')"
i18n>Bucket names must be 3 to 63 characters long.</span>
import { HttpClientTestingModule } from '@angular/common/http/testing';
-import { ComponentFixture, fakeAsync, TestBed, tick } from '@angular/core/testing';
+import { ComponentFixture, fakeAsync, TestBed } from '@angular/core/testing';
import { ReactiveFormsModule } from '@angular/forms';
import { Router } from '@angular/router';
import { RouterTestingModule } from '@angular/router/testing';
import _ from 'lodash';
import { ToastrModule } from 'ngx-toastr';
-import { of as observableOf, throwError } from 'rxjs';
+import { of as observableOf } from 'rxjs';
import { RgwBucketService } from '~/app/shared/api/rgw-bucket.service';
import { RgwSiteService } from '~/app/shared/api/rgw-site.service';
});
describe('bucketNameValidator', () => {
- const testValidator = (name: string, valid: boolean, expectedError?: string) => {
- rgwBucketServiceGetSpy.and.returnValue(throwError('foo'));
- formHelper.setValue('bid', name, true);
- tick();
- if (valid) {
- formHelper.expectValid('bid');
- } else {
- formHelper.expectError('bid', expectedError);
- }
- };
-
it('should validate empty name', fakeAsync(() => {
formHelper.expectErrorChange('bid', '', 'required', true);
}));
+ });
- it('bucket names cannot be formatted as IP address', fakeAsync(() => {
- const testIPs = ['1.1.1.01', '001.1.1.01', '127.0.0.1'];
- for (const ip of testIPs) {
- testValidator(ip, false, 'ipAddress');
- }
- }));
-
- it('bucket name must be >= 3 characters long (1/2)', fakeAsync(() => {
- testValidator('ab', false, 'shouldBeInRange');
- }));
-
- it('bucket name must be >= 3 characters long (2/2)', fakeAsync(() => {
- testValidator('abc', true);
- }));
-
- it('bucket name must be <= than 63 characters long (1/2)', fakeAsync(() => {
- testValidator(_.repeat('a', 64), false, 'shouldBeInRange');
- }));
-
- it('bucket name must be <= than 63 characters long (2/2)', fakeAsync(() => {
- testValidator(_.repeat('a', 63), true);
- }));
-
- it('bucket names must not contain uppercase characters or underscores (1/2)', fakeAsync(() => {
- testValidator('iAmInvalid', false, 'containsUpperCase');
- }));
-
- it('bucket names can only contain lowercase letters, numbers, and hyphens', fakeAsync(() => {
- testValidator('$$$', false, 'onlyLowerCaseAndNumbers');
- }));
-
- it('bucket names must not contain uppercase characters or underscores (2/2)', fakeAsync(() => {
- testValidator('i_am_invalid', false, 'containsUpperCase');
- }));
-
- it('bucket names must start and end with letters or numbers', fakeAsync(() => {
- testValidator('abcd-', false, 'lowerCaseOrNumber');
- }));
-
- it('bucket names with invalid labels (1/3)', fakeAsync(() => {
- testValidator('abc.1def.Ghi2', false, 'containsUpperCase');
- }));
-
- it('bucket names with invalid labels (2/3)', fakeAsync(() => {
- testValidator('abc.1_xy', false, 'containsUpperCase');
- }));
-
- it('bucket names with invalid labels (3/3)', fakeAsync(() => {
- testValidator('abc.*def', false, 'lowerCaseOrNumber');
- }));
-
- it('bucket names must be a series of one or more labels and can contain lowercase letters, numbers, and hyphens (1/3)', fakeAsync(() => {
- testValidator('xyz.abc', true);
- }));
-
- it('bucket names must be a series of one or more labels and can contain lowercase letters, numbers, and hyphens (2/3)', fakeAsync(() => {
- testValidator('abc.1-def', true);
- }));
-
- it('bucket names must be a series of one or more labels and can contain lowercase letters, numbers, and hyphens (3/3)', fakeAsync(() => {
- testValidator('abc.ghi2', true);
- }));
-
- it('bucket names must be unique', fakeAsync(() => {
- testValidator('bucket-name-is-unique', true);
- }));
-
- it('bucket names must not contain spaces', fakeAsync(() => {
- testValidator('bucket name with spaces', false, 'onlyLowerCaseAndNumbers');
- }));
-
+ describe('zonegroup and placement targets', () => {
it('should get zonegroup and placement targets', () => {
const payload: Record<string, any> = {
zonegroup: 'default',
import { Component, OnInit } from '@angular/core';
-import { AbstractControl, AsyncValidatorFn, ValidationErrors, Validators } from '@angular/forms';
+import { Validators } from '@angular/forms';
import { ActivatedRoute, Router } from '@angular/router';
import _ from 'lodash';
-import { forkJoin, Observable, of as observableOf, timer as observableTimer } from 'rxjs';
-import { map, switchMapTo } from 'rxjs/operators';
+import { forkJoin } from 'rxjs';
import { RgwBucketService } from '~/app/shared/api/rgw-bucket.service';
import { RgwSiteService } from '~/app/shared/api/rgw-site.service';
});
this.bucketForm = this.formBuilder.group({
id: [null],
- bid: [null, [Validators.required], this.editing ? [] : [this.bucketNameValidator()]],
+ bid: [
+ null,
+ [Validators.required],
+ this.editing
+ ? []
+ : [CdValidators.bucketName(), CdValidators.bucketExistence(false, this.rgwBucketService)]
+ ],
owner: [null, [Validators.required]],
'placement-target': [null, this.editing ? [] : [Validators.required]],
versioning: [null],
}
}
- /**
- * Validate the bucket name. In general, bucket names should follow domain
- * name constraints:
- * - Bucket names must be unique.
- * - Bucket names cannot be formatted as IP address.
- * - Bucket names can be between 3 and 63 characters long.
- * - Bucket names must not contain uppercase characters or underscores.
- * - Bucket names must start with a lowercase letter or number.
- * - Bucket names must be a series of one or more labels. Adjacent
- * labels are separated by a single period (.). Bucket names can
- * contain lowercase letters, numbers, and hyphens. Each label must
- * start and end with a lowercase letter or a number.
- */
- bucketNameValidator(): AsyncValidatorFn {
- return (control: AbstractControl): Observable<ValidationErrors | null> => {
- // Exit immediately if user has not interacted with the control yet
- // or the control value is empty.
- if (control.pristine || control.value === '') {
- return observableOf(null);
- }
- const constraints = [];
- let errorName: string;
- // - Bucket names cannot be formatted as IP address.
- constraints.push(() => {
- const ipv4Rgx = /^((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$/i;
- const ipv6Rgx = /^(?:[a-f0-9]{1,4}:){7}[a-f0-9]{1,4}$/i;
- const name = this.bucketForm.get('bid').value;
- let notIP = true;
- if (ipv4Rgx.test(name) || ipv6Rgx.test(name)) {
- errorName = 'ipAddress';
- notIP = false;
- }
- return notIP;
- });
- // - Bucket names can be between 3 and 63 characters long.
- constraints.push((name: string) => {
- if (!_.inRange(name.length, 3, 64)) {
- errorName = 'shouldBeInRange';
- return false;
- }
- return true;
- });
- // - Bucket names must not contain uppercase characters or underscores.
- // - Bucket names must start with a lowercase letter or number.
- // - Bucket names must be a series of one or more labels. Adjacent
- // labels are separated by a single period (.). Bucket names can
- // contain lowercase letters, numbers, and hyphens. Each label must
- // start and end with a lowercase letter or a number.
- constraints.push((name: string) => {
- const labels = _.split(name, '.');
- return _.every(labels, (label) => {
- // Bucket names must not contain uppercase characters or underscores.
- if (label !== _.toLower(label) || label.includes('_')) {
- errorName = 'containsUpperCase';
- return false;
- }
- // Bucket names can contain lowercase letters, numbers, and hyphens.
- if (!/^\S*$/.test(name) || !/[0-9a-z-]/.test(label)) {
- errorName = 'onlyLowerCaseAndNumbers';
- return false;
- }
- // Each label must start and end with a lowercase letter or a number.
- return _.every([0, label.length - 1], (index) => {
- errorName = 'lowerCaseOrNumber';
- return /[a-z]/.test(label[index]) || _.isInteger(_.parseInt(label[index]));
- });
- });
- });
- if (!_.every(constraints, (func: Function) => func(control.value))) {
- return observableTimer().pipe(
- map(() => {
- switch (errorName) {
- case 'onlyLowerCaseAndNumbers':
- return { onlyLowerCaseAndNumbers: true };
- case 'shouldBeInRange':
- return { shouldBeInRange: true };
- case 'ipAddress':
- return { ipAddress: true };
- case 'containsUpperCase':
- return { containsUpperCase: true };
- case 'lowerCaseOrNumber':
- return { lowerCaseOrNumber: true };
- default:
- return { bucketNameInvalid: true };
- }
- })
- );
- }
- // - Bucket names must be unique.
- return observableTimer().pipe(
- switchMapTo(this.rgwBucketService.exists.call(this.rgwBucketService, control.value)),
- map((resp: boolean) => {
- if (!resp) {
- return null;
- } else {
- return { bucketNameExists: true };
- }
- })
- );
- };
- }
-
areMfaCredentialsRequired() {
return (
this.isMfaDeleteEnabled !== this.isMfaDeleteAlreadyEnabled ||
getBucketList(context: CdTableFetchDataContext) {
this.setTableRefreshTimeout();
- this.rgwBucketService.list().subscribe(
+ this.rgwBucketService.list(true).subscribe(
(resp: object[]) => {
this.buckets = resp;
this.transformBucketData();
import { take } from 'rxjs/operators';
+import { RgwDaemon } from '~/app/ceph/rgw/models/rgw-daemon';
import { RgwDaemonService } from '~/app/shared/api/rgw-daemon.service';
import { RgwSiteService } from '~/app/shared/api/rgw-site.service';
import { ListWithDetails } from '~/app/shared/classes/list-with-details.class';
getDaemonList(context: CdTableFetchDataContext) {
this.rgwDaemonService.daemons$.pipe(take(1)).subscribe(
- (resp: object[]) => {
+ (resp: RgwDaemon[]) => {
this.daemons = resp;
},
() => {
--- /dev/null
+import { ApiClient } from '~/app/shared/api/api-client';
+
+class MockApiClient extends ApiClient {}
+
+describe('ApiClient', () => {
+ const service = new MockApiClient();
+
+ it('should get the version header value', () => {
+ expect(service.getVersionHeaderValue(1, 2)).toBe('application/vnd.ceph.api.v1.2+json');
+ });
+});
--- /dev/null
+// Base class for REST API services; builds the versioned media type used in
+// `Accept` headers (e.g. `application/vnd.ceph.api.v1.2+json`).
+export abstract class ApiClient {
+ getVersionHeaderValue(major: number, minor: number) {
+ return `application/vnd.ceph.api.v${major}.${minor}+json`;
+ }
+}
import { InventoryDevice } from '~/app/ceph/cluster/inventory/inventory-devices/inventory-device.model';
import { InventoryHost } from '~/app/ceph/cluster/inventory/inventory-host.model';
+import { ApiClient } from '~/app/shared/api/api-client';
import { CdHelperClass } from '~/app/shared/classes/cd-helper.class';
import { Daemon } from '../models/daemon.interface';
import { CdDevice } from '../models/devices';
@Injectable({
providedIn: 'root'
})
-export class HostService {
+export class HostService extends ApiClient {
baseURL = 'api/host';
baseUIURL = 'ui-api/host';
- constructor(private http: HttpClient, private deviceService: DeviceService) {}
+ constructor(private http: HttpClient, private deviceService: DeviceService) {
+ super();
+ }
list(facts: string): Observable<object[]> {
return this.http.get<object[]>(this.baseURL, {
maintenance: maintenance,
force: force
},
- { headers: { Accept: 'application/vnd.ceph.api.v0.1+json' } }
+ { headers: { Accept: this.getVersionHeaderValue(0, 1) } }
);
}
});
it('should call update', () => {
- service.update('cluster_id', 'export_id', 'foo').subscribe();
- const req = httpTesting.expectOne('api/nfs-ganesha/export/cluster_id/export_id');
+ service.update('cluster_id', 1, 'foo').subscribe();
+ const req = httpTesting.expectOne('api/nfs-ganesha/export/cluster_id/1');
expect(req.request.body).toEqual('foo');
expect(req.request.method).toBe('PUT');
});
const req = httpTesting.expectOne('ui-api/nfs-ganesha/lsdir/a?root_dir=foo_dir');
expect(req.request.method).toBe('GET');
});
-
- it('should call buckets', () => {
- service.buckets('user_foo').subscribe();
- const req = httpTesting.expectOne('ui-api/nfs-ganesha/rgw/buckets?user_id=user_foo');
- expect(req.request.method).toBe('GET');
- });
-
- it('should call daemon', () => {
- service.daemon().subscribe();
- const req = httpTesting.expectOne('api/nfs-ganesha/daemon');
- expect(req.request.method).toBe('GET');
- });
-
- it('should call start', () => {
- service.start('host_name').subscribe();
- const req = httpTesting.expectOne('api/nfs-ganesha/service/host_name/start');
- expect(req.request.method).toBe('PUT');
- });
-
- it('should call stop', () => {
- service.stop('host_name').subscribe();
- const req = httpTesting.expectOne('api/nfs-ganesha/service/host_name/stop');
- expect(req.request.method).toBe('PUT');
- });
});
import { HttpClient } from '@angular/common/http';
import { Injectable } from '@angular/core';
+import { Observable } from 'rxjs';
+
+import { NfsFSAbstractionLayer } from '~/app/ceph/nfs/models/nfs.fsal';
+import { ApiClient } from '~/app/shared/api/api-client';
+
+// Shape of the `lsdir` response: the directory paths found under the queried root.
+export interface Directory {
+ paths: string[];
+}
+
@Injectable({
providedIn: 'root'
})
-export class NfsService {
+export class NfsService extends ApiClient {
apiPath = 'api/nfs-ganesha';
uiApiPath = 'ui-api/nfs-ganesha';
value: 'RO',
help: $localize`Allows only operations that do not modify the server`
},
- {
- value: 'MDONLY',
- help: $localize`Does not allow read or write operations, but allows any other operation`
- },
- {
- value: 'MDONLY_RO',
- help: $localize`Does not allow read, write, or any operation that modifies file attributes or directory content`
- },
{
value: 'NONE',
help: $localize`Allows no access at all`
}
];
- nfsFsal = [
+ nfsFsal: NfsFSAbstractionLayer[] = [
{
value: 'CEPH',
- descr: $localize`CephFS`
+ descr: $localize`CephFS`,
+ disabled: false
},
{
value: 'RGW',
- descr: $localize`Object Gateway`
+ descr: $localize`Object Gateway`,
+ disabled: false
}
];
nfsSquash = ['no_root_squash', 'root_id_squash', 'root_squash', 'all_squash'];
- constructor(private http: HttpClient) {}
+ constructor(private http: HttpClient) {
+ super();
+ }
list() {
return this.http.get(`${this.apiPath}/export`);
}
create(nfs: any) {
- return this.http.post(`${this.apiPath}/export`, nfs, { observe: 'response' });
+ return this.http.post(`${this.apiPath}/export`, nfs, {
+ headers: { Accept: this.getVersionHeaderValue(2, 0) },
+ observe: 'response'
+ });
}
- update(clusterId: string, id: string, nfs: any) {
- return this.http.put(`${this.apiPath}/export/${clusterId}/${id}`, nfs, { observe: 'response' });
+ update(clusterId: string, id: number, nfs: any) {
+ return this.http.put(`${this.apiPath}/export/${clusterId}/${id}`, nfs, {
+ headers: { Accept: this.getVersionHeaderValue(2, 0) },
+ observe: 'response'
+ });
}
delete(clusterId: string, exportId: string) {
return this.http.delete(`${this.apiPath}/export/${clusterId}/${exportId}`, {
+ headers: { Accept: this.getVersionHeaderValue(2, 0) },
observe: 'response'
});
}
- lsDir(fs_name: string, root_dir: string) {
- return this.http.get(`${this.uiApiPath}/lsdir/${fs_name}?root_dir=${root_dir}`);
- }
-
- buckets(user_id: string) {
- return this.http.get(`${this.uiApiPath}/rgw/buckets?user_id=${user_id}`);
+ listClusters() {
+ return this.http.get(`${this.apiPath}/cluster`, {
+ headers: { Accept: this.getVersionHeaderValue(0, 1) }
+ });
}
- clients() {
- return this.http.get(`${this.uiApiPath}/cephx/clients`);
+ lsDir(fs_name: string, root_dir: string): Observable<Directory> {
+ return this.http.get<Directory>(`${this.uiApiPath}/lsdir/${fs_name}?root_dir=${root_dir}`);
}
fsals() {
filesystems() {
return this.http.get(`${this.uiApiPath}/cephfs/filesystems`);
}
-
- daemon() {
- return this.http.get(`${this.apiPath}/daemon`);
- }
-
- start(host_name: string) {
- return this.http.put(`${this.apiPath}/service/${host_name}/start`, null, {
- observe: 'response'
- });
- }
-
- stop(host_name: string) {
- return this.http.put(`${this.apiPath}/service/${host_name}/stop`, null, {
- observe: 'response'
- });
- }
}
it('should call list', () => {
service.list().subscribe();
- const req = httpTesting.expectOne(`api/rgw/bucket?${RgwHelper.DAEMON_QUERY_PARAM}&stats=true`);
+ const req = httpTesting.expectOne(`api/rgw/bucket?${RgwHelper.DAEMON_QUERY_PARAM}&stats=false`);
+ expect(req.request.method).toBe('GET');
+ });
+
+ it('should call list with stats and user id', () => {
+ service.list(true, 'test-name').subscribe();
+ const req = httpTesting.expectOne(
+ `api/rgw/bucket?${RgwHelper.DAEMON_QUERY_PARAM}&stats=true&uid=test-name`
+ );
expect(req.request.method).toBe('GET');
});
import { of as observableOf } from 'rxjs';
import { catchError, mapTo } from 'rxjs/operators';
+import { ApiClient } from '~/app/shared/api/api-client';
import { RgwDaemonService } from '~/app/shared/api/rgw-daemon.service';
import { cdEncode } from '~/app/shared/decorators/cd-encode';
@Injectable({
providedIn: 'root'
})
-export class RgwBucketService {
+export class RgwBucketService extends ApiClient {
private url = 'api/rgw/bucket';
- constructor(private http: HttpClient, private rgwDaemonService: RgwDaemonService) {}
+ constructor(private http: HttpClient, private rgwDaemonService: RgwDaemonService) {
+ super();
+ }
/**
* Get the list of buckets.
* @return Observable<Object[]>
*/
- list() {
+ list(stats: boolean = false, uid: string = '') {
return this.rgwDaemonService.request((params: HttpParams) => {
- params = params.append('stats', 'true');
- return this.http.get(this.url, { params: params });
+ params = params.append('stats', stats.toString());
+ if (uid) {
+ params = params.append('uid', uid);
+ }
+ return this.http.get(this.url, {
+ headers: { Accept: this.getVersionHeaderValue(1, 1) },
+ params: params
+ });
});
}
import { HttpClient, HttpParams } from '@angular/common/http';
import { Injectable } from '@angular/core';
+import { Observable } from 'rxjs';
+import { map, mergeMap } from 'rxjs/operators';
+
+import { RgwDaemon } from '~/app/ceph/rgw/models/rgw-daemon';
import { RgwDaemonService } from '~/app/shared/api/rgw-daemon.service';
import { cdEncode } from '~/app/shared/decorators/cd-encode';
return this.http.get(this.url, { params: params });
});
}
+
+ isDefaultRealm(): Observable<boolean> {
+ return this.get('default-realm').pipe(
+ mergeMap((defaultRealm: string) =>
+ this.rgwDaemonService.selectedDaemon$.pipe(
+ map((selectedDaemon: RgwDaemon) => selectedDaemon.realm_name === defaultRealm)
+ )
+ )
+ );
+ }
}
import { fakeAsync, tick } from '@angular/core/testing';
import { FormControl, Validators } from '@angular/forms';
+import _ from 'lodash';
import { of as observableOf } from 'rxjs';
+import { RgwBucketService } from '~/app/shared/api/rgw-bucket.service';
+import { CdFormGroup } from '~/app/shared/forms/cd-form-group';
+import { CdValidators } from '~/app/shared/forms/cd-validators';
import { FormHelper } from '~/testing/unit-test-helper';
-import { CdFormGroup } from './cd-form-group';
-import { CdValidators } from './cd-validators';
+
+let mockBucketExists = observableOf(true);
+jest.mock('~/app/shared/api/rgw-bucket.service', () => {
+ return {
+ RgwBucketService: jest.fn().mockImplementation(() => {
+ return {
+ exists: () => mockBucketExists
+ };
+ })
+ };
+});
describe('CdValidators', () => {
let formHelper: FormHelper;
});
});
});
+ describe('bucket', () => {
+ const testValidator = (name: string, valid: boolean, expectedError?: string) => {
+ formHelper.setValue('x', name, true);
+ tick();
+ if (valid) {
+ formHelper.expectValid('x');
+ } else {
+ formHelper.expectError('x', expectedError);
+ }
+ };
+
+ describe('bucketName', () => {
+ beforeEach(() => {
+ form = new CdFormGroup({
+ x: new FormControl('', null, CdValidators.bucketName())
+ });
+ formHelper = new FormHelper(form);
+ });
+
+ it('bucket name cannot be empty', fakeAsync(() => {
+ testValidator('', false, 'required');
+ }));
+
+ it('bucket names cannot be formatted as an IP address', fakeAsync(() => {
+ const testIPs = ['1.1.1.01', '001.1.1.01', '127.0.0.1'];
+ for (const ip of testIPs) {
+ testValidator(ip, false, 'ipAddress');
+ }
+ }));
+
+ it('bucket name must be >= 3 characters long (1/2)', fakeAsync(() => {
+ testValidator('ab', false, 'shouldBeInRange');
+ }));
+
+ it('bucket name must be >= 3 characters long (2/2)', fakeAsync(() => {
+ testValidator('abc', true);
+ }));
+
+ it('bucket name must be <= 63 characters long (1/2)', fakeAsync(() => {
+ testValidator(_.repeat('a', 64), false, 'shouldBeInRange');
+ }));
+
+ it('bucket name must be <= 63 characters long (2/2)', fakeAsync(() => {
+ testValidator(_.repeat('a', 63), true);
+ }));
+
+ it('bucket names must not contain uppercase characters or underscores (1/2)', fakeAsync(() => {
+ testValidator('iAmInvalid', false, 'bucketNameInvalid');
+ }));
+
+ it('bucket names can only contain lowercase letters, numbers, periods and hyphens', fakeAsync(() => {
+ testValidator('bk@2', false, 'bucketNameInvalid');
+ }));
+
+ it('bucket names must not contain uppercase characters or underscores (2/2)', fakeAsync(() => {
+ testValidator('i_am_invalid', false, 'bucketNameInvalid');
+ }));
+
+ it('bucket names must start and end with letters or numbers', fakeAsync(() => {
+ testValidator('abcd-', false, 'lowerCaseOrNumber');
+ }));
+
+ it('bucket labels cannot be empty', fakeAsync(() => {
+ testValidator('bk.', false, 'onlyLowerCaseAndNumbers');
+ }));
+
+ it('bucket names with invalid labels (1/3)', fakeAsync(() => {
+ testValidator('abc.1def.Ghi2', false, 'bucketNameInvalid');
+ }));
+
+ it('bucket names with invalid labels (2/3)', fakeAsync(() => {
+ testValidator('abc.1_xy', false, 'bucketNameInvalid');
+ }));
+
+ it('bucket names with invalid labels (3/3)', fakeAsync(() => {
+ testValidator('abc.*def', false, 'bucketNameInvalid');
+ }));
+
+ it('bucket names must be a series of one or more labels and can contain lowercase letters, numbers, and hyphens (1/3)', fakeAsync(() => {
+ testValidator('xyz.abc', true);
+ }));
+
+ it('bucket names must be a series of one or more labels and can contain lowercase letters, numbers, and hyphens (2/3)', fakeAsync(() => {
+ testValidator('abc.1-def', true);
+ }));
+
+ it('bucket names must be a series of one or more labels and can contain lowercase letters, numbers, and hyphens (3/3)', fakeAsync(() => {
+ testValidator('abc.ghi2', true);
+ }));
+
+ it('bucket names must be unique', fakeAsync(() => {
+ testValidator('bucket-name-is-unique', true);
+ }));
+
+ it('bucket names must not contain spaces', fakeAsync(() => {
+ testValidator('bucket name with spaces', false, 'bucketNameInvalid');
+ }));
+ });
+
+ describe('bucketExistence', () => {
+ const rgwBucketService = new RgwBucketService(undefined, undefined);
+
+ beforeEach(() => {
+ form = new CdFormGroup({
+ x: new FormControl('', null, CdValidators.bucketExistence(false, rgwBucketService))
+ });
+ formHelper = new FormHelper(form);
+ });
+
+ it('bucket name cannot be empty', fakeAsync(() => {
+ testValidator('', false, 'required');
+ }));
+
+ it('bucket name should not exist but it does', fakeAsync(() => {
+ testValidator('testName', false, 'bucketNameNotAllowed');
+ }));
+
+ it('bucket name should not exist and it does not', fakeAsync(() => {
+ mockBucketExists = observableOf(false);
+ testValidator('testName', true);
+ }));
+
+ it('bucket name should exist but it does not', fakeAsync(() => {
+ form.get('x').setAsyncValidators(CdValidators.bucketExistence(true, rgwBucketService));
+ mockBucketExists = observableOf(false);
+ testValidator('testName', false, 'bucketNameNotAllowed');
+ }));
+
+ it('bucket name should exist and it does', fakeAsync(() => {
+ form.get('x').setAsyncValidators(CdValidators.bucketExistence(true, rgwBucketService));
+ mockBucketExists = observableOf(true);
+ testValidator('testName', true);
+ }));
+ });
+ });
});
import { Observable, of as observableOf, timer as observableTimer } from 'rxjs';
import { map, switchMapTo, take } from 'rxjs/operators';
-import { DimlessBinaryPipe } from '../pipes/dimless-binary.pipe';
-import { FormatterService } from '../services/formatter.service';
+import { RgwBucketService } from '~/app/shared/api/rgw-bucket.service';
+import { DimlessBinaryPipe } from '~/app/shared/pipes/dimless-binary.pipe';
+import { FormatterService } from '~/app/shared/services/formatter.service';
export function isEmptyInputValue(value: any): boolean {
return value == null || value.length === 0;
);
};
}
+
+ /**
+ * Validate the bucket name. In general, bucket names should follow domain
+ * name constraints:
+ * - Bucket names must be unique.
+ * - Bucket names cannot be formatted as an IP address.
+ * - Bucket names can be between 3 and 63 characters long.
+ * - Bucket names must not contain uppercase characters or underscores.
+ * - Bucket names must start with a lowercase letter or number.
+ * - Bucket names must be a series of one or more labels. Adjacent
+ * labels are separated by a single period (.). Bucket names can
+ * contain lowercase letters, numbers, and hyphens. Each label must
+ * start and end with a lowercase letter or a number.
+ */
+ static bucketName(): AsyncValidatorFn {
+ return (control: AbstractControl): Observable<ValidationErrors | null> => {
+ if (control.pristine || !control.value) {
+ return observableOf({ required: true });
+ }
+ const constraints = [];
+ let errorName: string;
+ // - Bucket names cannot be formatted as an IP address.
+ constraints.push(() => {
+ const ipv4Rgx = /^((25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$/i;
+ const ipv6Rgx = /^(?:[a-f0-9]{1,4}:){7}[a-f0-9]{1,4}$/i;
+ const name = control.value;
+ let notIP = true;
+ if (ipv4Rgx.test(name) || ipv6Rgx.test(name)) {
+ errorName = 'ipAddress';
+ notIP = false;
+ }
+ return notIP;
+ });
+ // - Bucket names can be between 3 and 63 characters long.
+ constraints.push((name: string) => {
+ if (!_.inRange(name.length, 3, 64)) {
+ errorName = 'shouldBeInRange';
+ return false;
+ }
+ // Bucket names can only contain lowercase letters, numbers, periods and hyphens.
+ if (!/^[0-9a-z.-]+$/.test(control.value)) {
+ errorName = 'bucketNameInvalid';
+ return false;
+ }
+ return true;
+ });
+ // - Bucket names must not contain uppercase characters or underscores.
+ // - Bucket names must start with a lowercase letter or number.
+ // - Bucket names must be a series of one or more labels. Adjacent
+ // labels are separated by a single period (.). Bucket names can
+ // contain lowercase letters, numbers, and hyphens. Each label must
+ // start and end with a lowercase letter or a number.
+ constraints.push((name: string) => {
+ const labels = _.split(name, '.');
+ return _.every(labels, (label) => {
+ // Bucket names must not contain uppercase characters or underscores.
+ if (label !== _.toLower(label) || label.includes('_')) {
+ errorName = 'containsUpperCase';
+ return false;
+ }
+ // Bucket labels can contain lowercase letters, numbers, and hyphens.
+ if (!/^[0-9a-z-]+$/.test(label)) {
+ errorName = 'onlyLowerCaseAndNumbers';
+ return false;
+ }
+ // Each label must start and end with a lowercase letter or a number.
+ return _.every([0, label.length - 1], (index) => {
+ errorName = 'lowerCaseOrNumber';
+ return /[a-z]/.test(label[index]) || _.isInteger(_.parseInt(label[index]));
+ });
+ });
+ });
+ if (!_.every(constraints, (func: Function) => func(control.value))) {
+ return observableOf(
+ (() => {
+ switch (errorName) {
+ case 'onlyLowerCaseAndNumbers':
+ return { onlyLowerCaseAndNumbers: true };
+ case 'shouldBeInRange':
+ return { shouldBeInRange: true };
+ case 'ipAddress':
+ return { ipAddress: true };
+ case 'containsUpperCase':
+ return { containsUpperCase: true };
+ case 'lowerCaseOrNumber':
+ return { lowerCaseOrNumber: true };
+ default:
+ return { bucketNameInvalid: true };
+ }
+ })()
+ );
+ }
+
+ return observableOf(null);
+ };
+ }
+
+ static bucketExistence(
+ requiredExistenceResult: boolean,
+ rgwBucketService: RgwBucketService
+ ): AsyncValidatorFn {
+ return (control: AbstractControl): Observable<ValidationErrors | null> => {
+ if (control.pristine || !control.value) {
+ return observableOf({ required: true });
+ }
+ return rgwBucketService
+ .exists(control.value)
+ .pipe(
+ map((existenceResult: boolean) =>
+ existenceResult === requiredExistenceResult ? null : { bucketNameNotAllowed: true }
+ )
+ );
+ };
+ }
}
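
The rule chain in `bucketName()` is easier to review as a standalone predicate. A minimal Python sketch of the same constraints (a hypothetical helper, not part of this change; it checks only IPv4 and collapses the per-label error codes into a single `bucketNameInvalid`):

```python
import re
from typing import Optional

def bucket_name_error(name: str) -> Optional[str]:
    """Return the first violated bucket-name rule, or None if valid."""
    # Bucket names must not be formatted as an IPv4 address.
    if re.fullmatch(r'(\d{1,3}\.){3}\d{1,3}', name):
        return 'ipAddress'
    # Bucket names must be between 3 and 63 characters long.
    if not 3 <= len(name) <= 63:
        return 'shouldBeInRange'
    # Only lowercase letters, digits, periods and hyphens are allowed.
    if not re.fullmatch(r'[0-9a-z.-]+', name):
        return 'bucketNameInvalid'
    # Each dot-separated label must be non-empty and must start and end
    # with a lowercase letter or a digit.
    for label in name.split('.'):
        if not re.fullmatch(r'[0-9a-z]([0-9a-z-]*[0-9a-z])?', label):
            return 'bucketNameInvalid'
    return None
```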
summary: Get Monitor Details
tags:
- Monitor
- /api/nfs-ganesha/daemon:
+ /api/nfs-ganesha/cluster:
get:
parameters: []
responses:
'200':
content:
- application/vnd.ceph.api.v1.0+json:
- schema:
- items:
- properties:
- cluster_id:
- description: Cluster identifier
- type: string
- cluster_type:
- description: Cluster type
- type: string
- daemon_id:
- description: Daemon identifier
- type: string
- desc:
- description: Status description
- type: string
- status:
- description: Status of daemon
- type: integer
- type: object
- required:
- - daemon_id
- - cluster_id
- - cluster_type
- type: array
+ application/vnd.ceph.api.v0.1+json:
+ type: object
description: OK
'400':
description: Operation exception. Please check the response body for details.
trace.
security:
- jwt: []
- summary: List NFS-Ganesha daemons information
tags:
- NFS-Ganesha
/api/nfs-ganesha/export:
cluster_id:
description: Cluster identifier
type: string
- daemons:
- description: List of NFS Ganesha daemons identifiers
- items:
- type: string
- type: array
export_id:
description: Export ID
type: integer
fsal:
description: FSAL configuration
properties:
- filesystem:
- description: CephFS filesystem ID
+ fs_name:
+ description: CephFS filesystem name
type: string
name:
description: name of FSAL
type: string
- rgw_user_id:
- description: RGW user id
- type: string
sec_label_xattr:
description: Name of xattr for security label
type: string
user_id:
- description: CephX user id
+ description: User id
type: string
required:
- name
squash:
description: Export squash policy
type: string
- tag:
- description: NFSv3 export tag
- type: string
transports:
description: List of transport types
items:
- export_id
- path
- cluster_id
- - daemons
- pseudo
- - tag
- access_type
- squash
- security_label
cluster_id:
description: Cluster identifier
type: string
- daemons:
- description: List of NFS Ganesha daemons identifiers
- items:
- type: string
- type: array
fsal:
description: FSAL configuration
properties:
- filesystem:
- description: CephFS filesystem ID
+ fs_name:
+ description: CephFS filesystem name
type: string
name:
description: name of FSAL
type: string
- rgw_user_id:
- description: RGW user id
- type: string
sec_label_xattr:
description: Name of xattr for security label
type: string
- user_id:
- description: CephX user id
- type: string
required:
- name
type: object
pseudo:
description: Pseudo FS path
type: string
- reload_daemons:
- default: true
- description: Trigger reload of NFS-Ganesha daemons configuration
- type: boolean
security_label:
description: Security label
type: string
squash:
description: Export squash policy
type: string
- tag:
- description: NFSv3 export tag
- type: string
transports:
description: List of transport types
items:
required:
- path
- cluster_id
- - daemons
- pseudo
- - tag
- access_type
- squash
- security_label
responses:
'201':
content:
- application/vnd.ceph.api.v1.0+json:
+ application/vnd.ceph.api.v2.0+json:
schema:
properties:
access_type:
cluster_id:
description: Cluster identifier
type: string
- daemons:
- description: List of NFS Ganesha daemons identifiers
- items:
- type: string
- type: array
export_id:
description: Export ID
type: integer
fsal:
description: FSAL configuration
properties:
- filesystem:
- description: CephFS filesystem ID
+ fs_name:
+ description: CephFS filesystem name
type: string
name:
description: name of FSAL
type: string
- rgw_user_id:
- description: RGW user id
- type: string
sec_label_xattr:
description: Name of xattr for security label
type: string
user_id:
- description: CephX user id
+ description: User id
type: string
required:
- name
squash:
description: Export squash policy
type: string
- tag:
- description: NFSv3 export tag
- type: string
transports:
description: List of transport types
items:
- export_id
- path
- cluster_id
- - daemons
- pseudo
- - tag
- access_type
- squash
- security_label
description: Resource created.
'202':
content:
- application/vnd.ceph.api.v1.0+json:
+ application/vnd.ceph.api.v2.0+json:
type: object
description: Operation is still executing. Please check the task queue.
'400':
required: true
schema:
type: integer
- - default: true
- description: Trigger reload of NFS-Ganesha daemons configuration
- in: query
- name: reload_daemons
- schema:
- type: boolean
responses:
'202':
content:
- application/vnd.ceph.api.v1.0+json:
+ application/vnd.ceph.api.v2.0+json:
type: object
description: Operation is still executing. Please check the task queue.
'204':
content:
- application/vnd.ceph.api.v1.0+json:
+ application/vnd.ceph.api.v2.0+json:
type: object
description: Resource deleted.
'400':
name: export_id
required: true
schema:
- type: integer
+ type: string
responses:
'200':
content:
cluster_id:
description: Cluster identifier
type: string
- daemons:
- description: List of NFS Ganesha daemons identifiers
- items:
- type: string
- type: array
export_id:
description: Export ID
type: integer
fsal:
description: FSAL configuration
properties:
- filesystem:
- description: CephFS filesystem ID
+ fs_name:
+ description: CephFS filesystem name
type: string
name:
description: name of FSAL
type: string
- rgw_user_id:
- description: RGW user id
- type: string
sec_label_xattr:
description: Name of xattr for security label
type: string
user_id:
- description: CephX user id
+ description: User id
type: string
required:
- name
squash:
description: Export squash policy
type: string
- tag:
- description: NFSv3 export tag
- type: string
transports:
description: List of transport types
items:
- export_id
- path
- cluster_id
- - daemons
- pseudo
- - tag
- access_type
- squash
- security_label
- squash
type: object
type: array
- daemons:
- description: List of NFS Ganesha daemons identifiers
- items:
- type: string
- type: array
fsal:
description: FSAL configuration
properties:
- filesystem:
- description: CephFS filesystem ID
+ fs_name:
+ description: CephFS filesystem name
type: string
name:
description: name of FSAL
type: string
- rgw_user_id:
- description: RGW user id
- type: string
sec_label_xattr:
description: Name of xattr for security label
type: string
- user_id:
- description: CephX user id
- type: string
required:
- name
type: object
pseudo:
description: Pseudo FS path
type: string
- reload_daemons:
- default: true
- description: Trigger reload of NFS-Ganesha daemons configuration
- type: boolean
security_label:
description: Security label
type: string
squash:
description: Export squash policy
type: string
- tag:
- description: NFSv3 export tag
- type: string
transports:
description: List of transport types
items:
type: array
required:
- path
- - daemons
- pseudo
- - tag
- access_type
- squash
- security_label
responses:
'200':
content:
- application/vnd.ceph.api.v1.0+json:
+ application/vnd.ceph.api.v2.0+json:
schema:
properties:
access_type:
cluster_id:
description: Cluster identifier
type: string
- daemons:
- description: List of NFS Ganesha daemons identifiers
- items:
- type: string
- type: array
export_id:
description: Export ID
type: integer
fsal:
description: FSAL configuration
properties:
- filesystem:
- description: CephFS filesystem ID
+ fs_name:
+ description: CephFS filesystem name
type: string
name:
description: name of FSAL
type: string
- rgw_user_id:
- description: RGW user id
- type: string
sec_label_xattr:
description: Name of xattr for security label
type: string
user_id:
- description: CephX user id
+ description: User id
type: string
required:
- name
squash:
description: Export squash policy
type: string
- tag:
- description: NFSv3 export tag
- type: string
transports:
description: List of transport types
items:
- export_id
- path
- cluster_id
- - daemons
- pseudo
- - tag
- access_type
- squash
- security_label
description: Resource updated.
'202':
content:
- application/vnd.ceph.api.v1.0+json:
+ application/vnd.ceph.api.v2.0+json:
type: object
description: Operation is still executing. Please check the task queue.
'400':
name: daemon_name
schema:
type: string
+ - allowEmptyValue: true
+ in: query
+ name: uid
+ schema:
+ type: string
responses:
'200':
content:
- application/vnd.ceph.api.v1.0+json:
+ application/vnd.ceph.api.v1.1+json:
type: object
description: OK
'400':
name: MonPerfCounter
- description: Get Monitor Details
name: Monitor
-- description: NFS-Ganesha Management API
+- description: NFS-Ganesha Cluster Management API
name: NFS-Ganesha
- description: OSD management API
name: OSD
from ..controllers.cephfs import CephFS
from ..controllers.iscsi import Iscsi, IscsiTarget
-from ..controllers.nfsganesha import NFSGanesha, NFSGaneshaExports, NFSGaneshaService
+from ..controllers.nfsganesha import NFSGanesha, NFSGaneshaExports
from ..controllers.rbd import Rbd, RbdSnapshot, RbdTrash
from ..controllers.rbd_mirroring import RbdMirroringPoolMode, \
RbdMirroringPoolPeer, RbdMirroringSummary
Features.ISCSI: [Iscsi, IscsiTarget],
Features.CEPHFS: [CephFS],
Features.RGW: [Rgw, RgwDaemon, RgwBucket, RgwUser],
- Features.NFS: [NFSGanesha, NFSGaneshaService, NFSGaneshaExports],
+ Features.NFS: [NFSGanesha, NFSGaneshaExports],
}
+++ /dev/null
-# -*- coding: utf-8 -*-
-
-from .ceph_service import CephService
-
-
-class CephX(object):
- @classmethod
- def _entities_map(cls, entity_type=None):
- auth_dump = CephService.send_command("mon", "auth list")
- result = {}
- for auth_entry in auth_dump['auth_dump']:
- entity = auth_entry['entity']
- if not entity_type or entity.startswith('{}.'.format(entity_type)):
- entity_id = entity[entity.find('.')+1:]
- result[entity_id] = auth_entry
- return result
-
- @classmethod
- def _clients_map(cls):
- return cls._entities_map("client")
-
- @classmethod
- def list_clients(cls):
- return list(cls._clients_map())
-
- @classmethod
- def get_client_key(cls, client_id):
- return cls._clients_map()[client_id]['key']
def _get_realms_info(self): # type: () -> dict
return json_str_to_object(self.proxy('GET', 'realm?list', None, None))
+ def _get_realm_info(self, realm_id: str) -> Dict[str, Any]:
+ return json_str_to_object(self.proxy('GET', f'realm?id={realm_id}', None, None))
+
@staticmethod
def _rgw_settings():
return (Settings.RGW_API_ACCESS_KEY,
return []
+ def get_default_realm(self) -> str:
+ realms_info = self._get_realms_info()
+ if 'default_info' in realms_info and realms_info['default_info']:
+ realm_info = self._get_realm_info(realms_info['default_info'])
+ if 'name' in realm_info and realm_info['name']:
+ return realm_info['name']
+ raise DashboardException(msg='Default realm not found.',
+ http_status_code=404,
+ component='rgw')
+
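
The new `get_default_realm()` chains two admin-ops lookups: `realm?list` returns a `default_info` realm id, and `realm?id=<id>` returns that realm's metadata, from which the name is taken. A sketch of that resolution against mocked payloads (payload shapes assumed to match the RGW admin API as used above):

```python
from typing import Any, Callable, Dict

def resolve_default_realm(realms_info: Dict[str, Any],
                          get_realm_info: Callable[[str], Dict[str, Any]]) -> str:
    """Resolve the default RGW realm name.

    realms_info: parsed response of 'GET realm?list'
    get_realm_info: fetches the parsed 'GET realm?id=<id>' response
    """
    default_id = realms_info.get('default_info')
    if default_id:
        realm_info = get_realm_info(default_id)
        if realm_info.get('name'):
            return realm_info['name']
    raise LookupError('Default realm not found.')

# usage with mocked payloads
realms = {'realms': ['realm1', 'realm2'], 'default_info': 'abc-123'}
print(resolve_default_realm(realms, lambda rid: {'id': rid, 'name': 'realm1'}))
# realm1
```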
@RestClient.api_get('/{bucket_name}?versioning')
def get_bucket_versioning(self, bucket_name, request=None):
"""
from unittest.mock import patch
from urllib.parse import urlencode
-from ..controllers.nfsganesha import NFSGaneshaUi
+from ..controllers.nfsganesha import NFSGaneshaExports, NFSGaneshaUi
from . import ControllerTestCase # pylint: disable=no-name-in-module
+class NFSGaneshaExportsTest(ControllerTestCase):
+
+ def test_get_schema_export(self):
+ export = {
+ "export_id": 2,
+ "path": "bk1",
+ "cluster_id": "myc",
+ "pseudo": "/bk-ps",
+ "access_type": "RO",
+ "squash": "root_id_squash",
+ "security_label": False,
+ "protocols": [
+ 4
+ ],
+ "transports": [
+ "TCP",
+ "UDP"
+ ],
+ "fsal": {
+ "name": "RGW",
+ "user_id": "dashboard",
+ "access_key_id": "UUU5YVVOQ2P5QTOPYNAN",
+ "secret_access_key": "7z87tMUUsHr67ZWx12pCbWkp9UyOldxhDuPY8tVN"
+ },
+ "clients": []
+ }
+ expected_schema_export = {**export, 'fsal': dict(export['fsal'])}
+ del expected_schema_export['fsal']['access_key_id']
+ del expected_schema_export['fsal']['secret_access_key']
+ self.assertDictEqual(
+ expected_schema_export,
+ NFSGaneshaExports._get_schema_export(export)) # pylint: disable=protected-access
+
+
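
The behaviour the test above exercises — stripping RGW credentials from the export's FSAL block before it leaves the REST layer — can be sketched as (hypothetical helper mirroring what `_get_schema_export` is expected to do):

```python
from typing import Any, Dict

def strip_fsal_secrets(export: Dict[str, Any]) -> Dict[str, Any]:
    """Drop RGW credentials from an export's FSAL block (copies, no mutation)."""
    fsal = dict(export.get('fsal', {}))
    fsal.pop('access_key_id', None)
    fsal.pop('secret_access_key', None)
    return {**export, 'fsal': fsal}
```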
class NFSGaneshaUiControllerTest(ControllerTestCase):
@classmethod
def setup_server(cls):
{
'ceph_version': 'ceph version master (dev)',
'id': 'daemon1',
+ 'realm_name': 'realm1',
'zonegroup_name': 'zg1',
'zone_name': 'zone1'
},
{
'ceph_version': 'ceph version master (dev)',
'id': 'daemon2',
+ 'realm_name': 'realm2',
'zonegroup_name': 'zg2',
'zone_name': 'zone2'
}]
'service_map_id': '4832',
'version': 'ceph version master (dev)',
'server_hostname': 'host1',
+ 'realm_name': 'realm1',
'zonegroup_name': 'zg1',
'zone_name': 'zone1', 'default': True
},
'service_map_id': '5356',
'version': 'ceph version master (dev)',
'server_hostname': 'host1',
+ 'realm_name': 'realm2',
'zonegroup_name': 'zg2',
'zone_name': 'zone2',
'default': False
"wait",
]
+NFS_GANESHA_SUPPORTED_FSALS = ['CEPH', 'RGW']
NFS_POOL_NAME = '.nfs'
except Exception as e:
return exception_handler(e, "Failed to list NFS Cluster")
- # FIXME: Remove this method. It was added for dashboard integration with mgr/nfs module.
- def list_daemons(self):
- completion = self.mgr.list_daemons(daemon_type='nfs')
- # Here completion.result is a list DaemonDescription objects
- daemons = orchestrator.raise_if_exception(completion)
- return [
- {
- 'cluster_id': instance.service_id(),
- 'daemon_id': instance.daemon_id,
- 'cluster_type': 'orchestrator',
- 'status': instance.status,
- 'status_desc': instance.status_desc
- } for instance in daemons
- ]
-
def _show_nfs_cluster_info(self, cluster_id: str) -> Dict[str, Any]:
completion = self.mgr.list_daemons(daemon_type='nfs')
# Here completion.result is a list DaemonDescription objects
def show_nfs_cluster_info(self, cluster_id: Optional[str] = None) -> Tuple[int, str, str]:
try:
- cluster_ls = []
info_res = {}
if cluster_id:
cluster_ls = [cluster_id]
from rados import TimedOut, ObjectNotFound
-from mgr_module import NFS_POOL_NAME as POOL_NAME
+from mgr_module import NFS_POOL_NAME as POOL_NAME, NFS_GANESHA_SUPPORTED_FSALS
-from .export_utils import GaneshaConfParser, Export, RawBlock, CephFSFSAL, RGWFSAL, \
- NFS_GANESHA_SUPPORTED_FSALS
+from .export_utils import GaneshaConfParser, Export, RawBlock, CephFSFSAL, RGWFSAL
from .exception import NFSException, NFSInvalidOperation, FSNotFound, \
ClusterNotFound
from .utils import available_clusters, check_fs, restart_nfs_service
raise NFSException(f"Failed to delete exports: {err} and {ret}")
log.info("All exports successfully deleted for cluster id: %s", cluster_id)
- def list_all_exports(self):
+ def list_all_exports(self) -> List[Dict[str, Any]]:
r = []
for cluster_id, ls in self.exports.items():
r.extend([e.to_dict() for e in ls])
if export:
return export.to_dict()
log.warning(f"No {pseudo_path} export to show for {cluster_id}")
+ return None
@export_cluster_checker
def get_export(
self,
cluster_id: str,
export_id: int
- ) -> Dict[Any, Any]:
+ ) -> Optional[Dict[str, Any]]:
export = self._fetch_export_id(cluster_id, export_id)
return export.to_dict() if export else None
raise NFSInvalidOperation(f"export FSAL user_id must be '{user_id}'")
else:
raise NFSInvalidOperation(f"NFS Ganesha supported FSALs are {NFS_GANESHA_SUPPORTED_FSALS}."
- "Export must specify any one of it.")
+ "Export must specify any one of it.")
ex_dict["fsal"] = fsal
ex_dict["cluster_id"] = cluster_id
from typing import cast, List, Dict, Any, Optional, TYPE_CHECKING
from os.path import isabs
+from mgr_module import NFS_GANESHA_SUPPORTED_FSALS
+
from .exception import NFSInvalidOperation, FSNotFound
from .utils import check_fs
if TYPE_CHECKING:
from nfs.module import Module
-NFS_GANESHA_SUPPORTED_FSALS = ['CEPH', 'RGW']
class RawBlock():
def __init__(self, block_name: str, blocks: List['RawBlock'] = [], values: Dict[str, Any] = {}):
"""Reset NFS-Ganesha Config to default"""
return self.nfs.reset_nfs_cluster_config(cluster_id=cluster_id)
- def fetch_nfs_export_obj(self):
+ def fetch_nfs_export_obj(self) -> ExportMgr:
return self.export_mgr
def export_ls(self) -> List[Dict[Any, Any]]:
return self.export_mgr.list_all_exports()
- def export_get(self, cluster_id: str, export_id: int) -> Dict[Any, Any]:
+ def export_get(self, cluster_id: str, export_id: int) -> Optional[Dict[str, Any]]:
return self.export_mgr.get_export_by_id(cluster_id, export_id)
def export_rm(self, cluster_id: str, pseudo: str) -> None:
self.export_mgr.delete_export(cluster_id=cluster_id, pseudo_path=pseudo)
- def daemon_ls(self) -> List[Dict[Any, Any]]:
- return self.nfs.list_daemons()
-
- # Remove this method after fixing attribute error
def cluster_ls(self) -> List[str]:
- return [
- {
- 'pool': NFS_POOL_NAME,
- 'namespace': cluster_id,
- 'type': 'orchestrator',
- 'daemon_conf': None,
- } for cluster_id in available_clusters()
- ]
+ return available_clusters(self)
# flake8: noqa
import json
import pytest
-from typing import Optional, Tuple, Iterator, List, Any, Dict
+from typing import Optional, Tuple, Iterator, List, Any
from contextlib import contextmanager
from unittest import mock
assert export.protocols == [4, 3]
assert set(export.transports) == {"TCP", "UDP"}
assert export.fsal.name == "RGW"
- # assert export.fsal.rgw_user_id == "testuser" # probably correct value
- # assert export.fsal.access_key == "access_key" # probably correct value
- # assert export.fsal.secret_key == "secret_key" # probably correct value
+ assert export.fsal.user_id == "nfs.foo.bucket"
+ assert export.fsal.access_key_id == "the_access_key"
+ assert export.fsal.secret_access_key == "the_secret_key"
assert len(export.clients) == 0
assert export.cluster_id == 'foo'
'clients': [],
'fsal': {
'name': 'RGW',
- 'rgw_user_id': 'rgw.foo.bucket'
+ 'user_id': 'rgw.foo.bucket',
+ 'access_key_id': 'the_access_key',
+ 'secret_access_key': 'the_secret_key'
}
})
assert set(export.protocols) == {4, 3}
assert set(export.transports) == {"TCP", "UDP"}
assert export.fsal.name == "RGW"
-# assert export.fsal.rgw_user_id == "testuser"
-# assert export.fsal.access_key is None
-# assert export.fsal.secret_key is None
+ assert export.fsal.user_id == "rgw.foo.bucket"
+ assert export.fsal.access_key_id == "the_access_key"
+ assert export.fsal.secret_access_key == "the_secret_key"
assert len(export.clients) == 0
assert export.cluster_id == self.cluster_id
echo "$test_user ganesha daemon $name started on port: $port"
done
-
- if $with_mgr_dashboard; then
- ceph_adm dashboard set-ganesha-clusters-rados-pool-namespace "$cluster_id:$pool_name/$cluster_id"
- fi
}
if [ "$debug" -eq 0 ]; then