* fs: Names of new FSs, volumes, subvolumes and subvolume groups can only
contain alphanumeric and ``-``, ``_`` and ``.`` characters. Some commands
or CephX credentials may not work with old FSs with non-conformant names.
+
+* ``blacklist`` has been replaced with ``blocklist`` throughout. The following commands have changed:
+
+ - ``ceph osd blacklist ...`` are now ``ceph osd blocklist ...``
+ - ``ceph <tell|daemon> osd.<NNN> dump_blacklist`` is now ``ceph <tell|daemon> osd.<NNN> dump_blocklist``
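+
+  For example, to add a one-hour entry under the new command name (the
+  client address below is a hypothetical example)::
+
+    ceph osd blocklist add 127.0.0.1:0/3710147553 3600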
+
+* The following config options have changed:
+
+ - ``mon osd blacklist default expire`` is now ``mon osd blocklist default expire``
+ - ``mon mds blacklist interval`` is now ``mon mds blocklist interval``
+ - ``mon mgr blacklist interval`` is now ``mon mgr blocklist interval``
+ - ``rbd blacklist on break lock`` is now ``rbd blocklist on break lock``
+ - ``rbd blacklist expire seconds`` is now ``rbd blocklist expire seconds``
+ - ``mds session blacklist on timeout`` is now ``mds session blocklist on timeout``
+ - ``mds session blacklist on evict`` is now ``mds session blocklist on evict``
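+
+  For example, to set the default expiry under its new name to one hour
+  (the value is given in seconds)::
+
+    ceph config set mon mon_osd_blocklist_default_expire 3600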
+
+* The following librados API calls have changed:
+
+ - ``rados_blacklist_add`` is now ``rados_blocklist_add``; the former will issue a deprecation warning and be removed in a future release.
+ - ``Rados::blacklist_add`` is now ``Rados::blocklist_add`` in the C++ API.
+
+* The JSON output for the following commands now shows ``blocklist`` instead of ``blacklist``:
+
+ - ``ceph osd dump``
+ - ``ceph <tell|daemon> osd.<NNN> dump_blocklist``
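+
+  For example, the presence of an entry can be checked with::
+
+    ceph osd dump --format=json-pretty | grep blocklist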
ceph tell mds.0 client evict id=4305
-Advanced: Un-blacklisting a client
+Advanced: Un-blocklisting a client
==================================
-Ordinarily, a blacklisted client may not reconnect to the servers: it
+Ordinarily, a blocklisted client may not reconnect to the servers: it
must be unmounted and then mounted anew.
However, in some situations it may be useful to permit a client that
was evicted to attempt to reconnect.
-Because CephFS uses the RADOS OSD blacklist to control client eviction,
+Because CephFS uses the RADOS OSD blocklist to control client eviction,
CephFS clients can be permitted to reconnect by removing them from
-the blacklist:
+the blocklist:
::
- $ ceph osd blacklist ls
+ $ ceph osd blocklist ls
listed 1 entries
127.0.0.1:0/3710147553 2018-03-19 11:32:24.716146
- $ ceph osd blacklist rm 127.0.0.1:0/3710147553
- un-blacklisting 127.0.0.1:0/3710147553
+ $ ceph osd blocklist rm 127.0.0.1:0/3710147553
+ un-blocklisting 127.0.0.1:0/3710147553
Doing this may put data integrity at risk if other clients have accessed
-files that the blacklisted client was doing buffered IO to. It is also not
+files that the blocklisted client was doing buffered IO to. It is also not
guaranteed to result in a fully functional client -- the best way to get
a fully healthy client back after an eviction is to unmount the client
and do a fresh mount.
find it useful to set ``client_reconnect_stale`` to true in the
FUSE client, to prompt the client to try to reconnect.
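For example, assuming the option is managed through the central config
store, it can be enabled for all clients with::
    ceph config set client client_reconnect_stale true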
-Advanced: Configuring blacklisting
+Advanced: Configuring blocklisting
==================================
If you are experiencing frequent client evictions, due to slow
It is possible to respond to slow clients by simply dropping their
MDS sessions while permitting them to re-open sessions and
continue talking to OSDs. To enable this mode, set
-``mds_session_blacklist_on_timeout`` to false on your MDS nodes.
+``mds_session_blocklist_on_timeout`` to false on your MDS nodes.
For the equivalent behaviour on manual evictions, set
-``mds_session_blacklist_on_evict`` to false.
+``mds_session_blocklist_on_evict`` to false.
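For example, assuming centrally managed configuration, both behaviours
can be disabled with::
    ceph config set mds mds_session_blocklist_on_timeout false
    ceph config set mds mds_session_blocklist_on_evict false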
-Note that if blacklisting is disabled, then evicting a client will
+Note that if blocklisting is disabled, then evicting a client will
only have an effect on the MDS you send the command to. On a system
with multiple active MDS daemons, you would need to send an
-eviction command to each active daemon. When blacklisting is enabled
+eviction command to each active daemon. When blocklisting is enabled
(the default), sending an eviction command to just a single
-MDS is sufficient, because the blacklist propagates it to the others.
+MDS is sufficient, because the blocklist propagates it to the others.
-.. _background_blacklisting_and_osd_epoch_barrier:
+.. _background_blocklisting_and_osd_epoch_barrier:
-Background: Blacklisting and OSD epoch barrier
+Background: Blocklisting and OSD epoch barrier
==============================================
-After a client is blacklisted, it is necessary to make sure that
+After a client is blocklisted, it is necessary to make sure that
other clients and MDS daemons have the latest OSDMap (including
-the blacklist entry) before they try to access any data objects
-that the blacklisted client might have been accessing.
+the blocklist entry) before they try to access any data objects
+that the blocklisted client might have been accessing.
This is ensured using an internal "osdmap epoch barrier" mechanism.
capabilities which might allow touching the same RADOS objects, the
clients we hand out the capabilities to must have a sufficiently recent
OSD map to not race with cancelled operations (from ENOSPC) or
-blacklisted clients (from evictions).
+blocklisted clients (from evictions).
More specifically, the cases where an epoch barrier is set are:
- * Client eviction (where the client is blacklisted and other clients
- must wait for a post-blacklist epoch to touch the same objects).
+ * Client eviction (where the client is blocklisted and other clients
+ must wait for a post-blocklist epoch to touch the same objects).
* OSD map full flag handling in the client (where the client may
cancel some OSD ops from a pre-full epoch, so other clients must
wait until the full epoch or later before touching the same objects).
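The barrier can be inspected on a ceph-fuse client through its admin
socket, whose ``status`` dump includes ``osd_epoch_barrier`` alongside
the ``blocklisted`` flag (the socket path below is a hypothetical
example)::
    ceph daemon /var/run/ceph/ceph-client.admin.12345.asok status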
when releasing capabilities on files affected by cancelled operations, in
order to ensure that these cancelled operations do not interfere with
subsequent access to the data objects by the MDS or other clients. For
-more on the epoch barrier mechanism, see :ref:`background_blacklisting_and_osd_epoch_barrier`.
+more on the epoch barrier mechanism, see :ref:`background_blocklisting_and_osd_epoch_barrier`.
Legacy (pre-Hammer) behavior
----------------------------
there is competing access or memory pressure on the MDS, they may be
**revoked**. When a capability is revoked, the client is responsible for
returning it as soon as it is able. Clients that fail to do so in a
-timely fashion may end up **blacklisted** and unable to communicate with
+timely fashion may end up **blocklisted** and unable to communicate with
the cluster.
Since the cache is distributed, the MDS must take great care to ensure
:Default: ``15``
-``mds blacklist interval``
+``mds blocklist interval``
-:Description: The blacklist duration for failed MDSs in the OSD map. Note,
+:Description: The blocklist duration for failed MDSs in the OSD map. Note,
this controls how long failed MDS daemons will stay in the
- OSDMap blacklist. It has no effect on how long something is
- blacklisted when the administrator blacklists it manually. For
- example, ``ceph osd blacklist add`` will still use the default
- blacklist time.
+ OSDMap blocklist. It has no effect on how long something is
+ blocklisted when the administrator blocklists it manually. For
+ example, ``ceph osd blocklist add`` will still use the default
+ blocklist time.
:Type: Float
:Default: ``24.0*60.0``
| **ceph** **mon** [ *add* \| *dump* \| *getmap* \| *remove* \| *stat* ] ...
-| **ceph** **osd** [ *blacklist* \| *blocked-by* \| *create* \| *new* \| *deep-scrub* \| *df* \| *down* \| *dump* \| *erasure-code-profile* \| *find* \| *getcrushmap* \| *getmap* \| *getmaxosd* \| *in* \| *ls* \| *lspools* \| *map* \| *metadata* \| *ok-to-stop* \| *out* \| *pause* \| *perf* \| *pg-temp* \| *force-create-pg* \| *primary-affinity* \| *primary-temp* \| *repair* \| *reweight* \| *reweight-by-pg* \| *rm* \| *destroy* \| *purge* \| *safe-to-destroy* \| *scrub* \| *set* \| *setcrushmap* \| *setmaxosd* \| *stat* \| *tree* \| *unpause* \| *unset* ] ...
+| **ceph** **osd** [ *blocklist* \| *blocked-by* \| *create* \| *new* \| *deep-scrub* \| *df* \| *down* \| *dump* \| *erasure-code-profile* \| *find* \| *getcrushmap* \| *getmap* \| *getmaxosd* \| *in* \| *ls* \| *lspools* \| *map* \| *metadata* \| *ok-to-stop* \| *out* \| *pause* \| *perf* \| *pg-temp* \| *force-create-pg* \| *primary-affinity* \| *primary-temp* \| *repair* \| *reweight* \| *reweight-by-pg* \| *rm* \| *destroy* \| *purge* \| *safe-to-destroy* \| *scrub* \| *set* \| *setcrushmap* \| *setmaxosd* \| *stat* \| *tree* \| *unpause* \| *unset* ] ...
| **ceph** **osd** **crush** [ *add* \| *add-bucket* \| *create-or-move* \| *dump* \| *get-tunable* \| *link* \| *move* \| *remove* \| *rename-bucket* \| *reweight* \| *reweight-all* \| *reweight-subtree* \| *rm* \| *rule* \| *set* \| *set-tunable* \| *show-tunables* \| *tunables* \| *unlink* ] ...
Manage OSD configuration and administration. It uses some additional
subcommands.
-Subcommand ``blacklist`` manage blacklisted clients. It uses some additional
+Subcommand ``blocklist`` manages blocklisted clients. It uses some additional
subcommands.
-Subcommand ``add`` add <addr> to blacklist (optionally until <expire> seconds
+Subcommand ``add`` adds <addr> to the blocklist (optionally until <expire> seconds
from now)
Usage::
- ceph osd blacklist add <EntityAddr> {<float[0.0-]>}
+ ceph osd blocklist add <EntityAddr> {<float[0.0-]>}
-Subcommand ``ls`` show blacklisted clients
+Subcommand ``ls`` shows blocklisted clients
Usage::
- ceph osd blacklist ls
+ ceph osd blocklist ls
-Subcommand ``rm`` remove <addr> from blacklist
+Subcommand ``rm`` removes <addr> from the blocklist
Usage::
- ceph osd blacklist rm <EntityAddr>
+ ceph osd blocklist rm <EntityAddr>
Subcommand ``blocked-by`` prints a histogram of which OSDs are blocking their peers
path to file containing the secret key to use with CephX
:command:`recover_session=<no|clean>`
- Set auto reconnect mode in the case where the client is blacklisted. The
+ Set auto reconnect mode in the case where the client is blocklisted. The
available modes are ``no`` and ``clean``. The default is ``no``.
- ``no``: never attempt to reconnect when the client detects that it has been
- blacklisted. Blacklisted clients will not attempt to reconnect and
+ blocklisted. Blocklisted clients will not attempt to reconnect and
their operations will fail too.
- ``clean``: client reconnects to the Ceph cluster automatically when it
- detects that it has been blacklisted. During reconnect, client drops
+ detects that it has been blocklisted. During reconnect, client drops
dirty data/metadata, invalidates page caches and writable file handles.
After reconnect, file locks become stale because the MDS loses track of
them. If an inode contains any stale file locks, read/write on the inode
that have no pre-Luminous clients may wish to instead enable the
``balancer`` module for ``ceph-mgr``.
-Add/remove an IP address to/from the blacklist. When adding an address,
-you can specify how long it should be blacklisted in seconds; otherwise,
-it will default to 1 hour. A blacklisted address is prevented from
-connecting to any OSD. Blacklisting is most often used to prevent a
+Add/remove an IP address to/from the blocklist. When adding an address,
+you can specify how long it should be blocklisted in seconds; otherwise,
+it will default to 1 hour. A blocklisted address is prevented from
+connecting to any OSD. Blocklisting is most often used to prevent a
lagging metadata server from making bad changes to data on the OSDs.
These commands are mostly only useful for failure testing, as
-blacklists are normally maintained automatically and shouldn't need
+blocklists are normally maintained automatically and shouldn't need
manual intervention. ::
- ceph osd blacklist add ADDRESS[:source_port] [TIME]
- ceph osd blacklist rm ADDRESS[:source_port]
+ ceph osd blocklist add ADDRESS[:source_port] [TIME]
+ ceph osd blocklist rm ADDRESS[:source_port]
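For example, to blocklist a (hypothetical) client address for ten
minutes and then remove the entry again::
    ceph osd blocklist add 192.168.0.1:0/1000 600
    ceph osd blocklist rm 192.168.0.1:0/1000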
Creates/deletes a snapshot of a pool. ::
:Description: Gives a user permissions to manipulate RBD images. When used
as a Monitor cap, it provides the minimal privileges required
by an RBD client application; this includes the ability
- to blacklist other client users. When used as an OSD cap, it
+ to blocklist other client users. When used as an OSD cap, it
provides read-write access to the specified pool to an
RBD client application. The Manager cap supports optional
``pool`` and ``namespace`` keyword arguments.
Thus, in the event that a lock cannot be acquired in the standard
graceful manner, the overtaking process not only breaks the lock, but
-also blacklists the previous lock holder. This is negotiated between
-the new client process and the Ceph Mon: upon receiving the blacklist
+also blocklists the previous lock holder. This is negotiated between
+the new client process and the Ceph Mon: upon receiving the blocklist
request,
* the Mon instructs the relevant OSDs to no longer serve requests from
* once the new client has acquired the lock, it can commence writing
to the image.
-Blacklisting is thus a form of storage-level resource `fencing`_.
+Blocklisting is thus a form of storage-level resource `fencing`_.
-In order for blacklisting to work, the client must have the ``osd
-blacklist`` capability. This capability is included in the ``profile
+In order for blocklisting to work, the client must have the ``osd
+blocklist`` capability. This capability is included in the ``profile
rbd`` capability profile, which should generally be set on all Ceph
:ref:`client identities <user-management>` using RBD.
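For example, a new RBD client identity (the client name and pool below
are hypothetical) would typically be created with::
    ceph auth get-or-create client.rbduser mon 'profile rbd' osd 'profile rbd pool=rbdpool'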
object-storage-feature-enabled:
container_sync: false
discoverability: true
- blacklist:
+ blocklist:
- .*test_account_quotas_negative.AccountQuotasNegativeTest.test_user_modify_quota
- .*test_container_acl_negative.ObjectACLsNegativeTest.*
- .*test_container_services_negative.ContainerNegativeTest.test_create_container_metadata_.*
wait for all specified osds, but some of them could be
moved out of osdmap, so we cannot get their updated
stat seq from monitor anymore. in that case, you need
- to pass a blacklist.
+ to pass a blocklist.
:param wait_for_mon: wait for mon to be synced with mgr. 0 to disable
it. (5 min by default)
"""
self.fs = None # is now invalid!
self.recovery_fs = None
- # In case anything is in the OSD blacklist list, clear it out. This is to avoid
- # the OSD map changing in the background (due to blacklist expiry) while tests run.
+ # In case anything is in the OSD blocklist, clear it out. This is to avoid
+ # the OSD map changing in the background (due to blocklist expiry) while tests run.
try:
- self.mds_cluster.mon_manager.raw_cluster_cmd("osd", "blacklist", "clear")
+ self.mds_cluster.mon_manager.raw_cluster_cmd("osd", "blocklist", "clear")
except CommandFailedError:
# Fallback for older Ceph cluster
- blacklist = json.loads(self.mds_cluster.mon_manager.raw_cluster_cmd("osd",
- "dump", "--format=json-pretty"))['blacklist']
- log.info("Removing {0} blacklist entries".format(len(blacklist)))
- for addr, blacklisted_at in blacklist.items():
- self.mds_cluster.mon_manager.raw_cluster_cmd("osd", "blacklist", "rm", addr)
+ blocklist = json.loads(self.mds_cluster.mon_manager.raw_cluster_cmd("osd",
+ "dump", "--format=json-pretty"))['blocklist']
+ log.info("Removing {0} blocklist entries".format(len(blocklist)))
+ for addr, blocklisted_at in blocklist.items():
+ self.mds_cluster.mon_manager.raw_cluster_cmd("osd", "blocklist", "rm", addr)
client_mount_ids = [m.client_id for m in self.mounts]
# In case the test changes the IDs of clients, stash them so that we can
finally:
self.umount_wait()
- def is_blacklisted(self):
+ def is_blocklisted(self):
addr = self.get_global_addr()
- blacklist = json.loads(self.fs.mon_manager.raw_cluster_cmd("osd", "blacklist", "ls", "--format=json"))
- for b in blacklist:
+ blocklist = json.loads(self.fs.mon_manager.raw_cluster_cmd("osd", "blocklist", "ls", "--format=json"))
+ for b in blocklist:
if addr == b["addr"]:
return True
return False
self.mount_a.kill_cleanup()
- def test_reconnect_after_blacklisted(self):
+ def test_reconnect_after_blocklisted(self):
"""
- Test reconnect after blacklisted.
- - writing to a fd that was opened before blacklist should return -EBADF
+ Test reconnect after being blocklisted.
+ - writing to a fd that was opened before being blocklisted should return -EBADF
- reading/writing to a file with lost file locks should return -EIO
- readonly fd should continue to work
"""
self.mount_a.wait_until_mounted()
- path = os.path.join(self.mount_a.mountpoint, 'testfile_reconnect_after_blacklisted')
+ path = os.path.join(self.mount_a.mountpoint, 'testfile_reconnect_after_blocklisted')
pyscript = dedent("""
import os
import sys
os.read(fd4, 1);
fcntl.flock(fd4, fcntl.LOCK_SH | fcntl.LOCK_NB)
- print("blacklist")
+ print("blocklist")
sys.stdout.flush()
sys.stdin.readline()
time.sleep(10);
# trigger 'open session' message. kclient relies on 'session reject' message
- # to detect if itself is blacklisted
+ # to detect whether it has been blocklisted
try:
os.stat("{path}.1")
except:
cap_waited, session_timeout
))
- self.assertTrue(self.mount_a.is_blacklisted())
+ self.assertTrue(self.mount_a.is_blocklisted())
cap_holder.stdin.close()
try:
cap_holder.wait()
with self.assertRaises(CommandFailedError):
self.mount_b.mount_wait(mount_path="/foo/bar")
- def test_session_evict_blacklisted(self):
+ def test_session_evict_blocklisted(self):
"""
- Check that mds evicts blacklisted client
+ Check that mds evicts blocklisted client
"""
if not isinstance(self.mount_a, FuseMount):
- self.skipTest("Requires FUSE client to use is_blacklisted()")
+ self.skipTest("Requires FUSE client to use is_blocklisted()")
self.fs.set_max_mds(2)
status = self.fs.wait_for_daemons()
mount_a_client_id = self.mount_a.get_global_id()
self.fs.mds_asok(['session', 'evict', "%s" % mount_a_client_id],
mds_id=self.fs.get_rank(rank=0, status=status)['name'])
- self.wait_until_true(lambda: self.mount_a.is_blacklisted(), timeout=30)
+ self.wait_until_true(lambda: self.mount_a.is_blocklisted(), timeout=30)
# 10 seconds should be enough for evicting client
time.sleep(10)
# Evicted guest client, guest_mounts[0], should not be able to do
# any more metadata ops. It should start failing all operations
- # when it sees that its own address is in the blacklist.
+ # when it sees that its own address is in the blocklist.
try:
guest_mounts[0].write_n_mb("rogue.bin", 1)
except CommandFailedError:
else:
raise RuntimeError("post-eviction write should have failed!")
- # The blacklisted guest client should now be unmountable
+ # The blocklisted guest client should now be unmountable
guest_mounts[0].umount_wait()
# Guest client, guest_mounts[1], using the same auth ID 'guest', but
mount = mounts.get(client)
if mount is not None:
if evicted:
- log.info("confirming client {} is blacklisted".format(client))
- assert mount.is_blacklisted()
+ log.info("confirming client {} is blocklisted".format(client))
+ assert mount.is_blocklisted()
elif client in no_session:
log.info("client {} should not be evicted but has no session with an MDS".format(client))
- mount.is_blacklisted() # for debugging
+ mount.is_blocklisted() # for debugging
should_assert = True
if should_assert:
raise RuntimeError("some clients which should not be evicted have no session with an MDS?")
self._ceph_cmd(['dashboard', 'set-jwt-token-ttl', '28800'])
self.set_jwt_token(None)
- def test_remove_from_blacklist(self):
+ def test_remove_from_blocklist(self):
self._ceph_cmd(['dashboard', 'set-jwt-token-ttl', '5'])
self._post("/api/auth", {'username': 'admin', 'password': 'admin'})
self.assertStatus(201)
self.set_jwt_token(self.jsonBody()['token'])
- # the following call adds the token to the blacklist
+ # the following call adds the token to the blocklist
self._post("/api/auth/logout")
self.assertStatus(200)
self._get("/api/host")
self._post("/api/auth", {'username': 'admin', 'password': 'admin'})
self.assertStatus(201)
self.set_jwt_token(self.jsonBody()['token'])
- # the following call removes expired tokens from the blacklist
+ # the following call removes expired tokens from the blocklist
self._post("/api/auth/logout")
self.assertStatus(200)
log.info('Configuring Tempest')
for (client, cconf) in config.items():
- blacklist = cconf.get('blacklist', [])
- assert isinstance(blacklist, list)
+ blocklist = cconf.get('blocklist', [])
+ assert isinstance(blocklist, list)
run_in_tempest_venv(ctx, client,
[
'tempest',
'--workspace',
'rgw',
'--regex', '^tempest.api.object_storage',
- '--black-regex', '|'.join(blacklist)
+ '--black-regex', '|'.join(blocklist)
])
try:
yield
object-storage-feature-enabled:
container_sync: false
discoverability: false
- blacklist:
+ blocklist:
# please strip half of these items after merging PRs #15369
# and #12704
- .*test_list_containers_reverse_order.*
function test_mon_osd()
{
#
- # osd blacklist
+ # osd blocklist
#
bl=192.168.0.1:0/1000
- ceph osd blacklist add $bl
- ceph osd blacklist ls | grep $bl
- ceph osd blacklist ls --format=json-pretty | sed 's/\\\//\//' | grep $bl
+ ceph osd blocklist add $bl
+ ceph osd blocklist ls | grep $bl
+ ceph osd blocklist ls --format=json-pretty | sed 's/\\\//\//' | grep $bl
ceph osd dump --format=json-pretty | grep $bl
ceph osd dump | grep $bl
- ceph osd blacklist rm $bl
- ceph osd blacklist ls | expect_false grep $bl
+ ceph osd blocklist rm $bl
+ ceph osd blocklist ls | expect_false grep $bl
bl=192.168.0.1
# test without nonce, invalid nonce
- ceph osd blacklist add $bl
- ceph osd blacklist ls | grep $bl
- ceph osd blacklist rm $bl
- ceph osd blacklist ls | expect_false grep $bl
- expect_false "ceph osd blacklist $bl/-1"
- expect_false "ceph osd blacklist $bl/foo"
+ ceph osd blocklist add $bl
+ ceph osd blocklist ls | grep $bl
+ ceph osd blocklist rm $bl
+ ceph osd blocklist ls | expect_false grep $bl
+ expect_false "ceph osd blocklist $bl/-1"
+ expect_false "ceph osd blocklist $bl/foo"
# test with wrong address
- expect_false "ceph osd blacklist 1234.56.78.90/100"
+ expect_false "ceph osd blocklist 1234.56.78.90/100"
# Test `clear`
- ceph osd blacklist add $bl
- ceph osd blacklist ls | grep $bl
- ceph osd blacklist clear
- ceph osd blacklist ls | expect_false grep $bl
+ ceph osd blocklist add $bl
+ ceph osd blocklist ls | grep $bl
+ ceph osd blocklist clear
+ ceph osd blocklist ls | expect_false grep $bl
+
+ # the deprecated blacklist command should still be accepted
+ ceph osd blacklist ls
#
# osd crush
grep '"lockers":\[\]'
}
-function blacklist_add() {
+function blocklist_add() {
local dev_id="${1#/dev/rbd}"
local client_addr
client_addr="$(< $SYSFS_DIR/$dev_id/client_addr)"
- ceph osd blacklist add $client_addr
+ ceph osd blocklist add $client_addr
}
SYSFS_DIR="/sys/bus/rbd/devices"
DEV=$(sudo rbd map $IMAGE_NAME)
assert_locked $DEV
dd if=/dev/urandom of=$DEV bs=4k count=10 oflag=direct
-{ sleep 10; blacklist_add $DEV; } &
+{ sleep 10; blocklist_add $DEV; } &
PID=$!
expect_false dd if=/dev/urandom of=$DEV bs=4k count=200000 oflag=direct
wait $PID
if [ -z "${RBD_MIRROR_USE_RBD_MIRROR}" ]; then
# teuthology will trash the daemon
- testlog "TEST: no blacklists"
- CEPH_ARGS='--id admin' ceph --cluster ${CLUSTER1} osd blacklist ls 2>&1 | grep -q "listed 0 entries"
- CEPH_ARGS='--id admin' ceph --cluster ${CLUSTER2} osd blacklist ls 2>&1 | grep -q "listed 0 entries"
+ testlog "TEST: no blocklists"
+ CEPH_ARGS='--id admin' ceph --cluster ${CLUSTER1} osd blocklist ls 2>&1 | grep -q "listed 0 entries"
+ CEPH_ARGS='--id admin' ceph --cluster ${CLUSTER2} osd blocklist ls 2>&1 | grep -q "listed 0 entries"
fi
if [ -z "${RBD_MIRROR_USE_RBD_MIRROR}" ]; then
# teuthology will trash the daemon
- testlog "TEST: no blacklists"
- CEPH_ARGS='--id admin' ceph --cluster ${CLUSTER1} osd blacklist ls 2>&1 | grep -q "listed 0 entries"
- CEPH_ARGS='--id admin' ceph --cluster ${CLUSTER2} osd blacklist ls 2>&1 | grep -q "listed 0 entries"
+ testlog "TEST: no blocklists"
+ CEPH_ARGS='--id admin' ceph --cluster ${CLUSTER1} osd blocklist ls 2>&1 | grep -q "listed 0 entries"
+ CEPH_ARGS='--id admin' ceph --cluster ${CLUSTER2} osd blocklist ls 2>&1 | grep -q "listed 0 entries"
fi
echo "clientaddr: $clientaddr"
echo "clientid: $clientid"
-ceph osd blacklist add $clientaddr || exit 1
+ceph osd blocklist add $clientaddr || exit 1
wait $iochild
rbdrw_exitcode=$?
fi
set -e
-ceph osd blacklist rm $clientaddr
+ceph osd blocklist rm $clientaddr
rbd lock remove $IMAGE $LOCKID "$clientid"
# rbdrw will have exited with an existing watch, so, until #3527 is fixed,
# hang out until the watch expires
f->dump_int("mds_epoch", mdsmap->get_epoch());
f->dump_int("osd_epoch", osd_epoch);
f->dump_int("osd_epoch_barrier", cap_epoch_barrier);
- f->dump_bool("blacklisted", blacklisted);
+ f->dump_bool("blocklisted", blocklisted);
}
}
objecter_finisher.start();
filer.reset(new Filer(objecter, &objecter_finisher));
- objecter->enable_blacklist_events();
+ objecter->enable_blocklist_events();
objectcacher->start();
}
if (request->aborted())
break;
- if (blacklisted) {
- request->abort(-EBLACKLISTED);
+ if (blocklisted) {
+ request->abort(-EBLOCKLISTED);
break;
}
void Client::handle_osd_map(const MConstRef<MOSDMap>& m)
{
- std::set<entity_addr_t> new_blacklists;
- objecter->consume_blacklist_events(&new_blacklists);
+ std::set<entity_addr_t> new_blocklists;
+ objecter->consume_blocklist_events(&new_blocklists);
const auto myaddrs = messenger->get_myaddrs();
- bool new_blacklist = false;
+ bool new_blocklist = false;
bool prenautilus = objecter->with_osdmap(
[&](const OSDMap& o) {
return o.require_osd_release < ceph_release_t::nautilus;
});
- if (!blacklisted) {
+ if (!blocklisted) {
for (auto a : myaddrs.v) {
- // blacklist entries are always TYPE_ANY for nautilus+
+ // blocklist entries are always TYPE_ANY for nautilus+
a.set_type(entity_addr_t::TYPE_ANY);
- if (new_blacklists.count(a)) {
- new_blacklist = true;
+ if (new_blocklists.count(a)) {
+ new_blocklist = true;
break;
}
if (prenautilus) {
// ...except pre-nautilus, they were TYPE_LEGACY
a.set_type(entity_addr_t::TYPE_LEGACY);
- if (new_blacklists.count(a)) {
- new_blacklist = true;
+ if (new_blocklists.count(a)) {
+ new_blocklist = true;
break;
}
}
}
}
- if (new_blacklist) {
+ if (new_blocklist) {
auto epoch = objecter->with_osdmap([](const OSDMap &o){
return o.get_epoch();
});
- lderr(cct) << "I was blacklisted at osd epoch " << epoch << dendl;
- blacklisted = true;
+ lderr(cct) << "I was blocklisted at osd epoch " << epoch << dendl;
+ blocklisted = true;
- _abort_mds_sessions(-EBLACKLISTED);
+ _abort_mds_sessions(-EBLOCKLISTED);
// Since we know all our OSD ops will fail, cancel them all preemptively,
// so that on an unhealthy cluster we can umount promptly even if e.g.
// some PGs were inaccessible.
- objecter->op_cancel_writes(-EBLACKLISTED);
+ objecter->op_cancel_writes(-EBLOCKLISTED);
}
- if (blacklisted) {
- // Handle case where we were blacklisted but no longer are
- blacklisted = objecter->with_osdmap([myaddrs](const OSDMap &o){
- return o.is_blacklisted(myaddrs);});
+ if (blocklisted) {
+ // Handle case where we were blocklisted but no longer are
+ blocklisted = objecter->with_osdmap([myaddrs](const OSDMap &o){
+ return o.is_blocklisted(myaddrs);});
}
- // Always subscribe to next osdmap for blacklisted client
- // until this client is not blacklisted.
- if (blacklisted) {
+ // Always subscribe to next osdmap for blocklisted client
+ // until this client is not blocklisted.
+ if (blocklisted) {
objecter->maybe_request_map();
}
}
caps &= CEPH_CAP_FILE_CACHE | CEPH_CAP_FILE_BUFFER;
if (caps && !in->caps_issued_mask(caps, true)) {
- if (err == -EBLACKLISTED) {
+ if (err == -EBLOCKLISTED) {
if (in->oset.dirty_or_tx) {
lderr(cct) << __func__ << " still has dirty data on " << *in << dendl;
in->set_async_err(err);
std::unique_lock lock{client_lock};
- if (abort || blacklisted) {
- ldout(cct, 2) << "unmounting (" << (abort ? "abort)" : "blacklisted)") << dendl;
+ if (abort || blocklisted) {
+ ldout(cct, 2) << "unmounting (" << (abort ? "abort)" : "blocklisted)") << dendl;
} else {
ldout(cct, 2) << "unmounting" << dendl;
}
// prevent inode from getting freed
anchor.emplace_back(in);
- if (abort || blacklisted) {
+ if (abort || blocklisted) {
objectcacher->purge_set(&in->oset);
} else if (!in->caps.empty()) {
_release(in);
}
}
- if (abort || blacklisted) {
+ if (abort || blocklisted) {
for (auto p = dirty_list.begin(); !p.end(); ) {
Inode *in = *p;
++p;
trim_cache(true);
- if (blacklisted && (is_mounted() || is_unmounting()) &&
+ if (blocklisted && (is_mounted() || is_unmounting()) &&
last_auto_reconnect + 30 * 60 < now &&
cct->_conf.get_val<bool>("client_reconnect_stale")) {
messenger->client_reset();
fd_gen++; // invalidate open files
- blacklisted = false;
+ blocklisted = false;
_kick_stale_sessions();
last_auto_reconnect = now;
}
std::scoped_lock lock(client_lock);
/*
- * The whole point is to prevent blacklisting so we must time out the
+ * The whole point is to prevent blocklisting so we must time out the
* delegation before the session autoclose timeout kicks in.
*/
if (timeout >= mdsmap->get_session_autoclose())
case MetaSession::STATE_OPEN:
{
- objecter->maybe_request_map(); /* to check if we are blacklisted */
+ objecter->maybe_request_map(); /* to check if we are blocklisted */
if (cct->_conf.get_val<bool>("client_reconnect_stale")) {
ldout(cct, 1) << "reset from mds we were open; close mds session for reconnect" << dendl;
_closed_mds_session(s);
if (flags & CEPH_RECLAIM_RESET)
return 0;
- // use blacklist to check if target session was killed
- // (config option mds_session_blacklist_on_evict needs to be true)
+ // use blocklist to check if target session was killed
+ // (config option mds_session_blocklist_on_evict needs to be true)
ldout(cct, 10) << __func__ << ": waiting for OSD epoch " << reclaim_osd_epoch << dendl;
bs::error_code ec;
l.unlock();
if (ec)
return ceph::from_error_code(ec);
- bool blacklisted = objecter->with_osdmap(
+ bool blocklisted = objecter->with_osdmap(
[this](const OSDMap &osd_map) -> bool {
- return osd_map.is_blacklisted(reclaim_target_addrs);
+ return osd_map.is_blocklisted(reclaim_target_addrs);
});
- if (blacklisted)
+ if (blocklisted)
return -ENOTRECOVERABLE;
metadata["reclaiming_uuid"] = uuid;
ceph::unordered_set<dir_result_t*> opened_dirs;
uint64_t fd_gen = 1;
- bool blacklisted = false;
+ bool blocklisted = false;
ceph::unordered_map<vinodeno_t, Inode*> inode_map;
ceph::unordered_map<ino_t, vinodeno_t> faked_ino_map;
// UNSAFE -- TESTING ONLY! Allows addition of a cache tier with preexisting snaps
OPTION(mon_debug_unsafe_allow_tier_with_nonempty_snaps, OPT_BOOL)
-OPTION(mon_osd_blacklist_default_expire, OPT_DOUBLE) // default one hour
+OPTION(mon_osd_blocklist_default_expire, OPT_DOUBLE) // default one hour
OPTION(mon_osd_crush_smoke_test, OPT_BOOL)
OPTION(paxos_stash_full_interval, OPT_INT) // how often (in commits) to stash a full copy of the PaxosService state
OPTION(mds_beacon_grace, OPT_FLOAT)
OPTION(mds_enforce_unique_name, OPT_BOOL)
-OPTION(mds_session_blacklist_on_timeout, OPT_BOOL) // whether to blacklist clients whose sessions are dropped due to timeout
-OPTION(mds_session_blacklist_on_evict, OPT_BOOL) // whether to blacklist clients whose sessions are dropped via admin commands
+OPTION(mds_session_blocklist_on_timeout, OPT_BOOL) // whether to blocklist clients whose sessions are dropped due to timeout
+OPTION(mds_session_blocklist_on_evict, OPT_BOOL) // whether to blocklist clients whose sessions are dropped via admin commands
OPTION(mds_sessionmap_keys_per_op, OPT_U32) // how many sessions should I try to load/store in a single OMAP operation?
OPTION(mds_freeze_tree_timeout, OPT_FLOAT) // detecting freeze tree deadlock
.add_service("mon")
.set_description(""),
- Option("mon_osd_blacklist_default_expire", Option::TYPE_FLOAT, Option::LEVEL_ADVANCED)
+ Option("mon_osd_blocklist_default_expire", Option::TYPE_FLOAT, Option::LEVEL_ADVANCED)
.set_default(1_hr)
.add_service("mon")
- .set_description("Duration in seconds that blacklist entries for clients "
+ .set_description("Duration in seconds that blocklist entries for clients "
"remain in the OSD map"),
- Option("mon_mds_blacklist_interval", Option::TYPE_FLOAT, Option::LEVEL_DEV)
+ Option("mon_mds_blocklist_interval", Option::TYPE_FLOAT, Option::LEVEL_DEV)
.set_default(1_day)
.set_min(1_hr)
.add_service("mon")
- .set_description("Duration in seconds that blacklist entries for MDS "
+ .set_description("Duration in seconds that blocklist entries for MDS "
"daemons remain in the OSD map")
.set_flag(Option::FLAG_RUNTIME),
- Option("mon_mgr_blacklist_interval", Option::TYPE_FLOAT, Option::LEVEL_DEV)
+ Option("mon_mgr_blocklist_interval", Option::TYPE_FLOAT, Option::LEVEL_DEV)
.set_default(1_day)
.set_min(1_hr)
.add_service("mon")
- .set_description("Duration in seconds that blacklist entries for mgr "
+ .set_description("Duration in seconds that blocklist entries for mgr "
"daemons remain in the OSD map")
.set_flag(Option::FLAG_RUNTIME),
.set_default(false)
.set_description("copy-up parent image blocks to clone upon read request"),
- Option("rbd_blacklist_on_break_lock", Option::TYPE_BOOL, Option::LEVEL_ADVANCED)
+ Option("rbd_blocklist_on_break_lock", Option::TYPE_BOOL, Option::LEVEL_ADVANCED)
.set_default(true)
- .set_description("whether to blacklist clients whose lock was broken"),
+ .set_description("whether to blocklist clients whose lock was broken"),
- Option("rbd_blacklist_expire_seconds", Option::TYPE_UINT, Option::LEVEL_ADVANCED)
+ Option("rbd_blocklist_expire_seconds", Option::TYPE_UINT, Option::LEVEL_ADVANCED)
.set_default(0)
- .set_description("number of seconds to blacklist - set to 0 for OSD default"),
+ .set_description("number of seconds to blocklist - set to 0 for OSD default"),
Option("rbd_request_timed_out_seconds", Option::TYPE_UINT, Option::LEVEL_ADVANCED)
.set_default(30)
.set_default(true)
.set_description("require MDS name is unique in the cluster"),
- Option("mds_session_blacklist_on_timeout", Option::TYPE_BOOL, Option::LEVEL_ADVANCED)
+ Option("mds_session_blocklist_on_timeout", Option::TYPE_BOOL, Option::LEVEL_ADVANCED)
.set_default(true)
- .set_description("blacklist clients whose sessions have become stale"),
+ .set_description("blocklist clients whose sessions have become stale"),
- Option("mds_session_blacklist_on_evict", Option::TYPE_BOOL, Option::LEVEL_ADVANCED)
+ Option("mds_session_blocklist_on_evict", Option::TYPE_BOOL, Option::LEVEL_ADVANCED)
.set_default(true)
- .set_description("blacklist clients that have been evicted"),
+ .set_description("blocklist clients that have been evicted"),
Option("mds_sessionmap_keys_per_op", Option::TYPE_UINT, Option::LEVEL_ADVANCED)
.set_default(1024)
void check_recovery_sources(const OSDMapRef& newmap) final {
// Not needed yet
}
- void check_blacklisted_watchers() final {
+ void check_blocklisted_watchers() final {
// Not needed yet
}
void clear_primary_state() final {
* Get the amount of time that the client has to return caps
* @param cmount the ceph mount handle to use.
*
- * In the event that a client does not return its caps, the MDS may blacklist
+ * In the event that a client does not return its caps, the MDS may blocklist
* it after this timeout. Applications should check this value and ensure
* that they set the delegation timeout to a value lower than this.
*
* @param cmount the ceph mount handle to use.
* @param timeout the delegation timeout (in seconds)
*
- * Since the client could end up blacklisted if it doesn't return delegations
+ * Since the client could end up blocklisted if it doesn't return delegations
* in time, we mandate that any application wanting to use delegations
* explicitly set the timeout beforehand. Until this call is done on the
* mount, attempts to set a delegation will return -ETIME.
};
#define EOLDSNAPC 85 /* ORDERSNAP flag set; writer has old snapc*/
-#define EBLACKLISTED 108 /* blacklisted */
+#define EBLOCKLISTED 108 /* blocklisted */
+#define EBLACKLISTED 108 /* deprecated */
/* xattr comparison */
enum {
const char *cookie);
/**
- * Blacklists the specified client from the OSDs
+ * Blocklists the specified client from the OSDs
*
* @param cluster cluster handle
* @param client_address client address
- * @param expire_seconds number of seconds to blacklist (0 for default)
+ * @param expire_seconds number of seconds to blocklist (0 for default)
* @returns 0 on success, negative error code on failure
*/
-CEPH_RADOS_API int rados_blacklist_add(rados_t cluster,
+CEPH_RADOS_API int rados_blocklist_add(rados_t cluster,
char *client_address,
uint32_t expire_seconds);
+CEPH_RADOS_API int rados_blacklist_add(rados_t cluster,
+ char *client_address,
+ uint32_t expire_seconds)
+ __attribute__((deprecated));
/**
- * Gets addresses of the RADOS session, suitable for blacklisting.
+ * Gets addresses of the RADOS session, suitable for blocklisting.
*
* @param cluster cluster handle
* @param addrs the output string.
int ioctx_create2(int64_t pool_id, IoCtx &pioctx);
// Features useful for test cases
- void test_blacklist_self(bool set);
+ void test_blocklist_self(bool set);
/* pool info */
int pool_list(std::list<std::string>& v);
/// get/wait for the most recent osdmap
int wait_for_latest_osdmap();
- int blacklist_add(const std::string& client_address,
+ int blocklist_add(const std::string& client_address,
uint32_t expire_seconds);
/*
if (r == -ENOENT) {
ldout(m_cct, 5) << __func__ << ": journal header not found" << dendl;
- } else if (r == -EBLACKLISTED) {
+ } else if (r == -EBLOCKLISTED) {
- ldout(m_cct, 5) << __func__ << ": client blacklisted" << dendl;
+ ldout(m_cct, 5) << __func__ << ": client blocklisted" << dendl;
} else {
lderr(m_cct) << __func__ << ": failed to watch journal: "
<< cpp_strerror(r) << dendl;
void JournalMetadata::handle_watch_error(int err) {
if (err == -ENOTCONN) {
ldout(m_cct, 5) << "journal watch error: header removed" << dendl;
- } else if (err == -EBLACKLISTED) {
- lderr(m_cct) << "journal watch error: client blacklisted" << dendl;
+ } else if (err == -EBLOCKLISTED) {
+ lderr(m_cct) << "journal watch error: client blocklisted" << dendl;
} else {
lderr(m_cct) << "journal watch error: " << cpp_strerror(err) << dendl;
}
return r;
}
-void librados::RadosClient::blacklist_self(bool set) {
+void librados::RadosClient::blocklist_self(bool set) {
std::lock_guard l(lock);
- objecter->blacklist_self(set);
+ objecter->blocklist_self(set);
}
std::string librados::RadosClient::get_addrs() const {
return std::string(cos->strv());
}
-int librados::RadosClient::blacklist_add(const string& client_address,
+int librados::RadosClient::blocklist_add(const string& client_address,
uint32_t expire_seconds)
{
entity_addr_t addr;
std::stringstream cmd;
cmd << "{"
- << "\"prefix\": \"osd blacklist\", "
- << "\"blacklistop\": \"add\", "
+ << "\"prefix\": \"osd blocklist\", "
+ << "\"blocklistop\": \"add\", "
<< "\"addr\": \"" << client_address << "\"";
if (expire_seconds != 0) {
cmd << ", \"expire\": " << expire_seconds << ".0";
bufferlist inbl;
int r = mon_command(cmds, inbl, NULL, NULL);
if (r < 0) {
+ // TODO: fall back to the legacy "osd blacklist" command so this keeps
+ // working against older monitors that do not know "osd blocklist"
return r;
}
int pool_delete_async(const char *name, PoolAsyncCompletionImpl *c);
- int blacklist_add(const string& client_address, uint32_t expire_seconds);
+ int blocklist_add(const string& client_address, uint32_t expire_seconds);
int mon_command(const vector<string>& cmd, const bufferlist &inbl,
bufferlist *outbl, string *outs);
void get();
bool put();
- void blacklist_self(bool set);
+ void blocklist_self(bool set);
std::string get_addrs() const;
}
LIBRADOS_C_API_BASE_DEFAULT(rados_wait_for_latest_osdmap);
-extern "C" int _rados_blacklist_add(rados_t cluster, char *client_address,
+extern "C" int _rados_blocklist_add(rados_t cluster, char *client_address,
uint32_t expire_seconds)
{
librados::RadosClient *radosp = (librados::RadosClient *)cluster;
- return radosp->blacklist_add(client_address, expire_seconds);
+ return radosp->blocklist_add(client_address, expire_seconds);
+}
+LIBRADOS_C_API_BASE_DEFAULT(rados_blocklist_add);
+
+extern "C" int _rados_blacklist_add(rados_t cluster, char *client_address,
+ uint32_t expire_seconds)
+{
+ return _rados_blocklist_add(cluster, client_address, expire_seconds);
}
LIBRADOS_C_API_BASE_DEFAULT(rados_blacklist_add);
return 0;
}
-void librados::Rados::test_blacklist_self(bool set)
+void librados::Rados::test_blocklist_self(bool set)
{
- client->blacklist_self(set);
+ client->blocklist_self(set);
}
int librados::Rados::get_pool_stats(std::list<string>& v,
return client->wait_for_latest_osdmap();
}
-int librados::Rados::blacklist_add(const std::string& client_address,
+int librados::Rados::blocklist_add(const std::string& client_address,
uint32_t expire_seconds)
{
- return client->blacklist_add(client_address, expire_seconds);
+ return client->blocklist_add(client_address, expire_seconds);
}
librados::PoolAsyncCompletion *librados::Rados::pool_async_create_completion()
: RefCountedObject(image_ctx.cct),
ML<I>(image_ctx.md_ctx, *image_ctx.asio_engine, image_ctx.header_oid,
image_ctx.image_watcher, managed_lock::EXCLUSIVE,
- image_ctx.config.template get_val<bool>("rbd_blacklist_on_break_lock"),
- image_ctx.config.template get_val<uint64_t>("rbd_blacklist_expire_seconds")),
+ image_ctx.config.template get_val<bool>("rbd_blocklist_on_break_lock"),
+ image_ctx.config.template get_val<uint64_t>("rbd_blocklist_expire_seconds")),
m_image_ctx(image_ctx) {
std::lock_guard locker{ML<I>::m_lock};
ML<I>::set_state_uninitialized();
template <typename I>
int ExclusiveLock<I>::get_unlocked_op_error() const {
- if (m_image_ctx.image_watcher->is_blacklisted()) {
- return -EBLACKLISTED;
+ if (m_image_ctx.image_watcher->is_blocklisted()) {
+ return -EBLOCKLISTED;
}
return -EROFS;
}
template <typename I>
ManagedLock<I>::ManagedLock(librados::IoCtx &ioctx, AsioEngine& asio_engine,
const string& oid, Watcher *watcher, Mode mode,
- bool blacklist_on_break_lock,
- uint32_t blacklist_expire_seconds)
+ bool blocklist_on_break_lock,
+ uint32_t blocklist_expire_seconds)
: m_lock(ceph::make_mutex(unique_lock_name("librbd::ManagedLock<I>::m_lock", this))),
m_ioctx(ioctx), m_cct(reinterpret_cast<CephContext *>(ioctx.cct())),
m_asio_engine(asio_engine),
m_oid(oid),
m_watcher(watcher),
m_mode(mode),
- m_blacklist_on_break_lock(blacklist_on_break_lock),
- m_blacklist_expire_seconds(blacklist_expire_seconds),
+ m_blocklist_on_break_lock(blocklist_on_break_lock),
+ m_blocklist_expire_seconds(blocklist_expire_seconds),
m_state(STATE_UNLOCKED) {
}
on_finish = new C_Tracked(m_async_op_tracker, on_finish);
auto req = managed_lock::BreakRequest<I>::create(
m_ioctx, m_asio_engine, m_oid, locker, m_mode == EXCLUSIVE,
- m_blacklist_on_break_lock, m_blacklist_expire_seconds, force_break_lock,
+ m_blocklist_on_break_lock, m_blocklist_expire_seconds, force_break_lock,
on_finish);
req->send();
return;
int r = m_ioctx.operate(m_oid, &op, nullptr);
if (r < 0) {
- if (r == -EBLACKLISTED) {
- ldout(m_cct, 5) << "client is not lock owner -- client blacklisted"
+ if (r == -EBLOCKLISTED) {
+ ldout(m_cct, 5) << "client is not lock owner -- client blocklisted"
<< dendl;
} else if (r == -ENOENT) {
ldout(m_cct, 5) << "client is not lock owner -- no lock detected"
using managed_lock::AcquireRequest;
AcquireRequest<I>* req = AcquireRequest<I>::create(
m_ioctx, m_watcher, m_asio_engine, m_oid, m_cookie, m_mode == EXCLUSIVE,
- m_blacklist_on_break_lock, m_blacklist_expire_seconds,
+ m_blocklist_on_break_lock, m_blocklist_expire_seconds,
create_context_callback<
ManagedLock<I>, &ManagedLock<I>::handle_acquire_lock>(this));
m_work_queue->queue(new C_SendLockRequest<AcquireRequest<I>>(req), 0);
}
m_new_cookie = encode_lock_cookie(watch_handle);
- if (m_cookie == m_new_cookie && m_blacklist_on_break_lock) {
+ if (m_cookie == m_new_cookie && m_blocklist_on_break_lock) {
ldout(m_cct, 10) << "skipping reacquire since cookie still valid"
<< dendl;
auto ctx = create_context_callback<
std::lock_guard locker{m_lock};
ceph_assert(m_state == STATE_RELEASING);
- if (r >= 0 || r == -EBLACKLISTED || r == -ENOENT) {
+ if (r >= 0 || r == -EBLOCKLISTED || r == -ENOENT) {
m_cookie = "";
m_post_next_state = STATE_UNLOCKED;
} else {
AsioEngine& asio_engine,
const std::string& oid, Watcher *watcher,
managed_lock::Mode mode,
- bool blacklist_on_break_lock,
- uint32_t blacklist_expire_seconds) {
+ bool blocklist_on_break_lock,
+ uint32_t blocklist_expire_seconds) {
return new ManagedLock(ioctx, asio_engine, oid, watcher, mode,
- blacklist_on_break_lock, blacklist_expire_seconds);
+ blocklist_on_break_lock, blocklist_expire_seconds);
}
void destroy() {
delete this;
ManagedLock(librados::IoCtx& ioctx, AsioEngine& asio_engine,
const std::string& oid, Watcher *watcher,
- managed_lock::Mode mode, bool blacklist_on_break_lock,
- uint32_t blacklist_expire_seconds);
+ managed_lock::Mode mode, bool blocklist_on_break_lock,
+ uint32_t blocklist_expire_seconds);
virtual ~ManagedLock();
bool is_lock_owner() const;
std::string m_oid;
Watcher *m_watcher;
managed_lock::Mode m_mode;
- bool m_blacklist_on_break_lock;
- uint32_t m_blacklist_expire_seconds;
+ bool m_blocklist_on_break_lock;
+ uint32_t m_blocklist_expire_seconds;
std::string m_cookie;
std::string m_new_cookie;
std::unique_lock watch_locker{m_watch_lock};
ceph_assert(is_unregistered(m_watch_lock));
m_watch_state = WATCH_STATE_REGISTERING;
- m_watch_blacklisted = false;
+ m_watch_blocklisted = false;
librados::AioCompletion *aio_comp = create_rados_callback(
new C_RegisterWatch(this, on_finish));
m_watch_state = WATCH_STATE_REWATCHING;
watch_error = true;
} else {
- m_watch_blacklisted = (r == -EBLACKLISTED);
+ m_watch_blocklisted = (r == -EBLOCKLISTED);
}
}
aio_comp->release();
m_watch_handle = 0;
- m_watch_blacklisted = false;
+ m_watch_blocklisted = false;
return;
}
}
if (is_registered(m_watch_lock)) {
m_watch_state = WATCH_STATE_REWATCHING;
- if (err == -EBLACKLISTED) {
- m_watch_blacklisted = true;
+ if (err == -EBLOCKLISTED) {
+ m_watch_blocklisted = true;
}
auto ctx = new LambdaContext(
std::unique_lock watch_locker{m_watch_lock};
ceph_assert(m_watch_state == WATCH_STATE_REWATCHING);
- m_watch_blacklisted = false;
+ m_watch_blocklisted = false;
if (m_unregister_watch_ctx != nullptr) {
ldout(m_cct, 10) << "image is closing, skip rewatch" << dendl;
m_watch_state = WATCH_STATE_IDLE;
std::swap(unregister_watch_ctx, m_unregister_watch_ctx);
- } else if (r == -EBLACKLISTED) {
- lderr(m_cct) << "client blacklisted" << dendl;
- m_watch_blacklisted = true;
+ } else if (r == -EBLOCKLISTED) {
+ lderr(m_cct) << "client blocklisted" << dendl;
+ m_watch_blocklisted = true;
} else if (r == -ENOENT) {
ldout(m_cct, 5) << "object does not exist" << dendl;
} else if (r < 0) {
if (m_unregister_watch_ctx != nullptr) {
m_watch_state = WATCH_STATE_IDLE;
std::swap(unregister_watch_ctx, m_unregister_watch_ctx);
- } else if (r == -EBLACKLISTED || r == -ENOENT) {
+ } else if (r == -EBLOCKLISTED || r == -ENOENT) {
m_watch_state = WATCH_STATE_IDLE;
} else if (r < 0 || m_watch_error) {
watch_error = true;
std::shared_lock locker{m_watch_lock};
return is_unregistered(m_watch_lock);
}
- bool is_blacklisted() const {
+ bool is_blocklisted() const {
std::shared_lock locker{m_watch_lock};
- return m_watch_blacklisted;
+ return m_watch_blocklisted;
}
protected:
watcher::Notifier m_notifier;
WatchState m_watch_state;
- bool m_watch_blacklisted = false;
+ bool m_watch_blocklisted = false;
AsyncOpTracker m_async_op_tracker;
auto cct = dispatcher->m_image_ctx->cct;
- if (r == -EBLACKLISTED) {
+ if (r == -EBLOCKLISTED) {
- lderr(cct) << "blacklisted during flush (purging)" << dendl;
+ lderr(cct) << "blocklisted during flush (purging)" << dendl;
dispatcher->m_object_cacher->purge_set(dispatcher->m_object_set);
} else if (r < 0 && purge_on_error) {
lderr(cct) << "failed to invalidate cache (purging): "
ldout(cct, 10) << "r=" << r << dendl;
- if (r == -EBLACKLISTED) {
+ if (r == -EBLOCKLISTED) {
- // allow clean shut down if blacklisted
- lderr(cct) << "failed to block writes because client is blacklisted"
+ // allow clean shut down if blocklisted
+ lderr(cct) << "failed to block writes because client is blocklisted"
<< dendl;
} else if (r < 0) {
lderr(cct) << "failed to block writes: " << cpp_strerror(r) << dendl;
CephContext *cct = m_image_ctx.cct;
ldout(cct, 10) << "r=" << r << dendl;
- if (r < 0 && r != -EBLACKLISTED && r != -EBUSY) {
+ if (r < 0 && r != -EBLOCKLISTED && r != -EBUSY) {
lderr(cct) << "failed to invalidate cache: " << cpp_strerror(r)
<< dendl;
m_image_dispatch->unset_require_lock(io::DIRECTION_BOTH);
return 0;
}
- // might have been blacklisted by peer -- ensure we still own
+ // might have been blocklisted by peer -- ensure we still own
// the lock by pinging the OSD
int r = ictx->exclusive_lock->assert_header_locked();
if (r == -EBUSY || r == -ENOENT) {
return -EINVAL;
}
- if (ictx->config.get_val<bool>("rbd_blacklist_on_break_lock")) {
+ if (ictx->config.get_val<bool>("rbd_blocklist_on_break_lock")) {
typedef std::map<rados::cls::lock::locker_id_t,
rados::cls::lock::locker_info_t> Lockers;
Lockers lockers;
}
librados::Rados rados(ictx->md_ctx);
- r = rados.blacklist_add(
+ r = rados.blocklist_add(
client_address,
- ictx->config.get_val<uint64_t>("rbd_blacklist_expire_seconds"));
+ ictx->config.get_val<uint64_t>("rbd_blocklist_expire_seconds"));
if (r < 0) {
- lderr(ictx->cct) << "unable to blacklist client: " << cpp_strerror(r)
+ lderr(ictx->cct) << "unable to blocklist client: " << cpp_strerror(r)
<< dendl;
return r;
}
const string& oid,
const string& cookie,
bool exclusive,
- bool blacklist_on_break_lock,
- uint32_t blacklist_expire_seconds,
+ bool blocklist_on_break_lock,
+ uint32_t blocklist_expire_seconds,
Context *on_finish) {
return new AcquireRequest(ioctx, watcher, asio_engine, oid, cookie,
- exclusive, blacklist_on_break_lock,
- blacklist_expire_seconds, on_finish);
+ exclusive, blocklist_on_break_lock,
+ blocklist_expire_seconds, on_finish);
}
template <typename I>
AsioEngine& asio_engine,
const string& oid,
const string& cookie, bool exclusive,
- bool blacklist_on_break_lock,
- uint32_t blacklist_expire_seconds,
+ bool blocklist_on_break_lock,
+ uint32_t blocklist_expire_seconds,
Context *on_finish)
: m_ioctx(ioctx), m_watcher(watcher),
m_cct(reinterpret_cast<CephContext *>(m_ioctx.cct())),
m_asio_engine(asio_engine), m_oid(oid), m_cookie(cookie),
m_exclusive(exclusive),
- m_blacklist_on_break_lock(blacklist_on_break_lock),
- m_blacklist_expire_seconds(blacklist_expire_seconds),
+ m_blocklist_on_break_lock(blocklist_on_break_lock),
+ m_blocklist_expire_seconds(blocklist_expire_seconds),
m_on_finish(new C_AsyncCallback<asio::ContextWQ>(
asio_engine.get_work_queue(), on_finish)) {
}
AcquireRequest<I>, &AcquireRequest<I>::handle_break_lock>(this);
auto req = BreakRequest<I>::create(
m_ioctx, m_asio_engine, m_oid, m_locker, m_exclusive,
- m_blacklist_on_break_lock, m_blacklist_expire_seconds, false, ctx);
+ m_blocklist_on_break_lock, m_blocklist_expire_seconds, false, ctx);
req->send();
}
const std::string& oid,
const std::string& cookie,
bool exclusive,
- bool blacklist_on_break_lock,
- uint32_t blacklist_expire_seconds,
+ bool blocklist_on_break_lock,
+ uint32_t blocklist_expire_seconds,
Context *on_finish);
~AcquireRequest();
AcquireRequest(librados::IoCtx& ioctx, Watcher *watcher,
AsioEngine& asio_engine, const std::string& oid,
const std::string& cookie, bool exclusive,
- bool blacklist_on_break_lock,
- uint32_t blacklist_expire_seconds, Context *on_finish);
+ bool blocklist_on_break_lock,
+ uint32_t blocklist_expire_seconds, Context *on_finish);
librados::IoCtx& m_ioctx;
Watcher *m_watcher;
std::string m_oid;
std::string m_cookie;
bool m_exclusive;
- bool m_blacklist_on_break_lock;
- uint32_t m_blacklist_expire_seconds;
+ bool m_blocklist_on_break_lock;
+ uint32_t m_blocklist_expire_seconds;
Context *m_on_finish;
bufferlist m_out_bl;
BreakRequest<I>::BreakRequest(librados::IoCtx& ioctx,
AsioEngine& asio_engine,
const std::string& oid, const Locker &locker,
- bool exclusive, bool blacklist_locker,
- uint32_t blacklist_expire_seconds,
+ bool exclusive, bool blocklist_locker,
+ uint32_t blocklist_expire_seconds,
bool force_break_lock, Context *on_finish)
: m_ioctx(ioctx), m_cct(reinterpret_cast<CephContext *>(m_ioctx.cct())),
m_asio_engine(asio_engine), m_oid(oid), m_locker(locker),
- m_exclusive(exclusive), m_blacklist_locker(blacklist_locker),
- m_blacklist_expire_seconds(blacklist_expire_seconds),
+ m_exclusive(exclusive), m_blocklist_locker(blocklist_locker),
+ m_blocklist_expire_seconds(blocklist_expire_seconds),
m_force_break_lock(force_break_lock), m_on_finish(on_finish) {
}
return;
}
- send_blacklist();
+ send_blocklist();
}
template <typename I>
-void BreakRequest<I>::send_blacklist() {
- if (!m_blacklist_locker) {
+void BreakRequest<I>::send_blocklist() {
+ if (!m_blocklist_locker) {
send_break_lock();
return;
}
<< "locker entity=" << m_locker.entity << dendl;
if (m_locker.entity == entity_name) {
- lderr(m_cct) << "attempting to self-blacklist" << dendl;
+ lderr(m_cct) << "attempting to self-blocklist" << dendl;
finish(-EINVAL);
return;
}
std::stringstream cmd;
cmd << "{"
- << "\"prefix\": \"osd blacklist\", "
- << "\"blacklistop\": \"add\", "
+ << "\"prefix\": \"osd blocklist\", "
+ << "\"blocklistop\": \"add\", "
<< "\"addr\": \"" << locker_addr << "\"";
- if (m_blacklist_expire_seconds != 0) {
- cmd << ", \"expire\": " << m_blacklist_expire_seconds << ".0";
+ if (m_blocklist_expire_seconds != 0) {
+ cmd << ", \"expire\": " << m_blocklist_expire_seconds << ".0";
}
cmd << "}";
m_asio_engine.get_rados_api().mon_command(
{cmd.str()}, in_bl, nullptr, nullptr,
librbd::asio::util::get_callback_adapter(
- [this](int r) { handle_blacklist(r); }));
+ [this](int r) { handle_blocklist(r); }));
}
template <typename I>
-void BreakRequest<I>::handle_blacklist(int r) {
+void BreakRequest<I>::handle_blocklist(int r) {
ldout(m_cct, 10) << "r=" << r << dendl;
if (r < 0) {
- lderr(m_cct) << "failed to blacklist lock owner: " << cpp_strerror(r)
+ lderr(m_cct) << "failed to blocklist lock owner: " << cpp_strerror(r)
<< dendl;
finish(r);
return;
static BreakRequest* create(librados::IoCtx& ioctx,
AsioEngine& asio_engine,
const std::string& oid, const Locker &locker,
- bool exclusive, bool blacklist_locker,
- uint32_t blacklist_expire_seconds,
+ bool exclusive, bool blocklist_locker,
+ uint32_t blocklist_expire_seconds,
bool force_break_lock, Context *on_finish) {
return new BreakRequest(ioctx, asio_engine, oid, locker, exclusive,
- blacklist_locker, blacklist_expire_seconds,
+ blocklist_locker, blocklist_expire_seconds,
force_break_lock, on_finish);
}
* GET_LOCKER
* |
* v
- * BLACKLIST (skip if disabled)
+ * BLOCKLIST (skip if disabled)
* |
* v
* WAIT_FOR_OSD_MAP
std::string m_oid;
Locker m_locker;
bool m_exclusive;
- bool m_blacklist_locker;
- uint32_t m_blacklist_expire_seconds;
+ bool m_blocklist_locker;
+ uint32_t m_blocklist_expire_seconds;
bool m_force_break_lock;
Context *m_on_finish;
BreakRequest(librados::IoCtx& ioctx, AsioEngine& asio_engine,
const std::string& oid, const Locker &locker,
- bool exclusive, bool blacklist_locker,
- uint32_t blacklist_expire_seconds, bool force_break_lock,
+ bool exclusive, bool blocklist_locker,
+ uint32_t blocklist_expire_seconds, bool force_break_lock,
Context *on_finish);
void send_get_watchers();
void send_get_locker();
void handle_get_locker(int r);
- void send_blacklist();
- void handle_blacklist(int r);
+ void send_blocklist();
+ void handle_blocklist(int r);
void wait_for_osd_map();
void handle_wait_for_osd_map(int r);
ldout(cct, 10) << "r=" << r << dendl;
- if (r == -EBLACKLISTED) {
+ if (r == -EBLOCKLISTED) {
- lderr(cct) << "client blacklisted" << dendl;
+ lderr(cct) << "client blocklisted" << dendl;
finish(r);
return;
} else if (r < 0) {
fs->mds_map.epoch = epoch;
}
-void FSMap::erase(mds_gid_t who, epoch_t blacklist_epoch)
+void FSMap::erase(mds_gid_t who, epoch_t blocklist_epoch)
{
if (mds_roles.at(who) == FS_CLUSTER_ID_NONE) {
standby_daemons.erase(who);
fs->mds_map.up.erase(info.rank);
}
fs->mds_map.mds_info.erase(who);
- fs->mds_map.last_failure_osd_epoch = blacklist_epoch;
+ fs->mds_map.last_failure_osd_epoch = blocklist_epoch;
fs->mds_map.epoch = epoch;
}
mds_roles.erase(who);
}
-void FSMap::damaged(mds_gid_t who, epoch_t blacklist_epoch)
+void FSMap::damaged(mds_gid_t who, epoch_t blocklist_epoch)
{
ceph_assert(mds_roles.at(who) != FS_CLUSTER_ID_NONE);
auto fs = filesystems.at(mds_roles.at(who));
mds_rank_t rank = fs->mds_map.mds_info[who].rank;
- erase(who, blacklist_epoch);
+ erase(who, blocklist_epoch);
fs->mds_map.failed.erase(rank);
fs->mds_map.damaged.insert(rank);
* The rank held by 'who', if any, is to be relinquished, and
* the state for the daemon GID is to be forgotten.
*/
- void erase(mds_gid_t who, epoch_t blacklist_epoch);
+ void erase(mds_gid_t who, epoch_t blocklist_epoch);
/**
* Update to indicate that the rank held by 'who' is damaged
*/
- void damaged(mds_gid_t who, epoch_t blacklist_epoch);
+ void damaged(mds_gid_t who, epoch_t blocklist_epoch);
/**
* Update to indicate that the rank `rank` is to be removed
if (cap->revoking() & CEPH_CAP_ANY_WR) {
std::stringstream ss;
- mds->evict_client(client.v, false, g_conf()->mds_session_blacklist_on_timeout, ss, nullptr);
+ mds->evict_client(client.v, false, g_conf()->mds_session_blocklist_on_timeout, ss, nullptr);
return;
}
// assume journal is reliable, so don't choose action based on
// g_conf()->mds_action_on_write_error.
- if (r == -EBLACKLISTED) {
- derr << "we have been blacklisted (fenced), respawning..." << dendl;
+ if (r == -EBLOCKLISTED) {
+ derr << "we have been blocklisted (fenced), respawning..." << dendl;
mds->respawn();
} else {
derr << "unhandled error " << cpp_strerror(r) << ", shutting down..." << dendl;
int write_result = jp.save(mds->objecter);
// Nothing graceful we can do for this
ceph_assert(write_result >= 0);
- } else if (read_result == -EBLACKLISTED) {
- derr << "Blacklisted during JournalPointer read! Respawning..." << dendl;
+ } else if (read_result == -EBLOCKLISTED) {
+ derr << "Blocklisted during JournalPointer read! Respawning..." << dendl;
mds->respawn();
ceph_abort(); // Should be unreachable because respawn calls execv
} else if (read_result != 0) {
C_SaferCond recover_wait;
back.recover(&recover_wait);
int recovery_result = recover_wait.wait();
- if (recovery_result == -EBLACKLISTED) {
- derr << "Blacklisted during journal recovery! Respawning..." << dendl;
+ if (recovery_result == -EBLOCKLISTED) {
+ derr << "Blocklisted during journal recovery! Respawning..." << dendl;
mds->respawn();
ceph_abort(); // Should be unreachable because respawn calls execv
} else if (recovery_result != 0) {
int recovery_result = recover_wait.wait();
dout(4) << "Journal " << jp.front << " recovered." << dendl;
- if (recovery_result == -EBLACKLISTED) {
- derr << "Blacklisted during journal recovery! Respawning..." << dendl;
+ if (recovery_result == -EBLOCKLISTED) {
+ derr << "Blocklisted during journal recovery! Respawning..." << dendl;
mds->respawn();
ceph_abort(); // Should be unreachable because respawn calls execv
} else if (recovery_result != 0) {
}
- if (r == -EBLACKLISTED) {
- derr << "MDSIOContextBase: blacklisted! Restarting..." << dendl;
+ if (r == -EBLOCKLISTED) {
+ derr << "MDSIOContextBase: blocklisted! Restarting..." << dendl;
mds->respawn();
} else {
MDSContext::complete(r);
uint32_t flags = CEPH_MDSMAP_DEFAULTS; // flags
epoch_t last_failure = 0; // mds epoch of last failure
epoch_t last_failure_osd_epoch = 0; // osd epoch of last failure; any mds entering replay needs
- // at least this osdmap to ensure the blacklist propagates.
+ // at least this osdmap to ensure the blocklist propagates.
utime_t created;
utime_t modified;
void MDSRank::handle_write_error(int err)
{
- if (err == -EBLACKLISTED) {
- derr << "we have been blacklisted (fenced), respawning..." << dendl;
+ if (err == -EBLOCKLISTED) {
+ derr << "we have been blocklisted (fenced), respawning..." << dendl;
respawn();
return;
}
}
// NB not enabling suicide grace, because the mon takes care of killing us
- // (by blacklisting us) when we fail to send beacons, and it's simpler to
+ // (by blocklisting us) when we fail to send beacons, and it's simpler to
// only have one way of dying.
g_ceph_context->get_heartbeat_map()->reset_timeout(hb,
ceph::make_timespan(heartbeat_grace),
/**
* This is used whenever a RADOS operation has been cancelled
- * or a RADOS client has been blacklisted, to cause the MDS and
+ * or a RADOS client has been blocklisted, to cause the MDS and
* any clients to wait for this OSD epoch before using any new caps.
*
* See doc/cephfs/eviction
boot_start();
} else {
dout(1) << " waiting for osdmap " << mdsmap->get_last_failure_osd_epoch()
- << " (which blacklists prior instance)" << dendl;
+ << " (which blocklists prior instance)" << dendl;
Context *fin = new C_IO_Wrapper(this, new C_MDS_BootStart(this, MDS_BOOT_INITIAL));
objecter->wait_for_map(
mdsmap->get_last_failure_osd_epoch(),
} else {
auto fin = new C_IO_Wrapper(this, new C_MDS_StandbyReplayRestart(this));
dout(1) << " waiting for osdmap " << mdsmap->get_last_failure_osd_epoch()
- << " (which blacklists prior instance)" << dendl;
+ << " (which blocklists prior instance)" << dendl;
objecter->wait_for_map(mdsmap->get_last_failure_osd_epoch(),
lambdafy(fin));
}
reopen_log();
}
- // Drop any blacklisted clients from the SessionMap before going
+ // Drop any blocklisted clients from the SessionMap before going
// into reconnect, so that we don't wait for them.
- objecter->enable_blacklist_events();
- std::set<entity_addr_t> blacklist;
+ objecter->enable_blocklist_events();
+ std::set<entity_addr_t> blocklist;
epoch_t epoch = 0;
- objecter->with_osdmap([&blacklist, &epoch](const OSDMap& o) {
- o.get_blacklist(&blacklist);
+ objecter->with_osdmap([&blocklist, &epoch](const OSDMap& o) {
+ o.get_blocklist(&blocklist);
epoch = o.get_epoch();
});
- auto killed = server->apply_blacklist(blacklist);
- dout(4) << "reconnect_start: killed " << killed << " blacklisted sessions ("
- << blacklist.size() << " blacklist entries, "
+ auto killed = server->apply_blocklist(blocklist);
+ dout(4) << "reconnect_start: killed " << killed << " blocklisted sessions ("
+ << blocklist.size() << " blocklist entries, "
<< sessionmap.get_sessions().size() << ")" << dendl;
if (killed) {
set_osd_epoch_barrier(epoch);
mdlog->flush();
// Usually we do this during reconnect, but creation skips that.
- objecter->enable_blacklist_events();
+ objecter->enable_blocklist_events();
fin.activate();
}
for (const auto &s : victims) {
std::stringstream ss;
evict_client(s->get_client().v, false,
- g_conf()->mds_session_blacklist_on_evict, ss, gather.new_sub());
+ g_conf()->mds_session_blocklist_on_evict, ss, gather.new_sub());
}
gather.activate();
}
}
std::lock_guard l(mds_lock);
bool evicted = evict_client(strtol(client_id.c_str(), 0, 10), true,
- g_conf()->mds_session_blacklist_on_evict, ss);
+ g_conf()->mds_session_blocklist_on_evict, ss);
if (!evicted) {
dout(15) << ss.str() << dendl;
r = -ENOENT;
for (const auto s : victims) {
std::stringstream ss;
evict_client(s->get_client().v, false,
- g_conf()->mds_session_blacklist_on_evict, ss, gather.new_sub());
+ g_conf()->mds_session_blocklist_on_evict, ss, gather.new_sub());
}
gather.activate();
}
purge_queue.update_op_limit(*mdsmap);
- std::set<entity_addr_t> newly_blacklisted;
- objecter->consume_blacklist_events(&newly_blacklisted);
+ std::set<entity_addr_t> newly_blocklisted;
+ objecter->consume_blocklist_events(&newly_blocklisted);
auto epoch = objecter->with_osdmap([](const OSDMap &o){return o.get_epoch();});
dout(4) << "handle_osd_map epoch " << epoch << ", "
- << newly_blacklisted.size() << " new blacklist entries" << dendl;
- auto victims = server->apply_blacklist(newly_blacklisted);
+ << newly_blocklisted.size() << " new blocklist entries" << dendl;
+ auto victims = server->apply_blocklist(newly_blocklisted);
if (victims) {
set_osd_epoch_barrier(epoch);
}
}
bool MDSRank::evict_client(int64_t session_id,
- bool wait, bool blacklist, std::ostream& err_ss,
+ bool wait, bool blocklist, std::ostream& err_ss,
Context *on_killed)
{
ceph_assert(ceph_mutex_is_locked_by_me(mds_lock));
auto& addr = session->info.inst.addr;
{
CachedStackStringStream css;
- *css << "Evicting " << (blacklist ? "(and blacklisting) " : "")
+ *css << "Evicting " << (blocklist ? "(and blocklisting) " : "")
<< "client session " << session_id << " (" << addr << ")";
dout(1) << css->strv() << dendl;
clog->info() << css->strv();
}
- dout(4) << "Preparing blacklist command... (wait=" << wait << ")" << dendl;
+ dout(4) << "Preparing blocklist command... (wait=" << wait << ")" << dendl;
stringstream ss;
- ss << "{\"prefix\":\"osd blacklist\", \"blacklistop\":\"add\",";
+ ss << "{\"prefix\":\"osd blocklist\", \"blocklistop\":\"add\",";
ss << "\"addr\":\"";
ss << addr;
ss << "\"}";
}
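
For a hypothetical client at ``192.168.0.1:0/3710147553``, the payload
assembled above is::

    {"prefix":"osd blocklist", "blocklistop":"add","addr":"192.168.0.1:0/3710147553"}

Note that it targets the new ``osd blocklist`` prefix, so evicting from an
upgraded MDS requires mons that already understand the renamed command.
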
} else {
dout(1) << "session " << session_id << " was removed while we waited "
- "for blacklist" << dendl;
+ "for blocklist" << dendl;
// Even though it wasn't us that removed it, kick our completion
// as the session has been removed.
}
};
- auto apply_blacklist = [this, cmd](std::function<void ()> fn){
+ auto apply_blocklist = [this, cmd](std::function<void ()> fn){
ceph_assert(ceph_mutex_is_locked_by_me(mds_lock));
- Context *on_blacklist_done = new LambdaContext([this, fn](int r) {
+ Context *on_blocklist_done = new LambdaContext([this, fn](int r) {
objecter->wait_for_latest_osdmap(
lambdafy((new C_OnFinisher(
new LambdaContext([this, fn](int r) {
)));
});
- dout(4) << "Sending mon blacklist command: " << cmd[0] << dendl;
- monc->start_mon_command(cmd, {}, nullptr, nullptr, on_blacklist_done);
+ dout(4) << "Sending mon blocklist command: " << cmd[0] << dendl;
+ monc->start_mon_command(cmd, {}, nullptr, nullptr, on_blocklist_done);
};
if (wait) {
- if (blacklist) {
+ if (blocklist) {
C_SaferCond inline_ctx;
- apply_blacklist([&inline_ctx](){inline_ctx.complete(0);});
+ apply_blocklist([&inline_ctx](){inline_ctx.complete(0);});
mds_lock.unlock();
inline_ctx.wait();
mds_lock.lock();
session_id));
if (!session) {
dout(1) << "session " << session_id << " was removed while we waited "
- "for blacklist" << dendl;
+ "for blocklist" << dendl;
return true;
}
kill_client_session();
} else {
- if (blacklist) {
- apply_blacklist(kill_client_session);
+ if (blocklist) {
+ apply_blocklist(kill_client_session);
} else {
kill_client_session();
}
return map_targets.count(rank);
}
- bool evict_client(int64_t session_id, bool wait, bool blacklist,
+ bool evict_client(int64_t session_id, bool wait, bool blocklist,
std::ostream& ss, Context *on_killed=nullptr);
int config_client(int64_t session_id, bool remove,
const std::string& option, const std::string& value,
send_reply = nullptr;
}
- bool blacklisted = mds->objecter->with_osdmap([target](const OSDMap &map) {
- return map.is_blacklisted(target->info.inst.addr);
+ bool blocklisted = mds->objecter->with_osdmap([target](const OSDMap &map) {
+ return map.is_blocklisted(target->info.inst.addr);
});
- if (blacklisted || !g_conf()->mds_session_blacklist_on_evict) {
+ if (blocklisted || !g_conf()->mds_session_blocklist_on_evict) {
kill_session(target, send_reply);
} else {
std::stringstream ss;
log_session_status("REJECTED", err_str);
};
- bool blacklisted = mds->objecter->with_osdmap(
+ bool blocklisted = mds->objecter->with_osdmap(
[&addr](const OSDMap &osd_map) -> bool {
- return osd_map.is_blacklisted(addr);
+ return osd_map.is_blocklisted(addr);
});
- if (blacklisted) {
- dout(10) << "rejecting blacklisted client " << addr << dendl;
- send_reject_message("blacklisted");
+ if (blocklisted) {
+ dout(10) << "rejecting blocklisted client " << addr << dendl;
+ send_reject_message("blocklisted");
session->clear();
break;
}
mds->objecter->with_osdmap(
[this, &cm, &cmm](const OSDMap &osd_map) {
for (auto p = cm.begin(); p != cm.end(); ) {
- if (osd_map.is_blacklisted(p->second.addr)) {
- dout(10) << " ignoring blacklisted client." << p->first
+ if (osd_map.is_blocklisted(p->second.addr)) {
+ dout(10) << " ignoring blocklisted client." << p->first
<< " (" << p->second.addr << ")" << dendl;
cmm.erase(p->first);
cm.erase(p++);
dout(10) << "autoclosing stale session " << session->info.inst
<< " last renewed caps " << last_cap_renew_span << "s ago" << dendl;
- if (g_conf()->mds_session_blacklist_on_timeout) {
+ if (g_conf()->mds_session_blocklist_on_timeout) {
std::stringstream ss;
mds->evict_client(session->get_client().v, false, true, ss, nullptr);
} else {
std::stringstream ss;
bool evicted = mds->evict_client(client.v, false,
- g_conf()->mds_session_blacklist_on_evict,
+ g_conf()->mds_session_blocklist_on_evict,
ss, nullptr);
if (evicted && logger) {
logger->inc(l_mdss_cap_revoke_eviction);
}
}
-size_t Server::apply_blacklist(const std::set<entity_addr_t> &blacklist)
+size_t Server::apply_blocklist(const std::set<entity_addr_t> &blocklist)
{
bool prenautilus = mds->objecter->with_osdmap(
[&](const OSDMap& o) {
const auto& sessions = mds->sessionmap.get_sessions();
for (const auto& p : sessions) {
if (!p.first.is_client()) {
- // Do not apply OSDMap blacklist to MDS daemons, we find out
+ // Do not apply OSDMap blocklist to MDS daemons; we find out
// about their death via MDSMap.
continue;
}
Session *s = p.second;
auto inst_addr = s->info.inst.addr;
- // blacklist entries are always TYPE_ANY for nautilus+
+ // blocklist entries are always TYPE_ANY for nautilus+
inst_addr.set_type(entity_addr_t::TYPE_ANY);
- if (blacklist.count(inst_addr)) {
+ if (blocklist.count(inst_addr)) {
victims.push_back(s);
continue;
}
if (prenautilus) {
// ...except pre-nautilus, they were TYPE_LEGACY
inst_addr.set_type(entity_addr_t::TYPE_LEGACY);
- if (blacklist.count(inst_addr)) {
+ if (blocklist.count(inst_addr)) {
victims.push_back(s);
}
}
kill_session(s, nullptr);
}
- dout(10) << "apply_blacklist: killed " << victims.size() << dendl;
+ dout(10) << "apply_blocklist: killed " << victims.size() << dendl;
return victims.size();
}
feature_bitset_t missing_features = required_client_features;
missing_features -= session->info.client_metadata.features;
if (!missing_features.empty()) {
- bool blacklisted = mds->objecter->with_osdmap(
+ bool blocklisted = mds->objecter->with_osdmap(
[session](const OSDMap &osd_map) -> bool {
- return osd_map.is_blacklisted(session->info.inst.addr);
+ return osd_map.is_blocklisted(session->info.inst.addr);
});
- if (blacklisted)
+ if (blocklisted)
continue;
mds->clog->warn() << "evicting session " << *session << ", missing required features '"
<< missing_features << "'";
std::stringstream ss;
mds->evict_client(session->get_client().v, false,
- g_conf()->mds_session_blacklist_on_evict, ss);
+ g_conf()->mds_session_blocklist_on_evict, ss);
}
}
}
dout(7) << "reconnect timed out, " << remaining_sessions.size()
<< " clients have not reconnected in time" << dendl;
- // If we're doing blacklist evictions, use this to wait for them before
+ // If we're doing blocklist evictions, use this to wait for them before
// proceeding to reconnect_gather_finish
MDSGatherBuilder gather(g_ceph_context);
<< ", after waiting " << elapse1
<< " seconds during MDS startup";
- if (g_conf()->mds_session_blacklist_on_timeout) {
+ if (g_conf()->mds_session_blocklist_on_timeout) {
std::stringstream ss;
mds->evict_client(session->get_client().v, false, true, ss,
gather.new_sub());
void terminate_sessions();
void find_idle_sessions();
void kill_session(Session *session, Context *on_safe, bool need_purge_inos = false);
- size_t apply_blacklist(const std::set<entity_addr_t> &blacklist);
+ size_t apply_blocklist(const std::set<entity_addr_t> &blocklist);
void journal_close_session(Session *session, int state, Context *on_safe, bool need_purge_inos = false);
size_t get_num_pending_reclaim() const { return client_reclaim_gather.size(); }
METH_VARARGS, "Verify the current session caps are valid"},
{"_ceph_register_client", (PyCFunction)ceph_register_client,
- METH_VARARGS, "Register RADOS instance for potential blacklisting"},
+ METH_VARARGS, "Register RADOS instance for potential blocklisting"},
{"_ceph_unregister_client", (PyCFunction)ceph_unregister_client,
- METH_VARARGS, "Unregister RADOS instance for potential blacklisting"},
+ METH_VARARGS, "Unregister RADOS instance for potential blocklisting"},
{NULL, NULL, 0, NULL}
};
derr << " *** Got signal " << sig_str(signum) << " ***" << dendl;
// The python modules don't reliably shut down, so don't even
- // try. The mon will blacklist us (and all of our rados/cephfs
+ // try. The mon will blocklist us (and all of our rados/cephfs
// clients) anyway. Just exit!
_exit(0); // exit with 0 result code, as if we had done an orderly shutdown
cluster_state.with_mgrmap([&e](const MgrMap& m) {
e = m.last_failure_osd_epoch;
});
- /* wait for any blacklists to be applied to previous mgr instance */
+ /* wait for any blocklists to be applied to previous mgr instance */
dout(4) << "Waiting for new OSDMap (e=" << e
- << ") that may blacklist prior active." << dendl;
+ << ") that may blocklist prior active." << dendl;
objecter->wait_for_osd_map(e);
lock.lock();
auto clients = py_module_registry.get_clients();
for (const auto& client : clients) {
- dout(15) << "noting RADOS client for blacklist: " << client << dendl;
+ dout(15) << "noting RADOS client for blocklist: " << client << dendl;
}
// Whether I think I am available (request MgrMonitor to set me
const cmdmap_t& cmdmap,
std::stringstream &ss) override
{
- /* We may need to blacklist ranks. */
+ /* We may need to blocklist ranks. */
if (!mon->osdmon()->is_writeable()) {
// not allowed to write yet, so retry when we can
mon->osdmon()->wait_for_writeable(op, new PaxosService::C_RetryMessage(mon->mdsmon(), op));
mon->mdsmon()->fail_mds_gid(fsmap, gid);
}
if (!to_fail.empty()) {
- mon->osdmon()->propose_pending(); /* maybe new blacklists */
+ mon->osdmon()->propose_pending(); /* maybe new blocklists */
}
fsmap.erase_filesystem(fs->fscid);
} else if (state == MDSMap::STATE_DAMAGED) {
if (!mon->osdmon()->is_writeable()) {
dout(1) << __func__ << ": DAMAGED from rank " << info.rank
- << " waiting for osdmon writeable to blacklist it" << dendl;
+ << " waiting for osdmon writeable to blocklist it" << dendl;
mon->osdmon()->wait_for_writeable(op, new C_RetryMessage(this, op));
return false;
}
<< info.rank << " damaged" << dendl;
utime_t until = ceph_clock_now();
- until += g_conf().get_val<double>("mon_mds_blacklist_interval");
- const auto blacklist_epoch = mon->osdmon()->blacklist(info.addrs, until);
+ until += g_conf().get_val<double>("mon_mds_blocklist_interval");
+ const auto blocklist_epoch = mon->osdmon()->blocklist(info.addrs, until);
request_proposal(mon->osdmon());
- pending.damaged(gid, blacklist_epoch);
+ pending.damaged(gid, blocklist_epoch);
last_beacon.erase(gid);
// Respond to MDS, so that it knows it can continue to shut down
} else if (state == MDSMap::STATE_DNE) {
if (!mon->osdmon()->is_writeable()) {
dout(1) << __func__ << ": DNE from rank " << info.rank
- << " waiting for osdmon writeable to blacklist it" << dendl;
+ << " waiting for osdmon writeable to blocklist it" << dendl;
mon->osdmon()->wait_for_writeable(op, new C_RetryMessage(this, op));
return false;
}
ceph_assert(mon->osdmon()->is_writeable());
- epoch_t blacklist_epoch = 0;
+ epoch_t blocklist_epoch = 0;
if (info.rank >= 0 && info.state != MDSMap::STATE_STANDBY_REPLAY) {
utime_t until = ceph_clock_now();
- until += g_conf().get_val<double>("mon_mds_blacklist_interval");
- blacklist_epoch = mon->osdmon()->blacklist(info.addrs, until);
+ until += g_conf().get_val<double>("mon_mds_blocklist_interval");
+ blocklist_epoch = mon->osdmon()->blocklist(info.addrs, until);
}
- fsmap.erase(gid, blacklist_epoch);
+ fsmap.erase(gid, blocklist_epoch);
last_beacon.erase(gid);
if (pending_daemon_health.count(gid)) {
pending_daemon_health.erase(gid);
pending_daemon_health_rm.insert(gid);
}
- return blacklist_epoch != 0;
+ return blocklist_epoch != 0;
}
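
Both paths blocklist the failed daemon's addrs for the renamed
``mon_mds_blocklist_interval`` (the mgr path below uses
``mon_mgr_blocklist_interval`` analogously). A hypothetical check of the
effective value::

    $ ceph config get mon mon_mds_blocklist_interval
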
mds_gid_t MDSMonitor::gid_from_arg(const FSMap &fsmap, const std::string &arg, std::ostream &ss)
<< " (gid: " << gid << " addr: " << info.addrs
<< " state: " << ceph_mds_state_name(info.state) << ")"
<< " since " << since_last.count() << dendl;
- // If the OSDMap is writeable, we can blacklist things, so we can
+ // If the OSDMap is writeable, we can blocklist things, so we can
// try failing any laggy MDS daemons. Consider each one for failure.
if (!info.laggy()) {
dout(1) << " marking " << gid << " " << info.addrs
int print_nodes(ceph::Formatter *f);
/**
- * Return true if a blacklist was done (i.e. OSD propose needed)
+ * Return true if a blocklist was done (i.e. OSD propose needed)
*/
bool fail_mds_gid(FSMap &fsmap, mds_gid_t gid);
/// features
uint64_t active_mgr_features = 0;
- std::vector<entity_addrvec_t> clients; // for blacklist
+ std::vector<entity_addrvec_t> clients; // for blocklist
std::map<uint64_t, StandbyInfo> standbys;
<< " restarted";
if (!mon->osdmon()->is_writeable()) {
dout(1) << __func__ << ": waiting for osdmon writeable to"
- " blacklist old instance." << dendl;
+ " blocklist old instance." << dendl;
mon->osdmon()->wait_for_writeable(op, new C_RetryMessage(this, op));
return false;
}
ceph_assert(pending_map.active_gid > 0);
auto until = ceph_clock_now();
- until += g_conf().get_val<double>("mon_mgr_blacklist_interval");
- dout(5) << "blacklisting previous mgr." << pending_map.active_name << "."
+ until += g_conf().get_val<double>("mon_mgr_blocklist_interval");
+ dout(5) << "blocklisting previous mgr." << pending_map.active_name << "."
<< pending_map.active_gid << " ("
<< pending_map.active_addrs << ")" << dendl;
- auto blacklist_epoch = mon->osdmon()->blacklist(pending_map.active_addrs, until);
+ auto blocklist_epoch = mon->osdmon()->blocklist(pending_map.active_addrs, until);
- /* blacklist RADOS clients in use by the mgr */
+ /* blocklist RADOS clients in use by the mgr */
for (const auto& a : pending_map.clients) {
- mon->osdmon()->blacklist(a, until);
+ mon->osdmon()->blocklist(a, until);
}
request_proposal(mon->osdmon());
pending_map.active_addrs = entity_addrvec_t();
pending_map.services.clear();
pending_map.clients.clear();
- pending_map.last_failure_osd_epoch = blacklist_epoch;
+ pending_map.last_failure_osd_epoch = blocklist_epoch;
// So that when new active mgr subscribes to mgrdigest, it will
// get an immediate response instead of waiting for next timer
profile_grants.push_back(MonCapGrant("osd", MON_CAP_R));
// This command grant is checked explicitly in MRemoveSnaps handling
profile_grants.push_back(MonCapGrant("osd pool rmsnap"));
- profile_grants.push_back(MonCapGrant("osd blacklist"));
+ profile_grants.push_back(MonCapGrant("osd blocklist"));
+ profile_grants.push_back(MonCapGrant("osd blacklist")); // for compat
profile_grants.push_back(MonCapGrant("log", MON_CAP_W));
}
if (profile == "mgr") {
profile_grants.push_back(MonCapGrant("osd", MON_CAP_R));
profile_grants.push_back(MonCapGrant("pg", MON_CAP_R));
- // exclusive lock dead-client blacklisting (IP+nonce required)
+ // exclusive lock dead-client blocklisting (IP+nonce required)
+ profile_grants.push_back(MonCapGrant("osd blocklist"));
+ profile_grants.back().command_args["blocklistop"] = StringConstraint(
+ StringConstraint::MATCH_TYPE_EQUAL, "add");
+ profile_grants.back().command_args["addr"] = StringConstraint(
+ StringConstraint::MATCH_TYPE_REGEX, "^[^/]+/[0-9]+$");
+
+ // for compat:
profile_grants.push_back(MonCapGrant("osd blacklist"));
profile_grants.back().command_args["blacklistop"] = StringConstraint(
StringConstraint::MATCH_TYPE_EQUAL, "add");
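
The tightened rbd-profile grant means a CephX user whose mon cap is
``profile rbd`` can only *add* nonce-qualified entries; the ``addr`` regex
rejects whole-IP forms, so IP-wide blocklisting (and removal) remains an
admin operation. Hypothetical session under such a keyring::

    $ ceph osd blocklist add 192.168.0.1:0/3710147553    # matches ^[^/]+/[0-9]+$: allowed
    $ ceph osd blocklist add 192.168.0.1                 # no nonce: denied by the cap
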
"exist and have been previously destroyed. "
"Reads secrets from JSON file via `-i <file>` (see man page).",
"osd", "rw")
-COMMAND("osd blacklist "
+COMMAND("osd blocklist "
+ "name=blocklistop,type=CephChoices,strings=add|rm "
+ "name=addr,type=CephEntityAddr "
+ "name=expire,type=CephFloat,range=0.0,req=false",
+ "add (optionally until <expire> seconds from now) or remove <addr> from blocklist",
+ "osd", "rw")
+COMMAND("osd blocklist ls", "show blocklisted clients", "osd", "r")
+COMMAND("osd blocklist clear", "clear all blocklisted clients", "osd", "rw")
+
+COMMAND_WITH_FLAG("osd blacklist "
"name=blacklistop,type=CephChoices,strings=add|rm "
"name=addr,type=CephEntityAddr "
"name=expire,type=CephFloat,range=0.0,req=false",
"add (optionally until <expire> seconds from now) or remove <addr> from blacklist",
- "osd", "rw")
-COMMAND("osd blacklist ls", "show blacklisted clients", "osd", "r")
-COMMAND("osd blacklist clear", "clear all blacklisted clients", "osd", "rw")
+ "osd", "rw",
+ FLAG(DEPRECATED))
+COMMAND_WITH_FLAG("osd blacklist ls", "show blacklisted clients", "osd", "r",
+ FLAG(DEPRECATED))
+COMMAND_WITH_FLAG("osd blacklist clear", "clear all blacklisted clients", "osd", "rw",
+ FLAG(DEPRECATED))
+
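
A sketch of the resulting CLI surface (address and timestamp hypothetical;
the legacy spellings keep working but are flagged deprecated)::

    $ ceph osd blocklist add 192.168.0.1:0/3710147553 3600
    blocklisting 192.168.0.1:0/3710147553 until <now+3600s> (3600 sec)
    $ ceph osd blacklist add 192.168.0.1:0/3710147553 3600   # deprecated alias
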
COMMAND("osd pool mksnap "
"name=pool,type=CephPoolname "
"name=snap,type=CephString",
pending_inc.new_pools[i.first].flags |= pg_pool_t::FLAG_CREATING;
}
}
- // adjust blacklist items to all be TYPE_ANY
- for (auto& i : tmp.blacklist) {
+ // adjust blocklist items to all be TYPE_ANY
+ for (auto& i : tmp.blocklist) {
auto a = i.first;
a.set_type(entity_addr_t::TYPE_ANY);
- pending_inc.new_blacklist[a] = i.second;
- pending_inc.old_blacklist.push_back(i.first);
+ pending_inc.new_blocklist[a] = i.second;
+ pending_inc.old_blocklist.push_back(i.first);
}
}
return 0;
}
-epoch_t OSDMonitor::blacklist(const entity_addrvec_t& av, utime_t until)
+epoch_t OSDMonitor::blocklist(const entity_addrvec_t& av, utime_t until)
{
- dout(10) << "blacklist " << av << " until " << until << dendl;
+ dout(10) << "blocklist " << av << " until " << until << dendl;
for (auto a : av.v) {
if (osdmap.require_osd_release >= ceph_release_t::nautilus) {
a.set_type(entity_addr_t::TYPE_ANY);
} else {
a.set_type(entity_addr_t::TYPE_LEGACY);
}
- pending_inc.new_blacklist[a] = until;
+ pending_inc.new_blocklist[a] = until;
}
return pending_inc.epoch;
}
-epoch_t OSDMonitor::blacklist(entity_addr_t a, utime_t until)
+epoch_t OSDMonitor::blocklist(entity_addr_t a, utime_t until)
{
if (osdmap.require_osd_release >= ceph_release_t::nautilus) {
a.set_type(entity_addr_t::TYPE_ANY);
} else {
a.set_type(entity_addr_t::TYPE_LEGACY);
}
- dout(10) << "blacklist " << a << " until " << until << dendl;
- pending_inc.new_blacklist[a] = until;
+ dout(10) << "blocklist " << a << " until " << until << dendl;
+ pending_inc.new_blocklist[a] = until;
return pending_inc.epoch;
}
dout(10) << "tick NOOUT flag set, not checking down osds" << dendl;
}
- // expire blacklisted items?
- for (ceph::unordered_map<entity_addr_t,utime_t>::iterator p = osdmap.blacklist.begin();
- p != osdmap.blacklist.end();
+ // expire blocklisted items?
+ for (ceph::unordered_map<entity_addr_t,utime_t>::iterator p = osdmap.blocklist.begin();
+ p != osdmap.blocklist.end();
++p) {
if (p->second < now) {
- dout(10) << "expiring blacklist item " << p->first << " expired " << p->second << " < now " << now << dendl;
- pending_inc.old_blacklist.push_back(p->first);
+ dout(10) << "expiring blocklist item " << p->first << " expired " << p->second << " < now " << now << dendl;
+ pending_inc.old_blocklist.push_back(p->first);
do_propose = true;
}
}
f->flush(ds);
}
rdata.append(ds);
- } else if (prefix == "osd blacklist ls") {
+ } else if (prefix == "osd blocklist ls" ||
+ prefix == "osd blacklist ls") {
if (f)
- f->open_array_section("blacklist");
+ f->open_array_section("blocklist");
- for (ceph::unordered_map<entity_addr_t,utime_t>::iterator p = osdmap.blacklist.begin();
- p != osdmap.blacklist.end();
+ for (ceph::unordered_map<entity_addr_t,utime_t>::iterator p = osdmap.blocklist.begin();
+ p != osdmap.blocklist.end();
++p) {
if (f) {
f->open_object_section("entry");
f->close_section();
f->flush(rdata);
}
- ss << "listed " << osdmap.blacklist.size() << " entries";
+ ss << "listed " << osdmap.blocklist.size() << " entries";
} else if (prefix == "osd pool ls") {
string detail;
get_last_committed() + 1));
return true;
- } else if (prefix == "osd blacklist clear") {
- pending_inc.new_blacklist.clear();
- std::list<std::pair<entity_addr_t,utime_t > > blacklist;
- osdmap.get_blacklist(&blacklist);
- for (const auto &entry : blacklist) {
- pending_inc.old_blacklist.push_back(entry.first);
+ } else if (prefix == "osd blocklist clear" ||
+ prefix == "osd blacklist clear") {
+ pending_inc.new_blocklist.clear();
+ std::list<std::pair<entity_addr_t,utime_t > > blocklist;
+ osdmap.get_blocklist(&blocklist);
+ for (const auto &entry : blocklist) {
+ pending_inc.old_blocklist.push_back(entry.first);
}
- ss << " removed all blacklist entries";
+ ss << " removed all blocklist entries";
getline(ss, rs);
wait_for_finished_proposal(op, new Monitor::C_Command(mon, op, 0, rs,
get_last_committed() + 1));
return true;
- } else if (prefix == "osd blacklist") {
+ } else if (prefix == "osd blocklist" ||
+ prefix == "osd blacklist") {
string addrstr;
cmd_getval(cmdmap, "addr", addrstr);
entity_addr_t addr;
}
else {
if (osdmap.require_osd_release >= ceph_release_t::nautilus) {
- // always blacklist type ANY
+ // always blocklist type ANY
addr.set_type(entity_addr_t::TYPE_ANY);
} else {
addr.set_type(entity_addr_t::TYPE_LEGACY);
}
- string blacklistop;
- cmd_getval(cmdmap, "blacklistop", blacklistop);
- if (blacklistop == "add") {
+ string blocklistop;
+ if (!cmd_getval(cmdmap, "blocklistop", blocklistop)) {
+ cmd_getval(cmdmap, "blacklistop", blocklistop);
+ }
+ if (blocklistop == "add") {
utime_t expires = ceph_clock_now();
double d;
// default one hour
cmd_getval(cmdmap, "expire", d,
- g_conf()->mon_osd_blacklist_default_expire);
+ g_conf()->mon_osd_blocklist_default_expire);
expires += d;
- pending_inc.new_blacklist[addr] = expires;
+ pending_inc.new_blocklist[addr] = expires;
{
- // cancel any pending un-blacklisting request too
- auto it = std::find(pending_inc.old_blacklist.begin(),
- pending_inc.old_blacklist.end(), addr);
- if (it != pending_inc.old_blacklist.end()) {
- pending_inc.old_blacklist.erase(it);
+ // cancel any pending un-blocklisting request too
+ auto it = std::find(pending_inc.old_blocklist.begin(),
+ pending_inc.old_blocklist.end(), addr);
+ if (it != pending_inc.old_blocklist.end()) {
+ pending_inc.old_blocklist.erase(it);
}
}
- ss << "blacklisting " << addr << " until " << expires << " (" << d << " sec)";
+ ss << "blocklisting " << addr << " until " << expires << " (" << d << " sec)";
getline(ss, rs);
wait_for_finished_proposal(op, new Monitor::C_Command(mon, op, 0, rs,
get_last_committed() + 1));
return true;
- } else if (blacklistop == "rm") {
- if (osdmap.is_blacklisted(addr) ||
- pending_inc.new_blacklist.count(addr)) {
- if (osdmap.is_blacklisted(addr))
- pending_inc.old_blacklist.push_back(addr);
+ } else if (blocklistop == "rm") {
+ if (osdmap.is_blocklisted(addr) ||
+ pending_inc.new_blocklist.count(addr)) {
+ if (osdmap.is_blocklisted(addr))
+ pending_inc.old_blocklist.push_back(addr);
else
- pending_inc.new_blacklist.erase(addr);
- ss << "un-blacklisting " << addr;
+ pending_inc.new_blocklist.erase(addr);
+ ss << "un-blocklisting " << addr;
getline(ss, rs);
wait_for_finished_proposal(op, new Monitor::C_Command(mon, op, 0, rs,
get_last_committed() + 1));
return true;
}
- ss << addr << " isn't blacklisted";
+ ss << addr << " isn't blocklisted";
err = 0;
goto reply;
}
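
When no ``expire`` argument is supplied, the default comes from the renamed
``mon_osd_blocklist_default_expire`` option (one hour, per the comment above).
A hypothetical override::

    $ ceph config set mon mon_osd_blocklist_default_expire 7200
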
int get_inc(version_t ver, OSDMap::Incremental& inc);
int get_full_from_pinned_map(version_t ver, ceph::buffer::list& bl);
- epoch_t blacklist(const entity_addrvec_t& av, utime_t until);
- epoch_t blacklist(entity_addr_t a, utime_t until);
+ epoch_t blocklist(const entity_addrvec_t& av, utime_t until);
+ epoch_t blocklist(entity_addr_t a, utime_t until);
void dump_info(ceph::Formatter *f);
int dump_osd_metadata(int osd, ceph::Formatter *f, std::ostream *err);
encode(type, bl);
} else {
// map any -> legacy for old clients. this is primarily for the benefit
- // of OSDMap's blacklist, but is reasonable in general since any: is
+ // of OSDMap's blocklist, but is reasonable in general since any: is
// meaningless for pre-nautilus clients or daemons.
auto t = type;
if (t == TYPE_ANY) {
f->open_object_section("pq");
op_shardedwq.dump(f);
f->close_section();
- } else if (prefix == "dump_blacklist") {
+ } else if (prefix == "dump_blocklist") {
list<pair<entity_addr_t,utime_t> > bl;
OSDMapRef curmap = service.get_osdmap();
- f->open_array_section("blacklist");
- curmap->get_blacklist(&bl);
+ f->open_array_section("blocklist");
+ curmap->get_blocklist(&bl);
for (list<pair<entity_addr_t,utime_t> >::iterator it = bl.begin();
it != bl.end(); ++it) {
f->open_object_section("entry");
it->second.localtime(f->dump_stream("expire_time"));
f->close_section(); //entry
}
- f->close_section(); //blacklist
+ f->close_section(); //blocklist
} else if (prefix == "dump_watchers") {
list<obj_watch_item_t> watchers;
// scan pg's
asok_hook,
"dump op priority queue state");
ceph_assert(r == 0);
- r = admin_socket->register_command("dump_blacklist",
+ r = admin_socket->register_command("dump_blocklist",
asok_hook,
- "dump blacklisted clients and times");
+ "dump blocklisted clients and times");
ceph_assert(r == 0);
r = admin_socket->register_command("dump_watchers",
asok_hook,
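
With the admin socket command renamed, the dump is requested as follows
(daemon id hypothetical; entries carry the ``expire_time`` field emitted
above)::

    $ ceph daemon osd.0 dump_blocklist
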
OSDMapRef newmap = get_map(cur);
ceph_assert(newmap); // we just cached it above!
- // start blacklisting messages sent to peers that go down.
+ // start blocklisting messages sent to peers that go down.
service.pre_publish_map(newmap);
// kill connections to newly down osds
encode(new_up_thru, bl);
encode(new_last_clean_interval, bl);
encode(new_lost, bl);
- encode(new_blacklist, bl, features);
- encode(old_blacklist, bl, features);
+ encode(new_blocklist, bl, features);
+ encode(old_blocklist, bl, features);
encode(new_up_cluster, bl, features);
encode(cluster_snapshot, bl);
encode(new_uuid, bl);
encode(new_up_thru, bl);
encode(new_last_clean_interval, bl);
encode(new_lost, bl);
- encode(new_blacklist, bl, features);
- encode(old_blacklist, bl, features);
+ encode(new_blocklist, bl, features);
+ encode(old_blocklist, bl, features);
if (target_v < 7) {
encode_addrvec_map_as_addr(new_up_cluster, bl, features);
} else {
decode(new_up_thru, p);
decode(new_last_clean_interval, p);
decode(new_lost, p);
- decode(new_blacklist, p);
- decode(old_blacklist, p);
+ decode(new_blocklist, p);
+ decode(old_blocklist, p);
if (ev >= 6)
decode(new_up_cluster, p);
if (ev >= 7)
decode(new_up_thru, bl);
decode(new_last_clean_interval, bl);
decode(new_lost, bl);
- decode(new_blacklist, bl);
- decode(old_blacklist, bl);
+ decode(new_blocklist, bl);
+ decode(old_blocklist, bl);
decode(new_up_cluster, bl);
decode(cluster_snapshot, bl);
decode(new_uuid, bl);
}
f->close_section();
- f->open_array_section("new_blacklist");
- for (const auto &blist : new_blacklist) {
+ f->open_array_section("new_blocklist");
+ for (const auto &blist : new_blocklist) {
stringstream ss;
ss << blist.first;
f->dump_stream(ss.str().c_str()) << blist.second;
}
f->close_section();
- f->open_array_section("old_blacklist");
- for (const auto &blist : old_blacklist)
+ f->open_array_section("old_blocklist");
+ for (const auto &blist : old_blocklist)
f->dump_stream("addr") << blist;
f->close_section();
pool.second.last_change = e;
}
-bool OSDMap::is_blacklisted(const entity_addr_t& orig) const
+bool OSDMap::is_blocklisted(const entity_addr_t& orig) const
{
- if (blacklist.empty()) {
+ if (blocklist.empty()) {
return false;
}
- // all blacklist entries are type ANY for nautilus+
+ // all blocklist entries are type ANY for nautilus+
// FIXME: avoid this copy!
entity_addr_t a = orig;
if (require_osd_release < ceph_release_t::nautilus) {
}
// this specific instance?
- if (blacklist.count(a)) {
+ if (blocklist.count(a)) {
return true;
}
- // is entire ip blacklisted?
+ // is entire ip blocklisted?
if (a.is_ip()) {
a.set_port(0);
a.set_nonce(0);
- if (blacklist.count(a)) {
+ if (blocklist.count(a)) {
return true;
}
}
return false;
}
-bool OSDMap::is_blacklisted(const entity_addrvec_t& av) const
+bool OSDMap::is_blocklisted(const entity_addrvec_t& av) const
{
- if (blacklist.empty())
+ if (blocklist.empty())
return false;
for (auto& a : av.v) {
- if (is_blacklisted(a)) {
+ if (is_blocklisted(a)) {
return true;
}
}
return false;
}
-void OSDMap::get_blacklist(list<pair<entity_addr_t,utime_t> > *bl) const
+void OSDMap::get_blocklist(list<pair<entity_addr_t,utime_t> > *bl) const
{
- std::copy(blacklist.begin(), blacklist.end(), std::back_inserter(*bl));
+ std::copy(blocklist.begin(), blocklist.end(), std::back_inserter(*bl));
}
-void OSDMap::get_blacklist(std::set<entity_addr_t> *bl) const
+void OSDMap::get_blocklist(std::set<entity_addr_t> *bl) const
{
- for (const auto &i : blacklist) {
+ for (const auto &i : blocklist) {
bl->insert(i.first);
}
}
int OSDMap::apply_incremental(const Incremental &inc)
{
- new_blacklist_entries = false;
+ new_blocklist_entries = false;
if (inc.epoch == 1)
fsid = inc.fsid;
else if (inc.fsid != fsid)
pg_upmap_items.erase(pg);
}
- // blacklist
- if (!inc.new_blacklist.empty()) {
- blacklist.insert(inc.new_blacklist.begin(),inc.new_blacklist.end());
- new_blacklist_entries = true;
+ // blocklist
+ if (!inc.new_blocklist.empty()) {
+ blocklist.insert(inc.new_blocklist.begin(),inc.new_blocklist.end());
+ new_blocklist_entries = true;
}
- for (const auto &addr : inc.old_blacklist)
- blacklist.erase(addr);
+ for (const auto &addr : inc.old_blocklist)
+ blocklist.erase(addr);
for (auto& i : inc.new_crush_node_flags) {
if (i.second) {
encode(ev, bl);
encode(osd_addrs->hb_back_addrs, bl, features);
encode(osd_info, bl);
- encode(blacklist, bl, features);
+ encode(blocklist, bl, features);
encode(osd_addrs->cluster_addrs, bl, features);
encode(cluster_snapshot_epoch, bl);
encode(cluster_snapshot, bl);
{
// put this in a sorted, ordered map<> so that we encode in a
// deterministic order.
- map<entity_addr_t,utime_t> blacklist_map;
- for (const auto &addr : blacklist)
- blacklist_map.insert(make_pair(addr.first, addr.second));
- encode(blacklist_map, bl, features);
+ map<entity_addr_t,utime_t> blocklist_map;
+ for (const auto &addr : blocklist)
+ blocklist_map.insert(make_pair(addr.first, addr.second));
+ encode(blocklist_map, bl, features);
}
if (target_v < 7) {
encode_addrvec_pvec_as_addr(osd_addrs->cluster_addrs, bl, features);
if (v < 5)
decode(pool_name, p);
- decode(blacklist, p);
+ decode(blocklist, p);
if (ev >= 6)
decode(osd_addrs->cluster_addrs, p);
else
DECODE_START(9, bl); // extended, osd-only data
decode(osd_addrs->hb_back_addrs, bl);
decode(osd_info, bl);
- decode(blacklist, bl);
+ decode(blocklist, bl);
decode(osd_addrs->cluster_addrs, bl);
decode(cluster_snapshot_epoch, bl);
decode(cluster_snapshot, bl);
}
f->close_section(); // primary_temp
- f->open_object_section("blacklist");
- for (const auto &addr : blacklist) {
+ f->open_object_section("blocklist");
+ for (const auto &addr : blocklist) {
stringstream ss;
ss << addr.first;
f->dump_stream(ss.str().c_str()) << addr.second;
uuid_d fsid;
o.back()->build_simple(cct, 1, fsid, 16);
o.back()->created = o.back()->modified = utime_t(1, 2); // fix timestamp
- o.back()->blacklist[entity_addr_t()] = utime_t(5, 6);
+ o.back()->blocklist[entity_addr_t()] = utime_t(5, 6);
cct->put();
}
for (const auto pg : *primary_temp)
out << "primary_temp " << pg.first << " " << pg.second << "\n";
- for (const auto &addr : blacklist)
- out << "blacklist " << addr.first << " expires " << addr.second << "\n";
+ for (const auto &addr : blocklist)
+ out << "blocklist " << addr.first << " expires " << addr.second << "\n";
}
class OSDTreePlainDumper : public CrushTreeDumper::Dumper<TextTable> {
mempool::osdmap::map<int32_t,uuid_d> new_uuid;
mempool::osdmap::map<int32_t,osd_xinfo_t> new_xinfo;
- mempool::osdmap::map<entity_addr_t,utime_t> new_blacklist;
- mempool::osdmap::vector<entity_addr_t> old_blacklist;
+ mempool::osdmap::map<entity_addr_t,utime_t> new_blocklist;
+ mempool::osdmap::vector<entity_addr_t> old_blocklist;
mempool::osdmap::map<int32_t, entity_addrvec_t> new_hb_back_up;
mempool::osdmap::map<int32_t, entity_addrvec_t> new_hb_front_up;
std::shared_ptr< mempool::osdmap::vector<uuid_d> > osd_uuid;
mempool::osdmap::vector<osd_xinfo_t> osd_xinfo;
- mempool::osdmap::unordered_map<entity_addr_t,utime_t> blacklist;
+ mempool::osdmap::unordered_map<entity_addr_t,utime_t> blocklist;
/// queue of snaps to remove
mempool::osdmap::map<int64_t, snap_interval_set_t> removed_snaps_queue;
epoch_t cluster_snapshot_epoch;
std::string cluster_snapshot;
- bool new_blacklist_entries;
+ bool new_blocklist_entries;
float full_ratio = 0, backfillfull_ratio = 0, nearfull_ratio = 0;
primary_temp(std::make_shared<mempool::osdmap::map<pg_t,int32_t>>()),
osd_uuid(std::make_shared<mempool::osdmap::vector<uuid_d>>()),
cluster_snapshot_epoch(0),
- new_blacklist_entries(false),
+ new_blocklist_entries(false),
cached_up_osd_features(0),
crc_defined(false), crc(0),
crush(std::make_shared<CrushWrapper>()) {
const utime_t& get_created() const { return created; }
const utime_t& get_modified() const { return modified; }
- bool is_blacklisted(const entity_addr_t& a) const;
- bool is_blacklisted(const entity_addrvec_t& a) const;
- void get_blacklist(std::list<std::pair<entity_addr_t,utime_t > > *bl) const;
- void get_blacklist(std::set<entity_addr_t> *bl) const;
+ bool is_blocklisted(const entity_addr_t& a) const;
+ bool is_blocklisted(const entity_addrvec_t& a) const;
+ void get_blocklist(std::list<std::pair<entity_addr_t,utime_t > > *bl) const;
+ void get_blocklist(std::set<entity_addr_t> *bl) const;
std::string get_cluster_snapshot() const {
if (cluster_snapshot_epoch == epoch)
void dump_osd(int id, ceph::Formatter *f) const;
void dump_osds(ceph::Formatter *f) const;
static void generate_test_instances(std::list<OSDMap*>& o);
- bool check_new_blacklist_entries() const { return new_blacklist_entries; }
+ bool check_new_blocklist_entries() const { return new_blocklist_entries; }
void check_health(CephContext *cct, health_check_map_t *checks) const;
}
write_if_dirty(rctx.transaction);
- if (get_osdmap()->check_new_blacklist_entries()) {
- pl->check_blacklisted_watchers();
+ if (get_osdmap()->check_new_blocklist_entries()) {
+ pl->check_blocklisted_watchers();
}
}
/// Notification to check outstanding operation targets
virtual void check_recovery_sources(const OSDMapRef& newmap) = 0;
- /// Notification to check outstanding blacklist
- virtual void check_blacklisted_watchers() = 0;
+ /// Notification to check outstanding blocklist
+ virtual void check_blocklisted_watchers() = 0;
/// Notification to clear state associated with primary
virtual void clear_primary_state() = 0;
return;
}
- // blacklisted?
- if (get_osdmap()->is_blacklisted(m->get_source_addr())) {
- dout(10) << "do_op " << m->get_source_addr() << " is blacklisted" << dendl;
- osd->reply_op_error(op, -EBLACKLISTED);
+ // blocklisted?
+ if (get_osdmap()->is_blocklisted(m->get_source_addr())) {
+ dout(10) << "do_op " << m->get_source_addr() << " is blocklisted" << dendl;
+ osd->reply_op_error(op, -EBLOCKLISTED);
return;
}
}
}
-void PrimaryLogPG::check_blacklisted_watchers()
+void PrimaryLogPG::check_blocklisted_watchers()
{
- dout(20) << "PrimaryLogPG::check_blacklisted_watchers for pg " << get_pgid() << dendl;
+ dout(20) << "PrimaryLogPG::check_blocklisted_watchers for pg " << get_pgid() << dendl;
pair<hobject_t, ObjectContextRef> i;
while (object_contexts.get_next(i.first, &i))
- check_blacklisted_obc_watchers(i.second);
+ check_blocklisted_obc_watchers(i.second);
}
-void PrimaryLogPG::check_blacklisted_obc_watchers(ObjectContextRef obc)
+void PrimaryLogPG::check_blocklisted_obc_watchers(ObjectContextRef obc)
{
- dout(20) << "PrimaryLogPG::check_blacklisted_obc_watchers for obc " << obc->obs.oi.soid << dendl;
+ dout(20) << "PrimaryLogPG::check_blocklisted_obc_watchers for obc " << obc->obs.oi.soid << dendl;
for (map<pair<uint64_t, entity_name_t>, WatchRef>::iterator k =
obc->watchers.begin();
k != obc->watchers.end();
dout(30) << "watch: Found " << j->second->get_entity() << " cookie " << j->second->get_cookie() << dendl;
entity_addr_t ea = j->second->get_peer_addr();
dout(30) << "watch: Check entity_addr_t " << ea << dendl;
- if (get_osdmap()->is_blacklisted(ea)) {
- dout(10) << "watch: Found blacklisted watcher for " << ea << dendl;
+ if (get_osdmap()->is_blocklisted(ea)) {
+ dout(10) << "watch: Found blocklisted watcher for " << ea << dendl;
ceph_assert(j->second->get_pg() == this);
j->second->unregister_cb();
handle_watch_timeout(j->second);
make_pair(p->first.first, p->first.second),
watch));
}
- // Look for watchers from blacklisted clients and drop
- check_blacklisted_obc_watchers(obc);
+ // Look for watchers from blocklisted clients and drop
+ check_blocklisted_obc_watchers(obc);
}
void PrimaryLogPG::handle_watch_timeout(WatchRef watch)
std::map<hobject_t, std::map<client_t, ceph_tid_t>> debug_op_order;
void populate_obc_watchers(ObjectContextRef obc);
- void check_blacklisted_obc_watchers(ObjectContextRef obc);
- void check_blacklisted_watchers() override;
+ void check_blocklisted_obc_watchers(ObjectContextRef obc);
+ void check_blocklisted_watchers() override;
void get_watchers(std::list<obj_watch_item_t> *ls) override;
void get_obc_watchers(ObjectContextRef obc, std::list<obj_watch_item_t> &pg_watchers);
public:
switch (static_cast<osd_errc>(ev)) {
case osd_errc::old_snapc:
return "ORDERSNAP flag set; writer has old snapc";
- case osd_errc::blacklisted:
- return "Blacklisted";
+ case osd_errc::blocklisted:
+ return "Blocklisted";
}
if (len) {
switch (static_cast<osd_errc>(ev)) {
case osd_errc::old_snapc:
return "ORDERSNAP flag set; writer has old snapc";
- case osd_errc::blacklisted:
- return "Blacklisted";
+ case osd_errc::blocklisted:
+ return "Blocklisted";
}
return cpp_strerror(ev);
boost::system::error_condition osd_error_category::default_error_condition(int ev) const noexcept {
if (ev == static_cast<int>(osd_errc::old_snapc) ||
- ev == static_cast<int>(osd_errc::blacklisted))
+ ev == static_cast<int>(osd_errc::blocklisted))
return { ev, *this };
else
return { ev, boost::system::generic_category() };
switch (static_cast<osd_errc>(ev)) {
case osd_errc::old_snapc:
return c == boost::system::errc::invalid_argument;
- case osd_errc::blacklisted:
+ case osd_errc::blocklisted:
return c == boost::system::errc::operation_not_permitted;
}
return default_error_condition(ev) == c;
enum class osd_errc {
old_snapc = 85, /* ORDERSNAP flag set; writer has old snapc*/
- blacklisted = 108 /* blacklisted */
+ blocklisted = 108 /* blocklisted */
};
namespace boost::system {
}
// Let the caller know that the operation has failed or was intentionally
- // failed since the caller has been blacklisted.
- if (r == -EBLACKLISTED) {
+ // failed since the caller has been blocklisted.
+ if (r == -EBLOCKLISTED) {
onfinish->complete(r);
return;
}
OSDMap::Incremental inc(m->incremental_maps[e]);
osdmap->apply_incremental(inc);
- emit_blacklist_events(inc);
+ emit_blocklist_events(inc);
logger->inc(l_osdc_map_inc);
}
auto new_osdmap = std::make_unique<OSDMap>();
new_osdmap->decode(m->maps[e]);
- emit_blacklist_events(*osdmap, *new_osdmap);
+ emit_blocklist_events(*osdmap, *new_osdmap);
osdmap = std::move(new_osdmap);
logger->inc(l_osdc_map_full);
}
}
-void Objecter::enable_blacklist_events()
+void Objecter::enable_blocklist_events()
{
unique_lock wl(rwlock);
- blacklist_events_enabled = true;
+ blocklist_events_enabled = true;
}
-void Objecter::consume_blacklist_events(std::set<entity_addr_t> *events)
+void Objecter::consume_blocklist_events(std::set<entity_addr_t> *events)
{
unique_lock wl(rwlock);
if (events->empty()) {
- events->swap(blacklist_events);
+ events->swap(blocklist_events);
} else {
- for (const auto &i : blacklist_events) {
+ for (const auto &i : blocklist_events) {
events->insert(i);
}
- blacklist_events.clear();
+ blocklist_events.clear();
}
}
-void Objecter::emit_blacklist_events(const OSDMap::Incremental &inc)
+void Objecter::emit_blocklist_events(const OSDMap::Incremental &inc)
{
- if (!blacklist_events_enabled) {
+ if (!blocklist_events_enabled) {
return;
}
- for (const auto &i : inc.new_blacklist) {
- blacklist_events.insert(i.first);
+ for (const auto &i : inc.new_blocklist) {
+ blocklist_events.insert(i.first);
}
}
-void Objecter::emit_blacklist_events(const OSDMap &old_osd_map,
+void Objecter::emit_blocklist_events(const OSDMap &old_osd_map,
const OSDMap &new_osd_map)
{
- if (!blacklist_events_enabled) {
+ if (!blocklist_events_enabled) {
return;
}
std::set<entity_addr_t> old_set;
std::set<entity_addr_t> new_set;
- old_osd_map.get_blacklist(&old_set);
- new_osd_map.get_blacklist(&new_set);
+ old_osd_map.get_blocklist(&old_set);
+ new_osd_map.get_blocklist(&new_set);
std::set<entity_addr_t> delta_set;
std::set_difference(
new_set.begin(), new_set.end(), old_set.begin(), old_set.end(),
std::inserter(delta_set, delta_set.begin()));
- blacklist_events.insert(delta_set.begin(), delta_set.end());
+ blocklist_events.insert(delta_set.begin(), delta_set.end());
}
// op pool check
return 0;
}
-void Objecter::blacklist_self(bool set)
+void Objecter::blocklist_self(bool set)
{
- ldout(cct, 10) << "blacklist_self " << (set ? "add" : "rm") << dendl;
+ ldout(cct, 10) << "blocklist_self " << (set ? "add" : "rm") << dendl;
vector<string> cmd;
- cmd.push_back("{\"prefix\":\"osd blacklist\", ");
+ cmd.push_back("{\"prefix\":\"osd blocklist\", ");
if (set)
- cmd.push_back("\"blacklistop\":\"add\",");
+ cmd.push_back("\"blocklistop\":\"add\",");
else
- cmd.push_back("\"blacklistop\":\"rm\",");
+ cmd.push_back("\"blocklistop\":\"rm\",");
stringstream ss;
- // this is somewhat imprecise in that we are blacklisting our first addr only
+ // this is somewhat imprecise in that we are blocklisting our first addr only
ss << messenger->get_myaddrs().front().get_legacy_str();
cmd.push_back("\"addr\":\"" + ss.str() + "\"");
auto m = new MMonCommand(monc->get_fsid());
m->cmd = cmd;
+ // NOTE: no fallback to legacy blacklist command implemented here
+ // since this is only used for test code.
+
monc->send_mon_message(m);
}
* sending any more operations to OSDs. Use this
* when it is known that the client can't trust
* anything from before this epoch (e.g. due to
- * client blacklist at this epoch).
+ * client blocklist at this epoch).
*/
void Objecter::set_epoch_barrier(epoch_t epoch)
{
bool honor_pool_full = true;
bool pool_full_try = false;
- // If this is true, accumulate a set of blacklisted entities
- // to be drained by consume_blacklist_events.
- bool blacklist_events_enabled = false;
- std::set<entity_addr_t> blacklist_events;
+ // If this is true, accumulate a set of blocklisted entities
+ // to be drained by consume_blocklist_events.
+ bool blocklist_events_enabled = false;
+ std::set<entity_addr_t> blocklist_events;
struct pg_mapping_t {
epoch_t epoch = 0;
std::vector<int> up;
public:
void maybe_request_map();
- void enable_blacklist_events();
+ void enable_blocklist_events();
private:
void _maybe_request_map();
/**
- * Get std::list of entities blacklisted since this was last called,
- * and reset the std::list.
+ * Get the list of entities blocklisted since this was last called,
+ * and reset the list.
*
- * Uses a std::set because typical use case is to compare some
- * other std::list of clients to see which overlap with the blacklisted
+ * Uses a std::set because the typical use case is to compare some
+ * other list of clients to see which overlap with the blocklisted
* addrs.
*
*/
- void consume_blacklist_events(std::set<entity_addr_t> *events);
+ void consume_blocklist_events(std::set<entity_addr_t> *events);
int pool_snap_by_name(int64_t poolid,
const char *snap_name,
int pool_snap_list(int64_t poolid, std::vector<uint64_t> *snaps);
private:
- void emit_blacklist_events(const OSDMap::Incremental &inc);
- void emit_blacklist_events(const OSDMap &old_osd_map,
+ void emit_blocklist_events(const OSDMap::Incremental &inc);
+ void emit_blocklist_events(const OSDMap &old_osd_map,
const OSDMap &new_osd_map);
// low-level
void ms_handle_remote_reset(Connection *con) override;
bool ms_handle_refused(Connection *con) override;
- void blacklist_self(bool set);
+ void blocklist_self(bool set);
private:
epoch_t epoch_barrier = 0;
'prefix': 'auth get-or-create',
'entity': utils.name_to_auth_entity('iscsi', igw_id),
'caps': ['mon', 'profile rbd, '
- 'allow command "osd blacklist", '
+ 'allow command "osd blocklist", '
'allow command "config-key get" with "key" prefix "iscsi/"',
'osd', 'allow rwx'],
})
int rados_cluster_stat(rados_t cluster, rados_cluster_stat_t *result)
int rados_cluster_fsid(rados_t cluster, char *buf, size_t len)
- int rados_blacklist_add(rados_t cluster, char *client_address, uint32_t expire_seconds)
+ int rados_blocklist_add(rados_t cluster, char *client_address, uint32_t expire_seconds)
int rados_getaddrs(rados_t cluster, char** addrs)
int rados_application_enable(rados_ioctx_t io, const char *app_name,
int force)
ret = rados_wait_for_latest_osdmap(self.cluster)
return ret
- def blacklist_add(self, client_address, expire_seconds=0):
+ def blocklist_add(self, client_address, expire_seconds=0):
"""
- Blacklist a client from the OSDs
+ Blocklist a client from the OSDs
:param client_address: client address
:type client_address: str
- :param expire_seconds: number of seconds to blacklist
+ :param expire_seconds: number of seconds to blocklist
:type expire_seconds: int
:raises: :class:`Error`
char *_client_address = client_address
with nogil:
- ret = rados_blacklist_add(self.cluster, _client_address, _expire_seconds)
+ ret = rados_blocklist_add(self.cluster, _client_address, _expire_seconds)
if ret < 0:
- raise make_ex(ret, "error blacklisting client '%s'" % client_address)
+ raise make_ex(ret, "error blocklisting client '%s'" % client_address)
def monitor_log(self, level, callback, arg):
if level not in MONITOR_LEVELS:
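
A minimal sketch of the renamed Python binding; the conffile path and client
address are hypothetical, and a reachable cluster is assumed::

    import rados

    cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
    cluster.connect()
    try:
        # add the client to the OSD blocklist for one hour
        cluster.blocklist_add("192.168.0.1:0/3710147553", 3600)
    finally:
        cluster.shutdown()
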
std::map<std::string, ceph::bufferlist>& attrs,
const bool allow_empty_attrs = true)
{
- static const std::set<std::string> blacklisted_headers = {
+ static const std::set<std::string> blocklisted_headers = {
"x-amz-server-side-encryption-customer-algorithm",
"x-amz-server-side-encryption-customer-key",
"x-amz-server-side-encryption-customer-key-md5",
const std::string& name = kv.first;
std::string& xattr = kv.second;
- if (blacklisted_headers.count(name) == 1) {
+ if (blocklisted_headers.count(name) == 1) {
lsubdout(cct, rgw, 10) << "skipping x>> " << name << dendl;
continue;
} else if (allow_empty_attrs || !xattr.empty()) {
return impl->aio_watch_flush(c->pc);
}
-int Rados::blacklist_add(const std::string& client_address,
+int Rados::blocklist_add(const std::string& client_address,
uint32_t expire_seconds) {
TestRadosClient *impl = reinterpret_cast<TestRadosClient*>(client);
- return impl->blacklist_add(client_address, expire_seconds);
+ return impl->blocklist_add(client_address, expire_seconds);
}
config_t Rados::cct() {
client = NULL;
}
-void Rados::test_blacklist_self(bool set) {
+void Rados::test_blocklist_self(bool set) {
}
int Rados::wait_for_latest_osdmap() {
get_mem_cluster()->get_pool(pool_name));
}
- MOCK_METHOD2(blacklist_add, int(const std::string& client_address,
+ MOCK_METHOD2(blocklist_add, int(const std::string& client_address,
uint32_t expire_seconds));
- int do_blacklist_add(const std::string& client_address,
+ int do_blocklist_add(const std::string& client_address,
uint32_t expire_seconds) {
- return TestMemRadosClient::blacklist_add(client_address, expire_seconds);
+ return TestMemRadosClient::blocklist_add(client_address, expire_seconds);
}
MOCK_METHOD1(get_min_compatible_osd, int(int8_t*));
ON_CALL(*this, connect()).WillByDefault(Invoke(this, &MockTestMemRadosClient::do_connect));
ON_CALL(*this, create_ioctx(_, _)).WillByDefault(Invoke(this, &MockTestMemRadosClient::do_create_ioctx));
- ON_CALL(*this, blacklist_add(_, _)).WillByDefault(Invoke(this, &MockTestMemRadosClient::do_blacklist_add));
+ ON_CALL(*this, blocklist_add(_, _)).WillByDefault(Invoke(this, &MockTestMemRadosClient::do_blocklist_add));
ON_CALL(*this, get_min_compatible_osd(_)).WillByDefault(Invoke(this, &MockTestMemRadosClient::do_get_min_compatible_osd));
ON_CALL(*this, get_min_compatible_client(_, _)).WillByDefault(Invoke(this, &MockTestMemRadosClient::do_get_min_compatible_client));
ON_CALL(*this, service_daemon_register(_, _, _)).WillByDefault(Invoke(this, &MockTestMemRadosClient::do_service_daemon_register));
m_pending_ops++;
c->get();
C_AioNotify *ctx = new C_AioNotify(this, c);
- if (m_client->is_blacklisted()) {
- m_client->get_aio_finisher()->queue(ctx, -EBLACKLISTED);
+ if (m_client->is_blocklisted()) {
+ m_client->get_aio_finisher()->queue(ctx, -EBLOCKLISTED);
} else {
m_client->get_watch_notify()->aio_watch(m_client, m_pool_id,
get_namespace(), o,
m_pending_ops++;
c->get();
C_AioNotify *ctx = new C_AioNotify(this, c);
- if (m_client->is_blacklisted()) {
- m_client->get_aio_finisher()->queue(ctx, -EBLACKLISTED);
+ if (m_client->is_blocklisted()) {
+ m_client->get_aio_finisher()->queue(ctx, -EBLOCKLISTED);
} else {
m_client->get_watch_notify()->aio_unwatch(m_client, handle, ctx);
}
const char *cls, const char *method,
bufferlist& inbl, bufferlist* outbl,
uint64_t snap_id, const SnapContext &snapc) {
- if (m_client->is_blacklisted()) {
- return -EBLACKLISTED;
+ if (m_client->is_blocklisted()) {
+ return -EBLOCKLISTED;
}
cls_method_cxx_call_t call = handler->get_method(cls, method);
int TestIoCtxImpl::list_watchers(const std::string& o,
std::list<obj_watch_t> *out_watchers) {
- if (m_client->is_blacklisted()) {
- return -EBLACKLISTED;
+ if (m_client->is_blocklisted()) {
+ return -EBLOCKLISTED;
}
return m_client->get_watch_notify()->list_watchers(m_pool_id, get_namespace(),
int TestIoCtxImpl::notify(const std::string& o, bufferlist& bl,
uint64_t timeout_ms, bufferlist *pbl) {
- if (m_client->is_blacklisted()) {
- return -EBLACKLISTED;
+ if (m_client->is_blocklisted()) {
+ return -EBLOCKLISTED;
}
return m_client->get_watch_notify()->notify(m_client, m_pool_id,
}
int TestIoCtxImpl::tmap_update(const std::string& oid, bufferlist& cmdbl) {
- if (m_client->is_blacklisted()) {
- return -EBLACKLISTED;
+ if (m_client->is_blocklisted()) {
+ return -EBLOCKLISTED;
}
// TODO: protect against concurrent tmap updates
}
int TestIoCtxImpl::unwatch(uint64_t handle) {
- if (m_client->is_blacklisted()) {
- return -EBLACKLISTED;
+ if (m_client->is_blocklisted()) {
+ return -EBLOCKLISTED;
}
return m_client->get_watch_notify()->unwatch(m_client, handle);
int TestIoCtxImpl::watch(const std::string& o, uint64_t *handle,
librados::WatchCtx *ctx, librados::WatchCtx2 *ctx2) {
- if (m_client->is_blacklisted()) {
- return -EBLACKLISTED;
+ if (m_client->is_blocklisted()) {
+ return -EBLOCKLISTED;
}
return m_client->get_watch_notify()->watch(m_client, m_pool_id,
int TestIoCtxImpl::execute_operation(const std::string& oid,
const Operation &operation) {
- if (m_client->is_blacklisted()) {
- return -EBLACKLISTED;
+ if (m_client->is_blocklisted()) {
+ return -EBLOCKLISTED;
}
TestRadosClient::Transaction transaction(m_client, get_namespace(), oid);
bufferlist *pbl, uint64_t snap_id,
const SnapContext &snapc) {
int ret = 0;
- if (m_client->is_blacklisted()) {
- ret = -EBLACKLISTED;
+ if (m_client->is_blocklisted()) {
+ ret = -EBLOCKLISTED;
} else {
TestRadosClient::Transaction transaction(m_client, get_namespace(), oid);
for (ObjectOperations::iterator it = ops->ops.begin();
void TestMemCluster::deallocate_client(uint32_t nonce) {
std::lock_guard locker{m_lock};
- m_blacklist.erase(nonce);
+ m_blocklist.erase(nonce);
}
-bool TestMemCluster::is_blacklisted(uint32_t nonce) const {
+bool TestMemCluster::is_blocklisted(uint32_t nonce) const {
std::lock_guard locker{m_lock};
- return (m_blacklist.find(nonce) != m_blacklist.end());
+ return (m_blocklist.find(nonce) != m_blocklist.end());
}
-void TestMemCluster::blacklist(uint32_t nonce) {
- m_watch_notify.blacklist(nonce);
+void TestMemCluster::blocklist(uint32_t nonce) {
+ m_watch_notify.blocklist(nonce);
std::lock_guard locker{m_lock};
- m_blacklist.insert(nonce);
+ m_blocklist.insert(nonce);
}
void TestMemCluster::transaction_start(const ObjectLocator& locator) {
void allocate_client(uint32_t *nonce, uint64_t *global_id);
void deallocate_client(uint32_t nonce);
- bool is_blacklisted(uint32_t nonce) const;
- void blacklist(uint32_t nonce);
+ bool is_blocklisted(uint32_t nonce) const;
+ void blocklist(uint32_t nonce);
void transaction_start(const ObjectLocator& locator);
void transaction_finish(const ObjectLocator& locator);
private:
typedef std::map<std::string, Pool*> Pools;
- typedef std::set<uint32_t> Blacklist;
+ typedef std::set<uint32_t> Blocklist;
mutable ceph::mutex m_lock =
ceph::make_mutex("TestMemCluster::m_lock");
uint32_t m_next_nonce;
uint64_t m_next_global_id = 1234;
- Blacklist m_blacklist;
+ Blocklist m_blocklist;
ceph::condition_variable m_transaction_cond;
std::set<ObjectLocator> m_transactions;
const SnapContext &snapc) {
if (get_snap_read() != CEPH_NOSNAP) {
return -EROFS;
- } else if (m_client->is_blacklisted()) {
- return -EBLACKLISTED;
+ } else if (m_client->is_blocklisted()) {
+ return -EBLOCKLISTED;
}
TestMemCluster::SharedFile file;
}
int TestMemIoCtxImpl::assert_exists(const std::string &oid, uint64_t snap_id) {
- if (m_client->is_blacklisted()) {
- return -EBLACKLISTED;
+ if (m_client->is_blocklisted()) {
+ return -EBLOCKLISTED;
}
std::shared_lock l{m_pool->file_lock};
const SnapContext &snapc) {
if (get_snap_read() != CEPH_NOSNAP) {
return -EROFS;
- } else if (m_client->is_blacklisted()) {
- return -EBLACKLISTED;
+ } else if (m_client->is_blocklisted()) {
+ return -EBLOCKLISTED;
}
std::unique_lock l{m_pool->file_lock};
}
int TestMemIoCtxImpl::list_snaps(const std::string& oid, snap_set_t *out_snaps) {
- if (m_client->is_blacklisted()) {
- return -EBLACKLISTED;
+ if (m_client->is_blocklisted()) {
+ return -EBLOCKLISTED;
}
out_snaps->seq = 0;
bool *pmore) {
if (out_vals == NULL) {
return -EINVAL;
- } else if (m_client->is_blacklisted()) {
- return -EBLACKLISTED;
+ } else if (m_client->is_blocklisted()) {
+ return -EBLOCKLISTED;
}
TestMemCluster::SharedFile file;
const std::set<std::string>& keys) {
if (get_snap_read() != CEPH_NOSNAP) {
return -EROFS;
- } else if (m_client->is_blacklisted()) {
- return -EBLACKLISTED;
+ } else if (m_client->is_blocklisted()) {
+ return -EBLOCKLISTED;
}
TestMemCluster::SharedFile file;
const std::map<std::string, bufferlist> &map) {
if (get_snap_read() != CEPH_NOSNAP) {
return -EROFS;
- } else if (m_client->is_blacklisted()) {
- return -EBLACKLISTED;
+ } else if (m_client->is_blocklisted()) {
+ return -EBLOCKLISTED;
}
TestMemCluster::SharedFile file;
int TestMemIoCtxImpl::read(const std::string& oid, size_t len, uint64_t off,
bufferlist *bl, uint64_t snap_id) {
- if (m_client->is_blacklisted()) {
- return -EBLACKLISTED;
+ if (m_client->is_blocklisted()) {
+ return -EBLOCKLISTED;
}
TestMemCluster::SharedFile file;
int TestMemIoCtxImpl::remove(const std::string& oid, const SnapContext &snapc) {
if (get_snap_read() != CEPH_NOSNAP) {
return -EROFS;
- } else if (m_client->is_blacklisted()) {
- return -EBLACKLISTED;
+ } else if (m_client->is_blocklisted()) {
+ return -EBLOCKLISTED;
}
std::unique_lock l{m_pool->file_lock};
}
int TestMemIoCtxImpl::selfmanaged_snap_create(uint64_t *snapid) {
- if (m_client->is_blacklisted()) {
- return -EBLACKLISTED;
+ if (m_client->is_blocklisted()) {
+ return -EBLOCKLISTED;
}
std::unique_lock l{m_pool->file_lock};
}
int TestMemIoCtxImpl::selfmanaged_snap_remove(uint64_t snapid) {
- if (m_client->is_blacklisted()) {
- return -EBLACKLISTED;
+ if (m_client->is_blocklisted()) {
+ return -EBLOCKLISTED;
}
std::unique_lock l{m_pool->file_lock};
int TestMemIoCtxImpl::selfmanaged_snap_rollback(const std::string& oid,
uint64_t snapid) {
- if (m_client->is_blacklisted()) {
- return -EBLACKLISTED;
+ if (m_client->is_blocklisted()) {
+ return -EBLOCKLISTED;
}
std::unique_lock l{m_pool->file_lock};
const SnapContext &snapc) {
if (get_snap_read() != CEPH_NOSNAP) {
return -EROFS;
- } else if (m_client->is_blacklisted()) {
- return -EBLACKLISTED;
+ } else if (m_client->is_blocklisted()) {
+ return -EBLOCKLISTED;
}
{
uint64_t len,
std::map<uint64_t,uint64_t> *m,
bufferlist *data_bl, uint64_t snap_id) {
- if (m_client->is_blacklisted()) {
- return -EBLACKLISTED;
+ if (m_client->is_blocklisted()) {
+ return -EBLOCKLISTED;
}
// TODO verify correctness
int TestMemIoCtxImpl::stat(const std::string& oid, uint64_t *psize,
time_t *pmtime) {
- if (m_client->is_blacklisted()) {
- return -EBLACKLISTED;
+ if (m_client->is_blocklisted()) {
+ return -EBLOCKLISTED;
}
TestMemCluster::SharedFile file;
const SnapContext &snapc) {
if (get_snap_read() != CEPH_NOSNAP) {
return -EROFS;
- } else if (m_client->is_blacklisted()) {
- return -EBLACKLISTED;
+ } else if (m_client->is_blocklisted()) {
+ return -EBLOCKLISTED;
}
TestMemCluster::SharedFile file;
uint64_t off, const SnapContext &snapc) {
if (get_snap_read() != CEPH_NOSNAP) {
return -EROFS;
- } else if (m_client->is_blacklisted()) {
- return -EBLACKLISTED;
+ } else if (m_client->is_blocklisted()) {
+ return -EBLOCKLISTED;
}
TestMemCluster::SharedFile file;
const SnapContext &snapc) {
if (get_snap_read() != CEPH_NOSNAP) {
return -EROFS;
- } else if (m_client->is_blacklisted()) {
- return -EBLACKLISTED;
+ } else if (m_client->is_blocklisted()) {
+ return -EBLOCKLISTED;
}
TestMemCluster::SharedFile file;
const SnapContext &snapc) {
if (get_snap_read() != CEPH_NOSNAP) {
return -EROFS;
- } else if (m_client->is_blacklisted()) {
- return -EBLACKLISTED;
+ } else if (m_client->is_blocklisted()) {
+ return -EBLOCKLISTED;
}
if (len == 0 || (len % bl.length())) {
int TestMemIoCtxImpl::cmpext(const std::string& oid, uint64_t off,
bufferlist& cmp_bl, uint64_t snap_id) {
- if (m_client->is_blacklisted()) {
- return -EBLACKLISTED;
+ if (m_client->is_blocklisted()) {
+ return -EBLOCKLISTED;
}
bufferlist read_bl;
int TestMemIoCtxImpl::xattr_get(const std::string& oid,
std::map<std::string, bufferlist>* attrset) {
- if (m_client->is_blacklisted()) {
- return -EBLACKLISTED;
+ if (m_client->is_blocklisted()) {
+ return -EBLOCKLISTED;
}
TestMemCluster::SharedFile file;
int TestMemIoCtxImpl::xattr_set(const std::string& oid, const std::string &name,
bufferlist& bl) {
- if (m_client->is_blacklisted()) {
- return -EBLACKLISTED;
+ if (m_client->is_blocklisted()) {
+ return -EBLOCKLISTED;
}
std::unique_lock l{m_pool->file_lock};
int TestMemIoCtxImpl::zero(const std::string& oid, uint64_t off, uint64_t len,
const SnapContext &snapc) {
- if (m_client->is_blacklisted()) {
- return -EBLACKLISTED;
+ if (m_client->is_blocklisted()) {
+ return -EBLOCKLISTED;
}
bool truncate_redirect = false;
}
int TestMemRadosClient::pool_create(const std::string &pool_name) {
- if (is_blacklisted()) {
- return -EBLACKLISTED;
+ if (is_blocklisted()) {
+ return -EBLOCKLISTED;
}
return m_mem_cluster->pool_create(pool_name);
}
int TestMemRadosClient::pool_delete(const std::string &pool_name) {
- if (is_blacklisted()) {
- return -EBLACKLISTED;
+ if (is_blocklisted()) {
+ return -EBLOCKLISTED;
}
return m_mem_cluster->pool_delete(pool_name);
}
return 0;
}
-bool TestMemRadosClient::is_blacklisted() const {
- return m_mem_cluster->is_blacklisted(m_nonce);
+bool TestMemRadosClient::is_blocklisted() const {
+ return m_mem_cluster->is_blocklisted(m_nonce);
}
-int TestMemRadosClient::blacklist_add(const std::string& client_address,
+int TestMemRadosClient::blocklist_add(const std::string& client_address,
uint32_t expire_seconds) {
- if (is_blacklisted()) {
- return -EBLACKLISTED;
+ if (is_blocklisted()) {
+ return -EBLOCKLISTED;
}
// extract the nonce to use as a unique key to the client
return -EINVAL;
}
- m_mem_cluster->blacklist(nonce);
+ m_mem_cluster->blocklist(nonce);
return 0;
}
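
``blocklist_add`` keys the entry on the nonce embedded in the client address (the digits after the ``/`` in an address such as ``1.2.3.4/123``), returning ``-EINVAL`` when none can be parsed. A rough Python equivalent of that parsing (the exact C++ logic may differ)::

    def parse_nonce(client_address):
        """Return the nonce from an 'ip[:port]/nonce' address.

        Raises ValueError for addresses without a numeric nonce, the
        case the C++ code above answers with -EINVAL.
        """
        _, sep, nonce = client_address.rpartition('/')
        if not sep or not nonce.isdigit():
            raise ValueError('invalid client address: %s' % client_address)
        return int(nonce)

    assert parse_nonce('1.2.3.4/123') == 123
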
int watch_flush() override;
- bool is_blacklisted() const override;
- int blacklist_add(const std::string& client_address,
+ bool is_blocklisted() const override;
+ int blocklist_add(const std::string& client_address,
uint32_t expire_seconds) override;
protected:
TestMemCluster *get_mem_cluster() {
virtual int aio_watch_flush(AioCompletionImpl *c);
virtual int watch_flush() = 0;
- virtual bool is_blacklisted() const = 0;
- virtual int blacklist_add(const std::string& client_address,
+ virtual bool is_blocklisted() const = 0;
+ virtual int blocklist_add(const std::string& client_address,
uint32_t expire_seconds) = 0;
virtual int wait_for_latest_osd_map() {
maybe_remove_watcher(watcher);
}
-void TestWatchNotify::blacklist(uint32_t nonce) {
+void TestWatchNotify::blocklist(uint32_t nonce) {
std::lock_guard locker{m_lock};
for (auto file_it = m_file_watchers.begin();
librados::WatchCtx2 *ctx2);
int unwatch(TestRadosClient *rados_client, uint64_t handle);
- void blacklist(uint32_t nonce);
+ void blocklist(uint32_t nonce);
private:
typedef std::tuple<int64_t, std::string, std::string> PoolFile;
static BreakRequest* create(librados::IoCtx& ioctx,
AsioEngine& asio_engine,
const std::string& oid, const Locker &locker,
- bool exclusive, bool blacklist_locker,
- uint32_t blacklist_expire_seconds,
+ bool exclusive, bool blocklist_locker,
+ uint32_t blocklist_expire_seconds,
bool force_break_lock, Context *on_finish) {
CephContext *cct = reinterpret_cast<CephContext *>(ioctx.cct());
- EXPECT_EQ(cct->_conf.get_val<bool>("rbd_blacklist_on_break_lock"),
- blacklist_locker);
- EXPECT_EQ(cct->_conf.get_val<uint64_t>("rbd_blacklist_expire_seconds"),
- blacklist_expire_seconds);
+ EXPECT_EQ(cct->_conf.get_val<bool>("rbd_blocklist_on_break_lock"),
+ blocklist_locker);
+ EXPECT_EQ(cct->_conf.get_val<uint64_t>("rbd_blocklist_expire_seconds"),
+ blocklist_expire_seconds);
EXPECT_FALSE(force_break_lock);
ceph_assert(s_instance != nullptr);
s_instance->on_finish = on_finish;
}
- void expect_blacklist_add(MockTestImageCtx &mock_image_ctx, int r) {
+ void expect_blocklist_add(MockTestImageCtx &mock_image_ctx, int r) {
auto& mock_rados_client = librados::get_mock_rados_client(
mock_image_ctx.rados_api);
- EXPECT_CALL(mock_rados_client, mon_command(IsBlacklistCommand(), _, _, _))
+ EXPECT_CALL(mock_rados_client, mon_command(IsBlocklistCommand(), _, _, _))
.WillOnce(Return(r));
}
{entity_name_t::CLIENT(1), "auto 123", "1.2.3.4:0/0", 123},
0);
- expect_blacklist_add(mock_image_ctx, 0);
+ expect_blocklist_add(mock_image_ctx, 0);
expect_wait_for_latest_osd_map(mock_image_ctx, 0);
expect_break_lock(mock_image_ctx, 0);
{entity_name_t::CLIENT(1), "auto 123", "1.2.3.4:0/0", 123},
0);
- expect_blacklist_add(mock_image_ctx, 0);
+ expect_blocklist_add(mock_image_ctx, 0);
expect_wait_for_latest_osd_map(mock_image_ctx, 0);
expect_break_lock(mock_image_ctx, 0);
ASSERT_EQ(-EINVAL, ctx.wait());
}
-TEST_F(TestMockManagedLockBreakRequest, BlacklistDisabled) {
+TEST_F(TestMockManagedLockBreakRequest, BlocklistDisabled) {
REQUIRE_FEATURE(RBD_FEATURE_EXCLUSIVE_LOCK);
librbd::ImageCtx *ictx;
ASSERT_EQ(0, ctx.wait());
}
-TEST_F(TestMockManagedLockBreakRequest, BlacklistSelf) {
+TEST_F(TestMockManagedLockBreakRequest, BlocklistSelf) {
REQUIRE_FEATURE(RBD_FEATURE_EXCLUSIVE_LOCK);
librbd::ImageCtx *ictx;
ASSERT_EQ(-EINVAL, ctx.wait());
}
-TEST_F(TestMockManagedLockBreakRequest, BlacklistError) {
+TEST_F(TestMockManagedLockBreakRequest, BlocklistError) {
REQUIRE_FEATURE(RBD_FEATURE_EXCLUSIVE_LOCK);
librbd::ImageCtx *ictx;
{entity_name_t::CLIENT(1), "auto 123", "1.2.3.4:0/0", 123},
0);
- expect_blacklist_add(mock_image_ctx, -EINVAL);
+ expect_blocklist_add(mock_image_ctx, -EINVAL);
C_SaferCond ctx;
Locker locker{entity_name_t::CLIENT(1), "auto 123", "1.2.3.4:0/0", 123};
{entity_name_t::CLIENT(1), "auto 123", "1.2.3.4:0/0", 123},
0);
- expect_blacklist_add(mock_image_ctx, 0);
+ expect_blocklist_add(mock_image_ctx, 0);
expect_wait_for_latest_osd_map(mock_image_ctx, 0);
expect_break_lock(mock_image_ctx, -ENOENT);
{entity_name_t::CLIENT(1), "auto 123", "1.2.3.4:0/0", 123},
0);
- expect_blacklist_add(mock_image_ctx, 0);
+ expect_blocklist_add(mock_image_ctx, 0);
expect_wait_for_latest_osd_map(mock_image_ctx, 0);
expect_break_lock(mock_image_ctx, -EINVAL);
struct MockImageWatcher {
MOCK_METHOD0(is_registered, bool());
MOCK_METHOD0(is_unregistered, bool());
- MOCK_METHOD0(is_blacklisted, bool());
+ MOCK_METHOD0(is_blocklisted, bool());
MOCK_METHOD0(unregister_watch, void());
MOCK_METHOD1(flush, void(Context *));
happens, split off the last number in the exception 'args' string
and use it as the process exit code, if it's convertible to a number.
-Designed to run against a blacklist operation and verify the
+Designed to run against a blocklist operation and verify the
ESHUTDOWN expected from the image operation.
Note: this cannot be run with writeback caching on, currently, as
static char buf[10];
- rados_t blacklist_cluster;
- ASSERT_EQ("", connect_cluster(&blacklist_cluster));
+ rados_t blocklist_cluster;
+ ASSERT_EQ("", connect_cluster(&blocklist_cluster));
- rados_ioctx_t ioctx, blacklist_ioctx;
+ rados_ioctx_t ioctx, blocklist_ioctx;
ASSERT_EQ(0, rados_ioctx_create(_cluster, m_pool_name.c_str(), &ioctx));
- ASSERT_EQ(0, rados_ioctx_create(blacklist_cluster, m_pool_name.c_str(),
- &blacklist_ioctx));
+ ASSERT_EQ(0, rados_ioctx_create(blocklist_cluster, m_pool_name.c_str(),
+ &blocklist_ioctx));
std::string name = get_temp_image_name();
uint64_t size = 2 << 20;
int order = 0;
ASSERT_EQ(0, create_image(ioctx, name.c_str(), size, &order));
- rbd_image_t image, blacklist_image;
+ rbd_image_t image, blocklist_image;
ASSERT_EQ(0, rbd_open(ioctx, name.c_str(), &image, NULL));
- ASSERT_EQ(0, rbd_open(blacklist_ioctx, name.c_str(), &blacklist_image, NULL));
+ ASSERT_EQ(0, rbd_open(blocklist_ioctx, name.c_str(), &blocklist_image, NULL));
- ASSERT_EQ(0, rbd_metadata_set(image, "conf_rbd_blacklist_on_break_lock", "true"));
- ASSERT_EQ(0, rbd_lock_acquire(blacklist_image, RBD_LOCK_MODE_EXCLUSIVE));
+ ASSERT_EQ(0, rbd_metadata_set(image, "conf_rbd_blocklist_on_break_lock", "true"));
+ ASSERT_EQ(0, rbd_lock_acquire(blocklist_image, RBD_LOCK_MODE_EXCLUSIVE));
rbd_lock_mode_t lock_mode;
char *lock_owners[1];
ASSERT_EQ(0, rbd_lock_break(image, RBD_LOCK_MODE_EXCLUSIVE, lock_owners[0]));
ASSERT_EQ(0, rbd_lock_acquire(image, RBD_LOCK_MODE_EXCLUSIVE));
- EXPECT_EQ(0, rados_wait_for_latest_osdmap(blacklist_cluster));
+ EXPECT_EQ(0, rados_wait_for_latest_osdmap(blocklist_cluster));
ASSERT_EQ((ssize_t)sizeof(buf), rbd_write(image, 0, sizeof(buf), buf));
- ASSERT_EQ(-EBLACKLISTED, rbd_write(blacklist_image, 0, sizeof(buf), buf));
+ ASSERT_EQ(-EBLOCKLISTED, rbd_write(blocklist_image, 0, sizeof(buf), buf));
ASSERT_EQ(0, rbd_close(image));
- ASSERT_EQ(0, rbd_close(blacklist_image));
+ ASSERT_EQ(0, rbd_close(blocklist_image));
rbd_lock_get_owners_cleanup(lock_owners, max_lock_owners);
rados_ioctx_destroy(ioctx);
- rados_ioctx_destroy(blacklist_ioctx);
- rados_shutdown(blacklist_cluster);
+ rados_ioctx_destroy(blocklist_ioctx);
+ rados_shutdown(blocklist_cluster);
}
TEST_F(TestLibRBD, DiscardAfterWrite)
struct ManagedLock<MockExclusiveLockImageCtx> {
ManagedLock(librados::IoCtx& ioctx, AsioEngine& asio_engine,
const std::string& oid, librbd::MockImageWatcher *watcher,
- managed_lock::Mode mode, bool blacklist_on_break_lock,
- uint32_t blacklist_expire_seconds)
+ managed_lock::Mode mode, bool blocklist_on_break_lock,
+ uint32_t blocklist_expire_seconds)
{}
virtual ~ManagedLock() = default;
expect_is_state_acquiring(exclusive_lock, true);
expect_prepare_lock_complete(mock_image_ctx);
expect_is_action_acquire_lock(exclusive_lock, true);
- ASSERT_EQ(-EBLACKLISTED, when_post_acquire_lock_handler(exclusive_lock,
- -EBLACKLISTED));
+ ASSERT_EQ(-EBLOCKLISTED, when_post_acquire_lock_handler(exclusive_lock,
+ -EBLOCKLISTED));
}
TEST_F(TestMockExclusiveLock, PostAcquireLockError) {
struct MockMockManagedLock : public ManagedLock<MockManagedLockImageCtx> {
MockMockManagedLock(librados::IoCtx& ioctx, AsioEngine& asio_engine,
const std::string& oid, librbd::MockImageWatcher *watcher,
- managed_lock::Mode mode, bool blacklist_on_break_lock,
- uint32_t blacklist_expire_seconds)
+ managed_lock::Mode mode, bool blocklist_on_break_lock,
+ uint32_t blocklist_expire_seconds)
: ManagedLock<MockManagedLockImageCtx>(ioctx, asio_engine, oid, watcher,
librbd::managed_lock::EXCLUSIVE, true, 0) {
};
AsioEngine& asio_engine,
const std::string& oid,
const std::string& cookie,
- bool exclusive, bool blacklist_on_break_lock,
- uint32_t blacklist_expire_seconds,
+ bool exclusive, bool blocklist_on_break_lock,
+ uint32_t blocklist_expire_seconds,
Context *on_finish) {
return BaseRequest::create(ioctx, watcher, oid, cookie, on_finish);
}
static BreakRequest* create(librados::IoCtx& ioctx,
AsioEngine& asio_engine,
const std::string& oid, const Locker &locker,
- bool exclusive, bool blacklist_locker,
- uint32_t blacklist_expire_seconds,
+ bool exclusive, bool blocklist_locker,
+ uint32_t blocklist_expire_seconds,
bool force_break_lock, Context *on_finish) {
ceph_abort_msg("unexpected call");
}
ASSERT_EQ(0, when_shut_down(managed_lock));
}
-TEST_F(TestMockManagedLock, AcquireLockBlacklist) {
+TEST_F(TestMockManagedLock, AcquireLockBlocklist) {
librbd::ImageCtx *ictx;
ASSERT_EQ(0, open_image(m_image_name, &ictx));
librbd::managed_lock::EXCLUSIVE, true, 0);
InSequence seq;
- // will abort after seeing blacklist error (avoid infinite request loop)
+ // will abort after seeing blocklist error (avoid infinite request loop)
MockAcquireRequest request_lock_acquire;
- expect_acquire_lock(*mock_image_ctx.image_watcher, ictx->op_work_queue, request_lock_acquire, -EBLACKLISTED);
- ASSERT_EQ(-EBLACKLISTED, when_acquire_lock(managed_lock));
+ expect_acquire_lock(*mock_image_ctx.image_watcher, ictx->op_work_queue, request_lock_acquire, -EBLOCKLISTED);
+ ASSERT_EQ(-EBLOCKLISTED, when_acquire_lock(managed_lock));
ASSERT_FALSE(is_lock_owner(managed_lock));
ASSERT_EQ(0, when_shut_down(managed_lock));
ASSERT_EQ(0, when_shut_down(managed_lock));
}
-TEST_F(TestMockManagedLock, ReleaseLockBlacklist) {
+TEST_F(TestMockManagedLock, ReleaseLockBlocklist) {
librbd::ImageCtx *ictx;
ASSERT_EQ(0, open_image(m_image_name, &ictx));
expect_acquire_lock(*mock_image_ctx.image_watcher, ictx->op_work_queue, try_lock_acquire, 0);
ASSERT_EQ(0, when_acquire_lock(managed_lock));
- expect_pre_release_lock_handler(managed_lock, false, -EBLACKLISTED);
- expect_post_release_lock_handler(managed_lock, false, -EBLACKLISTED, -EBLACKLISTED);
- ASSERT_EQ(-EBLACKLISTED, when_release_lock(managed_lock));
+ expect_pre_release_lock_handler(managed_lock, false, -EBLOCKLISTED);
+ expect_post_release_lock_handler(managed_lock, false, -EBLOCKLISTED, -EBLOCKLISTED);
+ ASSERT_EQ(-EBLOCKLISTED, when_release_lock(managed_lock));
ASSERT_FALSE(is_lock_owner(managed_lock));
ASSERT_EQ(0, when_shut_down(managed_lock));
ASSERT_FALSE(is_lock_owner(managed_lock));
}
-TEST_F(TestMockManagedLock, AttemptReacquireBlacklistedLock) {
+TEST_F(TestMockManagedLock, AttemptReacquireBlocklistedLock) {
librbd::ImageCtx *ictx;
ASSERT_EQ(0, open_image(m_image_name, &ictx));
ASSERT_FALSE(is_lock_owner(managed_lock));
}
-TEST_F(TestMockManagedLock, ReacquireBlacklistedLock) {
+TEST_F(TestMockManagedLock, ReacquireBlocklistedLock) {
librbd::ImageCtx *ictx;
ASSERT_EQ(0, open_image(m_image_name, &ictx));
// wait for recovery unwatch/watch
ASSERT_TRUE(wait_for_watch(mock_image_ctx, 2));
- ASSERT_TRUE(mock_image_watcher.is_blacklisted());
+ ASSERT_TRUE(mock_image_watcher.is_blocklisted());
C_SaferCond unregister_ctx;
mock_image_watcher.unregister_watch(&unregister_ctx);
// inject an unregister
C_SaferCond unregister_ctx;
- expect_aio_unwatch(mock_image_ctx, -EBLACKLISTED,
+ expect_aio_unwatch(mock_image_ctx, -EBLOCKLISTED,
[&mock_image_watcher, &unregister_ctx]() {
mock_image_watcher.unregister_watch(&unregister_ctx);
});
ASSERT_EQ(0, register_ctx.wait());
ceph_assert(m_watch_ctx != nullptr);
- m_watch_ctx->handle_error(0, -EBLACKLISTED);
+ m_watch_ctx->handle_error(0, -EBLOCKLISTED);
ASSERT_EQ(0, unregister_ctx.wait());
}
assert_equal({}, validate_command(sigdict, ['osd', 'lspools',
'toomany']))
- def test_blacklist_ls(self):
- self.assert_valid_command(['osd', 'blacklist', 'ls'])
- assert_equal({}, validate_command(sigdict, ['osd', 'blacklist']))
- assert_equal({}, validate_command(sigdict, ['osd', 'blacklist',
+ def test_blocklist_ls(self):
+ self.assert_valid_command(['osd', 'blocklist', 'ls'])
+ assert_equal({}, validate_command(sigdict, ['osd', 'blocklist']))
+ assert_equal({}, validate_command(sigdict, ['osd', 'blocklist',
'ls', 'toomany']))
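
For reference, the invocations these validation tests accept (here and in ``test_blocklist`` below) look like the following; the optional trailing argument is the blocklist duration in seconds. A sketch of typical usage, not captured output::

    ceph osd blocklist ls
    ceph osd blocklist add 1.2.3.4/567 600.40
    ceph osd blocklist rm 1.2.3.4/567
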
def test_crush_rule(self):
uuid,
'toomany']))
- def test_blacklist(self):
+ def test_blocklist(self):
for action in ('add', 'rm'):
- self.assert_valid_command(['osd', 'blacklist', action,
+ self.assert_valid_command(['osd', 'blocklist', action,
'1.2.3.4/567'])
- self.assert_valid_command(['osd', 'blacklist', action,
+ self.assert_valid_command(['osd', 'blocklist', action,
'1.2.3.4'])
- self.assert_valid_command(['osd', 'blacklist', action,
+ self.assert_valid_command(['osd', 'blocklist', action,
'1.2.3.4/567', '600.40'])
- self.assert_valid_command(['osd', 'blacklist', action,
+ self.assert_valid_command(['osd', 'blocklist', action,
'1.2.3.4', '600.40'])
- assert_equal({}, validate_command(sigdict, ['osd', 'blacklist',
+ assert_equal({}, validate_command(sigdict, ['osd', 'blocklist',
action,
'invalid',
'600.40']))
- assert_equal({}, validate_command(sigdict, ['osd', 'blacklist',
+ assert_equal({}, validate_command(sigdict, ['osd', 'blocklist',
action,
'1.2.3.4/567',
'-1.0']))
- assert_equal({}, validate_command(sigdict, ['osd', 'blacklist',
+ assert_equal({}, validate_command(sigdict, ['osd', 'blocklist',
action,
'1.2.3.4/567',
'600.40',
assert_raises(RadosStateError, rados.osd_command, 0, '', b'')
assert_raises(RadosStateError, rados.pg_command, '', '', b'')
assert_raises(RadosStateError, rados.wait_for_latest_osdmap)
- assert_raises(RadosStateError, rados.blacklist_add, '127.0.0.1/123', 0)
+ assert_raises(RadosStateError, rados.blocklist_add, '127.0.0.1/123', 0)
def test_configuring(self):
rados = Rados(conffile='')
fsid = self.rados.get_fsid()
assert re.match('[0-9a-f\-]{36}', fsid, re.I)
- def test_blacklist_add(self):
- self.rados.blacklist_add("1.2.3.4/123", 1)
+ def test_blocklist_add(self):
+ self.rados.blocklist_add("1.2.3.4/123", 1)
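
Outside the test harness, the renamed binding call is used the same way; a minimal sketch, assuming a reachable cluster and the default config search path::

    import rados

    cluster = rados.Rados(conffile='')
    cluster.connect()
    try:
        # same address form as the test above; expire after 1 second
        cluster.blocklist_add("1.2.3.4/123", 1)
    finally:
        cluster.shutdown()
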
@attr('stats')
def test_get_cluster_stats(self):
return functools.wraps(fn)(_require_features)
return wrapper
-def blacklist_features(blacklisted_features):
+def blocklist_features(blocklisted_features):
def wrapper(fn):
- def _blacklist_features(*args, **kwargs):
+ def _blocklist_features(*args, **kwargs):
global features
- for feature in blacklisted_features:
+ for feature in blocklisted_features:
if features is not None and feature & features == feature:
raise SkipTest
return fn(*args, **kwargs)
- return functools.wraps(fn)(_blacklist_features)
+ return functools.wraps(fn)(_blocklist_features)
return wrapper
def test_version():
self.image = None
@require_new_format()
- @blacklist_features([RBD_FEATURE_EXCLUSIVE_LOCK])
+ @blocklist_features([RBD_FEATURE_EXCLUSIVE_LOCK])
def test_update_features(self):
features = self.image.features()
self.image.update_features(RBD_FEATURE_EXCLUSIVE_LOCK, True)
def test_remove_with_exclusive_lock(self):
assert_raises(ImageBusy, remove_image)
- @blacklist_features([RBD_FEATURE_EXCLUSIVE_LOCK])
+ @blocklist_features([RBD_FEATURE_EXCLUSIVE_LOCK])
def test_remove_with_snap(self):
self.image.create_snap('snap1')
assert_raises(ImageHasSnapshots, remove_image)
self.image.remove_snap('snap1')
- @blacklist_features([RBD_FEATURE_EXCLUSIVE_LOCK])
+ @blocklist_features([RBD_FEATURE_EXCLUSIVE_LOCK])
def test_remove_with_watcher(self):
data = rand_data(256)
self.image.write(data, 0)
image.lock_release()
def test_break_lock(self):
- blacklist_rados = Rados(conffile='')
- blacklist_rados.connect()
+ blocklist_rados = Rados(conffile='')
+ blocklist_rados.connect()
try:
- blacklist_ioctx = blacklist_rados.open_ioctx(pool_name)
+ blocklist_ioctx = blocklist_rados.open_ioctx(pool_name)
try:
- rados2.conf_set('rbd_blacklist_on_break_lock', 'true')
+ rados2.conf_set('rbd_blocklist_on_break_lock', 'true')
with Image(ioctx2, image_name) as image, \
- Image(blacklist_ioctx, image_name) as blacklist_image:
+ Image(blocklist_ioctx, image_name) as blocklist_image:
lock_owners = list(image.lock_get_owners())
eq(0, len(lock_owners))
- blacklist_image.lock_acquire(RBD_LOCK_MODE_EXCLUSIVE)
+ blocklist_image.lock_acquire(RBD_LOCK_MODE_EXCLUSIVE)
assert_raises(ReadOnlyImage, image.lock_acquire,
RBD_LOCK_MODE_EXCLUSIVE)
lock_owners = list(image.lock_get_owners())
lock_owners[0]['owner'])
assert_raises(ConnectionShutdown,
- blacklist_image.is_exclusive_lock_owner)
+ blocklist_image.is_exclusive_lock_owner)
- blacklist_rados.wait_for_latest_osdmap()
+ blocklist_rados.wait_for_latest_osdmap()
data = rand_data(256)
assert_raises(ConnectionShutdown,
- blacklist_image.write, data, 0)
+ blocklist_image.write, data, 0)
image.lock_acquire(RBD_LOCK_MODE_EXCLUSIVE)
try:
- blacklist_image.close()
+ blocklist_image.close()
except ConnectionShutdown:
pass
finally:
- blacklist_ioctx.close()
+ blocklist_ioctx.close()
finally:
- blacklist_rados.shutdown()
+ blocklist_rados.shutdown()
class TestMirroring(object):
expect_listener_images_unmapped(mock_listener, 1, &released_global_image_ids,
&release_peer_ack_ctxs);
- // instance blacklisted -- ACQUIRE request fails
+ // instance blocklisted -- ACQUIRE request fails
remote_peer_ack_nowait(mock_image_map.get(), shuffled_global_image_ids,
- -EBLACKLISTED, &peer_ack_ctxs);
+ -EBLOCKLISTED, &peer_ack_ctxs);
ASSERT_TRUE(wait_for_listener_notify(shuffled_global_image_ids.size()));
std::map<std::string, Context*> remap_peer_ack_ctxs;
mock_listener, shuffled_global_image_ids, 0,
&remap_peer_ack_ctxs);
- // instance blacklisted -- RELEASE request fails
+ // instance blocklisted -- RELEASE request fails
remote_peer_ack_listener_wait(mock_image_map.get(), shuffled_global_image_ids,
-ENOENT, &release_peer_ack_ctxs);
wait_for_scheduled_task();
MOCK_METHOD0(get_local_image_id, const std::string &());
MOCK_METHOD0(is_running, bool());
MOCK_METHOD0(is_stopped, bool());
- MOCK_METHOD0(is_blacklisted, bool());
+ MOCK_METHOD0(is_blocklisted, bool());
MOCK_CONST_METHOD0(is_finished, bool());
MOCK_METHOD1(set_finished, void(bool));
C_SaferCond on_acquire;
EXPECT_CALL(mock_image_replayer, add_peer(_));
EXPECT_CALL(mock_image_replayer, is_stopped()).WillOnce(Return(true));
- EXPECT_CALL(mock_image_replayer, is_blacklisted()).WillOnce(Return(false));
+ EXPECT_CALL(mock_image_replayer, is_blocklisted()).WillOnce(Return(false));
EXPECT_CALL(mock_image_replayer, is_finished()).WillOnce(Return(false));
EXPECT_CALL(mock_image_replayer, start(_, false))
.WillOnce(CompleteContext(0));
C_SaferCond on_acquire;
EXPECT_CALL(mock_image_replayer, add_peer(_));
EXPECT_CALL(mock_image_replayer, is_stopped()).WillOnce(Return(true));
- EXPECT_CALL(mock_image_replayer, is_blacklisted()).WillOnce(Return(false));
+ EXPECT_CALL(mock_image_replayer, is_blocklisted()).WillOnce(Return(false));
EXPECT_CALL(mock_image_replayer, is_finished()).WillOnce(Return(false));
EXPECT_CALL(mock_image_replayer, start(_, false))
.WillOnce(CompleteContext(0));
EXPECT_CALL(mock_image_replayer, get_health_state()).WillOnce(
Return(image_replayer::HEALTH_STATE_OK));
EXPECT_CALL(mock_image_replayer, is_stopped()).WillOnce(Return(true));
- EXPECT_CALL(mock_image_replayer, is_blacklisted()).WillOnce(Return(false));
+ EXPECT_CALL(mock_image_replayer, is_blocklisted()).WillOnce(Return(false));
EXPECT_CALL(mock_image_replayer, is_finished()).WillOnce(Return(true));
EXPECT_CALL(mock_image_replayer, destroy());
EXPECT_CALL(mock_service_daemon,
EXPECT_CALL(mock_image_replayer, add_peer(_));
EXPECT_CALL(mock_image_replayer, is_stopped()).WillOnce(Return(true));
- EXPECT_CALL(mock_image_replayer, is_blacklisted()).WillOnce(Return(false));
+ EXPECT_CALL(mock_image_replayer, is_blocklisted()).WillOnce(Return(false));
EXPECT_CALL(mock_image_replayer, is_finished()).WillOnce(Return(false));
EXPECT_CALL(mock_image_replayer, start(_, false))
.WillOnce(CompleteContext(0));
librbd::AsioEngine& asio_engine,
const std::string& oid, librbd::Watcher *watcher,
managed_lock::Mode mode,
- bool blacklist_on_break_lock,
- uint32_t blacklist_expire_seconds) {
+ bool blocklist_on_break_lock,
+ uint32_t blocklist_expire_seconds) {
ceph_assert(s_instance != nullptr);
return s_instance;
}
expect_acquire_lock(mock_managed_lock, 0);
ASSERT_EQ(0, instance_watcher->init());
- // Acquire image on dead (blacklisted) instance
+ // Acquire image on dead (blocklisted) instance
C_SaferCond on_acquire;
instance_watcher->notify_image_acquire("dead instance", "global image id",
&on_acquire);
expect_acquire_lock(mock_managed_lock, 0);
ASSERT_EQ(0, instance_watcher->init());
- // Release image on dead (blacklisted) instance
+ // Release image on dead (blocklisted) instance
C_SaferCond on_acquire;
instance_watcher->notify_image_release("dead instance", "global image id",
&on_acquire);
struct ManagedLock<MockTestImageCtx> {
ManagedLock(librados::IoCtx& ioctx, librbd::AsioEngine& asio_engine,
const std::string& oid, librbd::Watcher *watcher,
- managed_lock::Mode mode, bool blacklist_on_break_lock,
- uint32_t blacklist_expire_seconds)
+ managed_lock::Mode mode, bool blocklist_on_break_lock,
+ uint32_t blocklist_expire_seconds)
: m_work_queue(asio_engine.get_work_queue()) {
MockManagedLock::get_instance().construct();
}
return s_instances[pool_id];
}
- MOCK_METHOD0(is_blacklisted, bool());
+ MOCK_METHOD0(is_blocklisted, bool());
MOCK_METHOD0(get_image_count, uint64_t());
return namespace_replayer;
}
- MOCK_METHOD0(is_blacklisted, bool());
+ MOCK_METHOD0(is_blocklisted, bool());
MOCK_METHOD0(get_instance_id, std::string());
MOCK_METHOD1(init, void(Context*));
return s_instance;
}
- MOCK_METHOD0(is_blacklisted, bool());
+ MOCK_METHOD0(is_blocklisted, bool());
MOCK_METHOD0(is_leader, bool());
MOCK_METHOD0(release_leader, void());
}));
}
- void expect_leader_watcher_is_blacklisted(
- MockLeaderWatcher &mock_leader_watcher, bool blacklisted) {
- EXPECT_CALL(mock_leader_watcher, is_blacklisted())
- .WillRepeatedly(Return(blacklisted));
+ void expect_leader_watcher_is_blocklisted(
+ MockLeaderWatcher &mock_leader_watcher, bool blocklisted) {
+ EXPECT_CALL(mock_leader_watcher, is_blocklisted())
+ .WillRepeatedly(Return(blocklisted));
}
- void expect_namespace_replayer_is_blacklisted(
+ void expect_namespace_replayer_is_blocklisted(
MockNamespaceReplayer &mock_namespace_replayer,
- bool blacklisted) {
- EXPECT_CALL(mock_namespace_replayer, is_blacklisted())
- .WillRepeatedly(Return(blacklisted));
+ bool blocklisted) {
+ EXPECT_CALL(mock_namespace_replayer, is_blocklisted())
+ .WillRepeatedly(Return(blocklisted));
}
void expect_namespace_replayer_get_instance_id(
peer_spec.key = "234";
auto mock_default_namespace_replayer = new MockNamespaceReplayer();
- expect_namespace_replayer_is_blacklisted(*mock_default_namespace_replayer,
+ expect_namespace_replayer_is_blocklisted(*mock_default_namespace_replayer,
false);
MockThreads mock_threads(m_threads);
auto mock_leader_watcher = new MockLeaderWatcher();
expect_leader_watcher_get_leader_instance_id(*mock_leader_watcher);
- expect_leader_watcher_is_blacklisted(*mock_leader_watcher, false);
+ expect_leader_watcher_is_blocklisted(*mock_leader_watcher, false);
InSequence seq;
peer_spec.key = "234";
auto mock_default_namespace_replayer = new MockNamespaceReplayer();
- expect_namespace_replayer_is_blacklisted(*mock_default_namespace_replayer,
+ expect_namespace_replayer_is_blocklisted(*mock_default_namespace_replayer,
false);
MockThreads mock_threads(m_threads);
auto mock_leader_watcher = new MockLeaderWatcher();
expect_leader_watcher_get_leader_instance_id(*mock_leader_watcher);
expect_leader_watcher_list_instances(*mock_leader_watcher);
- expect_leader_watcher_is_blacklisted(*mock_leader_watcher, false);
+ expect_leader_watcher_is_blocklisted(*mock_leader_watcher, false);
InSequence seq;
MockNamespace mock_namespace;
auto mock_default_namespace_replayer = new MockNamespaceReplayer();
- expect_namespace_replayer_is_blacklisted(*mock_default_namespace_replayer,
+ expect_namespace_replayer_is_blocklisted(*mock_default_namespace_replayer,
false);
auto mock_ns1_namespace_replayer = new MockNamespaceReplayer("ns1");
- expect_namespace_replayer_is_blacklisted(*mock_ns1_namespace_replayer,
+ expect_namespace_replayer_is_blocklisted(*mock_ns1_namespace_replayer,
false);
auto mock_ns2_namespace_replayer = new MockNamespaceReplayer("ns2");
- expect_namespace_replayer_is_blacklisted(*mock_ns2_namespace_replayer,
+ expect_namespace_replayer_is_blocklisted(*mock_ns2_namespace_replayer,
false);
MockThreads mock_threads(m_threads);
auto mock_leader_watcher = new MockLeaderWatcher();
expect_leader_watcher_get_leader_instance_id(*mock_leader_watcher);
expect_leader_watcher_list_instances(*mock_leader_watcher);
- expect_leader_watcher_is_blacklisted(*mock_leader_watcher, false);
+ expect_leader_watcher_is_blocklisted(*mock_leader_watcher, false);
auto& mock_cluster = get_mock_cluster();
auto mock_local_rados_client = mock_cluster.do_create_rados_client(
MockNamespace mock_namespace;
auto mock_default_namespace_replayer = new MockNamespaceReplayer();
- expect_namespace_replayer_is_blacklisted(*mock_default_namespace_replayer,
+ expect_namespace_replayer_is_blocklisted(*mock_default_namespace_replayer,
false);
auto mock_ns1_namespace_replayer = new MockNamespaceReplayer("ns1");
auto mock_ns2_namespace_replayer = new MockNamespaceReplayer("ns2");
- expect_namespace_replayer_is_blacklisted(*mock_ns2_namespace_replayer,
+ expect_namespace_replayer_is_blocklisted(*mock_ns2_namespace_replayer,
false);
auto mock_ns3_namespace_replayer = new MockNamespaceReplayer("ns3");
auto mock_leader_watcher = new MockLeaderWatcher();
expect_leader_watcher_get_leader_instance_id(*mock_leader_watcher);
expect_leader_watcher_list_instances(*mock_leader_watcher);
- expect_leader_watcher_is_blacklisted(*mock_leader_watcher, false);
+ expect_leader_watcher_is_blocklisted(*mock_leader_watcher, false);
auto& mock_cluster = get_mock_cluster();
auto mock_local_rados_client = mock_cluster.do_create_rados_client(
C_SaferCond ctx;
mock_pool_watcher.init(&ctx);
- ASSERT_EQ(-EBLACKLISTED, ctx.wait());
+ ASSERT_EQ(-EBLOCKLISTED, ctx.wait());
- ASSERT_TRUE(mock_pool_watcher.is_blacklisted());
+ ASSERT_TRUE(mock_pool_watcher.is_blocklisted());
expect_mirroring_watcher_unregister(mock_mirroring_watcher, 0);
ASSERT_EQ(0, when_shut_down(mock_pool_watcher));
ASSERT_EQ(0, when_shut_down(mock_pool_watcher));
}
-TEST_F(TestMockPoolWatcher, RefreshBlacklist) {
+TEST_F(TestMockPoolWatcher, RefreshBlocklist) {
MockThreads mock_threads(m_threads);
expect_work_queue(mock_threads);
expect_mirroring_watcher_register(mock_mirroring_watcher, 0);
MockRefreshImagesRequest mock_refresh_images_request;
- expect_refresh_images(mock_refresh_images_request, {}, -EBLACKLISTED);
+ expect_refresh_images(mock_refresh_images_request, {}, -EBLOCKLISTED);
MockListener mock_listener(this);
MockPoolWatcher mock_pool_watcher(&mock_threads, m_remote_io_ctx,
"remote uuid", mock_listener);
C_SaferCond ctx;
mock_pool_watcher.init(&ctx);
- ASSERT_EQ(-EBLACKLISTED, ctx.wait());
- ASSERT_TRUE(mock_pool_watcher.is_blacklisted());
+ ASSERT_EQ(-EBLOCKLISTED, ctx.wait());
+ ASSERT_TRUE(mock_pool_watcher.is_blocklisted());
expect_mirroring_watcher_unregister(mock_mirroring_watcher, 0);
ASSERT_EQ(0, when_shut_down(mock_pool_watcher));
ASSERT_EQ(0, when_shut_down(mock_pool_watcher));
}
-TEST_F(TestMockPoolWatcher, RewatchBlacklist) {
+TEST_F(TestMockPoolWatcher, RewatchBlocklist) {
MockThreads mock_threads(m_threads);
expect_work_queue(mock_threads);
ASSERT_EQ(0, ctx.wait());
ASSERT_TRUE(wait_for_update(1));
- MirroringWatcher::get_instance().handle_rewatch_complete(-EBLACKLISTED);
- ASSERT_TRUE(mock_pool_watcher.is_blacklisted());
+ MirroringWatcher::get_instance().handle_rewatch_complete(-EBLOCKLISTED);
+ ASSERT_TRUE(mock_pool_watcher.is_blocklisted());
expect_mirroring_watcher_unregister(mock_mirroring_watcher, 0);
ASSERT_EQ(0, when_shut_down(mock_pool_watcher));
// 192.168.1.0/24
const rgw::IAM::MaskedIP allowedIPv4Range = { false, rgw::IAM::Address("11000000101010000000000100000000"), 24 };
// 192.168.1.1/32
- const rgw::IAM::MaskedIP blacklistedIPv4 = { false, rgw::IAM::Address("11000000101010000000000100000001"), 32 };
+ const rgw::IAM::MaskedIP blocklistedIPv4 = { false, rgw::IAM::Address("11000000101010000000000100000001"), 32 };
// 2001:db8:85a3:0:0:8a2e:370:7334/128
const rgw::IAM::MaskedIP allowedIPv6 = { true, rgw::IAM::Address("00100000000000010000110110111000100001011010001100000000000000000000000000000000100010100010111000000011011100000111001100110100"), 128 };
// ::1
- const rgw::IAM::MaskedIP blacklistedIPv6 = { true, rgw::IAM::Address(1), 128 };
+ const rgw::IAM::MaskedIP blocklistedIPv6 = { true, rgw::IAM::Address(1), 128 };
// 2001:db8:85a3:0:0:8a2e:370:7330/124
const rgw::IAM::MaskedIP allowedIPv6Range = { true, rgw::IAM::Address("00100000000000010000110110111000100001011010001100000000000000000000000000000000100010100010111000000011011100000111001100110000"), 124 };
public:
TEST_F(IPPolicyTest, MaskedIPOperations) {
EXPECT_EQ(stringify(allowedIPv4Range), "192.168.1.0/24");
- EXPECT_EQ(stringify(blacklistedIPv4), "192.168.1.1/32");
+ EXPECT_EQ(stringify(blocklistedIPv4), "192.168.1.1/32");
EXPECT_EQ(stringify(allowedIPv6), "2001:db8:85a3:0:0:8a2e:370:7334/128");
EXPECT_EQ(stringify(allowedIPv6Range), "2001:db8:85a3:0:0:8a2e:370:7330/124");
- EXPECT_EQ(stringify(blacklistedIPv6), "0:0:0:0:0:0:0:1/128");
- EXPECT_EQ(allowedIPv4Range, blacklistedIPv4);
+ EXPECT_EQ(stringify(blocklistedIPv6), "0:0:0:0:0:0:0:1/128");
+ EXPECT_EQ(allowedIPv4Range, blocklistedIPv4);
EXPECT_EQ(allowedIPv6Range, allowedIPv6);
}
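
The fixture addresses above are spelled out as raw bit strings; the correspondence for the blocklisted IPv4 fixture can be checked with a few lines of Python::

    def to_bits(dotted):
        """Dotted-quad IPv4 address to the 32-bit string used in the fixtures."""
        return ''.join('{:08b}'.format(int(octet)) for octet in dotted.split('.'))

    # 192.168.1.1/32, the blocklistedIPv4 fixture above
    assert to_bits('192.168.1.1') == '11000000101010000000000100000001'
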
TEST_F(IPPolicyTest, asNetworkIPv4) {
auto actualIPv4 = rgw::IAM::Condition::as_network("192.168.1.1");
ASSERT_TRUE(actualIPv4.is_initialized());
- EXPECT_EQ(*actualIPv4, blacklistedIPv4);
+ EXPECT_EQ(*actualIPv4, blocklistedIPv4);
}
TEST_F(IPPolicyTest, asNetworkIPv6Range) {
auto fullp = Policy(cct.get(), arbitrary_tenant,
bufferlist::static_from_string(ip_address_full_example));
Environment e;
- Environment allowedIP, blacklistedIP, allowedIPv6, blacklistedIPv6;
+ Environment allowedIP, blocklistedIP, allowedIPv6, blocklistedIPv6;
allowedIP["aws:SourceIp"] = "192.168.1.2";
allowedIPv6["aws:SourceIp"] = "::1";
- blacklistedIP["aws:SourceIp"] = "192.168.1.1";
- blacklistedIPv6["aws:SourceIp"] = "2001:0db8:85a3:0000:0000:8a2e:0370:7334";
+ blocklistedIP["aws:SourceIp"] = "192.168.1.1";
+ blocklistedIPv6["aws:SourceIp"] = "2001:0db8:85a3:0000:0000:8a2e:0370:7334";
auto trueacct = FakeIdentity(
Principal::tenant("ACCOUNT-ID-WITHOUT-HYPHENS"));
ARN(Partition::aws, Service::s3,
"", arbitrary_tenant, "example_bucket")),
Effect::Allow);
- EXPECT_EQ(allowp.eval(blacklistedIPv6, trueacct, s3ListBucket,
+ EXPECT_EQ(allowp.eval(blocklistedIPv6, trueacct, s3ListBucket,
ARN(Partition::aws, Service::s3,
"", arbitrary_tenant, "example_bucket")),
Effect::Pass);
"", arbitrary_tenant, "example_bucket/myobject")),
Effect::Deny);
- EXPECT_EQ(denyp.eval(blacklistedIP, trueacct, s3ListBucket,
+ EXPECT_EQ(denyp.eval(blocklistedIP, trueacct, s3ListBucket,
ARN(Partition::aws, Service::s3,
"", arbitrary_tenant, "example_bucket")),
Effect::Pass);
- EXPECT_EQ(denyp.eval(blacklistedIP, trueacct, s3ListBucket,
+ EXPECT_EQ(denyp.eval(blocklistedIP, trueacct, s3ListBucket,
ARN(Partition::aws, Service::s3,
"", arbitrary_tenant, "example_bucket/myobject")),
Effect::Pass);
- EXPECT_EQ(denyp.eval(blacklistedIPv6, trueacct, s3ListBucket,
+ EXPECT_EQ(denyp.eval(blocklistedIPv6, trueacct, s3ListBucket,
ARN(Partition::aws, Service::s3,
"", arbitrary_tenant, "example_bucket")),
Effect::Pass);
- EXPECT_EQ(denyp.eval(blacklistedIPv6, trueacct, s3ListBucket,
+ EXPECT_EQ(denyp.eval(blocklistedIPv6, trueacct, s3ListBucket,
ARN(Partition::aws, Service::s3,
"", arbitrary_tenant, "example_bucket/myobject")),
Effect::Pass);
"", arbitrary_tenant, "example_bucket/myobject")),
Effect::Allow);
- EXPECT_EQ(fullp.eval(blacklistedIP, trueacct, s3ListBucket,
+ EXPECT_EQ(fullp.eval(blocklistedIP, trueacct, s3ListBucket,
ARN(Partition::aws, Service::s3,
"", arbitrary_tenant, "example_bucket")),
Effect::Pass);
- EXPECT_EQ(fullp.eval(blacklistedIP, trueacct, s3ListBucket,
+ EXPECT_EQ(fullp.eval(blocklistedIP, trueacct, s3ListBucket,
ARN(Partition::aws, Service::s3,
"", arbitrary_tenant, "example_bucket/myobject")),
Effect::Pass);
"", arbitrary_tenant, "example_bucket/myobject")),
Effect::Allow);
- EXPECT_EQ(fullp.eval(blacklistedIPv6, trueacct, s3ListBucket,
+ EXPECT_EQ(fullp.eval(blocklistedIPv6, trueacct, s3ListBucket,
ARN(Partition::aws, Service::s3,
"", arbitrary_tenant, "example_bucket")),
Effect::Pass);
- EXPECT_EQ(fullp.eval(blacklistedIPv6, trueacct, s3ListBucket,
+ EXPECT_EQ(fullp.eval(blocklistedIPv6, trueacct, s3ListBucket,
ARN(Partition::aws, Service::s3,
"", arbitrary_tenant, "example_bucket/myobject")),
Effect::Pass);
cluster.ioctx_create(pool_name.c_str(), ioctx);
ASSERT_EQ(0, ioctx.watch("foo", 0, &handle, &ctx));
- bool do_blacklist = i % 2;
- if (do_blacklist) {
- cluster.test_blacklist_self(true);
- std::cerr << "blacklisted" << std::endl;
+ bool do_blocklist = i % 2;
+ if (do_blocklist) {
+ cluster.test_blocklist_self(true);
+ std::cerr << "blocklisted" << std::endl;
sleep(1);
}
bufferlist bl2;
ASSERT_EQ(0, nioctx.notify("foo", 0, bl2));
- if (do_blacklist) {
+ if (do_blocklist) {
sleep(1); // Give a chance to see an incorrect notify
} else {
TestAlarm alarm;
sem_wait(sem);
}
- if (do_blacklist) {
- cluster.test_blacklist_self(false);
+ if (do_blocklist) {
+ cluster.test_blocklist_self(false);
}
ioctx.unwatch("foo", handle);
int error_code,
double retry_delay) {
dout(20) << "info=" << *delete_info << ", r=" << error_code << dendl;
- if (error_code == -EBLACKLISTED) {
+ if (error_code == -EBLOCKLISTED) {
std::lock_guard locker{m_lock};
- derr << "blacklisted while deleting local image" << dendl;
+ derr << "blocklisted while deleting local image" << dendl;
complete_active_delete(delete_info, error_code);
return;
}
m_finished = finished;
}
- inline bool is_blacklisted() const {
+ inline bool is_blocklisted() const {
std::lock_guard locker{m_lock};
- return (m_last_r == -EBLACKLISTED);
+ return (m_last_r == -EBLOCKLISTED);
}
image_replayer::HealthState get_health_state() const;
}
template <typename I>
-bool InstanceReplayer<I>::is_blacklisted() const {
+bool InstanceReplayer<I>::is_blocklisted() const {
std::lock_guard locker{m_lock};
- return m_blacklisted;
+ return m_blocklisted;
}
template <typename I>
std::string global_image_id = image_replayer->get_global_image_id();
if (!image_replayer->is_stopped()) {
return;
- } else if (image_replayer->is_blacklisted()) {
- derr << "global_image_id=" << global_image_id << ": blacklisted detected "
+ } else if (image_replayer->is_blocklisted()) {
+ derr << "global_image_id=" << global_image_id << ": blocklisted detected "
<< "during image replay" << dendl;
- m_blacklisted = true;
+ m_blocklisted = true;
return;
} else if (image_replayer->is_finished()) {
// TODO temporary until policy integrated
PoolMetaCache* pool_meta_cache);
~InstanceReplayer();
- bool is_blacklisted() const;
+ bool is_blocklisted() const;
int init();
void shut_down();
Context *m_image_state_check_task = nullptr;
Context *m_on_shut_down = nullptr;
bool m_manual_stop = false;
- bool m_blacklisted = false;
+ bool m_blocklisted = false;
void wait_for_ops();
void handle_wait_for_ops(int r);
unique_lock_name("rbd::mirror::InstanceWatcher::m_lock", this))),
m_instance_lock(librbd::ManagedLock<I>::create(
m_ioctx, asio_engine, m_oid, this, librbd::managed_lock::EXCLUSIVE, true,
- m_cct->_conf.get_val<uint64_t>("rbd_blacklist_expire_seconds"))) {
+ m_cct->_conf.get_val<uint64_t>("rbd_blocklist_expire_seconds"))) {
}
template <typename I>
dout(10) << "r=" << r << ", instance_ids=" << instance_ids << dendl;
ceph_assert(r == 0);
- // fire removed notification now that instances have been blacklisted
+ // fire removed notification now that instances have been blocklisted
m_threads->work_queue->queue(
new C_NotifyInstancesRemoved(this, instance_ids), 0);
m_instance_id(stringify(m_notifier_id)),
m_leader_lock(new LeaderLock(m_ioctx, *m_threads->asio_engine, m_oid, this,
true, m_cct->_conf.get_val<uint64_t>(
- "rbd_blacklist_expire_seconds"))) {
+ "rbd_blocklist_expire_seconds"))) {
}
template <typename I>
}
template <typename I>
-bool LeaderWatcher<I>::is_blacklisted() const {
+bool LeaderWatcher<I>::is_blocklisted() const {
std::lock_guard locker{m_lock};
- return m_blacklisted;
+ return m_blocklisted;
}
template <typename I>
void LeaderWatcher<I>::handle_rewatch_complete(int r) {
dout(5) << "r=" << r << dendl;
- if (r == -EBLACKLISTED) {
- dout(1) << "blacklisted detected" << dendl;
- m_blacklisted = true;
+ if (r == -EBLOCKLISTED) {
+ dout(1) << "blocklisted detected" << dendl;
+ m_blocklisted = true;
return;
}
void init(Context *on_finish);
void shut_down(Context *on_finish);
- bool is_blacklisted() const;
+ bool is_blocklisted() const;
bool is_leader() const;
bool is_releasing_leader() const;
bool get_leader_instance_id(std::string *instance_id) const;
LeaderLock(librados::IoCtx& ioctx, librbd::AsioEngine& asio_engine,
const std::string& oid, LeaderWatcher *watcher,
- bool blacklist_on_break_lock,
- uint32_t blacklist_expire_seconds)
+ bool blocklist_on_break_lock,
+ uint32_t blocklist_expire_seconds)
: Parent(ioctx, asio_engine, oid, watcher,
- librbd::managed_lock::EXCLUSIVE, blacklist_on_break_lock,
- blacklist_expire_seconds),
+ librbd::managed_lock::EXCLUSIVE, blocklist_on_break_lock,
+ blocklist_expire_seconds),
watcher(watcher) {
}
Instances<ImageCtxT> *m_instances = nullptr;
librbd::managed_lock::Locker m_locker;
- bool m_blacklisted = false;
+ bool m_blocklisted = false;
AsyncOpTracker m_timer_op_tracker;
Context *m_timer_task = nullptr;
// TODO: make async
pool_replayer->shut_down();
pool_replayer->init(site_name);
- } else if (pool_replayer->is_blacklisted()) {
- derr << "restarting blacklisted pool replayer for " << peer << dendl;
+ } else if (pool_replayer->is_blocklisted()) {
+ derr << "restarting blocklisted pool replayer for " << peer << dendl;
// TODO: make async
pool_replayer->shut_down();
pool_replayer->init(site_name);
}
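
The service loop above treats a blocklisted pool replayer like a failed one: it is torn down and re-initialized so it comes back with a fresh RADOS connection. Schematically, with stand-in names rather than the rbd-mirror API::

    def restart_blocklisted(pool_replayers, site_name):
        # pool_replayers: mapping of peer -> replayer-like object exposing
        # is_blocklisted()/shut_down()/init(), as in the C++ code above
        for peer, replayer in pool_replayers.items():
            if replayer.is_blocklisted():
                replayer.shut_down()
                replayer.init(site_name)
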
template <typename I>
-bool NamespaceReplayer<I>::is_blacklisted() const {
+bool NamespaceReplayer<I>::is_blocklisted() const {
std::lock_guard locker{m_lock};
- return m_instance_replayer->is_blacklisted() ||
+ return m_instance_replayer->is_blocklisted() ||
(m_local_pool_watcher &&
- m_local_pool_watcher->is_blacklisted()) ||
+ m_local_pool_watcher->is_blocklisted()) ||
(m_remote_pool_watcher &&
- m_remote_pool_watcher->is_blacklisted());
+ m_remote_pool_watcher->is_blocklisted());
}
template <typename I>
template <typename I>
void NamespaceReplayer<I>::handle_shut_down_image_map(int r, Context *on_finish) {
dout(5) << "r=" << r << dendl;
- if (r < 0 && r != -EBLACKLISTED) {
+ if (r < 0 && r != -EBLOCKLISTED) {
derr << "failed to shut down image map: " << cpp_strerror(r) << dendl;
}
NamespaceReplayer(const NamespaceReplayer&) = delete;
NamespaceReplayer& operator=(const NamespaceReplayer&) = delete;
- bool is_blacklisted() const;
+ bool is_blocklisted() const;
void init(Context *on_finish);
void shut_down(Context *on_finish);
}
template <typename I>
-bool PoolReplayer<I>::is_blacklisted() const {
+bool PoolReplayer<I>::is_blocklisted() const {
std::lock_guard locker{m_lock};
- return m_blacklisted;
+ return m_blocklisted;
}
template <typename I>
// reset state
m_stopping = false;
- m_blacklisted = false;
+ m_blocklisted = false;
m_site_name = site_name;
dout(10) << "replaying for " << m_peer << dendl;
std::unique_lock locker{m_lock};
- if (m_leader_watcher->is_blacklisted() ||
- m_default_namespace_replayer->is_blacklisted()) {
- m_blacklisted = true;
+ if (m_leader_watcher->is_blocklisted() ||
+ m_default_namespace_replayer->is_blocklisted()) {
+ m_blocklisted = true;
m_stopping = true;
}
for (auto &it : m_namespace_replayers) {
- if (it.second->is_blacklisted()) {
- m_blacklisted = true;
+ if (it.second->is_blocklisted()) {
+ m_blocklisted = true;
m_stopping = true;
break;
}
PoolReplayer(const PoolReplayer&) = delete;
PoolReplayer& operator=(const PoolReplayer&) = delete;
- bool is_blacklisted() const;
+ bool is_blocklisted() const;
bool is_leader() const;
bool is_running() const;
std::string m_site_name;
bool m_stopping = false;
bool m_manual_stop = false;
- bool m_blacklisted = false;
+ bool m_blocklisted = false;
RadosRef m_local_rados;
RadosRef m_remote_rados;
}
template <typename I>
-bool PoolWatcher<I>::is_blacklisted() const {
+bool PoolWatcher<I>::is_blocklisted() const {
std::lock_guard locker{m_lock};
- return m_blacklisted;
+ return m_blocklisted;
}
template <typename I>
Context *on_init_finish = nullptr;
if (r >= 0) {
refresh_images();
- } else if (r == -EBLACKLISTED) {
- dout(0) << "detected client is blacklisted" << dendl;
+ } else if (r == -EBLOCKLISTED) {
+ dout(0) << "detected client is blocklisted" << dendl;
std::lock_guard locker{m_lock};
- m_blacklisted = true;
+ m_blocklisted = true;
std::swap(on_init_finish, m_on_init_finish);
} else if (r == -ENOENT) {
dout(5) << "mirroring directory does not exist" << dendl;
std::swap(on_init_finish, m_on_init_finish);
schedule_listener();
- } else if (r == -EBLACKLISTED) {
- dout(0) << "detected client is blacklisted during image refresh" << dendl;
+ } else if (r == -EBLOCKLISTED) {
+ dout(0) << "detected client is blocklisted during image refresh" << dendl;
- m_blacklisted = true;
+ m_blocklisted = true;
std::swap(on_init_finish, m_on_init_finish);
} else {
retry_refresh = true;
void PoolWatcher<I>::handle_rewatch_complete(int r) {
dout(5) << "r=" << r << dendl;
- if (r == -EBLACKLISTED) {
- dout(0) << "detected client is blacklisted" << dendl;
+ if (r == -EBLOCKLISTED) {
+ dout(0) << "detected client is blocklisted" << dendl;
std::lock_guard locker{m_lock};
- m_blacklisted = true;
+ m_blocklisted = true;
return;
} else if (r == -ENOENT) {
dout(5) << "mirroring directory deleted" << dendl;
PoolWatcher(const PoolWatcher&) = delete;
PoolWatcher& operator=(const PoolWatcher&) = delete;
- bool is_blacklisted() const;
+ bool is_blocklisted() const;
void init(Context *on_finish = nullptr);
void shut_down(Context *on_finish);
Context *m_timer_ctx = nullptr;
AsyncOpTracker m_async_op_tracker;
- bool m_blacklisted = false;
+ bool m_blocklisted = false;
bool m_shutting_down = false;
bool m_image_ids_invalid = true;
bool m_refresh_in_progress = false;
dout(5) << "r=" << r << dendl;
- if (r == -EBLACKLISTED) {
+ if (r == -EBLOCKLISTED) {
- dout(0) << "detected client is blacklisted" << dendl;
+ dout(0) << "detected client is blocklisted" << dendl;
return;
} else if (r == -ENOENT) {
dout(5) << "trash directory deleted" << dendl;
}
Context* on_init_finish = nullptr;
- if (r == -EBLACKLISTED || r == -ENOENT) {
- if (r == -EBLACKLISTED) {
- dout(0) << "detected client is blacklisted" << dendl;
+ if (r == -EBLOCKLISTED || r == -ENOENT) {
+ if (r == -EBLOCKLISTED) {
+ dout(0) << "detected client is blocklisted" << dendl;
} else {
dout(0) << "detected pool no longer exists" << dendl;
}
Context *on_init_finish = nullptr;
if (r >= 0) {
trash_list(true);
- } else if (r == -EBLACKLISTED) {
- dout(0) << "detected client is blacklisted" << dendl;
+ } else if (r == -EBLOCKLISTED) {
+ dout(0) << "detected client is blocklisted" << dendl;
std::lock_guard locker{m_lock};
std::swap(on_init_finish, m_on_init_finish);
r = 0;
}
- if (r == -EBLACKLISTED) {
- dout(0) << "detected client is blacklisted during trash refresh" << dendl;
+ if (r == -EBLOCKLISTED) {
+ dout(0) << "detected client is blocklisted during trash refresh" << dendl;
m_trash_list_in_progress = false;
std::swap(on_init_finish, m_on_init_finish);
} else if (r >= 0 && images.size() < MAX_RETURN) {
m_last_image_id = images.rbegin()->first;
trash_list(false);
return;
- } else if (r < 0 && r != -EBLACKLISTED) {
+ } else if (r < 0 && r != -EBLOCKLISTED) {
derr << "failed to retrieve trash directory: " << cpp_strerror(r) << dendl;
schedule_trash_list(10);
}