David Zafman [Tue, 5 Jun 2018 18:11:23 +0000 (11:11 -0700)]
Merge pull request #22375 from dzafman/wip-scrub-omap-luminous
Reviewed-by: Kefu Chai <kchai@redhat.com>
Yan, Zheng [Tue, 5 Jun 2018 00:49:18 +0000 (08:49 +0800)]
Merge pull request #21187 from smithfarm/wip-23157-luminous
luminous: mds: underwater dentry check in CDir::_omap_fetched is racy
Yan, Zheng [Tue, 5 Jun 2018 00:44:24 +0000 (08:44 +0800)]
Merge pull request #21278 from batrick/i22696
luminous: client: dirty caps may never get the chance to flush
Yan, Zheng [Sat, 17 Feb 2018 01:37:48 +0000 (09:37 +0800)]
mds: fix check of underwater dentries
An underwater dentry is one that is dirty in our cache from journal
replay but had already been flushed to disk before the mds failed.
To decide whether a dentry is underwater, the original code compared the
dirty dentry's version to the on-disk dirfrag's version. This check is
racy because CDir::log_mark_dirty() can increase the dirfrag's version
without adding a log event. After mds failover, the dirfrag version
recovered from journal replay can be less than the on-disk dirfrag's
version, so a newly dirtied dentry's version can end up equal to or less
than the on-disk dirfrag's version. The race can cause incorrect fragstat/rstat.
Fixes: http://tracker.ceph.com/issues/23032
Signed-off-by: Yan, Zheng <zyan@redhat.com>
(cherry picked from commit 9d271696b7735ab5b7384cdd386d6ac3eaafe437)
Conflicts:
src/mds/CDir.cc - conditional being replaced by "if (!dn)" is different in luminous
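To make the race above concrete, here is a small self-contained C++ model (illustrative only: the names, types and version numbers are stand-ins, not the actual CDir::_omap_fetched() code) showing how a dentry dirtied after replay can compare as "underwater" against the on-disk dirfrag version:

    // Hypothetical model of the race; not the Ceph implementation.
    #include <cstdint>
    #include <iostream>

    struct DirfragModel {
      uint64_t version = 0;         // in-memory dirfrag version
      uint64_t ondisk_version = 0;  // version stored in the dirfrag's omap header
    };

    int main() {
      DirfragModel dir;
      dir.ondisk_version = 10;  // dirfrag was last flushed to disk at version 10

      // CDir::log_mark_dirty() can bump the version without adding a log event,
      // so after failover the version recovered from journal replay lags behind.
      dir.version = 8;

      // A dentry dirtied now is stamped with the next in-memory version.
      uint64_t dentry_version = ++dir.version;  // 9

      // Old (racy) check: "dentry version <= on-disk dirfrag version" => underwater.
      // 9 <= 10, so a genuinely dirty dentry is wrongly treated as already flushed,
      // which is how the fragstat/rstat accounting goes wrong.
      std::cout << std::boolalpha
                << (dentry_version <= dir.ondisk_version) << "\n";  // true
    }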
dongdong tao [Thu, 4 Jan 2018 07:05:41 +0000 (15:05 +0800)]
client: make mark_caps_clean and mark_caps_dirty as member function to Inode
Signed-off-by: dongdong tao <tdd21151186@gmail.com>
(cherry picked from commit 06ce613779a810dd979cb2eb15b510131a81fd74)
dongdong tao [Wed, 27 Dec 2017 15:47:16 +0000 (23:47 +0800)]
client: fix dirty caps might never be flushed
Fixes: http://tracker.ceph.com/issues/22546
Signed-off-by: dongdong tao <tdd21151186@gmail.com>
(cherry picked from commit aeb920be8ab5f0e5e47d82943c214b012bb8ec5c)
Conflicts:
src/client/Client.cc
Yuri Weinstein [Mon, 4 Jun 2018 14:32:52 +0000 (07:32 -0700)]
Merge pull request #22258 from yuriw/wip-yuriw-fix-luminous
luminous: qa/tests: added missed ubuntu_latest
Yan, Zheng [Sat, 2 Jun 2018 14:24:24 +0000 (22:24 +0800)]
Merge pull request #21475 from pdvian/wip-23704-luminous
luminous: fuse: wire up fuse_ll_access
Yan, Zheng [Sat, 2 Jun 2018 14:21:56 +0000 (22:21 +0800)]
Merge pull request #22119 from pdvian/wip-24049-luminous
luminous: ceph-fuse: missing dentries in readdir result
Yan, Zheng [Sat, 2 Jun 2018 01:14:41 +0000 (09:14 +0800)]
Merge branch 'luminous' into wip-24049-luminous
Yan, Zheng [Sat, 2 Jun 2018 01:06:54 +0000 (09:06 +0800)]
Merge branch 'luminous' into wip-23704-luminous
Yan, Zheng [Sat, 2 Jun 2018 00:36:48 +0000 (08:36 +0800)]
Merge pull request #21495 from pdvian/wip-23770-luminous
luminous: ceph-fuse: return proper exit code
Yan, Zheng [Sat, 2 Jun 2018 00:35:33 +0000 (08:35 +0800)]
Merge pull request #22176 from pdvian/wip-24107-luminous
luminous: mds: set could_consume to false when no purge queue item actually exe…
Yan, Zheng [Sat, 2 Jun 2018 00:34:47 +0000 (08:34 +0800)]
Merge pull request #22310 from ukernel/luminous-24341
luminous: mds: fix some memory leak
Yan, Zheng [Sat, 2 Jun 2018 00:34:19 +0000 (08:34 +0800)]
Merge pull request #22271 from pdvian/wip-24205-luminous
luminous: mds: broadcast quota to relevant clients when quota is explicitly set
Yan, Zheng [Sat, 2 Jun 2018 00:33:28 +0000 (08:33 +0800)]
Merge pull request #22221 from pdvian/wip-24201-luminous
luminous: client: fix issue of revoking non-auth caps
Yan, Zheng [Sat, 2 Jun 2018 00:32:54 +0000 (08:32 +0800)]
Merge pull request #22208 from pdvian/wip-24188-luminous
luminous: kceph: umount on evicted client blocks forever
Yan, Zheng [Sat, 2 Jun 2018 00:32:03 +0000 (08:32 +0800)]
Merge pull request #22171 from ukernel/luminous-24108
luminous: mds: avoid calling rejoin_gather_finish() two times successively
Yan, Zheng [Sat, 2 Jun 2018 00:31:22 +0000 (08:31 +0800)]
Merge pull request #22168 from ukernel/luminous-24207
luminous: client: avoid freeing inode when it contains TX buffer head
Yan, Zheng [Sat, 2 Jun 2018 00:30:30 +0000 (08:30 +0800)]
Merge pull request #22118 from pdvian/wip-24050-luminous
luminous: mds: include nfiles/nsubdirs of directory inode in MClientCaps
Yan, Zheng [Sat, 2 Jun 2018 00:30:11 +0000 (08:30 +0800)]
Merge pull request #22018 from batrick/i23991
luminous: client: hangs on umount if it had an MDS session evicted
Yan, Zheng [Sat, 2 Jun 2018 00:28:56 +0000 (08:28 +0800)]
Merge pull request #21990 from batrick/i23935
luminous: mds: don't discover inode/dirfrag when mds is in 'starting' state
Yan, Zheng [Sat, 2 Jun 2018 00:28:02 +0000 (08:28 +0800)]
Merge pull request #21989 from batrick/i24130
luminous: mds: handle imported session race
Yan, Zheng [Sat, 2 Jun 2018 00:27:43 +0000 (08:27 +0800)]
Merge pull request #21922 from pdvian/wip-23984-luminous
luminous: mds: mark new root inode dirty
Yan, Zheng [Sat, 2 Jun 2018 00:27:16 +0000 (08:27 +0800)]
Merge pull request #21921 from pdvian/wip-23982-luminous
luminous: qa: fix blacklisted check for test_lifecycle
Yan, Zheng [Sat, 2 Jun 2018 00:26:49 +0000 (08:26 +0800)]
Merge pull request #21901 from pdvian/wip-23951-luminous
luminous: mds: kick rdlock if waiting for dirfragtreelock
Yan, Zheng [Sat, 2 Jun 2018 00:26:28 +0000 (08:26 +0800)]
Merge pull request #21900 from pdvian/wip-23946-luminous
luminous: mds: crash when failover
Yan, Zheng [Sat, 2 Jun 2018 00:25:33 +0000 (08:25 +0800)]
Merge pull request #21874 from pdvian/wip-23936-luminous
luminous: cephfs-journal-tool: wait prezero ops before destroying journal
Yan, Zheng [Sat, 2 Jun 2018 00:24:58 +0000 (08:24 +0800)]
Merge pull request #21841 from pdvian/wip-23931-luminous
luminous: qa: remove racy/buggy test_purge_queue_op_rate
Yan, Zheng [Sat, 2 Jun 2018 00:24:31 +0000 (08:24 +0800)]
Merge pull request #21730 from joscollin/wip-23933-luminous
luminous: client: avoid second lock on client_lock
Yan, Zheng [Sat, 2 Jun 2018 00:23:21 +0000 (08:23 +0800)]
Merge pull request #21617 from pdvian/wip-23835-luminous
luminous: mds: fix occasional dir rstat inconsistency between multi-MDSes
Yan, Zheng [Sat, 2 Jun 2018 00:22:30 +0000 (08:22 +0800)]
Merge pull request #21600 from joscollin/wip-23475-luminous
luminous: ceph-fuse: trim ceph-fuse -V output
Yan, Zheng [Sat, 2 Jun 2018 00:21:41 +0000 (08:21 +0800)]
Merge pull request #21616 from joscollin/wip-23308-luminous
luminous: doc: Fix -d description in ceph-fuse
Yan, Zheng [Sat, 2 Jun 2018 00:19:58 +0000 (08:19 +0800)]
Merge pull request #21899 from pdvian/wip-23950-luminous
luminous: mds: trim log during shutdown to clean metadata
Yan, Zheng [Sat, 2 Jun 2018 00:15:10 +0000 (08:15 +0800)]
Merge pull request #21589 from pdvian/wip-23818-luminous
luminous: client: add client option descriptions
Yan, Zheng [Sat, 2 Jun 2018 00:13:38 +0000 (08:13 +0800)]
Merge pull request #21687 from batrick/i23638
luminous: ceph-fuse: getgroups failure causes exception
David Zafman [Thu, 31 May 2018 00:18:03 +0000 (17:18 -0700)]
osd: Handle omap and data digests independently
Caused by: be078c8b7b131764caa28bc44452b8c5c2339623
The original attempt above to fix the omap_digest handling when
data_digest isn't present had two errors. First, it checked
is_data_digest() and is_omap_digest() instead of digest_present and
omap_digest_present, which indicate whether the source digest is available.
Second, MAYBE could only be set if both digests were available.
Fixes: http://tracker.ceph.com/issues/24366
Signed-off-by: David Zafman <dzafman@redhat.com>
(cherry picked from commit 01f9669928abd571e14421a51a749d44fa041337)
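A minimal sketch of the "handle independently" idea, with illustrative names only (the real fields live in the OSD scrub/repair structures):

    #include <iostream>

    struct SourceDigests {
      bool data_digest_present = false;  // does the authoritative source have a data digest?
      bool omap_digest_present = false;  // does it have an omap digest?
    };

    struct DigestDecision {
      bool fix_data_digest;
      bool fix_omap_digest;
    };

    // Each digest is repairable iff that particular digest is available on the
    // source; one digest being absent no longer blocks fixing the other.
    DigestDecision plan_digest_fix(const SourceDigests& src) {
      return {src.data_digest_present, src.omap_digest_present};
    }

    int main() {
      SourceDigests src{false, true};  // only the omap digest is available
      DigestDecision d = plan_digest_fix(src);
      std::cout << std::boolalpha << d.fix_data_digest << " "
                << d.fix_omap_digest << "\n";  // false true
    }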
David Zafman [Wed, 30 May 2018 18:47:04 +0000 (11:47 -0700)]
cleanup: Remove debug option osd_debug_scrub_chance_rewrite_digest
This option seems pointless and there are no test cases that use it.
Signed-off-by: David Zafman <dzafman@redhat.com>
(cherry picked from commit 6adeaed32f70923d012bf9410bfa8651694be3cf)
xie xingguo [Sat, 30 Sep 2017 08:49:20 +0000 (16:49 +0800)]
osd/PrimaryLogPG: do not set data/omap digest blindly
As bluestore has builtin csum, we generally no longer generate an
object data digest for now. The consequence is that we must handle
the data/omap digest more carefully to make certain ops, such as
copy_from/promote, work properly, since they rely heavily on the
data digest for data transfer correctness.
Example of failure:
http://pulpito.ceph.com/xxg-2017-09-30_11:46:34-rbd-master-distro-basic-mira/1690609/
Signed-off-by: xie xingguo <xie.xingguo@zte.com.cn>
(cherry picked from commit be078c8b7b131764caa28bc44452b8c5c2339623)
Conflicts:
src/osd/PrimaryLogPG.cc (still have snapdir)
Patrick Donnelly [Tue, 17 Apr 2018 13:39:09 +0000 (06:39 -0700)]
client: fix error operator precedence
/home/pdonnell/ceph/src/client/Client.cc: In member function ‘int Client::mount(const string&, const UserPerm&, bool)’:
/home/pdonnell/ceph/src/client/Client.cc:5681:23: warning: suggest parentheses around ‘+’ inside ‘<<’ [-Wparentheses]
return CEPH_FUSE_NO_MDS_UP;
~~^~
Found by gcc. I am ashamed.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
(cherry picked from commit 75f980d85256e22b7b79e5f64bfa3fc6018328b5)
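For reference, a generic illustration of what that -Wparentheses warning means (the values and names here are made up, not the Client.cc expression): '+' binds tighter than '<<', so the shift operates on the sum rather than the shifted value being added to.

    #include <iostream>

    int main() {
      int base = 0x100, code = 2;

      // Parses as base << (code + 1), not (base << code) + 1.
      // gcc flags this with exactly the message quoted above.
      int wrong    = base << code + 1;    // 0x100 << 3  == 0x800
      int intended = (base << code) + 1;  // (0x100 << 2) + 1 == 0x401

      std::cout << std::hex << wrong << " vs " << intended << "\n";
    }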
Patrick Donnelly [Thu, 12 Apr 2018 17:14:18 +0000 (10:14 -0700)]
ceph-fuse: exit with failure on failed mount
Fixes: https://tracker.ceph.com/issues/23665
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
(cherry picked from commit 7740fcde942603750029322ad2c16c18a7b546ab)
Conflicts:
src/ceph_fuse.cc : Resolved in main
Patrick Donnelly [Thu, 12 Apr 2018 17:21:05 +0000 (10:21 -0700)]
common: ignore errors during preforker exit
The caller can't do anything useful with the error, and it obscures the error
the caller wants to return.
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
(cherry picked from commit 3f0b7d96f5cb96ff61c935777671ab7ddd94cfd3)
Patrick Donnelly [Thu, 12 Apr 2018 17:13:48 +0000 (10:13 -0700)]
client: do not overload system errnos
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
(cherry picked from commit 26f717361ea025462bc8016aca5a7c130048e769)
Patrick Donnelly [Sun, 29 Apr 2018 00:17:53 +0000 (17:17 -0700)]
mds: trim log during shutdown to clean metadata
Otherwise the trimming won't advance, so the remaining inodes never get
marked clean.
Fixes: http://tracker.ceph.com/issues/23923
Signed-off-by: Patrick Donnelly <pdonnell@redhat.com>
(cherry picked from commit c60ef1b806c4a0c60362193675990447d82a65f4)
Alfredo Deza [Wed, 30 May 2018 16:10:33 +0000 (12:10 -0400)]
Merge pull request #22191 from alfredodeza/backport-wip-cv-ansible-deps
luminous ceph-volume tests.functional install new ceph-ansible dependencies
Reviewed-by: Andrew Schoen <aschoen@redhat.com>
Yan, Zheng [Wed, 30 May 2018 03:23:25 +0000 (11:23 +0800)]
mds: fix leak of MDSCacheObject::waiting
Fixes: http://tracker.ceph.com/issues/24289
Signed-off-by: "Yan, Zheng" <zyan@redhat.com>
(cherry picked from commit 8f3c8bf6eafd3545c3c786b8520e8ff2c40af2a0)
Yan, Zheng [Fri, 25 May 2018 08:11:30 +0000 (16:11 +0800)]
mds: fix some memory leak
Fixes: http://tracker.ceph.com/issues/24289
Signed-off-by: "Yan, Zheng" <zyan@redhat.com>
(cherry picked from commit e7c149b93dc384ee4a2c8250c502548d12535123)
Zhi Zhang [Wed, 16 May 2018 03:21:48 +0000 (11:21 +0800)]
mds: broadcast quota to relevant clients when quota is explicitly set
Try to broadcast the quota to relevant clients proactively when a quota is
explicitly set, so that clients don't go a long time without getting the
quota update.
Fixes: http://tracker.ceph.com/issues/24133
Signed-off-by: Zhi Zhang <zhangz.david@outlook.com>
(cherry picked from commit b2a7643b102dbbb8221dcb8a785db5e4276ac284)
Yuri Weinstein [Sun, 27 May 2018 14:15:46 +0000 (07:15 -0700)]
qa/tests: added missed ubuntu_latest
Signed-off-by: Yuri Weinstein <yweinste@redhat.com>
Jos Collin [Thu, 24 May 2018 11:57:02 +0000 (17:27 +0530)]
doc: Fix typo in ceph-fuse
Fixes: https://github.com/ceph/ceph/pull/21616#pullrequestreview-122923127
Signed-off-by: Jos Collin <jcollin@redhat.com>
(cherry picked from commit 7fd3189c98b0b1c2885110c2c33487ef36a9596a)
Kefu Chai [Thu, 24 May 2018 09:57:12 +0000 (17:57 +0800)]
Merge pull request #21603 from joscollin/wip-23151-luminous
luminous: doc: Update ceph-fuse doc
Reviewed-by: Sage Weil <sage@redhat.com>
Yan, Zheng [Fri, 18 May 2018 06:26:32 +0000 (14:26 +0800)]
client: fix issue of revoking non-auth caps
When a non-auth mds revokes caps, Fcb caps can still be issued by the
auth mds. It is wrong to flush buffers or invalidate the cache when a
non-auth mds revokes other caps. This bug can cause the client to not
respond to the revoke.
Signed-off-by: "Yan, Zheng" <zyan@redhat.com>
Fixes: https://tracker.ceph.com/issues/24172
(cherry picked from commit 341a9114e0726e1a7cbb7e6f22adb54c2024c506)
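A simplified sketch of the condition described above, using plain bitmasks as stand-ins for the real Client.cc cap handling: dirty buffers are flushed (and cached data invalidated) only when Fb/Fc is actually being revoked and is no longer issued at all, e.g. by the auth mds.

    #include <cstdint>
    #include <iostream>

    constexpr uint32_t CAP_FILE_BUFFER = 1u << 0;  // stand-in for Fb
    constexpr uint32_t CAP_FILE_CACHE  = 1u << 1;  // stand-in for Fc

    bool should_flush_buffers(uint32_t revoked, uint32_t still_issued) {
      // Only flush if Fb itself is revoked and no mds still issues it.
      return (revoked & CAP_FILE_BUFFER) && !(still_issued & CAP_FILE_BUFFER);
    }

    bool should_invalidate_cache(uint32_t revoked, uint32_t still_issued) {
      return (revoked & CAP_FILE_CACHE) && !(still_issued & CAP_FILE_CACHE);
    }

    int main() {
      // A non-auth mds revokes some unrelated cap while the auth mds still
      // issues Fcb: neither a flush nor an invalidation is warranted.
      uint32_t revoked = 0;
      uint32_t still_issued = CAP_FILE_BUFFER | CAP_FILE_CACHE;
      std::cout << std::boolalpha
                << should_flush_buffers(revoked, still_issued) << " "
                << should_invalidate_cache(revoked, still_issued) << "\n";  // false false
    }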
Kefu Chai [Thu, 24 May 2018 08:56:19 +0000 (16:56 +0800)]
Merge pull request #22076 from tchaikov/wip-cmake-build-rocksdb-no-Werror
luminous: cmake: disable FAIL_ON_WARNINGS for rocksdb
Reviewed-by: Nathan Cutler <cutler@suse.cz>
Reviewed-by: Patrick Donnelly <pdonnell@redhat.com>
Kefu Chai [Thu, 24 May 2018 07:06:31 +0000 (15:06 +0800)]
Merge pull request #22197 from dzafman/wip-test-fixes-luminous
luminous: test fixes
Reviewed-by: Kefu Chai <kchai@redhat.com>
Yan, Zheng [Fri, 11 May 2018 12:26:43 +0000 (20:26 +0800)]
qa/tasks/cephfs: add timeout parameter to kclient umount_wait
Just make the caller happy; there is no easy way to support a timeout.
Signed-off-by: Yan, Zheng <zyan@redhat.com>
Fixes: https://tracker.ceph.com/issues/24053
(cherry picked from commit e7d0b41deae7ec99ddf0a1f5f30ea82683b7b474)
Yan, Zheng [Fri, 11 May 2018 06:55:12 +0000 (14:55 +0800)]
mds: reply session reject for open request from blacklisted client
The kernel client and old versions of libcephfs do not check whether they
themselves are blacklisted, so they can get stuck opening a session after
being blacklisted. The session reject message avoids this.
Signed-off-by: "Yan, Zheng" <zyan@redhat.com>
Fixes: https://tracker.ceph.com/issues/24054
(cherry picked from commit b7c6cd8a54f094acb58603b8c6bae9e570a73e27)
David Zafman [Wed, 23 May 2018 19:36:44 +0000 (12:36 -0700)]
test: wait_for_pg_stats() should do another check after last 13 second sleep
Signed-off-by: David Zafman <dzafman@redhat.com>
(cherry picked from commit 151de1797b9163918b95a5996f422688e0964126)
Sage Weil [Mon, 8 Jan 2018 22:27:51 +0000 (16:27 -0600)]
os/bluestore: fix data read error injection in bluestore
Signed-off-by: Sage Weil <sage@redhat.com>
(cherry picked from commit be32d15a04d9d900f604aa366e82791249f1bdb2)
Alfredo Deza [Mon, 21 May 2018 11:11:28 +0000 (07:11 -0400)]
ceph-volume tests.functional install new ceph-ansible dependencies
Make note that ceph-ansible's requirements.txt can't be used just yet
Signed-off-by: Alfredo Deza <adeza@redhat.com>
(cherry picked from commit 22310f43165e474e8e12732be57217b26e2b5424)
Xuehan Xu [Thu, 10 May 2018 04:22:24 +0000 (12:22 +0800)]
mds: set could_consume to false when no purge queue item actually executed
Fixes: http://tracker.ceph.com/issues/24073
Signed-off-by: Xuehan Xu <xuxuehan@360.cn>
(cherry picked from commit 46b4e6afa631058fe066bfd58c76d644d5c2181d)
Kefu Chai [Wed, 23 May 2018 09:41:51 +0000 (17:41 +0800)]
Merge pull request #21502 from smithfarm/wip-23782-luminous
luminous: table of contents doesn't render for luminous/jewel docs
Reviewed-by: Alfredo Deza <adeza@redhat.com>
Reviewed-by: Casey Bodley <cbodley@redhat.com>
Yan, Zheng [Tue, 8 May 2018 03:32:01 +0000 (11:32 +0800)]
mds: tighten conditions of calling rejoin_gather_finish()
Handle two cases:
1. The mds receives all cache rejoin messages, then receives an mdsmap that
says the mds cluster has entered the rejoining state.
2. Another mds restarts while undef inodes/dirfrags are being opened.
Signed-off-by: "Yan, Zheng" <zyan@redhat.com>
(cherry picked from commit 0a38a499b86c0ee13aa0e783a8359bcce0876088)
Yan, Zheng [Tue, 8 May 2018 02:42:05 +0000 (10:42 +0800)]
mds: avoid calling rejoin_gather_finish() two times successively
If MDCache::rejoin_gather is empty and MDCache::rejoins_pending is true
when MDCache::process_imported_caps() calls maybe_send_pending_rejoins(),
both MDCache::rejoin_send_rejoins() and MDCache::process_imported_caps()
may call rejoin_gather_finish().
Signed-off-by: "Yan, Zheng" <zyan@redhat.com>
Fixes: http://tracker.ceph.com/issues/24047
(cherry picked from commit 0451dae777a2a9b1e70303d7bbc4398849f45f3e)
Yan, Zheng [Wed, 2 May 2018 02:23:33 +0000 (10:23 +0800)]
mds: properly reconnect client caps after loading inodes
Commit e43c02d6 "mds: filter out blacklisted clients when importing
caps" makes MDCache::process_imported_caps() ignore clients that are
not in MDCache::rejoin_imported_session_map. The map does not contain
clients from which the mds has received reconnect messages. This causes
some client caps (for inodes that were not in cache when the mds was in
the reconnect state) to get dropped.
Signed-off-by: "Yan, Zheng" <zyan@redhat.com>
(cherry picked from commit 48f60e7f274de9d76499816a528eff859bb161e3)
Conflicts:
src/mds/MDCache.h: Resolved for rejoin_recovered_client and rejoin_open_sessions_finish
Yan, Zheng [Sun, 22 Apr 2018 09:46:28 +0000 (17:46 +0800)]
mds: filter out blacklisted clients when importing caps
The very first step of importing caps is calling
Server::prepare_force_open_sessions(). This patch makes the function
ignore blacklisted clients and return a session map for clients that
are not blacklisted. It also modifies the code that actually does the
cap imports, making it skip caps for clients that are not in the
session map.
Signed-off-by: "Yan, Zheng" <zyan@redhat.com>
Fixes: http://tracker.ceph.com/issues/23518
(cherry picked from commit e43c02d6abd065dff413440a0f9c3d3f6653e87b)
Conflicts:
src/mds/MDCache.h: Resolved in rejoin_open_sessions_finish
src/mds/Migrator.cc : Resolved in handle_export_dir and decode_import_inode_caps
src/mds/Server.cc : Resolved in _rename_prepare_import
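A self-contained sketch of the two steps described above, with integer client IDs and a blacklist set standing in for the real Session/entity types:

    #include <cstdint>
    #include <iostream>
    #include <map>
    #include <set>
    #include <string>

    using client_t = int64_t;

    // Step 1: prepare-sessions step, skipping blacklisted clients and
    // returning the session map of the clients that were actually opened.
    std::map<client_t, std::string> prepare_sessions(
        const std::map<client_t, std::string>& wanted,
        const std::set<client_t>& blacklist) {
      std::map<client_t, std::string> sessions;
      for (const auto& [client, addr] : wanted)
        if (!blacklist.count(client))
          sessions.emplace(client, addr);
      return sessions;
    }

    // Step 2: cap import skips any client that is not in the session map.
    void import_caps(const std::map<client_t, std::string>& sessions,
                     const std::map<client_t, int>& caps_by_client) {
      for (const auto& [client, caps] : caps_by_client) {
        if (!sessions.count(client))
          continue;  // blacklisted client: its caps are skipped, not imported
        std::cout << "import caps 0x" << std::hex << caps
                  << std::dec << " for client." << client << "\n";
      }
    }

    int main() {
      auto sessions = prepare_sessions({{100, "v1:10.0.0.1"}, {101, "v1:10.0.0.2"}},
                                       /*blacklist=*/{101});
      import_caps(sessions, {{100, 0x5}, {101, 0x5}});  // only client.100 imported
    }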
Yan, Zheng [Fri, 20 Apr 2018 10:04:09 +0000 (18:04 +0800)]
mds: don't add blacklisted clients to reconnect gather set
Signed-off-by: "Yan, Zheng" <zyan@redhat.com>
(cherry picked from commit 857e3edac5eec82f1d04caec440d3d6e61bbf679)
Conflicts:
src/mds/SessionMap.h: Removed get_client_set
Yan, Zheng [Sun, 22 Apr 2018 10:27:52 +0000 (18:27 +0800)]
mds: combine MDCache::{cap_exports,cap_export_targets}
This change saves a map lookup.
Signed-off-by: "Yan, Zheng" <zyan@redhat.com>
(cherry picked from commit ea72863b2b8ab5c387b70687300f6f7cff2019db)
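The refactor's idea in miniature (types are placeholders, not the MDCache members): one map whose value carries both the per-client cap state and the export target, so each inode costs one lookup instead of two.

    #include <cstdint>
    #include <map>

    using inodeno_t = uint64_t;
    using mds_rank_t = int32_t;
    using client_t = int64_t;

    // After the change: one value type carries both pieces of information.
    struct CapExport {
      mds_rank_t target;                   // MDS the caps are being exported to
      std::map<client_t, int> client_caps; // per-client exported cap state
    };

    int main() {
      // Before (illustrative): two parallel maps keyed by inode number.
      //   std::map<inodeno_t, std::map<client_t, int>> cap_exports;
      //   std::map<inodeno_t, mds_rank_t>              cap_export_targets;
      std::map<inodeno_t, CapExport> cap_exports;
      cap_exports[0x10000000001] = CapExport{1, {{100, 0x5}}};

      auto it = cap_exports.find(0x10000000001);  // one lookup yields both pieces
      return (it != cap_exports.end() && it->second.target == 1) ? 0 : 1;
    }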
YunfeiGuan [Tue, 8 May 2018 11:35:32 +0000 (19:35 +0800)]
client: avoid freeing inode when it contains TX buffer heads
ObjectCacher::discard_set() prematurely deletes TX buffer heads, but the
pending writebacks still pin the parent objects of these buffer heads.
The assertion "oset.objects.empty()" gets triggered if an inode with
pending writebacks gets freed.
Fixes: http://tracker.ceph.com/issues/23837
Signed-off-by: Guan yunfei <yunfei.guan@xtaotech.com>
Signed-off-by: "Yan, Zheng" <zyan@redhat.com>
(cherry picked from commit 8a03757ca0ab493c6c2ea4fa4307e053e8ebc944)
Jason Dillaman [Wed, 4 Apr 2018 15:47:05 +0000 (11:47 -0400)]
osdc/ObjectCacher: allow discard to complete in-flight writeback
Signed-off-by: Jason Dillaman <dillaman@redhat.com>
(cherry picked from commit feee52d57d6386a2d62580d1b7ca24ee56831b20)
David Zafman [Tue, 22 May 2018 15:37:22 +0000 (08:37 -0700)]
test: Whitelist corrections
Signed-off-by: David Zafman <dzafman@redhat.com>
(cherry picked from commit ee4acb6e1ff7458ceaefdb288cbcb158c6a3bed3)
Add "erasure code profile property .ruleset-failure-domain. is no longer supported" for luminous
Josh Durgin [Tue, 22 May 2018 15:29:17 +0000 (08:29 -0700)]
Merge pull request #22134 from dzafman/wip-missed-backport
test: Add CACHE_POOL_NO_HIT_SET to whitelist for mon/pool_ops.sh
Reviewed-by: Josh Durgin <jdurgin@redhat.com>
David Zafman [Sat, 19 May 2018 03:15:41 +0000 (20:15 -0700)]
test: Add CACHE_POOL_NO_HIT_SET to whitelist for mon/pool_ops.sh
Ignore
cluster [WRN] Health check failed: 1 cache pools are missing hit_sets (CACHE_POOL_NO_HIT_SET)
Signed-off-by: David Zafman <dzafman@redhat.com>
(cherry picked from commit 4fad800043d44024a496f78869e9bb02a16af063)
Josh Durgin [Mon, 21 May 2018 23:53:31 +0000 (16:53 -0700)]
Merge pull request #22044 from dzafman/wip-24045-luminous
luminous: osd: Don't evict even when preemption has restarted with smaller chunk
Reviewed-by: Josh Durgin <jdurgin@redhat.com>
Josh Durgin [Mon, 21 May 2018 22:54:21 +0000 (15:54 -0700)]
Merge pull request #22131 from ceph/wip-yuriw-clients-fix-luminous
qa/tests: added supported distro
Reviewed-by: Josh Durgin <jdurgin@redhat.com>
Jason Dillaman [Mon, 21 May 2018 20:02:18 +0000 (16:02 -0400)]
Merge pull request #22128 from liewegas/wip-rbd-msgr-luminous
luminous: qa/suites/rbd/basic/msgr-failures: remove many.yaml
Reviewed-by: Jason Dillaman <dillaman@redhat.com>
Sage Weil [Mon, 21 May 2018 19:38:34 +0000 (14:38 -0500)]
qa/suites/rbd/basic/msgr-failures: remove many.yaml
Overkill, and triggers some failures, see http://tracker.ceph.com/issues/23789
Removed in master by 4046f46d0e6a70d860d74945dfb95c2511394640
Fixes: http://tracker.ceph.com/issues/23789
Signed-off-by: Sage Weil <sage@redhat.com>
Yuri Weinstein [Mon, 21 May 2018 16:21:16 +0000 (09:21 -0700)]
Merge pull request #21547 from VictorDenisov/backport
luminous: tests: filestore journal replay does not guard omap operations
Reviewed-by: David Zafman <dzafman@redhat.com>
Yuri Weinstein [Mon, 21 May 2018 16:20:30 +0000 (09:20 -0700)]
Merge pull request #21515 from tchaikov/wip-luminous-pr-21469
luminous: mon/LogMonitor: do not crash on log sub w/ no messages
Reviewed-by: David Zafman <dzafman@redhat.com>
Yuri Weinstein [Mon, 21 May 2018 16:18:52 +0000 (09:18 -0700)]
Merge pull request #21376 from pdvian/wip-23666-luminous
luminous: msg/async/AsyncConnection: Fix FPE in process_connection
Reviewed-by: Kefu Chai <kchai@redhat.com>
Yuri Weinstein [Mon, 21 May 2018 16:18:05 +0000 (09:18 -0700)]
Merge pull request #21405 from pdvian/wip-23672-luminous
luminous: os/bluestore: alter the allow_eio policy regarding kernel's error list.
Reviewed-by: Kefu Chai <kchai@redhat.com>
Reviewed-by: Sage Weil <sage@redhat.com>
Yuri Weinstein [Mon, 21 May 2018 16:17:04 +0000 (09:17 -0700)]
Merge pull request #21407 from tchaikov/wip-luminous-23246
luminous: os/bluestore: fix exceeding the max IO queue depth in KernelDevice.
Reviewed-by: Radoslaw Zarzynski <rzarzyns@redhat.com>
Yuri Weinstein [Mon, 21 May 2018 16:15:04 +0000 (09:15 -0700)]
Merge pull request #21514 from smithfarm/wip-posix-zfs-luminous
luminous: common: posix_fallocate on ZFS returns EINVAL
Reviewed-by: Mykola Golub <mgolub@mirantis.com>
Reviewed-by: Willem Jan Withagen <wjw@digiware.nl>
Yuri Weinstein [Mon, 21 May 2018 16:12:56 +0000 (09:12 -0700)]
Merge pull request #21818 from xiexingguo/wip-23925
luminous: osd/OSDMap: check against cluster topology changing before applying pg upmaps
Reviewed-by: Sage Weil <sage@redhat.com>
Yan, Zheng [Thu, 26 Apr 2018 07:12:48 +0000 (15:12 +0800)]
mds: include nfiles/nsubdirs of directory inode in MClientCaps
A directory inode's dirstat gets updated by request replies, but not by
cap messages. This causes a problem in the following case:
1. MDS modifies a directory
2. MDS issues CEPH_CAP_ANY_SHARED to the client
3. The client satisfies stat(2) from its cached metadata.
Signed-off-by: "Yan, Zheng" <zyan@redhat.com>
Fixes: http://tracker.ceph.com/issues/23855
(cherry picked from commit ee2c628f6783954e9b25fab8ac9b572a58666a91)
Conflicts:
src/messages/MClientCaps.h: Resolved in encode_payload
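A hedged sketch of the idea, with illustrative field names rather than the real MClientCaps layout: the cap message carries the directory's nfiles/nsubdirs so a client answering stat(2) from cache sees a fresh dirstat.

    #include <cstdint>

    struct ClientCapsMsg {
      uint64_t ino;
      uint32_t caps_issued;
      uint64_t nfiles;    // files directly inside the directory (illustrative name)
      uint64_t nsubdirs;  // subdirectories directly inside the directory
    };

    struct CachedInode {
      uint64_t nfiles = 0, nsubdirs = 0;
      void apply_cap_update(const ClientCapsMsg& m) {
        // Previously only request replies refreshed dirstat; with this change
        // cap messages carry it too, so cached stat(2) answers stay fresh.
        nfiles = m.nfiles;
        nsubdirs = m.nsubdirs;
      }
    };

    int main() {
      CachedInode dir;
      dir.apply_cap_update({0x10000000000, 0, 3, 2});
      return (dir.nfiles == 3 && dir.nsubdirs == 2) ? 0 : 1;
    }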
Yan, Zheng [Tue, 1 May 2018 04:26:51 +0000 (12:26 +0800)]
qa/tasks/cephfs: add test for renewing stale session
Signed-off-by: "Yan, Zheng" <zyan@redhat.com>
(cherry picked from commit 5688476513a78cf9ab2cf3b1f65e6244f05ea73d)
Conflicts:
qa/tasks/cephfs/test_client_recovery.py: Added test test_stale_renew
Yan, Zheng [Sat, 28 Apr 2018 04:36:43 +0000 (12:36 +0800)]
client: invalidate caps and leases when session becomes stale
Signed-off-by: "Yan, Zheng" <zyan@redhat.com>
Fixes: https://tracker.ceph.com/issues/23894
(cherry picked from commit 8b2e7d834ccf2a4ff6c7aa3d4aeee07ebe36fb59)
Conflicts:
src/client/Client.cc : Resolved in add_update_cap
Yan, Zheng [Fri, 27 Apr 2018 01:13:51 +0000 (09:13 +0800)]
client: fix race in concurrent readdir
For a large directory, a program needs to issue multiple readdir
syscalls to get all dentries. When multiple programs read the
directory concurrently, the following sequence of events can happen.
- program calls readdir with pos = 2. ceph sends a readdir request
to the mds. The reply contains N1 entries. ceph adds these N1 entries
to the readdir cache.
- program calls readdir with pos = N1+2. The readdir is satisfied
by the readdir cache, and N2 entries are returned. (Another program
called readdir in the meantime, which filled the cache.)
- program calls readdir with pos = N1+N2+2. ceph sends a readdir
request to the mds. The reply contains N3 entries and reaches the
directory end. ceph adds these N3 entries to the readdir cache
and marks the directory complete.
The second readdir call does not update dirp->cache_index, so ceph adds
the last N3 entries at the wrong places.
Signed-off-by: "Yan, Zheng" <zyan@redhat.com>
Fixes: http://tracker.ceph.com/issues/23894
(cherry picked from commit 01e23c178d068a3983c58cf115d57f6e1cc06255)
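A standalone model of the bookkeeping described above (not the Client.cc code): the reader's offset advances when entries are served from the shared readdir cache, but if cache_index is not advanced too, the next batch from the mds lands at stale cache slots.

    #include <cstddef>
    #include <iostream>
    #include <string>
    #include <vector>

    struct DirReader {
      size_t pos = 2;          // readdir offset (2 skips "." and "..")
      size_t cache_index = 0;  // slot where the next mds batch should be placed
    };

    int main() {
      std::vector<std::string> cache;  // shared readdir cache for the directory
      DirReader dirp;
      const size_t N1 = 4, N2 = 4;

      // Batch 1 arrives from the mds and is inserted at cache_index.
      for (size_t i = 0; i < N1; ++i)
        cache.push_back("batch1_" + std::to_string(i));
      dirp.pos += N1;
      dirp.cache_index += N1;  // correctly advanced to 4

      // Batch 2 is satisfied from the cache (another reader already filled it),
      // so no mds request is sent.
      dirp.pos += N2;
      // BUG modelled here: cache_index is NOT advanced by N2, so when batch 3
      // arrives from the mds it is inserted at slots 4.., clobbering the N2
      // entries already cached there instead of landing at slot 8.
      std::cout << "pos=" << dirp.pos
                << " cache_index=" << dirp.cache_index << "\n";  // pos=10 cache_index=4
    }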
Yuri Weinstein [Fri, 18 May 2018 19:53:25 +0000 (12:53 -0700)]
qa/tests: added supported distro
Signed-off-by: Yuri Weinstein <yweinste@redhat.com>
vasukulkarni [Fri, 18 May 2018 17:27:56 +0000 (10:27 -0700)]
Merge pull request #21575 from ceph/wip-cd-fix-pool-create
luminous: tests: ceph-deploy: create the rbd pool right after install
David Zafman [Fri, 18 May 2018 06:50:43 +0000 (23:50 -0700)]
test: Fix omap_digest changes in osd-scrub-repair.sh
Signed-off-by: David Zafman <dzafman@redhat.com>
David Zafman [Fri, 18 May 2018 04:55:23 +0000 (21:55 -0700)]
test: No more omap_digest being set
Signed-off-by: David Zafman <dzafman@redhat.com>
David Zafman [Fri, 18 May 2018 00:35:54 +0000 (17:35 -0700)]
test: Luminous specific changes
*** Not sure why this wasn't seen earlier
Signed-off-by: David Zafman <dzafman@redhat.com>
David Zafman [Fri, 18 May 2018 00:30:32 +0000 (17:30 -0700)]
test: Need to escape parens in log-whitelist for grep
Signed-off-by: David Zafman <dzafman@redhat.com>
(cherry picked from commit a9e43ed85236c8412679da58d068253e80d21d05)
Conflicts:
qa/suites/rados/monthrash/ceph.yaml (no changes needed)
Additional changes for luminous:
qa/suites/rados/basic/tasks/rados_api_tests.yaml
qa/suites/rados/singleton/all/thrash-eio.yaml
qa/suites/smoke/basic/tasks/rados_api_tests.yaml
David Zafman [Wed, 16 May 2018 00:32:50 +0000 (17:32 -0700)]
osd: Clear part of cleaned_meta_map in case of a restarted smaller chunk
This cannot happen at the primary because scrub_compare_maps() is only
called once per chunk start.
Preemption causes a smaller chunk from the same start to be processed again
at the replicas, so we clear any of the previous chunk's information.
Signed-off-by: David Zafman <dzafman@redhat.com>
(cherry picked from commit 9e0ac797c602a088447679b04e14ec0cfaf9dd7b)
David Zafman [Thu, 10 May 2018 00:32:39 +0000 (17:32 -0700)]
osd: Don't evict even when preemption has restarted with smaller chunk
Fixes: https://tracker.ceph.com/issues/24045
Signed-off-by: David Zafman <dzafman@redhat.com>
(cherry picked from commit 818b59fa95ee60e86991276f18c4dee405dc79b1)
Conflicts:
src/osd/PG.h (trivial)
Sage Weil [Tue, 24 Apr 2018 20:35:28 +0000 (15:35 -0500)]
osd/PrimaryLogPG: defer evict if head *or* object intersect scrub interval
Consider a scenario like:
- scrub [3:2525d100:::earlier:head,3:2525d12f:::foo:200]
- we see 3:2525d12f:::foo:100 and include it in scrub map
- scrub [3:2525d12f:::foo:200, 3:2525dfff:::later:head]
- some op(s) that cause scrub to be preempted
- agent_work wants to evict 3:2525d12f:::foo:100
- write_blocked_by_scrub sees scrub is preempted, returns false
- 3:2525d12f:::foo:100 is removed, :head SnapSet is updated
- scrub rescrubs [3:2525d12f:::foo:200, 3:2525dfff:::later:head]
- includes (updated) :head SnapSet
- issues error like "3:2525d12f:::foo:100 is an unexpected clone"
Fix the problem by checking whether any part of the object-to-evict or
its head touches the scrub range; if so, back off. Do not let eviction
preempt scrub; we can come back and do it later.
Fixes: http://tracker.ceph.com/issues/23646
Signed-off-by: Sage Weil <sage@redhat.com>
(cherry picked from commit c20a95b0b9f4082dcebb339135683b91fe39ec0a)
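A minimal sketch of the guard, assuming plain strings and lexical ordering as stand-ins for hobject_t and its sort order: eviction is deferred whenever the clone or its head falls inside the chunk being (re)scrubbed, even if the scrub is currently preempted.

    #include <iostream>
    #include <string>

    struct ScrubRange {
      std::string begin, end;  // chunk currently being (re)scrubbed, inclusive
      bool contains(const std::string& oid) const {
        return oid >= begin && oid <= end;
      }
    };

    // Defer eviction if either the clone or its head object intersects the
    // scrub chunk; the eviction can be retried after the chunk finishes.
    bool can_evict(const ScrubRange& scrub, const std::string& clone,
                   const std::string& head) {
      return !scrub.contains(clone) && !scrub.contains(head);
    }

    int main() {
      ScrubRange chunk{"foo:100", "later:head"};
      std::cout << std::boolalpha
                << can_evict(chunk, "foo:100", "foo:head") << "\n";  // false: defer
    }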
David Zafman [Sat, 28 Apr 2018 22:44:06 +0000 (15:44 -0700)]
osd: If ending on a head object get all of meta map
When ending on a head object, the head and snapshots would stay in
cleaned_meta_map until more maps arrive. The problem as that
during a scrub an eviction could occur because scrubber.start
is already past the stray object(s) so range_intersects_scrub() is false.
Fixes: http://tracker.ceph.com/issues/23909
Signed-off-by: David Zafman <dzafman@redhat.com>
(cherry picked from commit 83861a5b75ddb98366f1ec106487b88703f25cf7)
David Zafman [Wed, 25 Apr 2018 22:19:57 +0000 (15:19 -0700)]
test: Add test cases for multiple copy pool and snapshot errors
Signed-off-by: David Zafman <dzafman@redhat.com>
(cherry picked from commit 1a7fa9a62a62a35c645757287917101925044df1)
David Zafman [Wed, 25 Apr 2018 22:15:50 +0000 (15:15 -0700)]
test: Fix comment at end of scrub test scripts
Signed-off-by: David Zafman <dzafman@redhat.com>
(cherry picked from commit bae4940574fa0ee267e40785c88ee6baa3fba96b)
David Zafman [Fri, 20 Apr 2018 22:56:36 +0000 (15:56 -0700)]
test: Prepare for second test and minor improvements
Check list-inconsistent-obj output
Check how many _scan_snap groupings
Use more general check for crashed osd(s)
Signed-off-by: David Zafman <dzafman@redhat.com>
(cherry picked from commit 2fa596dc0c515b757bce3bd3089a2ed32304d976)