Samuel Just [Thu, 28 Feb 2013 00:58:45 +0000 (16:58 -0800)]
FileJournal::wrap_read_bl: adjust pos before returning
Otherwise, we may feed an offset past the end of the journal to
check_header in read_entry and incorrectly determine that the entry is
corrupt.
Fixes: #4296
Backport: bobtail
Backport: argonaut
Reviewed-by: Sage Weil <sage@inktank.com>
Signed-off-by: Samuel Just <sam.just@inktank.com>
(cherry picked from commit 5d54ab154ca790688a6a1a2ad5f869c17a23980a)
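A minimal sketch of the idea (the real wrap_read_bl signature and member names
differ; header, get_top() and the read loop here are illustrative):
    bool FileJournal::wrap_read_bl(off64_t& pos, int64_t len, bufferlist& bl)
    {
      while (len > 0) {
        int64_t avail = header.max_size - pos;   // bytes left before the end of the ring
        int64_t chunk = std::min(len, avail);
        // ... read 'chunk' bytes at 'pos' into bl ...
        pos += chunk;
        len -= chunk;
        if (pos >= header.max_size)
          pos = get_top();                       // wrap: never hand back an offset past the end
      }
      return true;
    }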
Yehuda Sadeh [Thu, 7 Feb 2013 00:43:48 +0000 (16:43 -0800)]
rgw: bucket recreation should not clobber bucket info
Fixes: #4039
The user's list of buckets was getting modified even if the bucket already
existed. This fix removes the newly created directory object and
makes sure that the user info's data points at the correct bucket.
Yehuda Sadeh [Tue, 5 Feb 2013 22:59:51 +0000 (14:59 -0800)]
rgw: unlink multipart upload parts when completing upload
Fixes: #4011
When completing the multipart upload, we also need to unlink the
parts from the bucket index. Originally we used to remove the parts;
nowadays, however, the parts live on because we just point the object
manifest at them. So we don't remove the objects themselves, but we do
need to remove them from the bucket index.
Samuel Just [Sun, 18 Nov 2012 02:18:23 +0000 (18:18 -0800)]
os/: Add CollectionIndex::prep_delete
If an unlink is interrupted between removing the file
and updating the subdir attribute, the attribute will
overestimate the number of files in the directory. This
is by design; at worst we will merge the collection later
than intended, but closing the gap would require a second
subdir xattr update. However, this can in extreme cases
result in a collection with subdirectories but no objects.
FileStore::_destroy_collection would therefore see an
erroneous -ENOTEMPTY.
prep_delete allows the CollectionIndex implementation to
clean up state prior to removal.
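A hedged sketch of the shape of that hook (the actual declaration in
os/CollectionIndex.h may differ):
    class CollectionIndex {
    public:
      /// Called prior to removing the collection; lets the implementation
      /// (e.g. HashIndex) clean up leftover state such as empty subdirectories.
      virtual int prep_delete() { return 0; }   // default: nothing to do
      virtual ~CollectionIndex() {}
    };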
Yehuda Sadeh [Wed, 16 Jan 2013 23:01:47 +0000 (15:01 -0800)]
rgw: copy object should not copy source acls
Fixes: #3802
Backport: argonaut, bobtail
When using the S3 API with x-amz-metadata-directive
set to COPY, we used to copy the complete metadata of the source
object. However, this shouldn't include the source ACLs.
Sage Weil [Wed, 16 Jan 2013 03:27:13 +0000 (19:27 -0800)]
osd: send forced scrub/repair through scrub scheduling
This marks a PG for immediate scrub or repair. Adjust the sched_scrub()
code so that we handle these PGs even when should_schedule_scrub is
false (e.g., because the load is high). When we explicitly request a
scrub or repair, we then go through the normal scrub reservation process
to avoid unduly impacting cluster performance.
This is particularly helpful on argonaut, where the final scrub
finalization step blocks writes to the PG, and overlapping scrubs can
exacerbate the problem.
Samuel Just [Thu, 10 Jan 2013 00:41:40 +0000 (16:41 -0800)]
ReplicatedPG: compare nlinks to snapcolls
nlinks gives us the number of hardlinks to the object.
nlinks should be 1 + snapcolls.size(). This will allow
us to detect links which remain in an erroneous snap
collection.
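Illustrative check only (variable names are assumptions, not the scrub code
itself):
    // one link for the object's primary location plus one per snap collection
    if (nlinks != 1 + snapcolls.size()) {
      // inconsistency: a hardlink remains in a snap collection it should
      // no longer belong to
    }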
Sage Weil [Thu, 16 Aug 2012 18:38:46 +0000 (11:38 -0700)]
byteorder: fix gcc 4.7 warnings
./include/encoding.h: In function 'void encode(int64_t, ceph::bufferlist&, uint64_t)':
./include/encoding.h:101:1: warning: narrowing conversion of 'v' from 'int64_t {aka long int}' to '__le64 {aka long long unsigned int}' inside { } is ill-formed in C++11 [-Wnarrowing]
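Illustrative fix only (not necessarily the committed change): the warning comes
from initializing a __le64 aggregate from an int64_t inside braces, which C++11
treats as narrowing; an explicit conversion avoids it.
    inline void encode(int64_t v, ceph::bufferlist& bl) {
      __le64 e = static_cast<uint64_t>(v);   // explicit, so no narrowing inside { }
                                             // (byte-order handling elided)
      bl.append(reinterpret_cast<const char*>(&e), sizeof(e));
    }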
Samuel Just [Thu, 10 Jan 2013 03:17:23 +0000 (19:17 -0800)]
ReplicatedPG: fix snapdir trimming
The previous logic was both complicated and not correct. Consequently,
we have been tending to drop snapcollection links in some cases. This
has resulted in clones incorrectly not being trimmed. This patch
replaces the logic with something less efficient but hopefully a bit
clearer.
Signed-off-by: Samuel Just <sam.just@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
(cherry picked from commit 0f42c37359d976d1fe90f2d3b877b9b0268adc0b)
Sage Weil [Mon, 7 Jan 2013 04:43:21 +0000 (20:43 -0800)]
osd: fix race in do_recovery()
Verify that the PG is still RECOVERING or BACKFILL when we take the pg
lock in the recovery thread. This prevents a crash from an invalid
state machine event when the recovery queue races with a PG state change
(e.g., due to peering).
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Samuel Just <sam.just@inktank.com>
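A sketch of the guard described above (surrounding OSD code is assumed):
    pg->lock();
    if (!pg->state_test(PG_STATE_RECOVERING) && !pg->state_test(PG_STATE_BACKFILL)) {
      pg->unlock();
      return;               // a state change (e.g. peering) won the race; drop the event
    }
    // ... deliver the recovery event as before ...
    pg->unlock();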
Josh Durgin [Fri, 16 Nov 2012 00:20:33 +0000 (16:20 -0800)]
ObjectCacher: fix off-by-one error in split
This error left a completion that should have been attached
to the right BufferHead on the left BufferHead, which would
result in the completion never being called unless the buffers
were merged before its original read completed. This would cause
a hang in any higher level waiting for a read to complete.
The existing loop went backwards (using a forward iterator),
but stopped when the iterator reached the beginning of the map,
or when a waiter belonged to the left BufferHead.
If the first list of waiters should have been moved to the right
BufferHead, it was skipped because at that point the iterator
was at the beginning of the map, which was the main condition
of the loop.
Restructure the waiters-moving loop to go forward in the map instead,
so it's harder to make an off-by-one error.
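A hedged sketch of the forward-walking version (type and member names
approximate the ObjectCacher code):
    std::map<loff_t, std::list<Context*> >::iterator p =
      left->waitfor_read.lower_bound(right->start());
    while (p != left->waitfor_read.end()) {
      right->waitfor_read[p->first].swap(p->second);  // hand these waiters to the right bh
      left->waitfor_read.erase(p++);
    }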
Sage Weil [Fri, 4 Jan 2013 01:15:07 +0000 (17:15 -0800)]
os/FileStore: fix non-btrfs op_seq commit order
The op_seq file is the starting point for journal replay. For stable btrfs
commit mode, which is using a snapshot as a reference, we should write this
file before we take the snap. We normally ignore current/ contents anyway.
On non-btrfs file systems, however, we should only write this file *after*
we do a full sync, and we should then fsync(2) it before we continue
(and potentially trim anything from the journal).
This fixes a serious bug that could cause data loss and corruption after
a power loss event. For a 'kill -9' or crash, however, there was little
risk, since the writes were still captured by the host's cache.
Fixes: #3721
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Samuel Just <sam.just@inktank.com>
(cherry picked from commit 28d59d374b28629a230d36b93e60a8474c902aa5)
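A simplified ordering sketch for the non-btrfs path (the real logic lives in
FileStore::sync_entry and has more around it):
    sync_filesystem(basedir_fd);   // 1. flush all data and metadata first
    write_op_seq(op_fd, cp);       // 2. only then record the committed op_seq
    ::fsync(op_fd);                // 3. make the op_seq file itself durable
    // ... only now is it safe to trim the journal up to 'cp' ...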
Sage Weil [Fri, 28 Dec 2012 21:07:18 +0000 (13:07 -0800)]
log: broadcast cond signals
We were using a single cond, and only signalling one waiter. That means
that if the flusher and several logging threads are waiting, and we hit
a limit, the logger could signal another logger instead of the flusher,
and we could deadlock.
Similarly, if the flusher empties the queue, it might signal only a single
logger, and that logger could re-signal the flusher, and the other logger
could wait forever.
Instead, break the single cond into two: one for loggers, and one for the
flusher. Always signal the (one) flusher, and always broadcast to all
loggers.
Backport: bobtail, argonaut
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Dan Mick <dan.mick@inktank.com>
(cherry picked from commit 813787af3dbb99e42f481af670c4bb0e254e4432)
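A generic sketch of the two-condition scheme (member names here are
illustrative, not the exact ones in log/Log.cc):
    pthread_cond_t cond_loggers;   // loggers blocked because the queue is full
    pthread_cond_t cond_flusher;   // the single flusher blocked waiting for work

    // logger, after enqueueing (mutex held):
    //   pthread_cond_signal(&cond_flusher);      // there is only one flusher
    // flusher, after draining the queue (mutex held):
    //   pthread_cond_broadcast(&cond_loggers);   // wake every waiting logger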
Travis Rhoden [Mon, 20 Aug 2012 20:29:11 +0000 (13:29 -0700)]
init-ceph: use SSH in "service ceph status -a" to get version
When running "service ceph status -a", a version number was never
returned for remote hosts, only for the local. This was because
the command to query the version number didn't use the do_cmd
function, which is responsible for running the command over SSH
when needed.
Modify the ceph init.d script to use do_cmd for querying the
Ceph version.
Sage Weil [Thu, 20 Dec 2012 21:48:06 +0000 (13:48 -0800)]
log: fix flush/signal race
We need to signal the cond in the same interval where we hold the lock
*and* modify the queue. Otherwise, we can have a race like:
queue has 1 item, max is 1.
A: enter submit_entry, signal cond, wait on condition
B: enter submit_entry, signal cond, wait on condition
C: flush wakes up, flushes 1 previous item
A: retakes lock, enqueues something, exits
B: retakes lock, condition fails, waits
-> C is never woken up as there are 2 items waiting
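A sketch of the fixed submit path (member names follow the sketch above and are
assumptions): the signal happens in the same locked interval in which the queue
is modified, so a waiter can never observe the queue and the wakeup out of order.
    pthread_mutex_lock(&queue_mutex);
    while (queue_length >= max_queue)
      pthread_cond_wait(&cond_loggers, &queue_mutex);   // wait until there is room
    enqueue(entry);
    pthread_cond_signal(&cond_flusher);                 // signal while still holding the lock
    pthread_mutex_unlock(&queue_mutex);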
Sam Lang [Mon, 24 Sep 2012 16:55:25 +0000 (09:55 -0700)]
client: Fix for #3184 cfuse segv with no keyring
Fixes bug #3184 where the ceph-fuse client segfaults if cephx is
enabled but no keyring file is present. This was due to the
client->init() return value not getting checked.
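An illustrative version of the missing check (the exact ceph_fuse.cc code is
assumed):
    int r = client->init();
    if (r < 0) {
      cerr << "ceph-fuse: client->init() failed: " << cpp_strerror(r) << std::endl;
      return EXIT_FAILURE;   // bail out instead of continuing with an uninitialized client
    }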
Gary Lowell [Fri, 9 Nov 2012 21:28:13 +0000 (13:28 -0800)]
ceph.spec.in: Build debuginfo subpackage.
This is a partial fix for bug 3471. Enable building of the debuginfo package.
Some distributions enable this automatically by installing additional rpm
macros; on others it needs to be explicitly added to the spec file.
Yehuda Sadeh [Thu, 29 Nov 2012 21:39:22 +0000 (13:39 -0800)]
rgw: fix rgw_tools get_obj()
The original implementation broke whenever the data exceeded
the chunk size. Also, don't keep a cache entry for objects that
exceed the chunk size, as the cache is not designed for
them. Increased the chunk size to 512k.
Yehuda Sadeh [Thu, 29 Nov 2012 20:47:59 +0000 (12:47 -0800)]
rgw: fix PUT acls
This fixes a regression introduced at 17e4c0df44781f5ff1d74f3800722452b6a0fc58. The original
patch fixed an error leak; however, it also removed the
operation's send_response() call.
Yehuda Sadeh [Wed, 14 Nov 2012 19:30:34 +0000 (11:30 -0800)]
rgw: relax date format check
Don't try to parse beyond the GMT or UTC marker. Some clients use
special date formatting. If we end up misparsing the date,
it will fail in the authorization step, so we don't need to be too
restrictive.
Sage Weil [Tue, 30 Oct 2012 21:17:56 +0000 (14:17 -0700)]
ceph-disk-activate: avoid duplicating mounts if already activated
If the given device is already mounted at the target location, do not
mount --move it again and create a bunch of duplicate entries in /etc/mtab
and the kernel mount table.
Sage Weil [Fri, 26 Oct 2012 04:21:18 +0000 (21:21 -0700)]
ceph-disk-prepare: poke kernel into refreshing partition tables
Prod the kernel to refresh the partition table after we create one. The
partprobe program is packaged with parted, which we already use, so this
introduces no new dependency.
Sage Weil [Fri, 9 Nov 2012 13:28:12 +0000 (05:28 -0800)]
mds: re-try_set_loner() after doing evals in eval(CInode*, int mask)
Consider a case where current loner is A and wanted loner is B.
At the top of the function we try to set the loner, but that may fail
because we haven't processed the gathered caps yet for the previous
loner. In the body we do that and potentially drop the old loner, but we
do not try_set_loner() again on the desired loner.
Try again after our drop. If it succeeds, loop through the evals one more time
so that we can issue caps appropriately.
This fixes a hang induced by a simple loop like:
while true ; do echo asdf >> mnt.a/foo ; tail mnt.b/foo ; done &
while true ; do ls mnt.a mnt.b ; done
Samuel Just [Fri, 13 Jul 2012 21:23:27 +0000 (14:23 -0700)]
CompatSet: users pass bit indices rather than masks
CompatSet users number the Feature objects rather than
providing masks. Thus, we should do
mask |= (1 << f.id) rather than mask |= f.id.
In order to detect old, broken encodings, the lowest
bit will be set in memory but not set in the encoding.
We can reconstruct the correct mask from the names map.
This bug can cause an incompat bit to not be detected
since 1|2 == 1|2|3.
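A worked example of the collision and the fix (feature ids here are made up):
    // with "mask |= f.id":          1 | 2 | 3            == 3    (feature 3 is lost in 1+2)
    // with "mask |= (1 << f.id)":   (1<<1)|(1<<2)|(1<<3) == 0xe  (each feature gets its own bit)
    uint64_t mask = 1;                   // lowest bit set in memory but never encoded,
                                         // which lets us detect the old, broken encoding
    for (const auto& f : features)       // 'features' is illustrative
      mask |= (1ull << f.id);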
Sage Weil [Tue, 6 Nov 2012 07:27:13 +0000 (23:27 -0800)]
mds: move to from loner -> mix if *anyone* wants rd|wr
We were either going to MIX or SYNC depending on whether non-loners wanted
to read/write, but it may be that the loner wants to as well, if our logic for
choosing loner vs not loner is based on anything other than just the rd|wr
wanted bits.
Sage Weil [Tue, 30 Oct 2012 16:00:11 +0000 (09:00 -0700)]
osd: make pool_snap_info_t encoding backward compatible
Way back in fc869dee1e8a1c90c93cb7e678563772fb1c51fb (v0.42) when we redid
the osd type encoding we forgot to make this conditionally encode the old
format for old clients. In particular, this means that kernel clients
will fail to decode the osdmap if there is a rados pool with a pool-level
snapshot defined.
Fixes: #3290
Signed-off-by: Sage Weil <sage@inktank.com>
Conflicts:
Sage Weil [Thu, 18 Oct 2012 00:44:12 +0000 (17:44 -0700)]
addr_parsing: make , and ; and ' ' all delimiters
Instead of just ','. Currently "foo.com, bar.com" will fail because of the
space after the comma. This patch fixes that, and makes all delimiter
chars interchangeable.
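Illustrative only (addr_parsing.c is plain C and structured differently),
showing ',', ';' and ' ' treated as interchangeable delimiters:
    #include <stdio.h>
    #include <string.h>

    int main(void) {
      char buf[] = "foo.com, bar.com;baz.com";
      for (char *p = strtok(buf, ",; "); p; p = strtok(NULL, ",; "))
        printf("host: %s\n", p);       // foo.com / bar.com / baz.com
      return 0;
    }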
Tommi Virtanen [Fri, 5 Oct 2012 17:57:42 +0000 (10:57 -0700)]
ceph-disk-prepare, debian/control: Support external journals.
Previously, ceph-disk-* would only let you use a journal that was a
file inside the OSD data directory. With this, you can do:
ceph-disk-prepare /dev/sdb /dev/sdb
to put the journal as a second partition on the same disk as the OSD
data (might save some file system overhead), or, more interestingly:
ceph-disk-prepare /dev/sdb /dev/sdc
which makes it create a new partition on /dev/sdc to use as the
journal. Size of the partition is decided by $osd_journal_size.
/dev/sdc must be a GPT-format disk. Multiple OSDs may share the same
journal disk (using separate partitions); this way, a single fast SSD
can serve as journal for multiple spinning disks.
The second use case currently requires parted, so a Recommends: for
parted has been added to Debian packaging.
Closes: #3078
Closes: #3079
Signed-off-by: Tommi Virtanen <tv@inktank.com>
Sage Weil [Fri, 5 Oct 2012 16:10:31 +0000 (09:10 -0700)]
osd: Make --get-journal-fsid not really start the osd.
This way, it won't need -i ID and it won't access the osd_data_dir.
That makes it useful for locating the right osd to use with an
external journal partition.
Tommi Virtanen [Wed, 3 Oct 2012 19:38:38 +0000 (12:38 -0700)]
debian/control, ceph-disk-prepare: Depend on xfsprogs, use xfs by default.
Ext4 as a default is a bad choice, as we don't perform enough QA with
it. To use XFS as the default for ceph-disk-prepare, we need to depend
on xfsprogs.
btrfs-tools is already recommended, so no change there. If you set
osd_fs_type=btrfs, and don't have the package installed, you'll just
get an error message.
Tommi Virtanen [Wed, 3 Oct 2012 15:47:20 +0000 (08:47 -0700)]
ceph-disk-prepare: Avoid triggering activate before prepare is done.
Earlier testing never saw this, but now a mount of a disk triggers a
udev blockdev-added event, causing ceph-disk-activate to run even
before ceph-disk-prepare has had a chance to write the files and
unmount the disk.
Avoid this by using a temporary partition type uuid ("ceph 2 be"), and
only setting it to the permanent one ("ceph osd") once preparation is
complete. The hotplug event won't match the temporary type uuid, and thus
won't trigger ceph-disk-activate.
Tommi Virtanen [Tue, 2 Oct 2012 23:37:07 +0000 (16:37 -0700)]
ceph-disk-activate: Unmount on errors (if it did the mount).
This cleans up the error handling to not leave disks mounted
in /var/lib/ceph/tmp/mnt.* when something fails, e.g. when
the ceph command line tool can't talk to mons.
Tommi Virtanen [Tue, 2 Oct 2012 23:04:15 +0000 (16:04 -0700)]
ceph-disk-prepare: Allow specifying fs type to use.
Either use ceph.conf variable osd_fs_type or command line option
--fs-type=
Default is still ext4, as currently nothing guarantees xfsprogs
or btrfs-tools are installed.
Currently both btrfs and xfs seem to trigger a disk hotplug event at
mount time, thus triggering a useless and unwanted ceph-disk-activate
run. This will be worked around in a later commit.
Currently mkfs and mount options cannot be configured.
Bug: #2549
Signed-off-by: Tommi Virtanen <tv@inktank.com>
Fixes: #3127
Bad variable scoping made it so that specific variables
weren't initialized between suggested-changes iterations.
This specifically affected a case where in a specific
change we had an update followed by a remove, and the
remove was on a non-existent key (e.g., it was already
removed earlier). We ended up re-subtracting the
object stats, as the entry wasn't reset between
the iterations (and we didn't read it because the
key didn't exist).
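A generic sketch of the scoping fix (the real code is in cls_rgw; names below
are made up): declaring the per-change state inside the loop guarantees it is
re-initialized on every iteration.
    for (const auto& change : suggested_changes) {
      rgw_bucket_dir_entry cur_entry;    // fresh, zero-initialized state each iteration
      bool found = (read_index_entry(change.key, &cur_entry) == 0);   // hypothetical helper
      // ... apply the update or remove, adjusting object stats only when 'found' ...
    }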
Sage Weil [Tue, 4 Sep 2012 18:29:21 +0000 (11:29 -0700)]
objecter: fix osdmap wait
When we get a pool_op_reply, we find out which osdmap we need to wait for.
The wait_for_new_map() code was feeding that epoch into
maybe_request_map(), which was feeding it to the monitor with the subscribe
request. However, that epoch is the *start* epoch, not what we want. Fix
this code to always subscribe to what we have (+1), and ensure we keep
asking for more until we catch up to what we know we should eventually
get.
Bug: #3075
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
(cherry picked from commit e09b26555c6132ffce08b565780a39e4177cbc1c)
Sage Weil [Wed, 22 Aug 2012 04:12:33 +0000 (21:12 -0700)]
objecter: use ordered map<> for tracking tids to preserve order on resend
We are using a hash_map<> to map tids to Op*'s. In handle_osd_map(),
we will recalc_op_target() on each Op in a random (hash) order. These
will get put in a temp map<tid,Op*> to ensure they are resent in the
correct order, but their order on the session->ops list will be random.
Then later, if we reset an OSD connection, we will resend everything for
that session in ops order, which is incorrect.
Fix this by explicitly reordering the requests to resend in
kick_requests(), much like we do in handle_osd_map(). This lets us
continue to use a hash_map<>, which is faster for reasonable numbers of
requests. A simpler but slower fix would be to just use map<> instead.
This is one of many bugs contributing to #2947.
Signed-off-by: Sage Weil <sage@inktank.com>
Reviewed-by: Samuel Just <sam.just@inktank.com>
(cherry picked from commit 1113a6c56739a56871f01fa13da881dab36a32c4)
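A simplified sketch of the reordering in kick_requests() (names are
approximate): collect the session's ops into an ordered map keyed by tid, so
the resend goes out lowest-tid first even though the global lookup table stays
a hash_map<>.
    std::map<uint64_t, Op*> resend;              // std::map iterates in tid order
    for (Op *op : ops_for_this_session)          // 'ops_for_this_session' is illustrative
      resend[op->tid] = op;
    for (auto& p : resend)
      send_op(p.second);                         // resend lowest tid first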
Dan Mick [Mon, 20 Aug 2012 22:02:57 +0000 (15:02 -0700)]
rbd: force all exiting paths through main()/return
This properly destroys objects. In the process, remove usage_exit();
also kill error-handling in set_conf_param (never relevant for rbd.cc,
and if you call it with both pointers NULL, well...)
Also switch to EXIT_FAILURE for consistency.
rbd: make --pool/--image args easier to understand for import
There's no need to set the default pool in set_pool_image_name - this
is done later, in a way that doesn't ignore --pool if --dest-pool
is not specified.
This means --pool and --image can be used with import, just like
the rest of the commands. Without this change, --dest and --dest-pool
had to be used, and --pool would be silently ignored for rbd import.