Samuel Just [Fri, 30 Nov 2012 22:04:53 +0000 (14:04 -0800)]
ReplicatedPG: only increment active_pushes on primary for final push
We only queue the _applied_recovered_object callback on the primary for the
final push; it is this callback which decrements active_pushes. It's ok not
to increment active_pushes for the intermediate pushes since these only
affect a temp file.
Sam Lang [Thu, 29 Nov 2012 18:19:51 +0000 (12:19 -0600)]
client: Fix for #3490 and config option to test
If the mds revokes our cache cap and we follow the _read_sync() path,
the osd returns ENOENT for a zero-byte file. We need to replace ENOENT
with a return of 0 in this case.
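A hedged sketch of the idea (a standalone helper, not the actual Client::_read_sync() code):

    #include <cerrno>

    // 'r' stands in for the return value _read_sync() sees from the OSD read.
    // A missing object just means nothing was ever written: report 0 bytes.
    static int translate_zero_byte_read(int r)
    {
      if (r == -ENOENT)
        return 0;
      return r;
    }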
Samuel Just [Wed, 28 Nov 2012 23:10:43 +0000 (15:10 -0800)]
PG: scrubber.end should be exactly a boundary
Let scrubber.end be (foo, HEAD, 10), where foo is the oid, HEAD is the
snap, and 10 is the hash, and let scrubber.begin similarly be (bar, 5, 1).
After choosing to scan [(bar, 5, 1), (foo, HEAD, 10)), we block writes
on that interval.
1) A write might then come in for foo (which isn't blocked) that creates
a new clone (foo, 400, 10), which happens to fall in the interval.
This will result in a crash in _scrub() when it attempts to compare
clones, since it will see (foo, 400, 10) but not the head object
(foo, HEAD, 10).
2) Alternatively, the write from 1) has already happened. When we scan
the log, we find 34'10 and 34'11 are the clone operation creating
(foo, 400, 10) and the modify on (foo, HEAD, 10) respectively. Both
primary and replica will wait for last_update_applied to be 34'10
before scanning, but last_update_applied will in fact skip to 34'11
since 34'10 and 34'11 happened in the same transaction. This can
result in IO hanging on the scrubber interval.
Instead, we ensure that scrubber.end is exactly a hash boundary
(the minimum hobject_t with the specified hash). No such object can
exist since we don't create objects with empty oids, so no writes
can occur on that object.
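A hedged standalone illustration of why such a boundary is safe (simplified key type, not the real hobject_t):

    #include <cassert>
    #include <string>
    #include <tuple>

    // Simplified object key ordered by (hash, oid, snap).
    struct Key {
      unsigned hash;
      std::string oid;
      unsigned snap;
      bool operator<(const Key &o) const {
        return std::tie(hash, oid, snap) < std::tie(o.hash, o.oid, o.snap);
      }
    };

    // The boundary for a given hash has an empty oid and minimum snap; since
    // real objects never have empty oids, no write can ever target it.
    Key hash_boundary(unsigned hash) { return Key{hash, "", 0}; }

    int main() {
      Key boundary = hash_boundary(10);
      Key new_clone{10, "foo", 400};
      assert(boundary < new_clone);   // any real object sorts strictly after the boundary
      return 0;
    }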
Samuel Just [Wed, 28 Nov 2012 00:00:03 +0000 (16:00 -0800)]
OSD: history.last_epoch_started should start at 0
history.last_epoch_started marks a lower bound on the last epoch at
which the pg went active. As with info.last_epoch_started, it should be
0 prior to the first activation.
Samuel Just [Wed, 21 Nov 2012 21:59:22 +0000 (13:59 -0800)]
PG: maintain osd local last_epoch_started for find_best_info
In order to proceed with peering, we need an osd with a log including
the last commit sent to a client. This translates to the oldest
last_update from the infos of the most recent acting set to go active.
history.last_epoch_started gives us a lower bound on the last time the
entire acting set persisted authoritative logs/infos. However, it
doesn't indicate anything about the info/log on the osd which sent it.
Thus, we will maintain an osd local info.last_epoch_started to determine
which osds were actually active (and thus have the required log
entries). The max info.last_epoch_started in the prior set gives us an
upper bound on the last interval during which writes occurred. The min
last_update among the infos with that last_epoch_started must therefore
be an upper bound on the oldest operation which clients consider
committed. Any osd with an info.last_update past that version must be
sufficient.
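A rough standalone sketch of that bound (simplified info type, not the actual PG::find_best_info() code):

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Simplified stand-in for the two pg_info_t fields used in the argument.
    struct Info {
      uint32_t last_epoch_started;
      uint64_t last_update;      // eversion collapsed to a single number
    };

    // Max last_epoch_started bounds the last interval in which writes occurred;
    // the min last_update among infos at that epoch bounds the oldest op a
    // client may consider committed.  An info whose last_update is not older
    // than this bound is sufficient for peering.
    uint64_t min_last_update_acceptable(const std::vector<Info> &infos)
    {
      uint32_t max_les = 0;
      for (const Info &i : infos)
        max_les = std::max(max_les, i.last_epoch_started);
      uint64_t bound = UINT64_MAX;
      for (const Info &i : infos)
        if (i.last_epoch_started == max_les)
          bound = std::min(bound, i.last_update);
      return bound;
    }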
The observed bug was that there was an empty pg info with a
last_epoch_started at the most recent interval, which pushed
min_last_update_acceptable to eversion_t(). There were two down osds,
but peering proceeded since the backfill peer did survive. However,
its info was later disregarded due to being incomplete. An empty osd was
then chosen as the best_info since its last_update was equal to
min_last_update_acceptable. This caused the contents of the pg to be
lost.
Greg Farnum [Wed, 28 Nov 2012 22:27:10 +0000 (14:27 -0800)]
mon: add new get_bl_[sn|ss]_safe functions
These functions are like the non-safe versions, but assert that
there were no disk errors and have void return types. Change a
bunch of callers who weren't checking the return code to use
these variants instead.
(Unfortunately we can't make them default safe because several of
the callers depend on getting back the length, and are perfectly happy
with ENOENT producing a 0 return value.)
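A hedged sketch of the shape of such a wrapper (stubbed getter, illustrative only; not the actual MonitorStore code):

    #include <cassert>

    // Stub standing in for the existing non-safe getter: returns the number of
    // bytes read, 0 when the key doesn't exist, or a negative disk error.
    static int get_bl(const char *key, char *buf, int len)
    {
      (void)key; (void)buf; (void)len;
      return 0;
    }

    // The _safe variant returns void and asserts that no disk error occurred;
    // callers that never looked at the length use this instead.
    static void get_bl_safe(const char *key, char *buf, int len)
    {
      int r = get_bl(key, buf, len);
      assert(r >= 0);
      (void)r;   // silence unused warning when NDEBUG strips the assert
    }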
Dan Mick [Wed, 28 Nov 2012 00:54:43 +0000 (16:54 -0800)]
rbd: fix import from stdin, add test
Make import work; do I/O in the image's native block size.
Note: creating sparse images is not currently attempted; we could
scan for runs of zeros and write discontiguous chunks to the image.
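A hedged sketch of the stdin import loop (hypothetical write_block() sink; not the actual rbd.cc code):

    #include <cstddef>
    #include <unistd.h>
    #include <vector>

    // Read stdin in chunks of the image's block size, handling short reads,
    // and hand each full (or final partial) chunk to the sink.
    void import_from_stdin(size_t block_size,
                           void (*write_block)(const char *data, size_t len))
    {
      std::vector<char> buf(block_size);
      for (;;) {
        size_t have = 0;
        while (have < block_size) {
          ssize_t r = read(STDIN_FILENO, buf.data() + have, block_size - have);
          if (r <= 0)
            break;                      // EOF (or error) ends the stream
          have += r;
        }
        if (have == 0)
          break;
        write_block(buf.data(), have);  // I/O is done in native block size
        if (have < block_size)
          break;                        // short final chunk
      }
    }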
Fixes: #3503 Signed-off-by: Dan Mick <dan.mick@inktank.com> Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
(cherry picked from commit c99d9c3ae782597984f0c67dd1488fb95bd2ce54)
Sage Weil [Sun, 25 Nov 2012 22:27:23 +0000 (14:27 -0800)]
osd: do not ENOENT on missing key on remove
The MDS may include RM ops in a tmap update for items that were already
removed: after restarting and replaying the journal, it doesn't know
which dentries were previously committed and which were not.
Sage Weil [Sun, 25 Nov 2012 22:24:08 +0000 (14:24 -0800)]
osd: tolerate misordered TMAP updates
The previous tmap implementation requires that the update stream be
sorted or else it will behave erratically (by placing new keys in the
map out of order). This can cause very strange failures: reads may
appear to return the correct result initially, but once intervening
keys are removed they will not, depending on how read is implemented
on the client side.
Fix this by doing the optimized updates initially, but falling back to
a slow implementation if an unsorted update is detected. It is slow,
but such updates are rare.
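A hedged standalone sketch of the fallback (std::map standing in for the encoded tmap; not the actual OSD code):

    #include <map>
    #include <string>
    #include <utility>
    #include <vector>

    // Apply a batch of (key, value) insertions/updates.  The fast path assumes
    // the batch is sorted; if a misordered batch is detected, fall back to the
    // slow path, which is correct for any input order.
    void apply_updates(std::map<std::string, std::string> &tmap,
                       const std::vector<std::pair<std::string, std::string>> &updates)
    {
      bool sorted = true;
      for (size_t i = 1; i < updates.size(); ++i) {
        if (updates[i].first < updates[i - 1].first) {
          sorted = false;
          break;
        }
      }
      if (sorted) {
        // fast path: carry a position hint from the previous insert since keys ascend
        auto hint = tmap.end();
        for (const auto &u : updates)
          hint = tmap.insert_or_assign(hint, u.first, u.second);
      } else {
        // slow path: plain keyed writes, rare but always correct
        for (const auto &u : updates)
          tmap[u.first] = u.second;
      }
    }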
Dan Mick [Fri, 16 Nov 2012 06:41:36 +0000 (22:41 -0800)]
rbd: fix import pool assumptions
import allows specifying one image, implicitly or explicitly the
"source" image, even though it's really the destination. Fix up
the reassignment of 'source' to 'dest', and check for and complain
about specifying two different pools or images for import.
Signed-off-by: Dan Mick <dan.mick@inktank.com> Reviewed-by: Josh Durgin <josh.durgin@inktank.com>
(cherry picked from commit c219698149c2fe4d2539f0bc1e2009b937aa4250)
User-space tool that interacts with the monitor, with the objective of
generating a workload mimicking a set of OSDs and clients.
As it is, the tool will mimic any number of OSDs, by keeping in-memory
stubs that will act as independent OSDs, generating random operations
that will induce map updates; the client stub, on the other hand,
performs no operations besides connecting to the monitor and whatever
happens between the Objecter class and the monitor (mainly keeping
up to date with map updates).
Signed-off-by: Joao Eduardo Luis <joao.luis@inktank.com>
crush: relax the order by which rules and buckets must be defined
Before, we only allowed buckets (say, 'root') to be defined *before*
rules.
With this patch, we allow buckets and rules to be defined in any order,
although some care should be taken when creating the plain-text crush
map, or crushtool will error out when a rule uses a bucket that is only
defined later in the file.
Signed-off-by: Joao Eduardo Luis <joao.luis@inktank.com>
'verbose' was a bool that would be passed as either one or zero to class
CrushCompiler. However, most messages would only be output with a
verbose level > 1.
This patch makes it so that multiple '-v' flags increase the verbosity
level; i.e., -v means verbose = 1; -v -v means verbose = 2; and so forth.
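A hedged sketch of the counting (standalone; not the actual crushtool argument parsing):

    #include <cstring>

    // Each occurrence of -v bumps the verbosity: -v => 1, -v -v => 2, ...
    int parse_verbosity(int argc, char **argv)
    {
      int verbose = 0;
      for (int i = 1; i < argc; ++i)
        if (std::strcmp(argv[i], "-v") == 0)
          ++verbose;
      return verbose;
    }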
Signed-off-by: Joao Eduardo Luis <joao.luis@inktank.com>
Danny Al-Gaaf [Tue, 27 Nov 2012 15:54:40 +0000 (16:54 +0100)]
fix syncfs handling in error case
If the call to syncfs() fails, don't try to call syncfs again via
syscall(). If HAVE_SYS_SYNCFS is defined, don't fall through to try
syscall() with SYS_syncfs or __NR_syncfs.
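A hedged sketch of the intended structure (HAVE_SYS_SYNCFS is the build macro mentioned above; this is not the actual sync_filesystem() code):

    #include <unistd.h>
    #include <sys/syscall.h>

    // Exactly one syncfs variant is compiled in; if it fails we fall back to
    // a plain sync(), never to another syncfs entry point.
    int sync_fs(int fd)
    {
    #if defined(HAVE_SYS_SYNCFS)
      if (syncfs(fd) == 0)
        return 0;
    #elif defined(SYS_syncfs)
      if (syscall(SYS_syncfs, fd) == 0)
        return 0;
    #elif defined(__NR_syncfs)
      if (syscall(__NR_syncfs, fd) == 0)
        return 0;
    #endif
      sync();      // last resort: system-wide sync
      return 0;
    }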
Signed-off-by: Danny Al-Gaaf <danny.al-gaaf@bisect.de>
Yan, Zheng [Mon, 19 Nov 2012 02:43:35 +0000 (10:43 +0800)]
mds: don't add not issued caps when confirming cap receipt
There is a message ordering race in the cephfs kernel client. We compose
cap messages while i_ceph_lock is held, but when adding messages to the
output queue, the kernel releases i_ceph_lock and acquires a mutex. So it
is possible that cap messages are sent out of order. If the kernel client
sends a cap update and then a cap release, the two messages can reach the
MDS out of order, and the update message will re-add the released caps.
This patch adds code to check if caps were actually issued when
confirming cap receipt.
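As a hedged illustration of the check (simplified cap bitmasks; not the actual MDS Locker code):

    #include <cstdint>

    // 'confirmed' is what the (possibly stale, reordered) client message
    // claims to hold; 'issued' is what the MDS believes is currently issued.
    // Acknowledging only the intersection keeps a late-arriving update from
    // resurrecting caps that a later-sent release already dropped.
    uint32_t confirm_cap_receipt(uint32_t confirmed, uint32_t issued)
    {
      return confirmed & issued;
    }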
Yan, Zheng [Mon, 19 Nov 2012 02:43:34 +0000 (10:43 +0800)]
mds: fix anchor table update
The reference count of an anchor table entry that corresponds to a
directory is the number of anchored inodes under the directory. But
when updating the anchor trace for a directory inode, the code only
increases/decreases its new/old ancestor anchor table entries'
reference counts by one.
The current conundrum:
- commit_set() will issue a write and queue a waiter on a tid
- discard will discard all BufferHeads and unpin the object
- trim will try to close and fail assert(ob->can_close())
But:
- we can't wake the waiter on discard because we don't know what range(s)
it is waiting for; discard needn't be the whole object.
So: pin the object so it doesn't get trimmed, and unpin when we write.
Adjust can_close() so that it is based on the lru pin status, and assert
that pinned implies the previous conditions are all true.
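A hedged sketch of the resulting invariant (standalone stand-in, not the real ObjectCacher::Object):

    #include <cassert>

    // A pending commit waiter pins the object; LRU trim may only close
    // unpinned objects, so the object survives an intervening discard.
    struct Object {
      int pins = 0;
      bool can_close() const { return pins == 0; }
      void pin()   { ++pins; }                      // taken in commit_set()
      void unpin() { assert(pins > 0); --pins; }    // dropped when the write commits
    };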
Signed-off-by: Sage Weil <sage@inktank.com> Reviewed-by: Sam Lang <sam.lang@inktank.com>
Jim Schutt [Mon, 10 Sep 2012 21:43:19 +0000 (15:43 -0600)]
crush: for chooseleaf rules, retry CRUSH map descent from root if leaf is failed
Consider the CRUSH rule
step chooseleaf firstn 0 type <node_type>
This rule means that <n> replicas will be chosen in a manner such that
each chosen leaf's branch will contain a unique instance of <node_type>.
When an object is re-replicated after a leaf failure, if the CRUSH map uses
a chooseleaf rule the remapped replica ends up under the <node_type> bucket
that held the failed leaf. This causes uneven data distribution across the
storage cluster, to the point that when all the leaves but one fail under a
particular <node_type> bucket, that remaining leaf holds all the data from
its failed peers.
This behavior also limits the number of peers that can participate in the
re-replication of the data held by the failed leaf, which increases the
time required to re-replicate after a failure.
For a chooseleaf CRUSH rule, the tree descent has two steps: call them the
inner and outer descents.
If the tree descent down to <node_type> is the outer descent, and the descent
from <node_type> down to a leaf is the inner descent, the issue is that a
down leaf is detected on the inner descent, so only the inner descent is
retried.
In order to disperse re-replicated data as widely as possible across a
storage cluster after a failure, we want to retry the outer descent. So,
fix up crush_choose() to allow the inner descent to return immediately on
choosing a failed leaf. Wire this up as a new CRUSH tunable.
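A hedged, heavily simplified sketch of the retry behavior (hypothetical helpers; the real crush_choose() is considerably more involved):

    // Outer descent: root -> <node_type> bucket; inner descent: bucket -> leaf.
    // With the new tunable the inner descent reports a failed leaf immediately
    // instead of retrying locally, and the caller restarts the *whole* descent.
    int choose_node(int root, int attempt);      // hypothetical outer descent
    int choose_leaf_in(int node, int attempt);   // hypothetical inner descent; <0 if the leaf is down

    int chooseleaf_with_retry(int root, int max_attempts)
    {
      for (int attempt = 0; attempt < max_attempts; ++attempt) {
        int node = choose_node(root, attempt);
        int leaf = choose_leaf_in(node, attempt);
        if (leaf >= 0)
          return leaf;     // healthy leaf; its <node_type> bucket may differ per attempt
      }
      return -1;           // no healthy leaf after max_attempts full descents
    }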
Note that after this change, for a chooseleaf rule, if the primary OSD
in a placement group has failed, choosing a replacement may result in
one of the other OSDs in the PG colliding with the new primary. This
requires that OSD's data for that PG be moved as well. This
seems unavoidable but should be relatively rare.
Sage Weil [Mon, 26 Nov 2012 22:34:44 +0000 (14:34 -0800)]
perfcounters: fl -> time, use u64 nsec instead of double
(Almost) all current float users are actually time values, so switch to
a utime_t-based interface that internally uses nsec in a u64. This avoids
using floating point in librbd, which is problematic for Windows VMs that
leave the FPU in an unfriendly state.
There are two non-time users in the mds and osd that log the CPU load.
Just multiply those values by 100 and report as ints instead.
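A hedged sketch of the representation (standalone helpers, not the actual PerfCounters API):

    #include <cstdint>

    // Time values travel as whole nanoseconds in a u64, so the hot path never
    // touches floating point (doubles only appear when values are formatted).
    static inline uint64_t time_to_nsec(uint64_t sec, uint64_t nsec)
    {
      return sec * 1000000000ull + nsec;
    }

    // The two CPU-load counters are simply scaled by 100 and logged as ints.
    static inline uint64_t load_to_centi(double load)
    {
      return static_cast<uint64_t>(load * 100.0);
    }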
Fixes: #3521 Signed-off-by: Sage Weil <sage@inktank.com>
Alexandre Oliva [Mon, 26 Nov 2012 21:13:46 +0000 (13:13 -0800)]
logrotate on systems without invoke-rc.d
The which command doesn't output anything to stdout when it can't find
the given program name, and then [ -x ] passes. To fix this, use the
exit status of which to tell whether the command exists before testing
whether it's executable.
Yehuda Sadeh [Mon, 26 Nov 2012 18:15:32 +0000 (10:15 -0800)]
rgw: don't default POST requests to init multipart upload
Fixes: #3516
We no longer default to an init multipart upload request when
getting an S3 POST. This way, when the request is not really an
init multipart upload, we end up sending a 405 response
instead of a 500. Also, it's cleaner this way.
New config keys:
- 'osd mkfs options $fstype': file system specific options for mkfs
- 'osd mkfs type': to define the filesystem for mkfs and also mount
Replaced in mkcephfs: --mkbtrfs with --mkfs
Replaced in init-ceph:
- --btrfs with --fsmount
- --nobtrfs with --nofsmount
- --btrfsumount with --fsumount
NOTE: old options from mkcephfs and init-ceph will still work, but
may get removed from the scripts in the future.
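For example, a hedged ceph.conf sketch using the new keys (values are illustrative only):

    [osd]
        osd mkfs type = xfs
        osd mkfs options xfs = -f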
Signed-off-by: Danny Al-Gaaf <danny.al-gaaf@bisect.de>
auth: cephx: increase log levels when logging secrets
We understand that logging secrets may be useful when debugging the root
causes of auth issues. However, logging secrets is far from a good idea.
Therefore, just increase the log levels to a high enough value so that
most other debug info can be obtained without even logging the secrets.
If one really wants to log the secrets, then setting --debug-auth 30 should
do the trick.
Fixes: #3361 Signed-off-by: Joao Eduardo Luis <joao.luis@inktank.com>
crush: CrushWrapper: don't add item to a bucket of a different type than wanted
We took little care about the type of the bucket we were adding an item
to. Although this worked for the vast majority of cases, it also left
room for silly little mistakes to become problematic and lead a monitor
to crash.
For instance, say that we ran:
'ceph osd crush set 0 osd.0 1 root=foo row=foo'
If root 'foo' exists, then this will work and 'row=foo' will be ignored.
However, if there is no bucket named 'foo', then we would (in order)
create a bucket for row 'foo', add osd.0 to it, and then add
osd.0 to bucket 'foo' again -- remember, little consideration regarding
the bucket type was given.
This would trigger a monitor crash due to the recursion done in
'adjust_item_weight'. A solution to this problem is to make sure that we
do not allow specifying multiple buckets with the same name when adding
an item to crush. Not only does this solve our crash problem, but it
will also render invalid any mistake in specifying the wrong bucket type
(say, using 'row=bar' when in fact 'bar' is a rack).
Fixes: #3515 Signed-off-by: Joao Eduardo Luis <joao.luis@inktank.com>