Samuel Just [Thu, 23 Jan 2014 21:32:21 +0000 (13:32 -0800)]
ReplicatedPG,osd_types: separate require_rollback from ec_pool
It's handy to let a pool answer false to ec_pool() and
true to require_rollback(), so that a replicated pool can
exercise the rollback mechanisms without allowing
non-NO_SHARD shards.
Yan, Zheng [Sun, 16 Feb 2014 14:14:50 +0000 (22:14 +0800)]
ReplicatedPG: return no data if read size is trimmed to zero
OSD should return no data if the read size is trimmed to zero by the
truncate_seq/truncate_size check. We can't rely on ObjectStore::read()
to do that because it reads the entire object when the 'len' parameter
is zero.
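A rough standalone sketch of the guard (the store interface here is an invented stand-in, not the actual ObjectStore API):

  #include <cstdint>
  #include <cstdio>
  #include <string>
  #include <vector>

  // Illustrative stand-in for a backing-store read.  By convention, len == 0
  // means "read the whole object", which is the behaviour worked around here.
  static std::vector<char> store_read(const std::string& obj, uint64_t off, uint64_t len) {
    if (off >= obj.size())
      return {};
    uint64_t avail = obj.size() - off;
    if (len == 0 || len > avail)
      len = avail;                        // len == 0 expands to the whole object
    return std::vector<char>(obj.begin() + off, obj.begin() + off + len);
  }

  // The fix: if the truncate_seq/truncate_size check trimmed the length to
  // zero, return no data rather than passing len == 0 down to the store.
  static std::vector<char> do_read(const std::string& obj, uint64_t off, uint64_t trimmed_len) {
    if (trimmed_len == 0)
      return {};
    return store_read(obj, off, trimmed_len);
  }

  int main() {
    std::string obj = "0123456789";
    printf("bytes returned: %zu\n", do_read(obj, 4, 0).size());  // 0, not 6
  }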
Sage Weil [Sun, 16 Feb 2014 01:22:30 +0000 (17:22 -0800)]
osd: set client incarnation for Objecter instance
Each ceph-osd process's Objecter instance has a sequence
of tids that starts at 1. To ensure these are unique
across all time, set the client incarnation to the
OSDMap epoch in which we booted.
Note that the MDS does something similar (except the
incarnation is actually the restart count for the MDS
rank, since the MDSMap tracks that explicitly).
Backport: emperor
Signed-off-by: Sage Weil <sage@inktank.com>
Sage Weil [Wed, 12 Feb 2014 20:39:25 +0000 (12:39 -0800)]
osd: schedule agent from a priority queue
We need to focus agent attention on those PGs that most need it. For
starters, full PGs need immediate attention so that we can unblock IO.
More generally, fuller ones will give us the best payoff in terms of
evicted data vs effort expended finding candidate objects.
Restructure the agent queue with priorities. Quantize evict_effort so that
PGs do not jump between priorities too frequently.
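As a standalone illustration of the quantization idea (the step size and helper name are invented; this is not the actual ReplicatedPG code):

  #include <algorithm>
  #include <cstdio>

  // Hypothetical helper: snap a raw evict effort in [0,1] to coarse steps so
  // that small changes in fullness do not bounce a PG between queue priorities.
  static double quantize_effort(double effort, double step = 0.125) {
    double q = step * static_cast<int>(effort / step);
    return std::min(1.0, std::max(0.0, q));
  }

  int main() {
    // 0.51 and 0.58 quantize to the same value, so the PG keeps its priority.
    printf("%.3f %.3f\n", quantize_effort(0.51), quantize_effort(0.58));
  }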
Sage Weil [Wed, 12 Feb 2014 00:25:51 +0000 (16:25 -0800)]
osd/ReplicatedPG: block requests to cache PGs when they are full
If we are full and get a write request to a new object, put the op on a
wait list. Wake up when the agent frees up some space.
Note that we do not block writes to existing objects. That would be a
more aggressive strategy, but it is difficult to know up front whether we
will increase the size of the object or not, so we just leave it be. I
suspect this strategy is "good enough".
Also note that we do not yet prioritize agent attention to PGs that most
need eviction (e.g., those that are full).
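A toy sketch of the block/wake pattern described above (names are made up; the real code queues OpRequests on the PG, not strings):

  #include <cstdio>
  #include <deque>
  #include <string>

  // Hypothetical wait list: writes that would create a new object are parked
  // while the cache pool is full, and requeued once the agent frees space.
  struct CacheFullWaiters {
    std::deque<std::string> waiting;            // stand-in for queued ops
    bool full = true;

    void queue_op(const std::string& op, bool creates_new_object) {
      if (full && creates_new_object) {
        waiting.push_back(op);                  // block until space is freed
        return;
      }
      printf("process %s\n", op.c_str());       // writes to existing objects pass
    }

    void on_agent_freed_space() {               // the agent evicted something
      full = false;
      while (!waiting.empty()) {
        printf("requeue %s\n", waiting.front().c_str());
        waiting.pop_front();
      }
    }
  };

  int main() {
    CacheFullWaiters w;
    w.queue_op("write existing-obj", false);
    w.queue_op("write new-obj", true);
    w.on_agent_freed_space();
  }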
Sage Weil [Tue, 11 Feb 2014 22:01:10 +0000 (14:01 -0800)]
osd/ReplicatedPG: redirect reads instead of promoting when full
If the cache pool is full, we are processing a read op, and we would
otherwise promote, redirect instead. This lets us continue to process the
op without blocking or making the cache pool any more full than it is.
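A compact sketch of the decision (hypothetical enum and helper; the real redirect replies to the client with the base pool's location):

  #include <cstdio>

  enum class ReadAction { PROMOTE, REDIRECT_TO_BASE };

  // Hypothetical decision helper: a read that would normally trigger a
  // promotion is redirected to the base pool while the cache pool is full.
  static ReadAction handle_would_promote_read(bool cache_full) {
    return cache_full ? ReadAction::REDIRECT_TO_BASE : ReadAction::PROMOTE;
  }

  int main() {
    printf("full -> redirect: %d\n",
           handle_would_promote_read(true) == ReadAction::REDIRECT_TO_BASE);
  }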
Sage Weil [Sat, 8 Feb 2014 02:05:04 +0000 (18:05 -0800)]
osd/ReplicatedPG: do not flush omap objects to an EC base pool
The EC pool does not support omap content. If the caching/tiering agent
encounters such an object, just skip it. Use the OMAP object_info_t flag
for this.
Although legacy pools will have objects with omap that do not have this
flag set, no *cache* pools yet exist, so we do not need to worry about the
agent running across legacy content.
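A minimal sketch of the skip (the flag mirrors the object_info_t OMAP flag added in the next commit, but the surrounding types are invented):

  #include <cstdio>
  #include <vector>

  struct ObjInfo {
    bool dirty;
    bool has_omap;   // stand-in for the object_info_t OMAP flag
  };

  // Agent flush pass: an EC base pool cannot store omap, so skip such objects.
  static int flush_candidates(const std::vector<ObjInfo>& objs, bool base_is_ec) {
    int flushed = 0;
    for (const auto& o : objs) {
      if (!o.dirty)
        continue;
      if (base_is_ec && o.has_omap)
        continue;            // cannot flush omap content to an EC pool
      ++flushed;             // (real code would start a flush op here)
    }
    return flushed;
  }

  int main() {
    std::vector<ObjInfo> objs = {{true, false}, {true, true}, {false, false}};
    printf("flushed %d of 3\n", flush_candidates(objs, true));  // flushed 1 of 3
  }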
Sage Weil [Sat, 8 Feb 2014 02:03:19 +0000 (18:03 -0800)]
osd: add OMAP flag to object_info_t
Set a flag if we ever set or update OMAP content on an object. This gives
us an easy indicator for the cache agent (without actually querying the
ObjectStore) so that we can avoid trying to flush omap to EC pools.
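The flag itself is just a bit in the object's metadata; a rough standalone sketch (the field and constant names are illustrative, not the exact object_info_t definition):

  #include <cstdint>
  #include <cstdio>

  // Illustrative flag bit, kept alongside other per-object flags.
  static const uint64_t FLAG_OMAP = 1ull << 0;

  struct ObjInfoSketch {
    uint64_t flags = 0;
    void set_flag(uint64_t f)        { flags |= f; }
    bool test_flag(uint64_t f) const { return flags & f; }
  };

  int main() {
    ObjInfoSketch oi;
    // Any op that sets or updates omap content marks the object once...
    oi.set_flag(FLAG_OMAP);
    // ...so the agent can test cheaply, without asking the ObjectStore.
    printf("has omap: %d\n", oi.test_flag(FLAG_OMAP));
  }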
Sage Weil [Mon, 3 Feb 2014 01:11:23 +0000 (17:11 -0800)]
osd/ReplicatedPG: do not choke on op-less flush OpContexts (from flush)
The agent initiates flush ops that don't have an OpRequest associated
with them. Make reply_ctx skip the actual reply message instead of
crashing if the flush request gets canceled (e.g., due to a race with
a write).
Sage Weil [Tue, 28 Jan 2014 01:57:53 +0000 (17:57 -0800)]
osd/ReplicatedPG: add slop to agent mode selection
We want to avoid a situation where the agent clicks on and off when the
system hovers around a utilization threshold. Particularly for trim,
the system can expend a lot of energy doing a minimal amount of work when
the effort level is low. To avoid this, enable when we are some amount
above the threshold, and do not turn off until we are the same amount below
the target.
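The on/off hysteresis can be illustrated with a small standalone helper (the slop value and names are invented, not the actual osd config options):

  #include <cstdio>
  #include <initializer_list>

  // Hypothetical hysteresis: start working only once usage exceeds
  // target + slop, and keep working until it drops below target - slop.
  struct AgentMode {
    bool active = false;
    bool update(double usage, double target, double slop) {
      if (!active && usage >= target + slop)
        active = true;                 // clearly above: turn on
      else if (active && usage <= target - slop)
        active = false;                // clearly below: turn off
      return active;                   // in between: keep previous state
    }
  };

  int main() {
    AgentMode m;
    double target = 0.6, slop = 0.05;
    for (double u : {0.58, 0.62, 0.66, 0.61, 0.58, 0.54})
      printf("usage %.2f -> %s\n", u, m.update(u, target, slop) ? "on" : "off");
  }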
Sage Weil [Tue, 28 Jan 2014 00:26:19 +0000 (16:26 -0800)]
osd/ReplicatedPG: start agent at a random hash position inside pg
When the agent starts, start at a random offset to ensure we get a more
uniform distribution of attention to all objects in the PG. Otherwise, we
will disproportionately examine objects at the "beginning" of the PG if we
are interrupted by peering or restarts or some other activity.
Note that if the agent_state is preserved, we do not forget our position,
which is also nice.
We *could* persist this position in the pg_info_t somewhere, but I am not
sure it is worth the effort.
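As a rough standalone sketch of the idea (starting a scan at a random offset and wrapping around; the real code starts at a random hash position, and these names are invented):

  #include <cstdio>
  #include <random>
  #include <vector>

  // Visit every object in the PG exactly once per pass, but begin at a random
  // position so repeated, interrupted passes do not always favour the start.
  static void scan_pass(const std::vector<int>& objects, std::mt19937& rng) {
    if (objects.empty())
      return;
    std::uniform_int_distribution<size_t> pick(0, objects.size() - 1);
    size_t start = pick(rng);                     // remembered in agent_state
    for (size_t i = 0; i < objects.size(); ++i) {
      size_t idx = (start + i) % objects.size();  // wrap around to cover all
      printf("%d ", objects[idx]);                // (real code examines the object)
    }
    printf("\n");
  }

  int main() {
    std::mt19937 rng(std::random_device{}());
    scan_pass({0, 1, 2, 3, 4, 5, 6, 7}, rng);
  }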
Sage Weil [Fri, 24 Jan 2014 22:35:41 +0000 (14:35 -0800)]
osd/ReplicatedPG: basic flush and evict agent functionality
This is very basic flush and evict functionality for the tiering agent.
The flush policy is very simple: if we are above the threshold and the
object is dirty, and not super young, flush it. This is not too braindead
of a policy (although we could clearly do something smarter).
The evict policy is pretty simple: evict the object if it is clean and
we are over our full threshold. If we are in the middle mode, try to
estimate how cold the object is based on an accumulated histogram of
objects we have examined so far, and decide to evict based on our
position in that histogram relative to our "effort" level.
Caveats:
* the histograms are not refreshed
* we aren't taking temperature into consideration yet, although some of
the infrastructure is there.
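As a self-contained caricature of the histogram idea (the bucket count, names, and the interpretation of "effort" here are assumptions, not the actual agent code):

  #include <array>
  #include <cstdint>
  #include <cstdio>
  #include <initializer_list>

  // Accumulate an "age" histogram over objects examined so far, then evict an
  // object if its bucket sits in the coldest fraction indicated by the effort
  // level (effort = 0.3 means: evict roughly the coldest 30%).
  struct AtimeHisto {
    std::array<uint64_t, 8> buckets{};   // bucket 0 = hottest, 7 = coldest
    uint64_t total = 0;

    void add(int b) { buckets[b]++; total++; }

    bool should_evict(int b, double effort) const {
      if (total == 0)
        return false;
      uint64_t colder = 0;               // objects at least as cold as bucket b
      for (int i = b; i < 8; ++i)
        colder += buckets[i];
      return double(colder) / total <= effort;
    }
  };

  int main() {
    AtimeHisto h;
    for (int b : {0, 0, 1, 2, 2, 3, 5, 7}) h.add(b);
    printf("evict bucket 7? %d\n", h.should_evict(7, 0.3));  // 1/8 <= 0.3 -> yes
    printf("evict bucket 1? %d\n", h.should_evict(1, 0.3));  // 6/8 >  0.3 -> no
  }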
Sage Weil [Tue, 4 Feb 2014 06:10:07 +0000 (22:10 -0800)]
osd/ReplicatedPG: add on_finish to OpContext
Add a callback hook for whenever an OpContext completes or cancels. We
are pretty sloppy here about the return values because our initial user
will not care, and it is unclear if future users will.
Sage Weil [Mon, 20 Jan 2014 18:28:43 +0000 (10:28 -0800)]
osd: add pg_pool_t::get_pg_num_divisor
A PG is not always an equally sized fraction of the total pool size due to
the use of ceph_stable_mod. Add a helper to return the fraction
(denominator) of a given pg based on the current pg_num value.
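A standalone re-implementation of the ceph_stable_mod fold shows why the fraction differs per PG (the demo numbers are arbitrary; the new helper derives the divisor for a given pg from this behaviour):

  #include <cstdio>

  // Hash values are folded into pg_num buckets via the next power of two, so
  // when pg_num is not a power of two some PGs cover twice as much of the
  // hash space as others.
  static unsigned stable_mod(unsigned x, unsigned b, unsigned bmask) {
    if ((x & bmask) < b)
      return x & bmask;
    return x & (bmask >> 1);
  }

  int main() {
    unsigned pg_num = 12;        // not a power of two
    unsigned bmask = 15;         // next power of two (16) minus one
    unsigned counts[12] = {0};
    for (unsigned x = 0; x < 16; ++x)          // one sample per low-bit value
      counts[stable_mod(x, pg_num, bmask)]++;
    for (unsigned q = 0; q < pg_num; ++q)      // PGs 4..7 get a double share
      printf("pg %u: %u/16\n", q, counts[q]);  // (2/16); the rest get 1/16
  }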
By default, disallow adjustment of primary affinity unless the user has
opted in by adjusting their monitor config. This will avoid some user
pain because inadvertently setting the affinity will prevent older clients
from connecting to and using the cluster.
Sage Weil [Tue, 11 Feb 2014 17:25:04 +0000 (09:25 -0800)]
osd/OSDMap: apply primary_affinity to mapping
The behavior is a bit different for replicated and indep/erasure mode.
In the first case, we are rearranging the result. In the second case,
we can just set the primary argument to the right value.
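A schematic of the structural difference only (how the affinity-weighted primary is chosen is omitted, and these names are invented):

  #include <algorithm>
  #include <cstdio>
  #include <vector>

  // Given the osd chosen to be primary: replicated pools rearrange the result
  // so that osd comes first, while indep/erasure pools keep positions fixed
  // and simply record the primary separately.
  static void set_primary(std::vector<int>& osds, int& primary,
                          size_t chosen, bool erasure) {
    if (erasure) {
      primary = osds[chosen];             // positions carry shard meaning
    } else {
      std::rotate(osds.begin(), osds.begin() + chosen, osds.begin() + chosen + 1);
      primary = osds[0];                  // order itself is free to change
    }
  }

  int main() {
    std::vector<int> rep = {3, 7, 9}, ec = {3, 7, 9};
    int p = -1;
    set_primary(rep, p, 1, false);
    printf("replicated: %d %d %d, primary %d\n", rep[0], rep[1], rep[2], p);
    set_primary(ec, p, 1, true);
    printf("erasure:    %d %d %d, primary %d\n", ec[0], ec[1], ec[2], p);
  }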
Sage Weil [Sat, 15 Feb 2014 16:59:51 +0000 (08:59 -0800)]
mon/Elector: bootstrap on timeout
Currently if an election times out we call a new
election. If we have never joined a quorum, bootstrap
instead. This is heavier weight, but captures the case
where, during bootstrap:
- a and b have learned each others' addresses
- everybody calls an election
- a and b form a quorum
- c loops trying to call an election, but is ignored
because a and b don't see its address in the monmap
See logs:
ubuntu@teuthology:/var/lib/teuthworker/archive/sage-2014-02-14_13:50:04-ceph-deploy-wip-7212-sage-b-testing-basic-plana/83194
Sage Weil [Fri, 14 Feb 2014 19:25:52 +0000 (11:25 -0800)]
mon: tell MonmapMonitor first about winning an election
It is important in the bootstrap case that the very first paxos round
also codify the contents of the monmap itself in order to avoid any manner
of confusing scenarios where subsequent elections are called and people
try to recover and modify paxos without agreeing on who the quorum
participants are.
Sage Weil [Fri, 14 Feb 2014 19:13:26 +0000 (11:13 -0800)]
mon: only learn peer addresses when monmap epoch == 0
It is only safe to dynamically update the address for a peer mon in our
monmap if we are in the midst of the initial quorum formation (i.e.,
monmap.epoch == 0). If it is a later epoch, we have formed our initial
quorum and any and all monmap changes need to be agreed upon by the quorum
and committed via paxos.
Fixes: #7212
Signed-off-by: Sage Weil <sage@inktank.com>
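A schematic of the guard (types and names invented, not the monitor code):

  #include <cstdio>
  #include <map>
  #include <string>

  // A peer's advertised address is only folded into our monmap while we are
  // still forming the initial quorum, i.e. while the monmap epoch is still 0.
  struct MonmapSketch {
    unsigned epoch = 0;
    std::map<std::string, std::string> addrs;   // mon name -> address
  };

  static bool maybe_learn_addr(MonmapSketch& m, const std::string& name,
                               const std::string& addr) {
    if (m.epoch != 0)
      return false;          // later epochs: changes must go through paxos
    m.addrs[name] = addr;    // bootstrap: safe to learn dynamically
    return true;
  }

  int main() {
    MonmapSketch m;
    printf("epoch 0: learned=%d\n", maybe_learn_addr(m, "c", "10.0.0.3:6789"));
    m.epoch = 5;
    printf("epoch 5: learned=%d\n", maybe_learn_addr(m, "d", "10.0.0.4:6789"));
  }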
Greg Farnum [Tue, 11 Feb 2014 21:34:39 +0000 (13:34 -0800)]
OSD: create a helper for handling OSDMap subscriptions, and clean them up
We've had some trouble with not clearing out subscription requests and
overloading the monitors (though only because of other bugs). Write a
helper for handling subscription requests that we can use to centralize
safety logic. Clear out the subscription whenever we get a map that covers
it; if there are more maps available than we received, we will issue another
subscription request based on "m->newest_map" at the end of handle_osd_map().
Notice that the helper will no longer request old maps which we already have,
and that unless forced it will not dispatch multiple subscribe requests
to a single monitor.
Skipping old maps is safe:
1) we only trim old maps when the monitor tells us to,
2) we do not send messages to our peers until we have updated our maps
from the monitor.
That means only old and broken OSDs will send us messages based on maps
in our past, and we can (and should) ignore any directives from them anyway.
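A rough sketch of the helper's logic (the names and the interaction with the monitor client are simplified stand-ins, not the actual OSD code):

  #include <cstdio>

  // Only ask for maps newer than what we already have, and only re-send the
  // subscription when forced or when none is already outstanding.
  struct SubSketch {
    unsigned have = 0;            // newest map we already hold
    bool pending = false;         // a subscription is already outstanding

    void osdmap_subscribe(unsigned epoch_to_request, bool force) {
      if (epoch_to_request <= have) {
        printf("skip: already have epoch %u\n", have);
        return;                   // never re-request maps we already have
      }
      if (pending && !force) {
        printf("skip: request already pending\n");
        return;                   // avoid multiple requests to one monitor
      }
      pending = true;
      printf("subscribe from epoch %u\n", epoch_to_request);
    }

    void got_maps_up_to(unsigned e) {   // called at the end of map handling
      have = e;
      pending = false;                  // the subscription has been satisfied
    }
  };

  int main() {
    SubSketch s;
    s.have = 100;
    s.osdmap_subscribe(90, false);      // old map: skipped
    s.osdmap_subscribe(120, false);     // newer: request
    s.osdmap_subscribe(125, false);     // pending: skipped unless forced
    s.got_maps_up_to(125);
  }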