Samuel Just [Wed, 29 Jan 2014 21:38:04 +0000 (13:38 -0800)]
PG,ReplicatedPG: Generalize missing_loc for ECBackend
Prior to EC pools, unfound followed directly from missing. Now, unfound
(unreadable, really) depends on the PGBackend's requirements for
reconstituting an object. This also means that recovering an object missing
on a replica but not on the primary requires tracking the missing_loc set.
Thus, rather than maintaining missing_loc only for objects missing
on the primary, the MissingLoc structure will track all missing
objects actingbackfill-wide until each object is recovered.
For simplicity, since we don't really know what objects need recovery
until activation (and since we can't do anything with that information
prior to activation anyway), we defer populating the missing_loc
information until activation.
We need peers to roll back divergent log entries before we attempt to
read the relevant objects. The simplest way to accomplish this seems to
be to simply always activate peers when search_for_missing turns up
missing objects.
With EC pools, checking missing is necessary but no longer sufficient for
determining readability. Thus, we check is_unreadable in cases where we need
to read the object and reserve is_missing for cases where we need the object context.
wait_for_missing_object becomes waiting_for_unreadable_object in order to avoid
having another layer of waiting_for_* maps. These ops may be requeued
either when the object is recovered on the primary or when it is no longer
degraded, depending on when the object becomes readable.
Samuel Just [Wed, 12 Feb 2014 18:53:13 +0000 (10:53 -0800)]
PG: allow PGBackend to set criteria for PG up-ness
ECBackend needs to be able to require that a readable set of shards
from the most recent interval that accepted writes be available,
in order to ensure that it rolls the log back far enough.
Samuel Just [Tue, 28 Jan 2014 00:52:05 +0000 (16:52 -0800)]
PGBackend: add some additional helpers.
ECBackend's primary-specific logic mostly won't treat the primary shard
specially, so it will be handy to have primary-agnostic helpers for
get_shard_info and get_shard_missing.
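A minimal sketch of the shape such a helper might take (pg_shard_t is reduced to an int and the surrounding class is hypothetical; this is not the real PGBackend interface):

    #include <cassert>
    #include <map>
    #include <set>
    #include <string>

    using pg_shard_t = int;
    using missing_set_t = std::set<std::string>;

    struct ShardInfoSource {
      pg_shard_t whoami;
      std::map<pg_shard_t, missing_set_t> all_missing;  // includes whoami

      // Works identically whether shard happens to be the primary or a peer.
      const missing_set_t &get_shard_missing(pg_shard_t shard) const {
        auto it = all_missing.find(shard);
        assert(it != all_missing.end());
        return it->second;
      }
    };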
Samuel Just [Wed, 12 Feb 2014 18:44:45 +0000 (10:44 -0800)]
osd/: extend pg_interval_t to include primary
Otherwise, we cannot correctly determine up_from/up_thru for
old intervals. Also, we need this information to determine
when a new interval starts due to a new primary without a
change in the acting set.
Samuel Just [Thu, 16 Jan 2014 23:27:36 +0000 (15:27 -0800)]
messages/: include shard information in various pg messages
We can no longer use the messenger source information to determine
the origin of the message, since an osd might have more than one
shard of a particular pg. Thus, we need to include a pg_shard_t
"from" field to indicate the origin. Similarly, pg_t is no longer
sufficient to specify the destination pg; we instead use spg_t.
In the event that we get a message from an old peer, we default the
"from" field to pg_shard_t(get_source().num(), ghobject_t::no_shard())
and the destination to spg_t(pgid, ghobject_t::no_shard()). This suffices
because non-NO_SHARD shards can only appear once ec pools have
been enabled -- and enabling them bans unenlightened osds.
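A sketch of the decode-side defaulting described above; the structs and NO_SHARD constant below are simplified stand-ins for the real pg_shard_t/spg_t definitions:

    #include <cstdint>

    static const int8_t NO_SHARD = -1;

    struct pg_shard_sketch { int32_t osd; int8_t shard; };
    struct spg_sketch      { uint64_t pgid; int8_t shard; };

    // When the peer is too old to encode shard information, fall back to
    // NO_SHARD.  This is safe because shards other than NO_SHARD can only
    // exist once EC pools are enabled, which excludes pre-EC osds.
    pg_shard_sketch decode_from(bool peer_encodes_shards,
                                int32_t src_osd, int8_t shard) {
      return peer_encodes_shards ? pg_shard_sketch{src_osd, shard}
                                 : pg_shard_sketch{src_osd, NO_SHARD};
    }

    spg_sketch decode_target(bool peer_encodes_shards,
                             uint64_t pgid, int8_t shard) {
      return peer_encodes_shards ? spg_sketch{pgid, shard}
                                 : spg_sketch{pgid, NO_SHARD};
    }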
Samuel Just [Thu, 23 Jan 2014 21:32:21 +0000 (13:32 -0800)]
ReplicatedPG,osd_types: separate require_rollback from ec_pool
It's handy to allow a pool to answer false to ec_pool() and
true to require_rollback() in order to allow a replicated
pool to test the rollback mechanisms without allowing
non-NO_SHARD shards.
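Roughly, the decoupled predicates look like this (pg_pool_sketch and the force_rollback flag are hypothetical, not the real osd_types fields):

    struct pg_pool_sketch {
      bool is_erasure = false;
      bool force_rollback = false;   // hypothetical testing knob

      bool ec_pool() const { return is_erasure; }

      // A replicated pool can exercise the rollback machinery without
      // ever producing non-NO_SHARD shards.
      bool require_rollback() const { return is_erasure || force_rollback; }
    };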
Yan, Zheng [Sun, 16 Feb 2014 14:14:50 +0000 (22:14 +0800)]
ReplicatedPG: return no data if read size is trimmed to zero
OSD should return no data if the read size is trimmed to zero by the
truncate_seq/truncate_size check. We can't rely on ObjectStore::read()
to do that because it reads the entire object when the 'len' parameter
is zero.
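A simplified sketch of the check (the function and parameter names are stand-ins, not the real read path):

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    std::vector<char> trimmed_read(uint64_t off, uint64_t len,
                                   uint64_t op_truncate_seq,
                                   uint64_t obj_truncate_seq,
                                   uint64_t truncate_size,
                                   const std::vector<char> &data) {
      if (op_truncate_seq > obj_truncate_seq) {
        // The client-side truncate is newer than our copy: trim the range.
        len = (off >= truncate_size) ? 0 : std::min(len, truncate_size - off);
      }
      if (len == 0)
        return {};  // return no data; never pass len == 0 to a layer that
                    // treats it as "read the whole object"
      uint64_t end = std::min<uint64_t>(off + len, data.size());
      if (off >= end)
        return {};
      return std::vector<char>(data.begin() + static_cast<std::ptrdiff_t>(off),
                               data.begin() + static_cast<std::ptrdiff_t>(end));
    }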
Sage Weil [Sun, 16 Feb 2014 01:22:30 +0000 (17:22 -0800)]
osd: set client incarnation for Objecter instance
Each ceph-osd process's Objecter instance has a sequence
of tids that start at 1. To ensure these are unique
across all time, set the client incarnation to the
OSDMap epoch in which we booted.
Note that the MDS does something similar (except the
incarnation is actually the restart count for the MDS
rank, since the MDSMap tracks that explicitly).
Backport: emperor
Signed-off-by: Sage Weil <sage@inktank.com>
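The idea, reduced to a sketch (the Objecter type here is a stand-in, not the real API):

    #include <cstdint>

    struct ObjecterSketch {
      uint64_t incarnation = 0;
      uint64_t last_tid = 0;
      void set_client_incarnation(uint64_t inc) { incarnation = inc; }
      uint64_t next_tid() { return ++last_tid; }  // restarts at 1 each boot
    };

    void on_osd_boot(ObjecterSketch &objecter, uint32_t boot_epoch) {
      // No two boots of the same osd share an epoch, so (incarnation, tid)
      // pairs stay unique even though each tid sequence starts at 1.
      objecter.set_client_incarnation(boot_epoch);
    }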
Sage Weil [Wed, 12 Feb 2014 20:39:25 +0000 (12:39 -0800)]
osd: schedule agent from a priority queue
We need to focus agent attention on those PGs that most need it. For
starters, full PGs need immediate attention so that we can unblock IO.
More generally, fuller ones will give us the best payoff in terms of
evicted data vs effort expended finding candidate objects.
Restructure the agent queue with priorities. Quantize evict_effort so that
PGs do not jump between priorities too frequently.
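One way the quantization might look (the 0.1 step and the priority scale are illustrative choices, not the actual config values):

    #include <algorithm>
    #include <cmath>
    #include <cstdint>

    uint32_t agent_priority(double evict_effort /* 0.0 .. 1.0 */) {
      const double step = 0.1;  // hypothetical quantum
      double q = std::round(evict_effort / step) * step;
      q = std::clamp(q, 0.0, 1.0);
      // PGs only change priority when their effort crosses a step boundary,
      // so small fluctuations in fullness do not reshuffle the queue.
      return static_cast<uint32_t>(q * 100.0 + 0.5);
    }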
Sage Weil [Wed, 12 Feb 2014 00:25:51 +0000 (16:25 -0800)]
osd/ReplicatedPG: block requests to cache PGs when they are full
If we are full and get a write request to a new object, put the op on a
wait list. Wake up when the agent frees up some space.
Note that we do not block writes to existing objects. That would be a
more aggressive strategy, but it is difficult to know up front whether we
will increase the size of the object or not, so we just leave it be. I
suspect this strategy is "good enough".
Also note that we do not yet prioritize agent attention to PGs that most
need eviction (e.g., those that are full).
Sage Weil [Tue, 11 Feb 2014 22:01:10 +0000 (14:01 -0800)]
osd/ReplicatedPG: redirect reads instead of promoting when full
If the cache pool is full, we are processing a read op, and we would
otherwise promote, redirect instead. This lets us continue to process the
op without blocking or making the cache pool any more full than it is.
Sage Weil [Sat, 8 Feb 2014 02:05:04 +0000 (18:05 -0800)]
osd/ReplicatedPG: do not flush omap objects to an EC base pool
The EC pool does not support omap content. If the caching/tiering agent
encounters such an object, just skip it. Use the OMAP object_info_t flag
for this.
Although legacy pools will have objects with omap that do not have this
flag set, no *cache* pools yet exist, so we do not need to worry about the
agent running across legacy content.
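The skip reduces to a check along these lines (the flag and field names are simplified stand-ins for the object_info_t and pool fields):

    #include <cstdint>

    constexpr uint32_t FLAG_OMAP = 1u << 0;   // stand-in for the new flag

    struct object_info_sketch { uint32_t flags = 0; };
    struct base_pool_sketch   { bool supports_omap = true; };

    bool agent_may_flush(const object_info_sketch &oi,
                         const base_pool_sketch &base) {
      if (!base.supports_omap && (oi.flags & FLAG_OMAP))
        return false;   // EC base pool cannot hold omap content; skip it
      return true;
    }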
Sage Weil [Sat, 8 Feb 2014 02:03:19 +0000 (18:03 -0800)]
osd: add OMAP flag to object_info_t
Set a flag if we ever set or update OMAP content on an object. This gives
us an easy indicator for the cache agent (without actually querying the
ObjectStore) so that we can avoid trying to flush omap to EC pools.
Sage Weil [Mon, 3 Feb 2014 01:11:23 +0000 (17:11 -0800)]
osd/ReplicatedPG: do not choke on op-less flush OpContexts (from flush)
The agent initiates flush ops that don't have an OpRequest associated
with them. Make reply_ctx skip the actual reply message instead of
crashing if the flush request gets canceled (e.g., due to a race with
a write).
Sage Weil [Tue, 28 Jan 2014 01:57:53 +0000 (17:57 -0800)]
osd/ReplicatedPG: add slop to agent mode selection
We want to avoid a situation where the agent flips on and off as the
system hovers around a utilization threshold. Particularly for trim,
the system can expend a lot of energy doing a minimal amount of work when
the effort level is low. To avoid this, enable when we are some amount
above the threshold, and do not turn off until we are the same amount below
the target.
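The hysteresis amounts to something like the following (the 0.02 slop value and the names are illustrative, not the real config option):

    // Decide whether the agent should be running, given its previous state.
    bool update_agent_active(bool currently_active, double utilization,
                             double threshold, double slop = 0.02) {
      if (!currently_active)
        return utilization >= threshold + slop;  // start only once clearly above
      return utilization > threshold - slop;     // stop only once clearly below
    }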
Sage Weil [Tue, 28 Jan 2014 00:26:19 +0000 (16:26 -0800)]
osd/ReplicatedPG: init agent to random hash position inside pg
When the agent starts, start at a random offset to ensure we get a more
uniform distribution of attention to all objects in the PG. Otherwise, we
will disproportionately examine objects at the "beginning" of the PG if we
are interrupted by peering or restarts or some other activity.
Note that if the agent_state is preserved, we do not forget our position,
which is also nice.
We *could* persist this position in the pg_info_t somewhere, but I am not
sure it is worth the effort.
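A sketch of picking the starting offset (hash positions are reduced to a plain 32-bit value here, rather than the real hobject_t handling):

    #include <cstdint>
    #include <random>

    struct agent_state_sketch {
      bool position_valid = false;
      uint32_t position = 0;        // current hash position within the PG
    };

    void agent_maybe_init_position(agent_state_sketch &state) {
      if (state.position_valid)
        return;                     // a preserved position is kept as-is
      std::random_device rd;
      state.position = rd();        // uniform start avoids always beginning
      state.position_valid = true;  // at the "front" of the PG
    }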
Sage Weil [Fri, 24 Jan 2014 22:35:41 +0000 (14:35 -0800)]
osd/ReplicatedPG: basic flush and evict agent functionality
This is very basic flush and evict functionality for the tiering agent.
The flush policy is very simple: if we are above the threshold and the
object is dirty, and not super young, flush it. This is not too braindead
of a policy (although we could clearly do something smarter).
The evict policy is pretty simple: evict the object if it is clean and
we are over our full threshold. If we are in the middle mode, try to
estimate how cold the object is based on an accumulated histogram of
objects we have examined so far, and decide to evict based on our
position in that histogram relative to our "effort" level.
Caveats:
* the histograms are not refreshed
* we aren't taking temperature into consideration yet, although some of
the infrastructure is there.
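Very roughly, the middle-mode eviction decision works like this (the bucketing, the coldness notion, and all names below are simplified stand-ins for the real agent state):

    #include <array>
    #include <cstdint>
    #include <numeric>

    struct cold_histogram {
      std::array<uint64_t, 10> buckets{};   // bucket 9 = coldest

      void add(unsigned bucket) { ++buckets.at(bucket); }

      // Fraction of examined objects at least as cold as this bucket.
      double fraction_at_least(unsigned bucket) const {
        uint64_t total = std::accumulate(buckets.begin(), buckets.end(),
                                         uint64_t(0));
        if (total == 0)
          return 1.0;
        uint64_t colder = 0;
        for (unsigned i = bucket; i < buckets.size(); ++i)
          colder += buckets[i];
        return double(colder) / double(total);
      }
    };

    bool should_evict(bool is_clean, bool over_full, double evict_effort,
                      unsigned coldness_bucket, const cold_histogram &hist) {
      if (!is_clean)
        return false;      // dirty objects must be flushed first
      if (over_full)
        return true;       // over the full threshold: evict anything clean
      // Middle mode: evict only if the object falls within the coldest
      // evict_effort fraction of what we have examined so far.
      return hist.fraction_at_least(coldness_bucket) <= evict_effort;
    }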
Sage Weil [Tue, 4 Feb 2014 06:10:07 +0000 (22:10 -0800)]
osd/ReplicatedPG: add on_finish to OpContext
Add a callback hook for whenever an OpContext completes or cancels. We
are pretty sloppy here about the return values because our initial user
will not care, and it is unclear if future users will.
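The hook reduces to something like this (OpContextSketch is a stand-in, not the real OpContext):

    #include <functional>
    #include <list>

    struct OpContextSketch {
      std::list<std::function<void(int)>> on_finish;

      // Called exactly once, on either completion or cancellation; r carries
      // the result, but nothing currently depends on what callbacks do with it.
      void finish(int r) {
        for (auto &cb : on_finish)
          cb(r);
        on_finish.clear();
      }
    };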