From 0386b951f72d9c401ae94c1f3f1a01ad20d7921e Mon Sep 17 00:00:00 2001
From: sageweil
Date: Mon, 12 Nov 2007 19:02:46 +0000
Subject: [PATCH] tasks/roadmap

git-svn-id: https://ceph.svn.sf.net/svnroot/ceph@2054 29311d96-e01e-0410-9327-a35deaab8ce9
---
 trunk/web/tasks.body | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/trunk/web/tasks.body b/trunk/web/tasks.body
index 038c9395d43e1..fc81ba8249cd8 100644
--- a/trunk/web/tasks.body
+++ b/trunk/web/tasks.body
@@ -5,9 +5,9 @@
 Here is a brief summary of what we're currently working on, and what state we expect Ceph to take in the foreseeable future. This is a rough estimate, and highly dependent on what kind of interest Ceph generates in the larger community.
@@ -32,7 +32,7 @@
Each Ceph OSD (storage node) runs a custom object "file system" called EBOFS to store objects on locally attached disks. Although the current implementation of EBOFS is fully functional and already demonstrates promising performance (outperforming ext2/3, XFS, and ReiserFS under the workloads we anticipate), a range of improvements will be needed before it is ready for prime-time. These include:
@@ -40,7 +40,7 @@

Native kernel client

-The prototype Ceph client is implemented as a user-space library. Although it can be mounted under Linux via the FUSE (file system in userspace) library, this incurs a significant performance penalty and limits Ceph's ability to provide strong POSIX semantics and consistency. A native Linux kernel implementation of the client in needed in order to properly take advantage of the performance and consistency features of Ceph. Because the client interaction with Ceph MDS and OSDs is more complicated than existing network file systems like NFS, this is a non-trivial endeavor. We are actively looking for experienced kernel programmers to help us out!
+The prototype Ceph client is implemented as a user-space library. Although it can be mounted under Linux via the FUSE (file system in userspace) library, this incurs a significant performance penalty and limits Ceph's ability to provide strong POSIX semantics and consistency. A native Linux kernel implementation of the client is needed in order to properly take advantage of the performance and consistency features of Ceph. We are actively looking for experienced kernel programmers to help guide development in this area!

CRUSH tools

@@ -48,9 +48,9 @@
 Ceph utilizes a novel data distribution function called CRUSH to distribute data (in the form of objects) to storage nodes (OSDs). CRUSH is designed to generate a balanced distribution while allowing the storage cluster to be dynamically expanded or contracted, and to separate object replicas across failure domains to enhance data safety. There is a certain amount of finesse involved in properly managing the OSD hierarchy from which CRUSH generates its distribution in order to minimize the amount of data migration that results from changes. An administrator tool would be useful for helping to manage the CRUSH mapping function in order to best exploit the available storage and network infrastructure. For more information, please refer to the technical paper describing CRUSH.
-
-The Ceph project is always looking for more participants. If any of these projects sound interesting to you, please join our mailing list and drop us a line.
+
+The Ceph project is always looking for more participants. If any of these projects sound interesting to you, please join our mailing list.
-Please feel free to contact us with any questions or comments.
\ No newline at end of file
+Please feel free to contact me with any questions or comments.
-- 
2.39.5
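The CRUSH paragraph in the final hunk describes a placement function that is deterministic, balanced, and moves little data when the cluster changes. As a rough illustration of that property only — this is a much-simplified rendezvous (HRW) hashing sketch, not CRUSH itself, and all names (`place`, `osd0`…) are hypothetical — consider:

```python
# NOT the CRUSH algorithm -- a simplified rendezvous-hashing sketch that
# shows the placement property described in the patch: each object maps
# deterministically to a set of OSDs, load stays balanced, and adding an
# OSD moves only a fraction of existing placements.
import hashlib

def place(obj: str, osds: list[str], replicas: int = 2) -> list[str]:
    """Rank OSDs by a per-(object, OSD) hash; take the top `replicas`."""
    score = lambda osd: hashlib.sha1(f"{obj}:{osd}".encode()).hexdigest()
    return sorted(osds, key=score, reverse=True)[:replicas]

osds = [f"osd{i}" for i in range(4)]
before = {f"obj{i}": place(f"obj{i}", osds) for i in range(1000)}

# Expand the cluster by one OSD and count how many placements change.
after = {o: place(o, osds + ["osd4"]) for o in before}
moved = sum(before[o] != after[o] for o in before)
print(f"{moved} of 1000 placements changed")  # typically well under half
```

CRUSH goes far beyond this sketch — it walks a weighted hierarchy of failure domains rather than a flat OSD list — but the same design goal applies: unlike a naive `hash(obj) % N` scheme, which reshuffles nearly everything when N changes, only the placements that now rank the new OSD move.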