From: John Wilkins Date: Wed, 11 Apr 2012 18:26:36 +0000 (-0700) Subject: Removed some files for reorg. X-Git-Tag: v0.47~55^2~24 X-Git-Url: http://git-server-git.apps.pok.os.sepia.ceph.com/?a=commitdiff_plain;h=d7922e0d7b09f228ef23be53ec03f109a7adb9f4;p=ceph.git Removed some files for reorg. Submitted by: John Wilkins Signed-off-by: Tommi Virtanen --- diff --git a/doc/install/build_prerequisites.rst b/doc/install/build_prerequisites.rst deleted file mode 100644 index 481cd3ef2a9b..000000000000 --- a/doc/install/build_prerequisites.rst +++ /dev/null @@ -1,105 +0,0 @@ -=================== -Build Prerequisites -=================== - -Before you can build Ceph documentation or Ceph source code, you need to install several libraries and tools. - -.. tip:: Check this section to see if there are specific prerequisites for your Linux/Unix distribution. - - -Prerequisites for Building Ceph Documentation -============================================= -Ceph utilizes Python's Sphinx documentation tool. For details on -the Sphinx documentation tool, refer to: `Sphinx `_ -Follow the directions at `Sphinx 1.1.3 `_ -to install Sphinx. To run Sphinx with ``admin/build-doc``, at least the following packages are required: - -- ``python-dev`` -- ``python-pip`` -- ``python-virtualenv`` -- ``libxml2-dev`` -- ``libxslt-dev`` -- ``doxygen`` -- ``ditaa`` -- ``graphviz`` - -Execute ``sudo apt-get install`` for each dependency that isn't installed on your host. :: - - $ sudo apt-get install python-dev python-pip python-virtualenv libxml2-dev libxslt-dev doxygen ditaa graphviz - -Prerequisites for Building Ceph Source Code -=========================================== -Ceph provides ``autoconf`` and ``automake`` scripts to get you started quickly.
Ceph build scripts -depend on the following: - -- ``autotools-dev`` -- ``autoconf`` -- ``automake`` -- ``cdbs`` -- ``gcc`` -- ``g++`` -- ``git`` -- ``libboost-dev`` -- ``libedit-dev`` -- ``libssl-dev`` -- ``libtool`` -- ``libfcgi`` -- ``libfcgi-dev`` -- ``libfuse-dev`` -- ``linux-kernel-headers`` -- ``libcrypto++-dev`` -- ``libcrypto++`` -- ``libexpat1-dev`` -- ``libgtkmm-2.4-dev`` -- ``pkg-config`` - -On Ubuntu, execute ``sudo apt-get install`` for each dependency that isn't installed on your host. :: - - $ sudo apt-get install autotools-dev autoconf automake cdbs \ - gcc g++ git libboost-dev libedit-dev libssl-dev libtool \ - libfcgi libfcgi-dev libfuse-dev linux-kernel-headers \ - libcrypto++-dev libcrypto++ libexpat1-dev libgtkmm-2.4-dev - -On Debian/Squeeze, execute ``aptitude install`` for each dependency that isn't installed on your host. :: - - $ aptitude install autotools-dev autoconf automake cdbs \ - gcc g++ git libboost-dev libedit-dev libssl-dev libtool \ - libfcgi libfcgi-dev libfuse-dev linux-kernel-headers \ - libcrypto++-dev libcrypto++ libexpat1-dev libgtkmm-2.4-dev - - -Ubuntu Requirements ------------------- - -- ``uuid-dev`` -- ``libkeyutils-dev`` -- ``libgoogle-perftools-dev`` -- ``libatomic-ops-dev`` -- ``libaio-dev`` -- ``libgdata-common`` -- ``libgdata13`` - -Execute ``sudo apt-get install`` for each dependency that isn't installed on your host. :: - - $ sudo apt-get install uuid-dev libkeyutils-dev libgoogle-perftools-dev \ - libatomic-ops-dev libaio-dev libgdata-common libgdata13 - -Debian ------- -Alternatively, you may also install:: - - $ aptitude install fakeroot dpkg-dev - $ aptitude install debhelper cdbs libexpat1-dev libatomic-ops-dev - -openSUSE 11.2 (and later) ------------------------- - -- ``boost-devel`` -- ``gcc-c++`` -- ``libedit-devel`` -- ``libopenssl-devel`` -- ``fuse-devel`` (optional) - -Execute ``zypper install`` for each dependency that isn't installed on your host.
:: - - $ zypper install boost-devel gcc-c++ libedit-devel libopenssl-devel fuse-devel \ No newline at end of file diff --git a/doc/install/building_ceph.rst b/doc/install/building_ceph.rst deleted file mode 100644 index 81a2039901da..000000000000 --- a/doc/install/building_ceph.rst +++ /dev/null @@ -1,31 +0,0 @@ -============= -Building Ceph -============= - -Ceph provides build scripts for source code and for documentation. - -Building Ceph -============= -Ceph provides ``automake`` and ``configure`` scripts to streamline the build process. To build Ceph, navigate to your cloned Ceph repository and execute the following:: - - $ cd ceph - $ ./autogen.sh - $ ./configure - $ make - -You can use ``make -j`` to execute multiple build jobs in parallel, depending upon your system. For example:: - - $ make -j4 - -Building Ceph Documentation -=========================== -Ceph utilizes Python’s Sphinx documentation tool. For details on the Sphinx documentation tool, refer to: `Sphinx `_. To build the Ceph documentation, navigate to the Ceph repository and execute the build script:: - - $ cd ceph - $ admin/build-doc - -Once you build the documentation set, you may navigate to the output directory to view it:: - - $ cd build-doc/output - -There should be an ``/html`` directory and a ``/man`` directory containing documentation in HTML and manpage formats, respectively. diff --git a/doc/install/cloning_the_ceph_source_code_repository.rst b/doc/install/cloning_the_ceph_source_code_repository.rst deleted file mode 100644 index 8486e2df2987..000000000000 --- a/doc/install/cloning_the_ceph_source_code_repository.rst +++ /dev/null @@ -1,54 +0,0 @@ -======================================= -Cloning the Ceph Source Code Repository -======================================= -To check out the Ceph source code, you must have ``git`` installed -on your local host. To install ``git``, execute:: - - $ sudo apt-get install git - -You must also have a ``github`` account.
If you do not have a -``github`` account, go to `github.com `_ and register. -Follow the directions for setting up git at `Set Up Git `_. - -Generate SSH Keys ----------------- -You must generate SSH keys for github to clone the Ceph -repository. If you do not have SSH keys for ``github``, execute:: - - $ ssh-keygen -d - -Get the key to add to your ``github`` account:: - - $ cat .ssh/id_dsa.pub - -Copy the public key. - -Add the Key ----------- -Go to your ``github`` account, -click on "Account Settings" (i.e., the 'tools' icon); then, -click "SSH Keys" on the left side navbar. - -Click "Add SSH key" in the "SSH Keys" list, enter a name for -the key, paste the key you generated, and press the "Add key" -button. - -Clone the Source ---------------- -To clone the Ceph source code repository, execute:: - - $ git clone git@github.com:ceph/ceph.git - -Once ``git clone`` executes, you should have a full copy of the Ceph repository. - -Clone the Submodules -------------------- -Before you can build Ceph, you must initialize and update the submodules:: - - $ git submodule init - $ git submodule update - -.. tip:: Make sure you maintain the latest copies of these submodules. Running ``git status`` will tell you if the submodules are out of date:: - - $ git status - diff --git a/doc/install/downloading_a_ceph_release.rst b/doc/install/downloading_a_ceph_release.rst deleted file mode 100644 index 5a3ce1a48908..000000000000 --- a/doc/install/downloading_a_ceph_release.rst +++ /dev/null @@ -1,6 +0,0 @@ -========================== -Downloading a Ceph Release -========================== -As Ceph development progresses, the Ceph team releases new versions.
You may download Ceph releases here: - -`Ceph Releases `_ \ No newline at end of file diff --git a/doc/install/file_system_requirements.rst b/doc/install/file_system_requirements.rst deleted file mode 100644 index 5bc20ac8e116..000000000000 --- a/doc/install/file_system_requirements.rst +++ /dev/null @@ -1,31 +0,0 @@ -======================== -File System Requirements -======================== -Ceph OSDs depend on the Extended Attributes (XATTRs) of the underlying file system for: - -- Internal object state -- Snapshot metadata -- RADOS Gateway Access Control Lists (ACLs) - -Ceph OSDs rely heavily upon the stability and performance of the underlying file system. The -underlying file system must provide sufficient capacity for XATTRs. File system candidates for -Ceph include B tree and B+ tree file systems such as: - -- ``btrfs`` -- ``XFS`` - -.. warning:: XATTR limits. - - The RADOS Gateway's ACL and Ceph snapshots easily surpass the 4-kilobyte limit for XATTRs in ``ext4``, - causing the ``ceph-osd`` process to crash. So ``ext4`` is a poor file system choice if - you intend to deploy the RADOS Gateway or use snapshots. - -.. tip:: Use ``btrfs`` - - The Ceph team believes that the best performance and stability will come from ``btrfs``. - The ``btrfs`` file system has internal transactions that keep the local data set in a consistent state. - This makes OSDs based on ``btrfs`` simple to deploy, while providing scalability not - currently available from block-based file systems. The 64 KB XATTR limit of ``xfs`` - is enough to accommodate RBD snapshot metadata and RADOS Gateway ACLs. So ``xfs`` is the second-choice - file system of the Ceph team. If you only plan to use RADOS and ``rbd`` without snapshots and without - ``radosgw``, the ``ext4`` file system should work just fine.
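Whether a given mount satisfies these XATTR requirements can be checked empirically before deploying an OSD on it. The following is a rough sketch, not part of the original Ceph tooling; it assumes the ``attr`` package (which provides ``setfattr``) is installed, and should be run from a directory on the file system you intend to use for OSD data:

```shell
# Probe the current directory's file system for user XATTR support.
# Hypothetical helper script; the attribute name "user.ceph-test" is
# arbitrary and is removed along with the probe file.
probe=$(mktemp -p .)
if setfattr -n user.ceph-test -v ok "$probe" 2>/dev/null; then
    echo "xattrs: supported"
else
    echo "xattrs: unsupported (or setfattr is not installed)"
fi
rm -f "$probe"
```

Note that passing this check does not rule out the ``ext4`` size limit described in the warning above; it only confirms that XATTRs work at all on the mount.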
diff --git a/doc/install/hardware_requirements.rst b/doc/install/hardware_requirements.rst deleted file mode 100644 index d9cbea4c26ff..000000000000 --- a/doc/install/hardware_requirements.rst +++ /dev/null @@ -1,46 +0,0 @@ -===================== -Hardware Requirements -===================== -Ceph OSDs run on commodity hardware and a Linux operating system over a TCP/IP network. OSD hosts -should have ample data storage in the form of one or more hard drives or a Redundant Array of -Independent Disks (RAID). - - - -This section discusses the hardware requirements for each daemon, -the tradeoffs of running one ``ceph-osd`` per machine versus one per disk, -and hardware-related configuration options such as journal locations. - -+--------------+----------------+------------------------------------+ -| Process      | Criteria       | Minimum Requirement                | -+==============+================+====================================+ -| ``ceph-osd`` | Processor      | 64-bit x86; x-cores; 2MB Ln Cache  | -|              +----------------+------------------------------------+ -|              | RAM            | 12 GB                              | -|              +----------------+------------------------------------+ -|              | Disk Space     | 30 GB                              | -|              +----------------+------------------------------------+ -|              | Volume Storage | 2-4TB SATA Drives                  | -|              +----------------+------------------------------------+ -|              | Network        | 2-1GB Ethernet NICs                | -+--------------+----------------+------------------------------------+ -| ``ceph-mon`` | Processor      | 64-bit x86; x-cores; 2MB Ln Cache  | -|              +----------------+------------------------------------+ -|              | RAM            | 12 GB                              | -|              +----------------+------------------------------------+ -|              | Disk Space     | 30 GB                              | -|              +----------------+------------------------------------+ -|              | Volume Storage | 2-4TB SATA Drives                  | -|              +----------------+------------------------------------+ -|              | Network        | 2-1GB Ethernet NICs                | -+--------------+----------------+------------------------------------+ -| ``ceph-mds`` | Processor      | 64-bit x86; x-cores; 2MB Ln Cache  | -|
+----------------+------------------------------------+ -| | RAM | 12 GB | -| +----------------+------------------------------------+ -| | Disk Space | 30 GB | -| +----------------+------------------------------------+ -| | Volume Storage | 2-4TB SATA Drives | -| +----------------+------------------------------------+ -| | Network | 2-1GB Ethernet NICs | -+--------------+----------------+------------------------------------+ \ No newline at end of file diff --git a/doc/install/installing_rados_processes_and_daemons.rst b/doc/install/installing_rados_processes_and_daemons.rst deleted file mode 100644 index 85dcad1aa7a5..000000000000 --- a/doc/install/installing_rados_processes_and_daemons.rst +++ /dev/null @@ -1,41 +0,0 @@ -====================================== -Installing RADOS Processes and Daemons -====================================== - -When you start the Ceph service, the initialization process activates a series of daemons that run in the background. -The hosts in a typical RADOS cluster run at least one of three processes: - -- RADOS (``ceph-osd``) -- Monitor (``ceph-mon``) -- Metadata Server (``ceph-mds``) - -Each instance of a RADOS ``ceph-osd`` process performs a few essential tasks. - -1. Each ``ceph-osd`` instance provides clients with an object interface to the OSD for read/write operations. -2. Each ``ceph-osd`` instance communicates and coordinates with other OSDs to store, replicate, redistribute and restore data. -3. Each ``ceph-osd`` instance communicates with monitors to retrieve and/or update the master copy of the cluster map. - -Each instance of a monitor process performs a few essential tasks: - -1. Each ``ceph-mon`` instance communicates with other ``ceph-mon`` instances using PAXOS to establish consensus for distributed decision making. -2. Each ``ceph-mon`` instance serves as the first point of contact for clients, and provides clients with the topology and status of the cluster. -3. 
Each ``ceph-mon`` instance provides RADOS instances with a master copy of the cluster map and receives updates for the master copy of the cluster map. - -A metadata server (MDS) process performs a few essential tasks: - -1. Each ``ceph-mds`` instance provides clients with metadata regarding the file system. -2. Each ``ceph-mds`` instance manages the file system namespace. -3. Each ``ceph-mds`` instance coordinates access to the shared OSD cluster. - - -Installing ``ceph-osd`` -======================= - - -Installing ``ceph-mon`` -======================= - - -Installing ``ceph-mds`` -=======================
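The installation sections above are placeholders. Independent of the packaging details, a quick way to see which of the three daemon types a host is expected to run is to look for ``[osd.N]``, ``[mon.X]``, and ``[mds.X]`` sections in its ``ceph.conf``. A rough sketch using a throwaway example file (the path and contents below are illustrative, not taken from the original document):

```shell
# Write a throwaway example ceph.conf; on a real host you would read
# /etc/ceph/ceph.conf instead.
cat > /tmp/ceph.conf.example <<'EOF'
[global]
        auth supported = cephx
[osd.0]
        host = node1
[mon.a]
        host = node1
[mds.a]
        host = node2
EOF

# List the daemon sections this configuration defines.
grep -E '^\[(osd|mon|mds)' /tmp/ceph.conf.example
# prints: [osd.0], [mon.a], [mds.a] (one per line)
```

Each matching section corresponds to one daemon instance that the host named in it should run.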