--- /dev/null
+=============================================
+ cauthtool -- ceph keyring manipulation tool
+=============================================
+
+.. program:: cauthtool
+
+Synopsis
+========
+
+| **cauthtool** *keyringfile* [ -l | --list ] [ -C | --create-keyring
+ ] [ -p | --print ] [ -n | --name *entityname* ] [ --gen-key ] [ -a |
+ --add-key *base64_key* ] [ --caps *capsfile* ] [ -b | --bin ]
+
+
+Description
+===========
+
+**cauthtool** is a utility to create, view, and modify a Ceph keyring
+file. A keyring file stores one or more Ceph authentication keys and
+possibly an associated capability specification. Each key is
+associated with an entity name, of the form
+``{client,mon,mds,osd}.name``.
+
+
+Options
+=======
+
+.. option:: -l, --list
+
+ will list all keys and capabilities present in the keyring
+
+.. option:: -p, --print
+
+ will print an encoded key for the specified entityname. This is
+ suitable for the ``mount -o secret=`` argument
+
+.. option:: -C, --create-keyring
+
+ will create a new keyring, overwriting any existing keyringfile
+
+.. option:: --gen-key
+
+ will generate a new secret key for the specified entityname
+
+.. option:: --add-key
+
+ will add an encoded key to the keyring
+
+.. option:: --cap subsystem capability
+
+   will set the capability for the given subsystem
+
+.. option:: --caps capsfile
+
+   will set all capabilities associated with a given key, for all subsystems
+
+.. option:: -b, --bin
+
+ will create a binary formatted keyring
+
+
+Capabilities
+============
+
+The subsystem is the name of a Ceph subsystem: ``mon``, ``mds``, or
+``osd``.
+
+The capability is a string describing what the given user is allowed
+to do. This takes the form of a comma separated list of allow, deny
+clauses with a permission specifier containing one or more of rwx for
+read, write, and execute permission. The ``allow *`` grants full
+superuser permissions for the given subsystem.
+
+For example::
+
+ # can read, write, and execute objects
+ osd = "allow rwx [pool=foo[,bar]]|[uid=baz[,bay]]"
+
+ # can access mds server
+ mds = "allow"
+
+ # can modify cluster state (i.e., is a server daemon)
+ mon = "allow rwx"
+
+A librados user restricted to a single pool might look like::
+
+ osd = "allow rw pool foo"
+
+A client mounting the file system with minimal permissions would need caps like::
+
+ mds = "allow"
+
+ osd = "allow rw pool=data"
+
+ mon = "allow r"
+
+
+Caps file format
+================
+
+The caps file format consists of zero or more key/value pairs, one per
+line. The key and value are separated by an ``=``, and the value must
+be quoted (with ``'`` or ``"``) if it contains any whitespace. The key
+is the name of the Ceph subsystem (``osd``, ``mds``, ``mon``), and the
+value is the capability string (see above).
+
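+For example, a caps file granting the minimal mount capabilities
+described above might look like this::
+
+    mds = "allow"
+    osd = "allow rw pool=data"
+    mon = "allow r"
+
+It could then be applied with something like
+``cauthtool -n client.foo --caps capsfile keyring``.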
+
+Example
+=======
+
+To create a new keyring containing a key for client.foo::
+
+    cauthtool -C -n client.foo --gen-key keyring
+
+To associate some capabilities with the key (namely, the ability to
+mount a Ceph filesystem)::
+
+ cauthtool -n client.foo --cap mds 'allow' --cap osd 'allow rw pool=data' --cap mon 'allow r' keyring
+
+To display the contents of the keyring::
+
+ cauthtool -l keyring
+
+When mounting a Ceph file system, you can grab the appropriately encoded secret key with::
+
+ mount -t ceph serverhost:/ mountpoint -o name=foo,secret=`cauthtool -p -n client.foo keyring`
+
+
+Availability
+============
+
+**cauthtool** is part of the Ceph distributed file system. Please
+refer to the Ceph wiki at http://ceph.newdream.net/wiki for more
+information.
+
+
+See also
+========
+
+:doc:`ceph <ceph>`\(8)
--- /dev/null
+===========================================
+ cclsinfo -- show class object information
+===========================================
+
+.. program:: cclsinfo
+
+Synopsis
+========
+
+| **cclsinfo** [ *options* ] ... *filename*
+
+
+Description
+===========
+
+**cclsinfo** can show name, version, and architecture information
+about a specific class object.
+
+
+Options
+=======
+
+.. option:: -n, --name
+
+ Shows the class name
+
+.. option:: -v, --version
+
+ Shows the class version
+
+.. option:: -a, --arch
+
+ Shows the class architecture
+
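+For example, to show the name stored in a class object (the file name
+here is only illustrative)::
+
+    cclsinfo --name myclass.so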
+
+Availability
+============
+
+**cclsinfo** is part of the Ceph distributed file system. Please
+refer to the Ceph wiki at http://ceph.newdream.net/wiki for more
+information.
+
+
+See also
+========
+
+:doc:`ceph <ceph>`\(8)
--- /dev/null
+==============================
+ cconf -- ceph conf file tool
+==============================
+
+.. program:: cconf
+
+Synopsis
+========
+
+| **cconf** -c *conffile* --list-all-sections
+| **cconf** -c *conffile* -L
+| **cconf** -c *conffile* -l *prefix*
+| **cconf** *key* -s *section1* ...
+| **cconf** [-s *section* ] --lookup *key*
+| **cconf** [-s *section* ] *key*
+
+
+Description
+===========
+
+**cconf** is a utility for getting information about a ceph
+configuration file. As with most Ceph programs, you can specify which
+Ceph configuration file to use with the ``-c`` flag.
+
+
+Actions
+=======
+
+.. TODO format this like a proper man page
+
+**cconf** will perform one of the following actions:
+
+--list-all-sections or -L prints out a list of all the section names in the configuration
+file.
+
+--list-sections or -l prints out a list of all the sections that begin
+with a given prefix. For example, --list-sections mon would list all
+sections beginning with mon.
+
+--lookup will search the configuration for a given value. By default, the sections that
+are searched are determined by the Ceph name that we are using. The Ceph name defaults to
+client.admin. It can be specified with --name.
+
+For example, if we specify --name osd.0, the following sections will be searched:
+[osd.0], [osd], [global]
+
+You can specify additional sections to search with --section or -s. These additional
+sections will be searched before the sections that would normally be searched. As always,
+the first matching entry we find will be returned.
+
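+For example, to search an additional section before the defaults (the
+section name ``extra`` is illustrative)::
+
+    cconf -c foo.conf --name osd.0 -s extra --lookup "osd data"
+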
+Note: --lookup is the default action. If no other actions are given on the command line,
+we will default to doing a lookup.
+
+
+Examples
+========
+
+To find out what value osd 0 will use for the "osd data" option::
+
+ cconf -c foo.conf --name osd.0 --lookup "osd data"
+
+To find out what value mds.a will use for the "log file" option::
+
+ cconf -c foo.conf --name mds.a "log file"
+
+To list all sections that begin with osd::
+
+ cconf -c foo.conf -l osd
+
+To list all sections::
+
+ cconf -c foo.conf -L
+
+
+Availability
+============
+
+**cconf** is part of the Ceph distributed file system. Please refer
+to the Ceph wiki at http://ceph.newdream.net/wiki for more
+information.
+
+
+See also
+========
+
+:doc:`ceph <ceph>`\(8),
+:doc:`mkcephfs <mkcephfs>`\(8)
--- /dev/null
+=========================================
+ cdebugpack -- ceph debug packer utility
+=========================================
+
+.. program:: cdebugpack
+
+Synopsis
+========
+
+| **cdebugpack** [ *options* ] *filename.tar.gz*
+
+
+Description
+===========
+
+**cdebugpack** will build a tarball containing various items that are
+useful for debugging crashes. The resulting tarball can be shared with
+Ceph developers when debugging a problem.
+
+The tarball will include the binaries for cmds, cosd, and cmon, any
+log files, the ceph.conf configuration file, any core files we can
+find, and (if the system is running) dumps of the current osd, mds,
+and pg maps from the monitor.
+
+
+Options
+=======
+
+.. option:: -c ceph.conf, --conf=ceph.conf
+
+ Use *ceph.conf* configuration file instead of the default
+ ``/etc/ceph/ceph.conf`` to determine monitor addresses during
+ startup.
+
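+For example, to build a debug tarball using a non-default
+configuration file (the paths are illustrative)::
+
+    cdebugpack -c /etc/ceph/ceph.conf /tmp/ceph-debug.tar.gz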
+
+Availability
+============
+
+**cdebugpack** is part of the Ceph distributed file system. Please
+refer to the Ceph wiki at http://ceph.newdream.net/wiki for more
+information.
+
+
+See also
+========
+
+:doc:`ceph <ceph>`\(8)
ceph -- ceph file system control utility
==========================================
-.. todo:: write me
+.. program:: ceph
+
+Synopsis
+========
+
+| **ceph** [ -m *monaddr* ] [ -w | *command* ... ]
+
+
+Description
+===========
+
+**ceph** is a control utility for communicating with the monitor
+cluster of a running Ceph distributed file system.
+
+There are three basic modes of operation.
+
+Interactive mode
+----------------
+
+To start in interactive mode, no arguments are necessary. Control-d or
+'quit' will exit.
+
+Watch mode
+----------
+
+To watch cluster state changes in real time, start **ceph** in watch
+mode with ``-w``; updates will be printed to stdout as they occur. For
+example, to keep an eye on cluster state, run::
+
+    ceph -c ceph.conf -w
+
+Command line mode
+-----------------
+
+Finally, to send a single instruction to the monitor cluster (and wait
+for a response), the command can be specified on the command line.
+
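+For example, a single command-line invocation might look like this
+(assuming the monitor cluster understands the ``health`` command)::
+
+    # ask the monitors for the overall cluster health
+    ceph health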
+
+Options
+=======
+
+.. option:: -i infile
+
+ will specify an input file to be passed along as a payload with the
+ command to the monitor cluster. This is only used for specific
+ monitor commands.
+
+.. option:: -o outfile
+
+ will write any payload returned by the monitor cluster with its
+ reply to outfile. Only specific monitor commands (e.g. osd getmap)
+ return a payload.
+
+.. option:: -c ceph.conf, --conf=ceph.conf
+
+ Use ceph.conf configuration file instead of the default
+ /etc/ceph/ceph.conf to determine monitor addresses during startup.
+
+.. option:: -m monaddress[:port]
+
+ Connect to specified monitor (instead of looking through ceph.conf).
+
+
+Examples
+========
+
+To grab a copy of the current OSD map::
+
+ ceph -m 1.2.3.4:6789 osd getmap -o osdmap
+
+To get a dump of placement group (PG) state::
+
+ ceph pg dump -o pg.txt
+
+
+Monitor commands
+================
+
+A more complete summary of commands understood by the monitor cluster can be found in the
+wiki, at
+
+ http://ceph.newdream.net/wiki/Monitor_commands
+
+
+Availability
+============
+
+**ceph** is part of the Ceph distributed file system. Please refer to the Ceph wiki at
+http://ceph.newdream.net/wiki for more information.
+
+
+See also
+========
+
+:doc:`ceph <ceph>`\(8),
+:doc:`mkcephfs <mkcephfs>`\(8)
--- /dev/null
+============================================
+ cephfs -- ceph file system options utility
+============================================
+
+.. program:: cephfs
+
+Synopsis
+========
+
+| **cephfs** [ *path* *command* *options* ]
+
+
+Description
+===========
+
+**cephfs** is a control utility for accessing and manipulating file
+layout and location data in the Ceph distributed file system.
+
+.. TODO format this like a proper man page
+
+Choose one of the following three commands:
+
+- ``show_layout`` View the layout information on a file or directory
+- ``set_layout`` Set the layout information on a file or directory
+- ``show_location`` View the location information on a file
+
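+For example, to view the layout of an existing file (the path is
+illustrative)::
+
+    cephfs /mnt/ceph/somefile show_layout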
+
+Options
+=======
+
+Your applicable options differ depending on whether you are setting or viewing layout/location.
+
+Viewing options:
+----------------
+
+.. option:: -l, --offset
+
+ Specify an offset for which to retrieve location data
+
+Setting options:
+----------------
+
+.. option:: -u, --stripe_unit
+
+ Set the size of each stripe
+
+.. option:: -c, --stripe_count
+
+ Set the number of stripes per object
+
+.. option:: -s, --object_size
+
+ Set the size of the objects to stripe across
+
+.. option:: -p, --pool
+
+ Set the pool (by numeric value, not name!) to use
+
+.. option:: -o, --osd
+
+ Set the preferred OSD to use as the primary
+
+
+Limitations
+===========
+
+When setting layout data, the specified stripe unit and stripe count
+must multiply to the size of an object. Any parameters you don't set
+explicitly are left at the system defaults.
+
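+As a sketch, a 1 MB stripe unit with 4 stripes per object implies a
+4 MB object size (1048576 * 4 = 4194304); the values and path are
+illustrative::
+
+    cephfs /mnt/ceph/newfile set_layout -u 1048576 -c 4 -s 4194304
+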
+Obviously setting the layout of a file and a directory means different
+things. Setting the layout of a file specifies exactly how to place
+the individual file. This must be done before writing *any* data to
+it. Truncating a file does not allow you to change the layout either.
+
+Setting the layout of a directory sets the "default layout", which is
+used to set the file layouts on any files subsequently created in the
+directory (or any subdirectory). Pre-existing files do not have their
+layouts changed.
+
+You'll notice that the layout information allows you to specify a
+preferred OSD for placement. This is allowed but is not recommended
+since it can dramatically unbalance your storage cluster's space
+utilization.
+
+
+Availability
+============
+
+**cephfs** is part of the Ceph distributed file system. Please refer
+to the Ceph wiki at http://ceph.newdream.net/wiki for more
+information.
+
+
+See also
+========
+
+:doc:`ceph <ceph>`\(8)
--- /dev/null
+=====================================
+ cfuse -- FUSE-based client for ceph
+=====================================
+
+.. program:: cfuse
+
+Synopsis
+========
+
+| **cfuse** [ -m *monaddr*:*port* ] *mountpoint* [ *fuse options* ]
+
+
+Description
+===========
+
+**cfuse** is a FUSE (File system in USErspace) client for the Ceph
+distributed file system. It will mount a Ceph file system (specified
+via the -m option or as described by ceph.conf; see below) at the
+specified mount point.
+
+The file system can be unmounted with::
+
+ fusermount -u mountpoint
+
+or by sending ``SIGINT`` to the ``cfuse`` process.
+
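+For example, to mount the file system at ``/mnt/ceph`` using an
+explicit monitor address (the address is illustrative)::
+
+    cfuse -m 192.168.0.1:6789 /mnt/ceph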
+
+Options
+=======
+
+Any options not recognized by cfuse will be passed on to libfuse.
+
+.. option:: -d
+
+ Detach from console and daemonize after startup.
+
+.. option:: -c ceph.conf, --conf=ceph.conf
+
+ Use *ceph.conf* configuration file instead of the default
+ ``/etc/ceph/ceph.conf`` to determine monitor addresses during startup.
+
+.. option:: -m monaddress[:port]
+
+ Connect to specified monitor (instead of looking through ceph.conf).
+
+.. option:: -r root_directory
+
+ Use root_directory as the mounted root, rather than the full Ceph tree.
+
+
+Availability
+============
+
+**cfuse** is part of the Ceph distributed file system. Please refer to
+the Ceph wiki at http://ceph.newdream.net/wiki for more information.
+
+
+See also
+========
+
+fusermount(8),
+:doc:`ceph <ceph>`\(8)
--- /dev/null
+=====================================
+ cmds -- ceph metadata server daemon
+=====================================
+
+.. program:: cmds
+
+Synopsis
+========
+
+| **cmds** -i *name* [[ --hot-standby [*rank*] ]|[--journal_check *rank*]]
+
+
+Description
+===========
+
+**cmds** is the metadata server daemon for the Ceph distributed file
+system. One or more instances of cmds collectively manage the file
+system namespace, coordinating access to the shared OSD cluster.
+
+Each cmds daemon instance should have a unique name. The name is used
+to identify daemon instances in the ceph.conf.
+
+Once the daemon has started, the monitor cluster will normally assign
+it a logical rank, or put it in a standby pool to take over for
+another daemon that crashes. Some of the specified options can cause
+other behaviors.
+
+If you specify --hot-standby or --journal_check, you must either specify
+the rank on the command line, or specify one of the
+mds_standby_for_[rank|name] parameters in the config. The command
+line specification overrides the config, and specifying the rank
+overrides specifying the name.
+
+
+Options
+=======
+
+.. option:: -f, --foreground
+
+ Foreground: do not daemonize after startup (run in foreground). Do
+ not generate a pid file. Useful when run via :doc:`crun
+ <crun>`\(8).
+
+.. option:: -d
+
+ Debug mode: like ``-f``, but also send all log output to stderr.
+
+.. option:: -c ceph.conf, --conf=ceph.conf
+
+ Use *ceph.conf* configuration file instead of the default
+ ``/etc/ceph/ceph.conf`` to determine monitor addresses during
+ startup.
+
+.. option:: -m monaddress[:port]
+
+ Connect to specified monitor (instead of looking through
+ ``ceph.conf``).
+
+
+Availability
+============
+
+**cmds** is part of the Ceph distributed file system. Please refer to
+the Ceph wiki at http://ceph.newdream.net/wiki for more information.
+
+
+See also
+========
+
+:doc:`ceph <ceph>`\(8),
+:doc:`cmon <cmon>`\(8),
+:doc:`cosd <cosd>`\(8)
--- /dev/null
+=============================
+ cmon -- ceph monitor daemon
+=============================
+
+.. program:: cmon
+
+Synopsis
+========
+
+| **cmon** -i *monid* [ --mon-data *mondatapath* ]
+
+
+Description
+===========
+
+**cmon** is the cluster monitor daemon for the Ceph distributed file
+system. One or more instances of **cmon** form a Paxos part-time
+parliament cluster that provides extremely reliable and durable
+storage of cluster membership, configuration, and state.
+
+The *mondatapath* refers to a directory on a local file system storing
+monitor data. It is normally specified via the ``mon data`` option in
+the configuration file.
+
+Options
+=======
+
+.. option:: -f, --foreground
+
+ Foreground: do not daemonize after startup (run in foreground). Do
+ not generate a pid file. Useful when run via :doc:`crun <crun>`\(8).
+
+.. option:: -d
+
+ Debug mode: like ``-f``, but also send all log output to stderr.
+
+.. option:: -c ceph.conf, --conf=ceph.conf
+
+ Use *ceph.conf* configuration file instead of the default
+ ``/etc/ceph/ceph.conf`` to determine monitor addresses during
+ startup.
+
+
+Availability
+============
+
+**cmon** is part of the Ceph distributed file system. Please refer to
+the Ceph wiki at http://ceph.newdream.net/wiki for more information.
+
+
+See also
+========
+
+:doc:`ceph <ceph>`\(8),
+:doc:`cmds <cmds>`\(8),
+:doc:`cosd <cosd>`\(8)
--- /dev/null
+====================================
+ cosd -- ceph object storage daemon
+====================================
+
+.. program:: cosd
+
+Synopsis
+========
+
+| **cosd** -i *osdnum* [ --osd-data *datapath* ] [ --osd-journal
+ *journal* ] [ --mkfs ] [ --mkjournal ] [ --mkkey ]
+
+
+Description
+===========
+
+**cosd** is the object storage daemon for the Ceph distributed file
+system. It is responsible for storing objects on a local file system
+and providing access to them over the network.
+
+The datapath argument should be a directory on a btrfs file system
+where the object data resides. The journal is optional, and is only
+useful performance-wise when it resides on a different disk than
+datapath with low latency (ideally, an NVRAM device).
+
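+For example, to initialize the object store and key for osd.0 and
+then start the daemon (assuming the data and journal paths come from
+ceph.conf)::
+
+    # one-time initialization
+    cosd -i 0 --mkfs --mkkey
+    # normal startup
+    cosd -i 0 -c /etc/ceph/ceph.conf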
+
+Options
+=======
+
+.. option:: -f, --foreground
+
+ Foreground: do not daemonize after startup (run in foreground). Do
+ not generate a pid file. Useful when run via :doc:`crun <crun>`\(8).
+
+.. option:: -d
+
+ Debug mode: like ``-f``, but also send all log output to stderr.
+
+.. option:: --osd-data osddata
+
+ Use object store at *osddata*.
+
+.. option:: --osd-journal journal
+
+ Journal updates to *journal*.
+
+.. option:: --mkfs
+
+ Create an empty object repository. Normally invoked by
+ :doc:`mkcephfs <mkcephfs>`\(8). This also initializes the journal
+ (if one is defined).
+
+.. option:: --mkkey
+
+ Generate a new secret key. This is normally used in combination
+ with ``--mkfs`` as it is more convenient than generating a key by
+ hand with :doc:`cauthtool <cauthtool>`\(8).
+
+.. option:: --mkjournal
+
+ Create a new journal file to match an existing object repository.
+ This is useful if the journal device or file is wiped out due to a
+ disk or file system failure.
+
+.. option:: --flush-journal
+
+ Flush the journal to permanent store. This runs in the foreground
+ so you know when it's completed. This can be useful if you want to
+ resize the journal or need to otherwise destroy it: this guarantees
+ you won't lose data.
+
+.. option:: -c ceph.conf, --conf=ceph.conf
+
+ Use *ceph.conf* configuration file instead of the default
+ ``/etc/ceph/ceph.conf`` for runtime configuration options.
+
+.. option:: -m monaddress[:port]
+
+ Connect to specified monitor (instead of looking through
+ ``ceph.conf``).
+
+
+Availability
+============
+
+**cosd** is part of the Ceph distributed file system. Please refer to
+the Ceph wiki at http://ceph.newdream.net/wiki for more information.
+
+See also
+========
+
+:doc:`ceph <ceph>`\(8),
+:doc:`cmds <cmds>`\(8),
+:doc:`cmon <cmon>`\(8),
+:doc:`cauthtool <cauthtool>`\(8)
--- /dev/null
+==============================================
+ crbdnamer -- udev helper to name RBD devices
+==============================================
+
+.. program:: crbdnamer
+
+
+Synopsis
+========
+
+| **crbdnamer** *num*
+
+
+Description
+===========
+
+**crbdnamer** prints the pool and image name for the given RBD device
+to stdout. It is used by `udev` (using a rule like the one below) to
+set up a device symlink.
+
+
+::
+
+ KERNEL=="rbd[0-9]*", PROGRAM="/usr/bin/crbdnamer %n", SYMLINK+="rbd/%c{1}/%c{2}:%n"
+
+
+Availability
+============
+
+**crbdnamer** is part of the Ceph distributed file system. Please
+refer to the Ceph wiki at http://ceph.newdream.net/wiki for more
+information.
+
+
+See also
+========
+
+:doc:`rbd <rbd>`\(8),
+:doc:`ceph <ceph>`\(8)
--- /dev/null
+=====================================
+ crun -- restart daemon on core dump
+=====================================
+
+.. program:: crun
+
+Synopsis
+========
+
+| **crun** *command* ...
+
+
+Description
+===========
+
+**crun** is a simple wrapper that will restart a daemon if it exits
+with a signal indicating it crashed and possibly core dumped (that is,
+signals 3, 4, 5, 6, 8, or 11).
+
+The command should run the daemon in the foreground. For Ceph daemons,
+that means the ``-f`` option.
+
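+For example, to keep an OSD running under crun (daemon options as
+described in :doc:`cosd <cosd>`\(8))::
+
+    crun cosd -f -i 0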
+
+Options
+=======
+
+None
+
+
+Availability
+============
+
+**crun** is part of the Ceph distributed file system. Please refer to
+the Ceph wiki at http://ceph.newdream.net/wiki for more information.
+
+
+See also
+========
+
+:doc:`ceph <ceph>`\(8),
+:doc:`cmon <cmon>`\(8),
+:doc:`cmds <cmds>`\(8),
+:doc:`cosd <cosd>`\(8)
--- /dev/null
+==========================================
+ crushtool -- CRUSH map manipulation tool
+==========================================
+
+.. program:: crushtool
+
+Synopsis
+========
+
+| **crushtool** ( -d *map* | -c *map.txt* | --build *numosds*
+ *layer1* *...* ) [ -o *outfile* [ --clobber ]]
+
+
+Description
+===========
+
+**crushtool** is a utility that lets you create, compile, and
+decompile CRUSH map files.
+
+CRUSH is a pseudo-random data distribution algorithm that efficiently
+maps input values (typically data objects) across a heterogeneous,
+hierarchically structured device map. The algorithm was originally
+described in detail in the following paper (although it has evolved
+some since then):
+
+ http://www.ssrc.ucsc.edu/Papers/weil-sc06.pdf
+
+The tool has three modes of operation.
+
+.. option:: -c map.txt
+
+ will compile a plaintext map.txt into a binary map file.
+
+.. option:: -d map
+
+ will take the compiled map and decompile it into a plaintext source
+ file, suitable for editing.
+
+.. option:: --build numosds layer1 ...
+
+ will create a relatively generic map with the given layer
+ structure. See below for examples.
+
+
+Options
+=======
+
+.. option:: -o outfile
+
+ will specify the output file.
+
+.. option:: --clobber
+
+ will allow the tool to overwrite an existing outfile (it will normally refuse).
+
+
+Building a map
+==============
+
+The build mode will generate relatively generic hierarchical maps. The
+first argument simply specifies the number of devices (leaves) in the
+CRUSH hierarchy. Each layer describes how the layer (or raw devices)
+preceding it should be grouped.
+
+Each layer consists of::
+
+ name ( uniform | list | tree | straw ) size
+
+The first element is the name for the elements in the layer
+(e.g. "rack"). Each element's name is generated by appending a number
+to the provided name.
+
+The second component is the type of CRUSH bucket.
+
+The third component is the maximum size of the bucket. If the size is
+0, a single bucket will be generated that includes everything in the
+preceding layer.
+
+
+Example
+=======
+
+Suppose we have 128 devices, each grouped into shelves with 4 devices
+each, and 8 shelves per rack. We could create a three level hierarchy
+with::
+
+ crushtool --build 128 shelf uniform 4 rack straw 8 root straw 0 -o map
+
+To adjust the default (generic) mapping rules, we can run::
+
+ # decompile
+ crushtool -d map -o map.txt
+
+ # edit
+ vi map.txt
+
+ # recompile
+ crushtool -c map.txt -o map
+
+
+Availability
+============
+
+**crushtool** is part of the Ceph distributed file system. Please
+refer to the Ceph wiki at http://ceph.newdream.net/wiki for more
+information.
+
+
+See also
+========
+
+:doc:`ceph <ceph>`\(8),
+:doc:`osdmaptool <osdmaptool>`\(8),
+:doc:`mkcephfs <mkcephfs>`\(8)
--- /dev/null
+===========================================
+ csyn -- ceph synthetic workload generator
+===========================================
+
+.. program:: csyn
+
+Synopsis
+========
+
+| **csyn** [ -m *monaddr*:*port* ] --syn *command* *...*
+
+
+Description
+===========
+
+**csyn** is a simple synthetic workload generator for the Ceph
+distributed file system. It uses the userspace client library to
+generate simple workloads against a currently running file system. The
+file system need not be mounted via cfuse(8) or the kernel client.
+
+One or more ``--syn`` command arguments specify the particular
+workload, as documented below.
+
+
+Options
+=======
+
+.. option:: -d
+
+ Detach from console and daemonize after startup.
+
+.. option:: -c ceph.conf, --conf=ceph.conf
+
+ Use *ceph.conf* configuration file instead of the default
+ ``/etc/ceph/ceph.conf`` to determine monitor addresses during
+ startup.
+
+.. option:: -m monaddress[:port]
+
+ Connect to specified monitor (instead of looking through
+ ``ceph.conf``).
+
+.. option:: --num_client num
+
+ Run num different clients, each in a separate thread.
+
+.. option:: --syn workloadspec
+
+ Run the given workload. May be specified as many times as
+ needed. Workloads will normally run sequentially.
+
+
+Workloads
+=========
+
+Each workload should be preceded by ``--syn`` on the command
+line. This is not a complete list.
+
+:command:`mknap` *path* *snapname*
+ Create a snapshot called *snapname* on *path*.
+
+:command:`rmsnap` *path* *snapname*
+ Delete snapshot called *snapname* on *path*.
+
+:command:`rmfile` *path*
+ Delete/unlink *path*.
+
+:command:`writefile` *sizeinmb* *blocksize*
+ Create a file, named after our client id, that is *sizeinmb* MB by
+ writing *blocksize* chunks.
+
+:command:`readfile` *sizeinmb* *blocksize*
+  Read a file, named after our client id, that is *sizeinmb* MB by
+  reading *blocksize* chunks.
+
+:command:`rw` *sizeinmb* *blocksize*
+ Write file, then read it back, as above.
+
+:command:`makedirs` *numsubdirs* *numfiles* *depth*
+ Create a hierarchy of directories that is *depth* levels deep. Give
+ each directory *numsubdirs* subdirectories and *numfiles* files.
+
+:command:`walk`
+ Recursively walk the file system (like find).
+
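+For example, to create a small directory tree and then walk it (the
+argument values are illustrative)::
+
+    csyn --syn makedirs 3 5 2 --syn walk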
+
+Availability
+============
+
+**csyn** is part of the Ceph distributed file system. Please refer to
+the Ceph wiki at http://ceph.newdream.net/wiki for more information.
+
+See also
+========
+
+:doc:`ceph <ceph>`\(8),
+:doc:`cfuse <cfuse>`\(8)
--- /dev/null
+=======================================================
+ librados-config -- display information about librados
+=======================================================
+
+.. program:: librados-config
+
+Synopsis
+========
+
+| **librados-config** [ --version ] [ --vernum ]
+
+
+Description
+===========
+
+**librados-config** is a utility that displays information about the
+installed ``librados``.
+
+
+Options
+=======
+
+.. option:: --version
+
+ Display ``librados`` version
+
+.. option:: --vernum
+
+ Display the ``librados`` version code
+
+
+Availability
+============
+
+**librados-config** is part of the Ceph distributed file system.
+Please refer to the Ceph wiki at http://ceph.newdream.net/wiki for
+more information.
+
+
+See also
+========
+
+:doc:`ceph <ceph>`\(8),
+:doc:`rados <rados>`\(8)
--- /dev/null
+=======================================
+ mkcephfs -- create a ceph file system
+=======================================
+
+.. program:: mkcephfs
+
+Synopsis
+========
+
+| **mkcephfs** [ -c *ceph.conf* ] [ --mkbtrfs ] [ -a, --all-hosts [ -k
+ */path/to/admin.keyring* ] ]
+
+
+Description
+===========
+
+**mkcephfs** is used to create an empty Ceph file system, possibly
+spanning multiple hosts. The ceph.conf file describes the composition
+of the entire Ceph cluster, including which hosts are participating,
+which daemons run where, and which paths are used to store file system
+data or metadata.
+
+The mkcephfs tool can be used in two ways. If -a is used, it will use
+ssh and scp to connect to remote hosts on your behalf and do the setup
+of the entire cluster. This is the easiest solution, but can also be
+inconvenient (if ssh cannot connect to the remote hosts without
+prompting for passwords) or slow (if you have a large cluster).
+
+Alternatively, you can run each setup phase manually. First, you need
+to prepare a monmap that will be shared by each node::
+
+ # prepare
+ master# mkdir /tmp/foo
+ master# mkcephfs -c /etc/ceph/ceph.conf \
+ --prepare-monmap -d /tmp/foo
+
+Share the ``/tmp/foo`` directory with other nodes in whatever way is
+convenient for you. On each OSD and MDS node::
+
+ osdnode# mkcephfs --init-local-daemons osd -d /tmp/foo
+ mdsnode# mkcephfs --init-local-daemons mds -d /tmp/foo
+
+Collect the contents of the /tmp/foo directories back onto a single
+node, and then::
+
+ master# mkcephfs --prepare-mon -d /tmp/foo
+
+Finally, distribute ``/tmp/foo`` to all monitor nodes and, on each of
+those nodes::
+
+ monnode# mkcephfs --init-local-daemons mon -d /tmp/foo
+
+
+Options
+=======
+
+.. option:: -a, --allhosts
+
+ Performs the necessary initialization steps on all hosts in the
+ cluster, executing commands via SSH.
+
+.. option:: -c ceph.conf, --conf=ceph.conf
+
+ Use the given conf file instead of the default ``/etc/ceph/ceph.conf``.
+
+.. option:: -k /path/to/keyring
+
+ When ``-a`` is used, we can specify a location to copy the
+ client.admin keyring, which is used to administer the cluster. The
+ default is ``/etc/ceph/keyring`` (or whatever is specified in the
+ config file).
+
+.. option:: --mkbtrfs
+
+  Create and mount any btrfs file systems specified in the ceph.conf
+  for OSD data storage using mkfs.btrfs. The "btrfs devs" and (if it
+  differs from "osd data") "btrfs path" options must be defined.
+
+
+Subcommands
+===========
+
+The sub-commands performed during cluster setup can be run individually with
+
+.. option:: --prepare-monmap -d dir -c ceph.conf
+
+ Create an initial monmap with a random fsid/uuid and store it and
+ the ceph.conf in dir.
+
+.. option:: --init-local-daemons type -d dir
+
+ Initialize any daemons of type type on the local host using the
+ monmap in dir. For types osd and mds, the resulting authentication
+ keys will be placed in dir. For type mon, the initial data files
+ generated by --prepare-mon (below) are expected in dir.
+
+.. option:: --prepare-mon -d dir
+
+ Prepare the initial monitor data based on the monmap, OSD, and MDS
+ authentication keys collected in dir, and put the result in dir.
+
+
+Availability
+============
+
+**mkcephfs** is part of the Ceph distributed file system. Please refer
+to the Ceph wiki at http://ceph.newdream.net/wiki for more
+information.
+
+
+See also
+========
+
+:doc:`ceph <ceph>`\(8),
+:doc:`monmaptool <monmaptool>`\(8),
+:doc:`osdmaptool <osdmaptool>`\(8),
+:doc:`crushtool <crushtool>`\(8)
--- /dev/null
+==========================================================
+ monmaptool -- ceph monitor cluster map manipulation tool
+==========================================================
+
+.. program:: monmaptool
+
+Synopsis
+========
+
+| **monmaptool** *mapfilename* [ --clobber ] [ --print ] [ --create ]
+ [ --add *ip*:*port* *...* ] [ --rm *ip*:*port* *...* ]
+
+
+Description
+===========
+
+**monmaptool** is a utility to create, view, and modify a monitor
+cluster map for the Ceph distributed file system. The monitor map
+specifies the only fixed addresses in the Ceph distributed system.
+All other daemons bind to arbitrary addresses and register themselves
+with the monitors.
+
+When creating a map with --create, a new monitor map with a new,
+random UUID will be generated. The --create option should be followed
+by one or more --add arguments giving the initial monitor names and
+addresses.
+
+The default Ceph monitor port is 6789.
+
+
+Options
+=======
+
+.. option:: --print
+
+ will print a plaintext dump of the map, after any modifications are
+ made.
+
+.. option:: --clobber
+
+ will allow monmaptool to overwrite mapfilename if changes are made.
+
+.. option:: --create
+
+ will create a new monitor map with a new UUID (and with it, a new,
+ empty Ceph file system).
+
+.. option:: --add name ip:port
+
+ will add a monitor with the specified ip:port to the map.
+
+.. option:: --rm name
+
+  will remove the monitor with the specified name from the map.
+
+
+Example
+=======
+
+To create a new map with three monitors (for a fresh Ceph file system)::
+
+ monmaptool --create --add mon.a 192.168.0.10:6789 --add mon.b 192.168.0.11:6789 \
+ --add mon.c 192.168.0.12:6789 --clobber monmap
+
+To display the contents of the map::
+
+    monmaptool --print monmap
+
+To replace one monitor::
+
+ monmaptool --rm mon.a --add mon.a 192.168.0.9:6789 --clobber monmap
+
+
+Availability
+============
+
+**monmaptool** is part of the Ceph distributed file system. Please
+refer to the Ceph wiki at http://ceph.newdream.net/wiki for more
+information.
+
+
+See also
+========
+
+:doc:`ceph <ceph>`\(8),
+:doc:`crushtool <crushtool>`\(8),
+:doc:`mkcephfs <mkcephfs>`\(8)
--- /dev/null
+========================================
+ mount.ceph -- mount a ceph file system
+========================================
+
+.. program:: mount.ceph
+
+Synopsis
+========
+
+| **mount.ceph** *monaddr1*\ [,\ *monaddr2*\ ,...]:/[*subdir*] *dir* [
+ -o *options* ]
+
+
+Description
+===========
+
+**mount.ceph** is a simple helper for mounting the Ceph file system on
+a Linux host. The only real purpose it serves is to resolve monitor
+hostname(s) into IP addresses; the Linux kernel client component does
+most of the real work. In fact, it is possible to mount a Ceph file
+system without mount.ceph by specifying monitor address(es) by IP::
+
+ mount -t ceph 1.2.3.4:/ mountpoint
+
+Each monitor address monaddr takes the form host[:port]. If the port
+is not specified, the Ceph default of 6789 is assumed.
+
+Multiple monitor addresses can be separated by commas. Only one
+responsible monitor is needed to successfully mount; the client will
+learn about all monitors from any responsive monitor. However, it is a
+good idea to specify more than one in case one happens to be down at
+the time of mount.
+
+A subdirectory subdir may be specified if a subset of the file system
+is to be mounted.
+
+
+Options
+=======
+
+:command:`wsize`
+ int, max write size. Default: none (writeback uses smaller of wsize
+ and stripe unit)
+
+:command:`rsize`
+ int (bytes), max readahead, multiple of 1024, Default: 524288
+ (512*1024)
+
+:command:`osdtimeout`
+ int (seconds), Default: 60
+
+:command:`osdkeepalivetimeout`
+ int, Default: 5
+
+:command:`mount_timeout`
+ int (seconds), Default: 60
+
+:command:`osd_idle_ttl`
+ int (seconds), Default: 60
+
+:command:`caps_wanted_delay_min`
+ int, cap release delay, Default: 5
+
+:command:`caps_wanted_delay_max`
+ int, cap release delay, Default: 60
+
+:command:`cap_release_safety`
+ int, Default: calculated
+
+:command:`readdir_max_entries`
+ int, Default: 1024
+
+:command:`readdir_max_bytes`
+ int, Default: 524288 (512*1024)
+
+:command:`write_congestion_kb`
+ int (kb), max writeback in flight. scale with available
+ memory. Default: calculated from available memory
+
+:command:`snapdirname`
+ string, set the name of the hidden snapdir. Default: .snap
+
+:command:`name`
+  string, used with cephx, Default: guest
+
+:command:`secret`
+  string, used with cephx
+
+:command:`ip`
+ my ip
+
+:command:`noshare`
+ create a new client instance, instead of sharing an existing
+ instance of a client mounting the same cluster
+
+:command:`dirstat`
+ funky `cat dirname` for stats, Default: off
+
+:command:`nodirstat`
+ no funky `cat dirname` for stats
+
+:command:`rbytes`
+ Report the recursive size of the directory contents for st_size on
+ directories. Default: on
+
+:command:`norbytes`
+ Do not report the recursive size of the directory contents for
+ st_size on directories.
+
+:command:`nocrc`
+ no data crc on writes
+
+:command:`noasyncreaddir`
+ no dcache readdir
+
+
+Examples
+========
+
+Mount the full file system::
+
+ mount.ceph monhost:/ /mnt/foo
+
+If there are multiple monitors::
+
+ mount.ceph monhost1,monhost2,monhost3:/ /mnt/foo
+
+If cmon(8) is running on a non-standard port::
+
+ mount.ceph monhost1:7000,monhost2:7000,monhost3:7000:/ /mnt/foo
+
+To mount only part of the namespace::
+
+ mount.ceph monhost1:/some/small/thing /mnt/thing
+
+Assuming mount.ceph(8) is installed properly, it should be
+automatically invoked by mount(8) like so::
+
+ mount -t ceph monhost:/ /mnt/foo
+
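+To pass mount options, for example a cephx name and secret (mirroring
+the :doc:`cauthtool <cauthtool>`\(8) example)::
+
+    mount -t ceph monhost:/ /mnt/foo -o name=foo,secret=`cauthtool -p -n client.foo keyring`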
+
+Availability
+============
+
+**mount.ceph** is part of the Ceph distributed file system. Please
+refer to the Ceph wiki at http://ceph.newdream.net/wiki for more
+information.
+
+See also
+========
+
+:doc:`cfuse <cfuse>`\(8),
+:doc:`ceph <ceph>`\(8)
--- /dev/null
+========================================
+ obsync -- The object synchronizer tool
+========================================
+
+.. program:: obsync
+
+Synopsis
+========
+
+| **obsync** [ *options* ] *source-url* *destination-url*
+
+
+Description
+===========
+
+**obsync** is an object synchronizer tool designed to transfer objects
+between different object storage systems. Similar to rsync, you
+specify a source and a destination, and it will transfer objects
+between them until the destination has all the objects in the
+source. Obsync will never modify the source -- only the destination.
+
+By default, obsync does not delete anything. However, by specifying
+``--delete-after`` or ``--delete-before``, you can ask it to delete
+objects from the destination that are not in the source.
+
+
+Target types
+============
+
+Obsync supports S3 via ``libboto``. To use the s3 target, your URL
+should look like this: ``s3://host-name/bucket-name``
+
+Obsync supports storing files locally via the ``file://`` target. To
+use the file target, your URL should look like this:
+``file://directory-name``
+
+Alternately, give no prefix, like this: ``./directory-name``
+
+Obsync supports storing files in a RADOS Gateway backend via the
+``librados`` Python bindings. To use the ``rgw`` target, your URL
+should look like this: ``rgw:ceph-configuration-path:rgw-bucket-name``
+
+
+Options
+=======
+
+.. option:: -h, --help
+
+ Display a help message
+
+.. option:: -n, --dry-run
+
+ Show what would be done, but do not modify the destination.
+
+.. option:: -c, --create-dest
+
+ Create the destination if it does not exist.
+
+.. option:: --delete-before
+
+ Before copying any files, delete objects in the destination that
+ are not in the source.
+
+.. option:: -L, --follow-symlinks
+
+ Follow symlinks when dealing with ``file://`` targets.
+
+.. option:: --no-preserve-acls
+
+ Don't preserve ACLs when copying objects.
+
+.. option:: -v, --verbose
+
+ Be verbose.
+
+.. option:: -V, --more-verbose
+
+ Be really, really verbose (developer mode)
+
+.. option:: -x SRC=DST, --xuser SRC=DST
+
+ Set up a user translation. You can specify multiple user
+ translations with multiple ``--xuser`` arguments.
+
+.. option:: --force
+
+ Overwrite all destination objects, even if they appear to be the
+ same as the source objects.
+
+
+Environment variables
+=====================
+
+.. envvar:: SRC_AKEY
+
+ Access key for the source URL
+
+.. envvar:: SRC_SKEY
+
+ Secret access key for the source URL
+
+.. envvar:: DST_AKEY
+
+ Access key for the destination URL
+
+.. envvar:: DST_SKEY
+
+ Secret access key for the destination URL
+
+.. envvar:: AKEY
+
+ Access key for both source and dest
+
+.. envvar:: SKEY
+
+ Secret access key for both source and dest
+
+.. envvar:: DST_CONSISTENCY
+
+ Set to 'eventual' if the destination is eventually consistent. If the destination
+ is eventually consistent, we may have to retry certain operations multiple times.
+
+
+Examples
+========
+
+Copy objects from backup-directory to mybucket1 on myhost1::
+
+    AKEY=... SKEY=... obsync -c -d -v ./backup-directory s3://myhost1/mybucket1
+
+Copy objects from mybucket1 to mybucket2::
+
+    SRC_AKEY=... SRC_SKEY=... DST_AKEY=... DST_SKEY=... obsync -c -d -v s3://myhost1/mybucket1 s3://myhost1/mybucket2
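+
+To preview what a copy would do without modifying the destination
+(using the ``--dry-run`` option described above)::
+
+    # -n only reports the planned operations
+    AKEY=... SKEY=... obsync -n -v ./backup-directory s3://myhost1/mybucket1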
+
+
+Availability
+============
+
+**obsync** is part of the Ceph distributed file system. Please refer
+to the Ceph wiki at http://ceph.newdream.net/wiki for more
+information.
--- /dev/null
+======================================================
+ osdmaptool -- ceph osd cluster map manipulation tool
+======================================================
+
+.. program:: osdmaptool
+
+Synopsis
+========
+
+| **osdmaptool** *mapfilename* [--print] [--createsimple *numosd*
+ [--pgbits *bitsperosd* ] ] [--clobber]
+
+
+Description
+===========
+
+**osdmaptool** is a utility that lets you create, view, and manipulate
+OSD cluster maps from the Ceph distributed file system. Notably, it
+lets you extract the embedded CRUSH map or import a new CRUSH map.
+
+
+Options
+=======
+
+.. option:: --print
+
+ will simply make the tool print a plaintext dump of the map, after
+ any modifications are made.
+
+.. option:: --clobber
+
+ will allow osdmaptool to overwrite mapfilename if changes are made.
+
+.. option:: --import-crush mapfile
+
+ will load the CRUSH map from mapfile and embed it in the OSD map.
+
+.. option:: --export-crush mapfile
+
+ will extract the CRUSH map from the OSD map and write it to
+ mapfile.
+
+.. option:: --createsimple numosd [--pgbits bitsperosd]
+
+ will create a relatively generic OSD map with the numosd devices.
+ If --pgbits is specified, the initial placement group counts will
+ be set with bitsperosd bits per OSD. That is, the pg_num map
+ attribute will be set to numosd shifted by bitsperosd.
+
+
+Example
+=======
+
+To create a simple map with 16 devices::
+
+ osdmaptool --createsimple 16 osdmap --clobber
+
+To view the result::
+
+ osdmaptool --print osdmap
+
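+To extract the embedded CRUSH map for editing, and load a modified
+one back in (assuming the same argument order as the examples above;
+the CRUSH file name is illustrative)::
+
+    osdmaptool --export-crush crush osdmap
+    # decompile, edit, and recompile with crushtool(8) before importing
+    osdmaptool --import-crush crush osdmap --clobber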
+
+Availability
+============
+
+**osdmaptool** is part of the Ceph distributed file system. Please
+refer to the Ceph wiki at http://ceph.newdream.net/wiki for more
+information.
+
+
+See also
+========
+
+:doc:`ceph <ceph>`\(8),
+:doc:`crushtool <crushtool>`\(8),
+:doc:`mkcephfs <mkcephfs>`\(8)
rados -- rados object storage utility
=======================================
-.. todo:: write me
+.. program:: rados
+
+Synopsis
+========
+
+| **rados** [ -m *monaddr* ] [ mkpool | rmpool *foo* ] [ -p | --pool
+ *pool* ] [ -s | --snap *snap* ] [ -i *infile* ] [ -o *outfile* ]
+ *command* ...
+
+
+Description
+===========
+
+**rados** is a utility for interacting with a Ceph object storage
+cluster (RADOS), part of the Ceph distributed file system.
+
+
+Options
+=======
+
+.. option:: -p pool, --pool pool
+
+ Interact with the given pool. Required by most commands.
+
+.. option:: -s snap, --snap snap
+
+ Read from the given pool snapshot. Valid for all pool-specific read operations.
+
+.. option:: -i infile
+
+ will specify an input file to be passed along as a payload with the
+ command to the monitor cluster. This is only used for specific
+ monitor commands.
+
+.. option:: -o outfile
+
+ will write any payload returned by the monitor cluster with its
+ reply to outfile. Only specific monitor commands (e.g. osd getmap)
+ return a payload.
+
+.. option:: -c ceph.conf, --conf=ceph.conf
+
+ Use ceph.conf configuration file instead of the default
+ /etc/ceph/ceph.conf to determine monitor addresses during startup.
+
+.. option:: -m monaddress[:port]
+
+ Connect to specified monitor (instead of looking through ceph.conf).
+
+
+Global commands
+===============
+
+:command:`lspools`
+ List object pools
+
+:command:`df`
+ Show utilization statistics, including disk usage (bytes) and object
+ counts, over the entire system and broken down by pool.
+
+:command:`mkpool` *foo*
+ Create a pool with name foo.
+
+:command:`rmpool` *foo*
+ Delete the pool foo (and all its data)
+
+
+Pool specific commands
+======================
+
+:command:`get` *name* *outfile*
+ Read object name from the cluster and write it to outfile.
+
+:command:`put` *name* *infile*
+ Write object name to the cluster with contents from infile.
+
+:command:`rm` *name*
+ Remove object name.
+
+:command:`ls` *outfile*
+ List objects in given pool and write to outfile.
+
+:command:`lssnap`
+ List snapshots for given pool.
+
+:command:`mksnap` *foo*
+ Create pool snapshot named *foo*.
+
+:command:`rmsnap` *foo*
+  Remove pool snapshot named *foo*.
+
+:command:`bench` *seconds* *mode* [ -b *objsize* ] [ -t *threads* ]
+ Benchmark for seconds. The mode can be write or read. The default
+ object size is 4 KB, and the default number of simulated threads
+ (parallel writes) is 16.
+
+
+Examples
+========
+
+To view cluster utilization::
+
+ rados df
+
+To get a list of objects in pool foo sent to stdout::
+
+ rados -p foo ls -
+
+To write an object::
+
+ rados -p foo put myobject blah.txt
+
+To create a snapshot::
+
+ rados -p foo mksnap mysnap
+
+To delete the object::
+
+ rados -p foo rm myobject
+
+To read a previously snapshotted version of an object::
+
+ rados -p foo -s mysnap get myobject blah.txt.old
+
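+To run a short write benchmark against pool foo with the bench
+command described above (duration, object size, and thread count are
+illustrative)::
+
+    rados -p foo bench 60 write -b 4096 -t 8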
+
+Availability
+============
+
+**rados** is part of the Ceph distributed file system. Please refer to
+the Ceph wiki at http://ceph.newdream.net/wiki for more information.
+
+
+See also
+========
+
+:doc:`ceph <ceph>`\(8)
--- /dev/null
+===============================
+ radosgw -- rados REST gateway
+===============================
+
+.. program:: radosgw
+
+Synopsis
+========
+
+| **radosgw**
+
+
+Description
+===========
+
+**radosgw** is an HTTP REST gateway for the RADOS object store, a part
+of the Ceph distributed storage system. It is implemented as a FastCGI
+module using libfcgi, and can be used in conjunction with any FastCGI
+capable web server.
+
+
+Options
+=======
+
+.. option:: -c ceph.conf, --conf=ceph.conf
+
+ Use *ceph.conf* configuration file instead of the default
+ ``/etc/ceph/ceph.conf`` to determine monitor addresses during startup.
+
+.. option:: -m monaddress[:port]
+
+ Connect to specified monitor (instead of looking through
+ ``ceph.conf``).
+
+.. option:: --rgw-socket-path=path
+
+ Specify a unix domain socket path.
+
+
+Examples
+========
+
+An apache example configuration for using the RADOS gateway::
+
+ <VirtualHost *:80>
+ ServerName rgw.example1.com
+ ServerAlias rgw
+ ServerAdmin webmaster@example1.com
+ DocumentRoot /var/www/web1/web/
+
+ #turn engine on
+ RewriteEngine On
+
+ #following is important for RGW/rados
+        RewriteRule ^/([a-zA-Z0-9-_.]*)([/]?.*) /s3gw.fcgi?page=$1&params=$2&%{QUERY_STRING} [E=HTTP_AUTHORIZATION:%{HTTP:Authorization},L]
+
+ <IfModule mod_fcgid.c>
+ SuexecUserGroup web1 web1
+ <Directory /var/www/web1/web/>
+ Options +ExecCGI
+ AllowOverride All
+ SetHandler fcgid-script
+ FCGIWrapper /var/www/fcgi-scripts/web1/radosgw .fcgi
+ Order allow,deny
+ Allow from all
+ AuthBasicAuthoritative Off
+ </Directory>
+ </IfModule>
+
+ AllowEncodedSlashes On
+
+ # ErrorLog /var/log/apache2/error.log
+ # CustomLog /var/log/apache2/access.log combined
+ ServerSignature Off
+
+ </VirtualHost>
+
+And the corresponding radosgw script::
+
+ #!/bin/sh
+ exec /usr/bin/radosgw -c /etc/ceph.conf
+
+By default radosgw runs single-threaded, and its execution is
+controlled by the FastCGI process manager. An alternative way to run
+it would be by specifying (along the lines of) the following in the
+apache config::
+
+ FastCgiExternalServer /var/www/web1/web/s3gw.fcgi -socket /tmp/.radosgw.sock
+
+and specify a unix domain socket path (either by passing a command
+line option, or through ceph.conf).
+
+
+Availability
+============
+
+**radosgw** is part of the Ceph distributed file system. Please refer
+to the Ceph wiki at http://ceph.newdream.net/wiki for more
+information.
+
+
+See also
+========
+
+:doc:`ceph <ceph>`\(8)
--- /dev/null
+=================================================================
+ radosgw_admin -- rados REST gateway user administration utility
+=================================================================
+
+.. program:: radosgw_admin
+
+Synopsis
+========
+
+| **radosgw_admin** *command* [ *options* *...* ]
+
+
+Description
+===========
+
+**radosgw_admin** is a RADOS gateway user administration utility. It
+allows creating and modifying users.
+
+
+Commands
+========
+
+*command* can be one of the following options:
+
+:command:`user create`
+ Create a new user
+
+:command:`user modify`
+ Modify a user
+
+:command:`user info`
+ Display information of a user
+
+:command:`user rm`
+ Remove a user
+
+:command:`bucket list`
+ List all buckets
+
+:command:`bucket unlink`
+ Remove a bucket
+
+:command:`policy`
+ Display bucket/object policy
+
+:command:`log show`
+ Show the log of a bucket (with a specified date)
+
+
+Options
+=======
+
+.. option:: -c ceph.conf, --conf=ceph.conf
+
+ Use *ceph.conf* configuration file instead of the default
+ ``/etc/ceph/ceph.conf`` to determine monitor addresses during
+ startup.
+
+.. option:: -m monaddress[:port]
+
+ Connect to specified monitor (instead of looking through ceph.conf).
+
+.. option:: --uid=uid
+
+ The S3 user/access key.
+
+.. option:: --secret=secret
+
+ The S3 secret.
+
+.. option:: --display-name=name
+
+ Configure the display name of the user.
+
+.. option:: --email=email
+
+ The e-mail address of the user
+
+.. option:: --bucket=bucket
+
+ Specify the bucket name.
+
+.. option:: --object=object
+
+ Specify the object name.
+
+.. option:: --date=yyyy-mm-dd
+
+  The date needed for some commands
+
+.. option:: --os-user=group:name
+
+ The OpenStack user (only needed for use with OpenStack)
+
+.. option:: --os-secret=key
+
+ The OpenStack key
+
+.. option:: --auth-uid=auid
+
+ The librados auid
+
+
+Examples
+========
+
+Generate a new user::
+
+ $ radosgw_admin user gen --display-name="johnny rotten" --email=johnny@rotten.com
+ User ID: CHBQFRTG26I8DGJDGQLW
+ Secret Key: QR6cI/31N+J0VKVgHSpEGVSfEEsmf6PyXG040KCB
+ Display Name: johnny rotten
+
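+Show information for that user::
+
+    $ radosgw_admin user info --uid=CHBQFRTG26I8DGJDGQLW
+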
+Remove a user::
+
+ $ radosgw_admin user rm --uid=CHBQFRTG26I8DGJDGQLW
+
+Remove a bucket::
+
+ $ radosgw_admin bucket unlink --bucket=foo
+
+Show the logs of a bucket from April 1st 2011::
+
+    $ radosgw_admin log show --bucket=foo --date=2011-04-01
+
+Availability
+============
+
+**radosgw_admin** is part of the Ceph distributed file system. Please
+refer to the Ceph wiki at http://ceph.newdream.net/wiki for more
+information.
+
+See also
+========
+
+:doc:`ceph <ceph>`\(8)
Synopsis
========
-| **rbd** [ -c *ceph.conf* ] [ -m *monaddr* ] [ -p | --pool *pool* ] [ --size *size* ] [ --order *bits* ]
- [ *command* ... ]
+| **rbd** [ -c *ceph.conf* ] [ -m *monaddr* ] [ -p | --pool *pool* ] [
+ --size *size* ] [ --order *bits* ] [ *command* ... ]
+
Description
===========
See also
========
-:doc:`ceph <ceph>`\(8), :doc:`rados <rados>`\(8)
+:doc:`ceph <ceph>`\(8),
+:doc:`rados <rados>`\(8)