.. _Monitoring OSDs and PGs: ../rados/operations/monitoring-osd-pg
.. _Heartbeats: ../rados/configuration/mon-osd-interaction
.. _Monitoring OSDs: ../rados/operations/monitoring-osd-pg/#monitoring-osds
-.. _CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data: http://ceph.com/papers/weil-crush-sc06.pdf
+.. _CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data: https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf
.. _Data Scrubbing: ../rados/configuration/osd-config-ref#scrubbing
.. _Report Peering Failure: ../rados/configuration/mon-osd-interaction#osds-report-peering-failure
.. _Troubleshooting Peering Failure: ../rados/troubleshooting/troubleshooting-pg#placement-group-down-peering-failure
Zipkin_.
.. _Dapper: http://static.googleusercontent.com/media/research.google.com/el//pubs/archive/36356.pdf
-.. _Zipkin: http://twitter.github.io/zipkin/
+.. _Zipkin: http://zipkin.io/
Installing Blkin
Four times a year, the development roadmap is discussed online during
-the `Ceph Developer Summit <http://wiki.ceph.com/Planning/CDS/>`_. A
+the `Ceph Developer Summit <http://tracker.ceph.com/projects/ceph/wiki/Planning#Ceph-Developer-Summit>`_. A
new stable release (hammer, infernalis, jewel ...) is published at the same
frequency. Every other release (firefly, hammer, jewel...) is a `Long Term
Stable (LTS) <../../releases>`_. See `Understanding the release cycle
The :doc:`/dev/sepia` runs `teuthology
<https://github.com/ceph/teuthology/>`_ integration tests `on a regular basis <http://tracker.ceph.com/projects/ceph-releases/wiki/HOWTO_monitor_the_automated_tests_AKA_nightlies#Automated-tests-AKA-nightlies>`_ and the
results are posted on `pulpito <http://pulpito.ceph.com/>`_ and the
-`ceph-qa mailing list <http://ceph.com/resources/mailing-list-irc/>`_.
+`ceph-qa mailing list <https://ceph.com/irc/>`_.
* The job failures are `analyzed by quality engineers and developers
<http://tracker.ceph.com/projects/ceph-releases/wiki/HOWTO_monitor_the_automated_tests_AKA_nightlies#List-of-suites-and-watchers>`_
`librados.rst`_, which is rendered at :doc:`/rados/api/librados`.
.. _`librados C API`: https://github.com/ceph/ceph/blob/master/src/include/rados/librados.h
-.. _`librados.rst`: https://raw.github.com/ceph/ceph/master/doc/api/librados.rst
+.. _`librados.rst`: https://github.com/ceph/ceph/raw/master/doc/rados/api/librados.rst
Drawing diagrams
================
There are also `other Ceph-related mailing lists`_.
-.. _`other Ceph-related mailing lists`: https://ceph.com/resources/mailing-list-irc/
+.. _`other Ceph-related mailing lists`: https://ceph.com/irc/
IRC
---
.. _`Internet Relay Chat`: http://www.irchelp.org/
-See https://ceph.com/resources/mailing-list-irc/ for how to set up your IRC
+See https://ceph.com/irc/ for how to set up your IRC
client and a list of channels.
Submitting patches
http://pulpito.ovh.sepia.ceph.com:8081/. The developer nick shows in the
test results URL and in the first column of the Pulpito dashboard. The
results are also reported on the `ceph-qa mailing list
-<http://ceph.com/resources/mailing-list-irc/>`_ for analysis.
+<https://ceph.com/irc/>`_ for analysis.
Suites inventory
----------------
Since testing in the cloud is done using the `ceph-workbench
ceph-qa-suite`_ tool, you will need to install that first. It is designed
to be installed via Docker, so if you don't have Docker running on your
-development machine, take care of that first. The Docker project has a good
-tutorial called `Get Started with Docker Engine for Linux
-<https://docs.docker.com/linux/>`_ if you unsure how to proceed.
+development machine, take care of that first. If you have not installed
+Docker yet, you can follow `the official tutorial
+<https://docs.docker.com/engine/installation/>`_ to do so.
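
If you want to confirm that Docker is working before moving on, a quick
smoke test (plain Docker CLI; nothing Ceph-specific is assumed here) is::

    docker run --rm hello-world
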
Once Docker is up and running, install ``ceph-workbench`` by following the
`Installation instructions in the ceph-workbench documentation
The *ErasureCodePlugin* derived object must provide a factory method
from which the concrete implementation of the *ErasureCodeInterface*
-object can be generated. The `ErasureCodePluginExample plugin <https://github.com/ceph/ceph/blob/v0.78/src/test/osd/ErasureCodePluginExample.cc>`_ reads:
+object can be generated. The `ErasureCodePluginExample plugin <https://github.com/ceph/ceph/blob/v0.78/src/test/erasure-code/ErasureCodePluginExample.cc>`_ reads:
::
Many PGs can map to one OSD.
A PG represents nothing but a grouping of objects; you configure the
-number of PGs you want (see
-http://ceph.com/wiki/Changing_the_number_of_PGs ), number of
-OSDs * 100 is a good starting point, and all of your stored objects
-are pseudo-randomly evenly distributed to the PGs. So a PG explicitly
-does NOT represent a fixed amount of storage; it represents 1/pg_num
-'th of the storage you happen to have on your OSDs.
+number of PGs you want (the number of OSDs * 100 is a good starting
+point), and all of your stored objects are pseudo-randomly evenly
+distributed to the PGs. So a PG explicitly does NOT represent a fixed
+amount of storage; it represents 1/pg_num'th of the storage you happen
+to have on your OSDs.
Ignoring the finer points of CRUSH and custom placement, it goes
something like this in pseudocode::
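
    # Illustrative sketch only: hash() and crush() stand in for Ceph's
    # real object-name hash and CRUSH mapping, and the variable names
    # are made up for the example.
    locator = object_name
    obj_hash = hash(locator)
    pg = obj_hash % num_pg          # pick one of the pool's PGs
    osds_for_pg = crush(pg)         # CRUSH returns an ordered list of OSDs
    primary = osds_for_pg[0]
    replicas = osds_for_pg[1:]

In practice you can ask the cluster for this mapping directly with
``ceph osd map <pool> <object>``, which reports the PG and the acting
set of OSDs for a given object.
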
``ceph-osd`` daemon after reverting to legacy values as the feature
bit is not perfectly enforced.
-.. _CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data: http://ceph.com/papers/weil-crush-sc06.pdf
+.. _CRUSH - Controlled, Scalable, Decentralized Placement of Replicated Data: https://ceph.com/wp-content/uploads/2016/08/weil-crush-sc06.pdf
also the default for Ceph erasure coded pools.
The *jerasure* plugin encapsulates the `Jerasure
-<https://bitbucket.org/jimplank/jerasure/>`_ library. It is
+<http://jerasure.org>`_ library. It is
recommended to read the *jerasure* documentation to get a better
understanding of the parameters.
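
For orientation, these parameters are passed to the plugin through an
erasure code profile; the profile name and the k/m/technique values
below are only example choices, not recommendations::

    ceph osd erasure-code-profile set myjerasureprofile \
         plugin=jerasure k=4 m=2 technique=reed_sol_van
    ceph osd pool create ecpool 128 128 erasure myjerasureprofile
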
ceph tell osd.0 heap stop_profiler
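
The ``stop_profiler`` command above is normally the last step of a
profiling session; a typical sequence with the same ``heap`` commands
looks like this::

    ceph tell osd.0 heap start_profiler   # begin collecting heap samples
    ceph tell osd.0 heap dump             # write a heap profile for later analysis
    ceph tell osd.0 heap stop_profiler    # stop collecting
    ceph tell osd.0 heap release          # return unused memory to the OS
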
.. _Logging and Debugging: ../log-and-debug
-.. _Google Heap Profiler: http://google-perftools.googlecode.com/svn/trunk/doc/heapprofile.html
+.. _Google Heap Profiler: http://goog-perftools.sourceforge.net/doc/heap_profiler.html
| **Blog**             | Check the Ceph Blog_ periodically to keep track | http://ceph.com/community/blog/               |
|                      | of Ceph progress and important announcements.   |                                               |
+----------------------+-------------------------------------------------+-----------------------------------------------+
-| **Planet Ceph**      | Check the blog aggregation on Planet Ceph for   | http://ceph.com/community/planet-ceph/        |
+| **Planet Ceph**      | Check the blog aggregation on Planet Ceph for   | https://ceph.com/category/planet/             |
|                      | interesting stories, information and            |                                               |
|                      | experiences from the community.                 |                                               |
+----------------------+-------------------------------------------------+-----------------------------------------------+
-| **Wiki**             | Check the Ceph Wiki is a source for more        | https://wiki.ceph.com/                        |
+| **Wiki**             | The Ceph Wiki is a source for more              | http://wiki.ceph.com/                         |
|                      | community and development related topics. You   |                                               |
|                      | can find there information about blueprints,    |                                               |
|                      | meetups, the Ceph Developer Summits and more.   |                                               |