
Fault tolerance for parallel MPI jobs


Table of contents:

  1. What is "fault tolerance"?
  2. What fault tolerance techniques does Open MPI plan on supporting?
  3. Does Open MPI support checkpoint and restart of parallel jobs (similar to LAM/MPI)?
  4. Where can I find the fault tolerance development work?
  5. Does Open MPI support end-to-end data reliability in MPI message passing?

1. What is "fault tolerance"?

The phrase "fault tolerance" means many things to many people. Typical definitions range from user processes dumping vital state to disk periodically to checkpoint/restart of running processes to elaborate recreate-process-state-from-incremental-pieces schemes to ... (you get the idea).

In the scope of Open MPI, we typically define "fault tolerance" to mean the ability to recover from one or more component failures in a well-defined manner, with either a transparent or an application-directed mechanism. Component failures may manifest as a corrupted transmission over a faulty network interface, or as the failure of one or more serial or parallel processes due to a processor or node failure. Open MPI strives to provide the application with a consistent system view while still providing a production-quality, high-performance implementation.

Yes, that's pretty much as all-inclusive as possible -- intentionally so! Remember that in addition to being a production-quality MPI implementation, Open MPI is also a vehicle for research. So while some forms of "fault tolerance" are more widely accepted and used, others are certainly of valid academic interest.

2. What fault tolerance techniques does Open MPI plan on supporting?

Open MPI plans on supporting the following fault tolerance techniques:

  • Coordinated and uncoordinated process checkpoint and restart, similar to those implemented in LAM/MPI and MPICH-V, respectively.
  • Message logging techniques, similar to those implemented in MPICH-V.
  • Data reliability and network fault tolerance, similar to those implemented in LA-MPI.
  • User-directed, communicator-driven fault tolerance, similar to that implemented in FT-MPI (a minimal API-level sketch follows below).

The Open MPI team does not intend to limit its fault tolerance techniques to those mentioned above, and plans to extend beyond them in the future.
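
As a taste of what user-directed fault tolerance looks like at the MPI API level, here is a minimal sketch using only standard MPI calls: the application replaces the default MPI_ERRORS_ARE_FATAL error handler with MPI_ERRORS_RETURN and reacts to failed communication itself. FT-MPI's semantics (e.g., communicators that survive process failure) go well beyond this, so treat the sketch as an application-directed starting point, not as FT-MPI's API.

    /* Minimal user-directed fault handling: ask MPI to return error
     * codes instead of aborting, then check them at each call site. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, rc;
        char buf[64] = "hello";

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Replace the default abort-on-error behavior. */
        MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

        if (rank == 0) {
            rc = MPI_Send(buf, 64, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            if (MPI_SUCCESS != rc) {
                char msg[MPI_MAX_ERROR_STRING];
                int len;
                MPI_Error_string(rc, msg, &len);
                fprintf(stderr, "send failed (%s); recovering...\n", msg);
                /* Application-directed recovery would go here. */
            }
        } else if (rank == 1) {
            MPI_Recv(buf, 64, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        }

        MPI_Finalize();
        return 0;
    }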

3. Does Open MPI support checkpoint and restart of parallel jobs (similar to LAM/MPI)?

Yes. The v1.3 series was the first release series of Open MPI to include support for the transparent, coordinated checkpointing and restarting of MPI processes (similar to LAM/MPI).

Open MPI supports both the BLCR checkpoint/restart system and a "self" checkpointer that allows applications to implement their own checkpoint/restart functionality while taking advantage of the Open MPI checkpoint/restart infrastructure. For both of these, Open MPI provides a coordinated checkpoint/restart protocol and integration with a variety of network interconnects, including shared memory, Ethernet, InfiniBand, and Myrinet.
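
To make the "self" checkpointer concrete: per the v1.3/v1.6-era documentation, the self component invoked three hooks supplied by the application, named with a default prefix of opal_crs_self_user. The sketch below follows that shape; treat the exact hook names and signatures as assumptions to verify against the release you are actually using, and the checkpoint path and restart command as placeholders.

    /* Application-side sketch of the "self" checkpointer hooks.  The
     * hook names follow the v1.3-era documentation; the checkpoint
     * file path and restart command are hypothetical. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static double vital_state[1024];  /* state the application must preserve */

    /* Called when a checkpoint is requested: save state and tell the
     * runtime how to restart this process. */
    int opal_crs_self_user_checkpoint(char **restart_cmd)
    {
        FILE *fp = fopen("/tmp/my_app.ckpt", "wb");
        if (NULL == fp) return -1;
        fwrite(vital_state, sizeof(vital_state), 1, fp);
        fclose(fp);
        *restart_cmd = strdup("/path/to/my_app --restart /tmp/my_app.ckpt");
        return 0;
    }

    /* Called in the continuing process after the checkpoint is taken. */
    int opal_crs_self_user_continue(void)
    {
        return 0;   /* nothing to clean up in this sketch */
    }

    /* Called in the restarted process: reload the saved state. */
    int opal_crs_self_user_restart(void)
    {
        FILE *fp = fopen("/tmp/my_app.ckpt", "rb");
        if (NULL == fp) return -1;
        if (1 != fread(vital_state, sizeof(vital_state), 1, fp)) {
            fclose(fp);
            return -1;
        }
        fclose(fp);
        return 0;
    }

In those release series, jobs were typically launched with checkpoint/restart support enabled (e.g., mpirun -am ft-enable-cr) and then driven with the ompi-checkpoint and ompi-restart tools; consult the documentation for the release you are using.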

The implementation introduces a series of new frameworks and components designed to support a variety of checkpoint and restart techniques. This allows us to support the methods described above (application-directed, BLCR, etc.) as well as other kinds of checkpoint/restart systems (e.g., Condor, libckpt) and protocols (e.g., uncoordinated, message-induced).

Note: The checkpoint/restart support was last released as part of the v1.6 series. The v1.7 series and the Open MPI master do not support this functionality (most of the code is present in the repository, but it is known to be non-functional in most cases). This feature is looking for a maintainer. Interested parties should inquire on the developers mailing list.

4. Where can I find the fault tolerance development work?

The end-to-end MPI message data reliability work (i.e., reliable message passing over unreliable networks) is being actively developed on the git master. See question 5 below for more details.

The coordinated checkpoint and restart process fault tolerance work is currently available on the Open MPI development master and in the v1.3 release series; see question 3 above for more information about how to use this feature.

Information about the Fault Tolerant MPI prototype in Open MPI (in the style of FT-MPI; see question 2 above) is available on the Open MPI website.

5. Does Open MPI support end-to-end data reliability in MPI message passing?

The current release of Open MPI does not support end-to-end data reliability in message passing beyond what the underlying network already guarantees. Future releases of Open MPI will include explicit data reliability support (i.e., more functionality than the underlying network provides).

Specifically, the data reliability ("dr") PML component (available on the master, but not yet in a stable release) assumes that the underlying network is unreliable. It can drop and restart connections, retransmit corrupted or lost data, and so on. The net effect is that data sent through MPI API functions is guaranteed to arrive reliably.

For example, if you're using TCP as a message transport, chances of data corruption are fairly low. However, other interconnects do not guarantee that data will be uncorrupted when traveling across the network. Additionally, there are nonzero possibilities that data can be corrupted while traversing PCI buses, etc. (some corruption errors at this level can be caught/fixed, others cannot). Such errors are not uncommon at high altitudes (!).

Note that such added reliability does incur a performance cost -- latency and bandwidth suffer when Open MPI performs the consistency checks that are necessary to provide such guarantees.

Many clusters/networks do not need additional data reliability. But some do (e.g., those operating at high altitudes). The dr PML is intended for environments where reliability is an issue and users are willing to tolerate slightly slower applications in order to guarantee that their jobs do not crash (or, worse, produce wrong answers).
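
Because the dr PML sits below the MPI API, the added reliability is transparent to application code: an ordinary MPI program does not change at all, and the component would be selected at launch time through the usual MCA parameter mechanism (e.g., mpirun --mca pml dr, assuming a build that includes the component). A trivial ping-pong illustrates that nothing changes on the application side:

    /* An ordinary MPI ping-pong; nothing here is specific to the dr PML.
     * Reliability (retransmission, connection restart, etc.) would be
     * handled entirely below the MPI API. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, token = 42;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("round trip complete; token = %d\n", token);
        } else if (rank == 1) {
            MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }

Run with two processes (e.g., mpirun -np 2 ./pingpong); whether the dr PML is in use is decided entirely by the launch command, not by the source code.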