PEARL.mpi

MPI-related part of the PEARL library. More...


Classes

class  EventSet
 Container class for a set of local events with associated roles. More...
class  MpiCartesian
 Stores information related to a virtual cartesian MPI topology. More...
class  MpiComm
 Stores information related to an MPI communicator. More...
class  MpiMessage
 Abstraction layer for MPI messages. More...
class  MpiWindow
 Stores information related to an MPI-2 remote memory access window. More...
class  RemoteEventSet
 Container class for a set of remote events with associated roles. More...

Files

file  pearl.h
 Declarations of global library functions.
file  EventSet.h
 Declaration of the class EventSet.
file  MpiCartesian.h
 Declaration of the class MpiCartesian.
file  MpiComm.h
 Declaration of the class MpiComm.
file  MpiMessage.h
 Declaration of the class MpiMessage.
file  MpiWindow.h
 Declaration of the class MpiWindow.
file  RemoteEventSet.h
 Declaration of the class RemoteEventSet.


Detailed Description

This part of the PEARL library provides all functions and classes that are specific to handling traces of MPI-based programs, including traces of hybrid OpenMP/MPI applications.

The following code snippet shows the basic steps required to load and set up the PEARL data structures for handling pure MPI traces (for information on how to handle serial, pure OpenMP, or hybrid OpenMP/MPI traces, see the PEARL.base, PEARL.omp, and PEARL.hybrid parts of PEARL).

  // Initialize MPI, etc.
  ...
  // Initialize PEARL
  PEARL_mpi_init();

  // Load global definitions & trace data
  GlobalDefs defs(archive_name);
  LocalTrace trace(archive_name, mpi_rank);

  // Preprocessing
  PEARL_verify_calltree(defs, trace);
  PEARL_mpi_unify_calltree(defs, trace);
  PEARL_preprocess_trace(defs, trace);
  ...

Note that all of the aforementioned function calls except PEARL_mpi_init() throw exceptions in case of errors. These exceptions have to be handled to avoid deadlocks (e.g., one process failing with an exception while the other processes wait in an MPI communication operation), as sketched below.
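
For illustration, the following sketch wraps the setup calls from the snippet above in a try/catch block and aborts the entire MPI run if any of them fails. It is a minimal sketch only: it assumes that the exceptions thrown by these calls derive from std::exception, and that <mpi.h>, <cstdlib>, <exception>, and <iostream> have been included.

  try {
    // Load global definitions & trace data
    GlobalDefs defs(archive_name);
    LocalTrace trace(archive_name, mpi_rank);

    // Preprocessing
    PEARL_verify_calltree(defs, trace);
    PEARL_mpi_unify_calltree(defs, trace);
    PEARL_preprocess_trace(defs, trace);
  }
  catch (const std::exception& ex) {
    // Report the error and terminate all ranks to avoid a deadlock
    // (assumes the thrown exception type derives from std::exception)
    std::cerr << "Rank " << mpi_rank << ": " << ex.what() << std::endl;
    MPI_Abort(MPI_COMM_WORLD, EXIT_FAILURE);
  }

In a real tool, the defs and trace objects would of course need to live beyond the try block; the scoping here only keeps the sketch compact.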

