PetaVision
Alpha
BorderExchange Class Reference

Public Member Functions

    BorderExchange(MPIBlock const &mpiBlock, PVLayerLoc const &loc)
    void exchange(float *data, std::vector<MPI_Request> &req)
    MPIBlock const *getMPIBlock() const
    int getNumNeighbors() const
    int getRank() const
Private Member Functions

    void freeDatatypes()
    void initNeighbors()
    int neighborIndex(int commId, int direction)
    void newDatatypes()
    std::size_t recvOffset(int direction)
    int reverseDirection(int commId, int direction)
    std::size_t sendOffset(int direction)
Private Attributes

    std::vector<MPI_Datatype> mDatatypes
    PVLayerLoc mLayerLoc
    MPIBlock const *mMPIBlock = nullptr
    unsigned int mNumNeighbors
    std::vector<int> neighbors
Definition at line 15 of file BorderExchange.hpp.
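A minimal usage sketch of the public interface above. The include path, the PV namespace, and the assumption that exchange() posts nonblocking transfers that the caller must wait on are inferences, not taken from this page.

    #include "utils/BorderExchange.hpp" // assumed include path
    #include <mpi.h>
    #include <vector>

    // Fill the halo regions of extendedData by exchanging borders with the
    // neighboring processes described by mpiBlock and loc.
    void exchangeBorders(
          PV::MPIBlock const &mpiBlock, PVLayerLoc const &loc, float *extendedData) {
       PV::BorderExchange borderExchange(mpiBlock, loc);
       std::vector<MPI_Request> requests;
       borderExchange.exchange(extendedData, requests); // assumed to post nonblocking sends/receives
       MPI_Waitall(static_cast<int>(requests.size()), requests.data(), MPI_STATUSES_IGNORE);
    }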
neighborIndex()

int BorderExchange::neighborIndex(int commId, int direction)  [private]
Returns the intercolumn rank of the neighbor in the given direction. If there is no neighbor, returns a negative value.
Definition at line 177 of file BorderExchange.cpp.
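A hypothetical caller-side fragment illustrating that contract; the 1..8 direction range and the surrounding loop are assumptions, not part of the documented interface.

    // Hypothetical fragment: visit each candidate direction, skipping sides
    // where neighborIndex() reports no neighbor.
    for (int direction = 1; direction <= 8; ++direction) { // assumed range
       int nbrRank = neighborIndex(commId, direction);
       if (nbrRank < 0) {
          continue; // negative return: no neighbor (edge of the MPI block)
       }
       // ... exchange border data with nbrRank ...
    }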
recvOffset()

std::size_t BorderExchange::recvOffset(int direction)  [private]
Returns the receive data offset for the given neighbor; a hedged offset sketch appears after sendOffset() below.
Definition at line 398 of file BorderExchange.cpp.
reverseDirection()

int BorderExchange::reverseDirection(int commId, int direction)  [private]
In a send/receive exchange, when rank A makes an MPI send to its neighbor in direction x, that neighbor must make a complementary MPI receive call. To get the tags correct, the receiver needs to know which direction the sender used when determining which process to send to.

Thus, if every process does an MPI send in each direction, to the process of rank neighborIndex(icRank, direction) with tag mTags[direction], then every process must also do an MPI receive in each direction, from the process of rank neighborIndex(icRank, direction) with tag mTags[reverseDirection(icRank, direction)].
Definition at line 324 of file BorderExchange.cpp.
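The following sketch restates that tag-matching rule in code. It is written as if it were a member of BorderExchange, so mTags, neighborIndex(), and reverseDirection() refer to the members described on this page; the 1..8 direction range, the per-direction buffers, the count, and the communicator are illustrative assumptions.

    // Sketch only, not the actual implementation. Buffers, counts, the
    // direction range, and the communicator are assumed for illustration.
    void postExchangeSketch(
          int icRank, float **sendBufs, float **recvBufs, int count,
          MPI_Comm comm, std::vector<MPI_Request> &requests) {
       for (int direction = 1; direction <= 8; ++direction) { // assumed range
          int nbr = neighborIndex(icRank, direction);
          if (nbr < 0) {
             continue; // no neighbor on this side
          }
          MPI_Request req;
          // Send toward `direction`; the tag is indexed by the sender's direction.
          MPI_Isend(sendBufs[direction], count, MPI_FLOAT, nbr, mTags[direction], comm, &req);
          requests.push_back(req);
          // Receive from the same neighbor; it sent using the opposite direction,
          // so the matching tag is the reversed one.
          MPI_Irecv(recvBufs[direction], count, MPI_FLOAT, nbr,
                    mTags[reverseDirection(icRank, direction)], comm, &req);
          requests.push_back(req);
       }
    }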
sendOffset()

std::size_t BorderExchange::sendOffset(int direction)  [private]
Returns the send data offset for the given neighbor.
Definition at line 431 of file BorderExchange.cpp.
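To make the offset functions concrete, here is a hedged sketch of what the west-direction offsets might look like, assuming PVLayerLoc carries nx, ny, nf and halo margins lt/rt/dn/up, with the extended buffer stored row-major and features varying fastest; the actual direction encoding and formulas are in BorderExchange.cpp.

    #include <cstddef>

    // Hedged illustration only; not the formulas used in BorderExchange.cpp.
    // Assumes PVLayerLoc has nx, ny, nf and halo.{lt, rt, dn, up}.
    std::size_t westRecvOffsetSketch(PVLayerLoc const &loc) {
       // Width of one extended row, in floats.
       std::size_t rowWidth =
             static_cast<std::size_t>(loc.halo.lt + loc.nx + loc.halo.rt) * loc.nf;
       // The west halo begins at column 0 of the first interior row,
       // i.e. halo.up extended rows into the buffer.
       return static_cast<std::size_t>(loc.halo.up) * rowWidth;
    }

    std::size_t westSendOffsetSketch(PVLayerLoc const &loc) {
       std::size_t rowWidth =
             static_cast<std::size_t>(loc.halo.lt + loc.nx + loc.halo.rt) * loc.nf;
       // The data sent west is the leftmost interior columns: same row as
       // the receive region, shifted past the local west halo.
       return static_cast<std::size_t>(loc.halo.up) * rowWidth
              + static_cast<std::size_t>(loc.halo.lt) * loc.nf;
    }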