Notes for Lab 4
- Add an if test around the receive call in Global_sum(), so you only receive if partner_rank < nproc, e.g.:
  if (rank < partner_rank) {
      if (partner_rank < nproc) {   /* partner may not exist when nproc is not a power of two */
          int partner_value;
          MPI_Recv(&partner_value, 1, MPI_INT, partner_rank, 0, comm, MPI_STATUS_IGNORE);
          sum += partner_value;
      }
  } else {
      MPI_Send(&sum, 1, MPI_INT, partner_rank, 0, comm);
      break;   /* once a process has sent its subtotal, it is done with the reduction loop */
  }
- A single collective call computes the same total on the root:
  int total;
  MPI_Reduce(&x, &total, 1, MPI_INT, MPI_SUM, 0, comm);
If you used MPI_Allreduce(), all processes would have the total.
- This is just an exercise in message passing and managing process IDs. Remap the ranks: e.g. for ranks 0-3 reducing to rank 0, if you instead want the reduction at rank 1, map rank 1 to 0, 2 to 1, 3 to 2, and 0 to 3, i.e.:
  int shifted_rank = (rank + nproc - root) % nproc;
  int shifted_partner_rank = shifted_rank ^ bitmask;
  partner = (shifted_partner_rank + root) % nproc;
(See the attached solution global_sum.c).
- See attached solutions global_gather.c and global_gather2.c.