Notes for Lab 4


  1. Add an if test around the receive call in Global_sum(), so that you only receive if partner_rank < nproc, e.g.:
if (rank < partner_rank) {
  /* Only receive if the partner actually exists
     (nproc need not be a power of two). */
  if (partner_rank < nproc) {
    int partner_value;
    MPI_Recv(&partner_value, 1, MPI_INT, partner_rank, 0, comm, MPI_STATUS_IGNORE);
    sum += partner_value;
  }
} else {
  /* Senders pass their partial sum up the tree and drop out of the loop. */
  MPI_Send(&sum, 1, MPI_INT, partner_rank, 0, comm);
  break;
}
  2. Use MPI_Reduce(), e.g.:
     int total;
     MPI_Reduce(&x, &total, 1, MPI_INT, MPI_SUM, 0, comm);
  3. If you used MPI_Allreduce(), all processes would have the total.

  4. This is just an exercise in message passing and managing process IDs. Simply remap the ranks: e.g. for ranks 0-3 reducing to 0, if you want the reduction to go to rank 1 instead, map rank 1 to 0, 2 to 1, 3 to 2, and 0 to 3, i.e.:
int shifted_rank = (rank + nproc - root) % nproc;
int shifted_partner_rank = shifted_rank ^ bitmask;
partner = (shifted_partner_rank + root) % nproc;

(See the attached solution global_sum.c).

  5. See the attached solutions global_gather.c and global_gather2.c.