c++ - Using Loops in MPI


I'm trying to sort large arrays by reversals, coding with MPI in C.

Basically, the program splits the array into portions for the workers, and each worker finds its own increasing and decreasing strips and sends those strips to the root. The root makes reversals by finding and using the max and min elements of these strips. The program ends when there is no break point left, which means the array is sorted.
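For reference, here is a minimal sketch of the two building blocks the question relies on. The bodies of reverse and hasbreakpoints are my assumption of what the helpers do, taking a "break point" to be a descent where a[i] > a[i+1]:

/* Assumed helpers, not the OP's actual code. */

/* Reverse array[lo..hi] in place. */
static void reverse(int *a, int lo, int hi)
{
    while (lo < hi) {
        int tmp = a[lo];
        a[lo++] = a[hi];
        a[hi--] = tmp;
    }
}

/* A break point is an index i where a[i] > a[i+1];
   the array is sorted when no break point remains. */
static int hasbreakpoints(const int *a, int n)
{
    int i;
    for (i = 0; i + 1 < n; i++)
        if (a[i] > a[i + 1])
            return 1;
    return 0;
}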

It's a long code, so I simplified the problem:

int *array;

int main(int argc, char *argv[])
{
    int p_id, n_procs, i, flag = 1;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &p_id);
    MPI_Comm_size(MPI_COMM_WORLD, &n_procs);

    if (p_id == 0) {
        array = createrandomarray(n_data);
        // print unsorted array

        while (hasbreakpoints(array, n_data)) {
            for (i = 1; i < n_procs; i++)
                // send portions to workers

            for (i = 1; i < n_procs; i++)
                // receive each strip from workers

            // find max and min of strips
            // make reversals on "array"
        }
        flag = 0;
        // print sorted array
    }
    else {
        while (flag == 1) {
            // receive portion from root
            // find own strips
            // send own strips to root
        }
    }
    MPI_Finalize();
    return 0;
}

As you can see, I need to use a while loop to run the program until no break point is left. I know that the number of MPI_Send calls has to equal the number of MPI_Recv calls, so I created a flag to make the root and the workers loop an equal number of times.

With this lazy approach, the program works without errors but never ends and doesn't reach MPI_Finalize. Is there a fix for this, or a more efficient approach I could use? Thanks for any help.

Your flag variable is local to each process, so you have to find a way of transferring its value from process #0 to the other processes whenever it changes.
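One straightforward way to do that is a collective call. A minimal sketch, assuming the OP's hasbreakpoints() helper, where the flag is re-broadcast at the top of each round with MPI_Bcast (a collective, so every rank must reach it the same number of times):

int flag = 1;
while (1) {
    if (p_id == 0)
        flag = hasbreakpoints(array, n_data);      /* root alone decides */
    MPI_Bcast(&flag, 1, MPI_INT, 0, MPI_COMM_WORLD); /* all ranks get the verdict */
    if (!flag)
        break;                                     /* everyone exits together */
    /* ... one round of sends, receives, and reversals ... */
}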

Well actually, you can solve the issue by playing with message tags, for example. The worker processes just receive from the root using MPI_ANY_TAG and decide what to do next, i.e. sending data back or finishing, depending on the actual tag value received. (Not tested:)

int *array;

int main(int argc, char *argv[])
{
    int p_id, n_procs, i, flag = 1;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &p_id);
    MPI_Comm_size(MPI_COMM_WORLD, &n_procs);

    const int compute = 1, stop = 2;

    if (p_id == 0) {
        array = createrandomarray(n_data);
        // print unsorted array

        while (hasbreakpoints(array, n_data)) {
            for (i = 1; i < n_procs; i++)
                // send portions to workers using tag compute
                MPI_Send(......, compute, ...);

            for (i = 1; i < n_procs; i++)
                // receive each strip from workers

            // find max and min of strips
            // make reversals on "array"
        }
        // send the stop message using tag stop
        for (i = 1; i < n_procs; i++)
            MPI_Send(...., stop, ...);
        // print sorted array
    }
    else {
        while (flag == 1) {
            // receive portion from root using MPI_ANY_TAG
            MPI_Recv(..., MPI_ANY_TAG, ..., &status);
            if (status.MPI_TAG == compute) {
                // find own strips
                // send own strips to root
            }
            else
                flag = 0;
        }
    }
    MPI_Finalize();
    return 0;
}
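For what it's worth, here is a self-contained, compilable sketch of just the stop-tag protocol above; the dummy payload, the fixed three rounds, and the names COMPUTE, STOP, and CHUNK are illustrative stand-ins for the actual strip computation:

#include <mpi.h>

enum { COMPUTE = 1, STOP = 2, CHUNK = 4 };

int main(int argc, char *argv[])
{
    int p_id, n_procs, round;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &p_id);
    MPI_Comm_size(MPI_COMM_WORLD, &n_procs);

    if (p_id == 0) {
        int data[CHUNK] = {0};
        for (round = 0; round < 3; round++) {      /* stands in for the while loop */
            int i;
            for (i = 1; i < n_procs; i++)          /* distribute work */
                MPI_Send(data, CHUNK, MPI_INT, i, COMPUTE, MPI_COMM_WORLD);
            for (i = 1; i < n_procs; i++)          /* collect results */
                MPI_Recv(data, CHUNK, MPI_INT, i, COMPUTE, MPI_COMM_WORLD, &status);
        }
        int i;
        for (i = 1; i < n_procs; i++)              /* tell workers to quit */
            MPI_Send(data, CHUNK, MPI_INT, i, STOP, MPI_COMM_WORLD);
    }
    else {
        int buf[CHUNK];
        int running = 1;
        while (running) {
            MPI_Recv(buf, CHUNK, MPI_INT, 0, MPI_ANY_TAG, MPI_COMM_WORLD, &status);
            if (status.MPI_TAG == COMPUTE)         /* do the work, send it back */
                MPI_Send(buf, CHUNK, MPI_INT, 0, COMPUTE, MPI_COMM_WORLD);
            else                                   /* STOP: leave the loop */
                running = 0;
        }
    }
    MPI_Finalize();
    return 0;
}

Both MPI_ANY_TAG and the status.MPI_TAG field are standard MPI, so this pattern is portable across implementations.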
