Wednesday, 23 April 2014




Message Passing Interface (MPI)


I. Introduction


                If shared memory is not present, two or more processors have to work asynchronously and exchange data explicitly. To communicate, they can --

--  Use a host processor.

--  Use a bus for communication.

For Example,

                If two friends want to communicate while working asynchronously, they have to use an interface such as SMS, e-mail or a postcard. The other friend may not receive the message right away, as he/she may be busy with another task, so the message has to be stored in some intermediate place like a mailbox or a post office.

                In a network, a server in between can collect all the messages and deliver them to their respective addresses. In MPI, when many processes have to communicate with each other, they use the communicator MPI_COMM_WORLD, which works like the post office.

 


II. MPI IPC C-Program Sample


#include <mpi.h>       // library for implementing MPI IPC.
#include <stdio.h>
#include <string.h>

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);        // MPI program starts with this function.

    // Actual program.

    MPI_Finalize();                // MPI program ends with this function.
    return 0;
}
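Assuming an MPI implementation such as Open MPI or MPICH is installed, a program like the one above is typically compiled with the wrapper compiler mpicc (for example, mpicc sample.c -o sample, where the file name is just an illustration) and launched with mpiexec -n 4 ./sample, which starts four copies of the same program as separate processes.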

 

III. MPI_Init()


-- Prototype :

int MPI_Init(int* argc_p, char*** argv_p);

-- Purpose :

It initializes the MPI execution environment. Once it returns, the predefined communicator MPI_COMM_WORLD is available; it contains all the processes started for the program, which may span an entire network or only a small part of it.

 

IV. MPI_COMM_WORLD


-- Purposes :

1) It gives the MPI runtime a context in which to allocate storage and buffers for messages.

2) It assigns an ID, called a rank, to each process.

3) It provides information about the number of processes (the size of the communicator).


-- Sub-Function Prototypes :

1) int MPI_Comm_size(MPI_Comm comm, int* commsize_p)

2) int MPI_Comm_rank(MPI_Comm comm, int* rank_p)

        Every process can find out its own rank and the size of the communicator (e.g. MPI_COMM_WORLD) using the two functions above.
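A minimal sketch of how each process queries its rank and the communicator size (the printf message is our own illustration, not part of the original notes):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   // ID of this process
    MPI_Comm_size(MPI_COMM_WORLD, &size);   // total number of processes
    printf("Process %d of %d reporting in\n", rank, size);
    MPI_Finalize();
    return 0;
}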

 




 


V. MPI Communication Functions


The header file also declares functions that processes can use to exchange messages and combine their results. Some of these functions are --


1) MPI_Send()

-- Prototype :

int MPI_Send( void* msgbuffer_p, int msgsize,
              MPI_Datatype msgtype, int destination,
              int tag, MPI_Comm comm);

-- msgbuffer_p : it points to the buffer holding the message to be sent.

-- msgsize : it stores the number of elements (of type msgtype) in the message.

-- destination : it stores the rank of the process that should receive the message.

-- tag : it is an integer label attached to the message, which can be used as a flag for if-else statements. For example, it stores 1 if the message is to be printed or 0 if it is not. (A small send/receive sketch follows the MPI_Recv parameters below.)

 

2) MPI_Recv()


-- Prototype :

int MPI_Recv( void* msgbuffer_p, int maxbuffersize,
              MPI_Datatype msgtype, int source,
              int tag, MPI_Comm comm,
              MPI_Status* status_p);

-- msgbuffer_p : it points to the buffer where the incoming message will be stored.

-- maxbuffersize : it stores the maximum number of elements the buffer can hold. A maximum is used because the receiver may not know the actual size of the message being received.

-- source : it stores the rank of the process the message is expected from.

-- tag : it must match the tag used by the sender, and can be used as a flag for if-else statements, just as with MPI_Send.


# MPI_ANY_TAG : when multiple messages arrive from the same source, their tags need not come in any particular order; passing MPI_ANY_TAG as the tag lets the receiver accept a message with any tag (the actual tag can then be read from status_p).
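A small send/receive sketch using the two functions above; it assumes the program is run with at least two processes, and the message text and tag value 1 are our own choices:

#include <mpi.h>
#include <stdio.h>
#include <string.h>

int main(int argc, char* argv[])
{
    int rank;
    char msg[64];
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0) {
        strcpy(msg, "hello from rank 0");
        // send strlen(msg)+1 characters to the process with rank 1, using tag 1
        MPI_Send(msg, (int)strlen(msg) + 1, MPI_CHAR, 1, 1, MPI_COMM_WORLD);
    } else if (rank == 1) {
        // receive at most 64 characters from rank 0 with tag 1
        MPI_Recv(msg, 64, MPI_CHAR, 0, 1, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank 1 received: %s\n", msg);
    }
    MPI_Finalize();
    return 0;
}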

 

3) MPI_Reduce() : It is an in-built function declared in the header file. It lets the programmer combine a value from every process, for example by summing the results from all the processes onto a single process. Since the time taken for such a combination depends on the hardware and architecture, the MPI implementation chooses an efficient method (such as a tree-shaped reduction) to combine the results.
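A minimal sketch that sums every process's rank onto process 0 (summing ranks is just an illustration of MPI_SUM):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char* argv[])
{
    int rank, sum = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    // every process contributes its own rank; process 0 receives the sum
    MPI_Reduce(&rank, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("Sum of all ranks = %d\n", sum);
    MPI_Finalize();
    return 0;
}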

 

 

# Note :

MPI follows the SPMD (Single Program, Multiple Data) model and typically has medium to coarse granularity, in contrast with SIMD, which has fine granularity.

 

# Something to Ponder Upon :

 Boolean Satisfiability Problem --

-- Given a Boolean circuit, determine whether there exists a combination of inputs for which the output is true.


(Hint : Using MPI, each process can take a share of the possible input combinations and compute the circuit's output for them; a rough sketch follows below.)
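A rough sketch of the hint, not taken from the original notes: evaluate_circuit() is a hypothetical placeholder for the circuit, and N_INPUTS is an assumed input count. Each process checks a disjoint share of the 2^N_INPUTS input combinations, and the per-process results are combined with MPI_Reduce.

#include <mpi.h>
#include <stdio.h>

#define N_INPUTS 16   /* assumed circuit size, just for illustration */

/* hypothetical circuit: returns 1 if this input combination satisfies it */
static int evaluate_circuit(unsigned long x)
{
    return (x & 1) && !(x & 2);   /* placeholder logic */
}

int main(int argc, char* argv[])
{
    int rank, size;
    unsigned long i, local_found = 0, found = 0;
    unsigned long total = 1UL << N_INPUTS;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* each process tests every size-th combination, starting at its own rank */
    for (i = (unsigned long)rank; i < total; i += (unsigned long)size)
        if (evaluate_circuit(i))
            local_found = 1;

    /* found becomes 1 on process 0 if any process hit a satisfying assignment */
    MPI_Reduce(&local_found, &found, 1, MPI_UNSIGNED_LONG, MPI_MAX, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf(found ? "Satisfiable\n" : "Unsatisfiable\n");

    MPI_Finalize();
    return 0;
}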

# No reference has been taken from any internet source or any book while writing this blog. All images belong to Vijay Sahil Sondhi. :P

# EXTERNAL LINKS

-- Example Program : http://en.wikipedia.org/wiki/Message_Passing_Interface#Example_program

-- SAT : http://en.wikipedia.org/wiki/Boolean_satisfiability_problem

 

# Notes Compiled by --

                Vijay Sahil Sondhi   365/CO/11

                Vikas Mulodhia       367/CO/11


