Monday 3 February 2014

By Sumit Kumar and Shivam Rastogi.

GRANULARITY

Granularity is the extent to which a system is broken down into small parts, either the system itself or its description or observation. It is the extent to which a larger entity is subdivided. For example, a yard broken into inches has finer granularity than a yard broken into feet.

Coarse-grained systems consist of fewer, larger components than fine-grained systems; a coarse-grained description of a system regards large subcomponents while a fine-grained description regards smaller components of which the larger ones are composed.

The terms granularity, coarse, and fine are relative, used when comparing systems or descriptions of systems. An example of increasingly fine granularity: a list of nations in the United Nations, a list of all states/provinces in those nations, a list of all cities in those states, etc.

 

·         The programmer divides the program at coarse grain, the compiler at fine grain, and the OS at middle grain.
The programmer is not concerned with fine-grain dependencies.

·         Coarse-grained systems are implemented in distributed environments, i.e. MIMD - NORMA (NO Remote Memory Access).

·         Middle-grained systems are implemented on UMA (Uniform Memory Access) machines.

·         Fine-grained systems are implemented within cores.

 

SPEED UP (decrease in overall computation time due to parallelism)

Suppose we have a program with ‘N1’ instructions, and let the time taken by a processor to execute a single instruction be ‘t1’.

So the total time taken by a single-processor machine to execute all the instructions = N1*t1

Now if we have ‘N2’ processors, then the overall time will depend on:

·         time taken per processor to execute its set of instructions:

(t1*N1)/N2

·         Communication time : (t2*N3)

Where ‘t2’ is the time for a single communication

           ‘N3’ is the no. of times communication is made

·         OS overheads (we will neglect these)

So the new time = (t1*N1)/N2 + (t2*N3)

Speed up       =          normal time/ time during parallel execution  

                        =          (N1*t1)/ ((t1*N1)/N2 + (t2*N3))

Dividing the numerator and denominator by (t1*N1)/N2

We have, 

Speed up       =          N2/ (1+ (t2/t1)*(N3/ (N1/N2)))

We can write N1/N2 = N (size of computation per processor)

So,

Speed up       =          N2/ (1+ (t2/t1)*(N3/N))

Also N/N3 (size of computation divided by no. of communications) is the size of computation between two communications, i.e. the size of grain.

So,

Speed up       =          N2/ (1+ (t2/t1)*(1/size of grain))
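As a sanity check on the algebra, the two forms of the speedup expression can be compared numerically. The numbers below (t1, t2, N1, N2, N3) are purely illustrative assumptions, not values from these notes:

```python
# t1: time per instruction, t2: time per communication,
# N1: total instructions, N2: processors, N3: number of communications.
# All values are hypothetical, chosen only to exercise the formulas.
t1, t2 = 1.0, 50.0
N1, N2, N3 = 1_000_000, 8, 100

serial_time = N1 * t1
parallel_time = (N1 * t1) / N2 + t2 * N3
speedup_direct = serial_time / parallel_time

# Same speedup via the grain-size form: N = N1/N2, grain = N/N3
N = N1 / N2
grain = N / N3
speedup_grain = N2 / (1 + (t2 / t1) * (1.0 / grain))

# Both forms of the expression must agree
assert abs(speedup_direct - speedup_grain) < 1e-9
print(round(speedup_direct, 3))  # 7.692, i.e. somewhat below the ideal N2 = 8
```

Note that communication cost alone drags the speedup below the ideal value of N2.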

 

Observations from the above relation

From the above relation,

For constant speedup, the denominator must stay constant, i.e. the term (t2/t1)*(1/size of grain) must stay constant

=> if the size of grain is increased, then t2/t1 can be increased correspondingly, and

=> if the size of grain is decreased, then t2/t1 must also decrease.

Here ‘t2’ is the communication time, so we conclude that “the tolerable communication time is proportional to the size of grain”.

So, for systems with a large grain size we can afford more communication time; therefore, in “coarse grained systems” we use the “message passing paradigm”.

With a small grain size we cannot afford a large ‘t2’, otherwise the speedup decreases significantly; so in “fine grained systems” we use the “shared memory paradigm”.
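This sensitivity can be seen by evaluating the closed-form speedup over a range of grain sizes at a fixed, expensive communication-to-computation ratio. The ratio and grain sizes below are illustrative assumptions:

```python
def speedup(n_proc, t2_over_t1, grain_size):
    # Speed up = N2 / (1 + (t2/t1) * (1/size of grain)), from the derivation above
    return n_proc / (1 + t2_over_t1 * (1.0 / grain_size))

n_proc = 8
ratio = 100.0  # hypothetical t2/t1: communication 100x costlier than computation

for grain in (10, 100, 1000, 10000):
    print(grain, round(speedup(n_proc, ratio, grain), 2))
```

With this ratio, a grain size of 10 yields a speedup well below 1 (the parallel version is slower than serial), while a grain size of 10000 approaches the ideal speedup of 8, which is why fine-grained systems need the cheap communication of shared memory.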

