Wednesday 29 January 2014

Fwd: Optimizing Compilers



---------- Forwarded message ----------
From: Siddharth Saluja <siddharthsaluja53@gmail.com>
Date: Thu, Jan 30, 2014 at 2:19 AM
Subject: Optimizing Compilers
To: apmahs.nsit.coeunited@blogger.com


                          Optimizing Compilers

 

Effective optimizing compilers need to gather information about the structure and the flow of control through programs.

 

·        Which instructions are always executed before a given instruction.

·        Which instructions are always executed after a given instruction.

·        Where the loops in a program are, since roughly 90% of any computation is normally spent in 10% of the code: the inner loops.

 

Features of Optimization Techniques:

·        The optimizer is typically the most complex component of a modern compiler; its transformations must always be sound, i.e., semantics-preserving.

·        Need to pay attention to exceptional and corner cases as well.

·        Use a conservative approach: rather miss an optimization than change the program's semantics.

·        Reduce runtime resource requirements (most of the time).

·        Usually the target is running time, but there are memory (space) optimizations as well.

·        Runtime optimizations focus on frequently executed code

·        Cost-effective, i.e., the benefits of an optimization must be worth the effort of implementing it.

 

Various Levels:

 

High-level optimizations

• Operate at a level close to that of the source code

• Often language-dependent

 

 

Intermediate code optimizations

• Most optimizations fall here

• Typically, language-independent

 

Low-level optimizations

• Usually specific to each architecture

 

High-Level Optimizations:

• Inlining

• Replace the function call with the body of the called function (see the sketch after this list)

• Partial evaluation

• Statically evaluate those components of a program that can be evaluated at compile time

• Tail recursion elimination

• Loop reordering

• Array alignment, padding, layout
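
A rough C sketch of two of these transformations, inlining and tail recursion elimination, written here as hand-applied source-to-source rewrites (the function names are made up for illustration):

/* Inlining: the call to square() is replaced by the body of square(). */
static int square(int x) { return x * x; }

int area_before(int side) { return square(side); }
int area_after(int side)  { return side * side; }   /* call replaced by body */

/* Tail recursion elimination: the recursive call is the last action of the
   function, so it can be turned into a loop that reuses the same stack frame. */
int sum_before(int n, int acc)
{
    if (n == 0)
        return acc;
    return sum_before(n - 1, acc + n);   /* tail call */
}

int sum_after(int n, int acc)
{
    while (n != 0) {                     /* recursion replaced by a loop */
        acc = acc + n;
        n = n - 1;
    }
    return acc;
}

After the rewrite the call depth no longer grows with n, which is what one of the comments below means by avoiding a very tall stack of frames.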

 

Intermediate-Level Optimizations:

• Common subexpression elimination

• Constant propagation

• Jump threading (see the sketch after this list)

• Loop-invariant code motion

• Dead-code elimination

• Strength reduction
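
Of these, jump threading is the only one not revisited later in the post, so here is a minimal hypothetical C sketch: when the outcome of a branch is already determined on the path that reaches it, control is threaded straight past the repeated test.

/* Before: the second test of (x == 0) is redundant, because its outcome
   is already known on every path that reaches it. */
int f_before(int x)
{
    int y = 0;
    if (x == 0)
        y = 1;
    if (x == 0)          /* outcome already determined on both incoming paths */
        y = y + 1;
    return y;
}

/* After jump threading (shown at the source level): each path goes
   directly to the code it will execute and the repeated test disappears. */
int f_after(int x)
{
    int y = 0;
    if (x == 0) {
        y = 1;
        y = y + 1;       /* this path was threaded past the second test */
    }
    return y;
}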

 

Low-Level Optimizations:

• Register allocation

• Instruction scheduling for pipelined machines

• Loop unrolling (see the sketch after this list)

• Instruction reordering

• Delay slot filling

• Utilizing features of specialized components, e.g., floating-point units

• Branch prediction
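
As a sketch of one of these, here is a hypothetical C example of loop unrolling by a factor of four; unrolling cuts the branch overhead per element and gives the instruction scheduler more independent work to reorder.

/* Before: one addition and one branch per array element. */
long sum_before(const int *a, int n)
{
    long s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* After unrolling by 4 (for brevity n is assumed to be a multiple of 4;
   a real compiler also emits a small remainder loop for leftover elements). */
long sum_after(const int *a, int n)
{
    long s = 0;
    for (int i = 0; i < n; i += 4) {
        s += a[i];
        s += a[i + 1];
        s += a[i + 2];
        s += a[i + 3];
    }
    return s;
}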

 

Constant Propagation:

 

·        Identify variables and expressions whose values are known at compile time, and replace them with those values.
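
A minimal hypothetical C sketch of the source-level effect (propagating the constant, then folding the resulting expressions):

/* Before: n always holds the constant 10 at its uses. */
int before(void)
{
    int n = 10;
    int m = n * 4;       /* n is known to be 10 here  */
    return m + n;        /* ... and here              */
}

/* After constant propagation and folding. */
int after(void)
{
    return 50;           /* 10*4 + 10, evaluated at compile time */
}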

 

Strength Reduction:

·        Replace expensive operations with equivalent cheaper (more efficient) ones.

Before:    y = 2;  z = x^y;        (here ^ denotes exponentiation)

After:     y = 2;  z = x*x;

 

·        The underlying architecture may determine which operations are cheaper and which ones are more expensive.
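
For example, on most machines a multiply is more expensive than a shift or an add, so a hypothetical compiler might rewrite the loop below; the induction-variable version replaces a per-iteration multiply with an addition.

/* Before: a multiply by 8 on every iteration. */
void scale_before(int *a, int n)
{
    for (int i = 0; i < n; i++)
        a[i] = i * 8;
}

/* After strength reduction: i * 8 (equivalently i << 3) is maintained
   incrementally by adding 8 each time around the loop. */
void scale_after(int *a, int n)
{
    int k = 0;                     /* invariant: k == i * 8 */
    for (int i = 0; i < n; i++) {
        a[i] = k;
        k += 8;
    }
}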

 

Loop-Invariant Code Motion:

• Move code whose effect is independent of the loop's iteration outside the loop.
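
A small hypothetical C example, in the same spirit as the one discussed in the comments below:

/* Before: limit * 2 does not depend on i, yet it is recomputed
   on every iteration. */
void fill_before(int *a, int n, int limit)
{
    for (int i = 0; i < n; i++)
        a[i] = i + limit * 2;
}

/* After loop-invariant code motion: the invariant computation is
   hoisted out of the loop and evaluated only once. */
void fill_after(int *a, int n, int limit)
{
    int t = limit * 2;             /* hoisted: same value every iteration */
    for (int i = 0; i < n; i++)
        a[i] = i + t;
}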

 

Common Subexpression Elimination:

 

·        Reuse the result of an expression that was previously computed

§  Valid only if the values of all variables in the expression have not changed since it was computed

§  Based on available-expressions analysis
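
A hypothetical C illustration:

/* Before: (b + c) is computed twice with no intervening change
   to b or c, so the second computation is redundant. */
int cse_before(int b, int c, int d)
{
    int x = b + c;
    int y = (b + c) * d;
    return x + y;
}

/* After common subexpression elimination: the available value of
   b + c is reused instead of being recomputed. */
int cse_after(int b, int c, int d)
{
    int t = b + c;             /* the expression is "available" from here on */
    int x = t;
    int y = t * d;
    return x + y;
}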

 

Dead Code Elimination:

 

§  Dead variable: a variable whose value is no longer used

§  Live variable: a variable whose value may still be used later (the opposite of dead)

§  Dead code: a statement that assigns to a dead variable
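
A short hypothetical C example:

/* Before: t is assigned but never used afterwards, so t is dead
   and the assignment to it is dead code. */
int dce_before(int a, int b)
{
    int t = a * b;       /* value never used: dead                */
    int r = a + b;       /* r is live: its value reaches return   */
    return r;
}

/* After dead-code elimination. */
int dce_after(int a, int b)
{
    return a + b;
}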

 

BY -
Siddharth Saluja , Varun Jain

6 comments:

  1. Data flow dependencies are also resolved by compilers to some extent, in a way..
    We can consider a very basic example: putting an instruction in between two dependent instructions to give the first instruction the clock cycle it needs to produce
    the result used by the second instruction.... Although this kind of dependency, or we can say this hazard to parallelism, cannot be resolved completely....
    And from the above post I didn't get the point "Loop-Invariant Code Motion"... If any of you guys got this, please elaborate... thanks in advance :)

  2. The post on compiler optimizations gives a lot of cues. I myself did not know about "tail recursion elimination" - this avoids the very tall stack of frames taken by deep recursion.
    It is also an interesting fact that about 90% of computation time is spent in the 10% of code in inner loops.
    We will of course be doing some bit of compiler optimizations - but thanks Sidharth and Varun for generating the interest.
    Well - someone please get the answer to Uzma's query!

  3. @uzma. Loop-invariant code motion refers to the process of moving the part of the code that is not actually affected by the loop outside the loop... for example, consider this code:
    for(i=0;i<n;i++)
    {
    x+=i;
    y=2;
    }
    Here y is assigned the value 2 at every iteration of the loop. The compiler brings this statement out of the loop. This is what I understood. It might mean something different as well!

  4. @uzma @suraj
    Loop-invariant computations:
    A definition
        t = x ⊕ y
    in a loop is loop-invariant if
    (1) x and y are constants, or
    (2) all reaching definitions of x and y are outside the loop, or
    (3) only one definition reaches x (or y), and that definition is loop-invariant.

  5. @varun can you please elaborate the part "all reaching definitions of x and y are outside the loop, or only one definition reaches x (or y)"?
    And @suraj, if that was the case, then why was there a need to assign the value 2 to 'y' each time the loop is iterated?
