Wednesday 30 April 2014

Supercomputers and changing times



A supercomputer is a computer at the frontline of contemporary processing capacity, particularly speed of calculation, with individual operations completing in nanoseconds and overall performance measured in floating-point operations per second (FLOPS).
Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). 

Supercomputing, in the broadest sense, is about finding the perfect combination of speed and power, even as the definition of perfection changes as technology advances. But the single biggest challenge in high-performance computing (HPC) is now on the software side: creating code that can keep up with the processors.

The history of supercomputing goes back to the 1960s. In those early days the main idea was to use innovative designs and parallelism to achieve superior peak computational performance. While the supercomputers of the 1980s used only a few processors, machines with thousands of processors began to appear in the 1990s, both in the United States and in Japan, setting new performance records. Moore's Law, a famous "rule" of computer science, states that transistor counts, and with them processing power, roughly double every 1.5-2 years, and it has held true for roughly 50 years. By the end of the 20th century, massively parallel supercomputers with thousands of "off-the-shelf" processors similar to those found in personal computers had been constructed and had broken through the teraflop barrier. Progress in the first decade of the 21st century was dramatic, and supercomputers with over 60,000 processors appeared.
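As a rough back-of-the-envelope sketch of how that doubling compounds (the 1.5-2 year period is the one quoted above; the helper function is purely illustrative), a few lines of Python are enough:

```python
# Toy illustration of Moore's Law compounding: one doubling every
# `period_years` years (the 1.5-2 year range mentioned above).
def growth_factor(years, period_years=2.0):
    """Overall growth after `years`, assuming one doubling per `period_years`."""
    return 2 ** (years / period_years)

for years in (10, 20, 50):
    print(f"After {years} years: roughly {growth_factor(years):,.0f}x the original capacity")
```

For example, at one doubling every two years, 20 years corresponds to about a thousandfold increase in capacity.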

Here is the peak performance of the fastest supercomputer in the world over the 15 years from 1995 to 2010 (the growth rate is worked out in the short sketch after the list):

·  Top in 2010: 2.57 petaflops

·  Top in 2005: 280.6 teraflops

·  Top in 2000: 4.94 teraflops

·  Top in 1995: 170 gigaflops
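Those figures make it easy to estimate how quickly the top end has grown. A minimal sketch in Python, doing nothing more than arithmetic on the numbers listed above:

```python
# Peak performance of the #1 supercomputer, in FLOPS (values from the list above).
top_flops = {
    1995: 170e9,     # 170 gigaflops
    2000: 4.94e12,   # 4.94 teraflops
    2005: 280.6e12,  # 280.6 teraflops
    2010: 2.57e15,   # 2.57 petaflops
}

total_growth = top_flops[2010] / top_flops[1995]
annual_growth = total_growth ** (1 / 15)      # 15 years from 1995 to 2010
print(f"Total growth 1995-2010: about {total_growth:,.0f}x")
print(f"Average growth: about {annual_growth:.2f}x per year")
```

That works out to roughly a doubling of top-end performance every year, a faster pace than the Moore's Law period quoted earlier.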


One of the challenges to faster supercomputers is designing an operating system capable of handling that many calculations per second. Other important trends include the emphasis on open, rather than proprietary, systems, and the growing awareness of energy efficiency as a requirement.

According to top scientists, emphasis is shifting a little away from the speed/power paradigm and toward addressing software challenges. "What matters is not acquiring the iron, but being able to run code that matters." Rather than simply pushing to parallelize more codes, the effort is on using codes efficiently.
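To make "parallelizing a code" concrete, here is a toy, hypothetical sketch (not taken from any of the sources; the function and chunk sizes are invented for illustration) that splits one computation across several worker processes, the same basic idea that machines with thousands of processors rely on:

```python
# Toy illustration of parallelizing a computation: split one sum of squares
# across several worker processes. Real HPC codes use MPI/OpenMP and thousands
# of processors, but the idea of dividing the work is the same.
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum of squares over the half-open range [start, stop)."""
    start, stop = bounds
    return sum(i * i for i in range(start, stop))

if __name__ == "__main__":
    n, workers = 10_000_000, 4
    step = n // workers
    chunks = [(k * step, n if k == workers - 1 else (k + 1) * step)
              for k in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(f"Sum of squares below {n:,}: {total}")
```

Writing this for a toy sum is easy; the hard part, and the software challenge mentioned above, is doing it efficiently for real scientific codes running on tens of thousands of processors.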



Sources:

http://en.wikipedia.org/wiki/History_of_supercomputing

http://royal.pingdom.com/2010/12/02/incredible-growth-supercomputing-performance-1995-2010/

http://research.microsoft.com/en-us/um/people/blampson/72-cstb-supercomputing/72-cstb-supercomputing.pdf

http://www.businessinsider.in/Soon-Well-Have-A-Supercomputer-In-Every-Living-Room/articleshow/21329777.cms

 

Varun Singla 363/CO/11
