It is very hard to say precisely what High Performance Computing is about, but it relates closely to the idea of making the most of the machines you have, which overlaps heavily with Supercomputing. In practice, this usually means that the application running on the machine(s) consumes all of the resources available in the system.
By resources, I mean memory (RAM), processing power (CPU), disk space (HDD), bandwidth, and even visualization resources such as graphics bandwidth and the GPU.
Most of the time, High Performance Computing takes the form of computational clusters, graphics clusters or large shared-memory machines. To make use of all these resources, parallel computing is used; a minimal example is sketched below.
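Here is a minimal sketch of what parallel computing on a cluster looks like, assuming an MPI implementation (e.g. Open MPI or MPICH) is installed; the problem size and workload are made up for illustration. Each process sums its own slice of a range, and the partial results are combined on one rank:

```c
/* A minimal parallel-sum sketch using MPI (assumed to be installed).
 * Compile with mpicc, run with e.g. mpirun -np 4 ./a.out */
#include <stdio.h>
#include <mpi.h>

#define N 1000000  /* hypothetical problem size */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Each rank handles a contiguous slice of 0..N-1 */
    long begin = (long)N * rank / size;
    long end   = (long)N * (rank + 1) / size;

    double local = 0.0, total = 0.0;
    for (long i = begin; i < end; i++)
        local += (double)i;

    /* Combine the partial sums on rank 0 */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %.0f (computed by %d processes)\n", total, size);

    MPI_Finalize();
    return 0;
}
```

The same pattern of splitting the data, working locally, and combining the results is what lets an application spread across every node in a cluster.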
The fastest machines in the world are periodically ranked in a list maintained at Top500.org.
See my Guides for High Performance Computing for more information on installation, configuration and tools.
Linux and Macintosh Computational Clusters
I have done quite a fair bit of work on clusters and have been filing my work and interesting notes in my web forum. The collection has grown over the years, and it has become hard for me to list everything; the full list of works is in my Guides for High Performance Computing. In general, I would break them down into:
- Linux Clusters
- Macintosh Clusters
While the underlying principles are the same for both, the way the systems are built differs.
Linux and Macintosh Graphics Clusters
Graphics clusters differ from computational clusters in that they do not just stress the computational components of a cluster; they also produce visual output, which stresses the graphical components of the system as well.
There aren’t many graphics clusters that I have worked on, but visualization systems have been around for a long time. Setups such as Reality Centers and immersive environments have long been delivered as solutions by SGI and the like. It usually takes more than computer skills to make such setups a success: projectors, mounting frames, cables, and a hint of right-brain activity are all needed, which makes such clusters a rarity.
I have so far come across only a few types of real graphics clusters, by which I mean visualization systems driven by many small machines with commodity graphics hardware. Each machine is in charge of, and visible to, only a subset of the data, and each displays only a part of the intended image, using a distributed memory model; a sketch of the tiling idea follows the list below. Some of these systems are highlighted here:
- Chromium based Linux Tiled Display
- Quartz based Macintosh Video Wall
- SAGE and Synergy – Scalable Adaptive Graphics Environment (http://www.evl.uic.edu/cavern/sage/index.php)
- Ensight by CEI (http://www.ensight.com/)
- COVISE – COllaborative VIsualization and Simulation Environment (http://www.hlrs.de/organization/vis/covise/)
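To illustrate the tiling idea, here is a hypothetical sketch (not code from any of the systems above) of how each render node in a 2×2 video wall could work out the rectangle of the global image it owns. The wall resolution and node grid are assumed values:

```c
/* A hypothetical sketch of the tiled-display idea: given the global
 * image size and a grid of render nodes, each node computes which
 * rectangle of the final image it is responsible for. */
#include <stdio.h>

typedef struct { int x, y, w, h; } Tile;

/* Assumed wall resolution and node grid, for illustration only */
#define WALL_W  3840
#define WALL_H  2160
#define COLS    2
#define ROWS    2

Tile tile_for_node(int node)
{
    Tile t;
    int col = node % COLS;
    int row = node / COLS;
    t.w = WALL_W / COLS;
    t.h = WALL_H / ROWS;
    t.x = col * t.w;   /* offset into the global image */
    t.y = row * t.h;
    return t;
}

int main(void)
{
    /* In a real system each node would call this with its own rank,
     * then load and render only the data visible inside its tile. */
    for (int node = 0; node < COLS * ROWS; node++) {
        Tile t = tile_for_node(node);
        printf("node %d renders %dx%d at (%d,%d)\n",
               node, t.w, t.h, t.x, t.y);
    }
    return 0;
}
```

Because each node touches only the data inside its tile, no single machine needs to hold the whole scene, which is what makes commodity hardware viable for very large displays.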
There is also a newer parallel rendering framework that seems to scale better than Chromium. I still don’t understand most of it, but the information is available here.
Industries
Many industries make use of HPC in their day-to-day operations or in research. Many of the applications deal with analysis, discovery and design. These industries include:
- Automotive
- Aerospace
- Financial
- Oil and Gas
- Animation
- Media
- Bioinformatics: Database management and collection of data in silico
- Computational Biology: Modelling and data-mining of biological data to infer new hypotheses
- Science and Engineering
- Semi-Conductor
- … …