What is Supercomputing?
Supercomputing solves extremely complex or data-intensive problems efficiently by concentrating the processing power of multiple computers working in parallel.
The term "supercomputing" refers to the processing of massively complex or data-laden problems using the concentrated compute resources of multiple computer systems working in parallel (i.e., a "supercomputer"). A supercomputer operates at the highest performance level attainable by any computer, typically measured in petaflops, or quadrillions of floating-point operations per second. Sample use cases include genomics and astronomical calculations.
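To make the petaflops measure concrete, here is a minimal illustrative sketch of how a cluster's theoretical peak performance is commonly estimated. All figures (node count, cores, clock speed, FLOPs per cycle) are assumed example values, not specifications of any HPE system:

```python
# Illustrative sketch: theoretical peak performance of a hypothetical cluster.
# The inputs below are made-up example numbers, not real system specs.

def peak_petaflops(nodes, cores_per_node, clock_ghz, flops_per_cycle):
    """Theoretical peak = nodes * cores * clock (Hz) * FLOPs per cycle."""
    flops = nodes * cores_per_node * (clock_ghz * 1e9) * flops_per_cycle
    return flops / 1e15  # convert FLOPS to petaFLOPS

# e.g., 1,000 nodes, 64 cores each, 2.5 GHz, 32 FLOPs/cycle (wide vector units)
print(round(peak_petaflops(1000, 64, 2.5, 32), 2))  # → 5.12
```

Real sustained performance (e.g., on the LINPACK benchmark) is typically well below this theoretical peak, but the calculation shows why scaling out nodes in parallel is what pushes systems into petaflop territory.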
Supercomputing enables problem solving and data analysis that would be simply impossible, too time-consuming, or too costly with standard computers, e.g., fluid dynamics calculations. Today, big data presents a compelling use case: a supercomputer can discover insights in vast troves of otherwise impenetrable information. High Performance Computing (HPC) offers a helpful variant, making it possible to focus compute resources on data analytics problems without the cost of a full-scale supercomputer.
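The limits of speeding up a problem by adding parallel compute are commonly described by Amdahl's law, a well-known result not stated in the original text but included here as an illustration. The sketch below shows how the serial fraction of a workload caps the speedup, no matter how many processors a cluster concentrates:

```python
# Illustrative sketch of Amdahl's law: speedup from parallelism is bounded
# by the fraction of work that must remain serial.

def amdahl_speedup(parallel_fraction, processors):
    """Speedup = 1 / (serial_fraction + parallel_fraction / processors)."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / processors)

# A workload that is 95% parallelizable tops out near 20x speedup,
# even on a 1,024-processor cluster:
print(round(amdahl_speedup(0.95, 1024), 1))  # → 19.6
```

This is why supercomputing workloads such as fluid dynamics are designed to keep the serial fraction as small as possible: the payoff from concentrating thousands of processors depends on it.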
HPE approaches supercomputing through a High Performance Computing (HPC) architecture. HPC makes it possible to overcome traditional cost barriers to supercomputing: you can choose how much compute power to concentrate in HPC clusters. Our HPC solutions empower innovation at any scale, building on purpose-built HPC systems and technologies, solutions, applications, and support services.
Featured HPC product families
HPE SGI 8600
HPE Apollo 6000 Gen10
HPE Integrity Servers
HPE Integrity MC990 X