Break through with massive compute
With High Performance Computing (HPC) from HPE, you can overcome the barriers to supercomputing and compete in increasingly demanding markets. When you are tasked with solving the world’s largest scientific, engineering and data analysis problems, you need the planet’s most powerful, most efficient machines.
Greater computing capabilities through energy efficiency
Energy efficiency has become a crucial requirement in the race to develop the next generation of high performance computing systems. Advances in energy-efficiency research make it possible to build more powerful HPC systems without a corresponding rise in power consumption.
Accelerate deep learning intelligence
HPE enables you to accelerate real-time insights and intelligence for deep learning through innovations in system design, a broad partner ecosystem, and HPE Pointnext expertise.
The world’s most advanced production supercomputer
The HPE SGI 8600 is the world’s most advanced production supercomputer. It has been architected for best performance, scale, and efficiency.
HPC for departments of all sizes
With HPE Apollo 10 Systems, even the smallest department can run deep learning and mixed workload HPC on industry-standard accelerated compute servers.
A range of HPC capabilities
HPE offers HPC solutions that can tackle a wide range of workloads at multiple levels of scale.
In a world where everything computes, you can harness compute to create insights from data and new business models from ideas. This is the genesis of the HPE Compute Experience, powered by Gen10, 3PAR, Nimble Storage, and Arista Networking.
HPE builds more of the most powerful supercomputers than any other company. To keep America the world leader in supercomputing, we are working to deliver exascale computing technologies to the world with exponentially higher performance and efficiency compared to today’s supercomputers.
Pushing the exascale frontier with the U.S. DoE PathForward programme
HPE received a U.S. Department of Energy (DoE) PathForward Programme award to develop an exascale prototype design. HPE’s expertise in HPC and AI, along with innovations in Memory-Driven Computing, is helping us overcome significant constraints in system architecture, component technology, energy efficiency, size, and cost.
Building a global chemical research supercomputer
BASF is collaborating with HPE to develop one of the world’s largest supercomputers for industrial chemical research, based on the new generation of HPE Apollo 6000 systems. The new supercomputer drives the digitalisation of BASF's worldwide research.
Insights into the universe from in-memory HPC
COSMOS, founded by Stephen Hawking and colleagues, leverages HPE Superdome Flex to analyse massive data sets and further our understanding of the universe.
Providing access to resources through HPC and big data
Pittsburgh Supercomputing Center (PSC) uses HPE Apollo 2000, HPE Integrity Superdome X and HPE ProLiant DL580 servers for its award-winning Bridges resource.
Ghent University in Belgium speeds research
The University’s HPE Apollo 6000 System supports bioinformatics, engineering and statistics research, while keeping costs down and enhancing operational efficiency.
Accelerating HPC with HPE Apollo 6000 Gen10 system
Bringing the advanced HPC capabilities of the HPE Apollo 6000 Gen10 System to market required a multi-faceted, collaborative engineering project. The exhaustive process meant striving for easier serviceability, faster deployment, lower power use, greater density and more.
These scalable high performance computing servers optimise performance and capacity for specific workloads, so you can power innovations and find solutions quickly and efficiently.
New! Solve complex, data-intensive HPC problems at unparalleled scale by leveraging 4 to 32 sockets and 1 to 48 TB of in-memory computing capacity in a single system.
The HPE SGI 8600 is a liquid-cooled, scalable, high-density clustered computer system architected for best performance, scale, and efficiency.
Tailor your system precisely to meet your most demanding HPC workload requirements, and take advantage of the exceptional price-performance of rack-scale, air-cooled density.
Run Hadoop and other big data analytics while maximising disk storage and implementing object storage with petabyte-scale data volume.
This high-density, scalable 2U server gives you greater performance and workload capacity than standard 1U servers – at a comparable cost.
Mixed workload HPC for even the smallest department, running on industry-standard accelerated compute servers.
Components delivering HPC
Building out an HPC environment requires a variety of components. Take a look at hardware and software offerings that can help you complete your system.
Bringing more HPC power to more industries
We’re making HPC available and affordable to business, scientific and academic communities of every size – everywhere.
Get more from high-performance computing
Have questions or need help getting started? Get in touch with us to assess your needs and move quickly to the HPC solution that’s right for you.
Innovate and Drive Results with Services for HPE Apollo
Meet HPE Apollo: Reinventing HPC and the Supercomputer
The Arctic University of Norway pushes hyperscale efficiency
Poland steps up its supercomputing game
Choosing the Right Infrastructure for Your Data-Driven Organisation
All-in on AI
What Is Supercomputing?