HPE Apollo Systems

Breakthrough performance with economies of power and cooling in less space.


Density-optimised for HPC, big data, and more

The HPE Apollo high-density server family is built for the highest levels of performance and efficiency. These rack-scale compute, storage, and networking solutions are designed for massive scale-out. With a tiered approach for organisations of all sizes, they suit big data analytics, object storage, and high-performance computing (HPC) workloads. HPE Apollo features water cooling that is far more efficient than air cooling, as well as "right-sized scaling" with 2X the compute density for workgroup and private cloud workloads.

New possibilities for the data-driven organisation

Add purpose-built compute to your infrastructure and accelerate time to value by reducing implementation time from months to days. Apply just the right amount of scalability, performance, and efficiency with a selection of hyperscale server systems optimised for specific workload types and sizes.

Scale-up and Scale-out Workload-Optimised Solutions


Optimise scale, performance, and cost-efficiency for each type of data and manage volume, velocity, and variety accordingly. Use HPE Apollo to provide just the capacity and performance you need with scale-up and scale-out, workload-optimised solutions.

High-density Economic Building Blocks


Enjoy significant capital and operating expense savings with high-density, economical building blocks for efficient implementation at scale. Achieve up to 30 servers and 3.6 petabytes of capacity in a 42U rack1 with the HPE Apollo 4500.
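As a rough sanity check, the rack-level figures line up under a plausible chassis layout. The per-chassis node and drive counts below are assumptions for illustration only, not published specifications:

```python
# Sanity check of the density claim in footnote 1: ten HPE Apollo 4530
# chassis with 8 TB HDDs filling one 42U rack. The per-chassis layout
# (3 server nodes x 15 LFF drives) is an assumed configuration.
SYSTEMS_PER_RACK = 10
NODES_PER_SYSTEM = 3   # assumed Apollo 4530 node count
DRIVES_PER_NODE = 15   # assumed LFF drives per node
HDD_TB = 8

servers = SYSTEMS_PER_RACK * NODES_PER_SYSTEM
capacity_pb = servers * DRIVES_PER_NODE * HDD_TB / 1000

print(servers)      # 30 servers per rack
print(capacity_pb)  # 3.6 PB per rack
```

Under these assumptions the arithmetic reproduces both headline numbers: 30 servers and 3.6 PB per 42U rack.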

Coming later in 2017: Extreme Performance in High Density


The HPE Apollo 6000 Gen10 System offers HPC with extreme performance in high density. We have taken the lead with a new approach: thinking beyond the server and designing a rack-level solution that delivers the right compute at the right economics.

Water-cooled supercomputing

The HPE SGI 8600 System is a liquid-cooled, tray-based, scalable, high-density clustered computer system designed from the ground up to run complex HPC workloads at petaflop speeds, scale seamlessly to thousands of nodes, and deliver leading energy efficiency.


- Leading energy efficiency, according to the Green500 list2

- Removal of system heat via data centre water3

- >10,000 connected nodes without external switches4

What's new in the portfolio


HPE Apollo 70: New ARM-based HPC platform

Disruptive ARM HPC processor technology with maximum memory bandwidth, familiar management and performance tools, and the density and scalability required for large HPC cluster deployments.

Coming Soon.


HPE Apollo 2000 multi-server platform

Double the density of traditional rack-mount systems, with firmware-level security – a bridge to scale-out server architecture for your HPC workloads.


HPE SGI 8600: Dense, clustered supercomputing

A scalable, high-density clustered supercomputer system with liquid cooling for substantial power efficiency and energy savings.5 Industry-standard processors, interconnect, and software provide maximum flexibility and performance.


HPE Apollo 6000 Gen10: Performance-optimised

A performance-optimised, air-cooled system that offers outstanding configuration flexibility, redundant power, simplified administration, and rack-scale efficiency that lowers total cost of ownership (TCO) for mass-scale HPC deployments.

Complete HPE Apollo portfolio

See all components across the entire HPE Apollo line of high-density servers to explore and compare details.

Technical specifications

Apollo 2000 Systems

- 1U and 2U chassis

- Up to 4 hot-pluggable server nodes

- DDR4-2666 memory, up to 1.5 TB across 16 DIMM slots

- Up to 24 SFF HDD/SSD or up to 16 NVMe SSD, plus two optional M.2 2280 drives

- Available with HPE Performance Software Suite and HPE Insight Cluster Management Utility (CMU)

Apollo 4000 Systems

- 2U and 4U chassis

- 28 LFF or 54 SFF hot-plug drives

- Available with HPE Performance Software Suite and HPE Insight Cluster Management Utility (CMU)

Apollo 6000 Systems

- Up to 2 processors per node, 14–28 cores, 2.5 GHz

- 4 SFF HDD/SSD, with optional 4 NVMe SSD and 2 internal M.2 drives

- DDR4-2666 memory, up to 1.5 TB across 16 DIMM slots

Healthcare and first wave genomics

Genetic sequencing techniques reveal links between genetics, health, and treatment. Learn how genomics technology affects patients, providers, and IT.

Contact us

Tap into our extended ecosystem of partners, integrators, and ISVs ready to deliver the right HPE Apollo system for your unique needs.


Video: Choosing the Right Infrastructure for Your Data-Driven Organisation


Learn how HPE can harness the power of your data to generate intelligent insights and innovative services to drive better business results.

Article: Meet HPE Apollo: Reinventing HPC and the Supercomputer


Administrators, engineers and executives are now tasked with solving some of the world’s most complex challenges revolving around advanced computations for science, business, education, pharmaceuticals and beyond.

Article: Arctic University Creates Efficient Data Centre with Apollo 8000


Institution aims to create the world’s most efficient data centre with the HPE Apollo 8000 System.

  1. Based on 10 HPE Apollo 4530 Systems with 8 TB HDDs.
  2. Top500.org’s Green500 list, June 2017.
  3. Based on typical operating conditions: data centre water at least 5°C cooler than data centre ambient air temperature. The HPE SGI 8600 is data centre “room neutral”: it exudes no heat into the data centre and adds no load to data centre air-cooling equipment.
  4. Claim: Scaling to >10,000 nodes without additional switches, using integrated switches and hypercube technology. Substantiation: HPE internal testing on SGI ICE XA hierarchical cluster auto-provisioning up to 10K+ nodes, June 2015. Calculation: go to https://www.nas.nasa.gov/hecc/resources/pleiades.html and add up the node counts stated by NASA under “Pleiades Node Detail” (Pleiades is the ICE system installed there): 2016 + 2088 + 5400 + 1936 = 11,440 nodes. NASA Pleiades, May 2017.
  5. Substantial savings in cooling costs with Closed-Loop Airflow, which ensures no air within the cell is mixed with data centre air. Substantiation: HPE internal engineering study “Liquid Cooling Value Proposition”, WWAS, February 2017.
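The node arithmetic in footnote 4 can be reproduced directly from the group counts NASA publishes for Pleiades:

```python
# Sum of the Pleiades node groups listed on NASA's public page,
# as cited in footnote 4; confirms the >10,000-node claim.
pleiades_node_groups = [2016, 2088, 5400, 1936]
total_nodes = sum(pleiades_node_groups)
print(total_nodes)  # 11440
```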