HPE Apollo Systems

Breakthrough performance with economies of power and cooling in less space.

Apollo high-density servers for HPC, Big Data and more

Density-optimized for HPC, Big Data and More

The HPE Apollo high-density server family is built for the highest levels of performance and efficiency. These rack-scale compute, storage, and networking solutions are designed for massive scale-out and, with a tiered approach for organizations of all sizes, suit big data analytics, object storage, and high-performance computing (HPC) workloads. HPE Apollo features water cooling that is far more efficient than air cooling, as well as "right-sized scaling" with 2X the compute density for workgroup and private cloud workloads.

New Possibilities for the Data-Driven Organization

Add purpose-built compute to your infrastructure and accelerate time to value by reducing implementation time from months to days. Apply just the right amount of scalability, performance, and efficiency with a selection of hyperscale server systems optimized for specific workload types and sizes.

Scale-up and Scale-out Workload Optimized Solutions

Optimize scale, performance and cost-efficiency for each type of data and manage volume, velocity and variety accordingly. Use HPE Apollo to provide just the capacity and performance you need with scale-up and scale-out, workload-optimized solutions.

High-density Economic Building Blocks

Enjoy significant capital and operating expense savings with high-density, economical building blocks for efficient implementations at scale. Achieve up to 30 servers and 3.6 petabytes of capacity in a 42U rack1 with the HPE Apollo 4500.
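The rack-density figure can be reconciled with footnote 1 by simple arithmetic. This is a rough sketch only: footnote 1 states 10 Apollo 4530 systems with 8 TB HDDs, while the 3 server nodes per chassis and 45 drive bays per chassis are assumptions introduced here to make the stated totals work out.

```python
# Sketch of the "30 servers / 3.6 PB per 42U rack" arithmetic.
chassis_per_rack = 10    # from footnote 1: 10 Apollo 4530 systems
nodes_per_chassis = 3    # assumption: 3-node Apollo 4530 chassis
drives_per_chassis = 45  # assumption: 15 LFF bays per node x 3 nodes
drive_tb = 8             # 8 TB HDDs, from footnote 1

servers = chassis_per_rack * nodes_per_chassis
capacity_pb = chassis_per_rack * drives_per_chassis * drive_tb / 1000

print(servers, "servers,", capacity_pb, "PB")  # 30 servers, 3.6 PB
```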

Coming later in 2017: Extreme Performance in High Density

The HPE Apollo 6000 Gen10 System offers HPC with extreme performance in high density. We have taken the lead with a new approach: thinking beyond the server and designing a rack-level solution that delivers the right compute at the right economics.

Water-cooled supercomputing

The HPE SGI 8600 System is a liquid-cooled, tray-based, high-density clustered computer system designed from the ground up to run complex high-performance computing (HPC) workloads at petaflop speeds, scale seamlessly to thousands of nodes, and deliver leading energy efficiency.

#1 in energy efficiency, according to the Green500 list2

100% removal of system heat via data center water3

1000s of connected nodes without external switches4

What's New in the Portfolio

HPE SGI 8600: Dense, Clustered Supercomputing

A scalable, high-density clustered supercomputer system with liquid cooling for substantial power efficiency and energy savings.5 Industry-standard processors, interconnect, and software provide maximum flexibility and performance.

HPE Apollo 6000 Gen10: Performance-Optimized

A performance-optimized, air-cooled system that offers outstanding configuration flexibility, redundant power, simplified administration, and rack-scale efficiency that lowers total cost of ownership (TCO) for mass-scale HPC deployments.

HPE Apollo Next Generation Sequencing Solution

HPE understands the HPC needs of next-generation sequencing (NGS) technologies that are rapidly transforming the industry. We have developed a solution to execute genome analytics faster, more flexibly, and more cost-effectively.

HPE Apollo 6500: High Density GPU Compute

Solve problems faster with up to 15 Tflops of single-precision performance per 2U node. The new HPE Apollo 6500 increases ROI by accelerating the performance of your workloads with up to eight GPUs or coprocessors for high-powered analysis.

Complete HPE Apollo Portfolio

See all components across the entire HPE Apollo line of high density servers to explore and compare details.

Healthcare and first wave genomics

Genetic sequencing techniques reveal links between genetics, health, and treatment. Learn how genomics technology affects patients, providers and IT.

Contact Us

Tap into our extended ecosystem of partners, integrators, and ISVs ready to deliver the right HPE Apollo system for your unique needs.

Resources

Video: Choosing the Right Infrastructure for Your Data-Driven Organization

Learn how HPE can harness the power of your data to generate intelligent insights and innovative services to drive better business results.

Article: Meet HPE Apollo: Reinventing HPC and the Supercomputer

Administrators, engineers and executives are now tasked with solving some of the world’s most complex challenges revolving around advanced computations for science, business, education, pharmaceuticals and beyond.

Article: UiT The Arctic University of Norway Pushes Hyperscale Efficiency

Institution aims to create the world’s most efficient data center with the HPE Apollo 8000 System.

  1. Based on 10 HPE Apollo 4530 Systems with 8 TB HDDs
  2. Top500.org’s Green500, June 2017.
  3. Based on typical operating conditions: data center water at least 5°C cooler than data center ambient air temperatures. The HPE SGI 8600 is data center "room neutral": it releases no heat into the data center and adds no load to data center air-cooling equipment.
  4. Claim: Scaling to >10,000 nodes without additional switches, using integrated switches and hypercube technology. Substantiation: HPE internal testing of SGI ICE XA hierarchical cluster auto-provisioning up to 10K+ nodes, June 2015. Calculation: go to https://www.nas.nasa.gov/hecc/resources/pleiades.html and add up the nodes stated by NASA on their public page under "Pleiades Node Detail" (Pleiades is the name of the ICE system installed there): 2016 + 2088 + 5400 + 1936 = 11,440 nodes. NASA Pleiades, May 2017.
  5. Substantial savings in cooling costs with Closed-Loop Airflow, which ensures no air within the cell is mixed with data center air. Substantiation: HPE internal engineering study "Liquid Cooling Value Proposition," WWAS, February 2017.