Do you need to rapidly transform massive data streams? The HPE Apollo 6500 System provides the tools and the confidence to deliver high performance computing (HPC) innovation. The system consists of three key elements: the HPE ProLiant XL270d Gen9 Server tray, the HPE Apollo d6500 Chassis, and the HPE Apollo 6000 Power Shelf. The XL270d Gen9 Server delivers up to 56 Tflops of single precision performance per server, with eight NVIDIA® Tesla® M40 GPUs and two Intel® Xeon® E5-2600 v4 processors in a 2U server. With a configurable internal PCIe Gen3 fabric, you can optimize the GPU topology to match your specific needs. High-bandwidth, low-latency networking is tightly coupled to the accelerators, allowing you to take full advantage of your network, and two x16 PCIe Gen3 slots support your choice of high speed fabrics. The Apollo 6500 System: your next accelerated computing solution.
- Solve problems faster with up to 56 Tflops of single precision performance per 2U node.
- Optimize accelerator configurations to match your workload.
- Communicate faster between nodes with two PCIe Gen3 x16 slots that enable your choice of high speed fabrics.
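The headline figure can be sanity-checked from the per-accelerator rating: the NVIDIA Tesla M40 is commonly rated at about 7 Tflops of single precision throughput, so eight per node yields the stated 56 Tflops. A minimal sketch (the ~7 Tflops per-GPU rating is an assumption taken from NVIDIA's published specifications, not from this document):

```python
# Sanity check of the per-node single precision figure.
# Assumes the commonly quoted ~7 Tflops SP rating per Tesla M40.
SP_TFLOPS_PER_M40 = 7.0
GPUS_PER_NODE = 8

node_tflops = SP_TFLOPS_PER_M40 * GPUS_PER_NODE
print(node_tflops)  # 56.0
```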
Flexible Configuration for the Most Demanding High-performance Computing Workloads
The HPE Apollo 6500 System supports up to eight 300 W GPUs or coprocessors, delivering increased performance.
For workloads optimized for high peer-to-peer communication among the accelerators, place four (4) GPUs on a single high speed PCIe switch, with both banks attached to one CPU for eight (8) GPUs per CPU.
For workloads requiring more CPU-to-GPU communication, choose our four (4) GPUs per CPU configuration.
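The two topology options above can be pictured as mappings from GPU index to host CPU socket. The sketch below is purely illustrative (not an HPE tool; the function name and the index-based mapping are assumptions for demonstration):

```python
# Illustrative model of the Apollo 6500's two PCIe topology options,
# expressed as a mapping from GPU index to CPU socket.

def gpu_to_cpu_map(gpus_per_cpu):
    """Map the eight GPU indices to CPU sockets for a chosen topology.

    gpus_per_cpu=8 -> 8:1 topology: all GPUs share one CPU's PCIe fabric,
                      maximizing peer-to-peer traffic among accelerators.
    gpus_per_cpu=4 -> 4:1 topology: four GPUs per socket, improving
                      CPU-to-GPU communication.
    """
    assert gpus_per_cpu in (4, 8), "the system offers 4:1 and 8:1 topologies"
    return {gpu: gpu // gpus_per_cpu for gpu in range(8)}

print(gpu_to_cpu_map(8))  # all eight GPUs on socket 0
print(gpu_to_cpu_map(4))  # GPUs 0-3 on socket 0, GPUs 4-7 on socket 1
```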
The HPE ProLiant XL270d Gen9 Server supports industry standard Intel® Xeon® E5-2600 v4 processors, 12 Gb/s SAS solid state drives (SSDs), and up to 1024 GB of DDR4 2400 MHz memory for blazing performance.
Up to 16 HPE DDR4 2400 MHz memory modules per HPE ProLiant XL270d Gen9 Server deliver faster performance with data-intensive application workloads.
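The two memory maximums above are consistent with each other: 16 DIMM slots reach 1024 GB only with 64 GB modules. The 64 GB module size is inferred from the stated limits, not given explicitly in the document:

```python
# The 1024 GB maximum implies 64 GB modules across 16 DIMM slots
# (64 GB per DIMM is inferred from the two stated maximums).
DIMM_SLOTS = 16
GB_PER_DIMM = 64
print(DIMM_SLOTS * GB_PER_DIMM)  # 1024
```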
High-bandwidth, Low-latency Networking Between Accelerator Nodes
The HPE Apollo 6500 System includes two low profile PCIe Gen3 x16 slots to enable your choice of high speed fabrics.
In the 8:1 GPU-to-CPU topology, networking attaches directly to the GPUs' PCIe Gen3 fabric for reduced latency between GPU nodes.
Supports GPUDirect with four (4) GPUs per HPE InfiniBand Adapter.
HPE Apollo Systems
Additional Server Resources
- Advanced Power Manager
- HPE Server Performance Benchmarks
- HPE Server Customer Case Studies
- Server Management
Server Support & Downloads