Power your CPU and GPU-accelerated compute nodes without I/O bottlenecks
Learn how to drive new efficiencies and reduce costs with HPC storage, and see how the Cray ClusterStor E1000 storage system compares with other vendors' approaches.
Download the Business Whitepaper

The confluence of simulation and AI changes everything

This confluence puts new demands on HPC storage because the input/output patterns of the two workloads could not be more different. And it is happening right now: a recent study by the independent analyst firm Intersect360 found that 63% of HPC users are already running machine learning programs. Hyperion Research forecasts that, at the current course and speed, HPC storage spending in public-sector organizations and enterprises will grow 57% faster than spending on HPC compute over the next three years.
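To make "57% faster" concrete, here is a minimal sketch of what that forecast implies over three years. The baseline growth rate is an assumption for illustration, not a figure from the Hyperion study; "57% faster" is read as a 1.57x higher annual growth rate.

```python
# Illustrative only: the 10% compute CAGR is an assumed baseline,
# not a number from Hyperion Research.
compute_cagr = 0.10                  # assumed annual growth of HPC compute spend
storage_cagr = compute_cagr * 1.57   # storage spend growing 57% faster

years = 3
compute_multiplier = (1 + compute_cagr) ** years
storage_multiplier = (1 + storage_cagr) ** years

print(f"Compute spend after {years} years: {compute_multiplier:.2f}x")
print(f"Storage spend after {years} years: {storage_multiplier:.2f}x")
```

Even a modest difference in annual growth compounds: under these assumed rates, storage spend grows roughly 1.55x in three years versus about 1.33x for compute.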

Spend less on HPC/AI and more on CPU/GPU compute

Seymour Cray once said, "Anyone can build a fast CPU. The trick is to build a fast system." When it comes to HPC and AI, anyone can build fast file storage. The trick is to build a file storage system that is fast but also cost-effective and scalable. We achieve this by embedding the leading parallel file systems into parallel storage products from HPE with cost effectiveness built in.

New Era of HPC Storage: The Best of Both Worlds

| | Traditional HPC cluster | New Era Converged | Traditional AI stack (POD) |
|---|---|---|---|
| Primary workload | Modeling & simulation | Both | Machine learning/deep learning |
| Compute node type | CPU nodes like HPE Apollo 2000 | Both | GPU nodes like HPE Apollo 6500 |
| Number of compute nodes | Hundreds to thousands | Hundreds to thousands | A few |
| Typical interconnect | InfiniBand | Both (plus HPE Slingshot) | Gigabit Ethernet |
| Primary I/O pattern | Write-intensive | Both | Read-intensive |
| Storage capacity measured in | Petabytes | Petabytes | Terabytes |
| Storage scalability in single namespace | Up to exabytes | Up to exabytes | Up to low double-digit petabytes |
| Typical storage | HDD-based parallel file system storage | Cray ClusterStor E1000, HPE Parallel File System Storage | All-flash enterprise NAS storage |
| Storage well suited for | Serving large files in sequential order at speeds of up to terabytes per second | Serving files of all sizes in both sequential and random order at speeds of up to terabytes per second | Serving files of all sizes in both sequential and random order at speeds of up to double-digit gigabytes per second |
| Price per terabyte | $ | $$ | $$$$$ |
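The "Primary I/O pattern" contrast is the crux of the convergence: simulation checkpoints stream large contiguous writes to a file, while machine learning training shuffles its dataset and issues many small reads at scattered offsets. A minimal sketch of the two access patterns as a storage system would see them (file name, sizes, and counts are arbitrary illustration values):

```python
import os
import random
import tempfile

CHUNK = 1 << 20  # 1 MiB

# Write-intensive, sequential: a simulation checkpoint streams large
# contiguous blocks into a single file.
path = os.path.join(tempfile.mkdtemp(), "checkpoint.dat")
with open(path, "wb") as f:
    for _ in range(16):  # 16 MiB written back-to-back
        f.write(os.urandom(CHUNK))

# Read-intensive, random: training over shuffled samples produces many
# small reads at scattered offsets in the same data.
size = os.path.getsize(path)
offsets = [random.randrange(0, size - 4096) for _ in range(64)]
samples = []
with open(path, "rb") as f:
    for off in offsets:
        f.seek(off)              # random seek per "sample"
        samples.append(f.read(4096))

print(f"wrote {size} bytes sequentially, read {len(samples)} random samples")
```

HDD-based storage handles the first pattern well but struggles with the second, which is why a converged system has to serve both sequential and random I/O at speed.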

Scalable, cost-effective performance

Designed to deliver performance and scalability far beyond scale-out NAS, our storage systems use leading parallel file systems and do not require capacity-based licensing of the file system. HPE Pointnext provides support for the complete storage system, both hardware and software.

| | HPE Parallel File System Storage | Cray ClusterStor E1000 Storage System |
|---|---|---|
| Parallel file system | IBM Spectrum Scale Erasure Code Edition | Lustre |
| Typically attached to | Clusters of HPE Apollo systems or HPE ProLiant DL rack servers with HPE InfiniBand HDR/Ethernet 200 Gb adapters | HPE Cray EX supercomputers or very large clusters of HPE Apollo systems and HPE ProLiant DL servers |
| Number of compute nodes | Tens to a few hundred | Several hundred to thousands |
| Type of compute nodes | GPU and/or CPU | GPU and/or CPU |
| Connected with | InfiniBand HDR or 100/200 GbE | HPE Slingshot, InfiniBand HDR, or 100/200 GbE |
| Set of enterprise-grade storage features | Comprehensive | Basic |
| Protocol support | POSIX, NFS, SMB, HDFS, Object | POSIX |
| Scale-out typically in | Terabytes (starts at 27 TB) | Petabytes (starts at one PB) |
| Best suited for HPC and AI in | Manufacturing, healthcare and life sciences, financial services, high tech, pharma, telco, retail, and for home directories in organizations listed in the right column | Research laboratories and universities, defense and intelligence agencies, space agencies, climate research, weather forecasting, and geosciences, including oil and gas |

Parallel storage solutions for the new era

Whether you prefer IBM Spectrum Scale or Lustre, we have the right parallel storage system for you. And with HPE Data Management Framework (DMF) V7 you can move files between them, or out of them, for data protection or long-term archiving.

Cray ClusterStor E1000 Storage System

This storage system embeds Lustre, the leading parallel file system for scientific and federal use cases. It is ideal for attachment via HPE Slingshot, InfiniBand HDR, or 100/200 GbE to HPE Cray EX supercomputers and large clusters of HPE Apollo systems or HPE ProLiant DL rack servers running modeling and simulation, AI, or high-performance data analytics workloads. HPE Pointnext Services supports both hardware and software.

HPE Parallel File System Storage

This storage system embeds IBM Spectrum Scale, the leading parallel file system for enterprises. It is ideal for attachment via InfiniBand HDR or 100/200 GbE to clusters of HPE Apollo systems or HPE ProLiant DL rack servers running modeling and simulation, AI, or high-performance data analytics workloads. Storage servers are cost-effective HPE ProLiant DL rack servers, and HPE Pointnext Services supports both hardware and software.

HPE Data Management Framework (DMF)

Optimize HPC storage resources by enabling a hierarchical, tiered storage management architecture. The HPE Data Management Framework optimizes storage resource utilization for lower TCO and streamlines data workflows with time-proven, automated data movement across the full storage hierarchy, now with native support for IBM Spectrum Scale in addition to Lustre and HPE XFS.
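As a conceptual illustration of what a tiered storage policy does, the sketch below moves files that have not been accessed recently from a fast "hot" tier to a cheaper archive tier. This is generic Python and in no way DMF's actual interface or policy language; the function name and parameters are invented for the example.

```python
import os
import shutil
import time

def tier_old_files(hot_dir, archive_dir, max_age_seconds):
    """Move files untouched for longer than max_age_seconds from the hot
    (fast, expensive) tier to the archive (cheap, capacious) tier.

    Generic sketch of age-based hierarchical storage management --
    NOT the HPE DMF API, just the idea behind automated tiering."""
    os.makedirs(archive_dir, exist_ok=True)
    moved = []
    now = time.time()
    for name in sorted(os.listdir(hot_dir)):
        src = os.path.join(hot_dir, name)
        # Only regular files whose last access time is older than the cutoff.
        if os.path.isfile(src) and now - os.path.getatime(src) > max_age_seconds:
            shutil.move(src, os.path.join(archive_dir, name))
            moved.append(name)
    return moved
```

A real hierarchical storage manager adds what this sketch omits: policy engines, metadata catalogs so archived files remain visible in the namespace, and transparent recall when a tiered file is reopened.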