See the 10 reasons to choose HPC & AI Storage from HPE
Learn how to drive new efficiencies and reduce costs with HPC storage, and see how the Cray ClusterStor E1000 storage system compares with other vendors' approaches.
Download the Business Whitepaper

AI and HPC converged for your storage needs

This convergence puts new demands on HPC storage, because the input/output patterns of the two workloads could not be more different. And it is happening right now: a recent study by the independent analyst firm Intersect360 found that 66% of HPC users are already running machine learning programs. Hyperion Research forecasts that, at the current course and speed, HPC storage spending in public-sector organizations and enterprises will grow 57% faster than spending on HPC compute over the next three years.

HPE Storage: bringing HPC and AI together for scaled performance

|                                          | Traditional HPC cluster with parallel file system | New Era Converged | Traditional AI POD with NFS storage |
|------------------------------------------|---------------------------------------------------|-------------------|-------------------------------------|
| Primary workload                         | Modeling & simulation | Both (plus Large Language Models) | Machine Learning/Deep Learning |
| Compute node type                        | Dense CPU nodes like HPE Apollo 2000 or HPE Cray XD2000 | Both | Dense GPU nodes like HPE Apollo 6500 or HPE Cray XD6500 |
| Number of compute nodes                  | Hundreds to thousands | Hundreds to thousands | Tens |
| Typical interconnect                     | InfiniBand | Both (plus HPE Slingshot) | Gigabit Ethernet |
| Primary I/O pattern                      | Write-intensive | Both | Read-intensive |
| Storage capacity measured in             | Petabytes | Petabytes | Terabytes |
| Storage scalability in single namespace  | Up to exabytes | Up to exabytes | Up to petabytes |
| Typical storage                          | HDD-based parallel file system storage | Cray ClusterStor E1000 Storage Systems | All-flash enterprise NAS/NFS storage |
| Storage well suited for                  | Serving large files in sequential order at speeds of up to terabytes per second | Serving files of all sizes in both sequential and random order at speeds of up to terabytes per second | Serving files of all sizes in both sequential and random order at speeds of up to double-digit gigabytes per second |
| Optimized for                            | Writes (checkpoint & restart) | Both | Reads (ingest & load) |
| Storage architecture                     | Diskless compute nodes (for optimal price/performance) | Diskless compute nodes with high-performance shared file system | Local SSD in compute nodes (often “stranded” performance) |
| Storage created and supported by         | Compute system vendor | Compute system vendor | 3rd-party storage vendor |
| Price per terabyte                       | $ | $$ | $$$$$ |
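The contrast between write-intensive checkpoint & restart and read-intensive ingest & load in the table above comes down to how the two workloads touch the file system. The minimal Python sketch below is purely illustrative; the paths, file layout, and batching logic are hypothetical and not tied to any HPE product.

```python
# Illustrative sketch only: contrasts the two I/O patterns from the table above.
# All paths and sizes are hypothetical, not tied to any HPE product.
import os
import random

CHECKPOINT_DIR = "/scratch/checkpoints"   # hypothetical parallel-file-system path
DATASET_DIR = "/scratch/datasets/train"   # hypothetical training-data path


def write_checkpoint(step: int, state: bytes) -> None:
    """Modeling & simulation: periodic, write-intensive bursts (checkpoint & restart),
    typically one large sequential write per step."""
    path = os.path.join(CHECKPOINT_DIR, f"step_{step:06d}.ckpt")
    with open(path, "wb") as f:
        f.write(state)  # one long sequential write


def read_training_batch(file_names: list[str], batch_size: int) -> list[bytes]:
    """ML/DL training: many smaller, read-intensive accesses in random order
    (ingest & load), repeated every epoch."""
    batch = []
    for name in random.sample(file_names, batch_size):  # random order stresses reads
        with open(os.path.join(DATASET_DIR, name), "rb") as f:
            batch.append(f.read())
    return batch
```

A converged cluster has to serve both access patterns from the same namespace, which is the scenario the middle column of the table describes.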

Parallel storage for the new era

Whether you are running large-scale modeling and simulation or large AI models, we have the right storage system for you.

Cray ClusterStor E1000 Storage System

This storage system is optimized both for large-scale modeling and simulation and for large AI models running on HPE Cray XD supercomputers or HPE ProLiant compute clusters. The world’s largest POSIX-compliant file system – the Orion file system at Oak Ridge National Laboratory, which supports a supercomputer with 37,632 GPUs – is built on it.

HPE Data Management Framework (DMF)

Optimize HPC & AI data management with a hierarchical, tiered data management architecture. The HPE Data Management Framework improves storage resource utilization for lower TCO and streamlines data workflows with time-proven, automated data movement across the full storage hierarchy – now with support for IBM Spectrum Scale and NFS in addition to Lustre.
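The tiering idea behind such a framework can be pictured with a simple age-based policy that demotes cold data from a fast tier to a capacity tier. The sketch below is conceptual only: it does not use the HPE DMF interface, and all paths, thresholds, and function names are hypothetical.

```python
# Conceptual sketch of hierarchical, tiered data movement using an age-based policy.
# This is NOT the HPE DMF API; every name and threshold here is hypothetical.
import os
import shutil
import time

FAST_TIER = "/flash/project"            # hypothetical high-performance tier
CAPACITY_TIER = "/capacity/project"     # hypothetical lower-cost capacity tier
COLD_AFTER_SECONDS = 30 * 24 * 3600     # files untouched for ~30 days count as cold


def migrate_cold_files() -> None:
    """Move files that have not been accessed recently from the fast tier to the
    capacity tier, freeing expensive flash capacity for active data."""
    now = time.time()
    for root, _dirs, files in os.walk(FAST_TIER):
        for name in files:
            src = os.path.join(root, name)
            if now - os.path.getatime(src) > COLD_AFTER_SECONDS:
                dst = os.path.join(CAPACITY_TIER, os.path.relpath(src, FAST_TIER))
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)  # a real HSM would stub and recall transparently
```

A production data management framework adds transparent recall, metadata tracking, and policy engines on top of this basic demotion step.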

Featured resources

Technical White Paper

This technical white paper provides a first introduction to and overview of the Cray ClusterStor E1000 Storage System.

Customer Testimonial

In this presentation, Oak Ridge National Laboratory provides an overview of the “Orion” file system, which is built on Cray ClusterStor E1000 Storage Systems.