AI and HPC converged for your storage needs
This convergence is putting new demands on HPC storage because the input/output patterns of the two workloads could hardly be more different, and it is happening right now. A recent study by the independent analyst firm Intersect360 found that 66% of HPC users are already running machine learning programs today. Hyperion Research forecasts that, at the current course and speed, HPC storage spending in public sector organizations and enterprises will grow 57% faster than spending on HPC compute over the next three years.
HPE Storage: bringing HPC and AI together for scaled performance
| | Traditional HPC cluster | New Era Converged | Traditional AI POD |
|---|---|---|---|
| Primary workload | Modeling & simulation | Both | Machine learning / deep learning |
| Compute node type | Dense CPU nodes like HPE Apollo 2000 | Both | Dense GPU nodes like HPE Apollo 6500 |
| Number of compute nodes | Hundreds to thousands | Hundreds to thousands | Tens |
| Typical interconnect | InfiniBand | Both | Gigabit Ethernet |
| Primary I/O pattern | Write-intensive | Both | Read-intensive |
| Storage capacity measured in | Petabytes | Petabytes | Terabytes |
| Storage scalability in single namespace | Up to exabytes | Up to exabytes | Up to petabytes |
| Typical storage | HDD-based parallel file system storage | Cray ClusterStor E1000 | All-flash enterprise NAS/NFS storage |
| Storage well suited for | Serving large files in sequential order at up to terabytes per second | Serving files of all sizes in both sequential and random order at up to terabytes per second | Serving files of all sizes in both sequential and random order at up to double-digit gigabytes per second |
| Optimized for | Writes | Both | Reads |
| Storage architecture | Diskless compute nodes | Diskless compute nodes | Local SSDs in compute nodes |
| Storage created and supported by | Compute system vendor | Compute system vendor | Third-party storage vendor |
| Price per terabyte | $ | $$ | $$$$$ |
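The contrast in the I/O pattern rows of the table is easy to see in code. The following minimal Python sketch (the file paths and sizes are hypothetical, chosen only for illustration) contrasts the simulation-style pattern of one large sequential checkpoint write with the machine learning pattern of many smaller reads issued in random order, which is why converged storage has to serve both well.

```python
import os
import random

# Hypothetical paths on a shared parallel file system; adjust for your site.
CHECKPOINT_PATH = "/lus/scratch/checkpoint.dat"
DATASET_DIR = "/lus/scratch/training_set"

def write_checkpoint(state: bytes, path: str = CHECKPOINT_PATH) -> None:
    """Modeling & simulation pattern: one large sequential write
    (a checkpoint dump), which favors write-optimized storage."""
    with open(path, "wb") as f:
        f.write(state)

def training_epoch(paths: list[str]) -> int:
    """Machine learning pattern: many smaller reads in random order
    (shuffled samples), which favors read-optimized storage."""
    random.shuffle(paths)  # each epoch visits the samples in a new order
    total = 0
    for p in paths:
        with open(p, "rb") as f:
            total += len(f.read())
    return total

if __name__ == "__main__":
    write_checkpoint(os.urandom(64 * 1024 * 1024))  # 64 MiB sequential write
    samples = [os.path.join(DATASET_DIR, n) for n in os.listdir(DATASET_DIR)]
    print(f"read {training_epoch(samples)} bytes in random order")
```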
Parallel storage for the new era
Whether you are running large-scale modeling and simulation or large AI models, we have the right storage system for you.
Cray ClusterStor E1000 Storage System
This storage system is optimized for both large-scale modeling and simulation and large AI models running on XD supercomputers or HPE ProLiant compute clusters. The world's largest POSIX-compliant file system, the Orion file system at Oak Ridge National Laboratory, which supports a supercomputer with 37,632 GPUs, is built on it.
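Because ClusterStor E1000 systems run the Lustre parallel file system (Orion is Lustre-based) and the namespace is POSIX-compliant, applications use ordinary file I/O while controlling how a file is striped across object storage targets for parallel bandwidth. A minimal sketch, assuming a Lustre client mount at a hypothetical /lus path:

```python
import subprocess

# Hypothetical path under a Lustre client mount; substitute your own.
TARGET = "/lus/scratch/big_output.dat"

# `lfs setstripe -c -1` creates the file striped across all available OSTs,
# so a single large file can be written at the aggregate bandwidth of many
# storage targets. The striping must be set before data is written.
subprocess.run(["lfs", "setstripe", "-c", "-1", TARGET], check=True)

# Because the file system is POSIX-compliant, standard file I/O just works:
chunk = b"\0" * (1 << 20)       # 1 MiB buffer
with open(TARGET, "wb") as f:
    for _ in range(1024):       # 1 GiB total, written sequentially
        f.write(chunk)
```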
HPE Data Management Framework (DMF)
Optimize HPC and AI data management by enabling a hierarchical, tiered data management architecture. The HPE Data Management Framework optimizes storage resource utilization for lower TCO and streamlines data workflows with time-proven, automated data movement across the full storage hierarchy. Now with support for IBM Spectrum Scale and NFS in addition to Lustre.
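To make the idea of hierarchical, tiered data management concrete, here is a minimal Python sketch of the kind of age-based migration policy such a framework automates. The tier paths and threshold are hypothetical, and DMF itself does considerably more (transparent recall, metadata stubs, tape integration), so treat this as a conceptual illustration rather than the product's actual interface.

```python
import os
import shutil
import time

# Hypothetical tiers; a real deployment would name its own mount points.
HOT_TIER = "/lus/flash"          # fast parallel file system tier
COLD_TIER = "/archive/capacity"  # cheaper capacity tier
AGE_LIMIT = 30 * 24 * 3600       # migrate files idle for 30+ days

def migrate_cold_files() -> None:
    """Walk the hot tier and move files not accessed recently to the
    cold tier, preserving their relative paths. This frees expensive
    hot-tier capacity for active data."""
    now = time.time()
    for root, _dirs, files in os.walk(HOT_TIER):
        for name in files:
            src = os.path.join(root, name)
            if now - os.path.getatime(src) > AGE_LIMIT:
                dst = os.path.join(COLD_TIER, os.path.relpath(src, HOT_TIER))
                os.makedirs(os.path.dirname(dst), exist_ok=True)
                shutil.move(src, dst)

if __name__ == "__main__":
    migrate_cold_files()
```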
Featured resources
Technical White Paper
This technical white paper provides a first introduction to the Cray ClusterStor E1000 Storage System.
Customer Testimonial
In this presentation, Oak Ridge National Laboratory provides an overview of the “Orion” file system, which is built on Cray ClusterStor E1000 Storage Systems.