Introducing ClusterStor E1000 storage for converged workloads

OCTOBER 30, 2019 • PRESS RELEASE

IN THIS ARTICLE

  • Cray ClusterStor E1000 system is an entirely new parallel storage platform for the Exascale Era
  • Addresses explosive data growth and the need to access it at unprecedented speed, with an optimal balance of storage performance, efficiency, and scalability
  • Chosen by the DOE for the first three US exascale systems

New HPC parallel storage system delivers massive scalability and performance to power data-driven workloads

Global supercomputer leader Cray, a Hewlett Packard Enterprise company (NYSE: HPE), today unveiled its Cray ClusterStor E1000 system, an entirely new parallel storage platform for the Exascale Era. ClusterStor E1000 addresses the explosive growth of data from converged workloads and the need to access that data at unprecedented speed by offering an optimal balance of storage performance, efficiency, and scalability, effectively eliminating job pipeline congestion caused by I/O bottlenecks. The next-generation global file storage system has already been selected by the US Department of Energy (DOE) for use at the Argonne Leadership Computing Facility, Oak Ridge National Laboratory, and Lawrence Livermore National Laboratory, where the first three US exascale supercomputers (Aurora, Frontier, and El Capitan, respectively) will be housed.

With the introduction of the ClusterStor E1000 storage system, Cray has completed the re-architecture of its end-to-end infrastructure portfolio, which encompasses Cray Shasta supercomputers, the Cray Slingshot interconnect, and the Cray software platform. With Cray’s next-generation end-to-end supercomputing architecture, available for any data center environment, customers around the world can unleash the full potential of their data.

“To handle the massive growth in data that corporations worldwide are dealing with in their digital transformations, a completely new approach to storage is required,” said Peter Ungaro, president and CEO of Cray, a Hewlett Packard Enterprise company. “Cray’s new storage platform is a comprehensive rethinking of what high performance storage means for the Exascale Era. The intelligent software and hardware design of ClusterStor E1000 orchestrates the data flow with the workflow – that’s something no other solution on the market can do.”

As the external high performance storage system for the first three US exascale systems, Cray ClusterStor E1000 will provide more than 1.3 exabytes of storage across the three systems combined. The National Energy Research Scientific Computing Center (NERSC) has also selected ClusterStor E1000 for what will be the industry’s first all-NVMe parallel file system at a scale of 30 petabytes of usable capacity.

“NERSC will deploy the new ClusterStor E1000 on Perlmutter as our fast all-flash storage tier, which will be capable of over four terabytes per second write bandwidth. This architecture will support our diverse workloads and research disciplines,” said NERSC Director Sudip Dosanjh. “Because this file system will be the first all-NVMe file system deployed at a scale of 30 petabytes usable capacity, extensive quantitative analysis was undertaken by NERSC to determine the optimal architecture to support the workflows our researchers and scientists use across biology, environment, chemistry, nuclear physics, fusion energy, plasma physics and computing research.”

Recognizing the data access challenges presented by the Exascale Era, Cray’s ClusterStor E1000 enables organizations to achieve their research missions and business objectives faster by offering:

  • Unprecedented storage performance: ClusterStor E1000 systems can deliver up to 1.6 terabytes per second of bandwidth and up to 50 million I/O operations per second per rack – more than double the throughput of other parallel storage systems on the market today.

  • Maximum performance efficiency: New purpose-engineered, end-to-end PCIe 4.0 storage controllers serve the full performance of the underlying storage media to the compute nodes. New intelligent Cray software, ClusterStor Data Services, lets customers align the data flow with their specific workflow, placing application data on the right storage media (SSD pool or HDD pool) in the file system at the right time; a minimal sketch of this kind of pool-based placement follows this list.

  • Massive scalability: An entry-level system starts at 30 gigabytes per second of bandwidth and less than 60 terabytes of usable capacity. Customers can start at the size dictated by their current needs and scale as those needs grow, with maximum architectural headroom for future growth. The ClusterStor E1000 storage system can connect to any HPC compute system that supports high-speed networks such as 200 Gbps Cray Slingshot, InfiniBand EDR/HDR, and 100/200 Gbps Ethernet.
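ClusterStor Data Services is proprietary Cray software, and this release does not describe its interface. ClusterStor systems are, however, built on the Lustre parallel file system, where this style of SSD/HDD tiering is typically expressed through OST pools. The Python sketch below illustrates the general idea using the standard Lustre lfs client commands; the pool names (flash, disk) and the paths are hypothetical, site-specific values.

    import subprocess

    # Hypothetical Lustre OST pool names; real pool names are site-specific.
    FLASH_POOL = "flash"  # pool of NVMe SSD storage targets
    DISK_POOL = "disk"    # pool of HDD storage targets

    def set_default_pool(directory, pool):
        # New files created under `directory` will be striped across OSTs
        # belonging to `pool` (standard Lustre layout inheritance).
        subprocess.run(["lfs", "setstripe", "--pool", pool, directory], check=True)

    def migrate_to_pool(path, pool):
        # Rewrite an existing file's data onto OSTs in `pool`, keeping the
        # same path; the file should not be in active use during migration.
        subprocess.run(["lfs", "migrate", "--pool", pool, path], check=True)

    # Stage a job's scratch directory on the SSD pool before the run...
    set_default_pool("/lustre/scratch/job42", FLASH_POOL)
    # ...then demote its results to the HDD pool after the job completes.
    migrate_to_pool("/lustre/scratch/job42/results.h5", DISK_POOL)

In the actual product, this data movement is orchestrated automatically by ClusterStor Data Services in step with the workflow rather than scripted by hand; the sketch only shows the underlying placement primitive.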

“Cray’s Shasta architecture substantially expands the company’s addressable market to include HPC simulation, AI, enterprise analytics and cloud computing. ClusterStor E1000 is an integral part of this strategy,” said Steve Conway, senior vice president of research at Hyperion Research. “This new edition of the proven ClusterStor solution is designed to enable leading enterprises to consolidate their AI, HPC and High Performance Data Analysis stacks, efficiently and easily. The fast-growing contingent of enterprises that are adopting HPC now have the cost-effective option to acquire a unified Cray Shasta-Slingshot-ClusterStor infrastructure.”

About Hewlett Packard Enterprise

Hewlett Packard Enterprise is the global edge-to-cloud platform-as-a-service company that helps organizations accelerate outcomes by unlocking value from all of their data, everywhere. Built on decades of reimagining the future and innovating to advance the way we live and work, HPE delivers unique, open and intelligent technology solutions, with a consistent experience across all clouds and edges, to help customers develop new business models, engage in new ways, and increase operational performance. For more information, visit: www.hpe.com.

This press release was originally published on cray.com and has been updated and republished here in HPE’s Newsroom.
