HPE Message Passing Interface (MPI) is an MPI development environment for building and optimizing high-performance computing (HPC) applications. It combines optimized software libraries, runtime tools, and a scalable development environment to help customers tune and accelerate compute-intensive applications running on any HPE Linux-based cluster.

What's new

  • Support for HPE Apollo 35 systems
  • Faster MPI performance on systems with NVIDIA® GPUs using Mellanox® InfiniBand remote direct memory access (RDMA)
  • A socket-splitting option that automatically distributes processes across CPU sockets instead of filling one socket first (see the placement sketch after this list)
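
To observe the effect of a placement policy such as socket splitting, each rank can report which CPU and socket it is running on. The following is a minimal, generic sketch using standard MPI plus Linux-specific facilities (sched_getcpu() and the sysfs topology files); it is not an HPE-specific interface, and the socket-splitting option itself is enabled through HPE MPI's runtime settings rather than in application code.

    #define _GNU_SOURCE
    #include <mpi.h>
    #include <sched.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int cpu = sched_getcpu();   /* CPU this rank is currently on */

        /* On Linux, sysfs reports which socket (package) owns a CPU. */
        int socket = -1;
        char path[128];
        snprintf(path, sizeof path,
                 "/sys/devices/system/cpu/cpu%d/topology/physical_package_id",
                 cpu);
        FILE *f = fopen(path, "r");
        if (f) {
            if (fscanf(f, "%d", &socket) != 1)
                socket = -1;
            fclose(f);
        }

        printf("rank %d -> cpu %d (socket %d)\n", rank, cpu, socket);

        MPI_Finalize();
        return 0;
    }

With socket splitting enabled, consecutive ranks should report alternating socket IDs instead of filling socket 0 first.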

Features

Customized MPI Library

The HPE Message Passing Interface (MPI) includes an MPI library tuned to take full advantage of the underlying server infrastructure.

It supports most major interconnects and fabrics, even across multiple generations, to efficiently manage MPI traffic for improved performance.
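
Applications use the library through the standard MPI interface, so no HPE-specific calls are required. The ring exchange below is a generic sketch of such code:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Pass a token around a ring; the MPI library manages the
           underlying transport (InfiniBand, Omni-Path, TCP/IP, ...)
           at run time. */
        int token = rank;
        int next  = (rank + 1) % size;
        int prev  = (rank + size - 1) % size;

        MPI_Sendrecv_replace(&token, 1, MPI_INT, next, 0, prev, 0,
                             MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        printf("rank %d received token %d from rank %d\n",
               rank, token, prev);

        MPI_Finalize();
        return 0;
    }

Built with the usual wrapper compiler (for example, mpicc ring.c -o ring) and launched with mpirun, the same binary can run over whichever supported fabric the library selects.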

Tune MPI Application Runtime Performance

The HPE Message Passing Interface (MPI) boosts the runtime performance of any MPI application without recompiling your code. Supported third-party MPI libraries include Cray MPI, Intel MPI, IBM Spectrum MPI, Open MPI, Mellanox X-MPI, MPICH, and MVAPICH.

Improves job management with optimized job placement and prevention of MPI process migration.
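
For context, pinning a process to a core is the basic mechanism behind migration prevention. The hedged sketch below uses the Linux sched_setaffinity() call with a hypothetical one-rank-per-core layout; HPE MPI's runtime performs equivalent placement on the application's behalf.

    #define _GNU_SOURCE
    #include <mpi.h>
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Pin this rank to a single core (hypothetical one-rank-per-core
           layout) so the kernel cannot migrate it between cores. */
        long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(rank % ncpus, &set);

        if (sched_setaffinity(0, sizeof set, &set) != 0)
            perror("sched_setaffinity");
        else
            printf("rank %d pinned to cpu %ld\n", rank, rank % ncpus);

        MPI_Finalize();
        return 0;
    }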

Includes profiling tools that identify performance bottlenecks and load imbalances in MPI applications, and that guide thread placement to improve application performance.
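
Profiling tools of this kind typically build on the MPI standard's PMPI profiling interface. The sketch below is a generic illustration of that mechanism, not HPE's tooling: it wraps MPI_Send to measure time spent sending, and is linked with the application without recompiling the application's own sources.

    #include <mpi.h>
    #include <stdio.h>

    /* Time every MPI_Send through the MPI standard's PMPI profiling
       interface; link this file with the application. */

    static double send_time  = 0.0;
    static long   send_calls = 0;

    int MPI_Send(const void *buf, int count, MPI_Datatype type,
                 int dest, int tag, MPI_Comm comm)
    {
        double t0 = MPI_Wtime();
        int rc = PMPI_Send(buf, count, type, dest, tag, comm);
        send_time += MPI_Wtime() - t0;
        send_calls++;
        return rc;
    }

    int MPI_Finalize(void)
    {
        int rank;
        PMPI_Comm_rank(MPI_COMM_WORLD, &rank);
        fprintf(stderr, "rank %d: %ld MPI_Send calls, %.6f s total\n",
                rank, send_calls, send_time);
        return PMPI_Finalize();
    }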

1. Includes support for multi-rail Intel® Omni-Path and Mellanox® InfiniBand, HPE Superdome Flex Grid, and TCP/IP.

ARM is a registered trademark of ARM Limited. Intel is a trademark of Intel Corporation in the U.S. and other countries. Linux is the registered trademark of Linus Torvalds in the U.S. and other countries. All other third-party trademarks are the property of their respective owners.