Hyperscale

What is hyperscale?

Hyperscale refers to the complete mix of hardware and facilities that can scale a distributed computing environment up to thousands of servers. As its name implies, hyperscale is all about achieving massive scale in computing – typically for Big Data or cloud computing. Hyperscale infrastructure is designed for horizontal scalability and delivers high levels of performance, throughput and redundancy, enabling fault tolerance and high availability. Hyperscale computing often relies on massively scalable server architectures and virtual networking.
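To make the idea of horizontal scalability concrete, here is a minimal, purely illustrative sketch (the node names and routing scheme are hypothetical, not any HPE product behaviour): capacity grows by adding commodity servers to a pool, and work is spread across the whole pool rather than concentrated on one larger machine.

```python
import hashlib

class Cluster:
    """Toy model of a horizontally scaled pool of servers."""

    def __init__(self, nodes):
        self.nodes = list(nodes)

    def scale_out(self, node):
        # Horizontal scaling: add another server to the pool
        # instead of upgrading an existing one (vertical scaling).
        self.nodes.append(node)

    def route(self, key):
        # Hash-based routing spreads requests evenly across all nodes;
        # real systems use schemes such as consistent hashing.
        digest = int(hashlib.sha256(key.encode()).hexdigest(), 16)
        return self.nodes[digest % len(self.nodes)]

cluster = Cluster(["node-1", "node-2"])
cluster.scale_out("node-3")          # capacity grows linearly with node count
server = cluster.route("user-42")    # always one of the pooled nodes
```

Fault tolerance follows the same pattern: because no single node is special, a failed node can be dropped from the pool and its share of requests redistributed among the survivors.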

Why Hyperscale?

There are many reasons why an organisation might adopt hyperscale computing. Hyperscale may offer the best, or only, way to realise a specific business goal like providing cloud computing services. Generally, though, hyperscale solutions deliver the most cost-effective approach to addressing a demanding set of requirements. For example, a big data analytics project might be most economically addressed through the scale and computing density available in hyperscale.

Related HPE Solutions, Products or Services

HPE Hyperscale

The HPE Apollo high-density server family is the HPE solution for hyperscale computing. Each Apollo high-density server is built for the highest levels of performance and efficiency. The density-optimised Apollo family enables organisations to achieve hyperscale within relatively small physical facilities, and it offers a tiered approach to hyperscale.

Complete HPE Apollo portfolio

See all components across the entire HPE Apollo line of high-density servers to explore and compare details.

HPE Apollo 6000 Chassis

A performance-optimised, air-cooled solution for getting the most out of your infrastructure – and your investments.

  • 20 percent more performance for single-threaded applications
  • 60 percent less space than a computing blade
  • US$3 million in TCO savings over three years with 1,000 servers


Apollo 4000 Systems for Big Data and HPC Storage

Purpose-built to service Hadoop® and other big data analytics and object storage systems.

  • Maximise disk density with 68 LFF disks in a 4U form factor
  • Hadoop-based data mining and NoSQL-based analytics
  • Implement object storage with petabyte-scale data volumes


HPE Apollo 2000 System

A bridge to scale-out architecture for traditional data centres, the HPE Apollo 2000 is highly dense, packing substantial performance and workload capacity into a small space.

  • Four independent hot-pluggable HPE Apollo 2000 servers in a single 2U
  • 2x the performance density of standard 1U servers at a comparable cost