High performance computing

What is high performance computing?

High Performance Computing (HPC) encompasses solutions that can process data and execute calculations at a rate far exceeding that of other computers. This aggregate computing power enables science, business, and engineering organizations to solve large problems that would otherwise be unapproachable.

Putting HPC into perspective

An average desktop computer can perform billions of calculations per second. While that is incredibly impressive compared to the speed at which humans can complete complex calculations, HPC solutions are capable of performing quadrillions of calculations per second.
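
To put those figures side by side, here is a minimal back-of-the-envelope sketch in Python. The rates are illustrative round numbers (a desktop at roughly a billion operations per second, an HPC system at roughly a quadrillion), not benchmarks of any particular machine:

  # Rough, illustrative comparison of sustained calculation rates.
  desktop_rate = 1e9   # ~billions of operations per second (typical desktop)
  hpc_rate = 1e15      # ~quadrillions of operations per second (HPC system)

  workload = 1e18      # a hypothetical job of one quintillion operations

  desktop_seconds = workload / desktop_rate
  hpc_seconds = workload / hpc_rate

  print(f"Desktop: {desktop_seconds / (365 * 86400):.1f} years")   # ~31.7 years
  print(f"HPC:     {hpc_seconds / 60:.1f} minutes")                # ~16.7 minutes

At these assumed rates, a job that would occupy a desktop for decades finishes on an HPC system in under twenty minutes, which is the entire point of aggregating compute power.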

Supercomputing is an HPC solution

One of the most commonly referenced HPC solutions is supercomputing, or the processing of massively complex or data-laden problems using the concentrated compute resources of multiple computer systems working in parallel. To illustrate the power of supercomputing, it’s helpful to think about the types of problems supercomputers are able to address. 

Example of an HPC system

The University of Leicester uses an HPC system to complete calculations in theoretical physics, astrophysics, particle physics, cosmology, and nuclear physics. Being able to make these complex calculations quickly enables this team to approach questions such as how stars form and how planets form and evolve. In this way, gaining a deeper understanding of our universe is made possible through high performance computing.

How does high performance computing work?

There isn’t just one way that high performance computing works. Traditionally, enterprises have used a widely adopted architecture to gain high performance capabilities without the expense of running a supercomputer: groups of smaller computers are linked into clusters, with each computer functioning as a node. Each node has multiple processors dedicated to specific computation tasks, and the interconnected clusters together form the high performance computing system. Because these systems don’t require custom software to run, they are more within reach for smaller enterprises to deploy.
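
As a concrete sketch of this node-based model, here is a minimal example using mpi4py, a widely used Python binding for MPI, the de facto message-passing standard on clusters (it assumes mpi4py and an MPI runtime are installed; the file name in the launch command below is just an example). Each process stands in for a node, works on its own slice of a large sum, and rank 0 combines the results:

  # Each MPI rank (standing in for a cluster node) sums its own slice
  # of the range 0..N-1; the partial sums are then combined on rank 0.
  from mpi4py import MPI

  comm = MPI.COMM_WORLD
  rank = comm.Get_rank()   # this process's ID
  size = comm.Get_size()   # total number of cooperating processes

  N = 100_000_000
  chunk = N // size
  start = rank * chunk
  end = N if rank == size - 1 else start + chunk

  partial = sum(range(start, end))                   # local work on this "node"
  total = comm.reduce(partial, op=MPI.SUM, root=0)   # combine across nodes

  if rank == 0:
      print(f"Sum of 0..{N - 1} across {size} processes: {total}")

Launched with, for example, mpirun -n 8 python sum.py, the same script runs unchanged on a laptop or across a cluster; the MPI runtime and the interconnect, not the application code, determine where the processes actually execute.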

In recent years, another HPC model has emerged as a cost-effective option for enterprises that don’t want to invest in their own infrastructure: high performance computing as a service. With HPC as a service, technology companies host HPC solutions on their own infrastructure, and businesses access them via the cloud. These businesses then have the resources they need and the ability to scale up quickly when necessary, while paying only for the capacity they use.

Why is high performance computing important?

High performance computing is important because it applies massive computational power to data-intensive problems that people and conventional systems could not otherwise address.

Beyond enabling the study of the universe, HPC has important applications in our everyday lives, including:

  • Fraud detection – To accurately detect fraud within the financial services industry, an algorithm needs to analyze both millions of transactions as they occur and the surrounding information that gives those transactions context. HPC is able to do this work automatically, potentially saving companies millions.
  • Medical record management – As electronic health records (EHRs) have become more prevalent in the medical field, doctors and nurses have access to an unprecedented amount of patient data that could theoretically enable them to make better diagnoses and treatment plans. HPC systems can handle and analyze this data in a way that normal computing systems can’t, cutting down on the time medical staff spend inputting and organizing data so they can spend more time working with patients.
  • Weather prediction – Part of the challenge of weather prediction is the amount of computing resources needed for accuracy. High performance computing is able to handle the complex partial differential equations used to express the physics of weather (a simplified example follows this list), along with the massive amount of weather data being collected by satellites.
  • Race car optimization – A huge component of Formula 1 is the engineering competition between design teams, as small design improvements can make a major difference on the race track. These teams use HPC in their fluid dynamics analysis and refinement to find where their cars can be optimized, while still staying within the constraints placed on them by the FIA.
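
To make the weather-prediction example concrete, here is a deliberately tiny sketch in Python: a one-dimensional advection equation (wind carrying a temperature anomaly downstream) stepped forward with a simple upwind finite-difference scheme. All of the numbers are made-up illustrative values; production forecast models solve far richer equation systems over billions of grid cells, which is precisely why they need HPC:

  # Toy 1-D advection equation  du/dt + c * du/dx = 0  solved with a
  # first-order upwind finite-difference scheme.
  c = 10.0        # wind speed (m/s), assumed constant
  dx = 1000.0     # grid spacing (m)
  dt = 50.0       # time step (s); c * dt / dx = 0.5 satisfies the CFL condition
  nx, steps = 200, 100

  # Initial temperature field: a warm anomaly between cells 40 and 60.
  u = [20.0 + (5.0 if 40 <= i < 60 else 0.0) for i in range(nx)]

  for _ in range(steps):
      # u[i - 1] wraps around at i == 0, giving a periodic boundary.
      u = [u[i] - c * dt / dx * (u[i] - u[i - 1]) for i in range(nx)]

  # The wind has carried the anomaly about 50 cells downstream.
  print(f"Warm anomaly now peaks near cell {u.index(max(u))}")

Even this toy hints at the scaling problem: refining the grid tenfold in each of three dimensions, with the time step shortened to match, multiplies the work by roughly a factor of ten thousand.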

Accelerating high performance computing innovation for organizations of every size

In this dynamic global economy, success begins in the data center. Today’s organizations—from small and midsized businesses to the largest global enterprises—rely on the latest technology developments to unlock the value of their data.

The ability to innovate is key to unleashing exceptional performance, achieving greater intelligence, and delivering the outcomes that will take businesses to the next level. Data centers are changing dramatically as the demand for high performance computing and artificial intelligence skyrockets across a wide range of industries.

HPC application history

High performance computing arose in the 1960s to support government and academic research. HPC began moving into major industries in the 1970s to accelerate the development of complex products in sectors such as automotive, aerospace, oil and gas, financial services, and pharmaceuticals.

In 2019, nearly half (49%) of the $13.7 billion in worldwide HPC server system revenue came from the private sector.1 Spending on the whole HPC ecosystem, including servers, storage, software, and technical support, was roughly double the server system total.

Historically, most HPC systems in the private sector have been installed in dedicated HPC data centers for product development or other upstream R&D tasks. But in recent years, more businesses large and small, many of them first-time HPC users, have integrated these systems into enterprise data centers to support complex business operations that enterprise server systems can’t handle effectively alone.

HPC applications today

HPC and AI needs are now fueling major improvements in data processing and computation, driving ongoing progress on a variety of scientific, industrial, and societal challenges. As HPC and AI workloads continue to escalate in size and complexity, many organizations have deployed accelerated computing solutions that provide greater power and memory bandwidth to handle their most data-intensive workloads.

Accelerated computing supports HPC, AI, and data analytics at scale by enhancing overall speed and performance. These robust platforms make it possible to manage rising data parameters, execute complex modeling and simulation applications, and run massive training and inference jobs at breakneck speeds.
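
As a small illustration of what accelerated computing looks like in code, the sketch below times one large matrix multiplication on the CPU with NumPy and on a GPU with CuPy, a NumPy-compatible GPU array library. This assumes a CUDA-capable GPU and an installed cupy package; the libraries are illustrative choices, not a statement about any particular vendor’s platform:

  # Time one large matrix multiplication on the CPU (NumPy) vs. a GPU (CuPy).
  import time
  import numpy as np
  import cupy as cp

  n = 4000
  a = np.random.rand(n, n).astype(np.float32)
  b = np.random.rand(n, n).astype(np.float32)

  t0 = time.perf_counter()
  c_cpu = a @ b                          # CPU matrix multiply
  cpu_s = time.perf_counter() - t0

  a_gpu, b_gpu = cp.asarray(a), cp.asarray(b)   # copy the data to GPU memory
  t0 = time.perf_counter()
  c_gpu = a_gpu @ b_gpu                  # GPU matrix multiply (launched asynchronously)
  cp.cuda.Stream.null.synchronize()      # wait for the GPU kernel to finish
  gpu_s = time.perf_counter() - t0

  print(f"CPU: {cpu_s:.3f} s   GPU: {gpu_s:.3f} s")

The synchronize call matters: GPU work is launched asynchronously, so without it the timer would stop before the multiplication completes. (The very first GPU call also pays a one-time initialization cost, so a fair benchmark would warm up and average several runs.) The same pattern, minus the toy scale, underlies the accelerated modeling, simulation, and training workloads described above.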

HPC in the enterprise

The onset of the exascale era, in particular, is driving a dramatic shift toward data-centric computing in the enterprise space. Exascale is expected to place rigorous demands on IT infrastructure to digest massive troves of data for AI at extreme scale. Highly sophisticated workloads will demand maximum durability, greater bandwidth, and high-speed interconnects in order to avoid data bottlenecks. Organizations are racing to prepare for exascale by developing compatible technologies that will ease digital transformation and enable more efficient, cost-effective solutions.

Accelerated computing is the ideal foundation for HPC and AI techniques such as machine learning and deep learning, which are transforming entire industries with unmatched speed, precision, and insight.

A number of vertical markets are reaping the benefits of these game-changing advancements, using the power and speed of HPC to simplify tasks of every kind. HPC applications can be found in nearly every industry, including healthcare and life sciences, energy, manufacturing, government, and financial services.

1. Technology Spotlight: CIOs Exploit High Performance Computing to Boost Productivity and Competitiveness, Hyperion Research, August 2020.

HPE HPC accelerates the digital evolution with innovation on demand

Today, digital evolution isn’t a question; it’s a requirement. Success or failure depends on how well and how quickly companies adapt and innovate. But using status quo methods to solve the most pressing problems will no longer work; the global community needs an exponentially different approach.
 
Supercomputing remains a crucial part of this new approach, and it is rapidly evolving to meet vastly different requirements. Today, supercomputing isn’t just about a few exclusive, speed-driven systems. It’s about the urgent need to extract insight and value from massive data volumes, understand the compute requirements involved, and deliver the accelerated capabilities to do so at every level of research and business, regardless of size.
 
Modern AI workloads are already pushing the boundaries of supercomputing. Savvy organizations are looking for cutting-edge solutions to unleash the full power of AI, gain competitive advantage, and solve some of the world’s biggest problems. With so much intelligence at stake, the next phase of supercomputing is essential to harness the expansion of AI. These breakthrough innovations will not only meet existing standards for I/O, security, and manageability, but they will also provide superior processing at scale, with faster, more reliable data communication to optimize demanding workloads.

Welcome to the Exascale Era

High-performance computing from HPE is defining, delivering, and accelerating the next era of computing—when, where, and how organizations need it.
 
In a world where adaptation has moved from generational to real-time, companies need to accelerate digital evolution with insight and innovation on demand. HPE can help.
 
HPE has four decades of supercomputing leadership, building systems and technology for the world’s greatest thinkers, makers, and doers. And HPE is drawing on that experience to define and deliver the next era of computing. With this kind of innovation, every organization can turn uncertainty into possibility.
 
Exascale is more than a speed milestone or a system size. It represents new workloads, prompted by new questions, intersecting with new compute capabilities to create a major technological shift. HPE brings these technology requirements together into solutions that deliver exascale computing capabilities to all.

HPE delivers HPC systems ready for the Exascale Era—and the era after that. These systems perform like a supercomputer and run like the cloud, support a wide range of future processors and accelerator architectures, and can be built for any size need or data center.