High performance computing
What is high performance computing?
High performance computing (HPC) encompasses solutions that can process data and execute calculations at a rate far exceeding that of other computers. This aggregate computing power enables organisations in science, business and engineering to solve large problems that would otherwise be unapproachable.
Putting HPC into perspective
An average desktop computer can perform billions of calculations per second – incredibly impressive compared with the speed at which humans can complete complex calculations. HPC solutions, by contrast, are capable of performing quadrillions of calculations in one second.
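That gap can be made concrete with a quick back-of-the-envelope calculation. The figures below are illustrative assumptions rather than measured benchmarks: roughly 10^11 operations per second for a desktop and 10^15 (one quadrillion) for a petascale HPC system.

```python
# Illustrative throughput figures (assumptions, not measured benchmarks):
desktop_ops_per_sec = 1e11   # a desktop: on the order of 100 billion ops/sec
hpc_ops_per_sec = 1e15       # a petascale HPC system: ~1 quadrillion ops/sec

# Work completed by the HPC system in a single second...
one_second_of_hpc_work = hpc_ops_per_sec * 1.0

# ...would keep the desktop busy for this long:
desktop_seconds = one_second_of_hpc_work / desktop_ops_per_sec
print(f"{desktop_seconds / 3600:.1f} hours")  # 10,000 seconds, i.e. about 2.8 hours
```

In other words, under these assumed figures, one second of petascale work is close to three hours of desktop work.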
Supercomputing is an HPC solution
One of the most commonly referenced HPC solutions is supercomputing, or the processing of massively complex or data-laden problems using the concentrated compute resources of multiple computer systems working in parallel. To illustrate the power of supercomputing, it’s helpful to think about the types of problems supercomputers are able to address.
Example of an HPC system
The University of Leicester uses an HPC system to complete calculations in theoretical physics, astrophysics, particle physics, cosmology and nuclear physics. Being able to make these complex calculations quickly enables this team to tackle questions such as how stars form and how planets form and evolve. In this way, gaining a deeper understanding of our universe is made possible through high performance computing.
How does high performance computing work?
There isn’t just one way that high performance computing works. The most widely adopted approach lets enterprises achieve high performance capabilities without the expense of running a supercomputer: many smaller computers are networked together into a cluster, with each computer acting as a node. Each node has multiple processors dedicated to specific computation tasks, and the interconnected nodes together form the high performance computing system. Because these systems don’t require custom software to run, they are more within reach for smaller enterprises to deploy.
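The divide-and-conquer idea behind clustering can be sketched in miniature with Python’s standard multiprocessing module. In this toy model, each worker process stands in for a node, and the overall job (summing squares over a large range) is split into chunks that run in parallel before the partial results are combined. This illustrates the principle only; it is not how a production cluster is scheduled.

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """One 'node' computes its share of the job: a sum of squares over a sub-range."""
    start, end = bounds
    return sum(i * i for i in range(start, end))

def cluster_sum(n, workers=4):
    """Split the range [0, n) into one chunk per worker, run the chunks in
    parallel processes, then combine the partial results."""
    step = n // workers
    chunks = [(w * step, (w + 1) * step if w < workers - 1 else n)
              for w in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    # Same answer as a serial sum, computed by four cooperating processes.
    print(cluster_sum(1_000_000))
```

A real cluster applies the same pattern at much larger scale, with a scheduler distributing chunks of work across physically separate machines connected by a high-speed interconnect.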
In recent years, another HPC model has emerged as a cost-effective option for enterprises that don’t want to invest in their own infrastructure – high performance computing as a service. With HPC as a service, tech companies host HPC solutions on their own infrastructure, enabling businesses to access them via the cloud. These businesses then have the resources they need and the ability to scale up quickly, if necessary, while only paying for the capacity they use.
Why is high performance computing important?
High performance computing is important because it harnesses data to tackle problems that humans otherwise wouldn’t be able to address.
Beyond enabling the study of the universe, HPC has important applications in our everyday lives, including:
- Fraud detection: To accurately detect fraud within the financial services industry, an algorithm needs to analyse millions of transactions as they occur, as well as information around those transactions to provide context. HPC is able to do this work automatically, potentially saving companies millions.
- Medical record management: As electronic health records (EHRs) have become more prevalent in the medical field, doctors and nurses have access to an unprecedented amount of patient data that could theoretically enable them to make better diagnoses and treatment plans. HPC systems can handle and analyse this data in a way that normal computing systems can’t, cutting down on the time medical staff spend inputting and organising data so they can spend more time working with patients.
- Weather prediction: Part of the challenge of weather prediction is the amount of computing resources needed for accuracy. High performance computing is able to handle the complex partial differential equations used to express the physics of weather, along with the massive amount of weather data being collected by satellites.
- Race car optimisation: A huge component of Formula 1 is the engineering competition between design teams, as small design improvements can make a major difference on the racetrack. These teams use HPC in their fluid dynamics analysis and refinement to find where their cars can be optimised while still staying within the constraints placed on them by the FIA.
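The weather-prediction point above can be made concrete with a toy example. Atmospheric models solve partial differential equations over a grid of points; the sketch below applies the same numerical idea, an explicit finite-difference time step, to the far simpler 1-D heat equation ∂u/∂t = α ∂²u/∂x². The grid size and coefficient here are arbitrary illustrative choices; real forecast models solve vastly larger coupled systems over three-dimensional grids, which is exactly why they need HPC.

```python
def heat_step(u, alpha=0.1):
    """One explicit finite-difference time step of the 1-D heat equation.

    Each interior point is updated from its neighbours:
        u'[i] = u[i] + alpha * (u[i-1] - 2*u[i] + u[i+1])
    The two endpoints are held fixed (a simple boundary condition).
    """
    return ([u[0]] +
            [u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
             for i in range(1, len(u) - 1)] +
            [u[-1]])

# A hot spot in the middle of a cold rod diffuses outward over time.
u = [0.0] * 4 + [100.0] + [0.0] * 4   # 9 grid points
for _ in range(50):
    u = heat_step(u)
print([round(x, 1) for x in u])
```

A weather model repeats this kind of neighbour-based update for millions of grid points and many coupled variables (pressure, temperature, humidity, wind) at every time step, which is why the work is distributed across thousands of HPC nodes.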
Accelerating high performance computing innovation for organisations of every size
In this dynamic global economy, success begins in the data centre. Today’s organisations – from small and medium-sized businesses to the largest global enterprises – rely on the latest technology developments to unlock the value of their data.
The ability to innovate is key to unleashing exceptional performance, achieving greater intelligence and delivering the outcomes that will take businesses to the next level. Data centres are changing dramatically as the demand for high performance computing and artificial intelligence skyrockets across a wide range of industries.
HPC application history
High performance computing arose in the 1960s to support government and academic research. HPC began moving into major industries in the 1970s to accelerate the development of complex products, for example in the automotive, aerospace, oil and gas, financial services and pharmaceutical industries.
In 2019, nearly half (49%) of the $13.7 billion in worldwide HPC server system revenues came from the private sector.1 Spending on the whole HPC ecosystem – including servers, storage, software and technical support – was roughly double the server system total.
Historically, most HPC systems in the private sector have been installed in dedicated HPC data centres for product development or other upstream R&D tasks. But in recent years, more businesses, large and small – many of them first-time HPC users – have integrated these systems into enterprise data centres to support complex business operations that enterprise server systems can’t handle effectively alone.
HPC applications today
HPC and AI needs are now fuelling major improvements in data processing and computation as well as driving ongoing progress in a variety of scientific, industrial and societal challenges. As HPC and AI workloads continue to escalate in size and complexity, many organisations have deployed accelerated computing solutions that provide greater power and memory bandwidth to handle their most data-intensive workloads.
Accelerated computing supports HPC, AI and data analytics at scale by enhancing overall speed and performance. These robust platforms make it possible to manage rising data parameters, execute complex modelling and simulation applications, and run massive training and inference jobs at breakneck speeds.
HPC in the enterprise
The onset of the exascale era is driving a particularly dramatic shift toward data-centric computing in the enterprise space. Exascale is expected to place rigorous demands on IT infrastructure, which must digest massive troves of data for AI at extreme scale. Highly sophisticated workloads will demand maximum reliability, greater bandwidth and high-speed interconnects in order to avoid data bottlenecks. Organisations are racing to prepare for exascale by developing compatible technologies that will ease digital transformation and enable more efficient, cost-effective solutions.
Accelerated computing is the ideal foundation for HPC and AI techniques such as machine learning and deep learning, which are transforming entire industries with unmatched speed, precision and insight.
A number of vertical markets are reaping the benefits of these game-changing advancements, using the power and speed of HPC to simplify tasks of every kind. HPC applications can be found in nearly every industry, including healthcare and life sciences, energy, manufacturing, government and financial services.
HPE HPC accelerates digital evolution with innovation on demand
Today, digital evolution isn’t a question, it’s a requirement. Success or failure depends on how well and how quickly companies adapt and innovate. But using status quo methods for solving the most pressing problems will no longer work – the global community needs an exponentially different approach.
Supercomputing remains a crucial part of this new approach and is rapidly evolving to meet today’s vastly different requirements. Today, supercomputing isn’t just about a few exclusive, speed-driven systems. It’s about the urgent need to extract insight and value from massive data volumes, to understand those compute requirements and to deliver the accelerated capabilities to do so at every level of research and business, regardless of size.
Modern AI workloads are already pushing the boundaries of supercomputing. Savvy organisations are looking for cutting-edge solutions to unleash the full power of AI, gain a competitive advantage and solve some of the world’s biggest problems. With so much intelligence at stake, the next phase of supercomputing is essential for harnessing the expansion of AI. These breakthrough innovations will not only meet existing standards for I/O, security and manageability, but they will also provide superior processing at scale, with faster, more reliable data communication to optimise demanding workloads.
Welcome to the Exascale Era
High performance computing from HPE is defining, delivering and accelerating the next era of computing – when, where and how organisations need it.
In a world where adaptation has moved from generational to real time, companies need to accelerate digital evolution with insight and innovation on demand. HPE can help.
HPE has four decades of supercomputing leadership, building systems and technology for the world’s greatest thinkers, makers and doers. And HPE is using that experience to define and deliver the next era of computing. With this kind of innovation, every organisation can turn uncertainty into possibility.
Exascale is more than a speed milestone or system size. It represents new workloads prompted by new questions intersecting with new compute capabilities to create a major technological shift. HPE brings together the technology requirements into solutions that bring exascale computing capabilities to all.
HPE delivers HPC systems ready for the exascale era – and the era after that. These systems perform like a supercomputer and run like the cloud, support a wide range of future processors and accelerator architectures, and can be built for any size need or data centre.