Exascale Computing

What is exascale computing?

Exascale computing is a new level of supercomputing capable of at least one exaFLOPS – a quintillion floating-point calculations per second – to support the expansive workloads of converged modelling, simulation, AI and analytics.

What are the benefits of exascale computing?

The primary benefits of exascale computing come from the capacity to problem-solve at incredible levels of complexity.

Scientific discovery: Changes occur constantly within the scientific and technology sectors. With developments, validations and studies contributing to the continual progress of scientific discovery, there is a pressing need for supercomputing. Exascale computing has the power needed to investigate the origin of the chemical elements, control unstable chemicals and materials, validate the laws of nature and probe particle physics. The study and analysis of these topics has led to scientific discoveries that would be unattainable without the capacity of supercomputing.

Security: There is great demand for supercomputing within the security sector. Exascale computing helps us withstand emerging physical threats and cyberthreats to our national, energy and economic security – all while promoting growth and efficiency in food production, sustainable urban planning and natural disaster recovery planning.

  • National security benefits from exascale computing’s capacity for intelligent responses to threats and analysis of hostile environments. This level of computing occurs at nearly incomprehensible speeds, countering innumerable risks and threats to the safety of the nation.
  • Energy security is attainable through exascale computing, as it not only benefits the design of low-emission technologies but also promotes the analysis of stress-resistant crops. Ensuring sustainable food and energy resources is a further critical component of the nation’s security efforts.
  • Economic security is enhanced by exascale computing on several fronts. It enables accurate risk assessment of natural disasters, such as predicting seismic activity and forming proactive solutions. Urban planning also benefits from supercomputing, as it contributes to plans for efficient power and electric grid utilisation and construction.


Healthcare: The medical industry benefits greatly from exascale computing, specifically in the field of cancer research. With predictive models for drug reactions and intelligent automation capacity, critical processes within cancer research have been revolutionised and accelerated.


Why is exascale computing important?

We must make advancements in applied sciences in order to improve decision-making and expand our understanding of the universe. Exascale computing, otherwise known as exascale supercomputing, is necessary to accelerate this understanding. Scientists and engineers can apply the data analysis powered by exascale supercomputing to push the boundaries of our current knowledge and further promote revolutionary innovations within sciences and technology.

As exascale computing becomes more prevalent around the world, there is increasing demand to expand supercomputing capacity in order to maintain global leadership in the science and technology sectors. With the addition of artificial intelligence (AI), machine learning (ML), and modelling and simulation, exascale computers are now exponentially more powerful than ever before.

Exascale computing is driving rapid advancements in the scientific and technological foundations of our societies. The sheer power of these machines demands responsible use, as societies all over the world experience dynamic shifts in their ethical expectations and standards of sustainability. With exascale computing, we are beginning to discover solutions to problems that were previously believed to be unsolvable.

How does exascale computing work?

Exascale computing systems perform at least 1,000,000,000,000,000,000 floating-point operations per second (FLOPS), enough to simulate the interactions of the fundamental forces within the universe.
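
To put that number in context, the following back-of-the-envelope Python sketch compares an exascale system with an ordinary laptop; the 100 GFLOPS laptop figure and the workload size are assumed round numbers chosen purely for illustration, not drawn from any system specification.

```python
# A back-of-the-envelope sense of scale for one exaFLOPS (10**18 operations
# per second). The 100 GFLOPS laptop figure and the workload size are assumed
# round numbers, chosen purely for illustration.

EXAFLOPS = 1e18            # operations per second for an exascale system
LAPTOP_FLOPS = 100e9       # assumed sustained throughput of an ordinary laptop

workload_ops = 1e21        # a hypothetical workload of one sextillion operations

exascale_seconds = workload_ops / EXAFLOPS
laptop_years = workload_ops / LAPTOP_FLOPS / (3600 * 24 * 365)

print(f"Exascale system:   about {exascale_seconds:,.0f} seconds")   # ~1,000 s
print(f"100 GFLOPS laptop: about {laptop_years:,.0f} years")         # ~317 years
```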

These supercomputers are created from the ground up to handle the massive demands of today’s simulation, converged modelling, AI and analytics workloads. Exascale supercomputers deliver reliable performance by supporting a mix of CPUs and GPUs – even from different generations – alongside multi-socket nodes and other processing devices in a single, integrated infrastructure.

As workloads evolve rapidly, computing architecture is critical to supporting your organisation’s needs. Supercomputers can be built to fit those needs, with multiple silicon processing choices and a single infrastructure for management and application development.

We need computers capable of answering the world’s most complex research questions. Exascale computers have the capacity to answer these questions because they move data between processors and storage quickly, without slowdowns – despite the massive number of hardware components used to build them.
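
As a rough, hypothetical illustration of why data movement matters as much as raw processing power, the Python sketch below compares the time to move a working set over an interconnect link with the time to compute on it; every figure in it is an assumed round number rather than a measured specification.

```python
# A rough illustration of why fast data movement matters at exascale.
# Every figure below (working-set size, link bandwidth, arithmetic intensity,
# per-node throughput) is an assumed round number, not a real specification.

dataset_bytes = 10e12          # hypothetical 10 TB working set
link_bytes_per_s = 200e9 / 8   # assume a 200 Gb/s link, converted to bytes/s
flops_per_byte = 10            # assumed arithmetic intensity of the workload
node_flops = 10e12             # assume 10 TFLOPS sustained per node

transfer_seconds = dataset_bytes / link_bytes_per_s
compute_seconds = dataset_bytes * flops_per_byte / node_flops

print(f"Moving the data:  {transfer_seconds:,.0f} s")   # ~400 s
print(f"Computing on it:  {compute_seconds:,.0f} s")    # ~10 s
# When transfer time dwarfs compute time, the interconnect rather than the
# processors sets the pace -- which is why exascale designs work so hard to
# keep data moving without slowdowns.
```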

Exascale computing vs quantum computing

Exascale computing

Exascale computing is a type of ultra-powerful supercomputing, with systems performing a quintillion calculations per second and utilising an infrastructure of CPUs and GPUs to process and analyse data. It is a classical, digital form of computing, operated in conjunction with the most powerful hardware in the world.

Quantum computing

Quantum computing does not fall under conventional compute methods. Instead of classical binary bits, quantum systems use qubits, which can occupy multiple states in the same moment. This process is built on the superposition and entanglement of qubits, enabled by the laws of quantum physics, allowing certain problems to be analysed and solved in fundamentally different ways.
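
For a loose sense of what superposition means in practice, the toy Python sketch below contrasts an n-bit classical register, which holds exactly one of 2^n states at a time, with an n-qubit register, whose superposed state is described by 2^n amplitudes at once; the register size and the use of NumPy are assumptions made purely for demonstration.

```python
# A toy contrast between classical bits and qubits. An n-bit classical
# register is in exactly one of 2**n states at any moment, whereas describing
# an n-qubit register in superposition takes 2**n complex amplitudes at once.
# The register size n is an arbitrary choice for demonstration.

import numpy as np

n = 3

classical_state = 0b101                        # one definite state out of 2**n
amplitudes = np.ones(2**n) / np.sqrt(2**n)     # equal superposition over all 2**n states

print(f"Classical register: state {classical_state:0{n}b}, one of {2**n} possibilities")
print(f"Quantum register:   one state vector of {len(amplitudes)} amplitudes")
print(f"Probabilities sum to {np.sum(amplitudes**2):.1f}")  # squared amplitudes sum to 1
```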

Currently, exascale computing is capable of processing and solving problems – and informing and delivering technological developments – at a much higher rate than quantum computing. However, quantum computing is on a trajectory to far surpass exascale compute capacity for certain classes of problems, and quantum systems consume much less energy than exascale supercomputers for similar workloads.

What is an exascale computer?

An exascale computer is a massive computer system housed in rows of cabinets within warehouses or research buildings. These computers are usually owned by governments but can also be owned by large conglomerates. Exascale supercomputers are so incredibly expensive to build that scientists and researchers typically use grants to rent time on them.

Computer systems that are capable of exascale compute generate massive amounts of heat due to the level of processing that occurs. They must have special cooling devices within the systems and racks – or be housed in extremely cold climates – to maintain the highest level of function. They are digital computers with the highest capacity and most powerful hardware, which differentiates them from other supercomputers or quantum computers.

Exascale computers simulate fundamental laws of physics, such as granular interactions between atoms, in order to build our knowledge of the universe and everything in it. Several industries utilise this capability to better understand, predict and build the future of the world. For example, when researchers at the National Oceanic and Atmospheric Administration (NOAA) attempt to improve their weather predictions, they examine every potential interaction of rain, wind, clouds and other atmospheric phenomena to establish the implications of each element, down to the atomic level.

These calculations are done with basic mathematical equations for every interaction between every force within a given environment, at a given moment, down to the millisecond. These simple interactions quickly form trillions of combinations, calculated and analysed through trillions of equations. Only an exascale computer can calculate at this rate. The results form an image or a simulation of what every interaction looks like, which can be studied to advance our understanding of the universe. Exascale supercomputers quite literally build our knowledge, helping us meet the challenges of tomorrow.
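
As a highly simplified, hypothetical sketch of how such simple equations multiply into enormous operation counts, the Python snippet below counts the pairwise interactions evaluated in a single timestep of a toy particle system; the particle counts and the inverse-square "force" are illustrative assumptions, not a real weather or physics model.

```python
# A highly simplified sketch of how simple pairwise equations multiply into
# enormous operation counts. The particle counts and the inverse-square
# "force" are illustrative assumptions, not a real weather or physics model.

import itertools
import random

def count_pairwise_interactions(n_particles):
    """Evaluate one simple equation per particle pair for a single timestep."""
    positions = [(random.random(), random.random(), random.random())
                 for _ in range(n_particles)]
    count = 0
    for (x1, y1, z1), (x2, y2, z2) in itertools.combinations(positions, 2):
        r_squared = (x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2
        _force = 1.0 / r_squared      # one basic equation per pair
        count += 1
    return count

for n in (10, 1_000):
    print(f"{n:>5} particles -> {count_pairwise_interactions(n):,} interactions per timestep")

# The pair count grows as n * (n - 1) / 2, so simulations with billions of
# elements and millions of timesteps reach the trillions of calculations
# described above -- the regime where only exascale systems can keep up.
```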

HPE and exascale computing

HPE Cray EX supercomputers deliver revolutionary capabilities for revolutionary questions, setting the tone for the next era of science, discovery and achievement. They are made with zero moving parts and direct liquid cooling (DLC) to promote the utmost sustainability while maintaining the highest level of function for the largest and most complex workloads. With the ability to intermix different generations of CPUs and GPUs, the infrastructure is expandable as developments and upgrades become available with technological advancements in compute environments.

HPE also offers the HPE Cray supercomputer, with a standard 19-inch rack configuration in a 2U compute server. It provides the option to implement a smaller system while still offering the same feature set as HPE Cray EX systems. This system is ideal for businesses taking evolutionary steps towards supercomputing within their data infrastructure, with expansion capabilities to meet future performance needs.

To address the needs of organisations with pressing problems, HPE Cray supercomputers are powered by the world’s leading processors: AMD EPYC™ CPUs and AMD Instinct™ GPUs are employed collaboratively to handle the largest data sets with the highest speed and performance. And with HPE Slingshot, your organisation can bridge supercomputing, cloud and data centres to build the top supercomputing environment.