
Exascale computing: What it is and what it could be

Supercomputers keep bringing the previously impossible within our reach.

For a half century, supercomputers have performed some of the most futuristic tasks imaginable, meeting and often exceeding science fiction writers' most creative scenarios. They've helped simulate nuclear tests, map the human brain, build better and safer cars, generate breathtaking graphics for movies, and teach self-driving cars the difference between a garbage can and a baby carriage.

Now, supercomputers are about to enter a new era. A decade after the introduction of the first petascale system, the first computers capable of crunching numbers at exascale levels are set to be installed at government labs.

An exascale computer is one that can perform a quintillion, or 10^18, floating point operations per second (FLOPS). That's a billion billion, or 1,000,000,000,000,000,000. To put that number in perspective, the Milky Way galaxy is 1 quintillion kilometers wide, and it would take 40 years for 1 quintillion gallons of water to flow over Niagara Falls. Further, you'd need every single person on earth calculating 24 hours a day for about four years to do what an exascale supercomputer can do in 1 second. The concept of exascale computing, in sheer numbers, is mind-boggling.
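The humanity-versus-machine comparison above is easy to verify with back-of-the-envelope arithmetic. The sketch below assumes a machine sustaining exactly 1 exaFLOPS and a world population of roughly 8 billion, each person performing one calculation per second:

```python
# Back-of-the-envelope check: how long would all of humanity need
# to match one second of exascale computing?

EXAFLOPS = 1e18            # operations per second at exascale
population = 8e9           # approximate world population (assumption)
seconds_per_year = 365 * 24 * 3600

# Total human "operations" per year, at one calculation per person per second.
human_ops_per_year = population * seconds_per_year

years_needed = EXAFLOPS / human_ops_per_year
print(f"{years_needed:.1f} years")  # roughly four years, as the article says
```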

Please read: Six ways supercomputers have changed our lives

All that processing power can be put to use across a variety of industries, solving today's and tomorrow's problems. In the meantime, current systems—the ones cranking well into the petascale range—are already enabling organizations to do things they couldn't just a decade ago.

Traditionally associated with universities and big national laboratories, supercomputers have long been running everyday applications for private industry that range far beyond basic science. Industries such as oil exploration, finance, personalized content delivery, and online advertising are deploying high-performance computing (HPC) systems to manage heavy workloads and deliver real-time services.

What's new and exciting for the exascale era is that artificial intelligence has arrived. As industries move to incorporate more AI, they are crunching massive amounts of data to teach the systems how to work. Coupling HPC with AI allows industries to train bigger, better, and more accurate models.

The move to exascale computing will position organizations to embark on seemingly limitless searches for answers to what is, what if, what else, and what could be questions. For example:

  • Understanding the structure and properties of matter from exabytes of data by zooming in to subatomic-scale measurements from microscopes (e.g., Cryo-EM), scattering sources (e.g., CERN), and light sources (e.g., SLAC), and zooming out to a square-kilometer array of telescopes (e.g., SKA@Pawsey) to study events in the distant universe.
  • Hypothesizing new atomic structures that could be materials with superconducting properties or drugs with the potential to treat novel and rare diseases.
  • Conducting ensemble simulations to predict the path of a hurricane, or quantifying the uncertainty in how the atmosphere, oceans, land, and human behavior respond to climate change.
  • Predicting extreme weather 15 minutes in advance based on 50 years of climate data and 100 years of physics, or forecasting the species-jumping transmission of a virus to head off the next pandemic.
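The ensemble approach in the list above boils down to running many perturbed copies of the same simulation and summarizing their spread. The toy sketch below illustrates the pattern with an invented random-walk "storm track"; a real ensemble would integrate atmospheric physics, but the aggregation step looks the same:

```python
import random

def simulate_track(steps=24, drift=(1.0, 0.5), noise=0.3, seed=None):
    """One ensemble member: a storm track modeled as a noisy drift in (x, y).

    All dynamics here are invented for illustration only.
    """
    rng = random.Random(seed)
    x = y = 0.0
    for _ in range(steps):
        x += drift[0] + rng.gauss(0, noise)
        y += drift[1] + rng.gauss(0, noise)
    return x, y

# Run many perturbed members (different seeds) and aggregate the endpoints.
ensemble = [simulate_track(seed=i) for i in range(1000)]
mean_x = sum(p[0] for p in ensemble) / len(ensemble)
mean_y = sum(p[1] for p in ensemble) / len(ensemble)
print(f"mean endpoint estimate: ({mean_x:.1f}, {mean_y:.1f})")
```

The spread of the ensemble endpoints, not just the mean, is what forecasters use to quantify uncertainty; exascale systems let them run far larger ensembles at far higher resolution.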

Imagine what exascale computing could do in a lab where researchers are testing out remedies for a disease. These machines have up to 1,000 times more power than the first petascale computers. They can crunch processing time down from weeks to days and from days to minutes. They can connect and combine inputs from more sources, create more powerful models, and run more frequent simulations to see what works. Rather than wait weeks for feedback, teams can determine, say, whether a potential pharmaceutical compound is worth pursuing in just a few hours.

A team of researchers from the University of Alabama at Huntsville (UAH), the Medical University of South Carolina (MUSC), and the University of Mississippi at Oxford did exactly that with COVID-19, using HPC to answer a range of questions: What is the 3D structure of the COVID-19 virus? How does SARS-CoV-2 differ from other coronaviruses? Can the virus be treated with natural products? What else could be used to treat and prevent the disease? And what can be done against a more contagious variant?

Please read: How supercomputers are identifying COVID-19 therapeutics

Jerome Baudry from UAH used the Cray Sentinel supercomputer to perform virtual screening of drug compounds against proteins of the COVID-19 virus using molecular dynamics simulation. Yuri Peterson from MUSC co-developed a neural network model called PharML.Bind to evaluate repurposable drug candidates for COVID-19; MUSC and Hewlett Packard Enterprise open sourced the effort. Christopher Rickett, Kristi Maschhoff, and Sreenivas Rangan Sukumar leveraged the Life Sciences Knowledge Graph to compare COVID-19 proteins to more than 4 million known protein sequences of disease-causing organisms, expanding the target pool and improving the odds of finding molecules that modulate the COVID-19 disease target. During their research, they noticed an unusual data point: Individuals exposed to COVID-19 who had been previously vaccinated for tetanus displayed fewer and less severe symptoms. This discovery was published in Medical Hypotheses.
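The sequence-comparison step described above can be illustrated in miniature. The sketch below ranks database proteins against a query by shared k-mer content, a deliberately simple stand-in for the knowledge-graph comparison the team actually ran over millions of real sequences; all sequence strings and names here are invented:

```python
def kmers(seq, k=3):
    """Return the set of overlapping k-length substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def jaccard(a, b, k=3):
    """Jaccard similarity of two sequences' k-mer sets (0.0 to 1.0)."""
    ka, kb = kmers(a, k), kmers(b, k)
    return len(ka & kb) / len(ka | kb) if ka | kb else 0.0

# Toy query and database; sequences are invented for illustration.
query = "MFVFLVLLPLVSSQCVNLT"
database = {
    "protein_A": "MFVFLVLLPLVSSQCVDLT",   # nearly identical to the query
    "protein_B": "MKTAYIAKQRQISFVKSHF",   # unrelated sequence
}

# Rank candidates by similarity to the query, most similar first.
ranked = sorted(database, key=lambda name: jaccard(query, database[name]),
                reverse=True)
print(ranked[0])  # protein_A is the closer match
```

Real pipelines use far more sensitive alignment and graph methods, but the shape of the problem is the same: score every known sequence against the target, then follow up on the top hits.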

The vast amount of COVID-19 material available posed a huge challenge for researchers. There are millions of known proteins to model against the critical COVID spike protein, over 30 terabytes of accumulated medical data to process, millions of publications that could be key to discovery, and more than 150 billion facts of medical knowledge available for analysis. Human researchers couldn't possibly digest this amount of information. And it can take months to model protein structures and drug interactions with a single target molecule. Having a supercomputer available to evaluate molecules in seconds enabled the team to move forward with its rapid-response project.

Exascale computing will give scientists breakthrough capabilities by helping them do their work faster, but speed is just the beginning. By allowing them to generate models orders of magnitude faster than they used to, exascale can start to change the very way that science is done.

Shortening tasks from two hours to 10 seconds allows researchers to stay in the moment of scientific discovery and analysis. Instead of walking away and coming back hours or days later, they can keep innovating and trying new things. Teams can bring in new information faster, tap more sources, and test more scenarios—all while keeping their eyes on the next piece of the puzzle.

Please read: Q&A: Argonne National Laboratory's Rick Stevens talks about 'the biggest, baddest' computer in the world

More power means more innovative solutions across industries. In financial services, time is money. Supercomputers performing at exascale can reduce latencies enough to provide an edge in trading transactions. In manufacturing, high-powered systems can determine if a new 3D print material will stand up to day-to-day swings in temperature and pressure. The shipping industry can use exascale computers to determine which paths to take based on water temperature, wind direction, political factors, and prevailing costs.

With every new generation of high-performance computing, the design challenges get harder. Even so, exascale is not the final milestone computers will achieve. The momentum for high-performance computing is strong, and the appetite for AI-driven applications is even stronger.

This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.