Developing a Supercomputer to Process More Than a Quintillion Calculations per Second: Q&A with HPE’s Bill Mannel

June 15, 2017 • Blog Post • Staff Writer

IN THIS ARTICLE

  • Learn what exascale computing is, why it matters and how it will enable a new era of computational capabilities.
  • Understand the challenges with achieving exascale computing.
  • Learn what exascale computing will look like in five years.

Why the U.S. Department of Energy awarded HPE a research grant to develop new supercomputer architecture

The U.S. Department of Energy made a significant investment in the next frontier of computing research today by awarding Hewlett Packard Enterprise a research grant to develop a new supercomputer architecture that will process more than one quintillion calculations per second. That's a 1 followed by 18 zeros. HPE has been a pioneer in exascale computing, which will make the models used to solve some of the world's most complex problems more accurate and significantly faster. For example, the models used in genomics to sequence genes will be able to process data much faster than is possible today, making personalized, predictive medicine a reality. Similarly, weather simulations will become more accurate by crunching through models faster, providing more advance warning of hurricanes and tornadoes. This grant from the Exascale Computing Project will fund research into computing solutions that support everything from understanding the impact of climate change and designing better energy sources to nuclear science and cosmology. With exascale, there are virtually no limits on the scale of the problem that can be solved.


This project will require active research into technologies that don't exist yet - HPE will be working to increase compute speed while reducing energy consumption and the physical size of the machines by an order of magnitude. To do this, HPE's vision for Memory-Driven Computing, an architecture that flips traditional computer architecture on its head, will be essential. Memory-Driven Computing puts memory, not processing, at the center of the computer, making it faster and more efficient. On May 16, 2017, HPE unveiled the latest prototype from this project, the world's largest single-memory computer. The Department of Energy's Exascale Computing Project will continue the research and development of this computing concept, which also has broad appeal to enterprises with commercial workloads facing similar big data challenges.

In this article, Bill Mannel, Vice President and General Manager at HPE, shares what HPE will be building for the Exascale Computing Project and why achieving exascale matters for future innovation.

Q: What is exascale computing and why does it matter? How will it enable a new era of computational and scientific capabilities?

A: There are many big problems in our world that are simply not solvable with our current computing capabilities, such as simulating the internal combustion engine. You may be surprised to learn that, with today's computing infrastructure, we cannot fully model an engine across all the disciplines involved - structural mechanics, thermodynamics, fluid mechanics, magnetism, electrical behavior - and instead have to approximate them in the design. Exascale gives us the ability to solve currently unsolvable problems like this, with a resolution and accuracy not possible today, in industries ranging from medicine to geophysics. By using exascale-class systems and combining them with data-driven models, we can dramatically improve the accuracy of our predictions and make engines more efficient, potentially using less of our natural resources.

It's also significant that scientific discovery has moved to a data-driven model. There's increasing pressure toward convergence between scientific computing and data analytics. A new class of workloads has emerged, called high-performance data-intensive computing, which works with much bigger data sets - applications like search engines, biological analysis and astronomy.

These applications really stress conventional architecture because the amount of memory that any one processor can handle is relatively small. The computer spends a lot of time moving data around rather than actually working on the data. We need a new kind of architecture which breaks that bottleneck. We believe Memory-Driven Computing is the answer.
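To see why, consider a rough back-of-the-envelope comparison of time spent moving data versus time spent computing on it. The numbers below are purely hypothetical, chosen for illustration - they are not measurements of any particular system:

```python
# Hypothetical, illustrative numbers only - not benchmarks of any real system.
compute_rate = 1e12       # operations per second a node can sustain (1 TFLOPS)
link_bandwidth = 10e9     # bytes per second over a conventional interconnect
data_bytes = 8e9          # an 8 GB block of data held elsewhere
ops_per_byte = 10         # arithmetic operations performed per byte fetched

time_moving = data_bytes / link_bandwidth                    # ~0.8 s
time_computing = data_bytes * ops_per_byte / compute_rate    # ~0.08 s

print(f"moving data: {time_moving:.2f} s, computing: {time_computing:.2f} s")
# With these numbers the node spends ten times longer fetching the data than
# working on it - the kind of imbalance a memory fabric is meant to remove.
```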

 

Q: What does this Department of Energy research grant mean for Hewlett Packard Enterprise and the company's innovation roadmap?

A: Exascale refers to a new breed of supercomputers capable of processing data at extremely fast speeds - at least a million trillion mathematical operations per second, measured in "floating point operations per second", or FLOPS. To set the context, the supercomputing community reached 1 petaFLOPS (one thousand trillion FLOPS) in 2009, and 2016's most powerful supercomputer achieved 93 petaFLOPS. So reaching exascale means achieving more than a 10x improvement in computing capacity, without increasing power consumption, within five years. Being awarded the grant shows that our technology roadmap for high-performance computing systems is compelling to customers. It tells us that the Department of Energy looked at our proposal and said, "Yes, this looks like a very interesting direction to us as well." Because of the grant, we can get to the technology faster and realize a higher return on investment than if the project were solely funded by HPE.
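As a quick check on those figures, here is a minimal arithmetic sketch (plain Python written for this article, not taken from any HPE tooling):

```python
# Back-of-the-envelope arithmetic for the figures quoted above.
PETAFLOPS = 10**15   # one thousand trillion operations per second
EXAFLOPS = 10**18    # one quintillion operations per second

top_system_2016 = 93 * PETAFLOPS   # most powerful supercomputer of 2016
exascale_target = 1 * EXAFLOPS     # the exascale threshold

print(f"Improvement needed: {exascale_target / top_system_2016:.1f}x")
# -> about 10.8x, i.e. "more than a 10x improvement in computing capacity"
```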

 

Q: What are the challenges with achieving exascale computing?

A: For all the benefits that exascale computing will bring, there are still roadblocks that stand in our way today, which we'll look to overcome through the research grant from the Department of Energy's Exascale Computing Project. Some of the top problems include moving data fast enough between compute elements, increasing power efficiency, reducing the physical size of these systems, and making them cost-effective.

Perhaps the key challenge of scaling to exascale is to preserve the correct compute/communication balance. Microprocessors are becoming more powerful so quickly that traditional networking between them can't keep up.

What we bring to the table is a new way to communicate, using memory fabrics, which is the foundation of Memory-Driven Computing. Memory-driven doesn't necessarily imply that data is centralized, but rather that data is globally accessible in a very efficient way (that is, through the memory fabric). For some applications, a shared memory approach is best. For exascale, we're taking a distributed memory approach. Both are different implementation points of Memory-Driven Computing.
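The shared-memory end of that spectrum can be sketched in ordinary Python. The toy example below is only an analogy - it uses the standard multiprocessing.shared_memory module and NumPy, not HPE's memory fabric - but it shows the core idea: every worker operates on one globally accessible copy of the data rather than receiving its own copy over a link.

```python
# Toy analogy for globally accessible memory: two workers operate in place on
# one shared buffer instead of each receiving a copy of the data.
from multiprocessing import Process, shared_memory
import numpy as np

def worker(shm_name, start, stop):
    # Attach to the existing block by name - no copy of the data is made.
    shm = shared_memory.SharedMemory(name=shm_name)
    view = np.ndarray((stop - start,), dtype=np.float64,
                      buffer=shm.buf[start * 8:stop * 8])
    view *= 2.0          # work directly on the shared data
    shm.close()

if __name__ == "__main__":
    data = np.arange(1_000_000, dtype=np.float64)
    shm = shared_memory.SharedMemory(create=True, size=data.nbytes)
    shared = np.ndarray(data.shape, dtype=data.dtype, buffer=shm.buf)
    shared[:] = data     # one resident copy, visible to every worker

    half = len(data) // 2
    procs = [Process(target=worker, args=(shm.name, 0, half)),
             Process(target=worker, args=(shm.name, half, len(data)))]
    for p in procs:
        p.start()
    for p in procs:
        p.join()

    print(shared[:5])    # [0. 2. 4. 6. 8.]
    shm.close()
    shm.unlink()
```

A distributed-memory implementation applies the same principle across many nodes: data stays spread out, but the fabric makes it addressable in place rather than copied through a message layer.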

We are also working to develop new programming tools to harness all this power and hopefully make the systems easier to keep running at peak performance.

Q: Why is a Memory-Driven Computing architecture suitable for this Department of Energy project?

A: Data is growing explosively and is projected to reach 40 ZB worldwide by 2020. We came up with Memory-Driven Computing because it allows us to manage this deluge of data more efficiently and increase performance by keeping data in a single, quickly accessible place instead of breaking it into a hierarchy of storage locations.

Let's say that there are 10,000 commercial and military airplanes flying each day; each plane has many different sensors that collect data, which adds up to a tremendous amount of data. This data could be used to help improve fuel efficiency, design more reliable airplane parts and even prevent airplane crashes. However, with the way things have been done in the past, you simply can't process such a large amount of data to find a solution. The result has been to choose specific pieces of data for simulation, which often leads to a less-than-accurate conclusion depending on which pieces you use.

By using a Memory-Driven Computing architecture, you can process, analyze and manage all of the data and the associated simulations at a very large scale to deliver results faster, in time to make better decisions.
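To illustrate why analyzing only a slice of the data can mislead, here is a toy sketch with synthetic numbers (they stand in for the sensor readings described above and are not real flight data):

```python
# Toy illustration: estimating a fleet-wide fault rate from a 1% sample of
# sensor readings versus from the full data set. Synthetic data only.
import random

random.seed(0)
# Pretend 10,000 flights each report one anomaly score; 200 show a real fault.
full_data = ([random.gauss(1.0, 0.3) for _ in range(9_800)]
             + [random.gauss(4.0, 0.3) for _ in range(200)])
random.shuffle(full_data)

sample = random.sample(full_data, 100)   # what a subsampled analysis sees

def faults(readings):
    return sum(1 for r in readings if r > 2.5)

print(f"faults in full data set: {faults(full_data)}")   # the true count, ~200
print(f"faults in 1% sample:     {faults(sample)}")      # typically a handful
# Extrapolating the fault rate from the small sample can easily over- or
# under-state the true 2% rate; processing all of the data avoids that risk.
```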

In the last few years, we have seen the emergence of data-driven scientific discovery (the so-called "fourth paradigm", after experimental, theoretical, and computational science). This trend is causing a convergence of the data analytics and scientific computing fields, which have traditionally been separate. Memory-Driven Computing technologies are ideally suited to address the needs of this new style of computing by providing a unified set of mechanisms to access, transfer and manipulate data.

Q: This is Phase I of the project. What can we expect Phase II to look like?

A: Phase I is essentially a research project for the Department of Energy, which will culminate in new technology and some proof-of-concept hardware. In Phase II, we plan to take the outcome of that work and transfer the technology into commercially available systems. Phase III will be to build and deploy the first exascale system, with processors, memory and racks all connected through a memory-semantics fabric.

 

Q: What will innovation in exascale computing look like in five years?

A: I predict that we will start to see exascale-class systems within three years (roughly by 2020), but they will not be very efficient in terms of programming efficiency, power and footprint. Moreover, they might reach one exaFLOPS briefly, for a few selected programs that have been highly optimized with heroic efforts, but they will only be able to sustain three or four hundred petaFLOPS continuously and on more general applications. We're looking to comfortably exceed one exaFLOPS all day long, on a wide class of applications, with Memory-Driven Computing.

Within five years, I expect that we will start to see exascale systems that are much more efficient, which is what we are focusing on with this program. Most likely, the first exascale systems will run at around 40 percent efficiency, and then we will start seeing systems at 70 to 80 percent efficiency. Our ultimate goal is to create efficient exascale systems that can reliably process data and code, and that can be easily programmed for a broad class of applications.
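Those percentages tie back to the sustained figures mentioned above. Here is a small arithmetic sketch, assuming "efficiency" means sustained throughput as a fraction of a nominal one-exaFLOPS peak (an assumption made for illustration, not a statement of how HPE measures it):

```python
# Sustained throughput implied by the efficiency figures above, assuming a
# nominal 1 exaFLOPS (1,000 petaFLOPS) peak. Illustrative arithmetic only.
PEAK_PETAFLOPS = 1000

for efficiency in (0.40, 0.70, 0.80):
    sustained = PEAK_PETAFLOPS * efficiency
    print(f"{efficiency:.0%} efficient -> ~{sustained:.0f} petaFLOPS sustained")
# 40% of an exaFLOPS peak is ~400 petaFLOPS sustained - consistent with the
# "three or four hundred petaFLOPS" expected of the first systems.
```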

This is a really exciting time to be involved in supercomputing. I'm really looking forward to seeing our work come to reality, solving real problems for the benefit of all of us.

RELATED NEWS

U.S. Dept. of Energy Taps Hewlett Packard Enterprise's Machine Research Project to Design Memory-Driven Supercomputer

Press Release


HPE Delivers World's Largest Arm Supercomputer for U.S. Department of Energy (DOE)

Press Release