Exascale computing: The Space Race of our time
Anyone who remembers the time before human beings stepped on the moon can recall the exhilaration of the challenge President John F. Kennedy made in 1961: "I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the moon and returning him safely to Earth."
Nothing in the interim has come close to the excitement of doing something so impossible on the face of it. But PathForward, the exascale computing challenge, comes close. We are going to make a computer so powerful and so fast that it will alter the way we live. If we succeed, life will be as different afterward as it was when we saw astronaut Neil Armstrong take "a giant leap for mankind."
Earlier this year, the annual list of the world’s fastest computers came out: The Chinese are responsible for the top two slots, and the third is held by the Swiss, knocking the U.S. down to the fourth spot. Now, the U.S. Department of Energy (DOE) has awarded six American companies shares of a $258 million grant pool in the pursuit of exascale computing. If successful, the PathForward program will put the U.S. at the head of this list.
The DOE seeks to make the U.S. the clear winner in supercomputing by creating a computer capable of exaFLOPS-level calculation by 2020. It has awarded substantial development funds to Hewlett Packard Enterprise, Advanced Micro Devices, IBM, Intel, NVIDIA, and Cray.
Nic Dubé, HPE’s chief strategist for HPC and lead system architect for exascale, believes it would be difficult to overstate the importance of the PathForward project. “This is the Space Race of our time,” he says.
Dragsters can’t corner
Exascale means one quintillion (10^18) calculations per second. That’s more than 10 times faster than TaihuLight, the fastest computer currently up and running. Located at the National Supercomputing Center in Wuxi, China, TaihuLight runs at 93 petaFLOPS, or 93 quadrillion calculations per second.
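For a sense of scale, the comparison works out like this. The 93 petaFLOPS figure is TaihuLight's rating as cited above; the rest is simple unit arithmetic:

```python
# Unit arithmetic: how an exascale machine compares with TaihuLight.
# 93 petaFLOPS is the figure cited for TaihuLight; everything else
# is just powers of ten.
EXAFLOPS = 10**18        # 1 exaFLOPS = one quintillion FLOPS
PETAFLOPS = 10**15       # 1 petaFLOPS = one quadrillion FLOPS

taihulight = 93 * PETAFLOPS
speedup = EXAFLOPS / taihulight
print(f"An exascale machine would be about {speedup:.1f}x TaihuLight")  # ~10.8x
```

So "more than 10 times faster" is the honest rounding: 1,000 petaFLOPS against 93.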
China promises to beat the U.S. to exascale by unveiling a prototype by the end of 2017 and a functional supercomputer by 2020. But several researchers at grantee companies believe that, in the end, what China can do is not that critical because the way they do it might limit its utility. Doing this project right is “not just a matter of FLOPS, but also of more memory bandwidth and more system balance,” says Dubé. “Otherwise, you’re going to build an ‘exaflopper.’ You’re going to build a dragster.”
With a dragster, Dubé says, you can go really fast in a straight line, but good luck if you hit a corner. “We’d much rather have a Formula One than a dragster,” he says. “We need to corner.”
In other words, if you build a computer whose raison d'être is to go really fast, that’s what you’ll have—that and nothing more. But computers don’t exist simply to go really fast. They should help deal with the big, complex issues.
“To make real discoveries, it takes more than just speed, more than just compute,” agrees Gabriel Loh, fellow design engineer at AMD Research. “Having information but lacking the ability to access it efficiently doesn’t help.”
“FLOPS will happen; the challenge is to make it capable,” says Dubé. This is more difficult to do if you care about the energy you use to power it and the pollution it leaves behind. But you have to care if you want more than speed.
What will the first exascale computer look like?
It is impossible to say exactly what shape the first functional exascale computer will take. Most awardees are close-mouthed about their strategy, and the DOE declines to comment. But it might be possible to put together an outline by examining some of the exascale obstacles and some possible ways around them.
There are some fundamental physical limitations standing in the way of exascale, according to Loh. The end of Moore’s Law will eventually make it impossible to keep adding transistors to chips. Dennard scaling is also heading for a cliff, Loh points out, meaning a chip’s energy demands will eventually stop decreasing as you shrink its transistors’ linear dimensions.
In other words, just doing the same old thing is not going to get us to exascale, so an exascale computer will not look like what we’ve gotten used to in terms of its architecture.
HPE has expended a great deal of effort in finding ways around the end of Moore’s Law. The keystone development for the company is Memory-Driven Computing, which places a large shared pool of memory at the center of the architecture, where any processor can access it. That memory is scalable up to 4,096 yottabytes, 1,000 times the amount of digital information that currently exists. Additionally, HPE is working on photonics, the use of light to send information, which costs roughly the same whether the data travels 10 centimeters or 1,000 meters, as well as on neuromorphic and approximate computing.
Like it or not, according to Loh, the era of pulling up the drawbridge is in its twilight. There are too many solutions, too many specialties, and too many customers, even within the DOE, to go it alone. That may explain why PathForward initially has six companies working on the project instead of just one or two.
“We’re going to see far greater interaction between parties across all levels of this stack,” says Loh. “We’ll see more co-design so we can create systems flexible enough without sacrificing performance and other critical capabilities.” This “whole system view” is something AMD has been practicing for years.
According to Dubé, cost is also an issue. The DOE has a fiscal duty to make exascale usable on a wide scale, strengthening the U.S. economy first and then the broader global economy, and it has impressed that duty on its vendors.
“They don’t want us to build one-offs that are not shareable,” says Dubé. “They want legs and growth.” While some countries might approach such an undertaking with a “money is no object” attitude, vendors involved with PathForward know it is a significant issue indeed.
Currently, the price of information transfer is approximately $10 per gigabit per second. With the photonics apparatus that HPE uses on its Memory-Driven Computing project and the ring resonators research underway, the target is to bring that cost to 10 cents per gigabit per second, a huge savings. Any approach that reduces cost while maintaining performance goals is going to shape the final form of the PathForward exascale computer.
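As a back-of-the-envelope check on that target, using the round dollar figures quoted above (not precise vendor pricing):

```python
# Rough check on the interconnect cost targets discussed above.
# Both prices are the article's round numbers, not actual HPE figures.
cost_today = 10.00       # dollars per Gb/s of bandwidth today
cost_target = 0.10       # dollars per Gb/s targeted with photonics

reduction = cost_today / cost_target
print(f"The photonics target is a {reduction:.0f}x cost reduction")  # 100x
```

A two-orders-of-magnitude drop in the price of moving data is the kind of change that reshapes system design, which is why interconnect cost figures so heavily in the final architecture.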
Why exascale matters
Although Loh and Dubé speak for themselves, they say the same thing when asked about the utility of exascale computing.
“Full-scale planet modeling at high resolution!” says Dubé. “Climate change is the key problem of our time, and exascale computing will allow us to understand it at a level and with a completeness we have not been able to.”
Exascale can give us accuracy at scale with our Earth-based system models, according to Loh. “Climate change is personally key for me,” says Loh. “It is important for my children and my children’s children.”
Modeling at exascale will help us diagnose climate change precisely, and therefore lead to policies with a tangible and measurable effect on global warming. But that’s not all it can do, according to Loh. It is likely to lead to new techniques in solar energy and make non-solar energy sources and distribution grids much more efficient, reducing not just the burden on the earth but also our individual energy bills.
Another issue Dubé considers important in the exascale context is whole-brain modeling. Exascale may prove the linchpin for ambitious undertakings like the Human Brain Project and DZNE’s neurodegenerative disease research, and could ultimately contribute to cures for diseases like Parkinson’s and Alzheimer’s.
Finally, because artificial intelligence is “data thirsty,” the ability to compute at exascale will allow us to develop machine learning at a much higher rate and “train” intelligent systems more quickly and more precisely, removing burdens from workers and keeping workplaces safer.
Regardless of what shape the PathForward exascale machine might take, and on top of what practical improvements might be possible with such a device, the mere fact that we are working on this “Space Race” brings us together not just as fellow citizens but as citizens of the world. It’s exciting. And we need that kind of excitement in this kind of world.
Exascale: Lessons for leaders
- Developing exascale computing is on par with reaching the moon.
- Full-scale planet modeling is a key application of exascale computing and is central to understanding and addressing climate change.
- Exascale must be nimble, not just really fast, to solve complex world problems.
Related links:
NASA achieves optimal energy efficiency with its first modular supercomputer
How HPE Persistent Memory Furthers Our Vision of Memory-Driven Computing
NASA and HPE Push the Envelope on HPC in Space
Persistent Memory: The Ideal Technology for Optimizing Enterprise Workloads