National leadership in supercomputing and AI matters. Here’s what needs to happen next.
Maintaining a steadfast commitment to accelerating innovation
- The CHIPS and Science Act recognizes that supercomputing leadership should be a national priority
- Public-private cooperation will help unlock the potential of supercomputing and advance the exascale era
The CHIPS and Science Act is a welcome development in powering the U.S. into a new era of computing, but it’s only an on-ramp—not the whole highway.
If the bill signed into law by President Biden on Aug. 9 is to deliver on its potential and help supercomputing and AI unlock a new era for scientific discovery and innovation, policymakers will need to keep their hands on the wheel.
Much of the attention on the CHIPS Act has focused on its incentives to rebuild domestic semiconductor manufacturing, but its provisions for next-generation supercomputing and AI are equally important. Like chip fabrication, it’s an area in which the U.S. let early advantages fade and then vanish as foreign competitors accelerated their own efforts. More than a point of national pride, American excellence in supercomputing is an economic and national security imperative that requires federal and private funding to sustain.
It’s one of the reasons I’m joining the Information Technology and Innovation Foundation (ITIF) in Washington, DC, this week as a panelist to talk about the steps policymakers need to take to keep the United States at the leading edge of this globally competitive, truly foundational technology. ITIF is releasing a new report exploring the promise of supercomputing in the exascale era, and along with Senator Marsha Blackburn and representatives from both the public and private sectors, we’ll discuss the report’s findings and examine some of the latest cutting-edge applications of supercomputing technology.
Earlier this summer, the Frontier supercomputer HPE built for the U.S. Department of Energy’s Oak Ridge National Laboratory reached a huge milestone when it broke through the exascale speed barrier and hit a full 1.1 exaflops, faster than the next seven systems on the TOP500 list combined. But supercomputing leadership in general, and exascale computing in particular, isn’t just about putting a lot more zeroes in throughput metrics. It’s about an acceleration in the progress of projects that deliver real-world outcomes.
And Frontier is doing that with exceptional energy efficiency, ranked at the top of TOP500’s companion Green500 list of the world’s most energy-efficient supercomputers.
The ability to process vast amounts of data at speeds orders of magnitude faster than before enables a universe of previously impossible projects. Older supercomputers could only model one component of a system—say, one of the compressor stages of a turbofan jet engine. Exascale computing can model the entire power plant, including how it’s affected by external variables such as altitudes, temperatures, weather conditions and flight regimes.
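To put some rough numbers on that jump, here is a minimal back-of-envelope sketch in Python. The grid sizes, operation counts, sustained-efficiency fraction and machine speeds are all illustrative assumptions, not measurements of Frontier or of any real simulation code; the point is only how quickly full-system modeling outgrows petascale machines.

```python
# Back-of-envelope: why modeling a whole engine, not just one component,
# pushes simulations toward exascale. Every number here is an assumption
# chosen for illustration, not a benchmark of any real code or system.

PETAFLOP = 1e15  # floating-point operations per second
EXAFLOP = 1e18

def hours_to_solve(grid_points, time_steps, ops_per_point_step,
                   machine_peak_flops, sustained_fraction=0.05):
    """Estimated wall-clock hours, assuming a fixed fraction of peak is sustained."""
    total_ops = grid_points * time_steps * ops_per_point_step
    return total_ops / (machine_peak_flops * sustained_fraction) / 3600

# Hypothetical single compressor stage: ~1e9 grid points, 1e5 time steps.
component = hours_to_solve(1e9, 1e5, 1e4, 10 * PETAFLOP)

# Hypothetical full engine plus external conditions: ~1e11 points, 1e6 steps.
full_on_petascale = hours_to_solve(1e11, 1e6, 1e4, 10 * PETAFLOP)
full_on_exascale = hours_to_solve(1e11, 1e6, 1e4, 1.1 * EXAFLOP)

print(f"Component model, 10-petaflop system: {component:10,.1f} hours")
print(f"Full engine,     10-petaflop system: {full_on_petascale:10,.1f} hours")
print(f"Full engine,     1.1-exaflop system: {full_on_exascale:10,.1f} hours")
```

On these assumed figures, the full-engine run falls from weeks on a petascale machine to a few hours on an exascale one; real codes will scale differently, but the logic of the gap is the same.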
Exascale supercomputers like Frontier will play critical roles in computationally intensive projects such as developing next-generation vaccines, designing resiliency at scale in the electrical grid, human genome sequencing, higher-resolution and longer-term weather forecasting, and building massive machine-learning models to analyze the output of those and other research activities.
When applied to tasks of national significance like those, even tiny steps in efficiency unlocked by supercomputing can deliver tremendous leaps at scale.
Consider the drought conditions in the American Southwest, where water resource managers rely on the summer rains to plan for and respond to the water crisis. But those rains have been notoriously hard to predict—until now. Using supercomputing and machine learning, scientists have developed a new method to predict summer rainfall across Arizona and New Mexico months in advance, enabling officials to better manage resources and address the worsening drought.
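For readers curious what that looks like in practice, below is a deliberately tiny sketch of the general idea: learn a statistical link between springtime climate predictors and the following summer’s rainfall, then apply it months ahead of the monsoon. It uses synthetic data and a plain ridge regression, and it is not the researchers’ actual model, which draws on far richer data and far larger machines.

```python
# Toy illustration of seasonal rainfall prediction with machine learning.
# Synthetic data throughout; this is NOT the actual published method, only a
# sketch of mapping spring climate conditions to summer rainfall.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical predictors for 40 past springs, e.g. a sea-surface-temperature
# anomaly, a soil-moisture index and a large-scale circulation index.
n_years, n_features = 40, 3
X = rng.normal(size=(n_years, n_features))

# Synthetic "observed" summer rainfall (mm): a linear signal plus noise.
true_weights = np.array([15.0, -8.0, 5.0])
y = 120.0 + X @ true_weights + rng.normal(scale=10.0, size=n_years)

# Fit a ridge-regularized linear model via the normal equations.
lam = 1.0
weights = np.linalg.solve(X.T @ X + lam * np.eye(n_features),
                          X.T @ (y - y.mean()))
intercept = y.mean()

# Forecast the coming summer from this spring's (hypothetical) conditions,
# months before the rain would actually fall.
this_spring = np.array([0.8, -1.2, 0.3])
forecast = intercept + this_spring @ weights
print(f"Forecast summer rainfall: {forecast:.0f} mm")
```

The real work, of course, involves vastly more data and compute, which is where supercomputing comes in.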
Other governments are already acting accordingly—China, the E.U. and Japan are all represented among the top 10 entries of the current TOP500 list. (HPE happened to build four of those top 10 supercomputers.) They know a strong supercomputing capability isn’t just a matter of industrial competitiveness but, ultimately, of digital sovereignty.
The CHIPS Act recognizes that, but it’s not a self-driving policy vehicle.
Much of the funding outlined in the CHIPS Act is authorized but not yet appropriated; it won’t materialize without action by future sessions of Congress to budget that money specifically, including a small but significant sum directed to the Advanced Scientific Computing Research (ASCR) program to fund more public-private partnerships like those behind Frontier and other upcoming exascale systems.
We also need to ensure supercomputing is a distributed resource. Accelerating innovation at scale can’t happen if exascale systems remain confined to certain parts of the country, especially given supercomputing priorities such as developing AI that’s inclusive and responsible. Pushing these resources out across the country will also increase the chances for creative sparks to jump between public and private poles.
Finally, we will need talented, trained people to work with this hardware. We need more STEM-oriented high schools, we need computer science to have a spot in every high school curriculum, and we need more CS departments in universities.
These are broad-based measures, bigger than budgeting new funds to particular research institutions. But the return on investment to society will be exponential if those of us in private industry and our partners in government maintain a steadfast commitment to accelerating innovation. To keep up with our supercomputing innovation journey, visit us at HPE Supercomputing.