
The space station's new supercomputer

NASA and HPE have launched a supercomputer built from off-the-shelf hardware into space.

Aboard the Dragon spacecraft that a SpaceX Falcon 9 rocket recently carried to the International Space Station (ISS) was the first commercial off-the-shelf (COTS) high-performance computer to orbit the Earth. It also happens to be among the first supercomputers in space.

Surely, you think, there have been supercomputers in space before? But despite fictional portrayals, such as 2001's HAL and the Starship Enterprise's computers, the hardware on board real spacecraft is far from the newest and best around.

The ISS, for example, is run using two sets of three command and control multiplexer demultiplexer (C&C MDM) computers. One is Russian, the other American. Their processors are 20 MHz Intel 80386SX CPUs, right out of 1988.

Other spacecraft are no better off. New Horizons, which recently zoomed by Pluto, is powered by a 12 MHz Mongoose-V CPU, based on the MIPS R3000. You may remember its much faster ancestor: the chip that took you on adventures in the original Sony PlayStation, circa 1994.

Radiation hardening required

Why do spacecraft use such outdated processors? By NASA's rules, not just any computer can go into space. Computer components must be radiation hardened, especially the CPUs. Otherwise, they tend to fail due to the effects of ionizing radiation. The customized processors undergo years of design work and then more years of testing before they are certified for spaceflight. For instance, NASA expects its next-generation general-purpose processor, an ARM A53 variant you may know from the Raspberry Pi 3, to be ready to run in 2020.

More modern hardware can be found in space: there are laptops on the ISS, 2007-vintage ThinkPad T61ps running Debian, Scientific Linux, and Windows 7. They are being replaced by HP ZBook 15s, which will run the same mix of Linux distributions and Windows 10. The Linux systems act as remote terminals to the C&C MDM computers, while the Windows systems are used for email, the web, and recreation.

But those laptops are not high-availability, high-performance computers. They're ordinary laptops that are expected to fail. Indeed, there are more than a hundred laptops on the ISS and most are defunct.

But for serious data crunching—the scientific reasons we’re doing experiments on the space station—we want high-performance computing.

Hewlett Packard Enterprise asked a simple question: Why can't we just use ordinary hardware? After all, most commercial electronics can survive radiation doses in silicon of at least 500 to 1,000 rads, far more than the ISS endures. This launch is the result.

The traditional way to radiation-harden a spacecraft computer is to add redundancy to its circuits or use insulating substrates instead of the usual semiconductor wafers on chips. That’s expensive and time-consuming. HPE scientists believe that simply slowing down a system in adverse conditions can avoid glitches and keep the computer running.

The objective of the year-long experiment is a supercomputer that can operate in space without years of hardening work, built from off-the-shelf servers and custom software. HPE and NASA are attempting to harden the computer in software, by throttling its speed when there's a solar flare or other radiation hazard. If that can be achieved, says Dr. Eng Lim Goh, the mission’s principal investigator and vice president and chief technology officer for SGI, Hewlett Packard Enterprise, “astronauts will have the latest computers available, increasing their onboard capabilities—a factor that becomes essential as they travel farther from Earth.”
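The throttling idea can be sketched in a few lines of Python. Everything here is hypothetical: the threshold, the frequencies, and the sensor and control interfaces are made up for illustration, since the actual flight software and its parameters are not public.

```python
# Sketch of radiation-aware throttling. All thresholds, readings, and
# frequencies below are invented for illustration only.
RADIATION_LIMIT = 500.0    # hypothetical dosimeter threshold, counts/sec
SAFE_FREQ_GHZ = 2.4        # full-speed operation
THROTTLED_FREQ_GHZ = 1.2   # reduced clock during a radiation event

def choose_frequency(radiation_level):
    """Pick a clock speed: slow down when radiation is high, so that
    transient upsets are less likely to corrupt a computation."""
    if radiation_level > RADIATION_LIMIT:
        return THROTTLED_FREQ_GHZ
    return SAFE_FREQ_GHZ

def monitor_step(read_dosimeter, set_cpu_frequency):
    """One pass of the monitoring loop: read the sensor, apply the policy."""
    freq = choose_frequency(read_dosimeter())
    set_cpu_frequency(freq)
    return freq
```

In a real system, `set_cpu_frequency` would map onto an operating system facility such as Linux's cpufreq limits; the point of the experiment is that the mitigation lives entirely in software rather than in hardened silicon.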

To find out if this technique works, the ISS supercomputer is shadowed by an earthbound SGI/HPE twin system in Chippewa Falls, Wisconsin.


Computers going fast: 5 miles per second

The space station orbits Earth (and sees a sunrise) once every 92 minutes. Now it has a computer that can keep up.

The HPE supercomputer on the ISS uses two-socket “pizza-box” servers from the HPE Apollo 40 family with Broadwell-class processors and a high-speed 56 gigabit per second interconnect. It's not going to give serious competition to the world's fastest supercomputer—China's Sunway TaihuLight—but with a speed of more than 1 teraFLOP, this really is a supercomputer.

In 1997, ASCI Red at Sandia National Laboratories became the first supercomputer to break the 1 teraFLOP barrier. It did so with 7,264 nodes occupying 104 cabinets, and it consumed 850 kilowatts of electricity. The ISS supercomputer will achieve the same class of performance with two nodes in a quarter of a rack, on less than a kilowatt of power.
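The efficiency gap those figures imply is easy to work out. Taking 1 teraFLOP for both machines, per the numbers above:

```python
# Energy efficiency, in FLOPS per watt, from the figures in the text.
ascired_flops = 1.0e12     # ASCI Red: ~1 teraFLOP
ascired_watts = 850_000.0  # 850 kilowatts
iss_flops = 1.0e12         # ISS system: >1 teraFLOP
iss_watts = 1_000.0        # <1 kilowatt

ascired_eff = ascired_flops / ascired_watts  # ~1.2 million FLOPS per watt
iss_eff = iss_flops / iss_watts              # ~1 billion FLOPS per watt
gain = iss_eff / ascired_eff
print(round(gain))  # 850: roughly 850x more computation per watt
```

Two decades of Moore's law, compressed into a single ratio.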

You can't just plug this into the ISS's solar-array-charged 48-volt DC power supply. Instead, NASA-supplied power inverters convert that supply to the 110-volt AC the computer needs.

Cooling the supercomputer was another obstacle. “Typically, an HPE computer similar to this one would be air cooled. But for the ISS, HPE created (and the astronauts will be installing) a water-cooled ‘locker’—not your standard datacenter rack enclosure,” says Dave Petersen, the mission’s co-principal investigator for hardware and SGI’s product design and compliance engineer. However, the rest of the system could be bought from HPE and delivered to your office today. The point is to test ordinary COTS equipment, after all.

Just like its earthbound counterparts, the space-borne supercomputer will run Linux: specifically, standard Red Hat Enterprise Linux 6.8.

So what will really happen? We'll find out.

"We may discover a set of parameters with which a supercomputer can successfully run for at least a year without errors," says Dr. Mark R. Fernandez, the mission’s co-principal investigator for software and SGI's HPC technology officer. "Alternately, one or more components of the system will fail, in which case we will then do the typical failure analysis on earth. That will let us learn what to change to make the systems more reliable in the future."

If this software method of hardening the system works, Fernandez says, everyone will benefit. Today, when a spacecraft needs high-speed computing power, it must rely on earth-based computers. For anything close to real-time interactions, that means only low Earth orbit space vehicles, such as the ISS, can access fast computers. Companies spend years and millions of dollars getting their computer-based projects into space. The use of COTS equipment can revolutionize spaceflight, and science experiments that run in space, by enabling spacecraft to fly with near-current hardware instead of computing gear that's generations behind the state of the art.

Besides gaining the know-how needed to put COTS computers in space, HPE expects to reap other benefits. The software research will help HPE create systems that can deal with harsh conditions. For example, besides spacecraft, computers near nuclear reactors must also deal with radiation issues. Whatever lessons HPE learns will be applied across all its computer lines.

This move is part of a 30-plus-year relationship between HPE/SGI and NASA. For example, Pleiades, NASA's fastest supercomputer and the 15th fastest in the world, was built by SGI.

The end result? If all goes well, the first manned flight to Mars will carry powerful HPE COTS computers. We may yet live to see spacecraft with science-fiction computers within their hulls. I hope so.

Related link:

Supercomputing and the weather: How far we've come, and where we're going

This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.