ARM in the cloud
The ARM architecture is a RISC design from ARM Holdings, which was recently acquired by Japanese tech giant SoftBank. Unlike CPU stalwarts Intel and AMD, ARM doesn't manufacture its own processors. Instead, it designs the cores and licenses those designs to chip makers, which can then add their own intellectual property to differentiate their versions from those of other licensees. ARM's architecture is now in its eighth revision (ARMv8), so it isn't a newcomer to the business.
ARM licensees have taken the chip in many different directions, from high-performance CPUs used in Apple and Samsung phones, to network switches and gateways, to network controller boards. ARM's 800-plus licensees are now moving the CPU into new areas—in particular, the data center and the Internet of Things (IoT).
Initial missteps in ARM technology
Just a few years ago, certain entrepreneurs got the idea to use low-power chips from ARM and Intel in highly dense, low-power servers dedicated to simple functions like file and print or serving HTML pages.
In 2011, Hewlett Packard Enterprise announced Project Moonshot, which would use ARM processors in one of its ultra-dense servers. A startup called SeaMicro came out with a 10U server that held 384 dual-core Atom processors. Another startup, Calxeda, announced plans to make a server-oriented ARM processor. This required a lot of work, because the smartphone-era ARM was a streamlined design. To run in servers, it would need server-class features such as error correction, reliability, scalability, and availability, not to mention ARM-specific ports of application software and developer tools.
It all crashed and burned. HPE has put the ARM processors on the back burner in favor of Xeon and AMD's Opteron. "We are taking a market driven approach, responding to customer input," says Tom Bradicich, vice president and general manager for servers and converged edge systems at HPE.
Calxeda failed because "they didn't get the 64-bit memo," says Nathan Brookwood, a research fellow at Insight64 who follows the CPU market. Calxeda chose to go forward with a 32-bit processor design, which is fine for a smartphone but a bad idea for a server. A 32-bit chip can address only 4GB of memory, while a 64-bit server can address up to 16 exabytes of memory, in theory.
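The gap between those two address spaces is simple arithmetic: 2^32 bytes versus 2^64 bytes. A quick sketch (in Java, purely for illustration) makes the scale difference concrete:

```java
public class AddressSpace {
    public static void main(String[] args) {
        // A 32-bit chip can form 2^32 distinct byte addresses: 4 GiB.
        long bytes32 = 1L << 32;
        System.out.println("32-bit: " + (bytes32 >> 30) + " GiB");

        // A 64-bit chip can form 2^64 addresses: 16 EiB (2^64 / 2^60).
        // 2^64 overflows a signed long, so compute the ratio with doubles.
        double eib = Math.pow(2, 64) / Math.pow(2, 60);
        System.out.println("64-bit: " + (long) eib + " EiB");
    }
}
```

In practice, real 64-bit CPUs wire up far fewer physical address bits than 64, which is why the 16-exabyte figure is a theoretical ceiling rather than a shipping configuration.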
Even though the industry had long since moved past 32 bits, Calxeda saw a 32-bit chip as a way to get into the tent, with a 64-bit upgrade to follow, says Brookwood. By the time it was ready with a 32-bit chip and started to talk about 64 bits, "they spent all their money on a 32-bit chip and nobody wanted it," he says.
Some of the Calxeda technology lives on in a company called Silver Lining Systems (SLS), which purchased the assets in bankruptcy. SLS is a subsidiary of AtGames, an online game streaming service. AtGames bought the Calxeda assets for its cloud gaming and streaming servers because it had already built a server prototype using the Calxeda chip, according to Ping-Kang Hsiung, CEO of SLS and AtGames. "We decided to get involved helping bring that tech back to life," he says.
AtGames created a new chip focused solely on the switch fabric for its streaming servers. Hsiung says the 32-bit chips used in the fabric switch are sufficient for most gaming apps, so SLS is using the fabric technology to deliver streaming game experiences to the largest possible audience. AtGames will sell you a fabric for audio and video streaming over the Internet powered by Calxeda technology, but it's just another option the company offers, not a core business.
Rethinking ARM processor use cases
The ARM market is not sitting still after these setbacks but rather is reassessing its target. "Everywhere you see a display that would be a computer is a target market for an ARM application processor," says Tom Hackenberg, principal analyst for embedded processors at IHS Markit. Hardware applications that require an embedded intelligence and are not already committed to the x86 architecture are the biggest growth channels.
That means smart TVs, in-car displays, kiosks, and interactive displays on vending machines. All of these are target markets for an ARM apps processor, he says. This extends into the IoT market, where ARM has a head start and no real competition.
"When we think about where the computing environment is trending right now, we're going to opposite ends of the spectrum in growth," says Hackenberg. "At the lowest end, we're talking about adding intelligence to any device where it would make sense to gather data or sense information. That's a microcontroller scenario. ARM is the No.1 architecture in microcontroller apps."
On the server side, ARM is taking a shot at the midrange server market rather than the low-end segment, as initially planned. Instead of targeting general-purpose computing, companies like Qualcomm are producing application processors that target specific use cases involving simple, short computations that are often repeated, such as big data analytics, media application access, streaming, and simple database access. These use cases are a core strength of the RISC architecture.
One thing the new generation of ARM processors has that Calxeda didn't is software. Linux has been ported many times, and Microsoft is now experimenting internally with a Windows Server port. Java is available, as are major development tools and languages. With the software in place, the underlying CPU is abstracted away from the user.
ARM also hopes to conquer the supercomputing world. Japan's Fujitsu is currently developing a next-gen supercomputer, a 1,000-petaFLOP monster running ARM processors. The launch has already been pushed out two years, from 2020 to 2022, because engineers need more time to design and test the new processors going into the system. It's a reflection of the challenge of taking a chip designed for smartphones and using it in supercomputers. You can't just repurpose the chip. Instead, a lot of work is needed to add supercomputing features such as caching, scaling, and high availability.
Leading ARM technology companies
Brookwood argues that there are two serious ARM players in the data center: Qualcomm and Cavium. "Both of those guys are basically offering server-based chips that have a lot of memory bandwidth and a lot of integrated I/O capabilities and respectable performance that enables them to compete with the existing standard, Intel x86," he says.
Cavium has built MIPS-based systems on a chip (SoCs) for high-performance communications workloads. Over the past few years, it has switched over to ARM, so it has a lot of experience and technology from which to draw in making its chips.
In 2014, Cavium launched a 48-core server processor called ThunderX. Larry Wikelius, vice president of the ecosystem software and solutions group at Cavium and a veteran of Calxeda, says the climate is different for Cavium because it waited a few years before entering the ARM server market.
"With startups, you can be too early or too late," Wikelius says. "We were too early with Calxeda. We had to do a lot of the heavy lifting to get that started. Cavium's timing was much better, but also Cavium has the silicon expertise to deliver a fully capable server."
By waiting a few years for the 64-bit ARM core ecosystem to mature, Cavium is able to target typical Linux workloads, from databases and analytics to scale-out databases, Web services, and Java enablement with Oracle, rather than menial tasks like file and print, as originally planned.
Java doesn't care about the platform it runs on, so porting from x86 is fairly easy. "With the strong Java base, it's easy to bring apps over to an ARM platform because that is architecture-independent," Wikelius notes.
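That architecture independence is easy to see in practice: the same compiled bytecode runs unchanged on x86 and ARM, and only the JVM beneath it differs. A minimal illustration:

```java
public class ArchCheck {
    public static void main(String[] args) {
        // The .class file produced from this source is identical everywhere;
        // only the JVM's report of the underlying hardware changes.
        String arch = System.getProperty("os.arch");
        System.out.println("Running on: " + arch);
        // Typically reports "amd64" on x86-64 and "aarch64" on 64-bit ARM.
    }
}
```

Compile this once, copy the class file to an ARM server with a JVM installed, and it runs as-is; no recompilation or porting is required.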
Meanwhile, Qualcomm recently began sending pre-production samples of its new 48-core Centriq 2400 server processor to vendors. "We are focused on general-purpose CPUs," says Ram Peddibhotla, vice president of product management at Qualcomm.
"We see the market in four chunks: cloud, where the growth is, telco, HPC, and enterprise," he adds. "Broadly, research points to enterprise declining, with the other three growing. We think ARM is able to address all of these markets. It's a function of how software gets ready to address those markets."
Hackenberg says ARM licensees need to prove that this new architecture is going to take off. "It needs a lot of considerations. You need software to run on it, which is a challenge. One of the reasons ARM was so successful to begin with is they did a lot of work in smartphones and tablets to get the software ready," he says.
Hackenberg sees the data center as a growth opportunity for ARM vendors. However, he doesn't see the ARM vendors becoming a dominant force or pushing x86 architectures out of the data center any time soon. "I think they will capture additional market share, limited to moderate performance systems, small devices, and networking devices," he says. "You are still going to see x86 servers in big data centers and high-performance computing."
ARM in IoT
At the opposite end of the compute range is IoT, which includes everything from connected refrigerators to wearables to industrial machinery. ARM is already very strong in embedded devices, with a whole line of processors (the M series) dedicated to embedded systems.
But IoT is not a monolithic technology. An industrial machine sending maintenance feedback is a very different animal than a driverless car, with vastly different processing requirements. Vendors that specialize in CPUs for highly vertical, specialized embedded technologies aren't necessarily the right providers for more complex challenges like autonomous vehicles.
"A driverless car needs a heck of a lot of computing capability, along with GPU and image recognition capability," says Brookwood. "That stuff will come from companies that specialize in automotive and from players like NXP, which is in the process of being absorbed into Qualcomm."
Meanwhile, Cavium's IoT focus is on the gateway and edge device handling data coming in and processing it. "We will handle more compute at the edge because of the SoCs, not just ARM cores but integrated accelerators as well," says Wikelius.
Bradicich predicts that enterprise computing will increasingly move processing, storage, and management out of the data center to the network edge, in order to be closer to where the data originates. He expects to see a mix of x86 and ARM-based servers in edge computing. "As the world moves to compute more out on the edge, we're seeing more commingling of ARM technology," he adds. "However, in the data center, there is little interest [in ARM] and take-up has been slow."
The wildcard in all of this? SoftBank. The CEO of the giant Japanese conglomerate has publicly stated that he bought ARM to get in on the IoT wave. Thus far, that's proved a plus for ARM, say the analysts. "SoftBank has lots of money," notes Brookwood. "Through SoftBank, ARM has been able to staff up to address markets they otherwise wouldn't be able to address, and so far they have left them alone. Now if SoftBank starts to meddle, that might be a problem, but I haven't seen anything to suggest that."
Hackenberg concurs. "They feel like they can take more risk in more platforms," he says. "SoftBank is giving them resources to move in directions where they couldn't move before." He notes that some ARM vendors are planning to enter the AI market, while others are improving their performance and penetration in the data center space.
ARM in the cloud: Lessons for leaders
- ARM technologies are already in widespread use in places you may not realize, such as networking equipment.
- Success in the server market will depend on putting the right software tools in place.
- ARM will succeed in the back end of the cloud by focusing on specific workloads. Stand-alone CPUs will grow with the IoT market.
- ARM core technology is believed to be as competitive as Intel x86 for server work.
This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.
Andy Patrizio has been a technology journalist for 25 years, covering a wide range of topics for many publications, including InformationWeek, Byte, Dr. Dobb's Journal, and Computerworld.