
An IT analyst's review of The Machine

Paul Teich, principal analyst at Tirias Research, shares his view of the future of The Machine and artificial intelligence, and what you can expect from the deployment of these technologies.

Paul Teich is a man who is fascinated by technology and the possibilities that the latest advanced technologies carry with them. As principal analyst at Tirias Research, his job is to understand technology at the most detailed level. Recently, Paul discussed some of the breakthrough technologies he sees from Hewlett Packard Labs, including The Machine, HPE’s vision for the future of computing.

Paul doesn’t speak just to how he sees The Machine impacting computer technology, but also to the research coming out of Labs that is becoming available in the latest generation of products and shaping new ones.

From the importance of in-memory computing, to NVDIMM hardware, to silicon photonics changing the way systems interconnect, Paul shares his views not only on the future of The Machine and what it means for AI and deep learning, but also on the planned and unintended consequences of deploying these technologies.

Paul also takes a few minutes to opine about what next-generation data centers will look like and his outlook for IT architectural changes over the next 15 to 20 years. Caught on the floor at the HPE Discover conference, Paul draws all this information together and makes the individual pieces understandable to the business user.

 

Transcript — An IT analyst’s review of The Machine

I'm here at this event talking with HPE and Hewlett Packard Labs about next-generation technologies.

Q: What are the most exciting technologies coming out of HPE Discover 2017 in Las Vegas?

Most importantly, I think it is The Machine. So The Machine architecture is the next-generation test bed that HPE and Hewlett Packard Labs have set up to examine large memory systems and the future of large-scale non-volatile technologies.

Q: How is The Machine changing the industry?

The Machine is designed to host very large non-volatile memory spaces, and what this does is it changes the equation for software developers, for application development.

The reason it changes the equation is we spent the last 50 years—50, 60, 70 years—of computer architecture assuming that memory is ephemeral, it disappears when you take the power away, and that we don't have enough of it, so we develop memory virtualization technologies. As it turns out, you design applications much differently when you have large pools of memory that never go away, that never disappear. You don't have to worry about maintaining them. It takes less power, and it also has computational advantages when you don't have to summarize what you're doing every once in a while to pass it to a different part of the application.
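To make that contrast concrete, here is a minimal sketch of the two programming styles, using a memory-mapped file in Python as a stand-in for a byte-addressable non-volatile memory region. The file path, record layout, and counter example are illustrative assumptions, not The Machine's actual programming model.

# Sketch only: contrasts "serialize-and-save" with "update in place in
# persistent memory". The mmap'ed file stands in for byte-addressable NVM;
# the counter layout is a made-up example, not The Machine's API.
import json
import mmap
import os
import struct

# --- Conventional style: ephemeral memory, periodic serialization ---------
counters = {"requests": 0, "errors": 0}

def checkpoint(path="counters.json"):
    # State must be summarized and written out, or it is lost on power-off.
    with open(path, "w") as f:
        json.dump(counters, f)

# --- Persistent-memory style: the working data *is* the durable data ------
PM_PATH = "counters.pm"            # stand-in for an NVM-backed region
RECORD = struct.Struct("<QQ")      # two 64-bit counters

if not os.path.exists(PM_PATH):
    with open(PM_PATH, "wb") as f:
        f.write(RECORD.pack(0, 0))

with open(PM_PATH, "r+b") as f:
    pm = mmap.mmap(f.fileno(), RECORD.size)
    requests, errors = RECORD.unpack(pm[:RECORD.size])
    RECORD.pack_into(pm, 0, requests + 1, errors)   # update in place
    pm.flush()                                      # persistence point
    pm.close()

In the second style there is no separate serialization step and no reload on restart; the structure you compute against is the structure that survives power loss.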

Q: How is Hewlett Packard Labs innovating with this technology?

In order to deploy The Machine, Hewlett Packard Labs has had to invent new technologies. So they've been instrumental in driving Gen-Z, a memory fabric technology standard, forward. They've also been developing silicon photonics. I found this unusual in that most of the folks I see developing silicon photonics are in the communications industry or in the chip industry, but Hewlett Packard Labs isn't really either of those. Ostensibly, they're helping Hewlett Packard, HPE, develop system-level technologies. It's gone, to me, way beyond what I see most system developers working on. The network technology, the large-scale memory and memory fabric, and the idea of bringing compute to data and memory instead of moving data to where the compute is are all fundamentally going to change the way we look at developing applications and accelerating applications at runtime in the future.

Q: What other technologies intrigue you at the event?

Another thing that I saw here that's, to me, very fascinating is that everybody's talking about deep learning these days. We have graphics processing units doing deep learning acceleration. We have things like tensor processing units doing deep learning acceleration. Although HPE and Hewlett Packard Labs have talked about the Memristor for the longest time, I've seen some early demonstrations here on using Memristors to do machine learning and deep learning applications. The fact that they're here, they're demonstrating a capability, and that it's kind of a blend of silicon and systems technologies is, again, very advanced R&D to help move the state of the industry forward.
              

Q: How is HPE applying Hewlett Packard Labs research to new products?

Over the last couple of years, as Hewlett Packard Labs and HPE have developed some of these core technologies, we've seen HPE, for instance, participate in the Gen-Z consortium to the point at which it's been modeled with ASICs and FPGAs and is actually starting to be deployed in silicon. What that does is enable some of these larger pools of non-volatile memory (NVM) technologies. Gen10 can host more NVM and larger pools of memory than previous generations. That's due to the research and development that Hewlett Packard Labs is doing.

In terms of neuromorphic computing, and some would say finally commercializing the Memristor, at this point it will help direct where some customers think of deploying deep learning technologies. At the current stage of deep learning deployments, it's running on GPUs, CPUs, and DSPs, and those have a certain power consumption profile when running some of these advanced algorithms. The idea that you can use Memristors, which are non-volatile, which operate in the analog domain, and which can do very heavy-duty deep learning processing, is changing the direction of where customers are looking at deploying these deep learning inference technologies.
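The appeal of a memristor crossbar for inference is that a matrix-vector multiply happens in place: conductances store the weights, input voltages drive the rows, and the summed column currents produce the dot products by Ohm's and Kirchhoff's laws. The snippet below is a purely numerical sketch of that idea; the layer sizes, differential-pair encoding, and noise level are invented for illustration and do not model HPE's demonstration hardware.

# Numerical sketch of analog in-memory matrix-vector multiply on a
# memristor crossbar. Weights are stored as conductances; the dot product
# emerges from summed column currents (I = G * V). All values illustrative.
import numpy as np

rng = np.random.default_rng(0)

weights = rng.normal(size=(64, 128))            # trained layer weights
x = rng.normal(size=128)                        # input activations

# Map signed weights onto two non-negative conductance arrays
# (a common differential-pair encoding), then add device variability.
g_pos = np.clip(weights, 0, None)
g_neg = np.clip(-weights, 0, None)
noise = lambda g: g * (1 + rng.normal(scale=0.02, size=g.shape))

# "Analog" result: column currents from the positive and negative arrays.
i_pos = noise(g_pos) @ x
i_neg = noise(g_neg) @ x
y_analog = i_pos - i_neg

y_digital = weights @ x                         # exact digital reference
print("mean abs error:", np.mean(np.abs(y_analog - y_digital)))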

If I look at The Machine and the technologies involved in building The Machine prototype, silicon photonics is one of the high-promise areas that I see. Silicon photonics, and the X1 connector right now, is something HPE has designed for operational efficiency and low cost. The idea that you have a snap-on plastic connector that can be mass-produced and is very reliable helps commercialize the VCSEL technology behind this generation of photonics. As this generation of photonics gets commercialized, HPE and Hewlett Packard Labs are doing multi-mode transmission, which essentially means four colors. It's almost like a printer: you've got four colors going down one fiber at the same time, and each of those modes, those colors, is a 25-gigabit connection. You've got 100 gigabits going down one fiber. That is much better than larger bundles of single-mode fibers can do.
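As a quick back-of-the-envelope check of the lane arithmetic described above, the sketch below assumes four 25 Gbit/s wavelengths per fiber (the figures quoted in the interview) and a purely illustrative chassis uplink target.

# Lane arithmetic for a wavelength-multiplexed link: four 25 Gbit/s
# "colors" share one fiber, versus one lane per fiber.
lanes_per_fiber = 4
gbits_per_lane = 25

per_fiber = lanes_per_fiber * gbits_per_lane
print(f"Per fiber: {per_fiber} Gbit/s")                        # 100 Gbit/s

# Fibers needed for a 1.6 Tbit/s chassis uplink (illustrative target).
target_gbits = 1600
print("Multiplexed fibers:", -(-target_gbits // per_fiber))      # 16
print("Single-lane fibers:", -(-target_gbits // gbits_per_lane))  # 64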

It's reducing the cost, it's reducing the complexity, it's making it more operational, so I think that's likely to be one of the first areas that we'll see The Machine break into overall systems architecture.

Q: How will silicon photonics enable new technologies?

Silicon photonics, in particular this X1 connector working with VCSELs, will help enable chassis-level and rack-level fiber deployments. We see a lot of fiber in data centers today, rack to rack and running across the length of the data center, but the economics of copper have still ruled for short distances. What we're trying to do is get to the point at which we can push much better bandwidth down inexpensive fiber optics locally, from chip to chip and from chassis to chassis. That's where X1 and the subsequent silicon photonics will take over and drive new economies of scale.

Part of the problem in systems design today is that copper cables are thick, and thick cables impede air flow. Impeding air flow means that you need bigger fans. Bigger fans take more electricity and generate more sound. When you can route very thin fiber cables through the chassis and through the rack instead, it changes the thermals of the system: it changes the air flow characteristics, saves power, saves weight, and is easier to deploy.

Q: What are the unintended consequences of new technology?

When we talk about unintended consequences of developing new technologies, it's not always apparent that, for instance, thinner cables are going to improve air flow and save power, but these things happen.

As you look at bringing compute to memory and using high-bandwidth connections, at first copper but moving to photonics, it makes systems lighter, it improves air flow. It saves operational power. You can cool it more effectively. Not only that, you don't have to send data when you don't need to send data. There are two things happening with large memory systems. Number one is improving the networking technology so that you can move data more effectively. Two is you don't have to move data when you have compute close to data, like The Machine's prototype architecture, so you end up saving energy, saving time, and saving operational costs by a) not moving data, and b) doing it really fast when you have to.
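Here is a toy illustration of the "move compute to data" point: instead of shipping a whole dataset across the fabric and reducing it centrally, you run the small reduction where the data lives and ship back only the result. The node layout, shard sizes, and aggregation are made up for illustration and are not The Machine's programming model.

# Toy contrast between "move data to compute" and "move compute to data".
import numpy as np

rng = np.random.default_rng(1)
# Each "node" holds a shard of readings next to its own memory pool.
nodes = [rng.normal(size=1_000_000) for _ in range(8)]

FLOAT_BYTES = 8

# (a) Data to compute: pull every shard to one place, then reduce.
pulled = np.concatenate(nodes)
total_a = pulled.sum()
bytes_moved_a = pulled.size * FLOAT_BYTES

# (b) Compute to data: run the reduction where each shard lives,
#     then move only the per-node partial results.
partials = [shard.sum() for shard in nodes]      # executes "at" each node
total_b = sum(partials)
bytes_moved_b = len(partials) * FLOAT_BYTES

print(f"data-to-compute moved {bytes_moved_a:,} bytes")
print(f"compute-to-data moved {bytes_moved_b:,} bytes")
print("same answer:", np.isclose(total_a, total_b))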

Q: What is the future of data center design?

Future data center design will be a combination of higher density and lower power, and the lower power will come from many factors. One is non-volatile memories that don't take power to keep data in memory the way current DRAM does. Two is the right choice of compute for the right task. You may use an x86 processor, an ARM processor, or neuromorphic computing to process data for specific applications. All of those processors can hang off of, and access, that same large pool of memory. You don't have to have one or the other; you can have a vast pool of memory with different kinds of application accelerators on it. Those accelerators save power while they also compute faster.
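A minimal sketch of that shared-pool idea follows: several processor types work against one pool of data, and each task is routed to the type best suited for it. The routing table, task names, and stand-in functions are invented for illustration.

# Sketch of heterogeneous compute over one shared memory pool.
import numpy as np

shared_pool = np.random.default_rng(2).normal(size=10_000)  # one big pool

def on_cpu(data):          # general-purpose path
    return float(data.mean())

def on_gpu(data):          # stand-in for a throughput-oriented accelerator
    return float((data ** 2).sum())

def on_neuromorphic(data): # stand-in for an inference accelerator
    return int((data > 0).sum())

ROUTE = {"summarize": on_cpu, "transform": on_gpu, "classify": on_neuromorphic}

for task in ("summarize", "transform", "classify"):
    # Every processor sees the same pool; only the compute moves.
    print(task, "->", ROUTE[task](shared_pool))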

Then you get the networking technologies that, when you have to transfer data across the system, do so faster and without using large, bulky, heavy copper wires. That has its own economics at scale, because copper is getting more expensive and glass just isn't. As we do more fiber, fiber gets less expensive. The challenge there is in cost-effective and operationally robust connectors: connectors that you can connect and disconnect through as many duty cycles as are required, that won't break down, and that will always align. That seems mundane, but that's really cool, fundamental research in some of the work HPE is doing.

If we go far into the future and look at the data center, say, 15 or 20 years from now: I think the next three to five years are one of those fascinating transition points, phase transitions in the industry, where the architectures we have today are not the architectures we'll be using in three to five to seven years. We're kind of in the middle of the transition; it hasn't gelled yet. That's why we have experiments like The Machine, to kind of bird dog ... I'm from Texas, so sorry for the colloquialism. To go look at where we think the future state is. The data center of the future is not homogeneous. You'll have these large pools of memory, and you'll have a mix of processors for a given task.

A public cloud might have a fairly even mix of processors and graphics processors, and neuromorphic computing, and maybe at some point, quantum computers. They'll be a mix, a fairly general mix. If you're operating a data center for a specific industry, if you have IoT customers and you're looking at perhaps manufacturing deployments, then in terms of driving analytics, you might have more GPU power. You might have more specialized computing, maybe FPGAs or more exotic special-purpose accelerators. That will all be determined by the application workload that you expect to run. It won't be homogeneous, and the heterogeneous mix is really going to be application dependent.

HPE and Hewlett Packard Labs are investing in next-generation data center architecture. When I say data center architecture, I mean everything from the chip, the network, and the storage and memory level up through systems design. In the future, we can look at designing an entire data center the way we look at designing a chip now, and it takes that level of investment. I don't see very many manufacturers in the industry putting forth that basic R&D investment across all three of those areas: storage, networking, and computing.

Related link:

Leveraging Deep Learning for Fraud Detection

This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.