
Podcast: Inside story on HPC's role in the Bridges research project at Pittsburgh Supercomputing Center

Learn how high-performance computing and memory-driven architectures are democratizing the benefits from advanced research and business analytics in this discussion with PSC directors Nick Nystrom and Paola Buitrago.

[Editor's note: This podcast was recorded on Nov. 21, 2017.]

At the Pittsburgh Supercomputing Center, researchers have built a capability that puts the power of high-performance computing (HPC) into the hands of organizations that have no experience using HPC, enabling them to use artificial intelligence and big data in ways not possible before. Called Bridges, the project aims to "democratize" supercomputing with low-cost access to processing that wouldn’t otherwise be available to industry and researchers, says Paola Buitrago, head of AI and big data at PSC. Use cases include everything from disease and genomics research to analysis of all the information on the Internet.

In this HPE Voice of the Customer podcast hosted by Dana Gardner of BriefingsDirect, Buitrago and Nick Nystrom, interim director of research, explain the Bridges project, and how it's helping businesses and organizations across sectors gain the advantages of supercomputing via HPC and big data as a service. 

Dana Gardner: Welcome to the next edition of the BriefingsDirect Voice of the Customer podcast series. I'm Dana Gardner, principal analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on digital transformation success. Stay with us now to learn how agile businesses are fending off disruption in favor of innovation.

Our next HPC success story interview examines how the Pittsburgh Supercomputing Center (PSC) has developed a research computing capability, Bridges, and how that's providing new levels of analytics, insights, and efficiencies.

We'll now learn how advances in IT infrastructure and memory-driven architectures are combining to meet the new requirements for artificial intelligence (AI), big data analytics, and deep machine learning.

Here to describe the inside story on building Bridges is Dr. Nick Nystrom, interim director of research at Pittsburgh Supercomputing Center. Welcome.

Nick Nystrom: Good morning, Dana, I’m pleased to be here.


Gardner: We're also here with Paola Buitrago, director of AI and big data at Pittsburgh Supercomputing Center. Welcome.

Paola Buitrago: Thank you, Dana. It’s a pleasure to be here.


Gardner: Let's begin with what makes Bridges unique. What is it about Bridges that is possible now that wasn't possible a year or two ago?

Equal opportunity HPC

Nystrom: Bridges allows people who have never used HPC before to use it for the first time. These are people in business, social sciences, different kinds of biology and other physical sciences, and people who are applying machine learning to traditional fields. They're using the same languages and frameworks that they've been using on their laptops, and now that is scaling up to a supercomputer. They are bringing big data and AI together in ways that they just haven't done before.

Gardner: It almost sounds like the democratization of HPC. Is that one way to think about it?

Nystrom: It very much is. We have users who are applying tools like R and Python and scaling them up to very large memory—up to 12 terabytes of random access memory (RAM)—and that enables them to gain answers to problems they've never been able to answer before.

Gardner: There is a user experience aspect, but I have to imagine there are also underlying infrastructure improvements that also contribute to user democratization.

Nystrom: Yes, democratization comes from two things. First, we stay closely in touch with the user community, and we look at this opportunity from their perspective first. What are the applications that they need to run? What do they need to do? From there, we worked with hardware vendors to understand what we had to build, and what we came up with is a very heterogeneous system.

We have three tiers of nodes, with 128 gigabytes, 3 terabytes, and 12 terabytes of RAM, all coupled to the same very-high-performance fabric. We were the first installation in the world with the Intel Omni-Path interconnect, and we designed it in a custom topology, developed at PSC expressly to make big data available as a service to all of the compute nodes with equally high bandwidth and low latency, and to let these new things become possible.
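[Editor's note: As a rough illustration of how a tiered-memory design like this gets used, the sketch below routes each job to the smallest node tier that fits its memory footprint. This is a hypothetical policy written for this article, not PSC's actual scheduler; the tier sizes come from the discussion above.]

```python
# Hypothetical sketch: route a job to the smallest Bridges-style memory
# tier that can hold it. Tier sizes (128 GB, 3 TB, 12 TB) match the
# node classes described in the interview.

TIERS_GB = [128, 3 * 1024, 12 * 1024]  # regular, large, extreme memory nodes

def pick_tier(required_gb: float) -> int:
    """Return the RAM (in GB) of the smallest tier that fits the job."""
    for tier in TIERS_GB:
        if required_gb <= tier:
            return tier
    raise ValueError(f"No node tier can hold {required_gb} GB")

# A 500 GB genome assembly would land on a 3 TB (3072 GB) node:
print(pick_tier(500))
```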

Gardner: What other big data analytics benefits have you gained from this platform?

Bridges’ new world

Buitrago: A platform like Bridges enables that which was not available before. There's a use case that was recently described by Tuomas Sandholm. [Sandholm is a professor and director of the Electronic Marketplaces Lab at Carnegie Mellon University. The use case involves strategic machine learning using Bridges HPC to play and win at heads-up, no-limit Texas Hold'em poker as a capabilities benchmark.]

This is a perfect example of something that could not have been done without a supercomputer. A supercomputer enables massive and complex models that can actually give an accurate answer.

Right now, we are collecting a lot of data. There's a convergence of having great capabilities in compute and storage, and also having the big data needed to answer really important questions. Having a system like Bridges allows us to, for example, analyze all that there is on the Internet, and put the right pieces together to answer big societal or healthcare-related questions.

Gardner: The Bridges platform has been operating for some months now. Tell us some other examples or use cases that demonstrate its potential.


Dissecting disease through data

Nystrom: Paola mentioned use cases for healthcare. One example is a National Institutes of Health Center of Excellence in the Big Data to Knowledge program called the Center for Causal Discovery.

They are using Bridges to combine very large data in genomics, such as lung-imaging data and brain magnetic resonance imaging (MRI) data, to come up with real cause-and-effect relationships among those very large datasets. That was never possible before because the algorithms didn't scale. Such scaling is now possible thanks to very-large-memory architectures and because the data is available.

At CMU and the University of Pittsburgh, we have those resources now, and people are making discoveries that will improve health. There are many others. One of these is on the Common Crawl dataset, which is a very large web-scale dataset that Paola has been working with.

Buitrago: Common Crawl is a dataset that collects all the information on the Internet. The data is currently available on the Amazon Web Services (AWS) cloud in S3. They host these datasets for free. But if you want to actually analyze the data, to search or create any index, you have to use their computing capabilities, which is a good option. However, given the scale and the size of the data, this is something that requires a huge investment.

So we are working on actually offering the same dataset, putting it together with the computing capabilities of Bridges. This would allow the academic community at large to do such things as build natural language processing models or better analyze the data—and they can do it fast, and they can do it free of charge. So that's an important example of what we are doing and how we want to support big data as a whole.
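[Editor's note: Common Crawl distributes its web crawls as WARC (Web ARChive) files, publicly readable from the "commoncrawl" bucket on S3. The toy parser below pulls the headers out of one uncompressed WARC record held in memory, just to show the record structure; a real pipeline on a system like Bridges would stream gzipped WARC files and use a purpose-built library such as warcio. The sample record and function are illustrative, not part of PSC's tooling.]

```python
# Illustrative sketch: parse the headers of one uncompressed WARC
# record. WARC records begin with a version line, then "Name: value"
# header lines, then a blank line (CRLF CRLF) before the payload.

SAMPLE = (
    b"WARC/1.0\r\n"
    b"WARC-Type: response\r\n"
    b"WARC-Target-URI: http://example.com/\r\n"
    b"Content-Length: 13\r\n"
    b"\r\n"
    b"Hello, crawl!"
)

def parse_warc_headers(record: bytes) -> dict:
    """Split one uncompressed WARC record into a dict of its headers."""
    head, _sep, _body = record.partition(b"\r\n\r\n")
    lines = head.decode("utf-8").split("\r\n")
    headers = dict(line.split(": ", 1) for line in lines[1:])
    headers["version"] = lines[0]  # e.g. "WARC/1.0"
    return headers

hdrs = parse_warc_headers(SAMPLE)
print(hdrs["WARC-Target-URI"])
```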

Gardner: So far, we’ve spoken about technical requirements in HPC, but economics plays a role here. Many times we've seen in the evolution of technology that as technologies become commercially available off the shelf, they can be deployed in new ways that just weren’t economically feasible before. Is there an economics story here to Bridges?

Low-cost access to research

Nystrom: Yes, with Bridges, we have designed the system to be extremely cost-effective. That's part of why we designed the interconnect topology the way we did. It was the most cost-effective way to build that for the size of data analytics we had to do on Bridges. That is a win that has been emulated in other places.

So, what we offer is available to research communities at no charge, and that's for anyone doing open research. It's also available to the industrial sector at a very attractive, cost-recovery rate. So, we do work with the private sector, and we are looking to do even more of that in the future.

Also, the future systems we are looking at will leverage lots of developing technologies. We're always looking at the best available technology for performance, for price, and then architecting that into a solution that will serve research.

Gardner: We’ve heard a lot recently from Hewlett Packard Enterprise (HPE) about their advances in large-scale memory processing and memory-driven architectures. How does that fit into your plans?

Nystrom: Large, memory-intensive architectures are a cornerstone of Bridges. We're doing a tremendous amount of large-scale genome sequence assembly on Bridges. That's individual genomes, and it’s also metagenomes with important applications such as looking at the gut microbiome of diabetic patients versus normal patients, and understanding how the different bacteria are affected by and may affect the progression of diabetes. That has tremendous medical implications. We’ve been following memory technology for a very long time, and we’ve also been following various kinds of accelerators for AI and deep learning.

Gardner: Can you tell us about the underlying platforms that support Bridges that are currently commercially available? What might be coming next in terms of HPE Gen10 servers, for example, or with other HPE advances in the efficiency and cost reduction in storage? What are you using now, and what do you expect to be using in the future?

Ever-expanding memory, storage

Nystrom: First of all, I think the acquisition of SGI by HPE was very strategic. Prior to Bridges, we had a system called Blacklight, which was the world’s largest shared-memory resource. Blacklight taught us how productive shared memory can be for new communities, in terms of human productivity. We can’t scale smart humans, and so that’s essential.

In terms of storage, there are tremendous opportunities now for integrating storage-class memory, greater use of flash solid-state drives (SSDs), and other storage tiers. We’ve always architected our own storage systems, but now we are working with HPE to think about what we might do for our next round of this.

Gardner: For those out there listening and reading this information, if they hadn’t thought that HPC and big data analytics had a role in their businesses, why should they think otherwise?

Nystrom: From my perspective, AI is permeating all aspects of computing. The way we see AI as important in an HPC machine is that it is being applied to applications that were traditionally HPC only, things like weather and protein folding. Those were apps that people used to run on just big iron.

Now, they are integrating AI to help them find rare events, to do longer term simulations in less time. And they’ll be doing this across other industries as well. These will be enterprise workloads where AI has a key impact. It won’t necessarily turn companies into AI companies, but they will use AI as an empowering tool to make what they already do, better.

Gardner: An example, Nick?

Nystrom: A good example of the way AI is permeating other fields is what people are doing at the Institute for Precision Medicine, [a joint effort between the University of Pittsburgh and the University of Pittsburgh Medical Center], and the Carnegie Mellon University machine learning and computational biology departments.

They are working together on a project called Big Data for Better Health. Their objective is to apply state-of-the-art machine learning techniques, including deep learning, to integrated genomic patient medical records, imaging data, and other things, and to really move toward realizing true personalized medicine.

Gardner: We’ve also heard a lot recently about hybrid IT. Traditionally, HPC required an on-premises approach. Now, to what degree does HPC as a service make sense in order to take advantage of various cloud models?

Nystrom: That’s a very good question. One of the things that Bridges makes available through the democratizing of HPC is big data as a service and HPC as a service. And it does that in many cases by what we call gateways. These are web portals for specific domains.

At the Center for Causal Discovery, which I mentioned, they have the Causal Web. It’s a portal, it can run in any browser, and it lets people who are not experts with supercomputers access Bridges without even knowing they are doing it. They run applications with a supercomputer as the back end.

Another example is the Galaxy Project and Community Hub, which are primarily for bioinformatic workflows but also other things. The main Galaxy instance is hosted elsewhere, but people can run very large memory genome assemblies on Bridges transparently—again, without even knowing. They don’t have to log in, they don’t have to understand Linux; they just run it through a web browser, and they can use HPC as a service. It becomes very cloud-like at that point.

Super-cloud supercomputing

Buitrago: Depending on the use case, an environment like the cloud can make sense. HPC can be used for an initial stage, if you want to explore different AI models, for example. You can fine-tune your AI and benefit from having the data close. You can reduce the time to start by having a supercomputer available for only a week or two. You find the right parameters, you get the model, and then when you are actually generating inferences, you can go to the cloud and scale there, which supports high peaks in user demand. So, cloud and traditional HPC are complementary, each suited to different use cases, environments, and solutions.

Gardner: Before we sign off, a quick look to the future. Bridges has been here for over a year; let's look to a year out. What do you expect to come next?

Nystrom: Bridges has been a great success. It's very heavily subscribed, fully subscribed, in fact. It seems to work; people like it. So we are looking to build on that. We're looking to extend that to a much more powerful engine where we’ve taken all of the lessons we've learned improving Bridges. We’d like to extend that by orders of magnitude, to deliver a lot more capability—and that would be across both the research community and industry.

Gardner: And using cloud models, what should we look for in the future when it comes to a richer portfolio of big-data-as-a-service offerings?

Buitrago: We are currently working on a project to make data more available to the general public and to researchers. We are trying to democratize data and let people do searches and inquiries and processing that they wouldn’t be able to do without us.

We are integrating big datasets that go from web crawls to genomic data. We want to offer them paired with the tools to properly process them. And we want to provide this to people who haven’t done this in the past, so they can explore their questions and try to answer them. That’s something we are really interested in, and we look forward to moving into a production stage.

Gardner: I'm afraid we’ll have to leave it there. We've been examining how the Pittsburgh Supercomputing Center has developed a research capability, Bridges, and how that's providing new levels of analytics, insights, and efficiencies. And we've learned how advances in IT infrastructure and HPC architectures are combining to meet new requirements, for such uses as AI, big data analytics, and deep learning.

So, please join me in thanking our guests, Dr. Nick Nystrom, interim director of research at the Pittsburgh Supercomputing Center. Thank you.

Nystrom: Thank you.

Dana Gardner: We've also been here with Paola Buitrago, director of AI and big data at the Pittsburgh Supercomputing Center. Thank you.

Buitrago: Thanks, Dana.

Gardner: And thanks also to our audience for joining this BriefingsDirect Voice of the Customer digital transformation success story. I’m Dana Gardner, principal analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-sponsored interviews. Thanks again for listening. Please feel free to pass this along in your IT community, and do come back next time.

Related links:

NASA achieves optimal energy efficiency with its first modular supercomputer

This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.