[Editor's note: This podcast was recorded on May 7, 2018.]
HudsonAlpha is a nonprofit research organization that supports hundreds of projects and generates 6 petabytes of new data every year. It leverages a mix of IT infrastructure to quickly deliver the compute power needed to meet the requirements of its users and drive competitive advantage.
The challenge, however, is to better manage the complexity of its hybrid IT environment while keeping costs in check. Listen to this HPE Voice of the Customer podcast hosted by Dana Gardner of BriefingsDirect to learn how the research institute is testing a new Hewlett Packard Enterprise solution, OneSphere, to gain greater visibility across resources via a simplified management interface.
Dana Gardner: Hello, and welcome to the next edition of the BriefingsDirect Voice of the Customer podcast series. I’m Dana Gardner, principal analyst at Interarbor Solutions, your host and moderator for this ongoing discussion on digital transformation success stories. Stay with us now to learn how agile businesses are fending off disruption—in favor of innovation.
Our next hybrid IT management success story examines how the nonprofit research institute HudsonAlpha improves how it harnesses and leverages a spectrum of IT deployment environments. We’ll now learn how HudsonAlpha has been testing a new HPE solution, OneSphere, to gain a common and simplified management interface to rule them all. Here to help explore the benefits of improved levels of multicloud visibility and process automation is Katreena Mullican, senior architect and cloud whisperer at HudsonAlpha Institute for Biotechnology in Huntsville, Alabama. Welcome, Katreena.
Katreena Mullican: Thank you, Dana. Thank you for having me as a part of your podcast.
Gardner: We’re delighted to have you with us. What’s driving the need to solve hybrid IT complexity at HudsonAlpha?
Mullican: The big drivers at HudsonAlpha are the requirements for data locality and ease of adoption. We produce about 6 petabytes of new data every year, and that rate is increasing with every project that we do.
We support hundreds of research programs with data and trend analysis. Our infrastructure has to iterate quickly to identify the approaches that are both cost-effective and the best fit for the needs of our users.
Gardner: Do you find that having multiple types of IT platforms, environments, and architectures creates a level of complexity that’s increasingly difficult to manage?
Gardner: You want to give people enough leash that they can go and roam and experiment, but perhaps not so much that you don’t know where they are or what they are doing.
Mullican: Right. “Software-defined everything” is our mantra. That’s what we aim to do at HudsonAlpha for gaining rapid innovation.
Gardner: How do you gain balance from too hard-to-manage complexity, with a potential of chaos, to the point where you can harness and optimize yet allow for experimentation, too?
Mullican: IT is ultimately responsible for the security and the uptime of the infrastructure. So it’s important to have a good framework on which the developers and the researchers can compute. It’s about finding a balance between letting them have provisioning access to those resources versus being able to keep an eye on what they are doing. And not only from a usage perspective, but from a cost perspective, too.
Gardner: Tell us about HudsonAlpha and its fairly extreme IT requirements.
Mullican: HudsonAlpha is a nonprofit organization of entrepreneurs, scientists, and educators who apply the benefits of genomics to everyday life. We also provide IT services and support for about 40 affiliate companies on our 150-acre campus in Huntsville, Alabama.
Gardner: What about the IT requirements? How do you fulfill that mandate using technology?
Mullican: We produce 6 petabytes of new data every year. We have millions of hours of compute processing time running on our infrastructure. We have hardware acceleration. We have direct connections to clouds. We have collaboration for our researchers that extends throughout the world to external organizations. We use containers, and we use multiple cloud providers.
Gardner: So you have been doing multicloud before there was even a word for multicloud?
Mullican: We are the hybrid-scale and hybrid IT organization that no one has ever heard of.
Gardner: Let’s unpack some of the hurdles you need to overcome to keep all of your scientists and researchers happy. How do you avoid lock-in? How do you keep it so that you can remain open and competitive?
Mullican: It’s important for us to keep our local data centers agnostic, as well as our private and public clouds. So we strive to communicate with all of our resources through application programming interfaces (APIs), and we use open source technologies at HudsonAlpha. We are proud of that. Yet there are a lot of possibilities for arranging all of those pieces. There are a lot [of services] that you can combine with the right toolsets, not only in your local data center but also in the clouds. If you put in the effort to write the code with that in mind—so you don’t lock into any one solution necessarily—then you can optimize and put everything together.
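Mullican's point about staying agnostic through APIs can be illustrated with a minimal sketch. This is not HudsonAlpha's actual code; the class and function names are hypothetical, and each backend stands in for whatever provider SDK or data center API would be called in practice. The idea is simply that calling code depends on a common interface rather than any one vendor's toolset.

```python
from abc import ABC, abstractmethod

class CloudProvider(ABC):
    """A provider-agnostic interface: each backend wraps its own API."""
    @abstractmethod
    def provision(self, name: str, cpus: int, memory_gb: int) -> dict: ...

class OnPremProvider(CloudProvider):
    def provision(self, name, cpus, memory_gb):
        # In practice this would call the local data center's management API.
        return {"provider": "on-prem", "name": name,
                "cpus": cpus, "memory_gb": memory_gb}

class PublicCloudProvider(CloudProvider):
    def __init__(self, cloud: str):
        self.cloud = cloud
    def provision(self, name, cpus, memory_gb):
        # In practice this would call the cloud vendor's SDK.
        return {"provider": self.cloud, "name": name,
                "cpus": cpus, "memory_gb": memory_gb}

def provision_anywhere(provider: CloudProvider, name: str,
                       cpus: int = 4, memory_gb: int = 16) -> dict:
    """Caller code stays identical regardless of where the workload lands."""
    return provider.provision(name, cpus, memory_gb)
```

Because nothing above the interface references a specific vendor, swapping the on-premises backend for a public cloud (or adding a new one) requires no change to the provisioning code that researchers or pipelines invoke.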
Gardner: Because you are a nonprofit institute, you often seek grants. But those grants can come with unique requirements, even IT use benefits and cloud choice considerations.
Mullican: Right. Researchers are applying for grants throughout the year, and now with the National Institutes of Health (NIH), when grants are awarded, they come with community cloud credits, which is an exciting idea for the researchers. It means they can immediately begin consuming resources in the cloud—from storage to compute—and that cost is covered by the grant. So they are anxious to get started on that, which brings challenges to IT. We certainly don’t want to be the holdup for that innovation. We want the projects to progress as rapidly as possible. At the same time, we need to be aware of what is happening in a cloud and not lose control over usage and cost.
Gardner: Certainly HudsonAlpha is an extreme test bed for multicloud management, with lots of different systems, changing requirements, and the need to provide the flexibility to innovate to your clientele. When you wanted a better management capability, to gain an overview into that full hybrid IT environment, how did you come together with HPE and test what they are doing?
Mullican: We’ve invested in composable infrastructure and hyperconverged infrastructure (HCI) in our data center, as well as blade server technology. We have a wide variety of compute, networking, and storage resources available to us. The key is: How do we rapidly provision those resources in an automated fashion? And the answer is not only for IT to be aware of those resources, but for developers to be as well. We have groups of developers dealing with bioinformatics at HudsonAlpha. They can benefit from all of the different types of infrastructure in our data center. What HPE OneSphere does is enable them to access—through a common API—that infrastructure. So it’s very exciting.
Gardner: What did HPE OneSphere bring to the table for you in order to be able to rationalize, visualize, and even prioritize this very large mixture of hybrid IT assets?
Mullican: We have been beta testing HPE OneSphere since October 2017, and we have tied it into our VMware ESX Server environment, as well as our Amazon Web Services (AWS) environment successfully—and that’s at an IT level. So our next step is to give that to researchers as a single pane of glass where they can go and provision the resources themselves.
Gardner: What might this capability bring to you and your organization?
Mullican: We want to do more with cross-cloud. Right now, we are very adept at provisioning within our data centers, provisioning within each individual cloud. HudsonAlpha has a presence in all the major public clouds—AWS, Google, Microsoft Azure. But the next step would be to go cross-cloud, to provision applications across them all.
For example, you might have an application that runs as a series of microservices. One microservice can take advantage of your on-premises data center, such as for local storage. Another piece could take advantage of object storage in the cloud. And yet another piece could run in a separate public cloud. The key here is that our developers and researchers—the end users of OneSphere—don’t need to know the specifics of provisioning in each of those environments. That level of expertise is not in their wheelhouse. In this new OneSphere way, all they know is that they are provisioning the application in the pipeline—and that’s what the researchers will use. Then it’s up to us in IT to come along and keep an eye on what they are doing through the analytics that HPE OneSphere provides.
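The cross-cloud pipeline Mullican describes can be sketched as a placement map: each microservice is assigned an environment and a storage tier, and a management layer translates those assignments into provider-specific calls. This is a hypothetical illustration, not OneSphere's actual API; the service names, targets, and helper function are all assumptions.

```python
# Hypothetical placement map for a three-stage genomics pipeline:
# each microservice is pinned to the environment that suits it best.
PLACEMENT = {
    "ingest":    {"target": "on-prem", "storage": "local-object-store"},  # near the sequencers
    "transform": {"target": "aws",     "storage": "s3-bucket"},           # elastic compute
    "publish":   {"target": "azure",   "storage": "blob-container"},      # collaborator access
}

def deploy_pipeline(placement: dict) -> list:
    """Turn the placement map into deployment requests; a management
    layer would translate each into the provider-specific call."""
    requests = []
    for service, spec in placement.items():
        requests.append(
            f"deploy {service} -> {spec['target']} (storage: {spec['storage']})"
        )
    return requests
```

From the researcher's point of view there is only one action, "provision the application in the pipeline"; the per-environment details live entirely in the map and the layer beneath it.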
Gardner: Because OneSphere gives you the visibility to see what the end users are doing, potentially, for cost optimization and remaining competitive, you may be able to play one cloud off another. You may even be able to automate and orchestrate that.
Mullican: Right, and that will be an ongoing effort to always optimize cost—but not at the risk of slowing the research. We want the research to happen and to innovate as quickly as possible. We don’t want to be the holdup for that. But we definitely do need to loop back around and keep an eye on how the different clouds are being used and make decisions going forward based on the analytics.
Gardner: There may be other organizations that are going to be more cost-focused, and they will probably want to dial back to get the best deals. It’s nice that we have the flexibility to choose an algorithmic approach to business, if you will.
Mullican: Right. The research that we do at HudsonAlpha saves lives, and the utmost importance is to be able to conduct that research at the fastest speed.
Gardner: HPE OneSphere seems geared toward being cloud-agnostic. They are beginning on AWS, yet they are going to be adding more clouds. And they are supporting more internal private cloud infrastructures, and using an API-driven approach to microservices and containers.
As an early tester, and someone who has been a longtime user of HPE infrastructure, is there anything about the combination of HPE Synergy, HPE SimpliVity HCI, and HPE 3PAR intelligent storage—in conjunction with OneSphere—that’s given you a "whole greater than the sum of the parts" effect?
Mullican: HPE Synergy and composable infrastructure is something that is very near and dear to me. I have a lot of hours invested with HPE Synergy Image Streamer and customizing open source applications on Image Streamer—open-source operating systems and applications. The ability to utilize that in the mix that I have architected natively with OneSphere, in addition to the public clouds, is very powerful, and I am excited to see where that goes.
Gardner: Any words of wisdom for others who have not yet gone down this road? What do you advise others to consider as they seek to better compose, automate, and optimize their infrastructure?
Mullican: It needs to start with IT. IT needs to take on more of a DevOps approach. As far as putting an emphasis on automation—and being able to provision infrastructure in the data center and the cloud through automated APIs—a lot of companies are probably still slow to adopt that. They are still provisioning with older methods, and I think it’s important that they make that shift. Then, once your IT department is adept with DevOps, your developers can begin feeding from that and using what IT has laid down as a foundation.
So it needs to start with IT. It involves a skill set change for some of the traditional system administrators and network administrators. But now, with software-defined networking (SDN) and with automated deployments and provisioning of resources, that’s a skill set that IT really needs to step up and master. That’s because they are going to need to set the example for the developers who are going to come along and be able to then use those same tools. That’s the partnership that companies really need to foster—and it’s between IT and developers.
And something like HPE OneSphere is a good fit for that, because it provides a unified API. On one hand, your IT department can be busy mastering how to communicate with their infrastructure through that tool. And at the same time, they can be refactoring applications as microservices, and that’s up to the developer teams. So both can be working on all of this at the same time.
Then when it all comes together with a service catalog of options, in the end, it’s just a simple interface. That’s what we want to provide for the researchers. They don’t have to think about all the work that went into the infrastructure; they are just choosing the proper workflow and pipeline for future projects.
Gardner: It also sounds, Katreena, like you are able to elevate IT to a solutions-level abstraction and that OneSphere is an accelerant to elevating IT. At the same time, OneSphere is an accelerant to the adoption of DevOps, which means it’s also elevating the developers. So are we really finally bringing people to that higher plane of business-focus and digital transformation?
Mullican: Yes. HPE OneSphere is an advantage to both of those departments, which in some companies can be still quite disparate. Now at HudsonAlpha, we are DevOps in IT. It’s not a distinguished department, but in some companies, that’s not the case. And I think we have a lot of advantages because we think in terms of automation, and we think in terms of APIs from the infrastructure standpoint. And the tools that we have invested in, the types of composable and hyperconverged infrastructure, are helping accomplish that.
Gardner: I speak with a number of organizations that are global, and they have some data sovereignty concerns. I’d like to explore, before we close out, how OneSphere also might be powerful in helping to decide where data sets reside in different clouds, private and public, for various regulatory reasons. Is there something about having that visibility into hybrid IT that extends into hybrid data environments?
Mullican: Data locality is one of our driving factors in IT, and we do have on-premises storage as well as cloud storage. There is a time and a place for both of those, and they do not always mix, but we have requirements for our data to be available worldwide for collaboration. So, the services that HPE OneSphere makes available are designed to use the appropriate data connections, whether that would be back to your object storage on-premises or AWS Simple Storage Service (S3), for example, in the cloud.
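A data-locality policy like the one Mullican describes can be sketched as a small routing function: dataset metadata decides whether data stays in the on-premises object store or goes to cloud object storage such as AWS S3. This is a simplified, hypothetical policy for illustration; the field names and tier labels are assumptions, not HudsonAlpha's or OneSphere's actual rules.

```python
def choose_storage(dataset: dict) -> str:
    """Pick a storage tier from dataset metadata (a simplified, hypothetical policy)."""
    # Sovereignty or regulatory restrictions take precedence over everything else.
    if dataset.get("sovereignty") == "on-prem-only":
        return "on-prem-object-store"
    # Data shared with worldwide collaborators goes to cloud object storage.
    if dataset.get("shared_worldwide"):
        return "aws-s3"
    # Default: keep data local.
    return "on-prem-object-store"
```

A management layer with visibility into both environments can apply a rule like this consistently, so each dataset lands where regulation and collaboration needs dictate, without per-project manual decisions.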
Gardner: Now we can think of HPE OneSphere as also elevating data scientists—and even the people in charge of governance, risk management, and compliance (GRC) around adhering to regulations. It seems like it’s a gift that keeps giving.
Mullican: It is a good fit for hybrid IT and what we do at HudsonAlpha. It’s a natural addition to all of the preparation work that we have done in IT around automated provisioning with HPE Synergy and Image Streamer. HPE OneSphere is a way to showcase to the end user all of the efforts that have been and are being done by IT. That’s why it’s a satisfying tool to implement, because, in the end, you want what you have worked on so hard to be available to the researchers and be put to use easily and quickly.
Gardner: It was a long time coming, right?
Mullican: Yes, yeah, I think so.
Gardner: I’m afraid we will have to leave it there. We have been exploring how nonprofit research institute HudsonAlpha is better managing its multiple cloud and hybrid IT deployment environments. And we have learned how HPE OneSphere is delivering consolidated and deep insights across multiple clouds and IT deployments at HudsonAlpha, an early beta tester and user. So please join me in thanking our guest, Katreena Mullican, senior architect and cloud whisperer at HudsonAlpha Institute for Biotechnology.
Mullican: Thank you very much.
Gardner: And a big thank you to our audience as well for joining us for this BriefingsDirect Voice of the Customer digital transformation success story. I’m Dana Gardner, principal analyst at Interarbor Solutions, your host for this ongoing series of Hewlett Packard Enterprise-sponsored interviews. Thanks again for listening. Please pass this content along to your IT community and do come back next time.
This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.