Using Ansible and RESTful APIs to provision OpenStack infrastructure

Ansible is used for the difficult job of setting up OpenStack clouds. At the OpenStack Summit, HPE's Jeff Kight demonstrated the process with an instructive case study.

At the recent OpenStack Summit in Boston, more than a dozen presentations showed how Kubernetes and OpenStack work together. But Kubernetes is far from the only DevOps tool that can make the open source infrastructure as a service (IaaS) sit up and sing. DevOps programs such as Chef and Puppet also do well by OpenStack.

So can Ansible, Red Hat's DevOps tool. There’s plenty of integration between Ansible and OpenStack. Among the examples: Andy McCrae, a Rackspace software developer, leads OpenStack-Ansible (OSA), a project for deploying production OpenStack clouds.

This is not a simple task. OpenStack is infamous for being hard to install. A 2015 survey of IT professionals by SUSE, a Linux and OpenStack company, found that half of all enterprises that tried to implement an OpenStack cloud have failed. It's gotten better since then, but even now it's difficult.

That’s where Ansible steps in. It promises to automate the installation and configuration of OpenStack within an organization’s IT department.

To offer insight into practical applications, HPE master solution architect and OpenStack expert Jeff Kight explained how to use Ansible and RESTful APIs to provision OpenStack's physical infrastructure. He demonstrated a case study on automating infrastructure deployment for a Hewlett Packard Enterprise customer.

Kight's talk sprang from a request, he said, to take "complexities like provisioning physical infrastructure and installing OpenStack and fully automate them to be installed on servers in the factory."


In this case, the first step was to use Representational State Transfer (REST) application programming interfaces (APIs) to import the infrastructure inventory into Ansible. This required data collection at a deep hardware level, including the firmware versions of the server components.
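To make that first step concrete, here is a minimal Python sketch of the idea: flatten the kind of JSON a Redfish-style REST API (such as HPE iLO's) returns for a server's firmware inventory into host variables Ansible can consume. The sample payload, field names, and function names are assumptions for illustration, not Kight's actual code.

```python
import json

# Hypothetical sample of the kind of JSON a Redfish-style REST API
# (e.g. HPE iLO) might return for a server's firmware inventory.
SAMPLE_RESPONSE = """
{
  "Members": [
    {"Name": "System ROM", "Version": "P89 v2.42"},
    {"Name": "iLO 4", "Version": "2.55"},
    {"Name": "Smart Array P440ar", "Version": "6.06"}
  ]
}
"""

def firmware_inventory(raw_json):
    """Flatten a firmware-inventory response into {component: version}."""
    data = json.loads(raw_json)
    return {m["Name"]: m["Version"] for m in data["Members"]}

def to_ansible_hostvars(host, raw_json):
    """Shape the data as host variables that Ansible playbooks can consume."""
    return {host: {"firmware": firmware_inventory(raw_json)}}

hostvars = to_ansible_hostvars("node01", SAMPLE_RESPONSE)
print(json.dumps(hostvars, indent=2))
```

In a real deployment the JSON would come from an authenticated HTTPS request to each server's management controller rather than a hard-coded string.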

"I didn't want to have to have an expensive person required for the factory and onsite installations," Kight explained. The data is provided from two sources. Customer site-specific information is defined in an Excel workbook. Excel Visual Basic then creates the Ansible configuration file. Ansible playbooks use both this customer information and the RESTful API to configure and install the OpenStack solution.

The team used Ansible, along with other open source automation programs for server automation, to spin up management virtual machines that would actually do the work. “We could get some scale out of this [project],” Kight said, and take advantage of Ansible features including Jinja templating to enable dynamic expressions and access to application and server variables for the host files.
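As an example of that Jinja templating, a hypothetical template for generating an /etc/hosts file from inventory data might look like the following (`groups`, `hostvars`, and `ansible_host` are standard Ansible variables; the file name is an assumption):

```jinja
{# templates/hosts.j2 -- hypothetical Jinja2 template for /etc/hosts #}
127.0.0.1 localhost

{% for host in groups['all'] %}
{{ hostvars[host]['ansible_host'] }} {{ host }}
{% endfor %}
```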

Kight’s team used Packer to build the appropriate automation server images. These were deployed with Vagrant, which specifies and automates the VM lifecycle. This includes defining the project's software requirements, packages, operating system configuration, and users. When all was said and done, the cluster test environment was consistent every single time.
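A hypothetical Vagrantfile sketch of this pattern: boot a management VM from a Packer-built box and hand provisioning over to Ansible. The box name and playbook name are assumptions.

```ruby
# Hypothetical Vagrantfile: management VM from a Packer-built box,
# provisioned by Ansible.
Vagrant.configure("2") do |config|
  config.vm.box = "mgmt-server"        # image built by Packer (name assumed)
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 4096
  end
  config.vm.provision "ansible" do |ansible|
    ansible.playbook = "site.yml"      # playbook name assumed
  end
end
```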

While the team could have used Ansible to automatically configure the network switches, the client opted for a semiautomatic process.

With the test environment network ready to go, they were ready to actually configure the remaining physical infrastructure: iLO, BIOS and server storage arrays. Using a captured XML file, the BIOS on new servers is configured consistently for each machine model. "Basically," said Kight, "you can configure a certain server exactly the way you want it, dump out the XML, and then upload that file" and use it to plan out the setup. The iLO is configured based on customer input, and the RESTful API is used to discover and configure the server storage arrays.
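The capture-and-replay step above can be sketched in Python. This is a minimal, assumed example: the XML element and attribute names are invented to illustrate parsing a captured BIOS dump into settings that could then be pushed to new servers of the same model.

```python
import xml.etree.ElementTree as ET

# Hypothetical capture of the kind of XML dump Kight describes: BIOS
# settings exported from a reference server. Element and attribute
# names here are assumptions, not HPE's actual schema.
CAPTURED = """
<BiosSettings model="DL380-Gen9">
  <Setting name="BootMode" value="Uefi"/>
  <Setting name="PowerProfile" value="MaxPerformance"/>
  <Setting name="Virtualization" value="Enabled"/>
</BiosSettings>
"""

def load_bios_settings(xml_text):
    """Parse the captured XML into (model, {setting: value})."""
    root = ET.fromstring(xml_text)
    settings = {s.get("name"): s.get("value") for s in root.findall("Setting")}
    return root.get("model"), settings

model, settings = load_bios_settings(CAPTURED)
print(model, settings["BootMode"])
```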

This Ansible-based procedure wasn't simply a static setup plan. Kight wanted to use it with more than one customer, as well as be able to provision multiple customers simultaneously.

Armed with all this data fed into Ansible playbooks, the team was ready to install OpenStack. Kight used Cobbler, a build and deployment system that automates repetitive provisioning actions and is often used with Ansible's dynamic inventory. Cobbler sits outside the production environment and is used "to image the first node in the cluster." That node, in turn, is used with Cobbler to set up the remaining OpenStack nodes via Ansible.
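Ansible's dynamic inventory contract is simple: an executable script that, when called with `--list`, prints JSON describing groups and host variables. A minimal sketch follows; in Kight's setup the node list would come from Cobbler, but it is hard-coded here for illustration, and the group name, hosts, and variables are assumptions.

```python
import json
import sys

# Minimal sketch of an Ansible dynamic inventory script. In practice the
# node list would be queried from Cobbler; it is hard-coded here.
def get_inventory():
    return {
        "openstack_nodes": {
            "hosts": ["node01", "node02"],
            "vars": {"ansible_user": "deploy"},  # assumed variable
        },
        "_meta": {
            "hostvars": {
                "node01": {"ansible_host": "10.0.0.11"},
                "node02": {"ansible_host": "10.0.0.12"},
            }
        },
    }

if __name__ == "__main__":
    # Ansible invokes the script as: ./inventory.py --list
    if len(sys.argv) > 1 and sys.argv[1] == "--list":
        print(json.dumps(get_inventory()))
```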

If this sounds complex, it is. But, once you've set it up, the combination of Ansible and Cobbler can be used repeatedly to set up working, consistent OpenStack clouds. That is not a small thing.

You can accomplish the same tasks with other tools, but the important point is that DevOps tools can make automating OpenStack deployments much easier and more consistent than ever before.

This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.