
Checklist: Optimizing application performance at deployment

Users care passionately about their software being fast and responsive. You need to give your applications both 0-60 speed and the strongest long-term endurance. Here are 13 guidelines for choosing a deployment platform to optimize performance, whether your application runs in the data center or the cloud.

Faster! Faster! Faster! That killer app won’t earn your company a fortune if the software is slow as molasses. Sure, your development team did the best it could to write server software that offers the maximum performance, but that doesn’t mean diddly if those bits end up on a pokey old computer that’s gathering cobwebs in the server closet.

Users don’t care where it runs as long as it runs fast. Your job, in IT, is to make the best choices possible to enhance application speed, including deciding if it’s best to deploy the software in-house or host it in the cloud.

When choosing an application’s deployment platform, there are 13 things you can do to give the software the best chance of top overall performance.

I make two assumptions:

First, these guidelines apply only to choosing the best data center or cloud-based platform, not to choosing the application’s software architecture. The job today is simply to find the best place to run the software.

Second, I presume that if you are talking about a cloud deployment, you are choosing infrastructure as a service (IaaS) rather than platform as a service (PaaS). What’s the difference? In PaaS, the host provides the operating system and application platform, such as Windows or Linux running .NET or Java; all you provide is the application. In IaaS, you provide, install, and configure the operating system yourself, which gives you more control over the installation.

So, here’s your checklist:

1. Run the latest software. Whether in your data center or in the IaaS cloud, install the latest version of your preferred operating system, the latest core libraries, and the latest application stack. (That’s one reason to go with IaaS, since you can control updates.) If you can’t control this yourself, because you’re assigned a server in the data center, pick the server that has the latest software foundation.
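
If you’re handed a server and need to see what software foundation it’s actually running, a quick inventory helps. Here is a minimal Python sketch, assuming a Linux target; the components it checks are only examples, so swap in the pieces of your own stack:

    # Minimal sketch: record the software foundation of a candidate server so
    # you can compare hosts. Assumes a Linux target; extend the checks with the
    # packages that matter to your application stack.
    import platform
    import subprocess

    def inventory() -> None:
        print("OS:     ", platform.platform())
        print("Kernel: ", platform.release())
        print("Python: ", platform.python_version())
        try:
            # One example of checking a component of the application stack.
            result = subprocess.run(["openssl", "version"],
                                    capture_output=True, text=True)
            print("OpenSSL:", result.stdout.strip())
        except FileNotFoundError:
            print("OpenSSL: not found")

    if __name__ == "__main__":
        inventory()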

2. Run the latest hardware. Assuming we’re talking about the x86 architecture, look for the latest Intel Xeon processors, whether in the data center or in the cloud. As of mid-2018, I’d want servers running the Xeon E5 v3 or later, or E7 v4 or later. With anything older than that, you’re not getting the most out of your applications or taking advantage of the hardware chipset. For example, some E7 v4 chips have significantly improved instructions-per-cycle processing, which is a huge benefit. Similarly, if you choose AMD or another processor, look for the latest chip architectures.
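
If you can’t walk up to the rack, you can still identify the silicon from the operating system. A minimal sketch, assuming a Linux server (on x86, the model name string is enough to tell a Xeon E5 v3 from something older):

    # Minimal sketch: report the CPU model(s) of a Linux server so you can see
    # which processor generation you've actually been assigned.
    def cpu_models() -> set:
        models = set()
        with open("/proc/cpuinfo") as fh:
            for line in fh:
                if line.startswith("model name"):
                    models.add(line.split(":", 1)[1].strip())
        return models

    if __name__ == "__main__":
        for model in sorted(cpu_models()):
            print(model)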

3. If you are using virtualization, make sure the server has the best and latest hypervisor. The hypervisor is key to a virtual machine’s (VM) performance—but not all hypervisors are created equal. Many of the top hypervisors have multiple product lines as well as configuration settings that affect performance (and security). There’s no way to know which hypervisor is best for any particular application. So, assuming your organization lets you make the choice, test, test, test. However, in the not-unlikely event you are required to go with the company’s standard hypervisor, make sure it’s the latest version.


4. Take Spectre and Meltdown into account. The patches for Spectre and Meltdown slow down servers, but the extent of the performance hit depends on the server, the server’s firmware, the hypervisor, the operating system, and your application. It would be nice to give an overall number, such as expect a 15 percent hit (a number that's been bandied about, though some dispute its accuracy). However, there’s no way to know except by testing. Thus, it’s important to know whether your server has been patched. If it hasn’t been yet, expect application performance to drop when the patch is installed. (If it’s not going to be patched, find a different host server!)
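
On Linux, the kernel reports mitigation status directly, so you don’t have to guess whether a host has been patched. A minimal sketch, assuming a kernel new enough (roughly 4.15 onward) to expose /sys/devices/system/cpu/vulnerabilities:

    # Minimal sketch: print Spectre/Meltdown (and related) mitigation status on
    # a Linux host. Assumes the kernel exposes the vulnerabilities directory.
    from pathlib import Path

    VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

    def report_mitigations() -> None:
        if not VULN_DIR.is_dir():
            print("Kernel does not report vulnerability status; ask your host or vendor.")
            return
        for entry in sorted(VULN_DIR.iterdir()):
            print(f"{entry.name:30s} {entry.read_text().strip()}")

    if __name__ == "__main__":
        report_mitigations()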

5. Base the number of CPUs and cores and the clock speed on the application requirements. If your application and its core dependencies (such as the LAMP stack or the .NET infrastructure) are heavily threaded, the software will likely perform best on servers with multiple CPUs, each equipped with the greatest number of cores—think 24 cores. However, if the application is not particularly threaded or runs in a not-so-well-threaded environment, you’ll get the biggest bang from the absolute top clock speeds on an 8-core server.
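
One way to find out which camp your workload falls into is to time a representative, CPU-bound slice of it at different worker counts. A minimal sketch, where busy_work() is a hypothetical stand-in for your own code:

    # Minimal sketch: gauge whether a CPU-bound workload actually scales with
    # more cores. busy_work() is a placeholder; substitute something
    # representative of your application.
    import os
    import time
    from concurrent.futures import ProcessPoolExecutor

    def busy_work(n: int = 2_000_000) -> int:
        total = 0
        for i in range(n):
            total += i * i
        return total

    def timed_run(workers: int, tasks: int = 16) -> float:
        start = time.perf_counter()
        with ProcessPoolExecutor(max_workers=workers) as pool:
            list(pool.map(busy_work, [2_000_000] * tasks))
        return time.perf_counter() - start

    if __name__ == "__main__":
        cores = os.cpu_count() or 1
        for workers in (1, 2, cores):
            print(f"{workers:2d} worker(s): {timed_run(workers):6.2f} s")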

6. Remember: more memory almost always helps. Yes, memory is the lowest hanging of all the low-hanging fruit. When choosing a physical server in a data center, get as much memory as you can, because nobody wants to crack open the pizza box to add chips. When deploying the application on a virtual server in the data center or in an IaaS cloud, you can start out with less memory and add more later with a mouse click. However, the maximum memory available to a virtual server is limited to whatever is in the physical server. So, again, go for the most you can; you’ll regret it otherwise.
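
Once the application is running, it’s also worth checking how much headroom you actually have before you’re forced to add memory in a hurry. A minimal sketch, assuming a Linux server with /proc/meminfo (on other platforms, a library such as psutil does the same job):

    # Minimal sketch: report total memory, available memory, and headroom on a
    # Linux server. Values in /proc/meminfo are reported in KiB.
    def meminfo_kib(field: str) -> int:
        with open("/proc/meminfo") as fh:
            for line in fh:
                if line.startswith(field + ":"):
                    return int(line.split()[1])
        raise KeyError(field)

    if __name__ == "__main__":
        total = meminfo_kib("MemTotal")
        available = meminfo_kib("MemAvailable")
        print(f"Total:     {total / 1024:10.0f} MiB")
        print(f"Available: {available / 1024:10.0f} MiB")
        print(f"Headroom:  {100 * available / total:10.1f} %")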

7. Make sure the host gives you visibility into the physical server’s workload. The throughput of a server depends on the machine’s entire workload, not only the workload of your applications running in your VMs. Before you sign the cloud IaaS contract, examine your management tools and learn how much visibility you have into the physical machine’s workloads. Also, find out how hard it is to move virtual workloads to new hardware if (when?) you discover that other tenants are sucking up all the CPU cycles, memory, and network bandwidth. One option is to pay extra for a dedicated server.

8. If considering the cloud, run tests of a data center server versus a virtual server. You can’t compare what you don’t measure. Even if a cloud server is configured exactly the same as a physical server, the performance will not be the same. Performance might be lower due to other workloads, or it might be higher because the cloud host has newer hardware. If possible, run like-for-like tests to understand what your data center can do and what the IaaS cloud can do, as well as to set expectations for the future.
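
The test harness doesn’t have to be elaborate; what matters is running the same workload on both platforms and comparing the distributions, not just the averages. A minimal sketch, where run_workload() is a hypothetical placeholder for a representative slice of your application:

    # Minimal sketch: a like-for-like timing harness to run unchanged on both
    # the physical server and the IaaS instance. Replace run_workload() with a
    # representative operation (an API call, a batch job, a database query).
    import statistics
    import time

    def run_workload() -> None:
        sum(i * i for i in range(500_000))   # placeholder work

    def benchmark(iterations: int = 50) -> None:
        samples = []
        for _ in range(iterations):
            start = time.perf_counter()
            run_workload()
            samples.append(time.perf_counter() - start)
        samples.sort()
        print(f"median: {statistics.median(samples) * 1000:8.2f} ms")
        print(f"p95:    {samples[int(len(samples) * 0.95)] * 1000:8.2f} ms")
        print(f"max:    {samples[-1] * 1000:8.2f} ms")

    if __name__ == "__main__":
        benchmark()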

9. Locate storage relatively near the CPU, with a high-bandwidth, low-latency connection. The topic of data center architecture is best left to another article, but the connection between the server box and the storage box(es) is key to delivering a robust application, even if the application doesn’t appear to be storage-heavy. After all, even loading webpages requires accessing the disk. To begin with, find out the architecture. Then examine your options to ensure that storage—particularly in storage-area networks or network-attached storage—has fast, robust, low-latency connections to the server.
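
A purpose-built tool such as fio will give you rigorous numbers, but even a rough probe against the actual mount point tells you whether the storage path is in the right ballpark. A minimal sketch, assuming a hypothetical /mnt/app-data volume; note that the read pass may be served from the page cache, so treat it as an optimistic figure:

    # Minimal sketch: rough write/read throughput probe against the volume your
    # application will use. TEST_PATH is an assumption; point it at the real
    # SAN- or NAS-backed mount.
    import os
    import time

    TEST_PATH = "/mnt/app-data/latency_probe.bin"   # hypothetical mount point
    BLOCK = b"\0" * (1024 * 1024)                   # 1 MiB block
    BLOCKS = 256                                    # 256 MiB total

    def timed_write() -> float:
        start = time.perf_counter()
        with open(TEST_PATH, "wb") as fh:
            for _ in range(BLOCKS):
                fh.write(BLOCK)
            fh.flush()
            os.fsync(fh.fileno())                   # push it through to storage
        return time.perf_counter() - start

    def timed_read() -> float:
        start = time.perf_counter()
        with open(TEST_PATH, "rb") as fh:
            while fh.read(1024 * 1024):
                pass
        return time.perf_counter() - start

    if __name__ == "__main__":
        w, r = timed_write(), timed_read()
        print(f"write: {BLOCKS / w:7.1f} MiB/s   read: {BLOCKS / r:7.1f} MiB/s")
        os.remove(TEST_PATH)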

10. If using the cloud or a remote data center, make sure the servers are local to you and your users. If the application is inward-facing for your enterprise, it matters where the server is physically located. If your employees are in Manhattan, servers in San Jose, Mumbai, or even London will be slower than identical servers in Yonkers. Not only that, but fewer Internet hops (assuming you are accessing over the Internet rather than over fixed lines such as MPLS) mean more consistent response times, which is important for both the user experience and application throughput. Faraway data centers or clouds are good for backups and disaster recovery, but not for daily operations.
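
Measuring this is straightforward: time connections to the candidate locations from where your users actually sit. A minimal sketch; the host names are hypothetical placeholders for the endpoints you’re evaluating:

    # Minimal sketch: compare round-trip connection times to candidate data
    # centers or cloud regions. The hosts below are placeholders.
    import socket
    import statistics
    import time

    CANDIDATES = ["nyc.example.com", "sjc.example.com", "lon.example.com"]

    def connect_time_ms(host: str, port: int = 443, attempts: int = 5) -> float:
        samples = []
        for _ in range(attempts):
            start = time.perf_counter()
            with socket.create_connection((host, port), timeout=5):
                pass
            samples.append((time.perf_counter() - start) * 1000)
        return statistics.median(samples)

    if __name__ == "__main__":
        for host in CANDIDATES:
            try:
                print(f"{host:25s} {connect_time_ms(host):7.1f} ms")
            except OSError as exc:
                print(f"{host:25s} unreachable ({exc})")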

11. Don’t look at only the big name-brand hosts. Sure, everyone knows about Amazon Web Services, Google Cloud Platform, Microsoft Azure, and Oracle Cloud. Those aren’t the only cloud services in the world, however. Smaller companies may offer better pricing, newer technology, superior service, and closer proximity to your employees (see tip No. 10).

12. Consider a third option: colocation. I’ve been talking about on-premises data centers and cloud hosting, but colocation, where you rent a server cage in a large facility, is another excellent option. A good colo facility has benefits such as redundant power, redundant cooling, cheaper electricity, and 24-hour security. What’s more, some colocation facilities have redundant Internet and other communications services that are better than most enterprises can afford—often tightly coupled to telecom company services. That’s cloud-scale bandwidth, latency, and jitter, but with your own server. Look into it.

13. Talk to the data center manager or cloud host about getting the most performance from their service. No matter the company, there’s only so much information you’ll find on a website. A conversation might surface more cloud or hosting options than you can find on your own, and it also serves as a test of the customer experience. The same is true of an enterprise data center: If the IT team doesn’t want to advise line-of-business managers or software developers about applications prior to deployment, you’ve got problems that only the CIO or CTO can resolve.

Finally, measure, measure, measure. No matter what you do, continuously measure real-time performance after deployment, examining all relevant factors, such as end-to-end performance, user response time, and individual components. Be ready to make changes if performance drops unexpectedly or circumstances change. Operating system patches, updates to core applications, workload from other tenants, and even malware infections can suddenly slow down server applications. We already know that simply saying “It’s slower!” doesn’t mean diddly to a cranky end user. Because, ultimately, it’s all about the performance.
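
Continuous measurement can start as simply as a periodic probe from a user’s vantage point, feeding whatever monitoring you already run. A minimal sketch; the URL and the response-time budget are hypothetical:

    # Minimal sketch: a periodic end-to-end response-time probe. The endpoint
    # and threshold are assumptions; wire the output into your monitoring.
    import time
    import urllib.request

    URL = "https://app.example.com/health"   # hypothetical health endpoint
    INTERVAL_S = 60
    ALERT_MS = 500                           # assumed response-time budget

    def probe_ms(url: str) -> float:
        start = time.perf_counter()
        with urllib.request.urlopen(url, timeout=10) as resp:
            resp.read()
        return (time.perf_counter() - start) * 1000

    if __name__ == "__main__":
        while True:
            stamp = time.strftime("%Y-%m-%d %H:%M:%S")
            try:
                elapsed = probe_ms(URL)
                flag = "SLOW" if elapsed > ALERT_MS else "ok"
                print(f"{stamp}  {elapsed:7.1f} ms  {flag}")
            except OSError as exc:
                print(f"{stamp}  probe failed: {exc}")
            time.sleep(INTERVAL_S)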

This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.