
6 tips for placing your next data center

There's more to the decision on where to plop down a data center footprint than CapEx and local regulatory compliance. Choose poorly, and avoidable efficiency and complexity problems may plague your organization for years to come. Choose wisely, and reap the rewards of enhanced agility.

But even with the traditional data center, location is a major concern. While much is made of the competition among locations rushing to offer tax incentives and other considerations to bring in major data centers, such benefits are only primary drivers for companies building huge facilities, such as Google, Microsoft, and Facebook. For mere mortals, a data center's placement is usually driven by normal business requirements. There are, however, specific considerations for data center placement that should be addressed.

"The goal for a data center is 100 percent uptime, so reducing the number of risk factors is key to selecting the best location," says Paul Savill, senior vice president of global core product management at Level 3 Communications.

To be sure, downtime events have a big impact on large enterprises, costing upwards of $1 million a pop. But other strategic considerations and application- and risk-related factors also figure into the attractiveness of a data center location. Here are six things to consider when choosing where to put your data center.

Cloud city, or country independence?

How hybrid is your Hybrid IT? That can make all the difference when choosing between a data center in a bustling urban center and a data center in the boondocks.

Data centers in densely packed cities make for major points of interconnection. The result can be a super-agile "carrier hotel," where network carriers and service providers connect with one another while keeping bandwidth fees low, leasing one another's diverse fiber connections for relatively small sums, often via reverse auction.

Enterprise organizations can further leverage this urban proximity to interconnect with cloud providers, transitioning from a colocation strategy to a cloud one. The agility benefits, according to Savill, can include decreased backhaul costs, more virtualization options, and enhanced business continuity and disaster recovery (BCDR) positioning.

Conversely, whether because of compliance reasons or because of the particularities of their Hybrid IT strategy, some nephophobic enterprises may prefer to maintain a hard limit on their cloud reliance. Savill reports that these organizations can reduce their IT fragility by deploying their own servers at a larger scale across a larger data center footprint. In these cases, rural data centers—far, far away from major city centers—may hold an advantage because they tend to be larger, more spacious, and newer (and thereby easier to remodel and update) than their urban counterparts. For this reason, rural locales are more appropriate for central data centers, which may require a great deal of space, rather than smaller edge data centers.

Additionally, rural data centers may offer their own BCDR advantages, as discussed below.

Plan for proximity to people

Much like real estate, data centers that provide interactive services are all about location. For the most effectively and efficiently placed data centers, though, the optimal location is more about access to high-performance connectivity than a nice view.

Central data centers delivering low-touch applications such as storage, compute, and analytics may not have as much need for proximity to end users as their "chattier" edge counterparts do, but all things being equal, being close to the user helps. "You" might be a worker at your company's local headquarters or branch office, or "you" might be a customer or other outside user of your company's systems.

Either way, data centers and other IT gateways positioned closer to lots of people—specifically, people accessing your data center applications and connectivity—can deliver lower latency. This can be essential for edge data centers hosting high-performance applications such as over-the-top (OTT) streaming. Moreover, Savill says, edge data centers, which take up less physical space than their rural counterparts, can help drive application performance in large and small metro areas alike.
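The proximity-latency relationship above can be sketched with a quick estimate. Light in optical fiber travels at roughly 200,000 km/s (about two-thirds the speed of light in a vacuum), so every 100 km of fiber adds about 1 ms of round-trip time before routing and queuing delays. The route lengths below are illustrative, not from the article:

```python
# Back-of-the-envelope lower bound on round-trip latency over fiber.
# Assumption: signal propagation in fiber is ~200,000 km/s (about 2/3 of c).
FIBER_KM_PER_MS_ONE_WAY = 200.0  # km of fiber traversed per millisecond, one way

def min_rtt_ms(fiber_route_km: float) -> float:
    """Minimum round-trip time over a fiber route of the given length."""
    return 2 * fiber_route_km / FIBER_KM_PER_MS_ONE_WAY

# Same-metro edge site vs. regional vs. cross-continent central site
for km in (50, 500, 4000):
    print(f"{km:>5} km route -> at least {min_rtt_ms(km):.1f} ms RTT")
```

Actual latency is higher once switching, routing, and non-direct fiber paths are counted, but the floor alone shows why a same-metro edge site beats a distant central one for chatty applications.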

Alternatively, "you" might be an in-demand network engineer looking to get hired. Proximity to a metro area's bountiful and well-educated information systems workforce can make staffing up a data center that much easier and more affordable, keeping relocation costs down.

Control costs with natural climate control

However, just like any other single factor on this list, proximity to a pre-existing qualified workforce isn't everything. Perhaps you've got a line on a great deal for locating your data center in, say, India—a rapidly growing technology hub and a hot location for engineering talent. But it might be a hot location in more ways than one. It's inherently more difficult and more expensive to keep servers and other hardware cool and running smoothly when data centers are located in warmer climates. Depending on where exactly this hypothetical data center sits within India (assuming it's not in the Himalayas), temperatures may exceed 90 degrees Fahrenheit in the summer months. Any data center in that kind of environment can face an enormous drain on overall efficiency because of the massive energy consumption required to keep things cool.

"The scale factor for the cost benefits [of hardware upgrades] varies by… location, and so that's a whole other part of the equation," says Bill Carter, chief technology officer at the Open Compute Project Foundation. "For example, if you have a data center located in Florida, and you're trying to run that during the summer, you're going to run chillers, air conditioner units, and air handlers most of the day and maybe even at night. And you're going to lose a lot of energy when you're doing that. That's very inefficient."

Accordingly, IT honchos should consider themselves warned about the cost efficiency of putting their data centers in locales too close to the equator or to a desert. In contrast, northern parts of the globe and certain mountainous areas are generally much cooler, meaning air conditioners, chillers, and handlers may need to run only a few hours a day in the summertime, if at all. In fact, Carter speculates that some areas of the world may allow for a "chiller-less" data center.
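Carter's point about chiller run-hours can be made concrete with a rough comparison. The figures below are illustrative assumptions, not measurements: a fixed 1 MW IT load, chillers drawing an assumed 40 percent of that load while running, and hypothetical annual run-hours for a hot climate versus a cool one:

```python
# Illustrative annual chiller energy for the same IT load in two climates.
# All numbers are assumptions for the sake of the sketch; real plants vary widely.
IT_LOAD_KW = 1000.0      # critical IT load (1 MW, assumed)
CHILLER_FRACTION = 0.4   # chiller draw as a fraction of IT load while running (assumed)

def annual_chiller_kwh(chiller_hours_per_year: float) -> float:
    """Energy the chillers consume over a year, given their run-hours."""
    return IT_LOAD_KW * CHILLER_FRACTION * chiller_hours_per_year

hot = annual_chiller_kwh(8000)   # hot climate: chillers run nearly year-round
cool = annual_chiller_kwh(800)   # cool climate: outside air covers most of the year
print(f"hot climate:  {hot:,.0f} kWh/yr")
print(f"cool climate: {cool:,.0f} kWh/yr ({hot - cool:,.0f} kWh/yr saved)")
```

Even with these made-up inputs, the order-of-magnitude gap shows why a cool climate—or Carter's "chiller-less" design—moves the needle on operating cost.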

Of course, at the other extreme, it is possible for servers and other hardware to get too cold, leading to thermal shutdown or other strange behaviors. Too-cold temperatures can be an even bigger problem when combined with higher levels of humidity because of the potential for short circuits caused by water condensation—or worse. Case in point: In the summer of 2011, "an error in the sequence of operation controls" at Facebook's Prineville, Oregon, data center caused its air conditioning to become too efficient, causing the wrong kind of cloud to form and rain inside the data center.

Extreme levels of humidity can harm hardware in yet other ways. In addition to the risk of short circuits, too much water in the air can lead to rust and corrosion. Too little humidity, on the other hand, can cause buildups of static electricity and electrostatic discharges. While equipment for regulating temperature, humidity, and air flow is crucial to data centers, the less you need to invest in that equipment and its operation, the more cost-efficiently your data center can run. In any event, the exact makeup of a data center can affect the overall impact of certain climate factors.

How 'hard' is your hardware?

When it comes to overheating, the sheer amount of IT equipment present in a data center impacts energy-consumption costs and other scalability factors. Hardware, after all, runs hot—and the more actual hardware you have in your data center, regardless of location, the higher your cooling demands will be. This can be even more of a factor if you're looking to scale upward or outward down the line.
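The scaling relationship above follows from the fact that essentially every watt a server draws ends up as heat the cooling plant must remove. A minimal sketch, using hypothetical rack counts, per-rack power draw, and an assumed power usage effectiveness (PUE):

```python
# Rough heat-load and facility-power sketch. Rack count, per-rack draw,
# and PUE below are hypothetical assumptions, not figures from the article.
RACKS = 40               # assumed rack count
KW_PER_RACK = 8.0        # assumed average power draw per rack, in kW

# Nearly all power drawn by IT equipment becomes heat to be removed.
it_load_kw = RACKS * KW_PER_RACK

# PUE = total facility power / IT power; cooling is the biggest non-IT share.
pue = 1.5                # assumed; efficient free-cooled sites get closer to 1.1
facility_kw = it_load_kw * pue

print(f"IT load (heat to remove): {it_load_kw} kW")
print(f"Facility draw at PUE {pue}: {facility_kw} kW")
```

Doubling the racks doubles the heat load, and at a fixed PUE it doubles the facility draw too—which is why hardware density and location-driven cooling costs have to be weighed together.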

Virtualization can help ease this burden while making your servers and data center more agile and efficient for scalability. Not every appliance can be virtualized, however. Carter notes that dedicated hardware can be just as important for efficiency as virtualization. As enterprises trend toward Hybrid IT, some physical network functions will, by definition, sit right "beside" their virtualized brethren. Thus, your climate-related location considerations may vary depending on the actual makeup of your data center.

Down with downtime disasters

Perhaps the most intuitive factor in deciding where to locate your data center is the likelihood of a natural disaster. Places where tornadoes, earthquakes, hurricanes, monsoons, tsunamis, or even lightning storms are commonplace present a definite operational risk. Even if a data center in the danger zone of a natural disaster gets lucky, the impact of evacuation orders and related actions may hamper operations.

The risk of manmade disasters must also be considered. For example, regions of substantial political instability can be prone to the effects of riots, war, or terrorism. Or, a data center too close to certain kinds of industrial infrastructure faces the potential risk of a gas or chemical explosion. Even proximity and relationship to transportation infrastructure must be considered in terms of the risk of damage from a freak airplane crash or train derailment.

But is there danger in being too safe?

On the other hand, there are potential BCDR problems with blindly locating data centers in places that seem safe from disasters—because if one does hit, the likelihood of preparedness is small and the risk of damage is great.

The Northeast U.S., for instance, is used to extreme blizzards, while southern states tend to go into states of emergency over a couple of inches of snow. California, meanwhile, is used to earthquakes, and has regulations and building codes in place to help protect against some of their effects. Ditto for Florida and hurricanes. Particular geographic regions in developed parts of the world tend to have the infrastructure, education, and cultural preparedness for the natural onslaughts to which they are prone.

Consider the counterexample presented by Hurricane Sandy, the 2012 hurricane that uncharacteristically ripped across New York, New Jersey, and Connecticut, catching many data centers unprepared. New York City data centers in particular suffered damage and outages because their operators had little idea what they were in for. After Con Ed shut off power to mandatory evacuation areas in Lower Manhattan, underprepared data centers in the area struggled to keep customers online on generator power—some shutting down their cooling systems to conserve fuel, others organizing "bucket brigades" to haul diesel up stairwells to backup generators. Other data centers in the region simply went down—and took their customers with them.

Ultimately, selecting data center locations can carry a bit of "damned if you do, damned if you don't" in the equation. Eliminating one problem may invite two or three more to its funeral. While not all negatives can be avoided in choosing a data center locale, they must at least be weighed appropriately. On the surface, putting data centers in a place like Iceland seems perfect: Low-cost power and free climate-based cooling address the major operational expenses of a data center. But because of backhaul costs and latency issues, the location has seen limited data center investment.

There may not be a perfect data center location, in terms of avoiding all possible negatives and risk. Perfect, however, is the enemy of good. All things considered, there will be a choice that, for your needs, is the most cost-effective, the most risk-balanced, and the timeliest for your particular stage and strategy in IT hybridization and evolution.

This article/content was written by the individual writer identified and does not necessarily reflect the view of Hewlett Packard Enterprise Company.