A Wake-up Call for the World's Comatose Data Centers

September 16, 2015 • Blog Post • By Atlantic Re:think


  • Millions of servers sit idle in almost every industry, from financial services to retail, with the average server utilization rate falling between 6 and 12 percent
  • Improving efficiency in the 80 percent of data centers affected represents the biggest opportunity to curb the use of power plants that emit pollution and contribute to climate change

The C-suite is only beginning to understand the multibillion-dollar waste embedded in its technology investments

When AOL embarked on a major reorganization of its data centers two years ago, executives at the sprawling digital media conglomerate confronted the cyber equivalent of a storage room.

Over nearly three decades, AOL racked up enough hardware to fill five massive data centers. An arsenal of nearly 15,000 servers, once used to help power the hundreds of websites, advertising and messaging properties that the company operated, was essentially dormant. AOL's so-called utilization rate, a measure of how efficiently it deployed its computing power, was woefully low, hovering well below 10 percent.

AOL is not alone. Companies in almost every major industry, from financial services to retail, are experiencing data center inefficiencies on a scale that is hard to believe. Millions of servers sit idle, with the average server utilization rate falling between 6 and 12 percent, according to industry experts. What's more, a recent analysis of nearly 4,000 servers installed across North America found that roughly one in three was "comatose," a server that consumes electricity but performs no real computing work. Many experts refer to them as zombies.

Applying the findings of the analysis more broadly would mean that roughly 10 million servers worldwide are currently zombies, including more than 3.6 million in the United States. Globally, that amounts to roughly $1.5 billion of unneeded energy and real estate expenses, according to an estimate using the Comatose Server Calculator provided by The Uptime Institute, an advisory organization focused on improving the performance and reliability of critical infrastructure for businesses.
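The scale of that estimate is easy to reproduce as back-of-envelope arithmetic. The sketch below is illustrative only: the assumed installed base and the implied per-server cost are derived from the article's figures, not taken from The Uptime Institute's calculator.

```python
# Back-of-envelope sketch of the comatose-server estimate.
# Assumption: a worldwide installed base of ~30 million servers,
# which at the study's one-in-three rate implies ~10 million zombies.

comatose_rate = 1 / 3            # roughly one in three sampled servers
servers_worldwide = 30_000_000   # assumed installed base (illustrative)
zombies = servers_worldwide * comatose_rate
print(f"Estimated zombies worldwide: {zombies:,.0f}")

# The article's $1.5 billion in wasted energy and real estate,
# spread over ~10 million zombie servers:
cost_per_zombie = 1.5e9 / 10e6
print(f"Implied annual cost per zombie: ${cost_per_zombie:,.0f}")
```

Even at that modest implied cost per machine, the waste compounds quickly across a fleet, which is what makes the aggregate figure so large.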

"That is big waste", says Jonathan Koomey, a research fellow at Stanford University who was one of the comatose server study co-authors. He estimated roughly 80 percent of data centers are affected.

Meanwhile, the potential environmental impact is huge. Pierre Delforge of the Natural Resources Defense Council, an environmental advocacy group, calls server underutilization the "biggest and most overlooked source of waste" in a data center industry that consumes roughly two percent of the nation's electricity. Improving efficiency, he adds, represents the "biggest opportunity" to curb the use of power plants that emit pollution and contribute to climate change.

The problem has been years in the making. Data center inefficiencies have as much to do with outdated management as they do with outmoded technology, experts say. At big corporations, most chief executives, particularly those not steeped in the vagaries of information technology, entrust a top lieutenant with the task of keeping the data centers running. Until recently, most rarely scrutinized the cost of IT services with the same intensity used to review a marketing budget or a new product coming online.

"There has been a real disconnect between what was running inside the business center and what was needed from a business perspective", says Aaron Rallo, chief executive of TSO Logic, a company that provides IT efficiency software to data centers that contributed to the comatose server analysis.

Data center inefficiencies also stemmed from a mix of poor oversight, bad incentives and lousy coordination. Industry experts noted, for example, that different divisions within the same company would buy their own servers without exploring ways to collaborate, or software engineers would request a few hundred machines to use for a short period of time, but then neglect to tell the data center manager that their servers were still spinning even though their work was complete. Or data center operators would intentionally deploy more servers than necessary to handle peak traffic and leave them running all day as a precautionary step. With nobody keeping track, all those underutilized servers simply piled up over time.

"There is terrible inventory", Koomey says. "Most companies cant tell you how many servers they havethey just don't know."

The good news is that this has started to change. Some companies are looking at software solutions to strengthen transparency. TSO Logic, for example, installed proprietary software onto the computer network of a major financial services company in order to gather a holistic view of the utilization of its servers. About 15 percent of its machines didn't register a single transaction. Then, with the help of analytics, the software deployed an automated tool to run the data center more efficiently.
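TSO Logic's software is proprietary, but the core idea, flagging servers that record no meaningful activity over an observation window, can be sketched in a few lines. The data structure, threshold, and server names below are hypothetical illustrations, not TSO Logic's actual method.

```python
# Hypothetical sketch of a comatose-server check: flag machines that
# processed no transactions over an observation window. This illustrates
# the general idea, not TSO Logic's actual software.

def find_comatose(servers, min_transactions=1):
    """Return names of servers whose activity falls below the threshold."""
    return [name for name, txns in servers.items() if txns < min_transactions]

# Example: transaction counts collected over a 30-day window (made up).
fleet = {"web-01": 48_210, "web-02": 0, "db-01": 912_004, "batch-07": 0}
zombies = find_comatose(fleet)
print(zombies)                                       # ['web-02', 'batch-07']
print(f"{len(zombies) / len(fleet):.0%} comatose")   # 50% comatose
```

In practice the hard part is collecting the activity data fleet-wide in the first place, which is exactly the visibility gap the article describes.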

Rallo says TSO Logic's typical client sees the operational expenses of its data center fall by around 40 percent by eliminating excess licensing and energy fees. "If you have been growing over time, there are efficiencies to be gained," he notes.

Other companies believe they can improve the efficiency of their data centers through the use of hardware. Three years ago, global media conglomerate Twentieth Century Fox embarked on a plan to streamline its data center operations. Relying on Hewlett-Packard's cloud services and converged technology, Fox executives said they lowered the company's operating costs while taking advantage of bigger, faster file transfers to reduce the time it took to send movies and television hits around the world from weeks to minutes. Then Fox accelerated those speeds even more with the addition of HP's customizable Moonshot technology, which can put more than 180 servers into a chassis the size of a suitcase. The result? Fox executives said they reduced the company's data center space by more than 50 percent, its power and cooling expenses by 87 percent and its hardware costs by 57 percent compared with traditional infrastructure. "Moonshot is really helping us redefine the data center," says John Herbert, Fox's chief information officer.

Meanwhile, other companies are simply tightening up management practices. AOL, for example, is now a little more than halfway through a five-year plan to streamline its data center operations. The centerpiece of its strategy has been to shift away from a system that allowed IT costs to remain a "hidden box" toward what's known as total business management, which requires examining the efficiency of a company's entire arsenal of servers so that its utilization rates and IT costs are transparent to its business leaders.

Armed with the data, AOL executives purged nearly a quarter of the company's 60,000 physical and virtual servers. They replaced two massive data centers in Virginia and California with new ones that use roughly 60 percent less total power. And they have been aggressively moving more of the company's computing power to the public cloud, an idea that was not even on the table two years ago.

James LaPlaine, AOL's chief information officer, says the company has raised the utilization rate of its remaining servers to around 12 percent and has a plan underway to double that number. Meanwhile, he says the company has reaped millions in annual savings from lower cooling and power costs, and has reduced AOL's carbon footprint by more than 40 percent. This spring, AOL named a "utilization czar" to keep up the momentum.

"We had always viewed this as a technical effort. That's why we didnt get a lot of traction", LaPlaine says. "Once we put this in business terms, we got a lot more credibility."