Facebook's Arctic Data Center
September 14, 2015 • Blog Post • By Quartz Creative
IN THIS ARTICLE
- Read about two major concerns companies express when hosting data and applications overseas
- Learn how a new supercomputer slashes cooling energy by running in a room-temperature building
Onshore data centers and the supercomputers that saved them
Drunk on free web services, enamored of video apps and living in the Cloud, consumers are creating data at breakneck speed - and they don't care who's paying. For major social networks like Facebook, Twitter and YouTube, this is the future of media and, while increasingly profitable, it's a high-overhead business.
For one, delivering content quickly to users all over the world means hosting data centers across continents and preventing localized meltdowns when demand soars. Cooling these enormous spaces is like trying to chill wine in your car cup holder with the A/C on high - not only is it inefficient, it doesn't really work.
In 2011, Facebook tried a new approach by building a 30,000 square meter data center in a small Swedish town called Luleå. The area is filling up quickly - a bitcoin mining group also has a server farm nearby, and Google's data center in Finland is just across the border. All three data centers are there to take advantage of the cold air, water and cheap real estate. Anticipating demand, a Swedish telecom company is building a new fiber cable called Skanova Backbone North, which will span almost 800 miles through northern Sweden.
Not everyone is keen to host their data and applications overseas, however. Concerns about physical security and government snooping have made offshoring data less attractive. Meanwhile, technological breakthroughs in computing are rapidly reducing the heat these data centers generate and the power they need to run.
Encouragingly, the technological breakthrough that might save data centers from the snow isn't some massive new discovery, but an evolutionary milestone in server design: efficient, highly customizable eight-core machines roughly the size of a hardcover book.
At the National Renewable Energy Lab (NREL), a supercomputer called Peregrine, built by HPE and Intel, runs 1,440 of these compact servers powered by eight-core CPUs - all sans air conditioning. Peregrine operates in a room-temperature building in Golden, CO, and is cooled by a warm-water system that works on the same principle as an evaporative cooler: water transfers heat far better than air.
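Some back-of-the-envelope arithmetic shows just how lopsided that comparison is. The Python sketch below uses textbook property values and a hypothetical 1 MW server heat load with a 10 K coolant temperature rise - illustrative assumptions, not NREL's actual design figures:

```python
# Rough comparison of air vs. water cooling for a server room.
# All figures are illustrative assumptions, not NREL's design values.

def coolant_flow_m3_per_s(heat_load_w, density, specific_heat, delta_t):
    """Volumetric flow needed to carry away heat_load_w watts
    given a coolant temperature rise of delta_t kelvin."""
    return heat_load_w / (density * specific_heat * delta_t)

HEAT_LOAD = 1_000_000  # hypothetical 1 MW of server heat
DELTA_T = 10.0         # coolant warms by 10 K passing through the racks

# Approximate physical properties near room temperature
air = coolant_flow_m3_per_s(HEAT_LOAD, density=1.2, specific_heat=1005,
                            delta_t=DELTA_T)
water = coolant_flow_m3_per_s(HEAT_LOAD, density=998, specific_heat=4186,
                              delta_t=DELTA_T)

print(f"Air flow needed:   {air:8.2f} m^3/s")
print(f"Water flow needed: {water:8.4f} m^3/s")
print(f"Water moves the same heat in ~{air / water:,.0f}x less volume")
```

Under these assumptions, water carries the same heat load in roughly 3,500 times less volume than air - which is why a modest loop of warm water can replace an entire air-conditioning plant.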
The NREL is the U.S. Department of Energy's national laboratory for renewable energy, so it was only appropriate that the on-premises Peregrine computer make a statement. The largest supercomputer on Earth dedicated to energy-related research, it also reuses its waste heat by piping it into adjacent buildings in the winter. Once energy recycling becomes possible, the most remote data centers suddenly become the most wasteful: a server farm in the Arctic wilderness has no neighbors to warm.
As grids become smarter, energy will be treated more like an asset to be stored or traded, and less like a resource to be consumed and thrown away. That will encourage enterprises to hold their data centers as closely as they hold the data itself, creating a new market for customizable supercomputing solutions that can fit anywhere an enterprise goes - not just the Arctic.
__________________________________________________________________________________________________
Servers power nearly every aspect of your daily life. HPE provides servers for companies of all sizes, across all industries and for any type of workload. To read more about HPE servers, CLICK HERE.