Whether it’s online gaming, streaming devices, or newer technologies such as virtual reality (VR) and augmented reality (AR), the wealth of data they create has called on data centre providers to offer the infrastructure needed to house this growing data bank. According to a March 2016 report by 451 Research, the colocation market is expected to reach an incredible $33bn by 2018, driven in no small part by the latest technology innovations. But as data and power volumes increase, so does the need for efficient and effective cooling methods.
The traditional small-scale data centre operation of email and data storage is no longer sufficient. Businesses as a whole have shifted from operating at typical ‘working hours’ to an ‘always-on’ approach, fuelled by the latest power-hungry technologies. As power requirements have increased, so have the persistent issues surrounding the cooling capabilities of these ever-hungrier facilities.
These complexities have been clear for all to see and have pushed data centre operators to act quickly to stem the heat. Industry organisations such as ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers) and The Green Grid have also lent a helping hand in navigating this cooling journey, releasing a number of guidelines and frameworks to maximise cooling and efficiency procedures. The Green Grid in particular has recently tried to address the confusion by creating a new data centre metric, the Performance Indicator, to provide a clearer understanding of cooling performance both in normal operation and in failure and maintenance scenarios.
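To make the idea concrete: one of the Performance Indicator’s sub-metrics measures how much of the IT equipment is breathing air within the ASHRAE recommended envelope (roughly 18–27°C at the server intake). The sketch below is our own illustration of that kind of conformance check, not code from The Green Grid’s specification; the function name and readings are invented for the example.

```python
# Illustrative sketch only: compute the share of rack intake temperatures
# that fall inside an assumed ASHRAE recommended band of 18-27 C.
# This is our own simplification, not The Green Grid's formal definition.

def thermal_conformance(intake_temps_c, low=18.0, high=27.0):
    """Fraction of intake temperature readings within the recommended band."""
    if not intake_temps_c:
        raise ValueError("no temperature readings supplied")
    in_band = sum(1 for t in intake_temps_c if low <= t <= high)
    return in_band / len(intake_temps_c)

# Hypothetical example: 10 racks, two running hot near the top of the row
readings = [21.5, 22.0, 23.1, 24.0, 22.8, 25.5, 26.2, 23.9, 28.4, 29.0]
print(f"Thermal conformance: {thermal_conformance(readings):.0%}")
```

A score well below 100% in normal operation, or a sharp drop when a cooling unit is taken offline for maintenance, is exactly the kind of signal such a metric is designed to surface.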
But this in itself cannot fully mitigate the growing technological trends. To address them, operators need a wider understanding of how data centres can better position themselves against an influx of new technologies, so that they can adequately meet tomorrow’s challenges.
High Performance Compute
With recent research by MarketsandMarkets expecting the High Performance Computing (HPC) market to grow to $36.62bn by 2020, HPC is set to become one of the largest trends in the data centre market. HPC has been increasingly adopted across a number of verticals, and with the growth in big data and IoT it has also grown in prominence in the enterprise. Yet for it to truly reach the heights expected of it, data centre operators must carefully future-proof their facilities to ensure the critical infrastructure is in place to support its growth in the long run.
HPC requires denser configurations, higher-powered racks and effective cooling to handle the volume of data processed. Because HPC can demand racks of 20-25kW, many legacy data centres that rely on perforated floors and computer room air conditioning (CRAC) units are often unable to cool data halls to the level required for optimal functioning. While the base of a rack is effectively cooled, the cooling effect diminishes towards the top, contributing to overheating and even network outages.
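The scale of the problem follows from basic thermodynamics: the airflow needed to remove a rack’s heat load is Q = ṁ·c_p·ΔT, so the volumetric flow is V̇ = Q / (ρ·c_p·ΔT). The sketch below is our own back-of-envelope sizing illustration (the ~11 K server air delta-T is an assumed typical figure, not from the article):

```python
# Rough sizing sketch: airflow required to carry away a rack's heat load,
# from V_dot = Q / (rho * c_p * delta_T). Illustrative, not a vendor method.
RHO_AIR = 1.2     # kg/m^3, air density at roughly 20 C
CP_AIR = 1005.0   # J/(kg*K), specific heat capacity of air

def required_airflow_m3h(rack_kw, delta_t_k=11.0):
    """Volumetric airflow (m^3/h) needed to remove rack_kw at a given air delta-T."""
    q_watts = rack_kw * 1000.0
    v_dot = q_watts / (RHO_AIR * CP_AIR * delta_t_k)  # m^3/s
    return v_dot * 3600.0

for kw in (5, 20, 25):
    print(f"{kw:>2} kW rack -> ~{required_airflow_m3h(kw):,.0f} m^3/h")
```

A 20-25kW HPC rack needs roughly four to five times the airflow of a conventional 5kW rack, which is why perforated-floor CRAC designs sized for legacy loads run out of headroom.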
With high performance computers installed, however, a data centre’s processing power is significantly higher, so heat management is of vital importance. Some HPC providers have tried to ensure consistent cooling by deploying larger fans or conductive cooling methods. Typically, though, heat is produced faster than fans or coolants can dissipate it. It is therefore important for facilities to be built with systems in place that far exceed the maximum cooling requirements, especially if there is the possibility that HPC systems will be installed in the future.
For on-premise enterprise data centres, or those based in Tier-1 cities where space is often limited, scaling up to accommodate HPC environments can also be a problem, often resulting in considerable CAPEX. Data centre operators must therefore pay close attention to the space available at a site, with an acute understanding that heat tends to be produced faster than conventional air cooling can remove it. Incorporating more effective and consistent cooling methods, including liquid cooling, evaporative cooling or outside-air cooling, can help sites better manage HPC environments.
Open Compute Project
While many of the largest companies in the world have begun to see the benefits of housing their data centres in cooler climates, particularly in Scandinavia, this is often not an option for data centre providers on limited budgets or resources, or for those that want easy access to their information.
However, in an attempt to better facilitate data centre efficiency away from the Nordics, Facebook developed its Open Compute Project (OCP), fostering a collaborative community to support the growing pressures placed on data centres by the latest technologies. OCP members share blueprint designs for efficiency and effectiveness across all aspects of a data centre, including cost, cooling and environmental impact. Having launched the world’s most efficient data centre in the U.S., Facebook is now helping other data centre providers deliver consistently low PUE scores and reduce cooling costs.
The data centre of the future is likely to move towards a more collaborative sharing platform, where standard blueprints set out basic requirements for cooling, power and architecture. OCP racks are larger than the standard size, up from 19 inches to 21, yet this larger capacity requires less power and, as a result, less airflow, lowering cooling needs. With major companies including Google, IBM and AT&T having joined the project, the innovations these established players contribute will help shape the data centre of the future and ensure cooling is achievable, both to implement and in cost, for all players in the data centre market.
As technology propels data centres to run a 24/7 service, effective cooling methods must be in place to fuel future growth in the market and deliver facilities capable of handling high power demands. While metrics such as power usage effectiveness (PUE) allow operators to measure their cooling efficiency, meeting them consistently in the face of power-hungry technologies will mean that initiatives such as the OCP and trends like HPC become more commonplace in data centres wishing to match competitors’ speeds and storage capacities with effective cooling.
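For readers unfamiliar with the metric: PUE is simply total facility energy divided by the energy delivered to the IT equipment, so a value of 1.0 would mean every watt goes to compute and none to cooling or overhead. A minimal sketch, with made-up figures for illustration:

```python
# PUE = total facility energy / IT equipment energy.
# A value near 1.0 means almost no energy spent on cooling and overhead.

def pue(total_facility_kwh, it_equipment_kwh):
    """Power usage effectiveness from two energy readings over the same period."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical site: 1,500 MWh total draw against 1,000 MWh of IT load
print(f"PUE = {pue(1500, 1000):.2f}")  # cooling and overhead add 50% on top of IT load
```

Driving that overhead figure down, whether through OCP-style designs, liquid cooling or free air, is what the initiatives discussed above ultimately compete on.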