As highlighted, organisations are being forced to store and process unprecedented volumes of data from social, mobile and analytics workloads, alongside the aforementioned rise of cloud. To accommodate this, data centre providers need to support denser configurations, with power and cooling requirements that can outstrip the capabilities of traditional mechanical and electrical infrastructure. For many data centre sites, however, achieving this is very difficult, largely because their original designs were completed at a time when modern densities and configurations simply did not exist. As a result, supporting technologies such as high performance computing (HPC) becomes very challenging. So how does a facility address this, and what are the considerations?
Space is often a major consideration, especially as the majority of facilities are located in or around major cities, where floor space commands a high premium per square foot. Extending an existing site is likely to represent significant capital expenditure and is typically hindered by infrastructure boundaries. For many firms in this position, a more viable option is building an entirely new site to handle HPC workloads, but this too can only be achieved through significant investment. It is therefore imperative that these facilities understand where current and future demand for HPC data centres is likely to come from, and make changes accordingly.
Another consideration is cooling capability. Data centre providers go to great lengths to minimise their cooling costs, and facilities are advised to run at temperatures between 20 and 30°C to achieve an optimal environment for the servers. High performance computers, however, draw significantly more power per rack, so heat management is of vital importance. Some HPC providers have tried to ensure consistent cooling by deploying liquid cooling, larger fans or conductive cooling methods. Often, however, heat is produced faster than coolants or fans can dissipate it. It is therefore important for centres to be built with systems that comfortably exceed the maximum cooling requirements, especially if HPC systems may be installed in the future. Facilities that are already in operation and wish to install HPC will again need significant capital expenditure to upgrade their data halls.
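The gap between traditional and HPC cooling loads can be illustrated with a back-of-envelope airflow calculation. The figures below (a 5 kW "traditional" rack, a 30 kW HPC rack, a 10°C supply/return air delta) are hypothetical illustrations, not vendor specifications; the physics is the standard heat-transfer relation Q = ṁ·cp·ΔT for air.

```python
# Back-of-envelope airflow sketch. All rack power figures are
# hypothetical examples chosen for illustration only.

def required_airflow_m3h(rack_kw: float, delta_t_c: float = 10.0) -> float:
    """Airflow needed to remove rack_kw of heat at a given supply/return
    temperature delta, using Q = m_dot * cp * dT for air.

    cp of air ~ 1.006 kJ/(kg*K); density ~ 1.2 kg/m^3 at room temperature.
    """
    cp = 1.006                               # kJ/(kg*K)
    density = 1.2                            # kg/m^3
    mass_flow = rack_kw / (cp * delta_t_c)   # kg/s (since kW = kJ/s)
    return mass_flow / density * 3600        # convert kg/s of air to m^3/h

# A hypothetical 5 kW legacy rack vs a hypothetical 30 kW HPC rack:
print(round(required_airflow_m3h(5.0)))     # ~1491 m^3/h
print(round(required_airflow_m3h(30.0)))    # ~8946 m^3/h
```

The six-fold jump in required airflow is why fans alone often cannot keep pace, and why liquid or conductive cooling enters the picture at HPC densities.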
When it comes to power, HPCs run at much higher processing rates than regular servers. Measured in floating-point operations per second, or FLOPS, the fastest HPC as of last year ran at 33.86 quadrillion FLOPS. Directly tied to the first two requirements for HPC, space and cooling, adequate power provision is essential. The power infrastructure must be able to cope with these demands and fulfil the requirements of both the HPC systems and the mid-range servers, depending on the data hall's configuration. Those looking to install HPC need to factor in the provision of high-density power, its implications for cooling requirements, and the resulting environmental impact.
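For readers translating the units above: a quadrillion FLOPS is a petaFLOPS (10^15 FLOPS), and the same style of simple arithmetic applies to power provisioning. The 2 MW hall allocation and 30 kW rack figure in the sketch below are hypothetical numbers chosen purely to show the calculation, not specifications from the article.

```python
# Unit sanity check: "quadrillion FLOPS" = petaFLOPS (10^15 FLOPS).
QUADRILLION = 10**15

def to_petaflops(flops: float) -> float:
    return flops / QUADRILLION

# The 33.86-quadrillion-FLOPS figure cited above, expressed in petaFLOPS:
peak = 33.86 * QUADRILLION
print(round(to_petaflops(peak), 2))  # 33.86 PFLOPS

# Hypothetical provisioning arithmetic: a data hall with a 2 MW power
# allocation feeding 30 kW HPC racks could power at most:
hall_kw = 2000
rack_kw = 30
print(hall_kw // rack_kw)            # 66 racks, before cooling overhead
```

In practice the rack count would be lower still, since cooling plant and power distribution losses consume part of the same allocation.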
By having these capabilities in place, customers can be reassured that the right technologies are available to grow their estates. But for this to be successful, colocation facilities must not act at the expense of existing customers that might not require HPC. Dedicating an entire site to HPC will noticeably restrict your customer base, so it is essential that any investment still accounts for traditional mission-critical applications as well as increased customer demand for dense configurations, ultimately enabling you to operate efficiently in the future.