The growth and increasing availability of colocation facilities is driving an organisational shift in IT spend and investment. The rise of cloud computing, coupled with growing data volumes and the need for businesses to refresh legacy IT systems, means that committing capital expenditure (Capex) over a significant lifetime is becoming harder to justify in the boardroom.
In light of this, colocation providers have looked to capitalise by offering increasingly flexible terms under an operational expenditure (Opex) model, and many businesses are keen to embrace this across their IT estates. Recent research from the analyst firm 451 Research supports this: the amount of data centre space occupied by colocation providers is up 11 per cent on 2014, and that rate of growth is forecast to continue through to 2018 as enterprises outsource more of their IT. Ultimately, this quashes any concern that the rise of cloud computing will bring about the end of colocation facilities – the evidence shows the impact is quite the opposite.
For an organisation, outsourcing IT capabilities removes the initial Capex hit associated with building and managing a facility – the third-party provider has everything in place, and the organisation can budget for usage based on the terms agreed with the provider. This removes the fluctuation in financial forecasting caused by unexpected IT changes or failures, allowing greater accuracy when budgeting and defining long-term expenditure.
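To make the budgeting point concrete, the sketch below shows how colocation spend under an Opex model reduces to a simple function of contracted terms. All figures and the function name are hypothetical, purely for illustration; real contracts will have different line items.

```python
# Illustrative only: under an Opex colocation model, annual spend is a
# predictable function of contracted usage rather than an up-front build cost.
# All rates and quantities below are hypothetical examples.

def annual_opex(racks: int, rate_per_rack_month: float,
                monthly_kwh: float, kwh_rate: float) -> float:
    """Yearly colocation spend derived from agreed contract terms."""
    space_cost = racks * rate_per_rack_month * 12   # rack space, per year
    power_cost = monthly_kwh * kwh_rate * 12        # metered power, per year
    return space_cost + power_cost

# e.g. 10 racks at 800/month, 5,000 kWh/month at 0.15 per kWh
print(annual_opex(10, 800.0, 5000.0, 0.15))   # -> 105000.0
```

Because every input is fixed in the contract, the output is stable across forecasting periods – which is precisely the predictability the Opex model offers over a lump-sum build.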
As a result, organisations are free to focus on their core competencies and transition much of their Capex investment to Opex spending, freeing up cash for the projects that drive revenue and growth across the business.
For colocation data centres to continue capitalising on the move towards an Opex model, it is imperative that they continually invest in their infrastructure and future-proof their facilities. The growth in high-performance computing (HPC) is a case in point.
As highlighted, organisations are being forced to carry and process unprecedented volumes of data across social, mobile, analytics and the aforementioned rise in cloud. To accommodate this, data centre providers need to support denser configurations, with power and cooling requirements that can outstrip the capabilities of traditional mechanical and electrical infrastructure. For many data centre sites, however, achieving this is very difficult, largely because their original designs were completed at a time when modern densities and configurations simply did not exist. As a result, supporting technologies such as HPC becomes very challenging. So how does a facility address this, and what are the considerations?
Space is often a major consideration, especially as the majority of facilities are located in or around major cities, often commanding a high premium per square foot. Building out an existing site is likely to represent a significant Capex expense and is typically hindered by infrastructure boundaries. For many firms in this position, a more viable option is building an entirely new site to handle HPC capabilities, but again this can only be achieved through significant investment. Facilities facing this reality must understand where current and future demand for HPC data centres is likely to come from, and make changes accordingly.
Another consideration is cooling. Data centre providers go to great lengths to minimise their cooling costs, and facilities are advised to run at temperatures between 20 and 30°C to achieve an optimal environment for the servers. High-performance computers, however, draw significantly more power, so heat management is of vital importance. Some HPC providers have tried to ensure consistent cooling by deploying liquid cooling, larger fans or conductive cooling methods. Typically, however, heat is produced faster than coolants or fans can dissipate it. It is therefore important for facilities to be built with cooling systems that far exceed the maximum requirements, especially if HPC systems may be installed in the future. Facilities already in place that wish to install HPC systems will again need significant Capex spending to upgrade their data halls.
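As a back-of-envelope illustration of why cooling capacity must exceed maximum requirements: essentially all electrical power drawn by IT equipment is dissipated as heat, so required cooling tracks the hall's IT load. The rack counts, densities and headroom figure below are hypothetical, chosen only to contrast a traditional hall with a denser HPC hall.

```python
# Rough sizing sketch (illustrative figures, not vendor data): virtually all
# electrical power drawn by a rack leaves it as heat, so the cooling plant
# must match the IT load plus a safety margin.

def required_cooling_kw(racks: int, kw_per_rack: float,
                        headroom: float = 0.25) -> float:
    """Cooling capacity (kW) needed for a data hall, with a safety margin."""
    it_load = racks * kw_per_rack       # heat output ~= electrical draw
    return it_load * (1 + headroom)     # over-provision beyond peak load

# A traditional hall vs. a hypothetical HPC hall, same footprint:
print(required_cooling_kw(100, 5))     # 100 racks at 5 kW  -> 625.0
print(required_cooling_kw(100, 30))    # 100 racks at 30 kW -> 3750.0
```

The six-fold jump in cooling load for the same floor space is the crux of the retrofit problem described above: the mechanical plant, not the square footage, becomes the binding constraint.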
When it comes to power, HPC systems run at much higher processing rates than regular servers. Measured in floating-point operations per second, or FLOPS, the fastest HPC system as of last year ran at 33.86 quadrillion FLOPS. Tied directly to the first two considerations, space and cooling, adequate power is essential: the power infrastructure must be able to cope with these demands and serve both the HPC systems and the mid-range servers, depending on the data hall's configuration. Those looking to install HPC systems need to factor in the provision of high-density power, its implications for cooling requirements, and the resulting environmental impact.
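For readers unfamiliar with the units, the quadrillion figure above maps directly onto the standard SI prefix used in HPC benchmarking: one quadrillion is 10^15, which is also what "peta" denotes. A quick sanity check:

```python
# Unit check for the figure quoted above: 1 quadrillion = 1e15, and the SI
# "peta" prefix is also 1e15, so 33.86 quadrillion FLOPS = 33.86 petaFLOPS.
QUADRILLION = 10**15
PETA = 10**15

peak_flops = 33.86 * QUADRILLION      # peak rate quoted for the fastest system
petaflops = peak_flops / PETA

print(f"{petaflops:.2f} PFLOPS")      # -> 33.86 PFLOPS
```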
With these capabilities in place, customers can be reassured that the right technologies exist to grow their estates. But for this to be successful, colocation facilities must not invest at the expense of existing customers that do not require HPC. Dedicating an entire site to HPC will noticeably restrict your customer base, so any investment must still account for traditional mission-critical applications as well as increased customer demand for dense configurations, ultimately enabling you to operate efficiently in the future.