The growth of technology across every business is driving a dramatic change in the way data is processed and used.
Until relatively recently, intense data crunching was the privilege of computer science labs, research institutes, government departments and defence facilities. Today, however, data is becoming the new currency, as it is increasingly used to create competitive edge and drive new business models.
Retailers – they’re harnessing the power of multi-channel shopping to offer customers products based on their previous shopping choices, or their age, gender and even social media preferences. Using data to predict customer behaviour and the trends likely to dominate a retailer’s next quarter’s sales requires analysing data gathered from thousands of customers, across all platforms, over several years.
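The idea of recommending products from previous shopping choices can be illustrated with a minimal sketch. This is a toy co-purchase counter, not a production recommender; the product names and purchase histories are invented for the example.

```python
from collections import Counter, defaultdict

# Hypothetical purchase histories: one list of product IDs per customer.
histories = [
    ["tea", "mug", "biscuits"],
    ["tea", "biscuits"],
    ["coffee", "mug"],
    ["tea", "mug"],
]

# Count how often each pair of products appears in the same basket.
co_counts = defaultdict(Counter)
for basket in histories:
    for item in basket:
        for other in basket:
            if other != item:
                co_counts[item][other] += 1

def recommend(item, n=2):
    """Suggest the n products most often bought alongside `item`."""
    return [product for product, _ in co_counts[item].most_common(n)]

print(recommend("tea"))  # products most frequently co-purchased with tea
```

At enterprise scale the same counting runs over billions of baskets in parallel, which is exactly the kind of workload that pushes retailers towards HPC-style infrastructure.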
Manufacturers – they’re taking a granular approach to diagnosing and correcting process flaws by analysing data from their production lines. This use of analytics offers businesses a competitive edge, because they can develop leaner, more efficient processes by reducing waste and improving the quality and yield of their products.
A data centre capable of handling modern enterprise problems
Your typical data centre is built on a generic, standardised architecture designed to deal with lots of small problems in series: serving a web page or a file, for example. However, the modern enterprise computing problems mentioned above look a lot like the supercomputing problems academia and research institutions have been facing for some time.
For example, working out predictive models for consumer behaviour based on a complex set of data to deliver a customer experience in real time? That’s a supercomputer problem. Working out the mechanical dynamics of a car crash to support the design of a new chassis to reduce the risk of injury in a collision? That’s a supercomputer problem.
To deliver this, the set-up of today’s data centre may be outmoded. There’s a growing need for HPC capabilities to enter mainstream data centre facilities. High Performance Computing simply aggregates computing power in a way not typically associated with standard data centre server infrastructure. It requires denser banks of computing resources to minimise latency and increase capacity, whilst minimising floorspace. In a data centre, optimising for this means redesigning server racks, for example by removing the need for cooling slots between processing units. The data centre must then provide the power and alternative cooling systems needed to support higher contiguous rack stacking, without a significant price increase for the customer.
Filling each rack with more servers reduces footprint requirements, but demands more sophisticated power and cooling. Stacking racks so close together also reduces the minute but critical latency between servers during intense parallel processing, bringing true supercomputing to today’s modern enterprise.
By analysing data and using its outcomes to make informed business decisions, businesses can set themselves apart from the competition by delivering a better overall service or customer experience. This reliance on data, however, is not possible without the increasing use of HPC in the data centre, helping businesses crunch their data quickly and effectively while delivering the power and cooling needed for this new age of computing.