Channel: Data Center Archives - Enterprise Viewpoint

The impact of the AI boom on data centres


The data centre industry is abuzz with the explosion in demand expected from the growth of Artificial Intelligence and Machine Learning (AI/ML) systems. One of the biggest and most respected investors in the data centre market is predicting more than 300% growth in global data centre capacity over the next 10 years, and many others think that estimate may be conservative. All agree, though, that AI/ML is going to drive massive growth in new data centres globally.

As it stands, there are two primary types of AI/ML system expected to exert pressure, albeit of differing kinds, on the data centre market: training engines and inference engines. These correspond to different stages of the AI process. In the training phase, an AI system is fed datasets from which it learns to analyse that kind of data; this produces a model, which is then used in the inference phase to make assessments or predictions that can be translated into actionable results.
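The two phases can be sketched in a toy example (illustrative only, not drawn from the article): a one-variable linear model, where the compute-heavy training step distils a dataset into a model, and the lightweight inference step applies that model to new inputs.

```python
# Toy illustration of the training/inference split described above.
# Real training engines run on thousands of GPUs; the hand-off is the same:
# training produces a model, inference consumes it.

def train(xs, ys):
    """Training phase: learn model parameters from a dataset (least squares)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept  # the "model" handed off to inference

def infer(model, x):
    """Inference phase: apply the trained model to a new input."""
    slope, intercept = model
    return slope * x + intercept

model = train([1, 2, 3, 4], [2, 4, 6, 8])  # compute-heavy; location-insensitive
print(infer(model, 5))                      # cheap; latency-sensitive
```

The asymmetry in the code mirrors the one in the article: training touches the whole dataset, while each inference call is a small, fast evaluation of the finished model.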

Most forms of training engine require huge amounts of computational power, and hence data centre capacity. In most cases the physical location of a training engine is not important, so decisions on where to site such systems can be driven by cost, and we expect them to be placed wherever electricity and operating costs are cheapest.

Inference engines, on the other hand, can be sensitive to distance, especially when they are operating in “real-time” environments which require low network latency. Examples of these are manufacturing systems used to control production lines or industrial process control systems which take inputs from Internet of Things (IoT) devices such as sensors and cameras. These systems can be much smaller than training engines, but also need to be much closer to the action – applications like these are already in use and achieving great results.

Training engines work best when they are big – very big. In the past, a typical enterprise data centre might have been built with 2-10MW of power. Hyperscalers (the large cloud service providers, such as AWS, Google and Microsoft, that deliver computing and storage at enterprise scale) now build data centres that start at 40MW and grow to well over 100MW. Most specialists regard 100-200MW as “entry-level” for training engines. It would not be unusual to see data centres of 500MW or more being built in the future, although there are many challenges in building sites of that size in most of Western Europe.

Hyperscalers are already preparing for a future dominated by AI. Analysts from TD Cowen have observed that hyperscalers have begun to pre-lease capacity 2-3 years in advance of facility delivery, up from the 12-18-month pre-leasing window seen last year. They explained the shift in pricing: ‘In 2022, leasing prices increased due to the increased cost of building data centers. Now they are higher simply due to limited supply and high demand.’

It is worth noting that most of the growth over the next two years is expected to come from very large sites built for training engines. The wider network of smaller, distributed inference engines is likely to follow at a slightly slower pace, and the data centre industry is still debating how that will happen.

Another major impact of this revolution is the need for data centres to increase energy density and energy efficiency. The huge training engines being built work best when their hardware is packed very tightly into data centre cabinets. The GPUs used in most AI servers are very power-hungry and generate a lot of heat. Historically, data centres were designed to support 2-4kW of IT power consumption in a single cabinet or rack; more recently this has grown to 8kW per rack or more. AI/ML systems prefer racks that can deliver 50-100kW! This is an engineering challenge in most data centres, but solutions exist that can deliver this level of energy density. Cooling systems based on various forms of liquid cooling are the most efficient, and energy efficiency will become a prime factor in the economics of AI systems because of their enormous scale.
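A back-of-envelope calculation shows why density matters so much. Using the rack figures above and an assumed 100MW of IT load (a hypothetical site size for illustration), the same load shrinks from tens of thousands of legacy racks to roughly a thousand high-density ones:

```python
# Back-of-envelope sketch: racks required to house a fixed IT load at the
# per-rack power densities cited above. The 100MW site size is an assumption
# chosen for illustration, not a figure from the article.

def racks_needed(it_load_kw, kw_per_rack):
    """Number of racks required for a given IT load (ceiling division)."""
    return -(-it_load_kw // kw_per_rack)

IT_LOAD_KW = 100_000  # an assumed 100MW hyperscale site, IT load only

for density in (4, 8, 50, 100):  # kW per rack, figures cited in the text
    print(f"{density:>3} kW/rack -> {racks_needed(IT_LOAD_KW, density):,} racks")
```

At 4kW per rack, the load needs 25,000 racks; at 100kW per rack, 1,000 — a 25x reduction in footprint, which is exactly what makes the cooling problem so acute.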

To summarise: in an industry that is already growing rapidly and already exerting a great impact on our digital landscape, AI stands to cement that position even further by driving demand for data centre capacity worldwide. With this come two pressing considerations: how the UK and Europe can capitalise on this expanding market, and how we can all work together to mitigate sustainability risks.

Regarding the former, the most immediate action the UK can take is to emulate the standard currently being set by the US. Adrian Joseph, the chief data and AI officer at BT, told a parliamentary session of the Science and Technology Committee that the global tech industry is in an AI ‘arms race’. Specifically, this so-called ‘race’ pits the UK against the ‘Big Tech Companies’ of the US and potentially China: ‘There’s a very real risk, that unless we begin to leverage and invest and encourage our startup communities to leverage the great academic institutions that we’ve got to ensure that we have public and private sector all working together […] we in the UK could be left behind.’ At a recent conference, Jaap Zuiderveld of Nvidia pointed out that the US is way ahead of the UK and Europe in building out capacity for AI, issuing a wake-up call for the data centre industry.

The post The impact of the AI boom on data centres appeared first on Enterprise Viewpoint.

