Can your IT stack cope with AI?

Published on 20/06/2023 | Written by Heather Wright



Balancing AI objectives and infrastructure a challenge…

Companies are keen to take advantage of artificial intelligence to improve their business, but a new study shows IT leaders aren’t confident their existing technology stacks can cope with the added demands.

The Equinix-commissioned survey of 2,900 IT decision makers globally shows 83 percent of Asia Pacific respondents were seeking to harness AI and were either already using it or planning to use it. But 44 percent of Apac leaders said their existing IT infrastructure is not fully prepared for the demands of the technology.

Those figures are on par with the global averages: 85 percent were using, or planning to use, AI, while 42 percent doubted their systems could cope.

“44 percent said their existing IT infrastructure is not prepared for the demands of AI.”

It’s not just about the infrastructure either, with Apac leaders among the least confident about their team’s ability to implement the technology – 45 percent said they were ‘not very comfortable’.

But despite that discomfort, surveyed organisations plan to use, or are using, AI in IT operations (83 percent), cybersecurity (81 percent) and customer experience (78 percent). R&D and marketing round out the top five uses.

The survey highlights the need for a modern, mature IT infrastructure – and the budget required to support the necessary computing power – and the need to align AI objectives, hardware and network performance.

“There are multiple ways to access AI infrastructure, from building your own to working with cloud providers or even using APIs to connect your data with large external models,” George Elissaios, Google Cloud senior director for specialised compute, and Mikhail Chrestkha, Google Cloud outbound product manager, noted in a recent blog.

“Regardless of where you access your AI infrastructure, once you’ve built your models, embedding them into your business decision-making process can require extraordinary amounts of compute power to continuously analyse and generate consumable content.”

The pair note that hardware-imposed ‘procrastination’ can grate on data and AI teams needing to test results.

“It’s counter to the agility that is one of the superpowers of AI and machine learning – rapid experimentation and personalisation are essential to outstanding consumer experiences.”

The higher computing requirements of AI, including generative AI, compared with traditional enterprise workloads are driving a boom time for companies such as Nvidia. The rise of generative AI models is translating into rising demand for compute power, which can be met with graphics processing units (GPUs) and tensor processing units (TPUs) optimised specifically for AI model development and deployment.

Nvidia saw a record high US$4.3 billion in data centre revenue for the quarter ending April 30.

“Generative AI is driving exponential growth in compute requirements,” says Colette Kress, Nvidia CFO and EVP.

Demand, she says, is broad-based across consumer internet companies, cloud service providers, enterprises and AI startups.

“In networking we saw strong demand at both cloud service providers and enterprise customers for generative AI and accelerated computing, which require high-performance networking,” she adds. “As generative AI applications grow in size and complexity, high performance networks become essential for delivering accelerated computing at data centre scale to meet the enormous demand of all training and inferencing.”

Jensen Huang, Nvidia founder and CEO, says a trillion dollars of installed global data centre infrastructure will transition from general purpose to accelerated computing as companies race to apply generative AI to every product, service and business process.

The company is expecting second quarter revenue to hit around US$11 billion, driven largely by data centre growth on the back of a steep increase in demand related to generative AI and large language models.

“This demand has extended our data centre visibility out a few quarters and we have procured substantially higher supply for the second half of the year,” Kress says, highlighting the growth being seen.

She says we’re at the beginning of a 10-year transition to ‘recycle or reclaim’ the world’s data centres for accelerated computing.

“You’ll have a pretty dramatic shift in the spend of the data centre from traditional computing, and to accelerated computing with smart NICs [network interface cards], smart switches, of course GPUs, and the workload is going to be predominantly generative AI.”

For companies using cloud, much of the responsibility lies with cloud providers for now, at least at the chip level. The large providers – AWS, Microsoft and Google Cloud – are investing heavily in the massive data storage and compute power required to run generative AI, as the battle for supremacy in enterprise-grade generative AI heats up.

Alphabet announced earlier this year that it was building out its cloud data centres, redistributing workloads and investing ‘significantly’ in infrastructure to drive AI opportunities, while Microsoft says it is optimising its Azure infrastructure.

But there are still infrastructure and networking issues for organisations to tackle.

Jeremy Deutsch, Equinix Asia Pacific president, says successful development of accurate AI models depends on secure and high-speed access to both internal and external data sources that can be spread across multiple clouds and data brokers.

“For example, as enterprises embark on creating their own private generative AI solutions, they may want to process their confidential data at a private and secure location with high-speed access to external data sources and AI models,” Deutsch says.

As increasing amounts of data are generated at the edge, AI processing, too, will need to move to the edge for performance, privacy and cost reasons.

To satisfy those requirements, tech leaders can implement hybrid solutions in which AI model training and model inference occur at different locations, says Equinix (which, as a big name in the data centre market, stands to capitalise on the wave of AI-related data centre demand).

“Ultimately, to create scalable AI solutions, businesses must consider whether their IT frameworks can accommodate the required data ingestion, sharing, storage and processing of massive and diverse data sets, while keeping sustainability in mind.”

IT leaders in the survey cited an increase in Opex costs as the biggest issue in adopting newer technologies such as AI, with Apac leaders most concerned about Opex at 51 percent. Lack of internal knowledge was second at 45 percent, with concerns about slow implementation third at 34 percent.
