
Tech giants form industry group to help develop next-generation AI chip components

Intel, Google, Microsoft, Meta and other tech giants are forming a new industry group, the Ultra Accelerator Link (UALink) Promoter Group, to guide the development of components that connect AI accelerator chips in data centers.

The group, whose members also include AMD (but not Arm), HPE, Broadcom and Cisco, announced Thursday that it is proposing a new industry standard to connect the AI accelerator chips found in a growing number of servers. Broadly speaking, AI accelerators are chips, ranging from GPUs to custom solutions, that speed up the training, fine-tuning and running of AI models.

“The industry needs an open standard that can be moved forward quickly, in an open [format] that allows multiple companies to add value to the entire ecosystem,” Forrest Norrod, general manager of AMD’s data center solutions group, told reporters at a briefing on Wednesday. “The industry needs a standard that allows innovation to happen quickly without being constrained by any one company.”

The first version of the proposed standard, UALink 1.0, would connect up to 1,024 AI accelerators (GPUs only) within a single computing “pod.” Based on “open standards” including AMD’s Infinity Fabric, UALink 1.0 will allow direct loads and stores between the memory attached to AI accelerators, and will generally boost speed while lowering data-transfer latency compared with existing interconnect specifications, according to the UALink Promoter Group.

Image credit: UALink Promoter Group

The organization said it will form a consortium, the UALink Alliance, in the third quarter to oversee development of the UALink specification going forward. UALink 1.0 will be made available to companies that join the alliance at around the same time, and a higher-bandwidth update, UALink 1.1, is slated to arrive in the fourth quarter of 2024.

Norrod said the first UALink products will be available “in the next few years.”

Surprisingly, the group’s membership list does not include Nvidia, by far the largest producer of AI accelerators, with an estimated 80% to 95% market share. Nvidia declined to comment for this story. But it’s not hard to see why the chipmaker hasn’t enthusiastically supported UALink.

First, Nvidia already offers its own proprietary interconnect, NVLink, for linking GPUs within data center servers, and the company may well be reluctant to support a specification based on rival technologies.

The fact is, Nvidia has enormous power and influence.

In Nvidia’s most recent fiscal quarter (Q1 2025), the company’s data center sales (including sales of its AI chips) grew more than 400% from the same period last year. If Nvidia continues its current momentum, it will surpass Apple to become the world’s second-largest company by market value sometime this year.

So, simply put, if Nvidia doesn’t want to cooperate, it doesn’t have to.

As for Amazon Web Services (AWS), the only public cloud giant not contributing to UALink, it may be in “wait and see” mode as it chips away (no pun intended) at its various in-house accelerator hardware efforts. It may also be that AWS, with its firm grip on the cloud services market, sees little strategic value in opposing Nvidia, which supplies many of the GPUs it serves to customers.

AWS has not yet responded to TechCrunch’s request for comment.

Indeed, aside from AMD and Intel, the biggest beneficiaries of UALink appear to be Microsoft, Meta and Google, which have collectively spent billions of dollars on Nvidia GPUs to power their clouds and train their ever-growing fleets of AI models. All of these companies are looking to reduce their reliance on the dominant vendor in the AI hardware ecosystem.

Google has custom chips for training and running AI models, its TPUs, as well as a custom Arm-based CPU, Axion. Amazon has multiple AI chip families of its own. Microsoft joined the fray last year with Maia and Cobalt. And Meta is rounding out its accelerator lineup.

Meanwhile, Microsoft and its close partner OpenAI are reportedly planning to spend at least $100 billion to build a supercomputer for training AI models that will be equipped with future versions of Cobalt and Maia chips. These chips will need something to connect them together – perhaps UALink.

