AMD to Launch Advanced AI GPU to Challenge Nvidia


Advanced Micro Devices (AMD) announced on Tuesday that it will begin shipping its most sophisticated AI GPU, the MI300X, to select customers later this year. According to analysts, this move represents a significant challenge to Nvidia, the current leader in the AI chip market with over 80% market share.

Utilization of AI GPUs

GPUs, like the MI300X, are crucial components utilized by organizations such as OpenAI in developing cutting-edge AI programs, including ChatGPT. If AMD’s AI chips, referred to as “accelerators,” are embraced by developers and server manufacturers as viable alternatives to Nvidia’s products, it could open up a substantial untapped market for the chipmaker, which is primarily recognized for its traditional computer processors.

During a presentation in San Francisco, AMD CEO Lisa Su stated that AI is the company’s “largest and most strategic long-term growth opportunity.” Su projected that the data center AI accelerator market will grow from approximately $30 billion in 2023 to more than $150 billion by 2027, a compound annual growth rate exceeding 50%.
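
Those figures are internally consistent: a roughly $30 billion market compounding at just over 50% a year for four years lands a little above $150 billion. The short sketch below is illustrative arithmetic only, not AMD’s own model.

```python
# Rough consistency check on the projected data center AI accelerator market.
# The inputs (start value, growth rate, horizon) come from the article;
# the calculation is plain compound growth.

start_2023_billion = 30.0   # ~$30B market in 2023
cagr = 0.50                 # compound annual growth rate of ~50%
years = 4                   # 2023 -> 2027

projected_2027 = start_2023_billion * (1 + cagr) ** years
print(f"Projected 2027 market: ${projected_2027:.0f}B")  # ~$152B, i.e. "over $150 billion"
```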

How Much Would It Cost?

While AMD did not disclose the price of the MI300X, its entry into the market could exert price pressure on Nvidia’s GPUs, such as the H100, which can cost $30,000 or more. Reduced AI GPU prices may contribute to lowering the high expenses associated with serving generative AI applications.

The demand for AI chips remains strong in the semiconductor industry, especially as PC sales, which have historically driven processor sales, decline.

The MI300X, built on AMD’s CDNA architecture, is designed specifically for large language models and other advanced AI workloads. Its standout feature is support for up to 192GB of memory, more than competing chips such as Nvidia’s H100, which supports 120GB.

Showcasing the Capabilities

Language models used in generative AI applications require substantial memory because they run an increasing number of calculations across billions of parameters. AMD showcased the MI300X running Falcon, a 40 billion parameter model, to demonstrate its capacity to handle large models. OpenAI’s GPT-3, for comparison, has 175 billion parameters.
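
To make the memory requirement concrete, a common back-of-the-envelope estimate (an assumption here, not a figure from AMD’s presentation) is two bytes per parameter for 16-bit weights, before counting activations and other overhead. The sketch below applies that estimate to the models mentioned above.

```python
# Back-of-the-envelope estimate of GPU memory needed just to hold model weights
# in 16-bit precision (2 bytes per parameter). Real deployments also need memory
# for activations, caches, and framework overhead, so treat these as lower bounds.

BYTES_PER_PARAM_FP16 = 2

def weights_gb(num_params: float) -> float:
    """Approximate memory (in GB) to store a model's weights."""
    return num_params * BYTES_PER_PARAM_FP16 / 1e9

falcon_40b = weights_gb(40e9)   # ~80 GB
gpt3_175b  = weights_gb(175e9)  # ~350 GB

print(f"Falcon-40B weights: ~{falcon_40b:.0f} GB")
print(f"GPT-3 (175B) weights: ~{gpt3_175b:.0f} GB")
```

Under that approximation, the Falcon demo fits within a single MI300X’s 192GB, while a GPT-3-scale model would still span multiple chips.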

AMD CEO Lisa Su expressed her enthusiasm for the MI300X, emphasizing the role of AI GPUs in enabling generative AI. She also highlighted that the added memory on AMD chips would reduce the necessity for multiple GPUs when running the latest large language models.

Additionally, AMD announced an Infinity Architecture that integrates eight MI300X accelerators into a single system. This approach aligns with Nvidia’s and Google’s development of similar systems that consolidate eight or more GPUs within a single box for AI applications.
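
Continuing the same rough estimate, an eight-accelerator system of this kind pools on the order of 1.5TB of memory; the snippet below is illustrative arithmetic based on the 192GB-per-chip figure from the article.

```python
# Aggregate memory of an eight-accelerator system, using the article's
# 192GB-per-chip figure. Purely illustrative arithmetic.

chips = 8
memory_per_chip_gb = 192
total_gb = chips * memory_per_chip_gb   # 1536 GB, roughly 1.5 TB pooled across the box

print(f"Eight-accelerator system memory: {total_gb} GB")
```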


Nvidia vs. AMD

One factor that has historically favored Nvidia in the AI chip market is its mature software platform, CUDA, which gives developers access to the chips’ core hardware features. AMD has developed its own software stack, ROCm, for its AI chips. AMD’s president, Victor Peng, acknowledged that ROCm is still a work in progress but expressed confidence in the company’s software stack, which is designed to work with an open ecosystem of models, libraries, frameworks, and tools.
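
One illustration of what an open framework ecosystem means in practice: ROCm builds of PyTorch expose AMD GPUs through the same device API that CUDA builds use, so much existing model code runs unchanged. The minimal sketch below assumes a PyTorch installation built against either backend; it is not taken from AMD’s presentation.

```python
# Minimal sketch: the same PyTorch code path serves both CUDA (Nvidia) and
# ROCm (AMD) builds. On ROCm builds, the torch.cuda API is backed by AMD's
# HIP runtime, and torch.version.hip is set instead of torch.version.cuda.
import torch

if torch.cuda.is_available():
    backend = "ROCm/HIP" if getattr(torch.version, "hip", None) else "CUDA"
    device = torch.device("cuda")          # same device string on both stacks
    print(f"Running on {backend}: {torch.cuda.get_device_name(device)}")
else:
    device = torch.device("cpu")
    print("No GPU backend available; falling back to CPU")

# Framework-level code (tensors, models, training loops) is written once:
x = torch.randn(1024, 1024, device=device)
y = x @ x.T                                # runs on whichever accelerator is present
```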

With the MI300X shipping to select customers later this year and aimed squarely at the growing demands of the AI market, AMD is positioning itself to challenge Nvidia’s dominance in the AI chip industry.

Source: CNBC