Nvidia has been the driving force behind the recent boom in generative AI products, thanks to its dominance of the consumer and workstation graphics markets and its well-established CUDA libraries and AI-accelerating Tensor cores. However, Microsoft, which already uses thousands of Nvidia GPUs in its Azure data centers, is reportedly collaborating with AMD to enhance the AI capabilities of AMD's GPUs. The partnership aims to improve the performance of AMD's products in AI workloads, with Microsoft contributing engineering resources and support.
According to reports, AMD is working on an AI accelerator named "Athena," though Microsoft flatly denies involvement. A more competitive market for AI-accelerating hardware would lower server costs and allow AI features to spread into more products. For instance, Microsoft is said to be developing a private version of ChatGPT for businesses that could cost ten times more than the regular version. By reducing server hardware costs, Microsoft could lower its prices, cut its own expenses, or both, making these products more appealing.
The two companies have collaborated before: the Surface Edition Ryzen processors used in some older Surface PCs were not much different from regular Ryzen processors, but they featured Microsoft-assisted optimizations to the firmware, drivers, and software stack. Recent tests from Tom's Hardware show how AMD's GPU architectures struggle with AI workloads: AMD's current flagship Radeon RX 7900 XTX performed slower than Nvidia's RTX 4090, 4080, and 4070 Ti, even though the RX 7900 XTX is faster than the RTX 4080 in most games. The entry-level RTX 3050 even managed to beat all of AMD's previous-generation RX 6000-series cards. While software optimizations could help these cards perform better, the current state of things shows the advantage Nvidia holds in this market.