AMD CEO: CPUs Equipped With Xilinx AI Engines Due by 2023

This article is part of TechXchange: AI on the Edge.

AMD said it plans to integrate Xilinx’s differentiated IP in future generations of its central processing units (CPUs), in a bid to better compete with Intel and NVIDIA in data centers and even the embedded market.

The Santa Clara, California-based company will integrate Xilinx’s artificial-intelligence (AI) engines across its CPU portfolio to bolster their AI inference capabilities, said AMD CEO Lisa Su during AMD’s second-quarter conference call with analysts. The first chips in the series are set to be released in 2023.

The move is the latest step in AMD’s ambitions to become an even more sprawling semiconductor giant, following its $35 billion deal to buy Xilinx, which became part of the company in January. Su said Xilinx helps diversify its offerings with a broad range of computing engines—ones suited for artificial intelligence, for example, or data center networking—and gives it a longer list of customers that it can sell to.

“We now have the best portfolio of high-performance and adaptive computing engines in the industry, and we see opportunities to leverage our broad technology portfolio to deliver even stronger products,” she said.

Intelligent Engines

The AI engines already ship in Xilinx’s Versal family of adaptive compute acceleration platforms (ACAPs) to take on the likes of Intel, Marvell, and NVIDIA in markets such as cloud servers and networking.

The AI engines are also being built into chips specifically designed to run real-time workloads such as image recognition in embedded and edge devices, ranging from cars and industrial and medical robots to aerospace and defense systems like satellites.

The Versal series contains scalar engines (Arm CPU cores), intelligent engines (AI accelerators and DSP blocks), as well as the same type of programmable logic at the heart of its FPGAs. The programmable network-on-a-chip (NoC) ties everything together on a system-on-a-chip (SoC), with various hard cores for connectivity (PCIe and CCIX), networking (Ethernet), memory, security, and I/O.

Xilinx’s Vivado ML software stack is used to reconfigure the programmable logic in the Versal chips, while its Vitis and Vitis AI development tools are targeted at fine-tuning software to run on the Versal platform.

“We have this AI engine that is already deployed in production in a number of embedded applications and edge devices,” said Victor Peng, Xilinx’s former CEO and president of AMD’s Adaptive and Embedded Computing business. “The same architecture can be scaled and brought into the CPU product portfolio.”

He said the combined company is working on building a unified software stack to help customers take advantage of the AI prowess of Xilinx and AMD chips for inference and training in data centers and at the edge.

The Xilinx deal gives the semiconductor giant a “much broader set of offerings” in the market for AI hardware, specifically on the inferencing side, supplementing the AI acceleration offered by its CPUs and GPUs for data centers, added Su. “AI is a huge opportunity for us.”

Gaining Ground

Business is booming at AMD. It has gained the performance edge on Intel with new generations of CPUs, in part by contracting out production to TSMC, giving it access to process technologies that are years ahead of Intel’s fabs.

Overall, the company reported $5.9 billion of revenue in the second quarter, up 71% from the same quarter a year ago. Su pointed out that every one of its businesses grew by double digits last quarter.

AMD has regained some of its sheen for investors in recent years as it gobbled up the market for CPUs and GPUs for PCs as well as video game consoles, including Microsoft’s Xbox Series X, Sony’s PlayStation 5, and Valve’s Steam Deck.

AMD is also gaining ground in the data-center market, where Intel has long dominated. It has been swiping market share from Intel while its largest rival works to recover its chip manufacturing prowess, industry analysts say.

According to AMD, sales of its server processors have more than doubled in eight of the last 10 quarters, highlighting higher demand for EPYC CPUs from its customers in the cloud, enterprise, and high-performance computing (HPC) markets.

The company’s sales to the cloud-computing market are also soaring as technology giants such as Amazon, Google, and Microsoft in the US and Alibaba and Baidu in China ramp up investments in server hardware.

While Intel has struggled to move to more advanced technology nodes, AMD is throwing its weight around with TSMC and other third-party foundries to boost production of its most advanced chips for PCs and the data center. The Silicon Valley company is also preparing to roll out its next generation of CPUs, code-named Genoa, later in the year. It hopes the upgrade will help it win more market share from Intel.

The prospect of strong demand for server processors as well as the addition of Xilinx prompted the company to raise its full-year revenue-growth outlook to 61%. It had previously forecast sales would rise 31% in 2022.

Su said the company’s core business still lies in central processors and graphics chips used in laptops, desktops, and the data center. But Xilinx’s programmable chip families give it “lots of levers for growth” now and in the future.

Diversification Plan

One of its broader strategies lately has been to amass a wide range of chips that can be linked tightly together to improve performance and power efficiency for customers with a broad range of computational needs.

She explained, “What we see in terms of growth going forward is there will be more customization around solutions for large customers whether it is cloud companies, large telcos, or even edge opportunities.”

To pursue its diversification strategy, AMD is also buying networking chip startup Pensando Systems in a $1.9 billion deal. The acquisition would add a lineup of data-processing units (DPUs) and a software stack, giving AMD a wide range of technologies for storage, security, and networking suited to the data center.

“The whole strategy behind AMD is to have the best compute engines and then sort of put them together in solutions for specific end markets,” said Su. “I think [having] CPUs, GPUs, the FPGAs, the adaptive SoCs, and then the DPUs that we are adding from Pensando give us just a tremendous range of capabilities.”

Su added, “Having all these compute engines will allow us to essentially optimize those solutions together.”

The company is still facing questions about whether it can continue competing with Intel and executing on its strategy given ongoing turmoil in the supply chain and capacity limitations at TSMC and other foundries.

But it is working through the supply challenges, AMD executives said. “We’re working with the larger scale of AMD to try to bring more supply on board and continuing to ramp overall capacity to support a very strong next few quarters,” Su said.

