AMD takes the AI networking battle to Nvidia with new DPU launch


AMD has revealed an upgraded data processing unit (DPU) as it looks to stake its claim to power the next generation of AI.

The new Pensando Salina DPU is the company's third-generation release, and promises twice the performance, bandwidth, and scale of the previous generation.

AMD says it can support 400G throughput, meaning faster data transfer rates than ever before, a huge advantage as companies around the world look for quicker and more efficient infrastructure to keep up with AI demands.

Pensando Salina DPU

As with previous generations, AMD's latest DPU platform is split into two parts: the front-end, which delivers data and information to an AI cluster, and the back-end, which manages data transfer between accelerators and clusters.

Alongside the Pensando Salina DPU (which governs the front-end), the company has also announced the AMD Pensando Pollara 400 to manage the back-end.

The industry’s first Ultra Ethernet Consortium (UEC) ready AI NIC, the Pensando Pollara 400 supports next-generation RDMA software and is backed by an open networking ecosystem, offering customers the flexibility needed to embrace the new AI age.

The AMD Pensando Salina DPU and AMD Pensando Pollara 400 are sampling with customers now, with a public release scheduled for the first half of 2025.
