NVMe-oF™ Technology in the Wild – NVIDIA DGX SuperPOD with NVMe-oF Technology Simplifies Scaling Supercomputing Infrastructure


By Sebastian Grandjean Perrenod Comtesse, NetApp

High Performance Computing (HPC) and supercomputing have always involved complex system designs featuring highly specialized equipment at a massive scale.

Years ago, some companies realized that IT infrastructure design should be streamlined, and they jointly developed a common reference architecture to simplify the overall design process. The goal was to deploy IT infrastructure knowing upfront that the design would work – and so Converged Infrastructure was born.

The Evolution of Converged Infrastructure to NVMe® Fabrics Technology

Converged Infrastructure worked well for most on-premises enterprise applications. However, new latency-sensitive workloads such as Artificial Intelligence (AI) and Machine Learning (ML) strained standard enterprise infrastructure, as they needed to process enormous amounts of data at much higher speeds and lower latencies.

These reference (or validated architecture) designs were adapted over time, but only for small system setups. The latency issues between servers were addressed with spine-leaf networks, called “fabrics”. On the storage side, latency was brought down considerably by the introduction of NVMe and NVMe over Fabrics (NVMe-oF™) technologies, which became enterprise-grade with the NVMe 1.4 specification.
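
To make the host side of this concrete, here is a minimal sketch, assuming a Linux host with the standard nvme-cli tool and an RDMA-capable fabric, of how remote NVMe namespaces are discovered and attached. The target address, service port and subsystem NQN are hypothetical placeholders, not values from any shipping design.

    # A minimal, hypothetical sketch: attaching remote NVMe namespaces over
    # a fabric from a Linux host using the standard nvme-cli tool. The
    # transport, address, service ID and NQN are placeholders, not values
    # from the DGX SuperPOD reference design.
    import subprocess

    TARGET_ADDR = "192.168.10.10"  # placeholder NVMe-oF target address
    TRSVCID = "4420"               # the conventional NVMe-oF service port
    SUBSYS_NQN = "nqn.2014-08.org.example:storage-pod-1"  # placeholder NQN

    # Ask the target which NVMe subsystems it exposes over RDMA.
    subprocess.run(
        ["nvme", "discover", "-t", "rdma", "-a", TARGET_ADDR, "-s", TRSVCID],
        check=True,
    )

    # Connect to one subsystem; its namespaces then appear to the host as
    # ordinary local /dev/nvmeXnY block devices.
    subprocess.run(
        ["nvme", "connect", "-t", "rdma", "-n", SUBSYS_NQN,
         "-a", TARGET_ADDR, "-s", TRSVCID],
        check=True,
    )

Once connected, the remote namespaces behave like local block devices, which is what lets NVMe-oF technology deliver near-local latency across the fabric.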

DGX SuperPOD Leverages Fabrics Technology

For large, compute-intensive workloads, standard enterprise CPUs were a limitation. To address it, NVM Express member NVIDIA developed an optimized GPU system (DGX) that sits at the heart of its DGX SuperPOD platform for AI data center infrastructure.

The reference design is built around compute and storage PODs, which all connect via InfiniBand-based spine-leaf switches. The combination of spine-leaf network design, InfiniBand infrastructure and NVMe technology makes the solution resilient while delivering the highest throughput at the lowest possible latency. The DGX SuperPOD platform’s spine-leaf topology is genuinely scalable: new switches can be added as compute and storage needs grow, as the sketch below illustrates.
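
As a rough illustration of that scaling property, here is a back-of-the-envelope sketch assuming generic k-port switches in a non-blocking two-tier design, not the actual switch radix used in the DGX SuperPOD: each leaf dedicates half its ports to nodes and half to spines, so a fabric of k-port switches supports k²/2 node-facing ports.

    # Back-of-the-envelope sizing for a non-blocking two-tier spine-leaf
    # fabric. Illustrative only: the 40-port radix below is an assumption,
    # not the switch used in the DGX SuperPOD reference design.

    def spine_leaf_capacity(ports_per_switch: int) -> dict:
        """Largest non-blocking two-tier fabric built from k-port switches."""
        k = ports_per_switch
        downlinks_per_leaf = k // 2                # half the leaf ports face nodes
        uplinks_per_leaf = k - downlinks_per_leaf  # the other half face spines
        spines = uplinks_per_leaf                  # one uplink per spine per leaf
        leaves = k                                 # a k-port spine reaches k leaves
        return {
            "leaves": leaves,
            "spines": spines,
            "node_ports": leaves * downlinks_per_leaf,  # k * k/2 in total
        }

    # Hypothetical 40-port switches: 40 leaves, 20 spines, 800 node ports.
    print(spine_leaf_capacity(40))

When a deployment outgrows two tiers, the usual answer is to add another switching tier built from the same components, which is why the topology can keep growing without a redesign.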

The DGX SuperPOD is a great example of how a reference architecture, combined with a fast, low-latency spine-leaf network and NVMe-oF technology, can be optimized for AI and HPC.

Join the NVM Express Fabrics and Multi-Domain Subsystem Task Group

Fabrics technology will continue to enable better performance in data-demanding applications like AI, HPC and more. NVM Express members are invited to join the Fabrics and Multi-Domain Subsystem Task Group to support the evolution of the technology.

Figure 1. 20-node DGX A100 SU