Vertiv announced the release of a 7MW reference architecture for the NVIDIA GB200 NVL72 platform, developed in collaboration with NVIDIA. The architecture is designed to help transform traditional data centers into AI factories capable of supporting AI applications across industries.
The new reference architecture is designed to accelerate deployment of the liquid-cooled NVIDIA GB200 NVL72 platform, supporting rack densities of up to 132kW. It optimizes power and cooling infrastructure for performance, energy efficiency, and scalability in current and next-generation data centers.
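For a rough sense of the scale these figures imply, the sketch below estimates how many 132kW racks a 7MW envelope could accommodate. The facility overhead fraction used here is a hypothetical placeholder for illustration only, not a figure published by Vertiv or NVIDIA.

```python
# Back-of-envelope estimate of rack count within a 7MW envelope.
# The overhead fraction is an illustrative assumption, not a published figure.

FACILITY_CAPACITY_MW = 7.0   # reference architecture envelope
RACK_POWER_KW = 132.0        # GB200 NVL72 rack density cited in the announcement
OVERHEAD_FRACTION = 0.15     # hypothetical share for cooling, power distribution, etc.

usable_kw = FACILITY_CAPACITY_MW * 1000 * (1 - OVERHEAD_FRACTION)
max_racks = int(usable_kw // RACK_POWER_KW)

print(f"Usable IT power: {usable_kw:.0f} kW")
print(f"Approximate GB200 NVL72 racks supported: {max_racks}")
```

Under these assumed numbers, the envelope works out to roughly 45 racks; the actual count depends on facility design and the overhead of the supporting infrastructure.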
“We are proud to deepen our collaboration with NVIDIA to enable AI-driven data centers of today and tomorrow,” said Giordano (Gio) Albertazzi, CEO of Vertiv. “With our high-performance power and cooling solutions, Vertiv is uniquely positioned to support the NVIDIA GB200 NVL72 platform. This collaboration enables customers to build AI-ready data centers faster and more efficiently, addressing dynamic workloads and future growth.”
Jensen Huang, founder and CEO of NVIDIA, highlighted the increasing complexity of data center designs for AI applications: “With Vertiv’s world-class cooling and power technologies, we are building a new industry of AI factories that produce digital intelligence to benefit every company and industry.”
The Vertiv power and cooling infrastructure, integrated with the NVIDIA Blackwell platform, offers a simplified and accelerated path to AI workload deployment in both new and existing data centers. Key features include hybrid liquid- and air-cooling systems, high-density heat removal solutions, and optional Open Compute Project-inspired systems. The design reduces wasted power capacity by aligning AI cluster requirements with data center capacity, supporting optimal performance.
With a global network of approximately 4,000 field service engineers, Vertiv is well positioned to support both new builds and retrofits. The company's collaboration with NVIDIA sets a new standard for power and cooling infrastructure tailored to the demands of AI and accelerated computing.