In stock

NVIDIA DGX B200 Blackwell 1,440GB AI Supercomputer


✓ GPU: 8x NVIDIA Blackwell GPUs

✓ GPU Memory: 1,440GB total

✓ Performance: 72 petaFLOPS (FP8) for training, 144 petaFLOPS (FP4) for inference

✓ NVIDIA® NVSwitch™: 2x

✓ System Power Usage: Approximately 14.3kW max

✓ CPU: 2 Intel® Xeon® Platinum 8570 Processors, 112 Cores total, 2.1 GHz (Base), 4 GHz (Max Boost)

✓ System Memory: Up to 4TB

✓ Networking:

4x OSFP ports for NVIDIA ConnectX-7 VPI, up to 400Gb/s InfiniBand/Ethernet

2x dual-port QSFP112 NVIDIA BlueField-3 DPU, up to 400Gb/s InfiniBand/Ethernet

✓ Management Network: 10Gb/s onboard NIC with RJ45, 100Gb/s dual-port Ethernet NIC, Host BMC with RJ45

✓ Storage:

OS: 2x 1.9TB NVMe M.2

Internal: 8x 3.84TB NVMe U.2

✓ Software: NVIDIA AI Enterprise, NVIDIA Base Command, DGX OS / Ubuntu

✓ Rack Units (RU): 10 RU

✓ System Dimensions: Height: 17.5in, Width: 19.0in, Length: 35.3in

✓ Operating Temperature: 5–30°C (41–86°F)
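
As a quick sanity check, the sketch below (Python, purely illustrative) divides the headline totals from the list above across the eight GPUs to give rough per-GPU figures; the power figure is only an upper bound, since the 14.3kW budget is shared with the CPUs, NVSwitch, NICs, storage, and fans.

```python
# Rough per-GPU breakdown derived only from the system totals listed above.
NUM_GPUS = 8                 # "GPU: 8x NVIDIA Blackwell GPUs"
TOTAL_GPU_MEMORY_GB = 1440   # "GPU Memory: 1,440GB total"
TRAINING_PFLOPS = 72         # "72 petaFLOPS (FP8) for training"
INFERENCE_PFLOPS = 144       # "144 petaFLOPS (FP4) for inference"
MAX_SYSTEM_POWER_KW = 14.3   # "Approximately 14.3kW max"

print(f"Memory per GPU:        {TOTAL_GPU_MEMORY_GB / NUM_GPUS:.0f} GB")        # 180 GB
print(f"Training per GPU:      {TRAINING_PFLOPS / NUM_GPUS:.0f} petaFLOPS")     # 9
print(f"Inference per GPU:     {INFERENCE_PFLOPS / NUM_GPUS:.0f} petaFLOPS")    # 18
print(f"Power ceiling per GPU: {MAX_SYSTEM_POWER_KW / NUM_GPUS * 1000:.0f} W "
      f"(upper bound; shared with CPUs, NVSwitch, NICs, and fans)")              # ~1788 W
```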

Q4 2024 RELEASE. Inquire for more information, lead times and pricing details.


About It

The NVIDIA DGX B200 AI Server delivers next-level performance for deep learning, generative AI, and other enterprise AI workloads. Powered by NVIDIA Blackwell GPUs, it is engineered to accelerate enterprise-scale AI innovation with unmatched speed, efficiency, and scalability.

     
Available from Vipera. Get it in 10 days, delivered to your location via DHL or UPS. Ask an agent if import tariffs apply.
Powering the Next Generation of AI

Artificial intelligence is transforming almost every business by automating tasks, enhancing customer service, generating insights, and enabling innovation. It’s no longer a futuristic concept but a reality that’s fundamentally reshaping how businesses operate. However, as AI workloads grow in scale and complexity, they demand far more compute capacity than most enterprises have available. To leverage AI, enterprises need high-performance computing, storage, and networking that are secure, reliable, and efficient.

Enter NVIDIA DGX™ B200, the latest addition to the NVIDIA DGX platform. This unified AI platform defines the next chapter of generative AI by taking full advantage of NVIDIA Blackwell GPUs and high-speed interconnects. Configured with eight Blackwell GPUs, DGX B200 delivers unparalleled generative AI performance with a massive 1.4 terabytes (TB) of GPU memory and 64 terabytes per second (TB/s) of memory bandwidth, making it uniquely suited to handle any enterprise AI workload.
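
As a purely illustrative sketch of that eight-GPU, 1.4TB configuration (assuming a Python environment with PyTorch and the NVIDIA drivers installed, which is not part of the product listing itself), the snippet below enumerates the visible GPUs and totals their memory:

```python
import torch

# Enumerate the GPUs visible to this process and total their memory.
# On a fully provisioned eight-GPU system this should report 8 devices
# and roughly 1.4 TB of combined GPU memory.
device_count = torch.cuda.device_count()
total_bytes = 0
for i in range(device_count):
    props = torch.cuda.get_device_properties(i)
    total_bytes += props.total_memory
    print(f"GPU {i}: {props.name}, {props.total_memory / 1e9:.0f} GB")

print(f"Devices: {device_count}, combined GPU memory: {total_bytes / 1e12:.2f} TB")
```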

With NVIDIA DGX B200, enterprises can equip their data scientists and developers with a universal AI supercomputer to accelerate their time to insight and fully realize the benefits of AI for their businesses.


Enterprise Infrastructure for Mission-Critical AI

The NVIDIA DGX B200 AI server is purpose-built for training and inferencing large generative AI models. Each 10 RU system pairs eight Blackwell GPUs, fully interconnected through NVIDIA NVLink™ and NVSwitch™, with dual Intel® Xeon® Platinum 8570 processors. Through its ConnectX-7 ports, multiple DGX B200 systems can be connected over NVIDIA Quantum InfiniBand to scale out as models and datasets grow.
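
For a concrete, hypothetical picture of how software takes advantage of that NVLink-connected, eight-GPU topology, here is a minimal PyTorch DistributedDataParallel sketch; the model, batch shapes, and torchrun launch line are placeholders, and NCCL is assumed as the communication backend, the usual choice on NVLink-connected NVIDIA systems.

```python
# Minimal data-parallel training sketch across the eight NVLink-connected GPUs.
# Hypothetical launch command: torchrun --nproc_per_node=8 train_sketch.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # NCCL moves gradient traffic over NVLink/NVSwitch between local GPUs.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
    torch.cuda.set_device(local_rank)

    # Placeholder model and optimizer; a real workload would build its own network.
    model = torch.nn.Linear(4096, 4096).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        x = torch.randn(32, 4096, device=local_rank)
        loss = model(x).square().mean()   # dummy loss on synthetic data
        optimizer.zero_grad()
        loss.backward()                   # gradients are all-reduced by DDP over NCCL
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

The same launch pattern extends to multiple systems over the InfiniBand fabric listed in the specifications above.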

Maximize the Value of the NVIDIA DGX Platform

NVIDIA Enterprise Services provide support, education, and infrastructure specialists for your NVIDIA DGX infrastructure. With NVIDIA experts available at every step of your AI journey, Enterprise Services can help you get your projects up and running quickly and successfully.
