AI Hardware, Servers
In Stock

Lenovo NVIDIA HGX H200 141GB 700W 4-GPU Board C3V2


✔ Form Factor: H200 SXM

✔ FP64: 34 TFLOPS

✔ FP64 Tensor Core: 67 TFLOPS

✔ FP32: 67 TFLOPS

✔ TF32 Tensor Core: 989 TFLOPS

✔ BFLOAT16 Tensor Core: 1,979 TFLOPS

✔ FP16 Tensor Core: 1,979 TFLOPS

✔ FP8 Tensor Core: 3,958 TFLOPS

✔ INT8 Tensor Core: 3,958 TFLOPS

✔ GPU Memory: 141GB

✔ GPU Memory Bandwidth: 4.8TB/s

✔ Decoders: 7 NVDEC, 7 JPEG

✔ Max Thermal Design Power (TDP): Up to 700W per GPU (configurable)

✔ Multi-Instance GPUs: Up to 7 MIGs @16.5GB each

✔ Interconnect: NVIDIA NVLink®: 900GB/s, PCIe Gen5: 128GB/s

✔ Server Options: NVIDIA HGX™ H200 partner and NVIDIA-Certified Systems™ with 4 or 8 GPUs, NVIDIA AI Enterprise Add-on

✔ Cooling: Closed-loop liquid cooling with thermal heatsinks

✔ Warranty: 3-year return-to-base repair or replacement
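For capacity planning, the per-GPU figures above can be rolled up to the 4-GPU board level. The sketch below is illustrative arithmetic only, using the values listed here (141GB and up to 700W per GPU); actual power draw depends on the configured TDP and workload.

# Illustrative roll-up of the listed per-GPU specs to the 4-GPU HGX H200 board.
# Values are taken from the listing above; real power draw depends on the
# configured TDP and the workload.
GPUS_PER_BOARD = 4
HBM3E_PER_GPU_GB = 141
TDP_PER_GPU_W = 700          # "up to", configurable

total_memory_gb = GPUS_PER_BOARD * HBM3E_PER_GPU_GB   # 564 GB of HBM3e on the board
max_board_power_w = GPUS_PER_BOARD * TDP_PER_GPU_W    # 2,800 W of GPU power at full TDP

print(f"Total HBM3e on board: {total_memory_gb} GB")
print(f"Maximum GPU power at full TDP: {max_board_power_w} W")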

 

Expected delivery in late December 2024. All sales final. No returns or cancellations. For bulk inquiries, consult a live chat agent or call our toll-free number.

vipera
2-year warranty
Get this product for $175,000.00
Get it in 10 days (estimate for 682345)
Will be delivered to your location via DHL
Inquiry to Buy
Higher Performance and Larger, Faster Memory

The NVIDIA H200 Tensor Core GPU supercharges generative AI and high-performance computing (HPC) workloads with game-changing performance and memory capabilities.
Based on the NVIDIA Hopper architecture, the NVIDIA H200 is the first GPU to offer 141 gigabytes (GB) of HBM3e memory at 4.8 terabytes per second (TB/s)—that’s nearly double the capacity of the NVIDIA H100 Tensor Core GPU with 1.4X more memory bandwidth. The H200’s larger and faster memory accelerates generative AI and large language models, while advancing scientific computing for HPC workloads with better energy efficiency and lower total cost of ownership.
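As a quick sanity check of the "nearly double" and "1.4X" claims, the arithmetic below compares the listed H200 figures against the published NVIDIA H100 SXM figures (80GB HBM3 at 3.35TB/s), which are quoted here for comparison only.

# Compare the listed H200 memory specs against published H100 SXM figures
# (80 GB HBM3 at 3.35 TB/s) to check the capacity and bandwidth claims.
h200_mem_gb, h200_bw_tbps = 141, 4.8
h100_mem_gb, h100_bw_tbps = 80, 3.35

print(f"Capacity ratio:  {h200_mem_gb / h100_mem_gb:.2f}x")    # ~1.76x, "nearly double"
print(f"Bandwidth ratio: {h200_bw_tbps / h100_bw_tbps:.2f}x")  # ~1.43x, "1.4X more"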

    Technical Specifications
    Form Factor: H200 SXM
    FP64: 34 TFLOPS
    FP64 Tensor Core: 67 TFLOPS
    FP32: 67 TFLOPS
    TF32 Tensor Core: 989 TFLOPS
    BFLOAT16 Tensor Core: 1,979 TFLOPS
    FP16 Tensor Core: 1,979 TFLOPS
    FP8 Tensor Core: 3,958 TFLOPS
    INT8 Tensor Core: 3,958 TFLOPS
    GPU Memory: 141GB
    GPU Memory Bandwidth: 4.8TB/s
    Decoders: 7 NVDEC, 7 JPEG
    Max Thermal Design Power (TDP): Up to 700W (configurable)
    Multi-Instance GPUs: Up to 7 MIGs @16.5GB each
    Interconnect: NVIDIA NVLink®: 900GB/s, PCIe Gen5: 128GB/s
    Server Options: NVIDIA HGX™ H200 partner and NVIDIA-Certified Systems™ with 4 or 8 GPUs, NVIDIA AI Enterprise Add-on
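On a deployed system, the key figures in the table can be spot-checked against what the driver reports. A minimal sketch, assuming the NVIDIA driver and nvidia-smi are installed and the GPUs are visible to the host:

import subprocess

# Query name, total memory, and power limit for each visible GPU via nvidia-smi.
# An H200 SXM should report roughly 141 GB of total memory and a 700 W power limit.
out = subprocess.check_output(
    ["nvidia-smi",
     "--query-gpu=index,name,memory.total,power.limit",
     "--format=csv,noheader"],
    text=True,
)
for line in out.strip().splitlines():
    print(line)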