Unprecedented performance, scalability, and security for every data center. Designed for deep learning and specialized compute workloads.
Part Number: 900-21010-0300-030
The SXM4 version (native NVLink, soldered directly onto carrier boards) is available upon request only. SXM4 cards are permanently attached to their motherboards and sold only as complete systems, with longer lead times.
Verify availability with a live chat agent in advance, as stock is volatile and can change drastically every 48-72 hours. There is currently an embargo on H100 and A100 AI compute cards for certain countries. All sales are final; no returns or cancellations. For bulk inquiries, consult a live chat agent or call our toll-free number.
Tap into unprecedented performance, scalability, and security for every workload with the NVIDIA H100 Tensor Core GPU. With NVIDIA® NVLink® Switch System, up to 256 H100s can be connected to accelerate exascale workloads, along with a dedicated Transformer Engine to solve trillion-parameter language models. H100’s combined technology innovations can speed up large language models by an incredible 30X over the previous generation to deliver industry-leading conversational AI.
Form Factor | H100 SXM | H100 PCIe |
---|---|---|
FP64 | 34 teraFLOPS | 26 teraFLOPS |
FP64 Tensor Core | 67 teraFLOPS | 51 teraFLOPS |
FP32 | 67 teraFLOPS | 51 teraFLOPS |
TF32 Tensor Core | 989 teraFLOPS* | 756 teraFLOPS* |
BFLOAT16 Tensor Core | 1,979 teraFLOPS* | 1,513 teraFLOPS* |
FP16 Tensor Core | 1,979 teraFLOPS* | 1,513 teraFLOPS* |
FP8 Tensor Core | 3,958 teraFLOPS* | 3,026 teraFLOPS* |
INT8 Tensor Core | 3,958 TOPS* | 3,026 TOPS* |
GPU memory | 80GB | 80GB |

*With sparsity.
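To put the spec table in perspective: lower-precision formats don't just raise throughput, they also shrink the memory footprint of model weights. The sketch below is a back-of-the-envelope estimate (not NVIDIA tooling; the helper name and byte sizes per format are our own illustration) of how many model parameters fit in the H100's 80GB at each precision, counting weights only and ignoring optimizer state, activations, and KV cache.

```python
# Back-of-the-envelope: weight memory vs. the H100's 80 GB at each precision.
# Counts weights only -- real workloads also need activations, optimizer
# state, and KV cache, so usable model sizes are smaller in practice.
BYTES_PER_PARAM = {"FP32": 4, "FP16/BF16": 2, "FP8": 1}

GPU_MEMORY_GB = 80  # per H100, from the spec table above

def params_that_fit(precision: str, memory_gb: float = GPU_MEMORY_GB) -> float:
    """Return roughly how many billions of parameters fit as weights alone."""
    total_bytes = memory_gb * 1024**3
    return total_bytes / BYTES_PER_PARAM[precision] / 1e9

for precision in BYTES_PER_PARAM:
    print(f"{precision}: ~{params_that_fit(precision):.0f}B parameters per 80 GB")
```

Roughly 21B parameters fit in FP32, doubling to about 43B in FP16/BF16 and 86B in FP8, which is why trillion-parameter models require many interconnected GPUs (e.g., via the NVLink Switch System described above) regardless of precision.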