The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration—at every scale—to power the world’s highest performing elastic data centers for AI, data analytics, and high-performance computing (HPC) applications.
2-year manufacturer repair or replace warranty included.
Ships in 10 days from payment. All sales final. No cancellations or returns. For volume pricing, consult a live chat agent or call our toll-free number.
Update 06.01.2024: Please note that production of this product ceased in February 2024, and it is now EOL (end-of-life). Remaining stock is limited. NVIDIA has recommended the L40/L40S line as an interim replacement, along with the budget-friendly RTX 6000 Ada and the significantly pricier but more performant H100. The true successor has not been disclosed, but we suspect NVIDIA will focus on the new Blackwell/GH200 chips for HPC going into 2025 and beyond. Given the EOL, there is a mix of new old stock from different OEMs (PNY, Foxconn, HPE), including custom SXM-to-PCIe conversions to accommodate the extreme demand we continue to experience. We cannot guarantee which version will be sent, but we are committed to doing our best to fulfill orders based on the buyer's preference.
As the engine of the NVIDIA data center platform, A100 provides up to 20x higher performance over the prior NVIDIA Volta generation. A100 can efficiently scale up or be partitioned into seven isolated GPU instances, with Multi-Instance GPU (MIG) providing a unified platform that enables elastic data centers to dynamically adjust to shifting workload demands.
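As a rough sketch of what MIG partitioning looks like in practice (commands follow NVIDIA's MIG documentation; GPU index 0 and the 1g.5gb profile for a 40 GB card are assumptions, and root privileges plus a supported driver are required):

```shell
# Enable MIG mode on GPU 0 (a GPU reset or reboot may be needed to take effect)
sudo nvidia-smi -i 0 -mig 1

# Create seven 1g.5gb GPU instances (profile 19 on the 40 GB A100),
# with -C also creating the default compute instance inside each
sudo nvidia-smi mig -i 0 -cgi 19,19,19,19,19,19,19 -C

# List the resulting GPU instances to verify the partition
nvidia-smi mig -i 0 -lgi
```

Each instance then appears to CUDA workloads as an isolated GPU with its own memory and compute slice.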
| Specification | Value |
|---|---|
| CUDA Cores | 6,912 |
| Streaming Multiprocessors | 108 |
| Tensor Cores (3rd Generation) | 432 |
| GPU Memory | 40 GB HBM2 or 80 GB HBM2e (ECC on by default) |
| Memory Interface | 5,120-bit |
| Memory Bandwidth | 1,555 GB/s |
| NVLink | 2-way, 2-slot bridge, 600 GB/s bidirectional |
| MIG (Multi-Instance GPU) Support | Yes, up to 7 GPU instances |
| FP64 | 9.7 TFLOPS |
| FP64 Tensor Core | 19.5 TFLOPS |
| FP32 | 19.5 TFLOPS |
| TF32 Tensor Core | 156 TFLOPS \| 312 TFLOPS* |
| BFLOAT16 Tensor Core | 312 TFLOPS \| 624 TFLOPS* |
| FP16 Tensor Core | 312 TFLOPS \| 624 TFLOPS* |
| INT8 Tensor Core | 624 TOPS \| 1,248 TOPS* |
| Thermal Solution | Passive |
| vGPU Support | NVIDIA Virtual Compute Server (vCS) |
| System Interface | PCIe 4.0 x16 |

\* With structural sparsity enabled.