In stock

NVIDIA A100 Enterprise PCIe 40GB/80GB


The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration—at every scale—to power the world’s highest performing elastic data centers for AI, data analytics, and high-performance computing (HPC) applications.

2-year manufacturer repair or replace warranty included.

     
Delivered to your location via DHL; estimated delivery in 10 days.

Ships in 10 days from payment. All sales final; no cancellations or returns. For volume pricing, consult a live chat agent or call our toll-free number.

Update 06.01.2024: Please note that production of this product ceased in February 2024 and it is now EOL (end-of-life). Remaining stock is limited. Nvidia has recommended the L40/L40S line as an interim replacement, along with the budget-friendly RTX A6000 Ada or the significantly pricier but more performant H100. A true successor has not been disclosed, but we suspect Nvidia will focus on the new Blackwell/GH200 chips for HPC going into 2025 and beyond. Given the EOL status, our remaining inventory is a mix of new old stock from different OEMs (PNY, Foxconn, HPE), including custom SXM-to-PCIe conversions, to accommodate the extreme demand we continue to experience. We cannot guarantee which version will be sent, but we are committed to doing our best to fulfill orders based on the buyer's preference.

Overview

The NVIDIA A100 Tensor Core GPU delivers unprecedented acceleration—at every scale—to power the world’s highest performing elastic data centers for AI, data analytics, and high-performance computing (HPC) applications. As the engine of the NVIDIA data center platform, A100 provides up to 20x higher performance over the prior NVIDIA Volta generation. A100 can efficiently scale up or be partitioned into seven isolated GPU instances, with Multi-Instance GPU (MIG) providing a unified platform that enables elastic data centers to dynamically adjust to shifting workload demands.
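
For buyers evaluating MIG partitioning, the short sketch below is a minimal, assumption-laden example (not vendor-provided code): it assumes the nvidia-ml-py (pynvml) package is installed, the NVIDIA driver is present, and the A100 is device index 0, and simply queries the card's memory capacity and whether MIG mode is currently enabled.

    # Minimal sketch, assuming nvidia-ml-py (pynvml) is installed and the A100 is GPU 0.
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)

    # Device name may be returned as bytes on older pynvml releases.
    name = pynvml.nvmlDeviceGetName(handle)
    if isinstance(name, bytes):
        name = name.decode()

    mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
    print(f"{name}: {mem.total / 1024**3:.0f} GiB total memory")

    # MIG mode: 0 = disabled, 1 = enabled; returns (current, pending) settings.
    try:
        current, pending = pynvml.nvmlDeviceGetMigMode(handle)
        print(f"MIG mode: current={current}, pending={pending}")
    except pynvml.NVMLError_NotSupported:
        print("MIG not supported on this device")

    pynvml.nvmlShutdown()

Actually creating or resizing the up-to-seven GPU instances is done through the driver's management tooling and requires administrator privileges; the sketch above only reads the current state.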

    CUDA Cores: 6912
    Streaming Multiprocessors: 108
    Tensor Cores | Gen 3: 432
    GPU Memory: 40 GB or 80 GB HBM2e, ECC on by default
    Memory Interface: 5120-bit
    Memory Bandwidth: 1555 GB/s
    NVLink: 2-way, 2-slot, 600 GB/s bidirectional
    MIG (Multi-Instance GPU) Support: Yes, up to 7 GPU instances
    FP64: 9.7 TFLOPS
    FP64 Tensor Core: 19.5 TFLOPS
    FP32: 19.5 TFLOPS
    TF32 Tensor Core: 156 TFLOPS | 312 TFLOPS*
    BFLOAT16 Tensor Core: 312 TFLOPS | 624 TFLOPS*
    FP16 Tensor Core: 312 TFLOPS | 624 TFLOPS*
    INT8 Tensor Core: 624 TOPS | 1248 TOPS*
    Thermal Solution: Passive
    vGPU Support: NVIDIA Virtual Compute Server (vCS)
    System Interface: PCIe 4.0 x16

    * Higher figure is with structured sparsity enabled.
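
The TF32 and BF16 Tensor Core rates listed above are reachable through standard frameworks. The brief sketch below is a hypothetical example, assuming PyTorch with CUDA support is installed and the A100 is visible as GPU 0; it shows how FP32 matrix multiplies can be routed through the TF32 Tensor Core path on Ampere, and how casting to BF16 taps the higher dense rate.

    # Minimal sketch, assuming PyTorch with CUDA support and an A100 as GPU 0.
    import torch

    # Route FP32 matmuls/convolutions through TF32 Tensor Cores (Ampere and later).
    torch.backends.cuda.matmul.allow_tf32 = True
    torch.backends.cudnn.allow_tf32 = True

    a = torch.randn(8192, 8192, device="cuda")
    b = torch.randn(8192, 8192, device="cuda")

    c_tf32 = a @ b                              # FP32 tensors, executed on the TF32 path
    c_bf16 = a.bfloat16() @ b.bfloat16()        # BF16 path, roughly double the TF32 rate

    torch.cuda.synchronize()
    print(c_tf32.dtype, c_bf16.dtype)

Whether a given workload approaches the peak figures depends on matrix sizes, memory bandwidth, and kernel selection; the numbers in the table are theoretical maximums from the manufacturer's specifications.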
Reviews

No reviews available.