In stock

NVIDIA A40 Enterprise Tensor Core 48GB 190W


Bring accelerated performance to every enterprise workload with the NVIDIA A40 Tensor Core GPU. With NVIDIA Ampere architecture Tensor Cores and Multi-Instance GPU (MIG), it delivers speedups securely across diverse workloads, including AI inference at scale and high-performance computing (HPC) applications. By combining fast memory bandwidth and low power consumption in a PCIe form factor optimal for mainstream servers, the A40 enables an elastic data center and delivers maximum value for enterprises.


     
Get this product for $5,500.00 (list price $6,295.00)
vipera
Save 13% ($795.00)
Get it in 10 days (estimate for 682345)
Will be delivered to your location via DHL
Inquiry to Buy
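The listed discount can be sanity-checked in a few lines; a minimal sketch with the listing's prices hard-coded:

```python
# Sanity-check the listed discount: $6,295.00 list vs. $5,500.00 sale.
list_price = 6295.00
sale_price = 5500.00

savings = list_price - sale_price
pct_off = round(100 * savings / list_price)  # 12.63% rounds to 13%

print(f"Save ${savings:,.2f} ({pct_off}%)")  # prints: Save $795.00 (13%)
```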

Ships in 7 days after payment. All sales final. No returns or cancellations. For volume pricing, consult a live chat agent or call our toll-free number.


AI Inference and Mainstream Compute for Every Enterprise
The Data Center Solution

The NVIDIA Ampere architecture is part of the unified NVIDIA EGX™ platform, incorporating building blocks across hardware, networking, software, libraries, and optimized AI models and applications from the NVIDIA NGC™ catalog. Representing the most powerful end-to-end AI and HPC platform for data centers, it allows researchers to rapidly deliver real-world results and deploy solutions into production at scale.

DEEP LEARNING TRAINING

Training AI models for next-level challenges such as conversational AI requires massive compute power and scalability.

NVIDIA A40 Tensor Cores with Tensor Float 32 (TF32) provide up to 10X higher performance over the NVIDIA T4 with zero code changes and an additional 2X boost with automatic mixed precision and FP16, delivering a combined 20X throughput increase. When combined with NVIDIA® NVLink®, PCIe Gen4, NVIDIA networking, and the NVIDIA Magnum IO™ SDK, it’s possible to scale to thousands of GPUs.
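The mixed-precision idea behind those numbers can be illustrated in a few lines. In a framework this is a one-line switch (e.g. PyTorch's torch.autocast); the NumPy sketch below only illustrates the numerics, running the bulk of a matmul in FP16 while keeping an FP32 reference:

```python
import numpy as np

# Mixed precision in miniature: do the heavy math in FP16 (what FP16
# Tensor Cores accelerate), compare against a full-precision FP32 run.
rng = np.random.default_rng(0)
a = rng.standard_normal((64, 64)).astype(np.float32)
b = rng.standard_normal((64, 64)).astype(np.float32)

c_half = (a.astype(np.float16) @ b.astype(np.float16)).astype(np.float32)
c_full = a @ b

# FP16 halves the memory traffic per element yet stays close to FP32
max_err = np.abs(c_half - c_full).max()
print(max_err < 1.0)  # prints: True
```

The point of "zero code changes" in the marketing copy is that TF32 applies this trade-off automatically inside the Tensor Cores, without the explicit casts shown here.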

Tensor Cores and MIG enable the A40 to be used for workloads dynamically throughout the day. It can be used for production inference at peak demand, and part of the GPU can be repurposed to rapidly retrain those very same models during off-peak hours.
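Repartitioning like this is done with nvidia-smi. A minimal sketch that only assembles the commands (the profile name "1g.6gb" and the GPU index are assumptions for illustration; actually running these requires root and a MIG-capable GPU):

```python
# Assemble the nvidia-smi commands that would carve one GPU into four
# MIG instances. Commands are only built here, not executed -- running
# them needs root and MIG-capable hardware. The "1g.6gb" profile name
# (four 6GB slices, matching the spec table) is an assumed example.
def mig_setup_commands(gpu: int = 0, profile: str = "1g.6gb", count: int = 4):
    cmds = [f"nvidia-smi -i {gpu} -mig 1"]  # enable MIG mode on the GPU
    # -cgi creates a GPU instance; -C also creates its compute instance
    cmds += [f"nvidia-smi mig -i {gpu} -cgi {profile} -C" for _ in range(count)]
    return cmds

for cmd in mig_setup_commands():
    print(cmd)
```

Destroying the instances during off-peak hours and re-creating a different layout is what makes the "dynamic throughout the day" usage pattern possible.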

NVIDIA set multiple performance records in MLPerf, the industry-wide benchmark for AI training.

    FP64: 5.2 teraFLOPS
    FP64 Tensor Core: 10.3 teraFLOPS
    FP32: 10.3 teraFLOPS
    TF32 Tensor Core: 82 teraFLOPS | 165 teraFLOPS*
    BFLOAT16 Tensor Core: 165 teraFLOPS | 330 teraFLOPS*
    FP16 Tensor Core: 165 teraFLOPS | 330 teraFLOPS*
    INT8 Tensor Core: 330 TOPS | 661 TOPS*
    INT4 Tensor Core: 661 TOPS | 1321 TOPS*
    Media engines: 1 optical flow accelerator (OFA), 1 JPEG decoder (NVJPEG), 4 video decoders (NVDEC)
    GPU memory: 48GB HBM2
    GPU memory bandwidth: 933GB/s
    Interconnect: PCIe Gen4: 64GB/s; third-gen NVLink: 200GB/s**
    Form factor: dual-slot, full-height, full-length (FHFL)
    Max thermal design power (TDP): 165W
    Multi-Instance GPU (MIG): 4 GPU instances @ 6GB each; 2 GPU instances @ 12GB each; 1 GPU instance @ 24GB
    Virtual GPU (vGPU) software support: NVIDIA AI Enterprise, NVIDIA Virtual Compute Server

    * With sparsity