In Stock

NVIDIA DGX Station 4x A100 160GB


✓ CPU: 1x AMD EPYC 7742, 64 cores, 2.25-3.4 GHz

✓ GPU: 4x NVIDIA A100 Tensor Core GPUs, 40 GB each (160 GB total)

✓ Memory: 512 GB DDR4 RAM

✓ Data Storage: 7.68 TB NVMe U.2 drive

✓ OS Storage: 1.92 TB NVMe M.2 drive

✓ Power Supply: 1500W; 100-240V AC, 50/60Hz

✓ Dimensions: 25.2 x 10.1 x 20.4 in (639 x 256 x 518 mm)

✓ Weight: 91 lbs (43.1 kg)

✓ Operating Temp: 10°C to 35°C (50°F to 95°F)

✓ Non-Operating Temp: 5°C to 40°C (41°F to 104°F)

✓ Humidity: 10% to 80% (non-condensing)

✓ Networking: Dual 10 Gb Ethernet ports

✓ Cooling: Air-cooled

✓ Software: DGX OS with pre-installed deep learning frameworks; supports NVIDIA Docker (a verification sketch follows this list)

✓ Warranty: 2 Years

✓ NVIDIA Part Number: 920-23487-2531-000
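
Because DGX OS ships with GPU-accelerated deep learning frameworks, a newly installed station can be sanity-checked from Python in a few lines. The sketch below assumes PyTorch is among the pre-installed frameworks (the listing does not name specific frameworks or versions); it counts the visible A100s, reports their memory, and runs a small matrix multiply as a smoke test.

```python
# Minimal sketch: verify the four A100 GPUs are visible from a pre-installed
# framework. Assumes PyTorch is available on the DGX OS image (an assumption,
# not confirmed by this listing).
import torch

def check_gpus() -> None:
    count = torch.cuda.device_count()
    print(f"Visible CUDA devices: {count}")              # expect 4 on this station
    for i in range(count):
        props = torch.cuda.get_device_properties(i)
        mem_gb = props.total_memory / 1024**3
        print(f"  GPU {i}: {props.name}, {mem_gb:.0f} GB")  # expect ~40 GB each

    # Quick smoke test: a small matrix multiply on GPU 0.
    if count > 0:
        x = torch.randn(1024, 1024, device="cuda:0")
        y = x @ x
        torch.cuda.synchronize()
        print("Matrix multiply on GPU 0 OK:", tuple(y.shape))

if __name__ == "__main__":
    check_gpus()
```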

Lead time is 4-6 weeks. All sales are final; no returns or cancellations. For bulk inquiries, consult a live chat agent or call our toll-free number.

Price: $85,000.00. Delivered to your location via DHL.
AI Across Industries With DGX

The NVIDIA DGX Station A100 is transforming industries by offering exceptional AI performance and versatility. From accelerating medical research with advanced machine learning models to enhancing financial forecasting with deep learning algorithms, it provides the computational power needed to drive significant advancements. Its four NVIDIA A100 Tensor Core GPUs, extensive memory, and high-speed networking enable businesses across healthcare, finance, manufacturing, and more to deploy sophisticated AI solutions, streamline operations, and achieve better outcomes faster and more efficiently.


A Comprehensive AI Platform

The NVIDIA DGX Station A100 is a complete AI platform that seamlessly integrates state-of-the-art hardware and software, enabling researchers and enterprises to maximize their AI capabilities. Equipped with NVIDIA A100 Tensor Core GPUs, it delivers exceptional performance for both training and inference. It supports a wide range of AI frameworks and tools, simplifying the development and deployment of AI models. This comprehensive platform lets users focus on innovation and scaling their AI projects without the complexity of managing infrastructure, making it ideal for diverse AI applications.

The Fastest Path to NVIDIA AI is Through the Cloud

Unlock the full potential of NVIDIA DGX A100 by accessing its capabilities through cloud-based solutions. This approach offers organizations unmatched flexibility, scalability, and cost efficiency. By leveraging DGX A100 in the cloud, businesses can accelerate AI research, development, and deployment without the need for significant upfront investments in hardware. This ensures rapid scalability to meet the growing demands of AI workloads, seamless integration with existing workflows, and quicker time-to-insight. Embrace cloud-based AI with DGX A100 for a powerful, scalable, and efficient AI infrastructure.

Component | Specification
CPU | 1x AMD EPYC 7742, 64 cores, 2.25 GHz (base) to 3.4 GHz (max boost)
GPU | 4x NVIDIA A100 Tensor Core GPUs, 40 GB each (160 GB total), with MIG support (see the sketch after this table)
System Memory | 512 GB DDR4 RAM (8x 64 GB modules)
Data Storage | 7.68 TB NVMe U.2 drive for cache/data storage
OS Storage | 1.92 TB NVMe M.2 drive for the operating system
Power Supply | 1500 W; 100-115 VAC/15 A, 115-120 VAC/12 A, or 200-240 VAC/10 A at 50/60 Hz
Dimensions | Height: 25.2 in (639 mm), Width: 10.1 in (256 mm), Depth: 20.4 in (518 mm)
Weight | 91 lbs (43.1 kg)
Operating Temperature | 10°C to 35°C (50°F to 95°F)
Non-Operating Temperature | 5°C to 40°C (41°F to 104°F)
Humidity | 10% to 80% (non-condensing) for both operating and non-operating conditions
Networking | Dual 10 Gb Ethernet ports
Cooling | Air-cooled system
Software | DGX OS with pre-installed deep learning frameworks; supports NVIDIA Docker
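
The A100's Multi-Instance GPU (MIG) support lets each card be partitioned into smaller isolated GPU instances. The sketch below is a hedged example of inspecting MIG state through NVIDIA's NVML bindings; it assumes the nvidia-ml-py package (import name pynvml) is available, which this page does not state, and actual partitioning is performed with administrator tools such as nvidia-smi rather than from this script.

```python
# Minimal sketch: query each A100 and its MIG mode via NVML.
# Assumes the nvidia-ml-py bindings (pynvml) are installed -- an assumption,
# not part of the listed software stack.
import pynvml

pynvml.nvmlInit()
try:
    count = pynvml.nvmlDeviceGetCount()                  # expect 4 on this station
    for i in range(count):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        if isinstance(name, bytes):                      # older bindings return bytes
            name = name.decode()
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)     # total/free/used in bytes
        try:
            current, pending = pynvml.nvmlDeviceGetMigMode(handle)
            mig = "enabled" if current == pynvml.NVML_DEVICE_MIG_ENABLE else "disabled"
        except pynvml.NVMLError:
            mig = "not supported"
        print(f"GPU {i}: {name}, {mem.total / 1024**3:.0f} GB, MIG {mig}")
finally:
    pynvml.nvmlShutdown()
```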