
Exeton Phenom A7 450 MH/s AI Processing Rigs


✔ GPU: Nvidia A4000, A5000, A6000, A40, A800, H100

✔ Processor: AMD Ryzen 5 5600G (Substitute: Intel i5-10400K)

✔ Power Supply: 2x Corsair HX1200i

✔ Motherboard: Asus Prime X570 (Substitute: MSI Z490-A Pro)

✔ Memory: 32GB Corsair Vengeance RGB Pro 3600 MHz DDR4

✔ Storage: 512GB Samsung 870 EVO SSD

✔ Frame: Exeton Phenom Deluxe with stackable 5-inch HDMI touchscreen, Quantum Grey, 8 slots

✔ Cooling: 9x Cooler Master Halo RGB 120mm (Substitute: EZDIY-FAB Moonlight 120mm) with Corsair H100i Capellix liquid cooler

✔ Risers: 6x Ver. 010X PCIe x1-to-x16

✔ Operating System: Windows 11 Pro, Windows 10 Pro, Linux, HiveOS

✔ Algorithms: Any GPU-mineable coin, such as Ethereum, Ergo, Ravencoin, Metaverse, Ubiq, Ethereum Classic, and many more

✔ Hashrate Performance: 720 MH/s maximum (stable)

✔ Power Consumption: 1,800W ±10%, undervolted

✔ Operating Conditions: 0–40℃, 5%–95% RH, non-condensing

✔ Network Connection: Ethernet & TP-Link Archer 1300AC Wi-Fi

✔ Warranty: Vipera 3-year parts and labor
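From the hashrate and power figures above, a quick efficiency estimate can be derived. The numbers below are illustrative only; actual draw varies with the GPU configuration and tuning:

```python
def efficiency_mh_per_watt(hashrate_mh: float, power_w: float) -> float:
    """Mining efficiency in MH/s per watt."""
    return hashrate_mh / power_w

# Rated maximum: 720 MH/s at a nominal 1,800 W
nominal = efficiency_mh_per_watt(720, 1800)
print(round(nominal, 3))  # 0.4 MH/s per watt

# Power is specified as 1,800 W +/- 10%, so efficiency spans a range
lo = efficiency_mh_per_watt(720, 1800 * 1.10)  # worst case, ~0.364
hi = efficiency_mh_per_watt(720, 1800 * 0.90)  # best case, ~0.444
print(round(lo, 3), round(hi, 3))
```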

 

6-week lead time plus transit for an assembled unit. Wood crating included. PCIe connections may loosen during transport and may require reseating. Can be shipped deconstructed as a kit for faster delivery (14 days). Remote TeamViewer troubleshooting and management included for 1 year.

Gaming hybrid variant available on special request (16x full-speed Gen 4 PCIe extender cable to one GPU output, with a T-Rex disable script).

     
Will be delivered to your location via DHL
Description
  • AI & Deep Learning Solution

    Embrace AI with Exeton Deep Learning technology

    Deep Learning, a subset of Artificial Intelligence (AI) and Machine Learning (ML), is a state-of-the-art approach in computer science that uses multi-layered artificial neural networks to accomplish tasks too complicated to program by hand. For example, Google Maps processes millions of data points every day to figure out the best route to travel, or to predict the arrival time at the desired destination. Deep Learning comprises two parts: training and inference. Training involves processing as many data points as possible so that the neural network 'learns' features on its own and modifies itself to accomplish tasks such as image recognition and speech recognition. Inference refers to taking a trained model and using it to make useful predictions and decisions. Both training and inference require enormous amounts of computing power to achieve the desired accuracy and precision.
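The train-then-infer workflow described above can be sketched with a minimal model. The dataset, learning rate, and single-neuron network here are illustrative assumptions, not the Phenom's actual workload:

```python
import math
import random

random.seed(0)

# Toy dataset (assumed for illustration): label is 1 when x0 + x1 > 1, else 0
data = [(random.random(), random.random()) for _ in range(200)]
labels = [1.0 if x0 + x1 > 1.0 else 0.0 for x0, x1 in data]

w0, w1, b = 0.0, 0.0, 0.0
sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))

# Training: repeatedly nudge the weights so predictions match the labels
for _ in range(500):
    for (x0, x1), y in zip(data, labels):
        p = sigmoid(w0 * x0 + w1 * x1 + b)
        err = p - y                 # gradient of the cross-entropy loss w.r.t. the logit
        w0 -= 0.1 * err * x0
        w1 -= 0.1 * err * x1
        b  -= 0.1 * err

# Inference: apply the trained weights to unseen inputs
def infer(x0, x1):
    return sigmoid(w0 * x0 + w1 * x1 + b)

print(round(infer(0.9, 0.9), 2))  # near 1.0
print(round(infer(0.1, 0.1), 2))  # near 0.0
```

Real deep-learning workloads stack many such layers and millions of parameters, which is why training and inference both benefit from GPU acceleration.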

AI & Deep Learning Solution Ready Server

Powerhouse for Computation

The Phenom N6 AI & Deep Learning cluster is powered by Exeton compute nodes: high-density, compact powerhouses for computation. The cluster features the latest GPUs from Vipera / Exeton partner NVIDIA. Each compute node utilizes NVIDIA® GPUs.

Faster Processing with Tensor Core

NVIDIA A4000, A5000, A6000, A40, A800, and H100 GPUs utilize the Tensor Core architecture. Tensor Cores provide dedicated deep-learning acceleration and can deliver up to 125 Tensor TFLOPS for training and inference applications.
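Tensor Cores achieve that throughput by multiplying in reduced precision (e.g. FP16) while accumulating at higher precision. A rough sketch of the numeric trade-off, using Python's built-in half-precision packing (illustrative only, not the hardware path):

```python
import struct

def to_fp16(x: float) -> float:
    """Round a float to IEEE 754 half precision, as Tensor Core inputs are."""
    return struct.unpack('e', struct.pack('e', x))[0]

def dot_mixed(a, b):
    """FP16 inputs with a higher-precision accumulator: the Tensor Core pattern."""
    return sum(to_fp16(x) * to_fp16(y) for x, y in zip(a, b))

a = [1.0001, 2.0002, 3.0003]
b = [0.1, 0.2, 0.3]
full = sum(x * y for x, y in zip(a, b))   # full-precision reference
mixed = dot_mixed(a, b)
print(full, mixed)  # the two results differ slightly due to FP16 rounding
```

The small accuracy loss is the price of the much higher matrix-multiply throughput; accumulating in FP32 keeps the error from compounding across long dot products.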

Built with AMD EPYC™ 7003 Series Processor

Providing incredible compute, IO and bandwidth capability – designed to meet the huge demand for more compute in big data analytics, HPC and cloud computing.

  • Built on 7nm advanced process technology, allowing for denser compute capabilities with lower power consumption
  • Up to 64 cores per CPU, built using Zen 3 high-performance cores and AMD's innovative chiplet architecture
  • Supports PCIe Gen 4.0 with a bandwidth of up to 64GB/s, twice that of PCIe Gen 3.0
  • Embedded security protection to help defend your CPU, applications, and data
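The 64GB/s figure in the list above follows from the per-lane transfer rates. A quick check of the arithmetic (16 GT/s per lane for Gen 4, 8 GT/s for Gen 3, 128b/130b line encoding, 16 lanes, counting both directions):

```python
def pcie_x16_bandwidth_gbs(gen: int) -> float:
    """Approximate bidirectional bandwidth of a x16 link, in GB/s."""
    rate_gt = {3: 8.0, 4: 16.0}[gen]           # raw per-lane rate in GT/s
    efficiency = 128 / 130                     # 128b/130b encoding (Gen 3 and Gen 4)
    per_lane_gbps = rate_gt * efficiency       # usable gigabits/s, one direction
    return per_lane_gbps * 16 * 2 / 8          # 16 lanes, both directions, bits -> bytes

print(round(pcie_x16_bandwidth_gbs(3), 1))  # ~31.5 GB/s
print(round(pcie_x16_bandwidth_gbs(4), 1))  # ~63.0 GB/s, i.e. roughly the quoted 64GB/s
```

Since Gen 4 doubles the per-lane signaling rate while keeping the same encoding, the factor-of-two claim over Gen 3 holds exactly.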
High Performance

Supports AMD Instinct™ MI250 Accelerator

Massive datasets and complex simulations require multiple GPUs with extremely fast AMD Infinity Fabric™ links amongst GPUs and fast PCIe 4.0 links between CPU and GPU. The OAM form factor combines AMD Instinct™ MI250 accelerators with high-speed interconnects to define the world’s most powerful servers.

    GB200 NVL72 specifications

    Configuration: 1 Grace CPU : 2 Blackwell GPUs
    FP4 Tensor Core (with sparsity): 40 PFLOPS
    FP8/FP6 Tensor Core (with sparsity): 20 PFLOPS
    INT8 Tensor Core (with sparsity): 20 POPS
    FP16/BF16 Tensor Core (with sparsity): 10 PFLOPS
    TF32 Tensor Core: 5 PFLOPS
    FP32: 180 TFLOPS
    FP64: 90 TFLOPS
    FP64 Tensor Core: 90 TFLOPS
    GPU Memory: Up to 384 GB HBM3e
    GPU Memory Bandwidth: 16 TB/s
    NVLink Bandwidth: 3.6 TB/s
    CPU Core Count: 72 Arm® Neoverse V2 cores
    CPU Memory: Up to 480 GB LPDDR5X
    CPU Memory Bandwidth: Up to 512 GB/s

     

    1. Preliminary specifications. May be subject to change.
    2. With sparsity.
