Vipera / Private Cloud

Executive Summary

Vipera Secures Strategic Datacenter Acquisition in Partnership with Qatar Central Bank Subsidiary | Vipera proudly announces the successful acquisition of two state-of-the-art datacenters from EDAA, a subsidiary of the Qatar Central Bank, under a landmark strategic agreement.

As part of this milestone, Vipera has entered into a long-term lease arrangement with the Central Bank for the buildings housing these facilities, reinforcing its operational presence and strengthening institutional ties within Qatar's financial ecosystem.

This acquisition firmly positions Vipera among the key players in Qatar's data center market, enabling the delivery of secure, scalable, and sovereign digital infrastructure solutions for both public and private sector clients.

More than a transaction, this is a strategic alliance with the Qatar Central Bank, focused on enhancing resilience, innovation, and digital transformation in the financial sector.

Aligned with Qatar National Vision 2030, Vipera is committed to:

  • Driving economic diversification
  • Strengthening data sovereignty
  • Supporting advanced digital infrastructure
  • Enabling financial institutions with agility and security

Datacenter Location

1st Datacenter – C-Ring Road

2 Buildings (Building 1 & Building 2) | DC located in Building 1 (refer to green highlight)
C-Ring Road map
C-Ring Road building


2nd Datacenter – Lusail City

1 Building | DC is located within the basement
Lusail City map
Lusail City building

    Architectural layout – building overview – I3

Architectural layout I3

    Architectural layout – building overview – I1

Datacenter Services

Building image

Qatar Based Services

Data Centre
Services

  • Colocation
  • Data Suites
  • Remote Hands Services
  • Workplace Recovery
  • GPU as a Service
  • High Performance Computing

Cloud
Services

  • Private & Public Cloud
  • Email and Collaboration
  • Email Security Gateway
  • Media Analytics
  • Public Cloud Monitoring
  • Storage as a Service
  • Backup as a Service

Managed IT
Services

  • Managed Application
  • Managed Computing
  • Managed OS
  • Managed Hardware
  • Managed Network
  • Connectivity/InterDC
  • NOC Monitoring

Security
Services

  • SOC & Security
  • End-user security
  • Network Security
  • Threat Intelligence
  • Application Security
  • Professional Services

End-User
Services

  • Business Environment
  • End-user devices
  • Workplace Services
  • Service Desk as a Service

Solution Services

Business Continuity/DR, Information Security, Workplace Services, Enterprise Applications, Dedicated Infrastructure, Smart Services








Strategic Partnership
Scalable, secure, and ready for future growth

    Infrastructure & Capacity

  • 1,136 vCPUs
  • 4,800 GB
  • 100 TB
  • 75 TB
  • 50 TB
  • 200 virtual machines

    Security & Networking

    Operations & Facility


  • A secure, high-performance multi-tenancy cloud platform designed for scalability, operational efficiency, and Service Provider-grade reliability.
  • Hosting over 200 virtual machines with robust backup, security, and networking infrastructure optimized for business growth.
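As a quick sanity check, the headline figures above imply the following per-VM averages (arithmetic only; the three storage tiers are not labeled in the source, so they are left aside):

```python
# Back-of-envelope capacity per VM, using the platform figures quoted above.
vcpus  = 1136   # total vCPUs
ram_gb = 4800   # total GB of memory
vms    = 200    # hosted virtual machines

print(f"avg vCPUs per VM: {vcpus / vms:.2f}")    # 5.68
print(f"avg RAM per VM:   {ram_gb / vms:.1f} GB")  # 24.0 GB
```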

Server Room
Wisdom Private Cloud

Backup Infra License & Operation

Backup & Restoration

OS License & Operation

OS License & Mgmt

Perimeter Network Security & Operations

Perimeter Network Security
(Next Gen FW, WAF, NLB)

Datacenter Network

Perimeter Network Security
(Next Gen FW, WAF, NLB)

Virtual Resource Operation

Virtual Compute & Storage
Virtual Networking Operation
Hypervisor License & Operation
Compute HW & Storage HW Operation
Compute HW & Storage HW Procurement & Supply

Datacenter Facility

Datacenter Facility Operation & Maintenance
Datacenter Hosting Facility


Vipera Private Cloud Supporting Qatar Cloud Strategies

picture

Solution Overview – IaaS Offerings

IT services consumption

Private cloud and PaaS

Management and automation

Infrastructure

Analytics
Security

High-Level Design - Physical



High-Level Design Physical


High-Level Design - Virtual


High-Level Design Virtual

    IaaS Zone Architecture Overview

Datacentre (DC) Zone

  • Purpose: Core compute infrastructure for tenant workloads.
  • Components: Compute nodes, ToR switches, core switches (spine-leaf topology).
  • Security & Services: DC firewalls, virtual load balancers, AAA servers.

Management Zone

  • Purpose: Hosts infrastructure management tools and appliances.
  • Isolation: Separated from production workloads for stability and control.

DMZ Zone

  • Hosts public-facing services with strong access control.
  • Security: WAF, virtual load balancers, MFA systems.
  • Isolation: Segregated from internal zones via firewalls.

Connectivity, Security & Design Principles

Perimeter Zone

  • Internet-facing zone with perimeter firewalls.
  • Functions: VPN gateway, ISP/CPE termination, ingress/egress control.

WAN Zone

  • Connects remote sites via MPLS/SD-WAN.
  • Purpose: Business continuity and distributed access.

Perimeter Firewalls

  • Centralized cluster serving Perimeter, DMZ, and WAN zones.
  • Connectivity: Core switches (east-west), zone switches (north-south).
  • Function: Unified Zone-Based access control.

Design Principles

  • Segmentation: Zone isolation to limit lateral movement.
  • High Availability: Redundant critical components.
  • Scalability: Leaf-spine architecture enables horizontal growth.
  • Security by Design: Integrated firewalls, WAFs, AAA, MFA for zero-trust.
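The segmentation model above can be sketched as a default-deny zone policy (an illustrative sketch only; the allowed-flow pairs are assumptions for the example, not Vipera's actual firewall rules):

```python
# Illustrative sketch of the zone-based access model described above.
# Zone names mirror the document's sections; the allowed-flow pairs are
# hypothetical examples, not Vipera's actual firewall policy.
ZONES = {"Perimeter", "DMZ", "WAN", "DC", "Management"}

# (source, destination) pairs permitted through the perimeter firewall
# cluster; anything not listed is denied, limiting lateral movement.
ALLOWED_FLOWS = {
    ("Perimeter", "DMZ"),    # ingress to public-facing services
    ("DMZ", "DC"),           # app tier reaching tenant workloads
    ("WAN", "DC"),           # remote sites via MPLS/SD-WAN
    ("Management", "DC"),    # infrastructure management tools
    ("Management", "DMZ"),
}

def is_allowed(src: str, dst: str) -> bool:
    """Default-deny: only explicitly whitelisted zone pairs may communicate."""
    if src not in ZONES or dst not in ZONES:
        raise ValueError(f"unknown zone: {src!r} -> {dst!r}")
    return src == dst or (src, dst) in ALLOWED_FLOWS

print(is_allowed("Perimeter", "DMZ"))  # True
print(is_allowed("Perimeter", "DC"))   # False: no direct path to core compute
```

Default-deny is what makes the "limit lateral movement" principle enforceable: any zone pair not explicitly whitelisted is blocked.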

WHO IS VIPERA

Paving the way to future technologies


ABOUT VIPERA LLC

      What We Offer

  • AI Hardware
  • Computers
  • Data Center
  • Digital Signage

ABOUT VIPERA

AN INTRODUCTION

What started out as a passion for groundbreaking technological innovation quickly became a thriving ecosystem of high-end technological and electronic solutions adapted and curated by a team of dedicated specialists leading the way in all things digital.

VIPERA OFFICE FRONT

TOMORROW'S TECHNOLOGY TODAY

Vipera is a premier source for selective, highly sought-after electronics and cutting-edge technology solutions catering to the digital advertising, cryptocurrency, A.I. processing, corporate I.T. and PC gaming industries.



⚙️

AI HARDWARE

💻

COMPUTER SYSTEMS

🖥️

DIGITAL SIGNAGE


AI Hardware banner

NEXT GENERATION – AUTOMATED SERVERS & GPU


BREAKING BOUNDARIES

Our fully integrated AI platform enables a diverse portfolio of machine learning products and services across industries that work seamlessly together to create AI at scale.

No matter the task or workload, our integrated AI systems can help break boundaries.

We combine AI capabilities with world class technology to enable organizations to solve a broader set of challenges.

AI Abstract Illustration

DESCRIPTION

Power your most intensive workloads with this high-performance 10U single-node server, expertly engineered for High Performance Computing, Conversational AI, Deep Learning Training, Business Intelligence & Analytics, and Industrial Automation. Driven by dual Intel® Xeon® Platinum 8570 processors offering a combined 112 cores, and bolstered by 32 high-speed 96GB DDR5-5600 RDIMM memory modules, this system ensures exceptional processing power and responsiveness across diverse projects, with capacity from 8x 3.84TB U.2 NVMe drives. Accelerate your AI initiatives with the cutting-edge NVIDIA B200 GPUs. High-bandwidth connectivity is guaranteed through 4x 400GbE QSFP ports, a 200GbE DPU, and 2x 10GbE RJ45 LAN ports, facilitating seamless data transfer. Ready to deploy immediately, this robust server ships within 24 hours, minimizing downtime and maximizing productivity.

SPECIFICATIONS

CPU     2 × Intel® Xeon® Platinum 8570 (56-Cores, 2.10GHz, 300MB Cache, 350W)
Memory     32 × 96GB DDR5-5600MHz ECC RDIMM Server Memory
Storage-1     8 × 3.8TB 2.5" 7450 PRO NVMe (15mm) PCIe 4.0 Solid State Drives (1x DWPD)
Storage-2     2 × 1.9TB M.2 PM9A3 NVMe Solid State Drives
GPU-1     1 × NVIDIA HGX GPU Baseboard with 8 × B200 (180GB)
AOC-1     8 × 400-Gigabit MCX75310AAS-NEAT (1 × OSFP) Ethernet Network Adapter
GPU-2     1 × BlueField-3 BF3220 DPU (200GbE Dual Port, Crypto Enabled)
TPM     1 × AOM-TPM-9670V-P Trusted Platform Module (TPM) 2.0
Onboard Network Ports     1 × 2 × RJ45 10GBase-T
Service Support     NVIDIA Enterprise Business Standard Support for BlueField-3 (8-Year, Software Only)
Supermicro GPU Server Image

Gold Series GPU Server

Supermicro 10U B200 (SYS-A21GE-NBRT-G1)




DP Intel 8U System with NVIDIA HGX H100/H200 8-GPU and Rear I/O

SuperMicro SuperServer SYS-821GE-TNHR (Complete System Only)

SuperMicro SuperServer

KEY APPLICATIONS

  • High Performance Computing
  • AI/Deep Learning Training
  • Industrial Automation, Retail
  • Healthcare
  • Conversational AI
  • Business Intelligence & Analytics
  • Drug Discovery
  • Climate and Weather Modeling
  • Finance & Economics

KEY FEATURES

  • 5th/4th Gen Intel® Xeon® Scalable processor support
  • 32 DIMM slots, up to 8TB: 32 × 256 GB DRAM; Memory Type: 5600MT/s ECC DDR5
  • 8 PCIe Gen 5.0 x16 LP
  • 2 PCIe Gen 5.0 x16 FHHL Slots, 2 PCIe Gen 5.0 x16 FHHL Slots (option alt)
  • Flexible networking options

KEY APPLICATIONS

  • High Performance Computing
  • AI/Deep Learning Training
  • Industrial Automation, Retail
  • Climate and Weather Modeling

KEY FEATURES

  • High density 8U system for NVIDIA® HGX™ H100/H200 8-GPU
  • Highest GPU communication using NVIDIA® NVLink™ + NVIDIA® NVSwitch™
  • 8 NIC for GPU direct RDMA (1:1 GPU Ratio)
  • 24 DIMM slots DDR5; up to 6TB 4800MT/s ECC LRDIMM/RDIMM
  • Up to 8 PCIe 5.0 x16 LP + 4 PCIe 5.0 x16 FHFL slots
  • Flexible networking options
  • 12 Hot-swap 2.5" NVMe drive bays + 2 hot-swap 2.5" SATA drive bays
  • + 4 hot-swap 2.5" NVMe drive bays (optional)
Supermicro GPU Server Image

DP AMD 8U System with NVIDIA HGX H100/H200 8-GPU

SuperMicro GPU A+ Server AS-8125GS-TNHR (Complete System Only)

DP Intel 4U Liquid-Cooled System with NVIDIA HGX H100/H200 8-GPU (onsite service is required for liquid-cooling)

SuperMicro GPU SuperServer SYS-421GE-TNHR2-LCC (Complete System Only)

SuperMicro SuperServer

KEY APPLICATIONS

  • High Performance Computing
  • AI/Deep Learning Training
  • Industrial Automation, Retail
  • Healthcare
  • Conversational AI
  • Business Intelligence & Analytics
  • Drug Discovery
  • Climate and Weather Modeling
  • Finance & Economics

KEY FEATURES

  • 5th/4th Gen Intel® Xeon® Scalable processor support
  • 32 DIMM slots, up to 8TB: 32 × 256 GB DRAM; Memory Type: 5600MT/s ECC DDR5
  • 8 PCIe Gen 5.0 x16 LP
  • 2 PCIe Gen 5.0 x16 FHHL Slots, 2 PCIe Gen 5.0 x16 FHHL Slots (option alt)
  • Flexible networking options

KEY APPLICATIONS

  • Artificial Intelligence (AI)
  • HPC
  • AI / Deep Learning
  • Deep Learning/AI/Machine Learning Development

KEY FEATURES

  • High density 4U system with NVIDIA® HGX™ H100/H200 8-GPU
  • 8 NVMe for NVIDIA GPUDirect Storage
  • 8 NIC for NVIDIA GPUDirect RDMA (1:1 GPU ratio)
  • Highest GPU communication using NVIDIA® NVLink®
  • Dual-Socket, AMD EPYC™ 9004/9005 Series Processors
  • 24 DIMM slots, up to 6TB: 4800 ECC DDR5
  • 8 PCIe 5.0 x16 LP slots
Supermicro GPU Server Image

DP AMD 4U Liquid-Cooled System with NVIDIA HGX H100/H200 8-GPU (onsite service is required for liquid-cooling)

SuperMicro GPU A+ Server AS-4125GS-TNHR2-LCC (Complete System Only)




HPC/AI Server - AMD EPYC™ 9005/9004 - 5U DP NVIDIA HGX™ H200 8-GPU

GIGABYTE (G593-ZD1-AAX3)

GIGABYTE Server

KEY APPLICATIONS

  • NVIDIA HGX™ H200 8-GPU
  • 900GB/s GPU-to-GPU bandwidth with NVIDIA® NVLink™ and NVSwitch™
  • Dual AMD EPYC™ 9005/9004 Series Processors
  • 12-Channel DDR5 RDIMM, 24 x DIMMs

SPECIFICATIONS

Server     GIGABYTE G593-ZD1-AAX3, 5U 2-CPU NVIDIA® HGX™ H200 8-GPU Server
System     System for HPC / AI, Dual AMD EPYC™ 9004, 24x DDR5 DIMM Slots, 8x Hot-Swap Storage Bays, 4+2 3000W 80+ Titanium PSUs, Air Cooling, 2x 10Gb/s RJ45 LAN Ports (Intel® X710-AT2), 2x IPMI RJ45 Ports, 8x NVIDIA H200 SXM5 Modules, NVLink Fabric
CPU     2x AMD EPYC 9754 3.1GHz 128-Core Processors
System Memory     24x 96GB DDR5 ECC RDIMM, 4800MHz
SSD 1 (OS)     2x 1.92TB NVMe PCIe Gen4 U.2 SSD
SSD 2 (Scratch)     3x 7.68TB NVMe PCIe Gen4 U.2 SSD
Network Card     8x NVIDIA ConnectX-7 VPI 400GbE/NDR IB, Dual-Port OSFP, PCIe Gen5 x16 HHHL, Crypto Disabled, Secure Boot Enabled, 3-Year Warranty, MCX75310AAS-NEAT
Network Card     2x NVIDIA ConnectX-7 VPI 200GbE/NDR200 IB, Dual-Port QSFP112, PCIe Gen5 x16 HHHL, Crypto Disabled, Secure Boot Enabled, 3-Year Warranty, MCX755106AS-HEAT
Accessories     6x Server Power Cables, 1x Slide Rail Kit, 2x CPU Heatsinks
Service     Assembly and Testing by GIGABYTE

KEY FEATURES

  • CPU+GPU Direct liquid cooling solution
  • Liquid-cooled NVIDIA HGX™ H100 8-GPU
  • 900GB/s GPU-to-GPU bandwidth with NVIDIA® NVLink® and NVSwitch™
  • Dual AMD EPYC™ 9004 Series Processors
  • 12-Channel DDR5 RDIMM, 24 x DIMMs
  • Dual ROM Architecture
  • 2 x 10Gb/s LAN ports via Intel® X710-AT2
  • 2 x M.2 slots with PCIe Gen3 x4 and x1 interface
  • 8 x 2.5" Gen5 NVMe/SATA/SAS-4 hot-swap bays
  • 4 x FHHL PCIe Gen5 x16 slots
  • 8 x LP PCIe Gen5 x16 slots
  • 4+2 3000W 80 PLUS Titanium redundant power supplies
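The 4+2 supply configuration above can be read as a quick power-budget check (a sketch; the assumption that four supplies carry the load while two serve as redundant spares is mine):

```python
# Quick power-budget sketch for a 4+2 redundant PSU arrangement.
# Assumption (mine): 4 supplies carry the load, 2 are redundant spares.
psu_watts = 3000
active, redundant = 4, 2

usable_w = active * psu_watts                         # power available to the system
total_installed_w = (active + redundant) * psu_watts  # total installed capacity

print(usable_w)           # 12000
print(total_installed_w)  # 18000
```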

DESCRIPTION

The path to AMD's 5nm 'Zen 4' architecture was paved with many successful generations of EPYC innovations and chiplet designs, and AMD EPYC 9004 Series processors continue this progression.

GIGABYTE Server Image

HPC/AI Server - 5U DP NVIDIA HGX™ H100 8-GPU

GIGABYTE (G593-ZD2-LAX1)




HPC/AI Server - AMD EPYC™ 9004 - 5U DP NVIDIA HGX™ H200 8-GPU DLC

GIGABYTE (G593-ZD1-LAX3)

GIGABYTE Server

DESCRIPTION

The NVIDIA HGX™ H200 combines H200 Tensor Core GPUs with high-speed interconnects to deliver extraordinary performance, scalability, and security for every data center. Configurations of up to eight GPUs deliver unprecedented acceleration, with a staggering 32 petaFLOPS of performance to create the world's most powerful accelerated scale-up server platform for AI and HPC.

Key Features

  • CPU+GPU Direct liquid cooling solution
  • Liquid-cooled NVIDIA HGX™ H100 8-GPU
  • 900GB/s GPU-to-GPU bandwidth with NVIDIA® NVLink® and NVSwitch™
  • Dual AMD EPYC™ 9004 Series Processors
  • 12-Channel DDR5 RDIMM, 24 x DIMMs
  • Dual ROM Architecture
  • 2 x 10Gb/s LAN ports via Intel® X710-AT2
  • 2 x M.2 slots with PCIe Gen3 x4 and x1 interface
  • 8 x 2.5" Gen5 NVMe/SATA/SAS-4 hot-swap bays
  • 4 x FHHL PCIe Gen5 x16 slots
  • 8 x LP PCIe Gen5 x16 slots
  • 4+2 3000W 80 PLUS Titanium redundant power supplies

AI Computing GPU

NVIDIA H100 (94 GB HBM3) – EOL 2025

NVIDIA H100 NVL

DESCRIPTION

The H100 NVL has a full 6144-bit memory interface (1024-bit for each HBM3 stack) and memory speeds up to 5.1 Gbps. This gives a maximum throughput of 7.8 TB/s across the pair, more than twice that of the H100 SXM. Large Language Models require large buffers, and the higher bandwidth will certainly have an impact as well.
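The bandwidth figure follows directly from the quoted interface width and pin speed; since the NVL pairs two GPUs, the per-pair bandwidth is twice the per-GPU figure:

```python
# Derive the H100 NVL memory bandwidth from the figures quoted above.
bits_per_gpu  = 6144   # 6 HBM3 stacks x 1024-bit each
pin_rate_gbps = 5.1    # per-pin data rate

per_gpu_gbs = bits_per_gpu * pin_rate_gbps / 8  # bits -> bytes
pair_tbs    = 2 * per_gpu_gbs / 1000            # NVL = two GPUs

print(f"per GPU:  {per_gpu_gbs:.1f} GB/s")  # 3916.8 GB/s
print(f"NVL pair: {pair_tbs:.2f} TB/s")     # ~7.83 TB/s
```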

Key Features

Specification     H100 NVL
FP64     68 teraFLOPS
FP64 Tensor Core     134 teraFLOPS
FP32     134 teraFLOPS
TF32 Tensor Core     1,979 teraFLOPS
BFLOAT16 Tensor Core     3,958 teraFLOPS
FP16 Tensor Core     3,958 teraFLOPS
FP8 Tensor Core     7,916 teraFLOPS
INT8 Tensor Core     7,916 TOPS
GPU memory     188GB
GPU memory bandwidth     7.8TB/s
Decoders     14 NVDEC, 14 JPEG
Max thermal design power (TDP)     2x 350-400W (configurable)
Multi-Instance GPUs     Up to 14 MIGs @ 12GB each
Form factor     2x PCIe, dual-slot, air-cooled
Interconnect     NVLink: 600GB/s; PCIe Gen5: 128GB/s
Server options     Partner and NVIDIA-Certified Systems with 2-4 pairs
NVIDIA AI Enterprise     Add-on

Key Features

Specification     H100 PCIe
FP64     26 teraFLOPS
FP64 Tensor Core     51 teraFLOPS
FP32     51 teraFLOPS
TF32 Tensor Core     756 teraFLOPS
BFLOAT16 Tensor Core     1,513 teraFLOPS
FP16 Tensor Core     1,513 teraFLOPS
FP8 Tensor Core     3,026 teraFLOPS
INT8 Tensor Core     3,026 TOPS
GPU memory     80GB

DESCRIPTION

Tap into unprecedented performance, scalability, and security for every workload with the NVIDIA H100 Tensor Core GPU. With NVIDIA® NVLink® Switch System, up to 256 H100s can be connected to accelerate exascale workloads, along with a dedicated Transformer Engine to solve trillion-parameter language models. H100's combined technology innovations can speed up large language models by an incredible 30X over the previous generation to deliver industry-leading conversational AI.

NVIDIA H100 PCIe

AI Computing GPU

NVIDIA H100 (80 GB PCIe-4) – EOL 2025




AI Computing GPU

NVIDIA H200 NVL
(141 GB Passive PCIe)

NVIDIA H200 NVL

DESCRIPTION

The NVIDIA H200 Tensor Core GPU in its PCIe form factor offers groundbreaking performance for AI workloads, featuring 141GB of memory and a staggering 4.8TB/s bandwidth. This configuration is optimized for large-scale deployments, supporting up to 8 GPUs per server and utilizing NVLink bridges for high-speed data transfer at 900GB/s.

Key Features

Specification     H200 NVL (PCIe)
FP64     34 TFLOPS
FP64 Tensor Core     67 TFLOPS
FP32     67 TFLOPS
TF32 Tensor Core²     989 TFLOPS
BFLOAT16 Tensor Core²     1,979 TFLOPS
FP16 Tensor Core²     1,979 TFLOPS
FP8 Tensor Core²     3,958 TFLOPS
INT8 Tensor Core²     3,958 TOPS
GPU memory     141 GB
GPU Memory Bandwidth     4.8TB/s
Decoders     7 NVDEC, 7 JPEG
Confidential Computing     Supported
Max Thermal Design Power (TDP)     Up to 600W (configurable)
Multi-Instance GPUs     Up to 7 MIGS @ 16.5GB each
Form Factor     PCIe
Interconnect     2- or 4-way NVIDIA NVLink bridge: 900GB/s; PCIe Gen5: 128GB/s
Server Options     NVIDIA MGX™ H200 NVL partner and NVIDIA-Certified Systems with up to 8 GPUs
NVIDIA AI Enterprise     Add-on

KEY FEATURES

Product     NVIDIA L40
Architecture     NVIDIA Ada Lovelace Architecture
Process Size     4nm NVIDIA Custom Process | TSMC
Transistors     76.3 Billion
Die Size     608.44 mm2
CUDA Cores     18176
Tensor Cores     568 | Gen 4
RT Cores     142 | Gen 3
GPU Memory     48 GB GDDR6 ECC
Memory Interface     384-bit
Memory Bandwidth     864 GB/s
Display Connectors     4x DP 1.4a
Maximum Digital Resolution     4x 5K at 60 Hz | 2x 8K at 60 Hz
     4x 4K at 120 Hz | 30-bit Color
Form Factor     4.4″ H x 10.5″ L | Dual Slot
Thermal Solution     Passive
Maximum Power Consumption     300 W
vGPU Software Support     NVIDIA vApps, vPC, vWS | Early 2023
vGPU Profiles Supported     1 GB, 2 GB, 3 GB, 4 GB, 6 GB, 8 GB, 12 GB, 16 GB, 24 GB, 48 GB
Graphics APIs     DirectX 12 Ultimate, Shader Model 6.6, OpenGL 4.6, Vulkan 1.3
NVENC | NVDEC     3x ENC | 3x DEC | Includes AV1 Encode and Decode
Compute APIs     CUDA 12.0, DirectCompute, OpenCL 3.0
NVIDIA 3D Vision and 3D Vision Pro     Support via Optional 3-pin mini-DIN Bracket
Frame Lock     Supported with optional NVIDIA Quadro Sync II
Power Connector     1x PCIe CEM5 16-pin
NEBS Ready     Level 3
Secure Boot with Root of Trust     Supported
NVIDIA L40 / L40S

AI Computing GPU

NVIDIA L40 & L40S Enterprise 48GB






AI Computing GPU

NVIDIA RTX Pro 6000 Blackwell (96 GB) –
Release End of May

RTX Pro 6000 Blackwell

Specifications

NVIDIA RTX PRO 6000 Blackwell Workstation Edition

NVIDIA Architecture     NVIDIA Blackwell
AI TOPS     4000
Tensor Cores     5th Gen
Ray Tracing Cores     4th Gen
NVIDIA Encoder (NVENC)     4x 9th Gen
NVIDIA Decoder (NVDEC)     4x 6th Gen
Memory Configuration     96 GB GDDR7 with error-correcting code (ECC)
Memory Bandwidth     1792 GB/sec
Max Power Consumption     600 W

NVIDIA RTX PRO 6000 Blackwell Max-Q Workstation Edition

NVIDIA Architecture     NVIDIA Blackwell
AI TOPS     3511
Tensor Cores     5th Gen
Ray Tracing Cores     4th Gen
NVIDIA Encoder (NVENC)     4x 9th Gen
NVIDIA Decoder (NVDEC)     4x 6th Gen
Memory Configuration     96GB GDDR7 with error-correcting code (ECC)
Memory Bandwidth     1792 GB/sec
Max Power Consumption     300 W