    OEM

    HGX B200

    Blackwell

    Built for AI and High-Performance Computing


    Artificial intelligence, complex simulations, and large-scale datasets require GPUs with multiple high-speed interconnects and a fully accelerated software stack. The NVIDIA HGX™ platform brings together the full capabilities of NVIDIA GPUs, NVIDIA NVLink™, and NVIDIA networking, along with a fully optimized AI and high-performance computing (HPC) software stack. This delivers the best application performance and accelerates time to insight for every data center.


    Specifications

    HGX B200 (Intel/AMD)


    GPU

    8x NVIDIA Blackwell GPUs (SXM)

    GPU Memory

    8x 180GB

    Performance

    72 petaFLOPS FP8 training and 144 petaFLOPS FP4 inference

    NVIDIA® NVSwitch™

    Included

    NVIDIA NVLink Bandwidth

    14.4 TB/s aggregate bandwidth

    System Power Usage

    ~14.3 kW

    CPU (Intel/AMD)

Dual Intel® Xeon® 6900-series processors with P-cores, dual 4th/5th Gen Intel® Xeon® Scalable processors, or dual AMD EPYC™ 9004/9005-series processors

    System Memory

    Varies by platform:
    24x DIMM slots, ECC RDIMM/MRDIMM DDR5 up to 8800 MT/s (up to 6TB)
    32x DIMM slots, ECC RDIMM/MRDIMM DDR5 up to 5600 MT/s (up to 4TB)
    32x DIMM slots, ECC RDIMM/MRDIMM DDR5 up to 4400 MT/s (up to 8TB)
    24x DIMM slots, ECC RDIMM/MRDIMM DDR5 up to 4800 MT/s (up to 6TB)
    24x DIMM slots, ECC RDIMM/MRDIMM DDR5 up to 6400 MT/s (up to 6TB)

    Interface

    2x RJ45 1GbE ports, 2x USB 3.0 ports, 1x VGA port, 1x TPM header

    Network

    Up to 8x NIC/InfiniBand adapters

    Storage

    8 or 10x 2.5" Gen5 NVMe/SATA drive bays

    Power

    6x 5250W power supplies (3+3 redundant)

    Rack Units (RU)

    10 RU

    Brand

    Supermicro (SMCI), ASUS, Gigabyte, Dell
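The headline figures above can be cross-checked with simple arithmetic. The following is a minimal Python sketch using only the numbers from the spec table; the variable names are illustrative, not part of any vendor API.

```python
# Sanity checks derived from the HGX B200 spec table.
# All input figures come directly from the table above.

NUM_GPUS = 8
GPU_MEMORY_GB = 180          # per-GPU memory (SXM)
FP8_TRAINING_PFLOPS = 72     # aggregate system performance
FP4_INFERENCE_PFLOPS = 144   # aggregate system performance
SYSTEM_POWER_KW = 14.3       # approximate system power usage

# Pooled GPU memory across the 8-GPU baseboard: 1.44 TB
total_memory_tb = NUM_GPUS * GPU_MEMORY_GB / 1000

# Per-GPU share of the aggregate throughput figures
fp8_per_gpu = FP8_TRAINING_PFLOPS / NUM_GPUS    # 9 petaFLOPS per GPU
fp4_per_gpu = FP4_INFERENCE_PFLOPS / NUM_GPUS   # 18 petaFLOPS per GPU

print(f"Total GPU memory: {total_memory_tb:.2f} TB")
print(f"Per-GPU FP8: {fp8_per_gpu:.0f} PFLOPS, FP4: {fp4_per_gpu:.0f} PFLOPS")
```

Note that FP4 inference throughput is exactly twice the FP8 training figure, reflecting the halved precision.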
