

    NVIDIA H200 141GB NVL

    Hopper

    Higher Performance With Larger, Faster Memory


    The NVIDIA H200 Tensor Core GPU supercharges generative AI and high-performance computing (HPC) workloads with game-changing performance and memory capabilities.

    Based on the NVIDIA Hopper™ architecture, the NVIDIA H200 is the first GPU to offer 141 gigabytes (GB) of HBM3e memory at 4.8 terabytes per second (TB/s), nearly double the capacity of the NVIDIA H100 Tensor Core GPU with 1.4X more memory bandwidth. The H200's larger and faster memory accelerates generative AI and large language models, while advancing scientific computing for HPC workloads with better energy efficiency and lower total cost of ownership.
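
    As a quick way to relate these figures to a running system, the sketch below (an illustration, not part of the original page) queries the standard CUDA runtime for the device name, total HBM capacity, and a theoretical peak-bandwidth estimate; on an H200 NVL it should report roughly 141 GB and a figure near the quoted 4.8 TB/s.

        // query_h200.cu: minimal sketch using the CUDA runtime API to read back
        // the device name, memory capacity, and a theoretical peak-bandwidth estimate.
        // Build with: nvcc query_h200.cu -o query_h200
        #include <cstdio>
        #include <cuda_runtime.h>

        int main() {
            int count = 0;
            cudaGetDeviceCount(&count);

            for (int dev = 0; dev < count; ++dev) {
                cudaDeviceProp prop{};
                cudaGetDeviceProperties(&prop, dev);

                // Total HBM capacity in GB (an H200 NVL should report roughly 141 GB).
                double mem_gb = prop.totalGlobalMem / 1.0e9;

                // Theoretical peak bandwidth: memory clock (kHz) x 2 (double data rate)
                // x bus width (bytes); for an H200 NVL this should land near 4.8 TB/s.
                double peak_tbps = 2.0 * (prop.memoryClockRate * 1.0e3)
                                 * (prop.memoryBusWidth / 8.0) / 1.0e12;

                printf("GPU %d: %s\n", dev, prop.name);
                printf("  memory:         %.0f GB\n", mem_gb);
                printf("  peak bandwidth: %.2f TB/s (theoretical)\n", peak_tbps);
                printf("  SM count:       %d\n", prop.multiProcessorCount);
            }
            return 0;
        }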


    Specifications

    H200 141GB NVL

    FP64: 30 TFLOPS
    FP64 Tensor Core: 60 TFLOPS
    FP32: 60 TFLOPS
    TF32 Tensor Core: 835 TFLOPS
    BFLOAT16 Tensor Core: 1,671 TFLOPS
    FP16 Tensor Core: 1,671 TFLOPS
    FP8 Tensor Core: 3,341 TFLOPS
    INT8 Tensor Core: 3,341 TOPS
    GPU Memory: 141 GB
    GPU Memory Bandwidth: 4.8 TB/s
    Decoders: 7 NVDEC, 7 JPEG
    Confidential Computing: Supported
    Max Thermal Design Power (TDP): Up to 600W (configurable)
    Multi-Instance GPUs: Up to 7 MIGs @ 16.5GB each
    Form Factor: PCIe, dual-slot, air-cooled
    Interconnect: 2- or 4-way NVIDIA NVLink bridge (900GB/s per GPU); PCIe Gen5 (128GB/s)
    NVIDIA AI Enterprise: Included
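
    The GPU memory bandwidth figure above (4.8 TB/s) can be sanity-checked with a simple device-to-device copy benchmark. The sketch below is an illustration rather than part of the spec sheet; the 1 GiB buffer size and 20-iteration loop are arbitrary choices, and achieved bandwidth will land somewhat below the theoretical peak.

        // bandwidth_check.cu: minimal sketch that times repeated device-to-device
        // copies with CUDA events and reports the achieved bandwidth.
        // Build with: nvcc bandwidth_check.cu -o bandwidth_check
        #include <cstdio>
        #include <cuda_runtime.h>

        int main() {
            const size_t bytes = 1ULL << 30;   // 1 GiB per buffer (arbitrary choice)
            const int iters = 20;              // arbitrary iteration count

            void *src = nullptr, *dst = nullptr;
            cudaMalloc(&src, bytes);
            cudaMalloc(&dst, bytes);
            cudaMemset(src, 0, bytes);

            cudaEvent_t start, stop;
            cudaEventCreate(&start);
            cudaEventCreate(&stop);

            // Warm-up copy so the timed loop measures steady-state behaviour.
            cudaMemcpy(dst, src, bytes, cudaMemcpyDeviceToDevice);

            cudaEventRecord(start);
            for (int i = 0; i < iters; ++i)
                cudaMemcpy(dst, src, bytes, cudaMemcpyDeviceToDevice);
            cudaEventRecord(stop);
            cudaEventSynchronize(stop);

            float ms = 0.0f;
            cudaEventElapsedTime(&ms, start, stop);

            // Each copy reads and writes the buffer once, so count 2x bytes of traffic.
            double tbps = (2.0 * bytes * iters) / (ms / 1.0e3) / 1.0e12;
            printf("achieved device-to-device bandwidth: %.2f TB/s\n", tbps);

            cudaEventDestroy(start);
            cudaEventDestroy(stop);
            cudaFree(src);
            cudaFree(dst);
            return 0;
        }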

