NVIDIA H100 NVL

  • NVIDIA Hopper GPU architecture
  • Compute-optimized GPU
  • 14,592 NVIDIA CUDA Cores
  • 456 NVIDIA Tensor Cores
  • 94 GB HBM3 memory with ECC
  • Up to 3.9 TB/s memory bandwidth
  • Max. power consumption: 400W
  • Graphics bus: PCI-E 5.0 x16
  • Thermal solution: Passive
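The core counts above line up with the Hopper GH100 streaming-multiprocessor layout. As a quick sanity check (assuming 114 active SMs on the H100 NVL, with 128 FP32 CUDA cores and 4 fourth-generation Tensor Cores per SM):

```python
# Sanity-check the listed core counts against Hopper's per-SM layout.
# Assumption: 114 active SMs, 128 FP32 CUDA cores and 4 Tensor Cores per SM.
ACTIVE_SMS = 114
CUDA_CORES_PER_SM = 128
TENSOR_CORES_PER_SM = 4

cuda_cores = ACTIVE_SMS * CUDA_CORES_PER_SM      # 14592, matching the spec
tensor_cores = ACTIVE_SMS * TENSOR_CORES_PER_SM  # 456, matching the spec

print(cuda_cores, tensor_cores)  # 14592 456
```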
Out of stock

Free delivery on orders over 500 RON


Product intended exclusively for professional use
Description

The NVIDIA H100 Tensor Core GPU delivers exceptional performance, scalability, and security for every workload. H100 uses breakthrough innovations in the NVIDIA Hopper architecture to deliver industry-leading conversational AI, speeding up large language models (LLMs) by up to 30X. H100 also includes a dedicated Transformer Engine built to handle trillion-parameter language models.

Technical specifications

Specification                    | H100 SXM                                  | H100 NVL
FP64                             | 34 teraFLOPS                              | 30 teraFLOPS
FP64 Tensor Core                 | 67 teraFLOPS                              | 60 teraFLOPS
FP32                             | 67 teraFLOPS                              | 60 teraFLOPS
TF32 Tensor Core*                | 989 teraFLOPS                             | 835 teraFLOPS
BFLOAT16 Tensor Core*            | 1,979 teraFLOPS                           | 1,671 teraFLOPS
FP16 Tensor Core*                | 1,979 teraFLOPS                           | 1,671 teraFLOPS
FP8 Tensor Core*                 | 3,958 teraFLOPS                           | 3,341 teraFLOPS
INT8 Tensor Core*                | 3,958 TOPS                                | 3,341 TOPS
GPU Memory                       | 80 GB                                     | 94 GB
GPU Memory Bandwidth             | 3.35 TB/s                                 | 3.9 TB/s
Decoders                         | 7 NVDEC, 7 JPEG                           | 7 NVDEC, 7 JPEG
Max Thermal Design Power (TDP)   | Up to 700 W (configurable)                | 350-400 W (configurable)
Multi-Instance GPUs              | Up to 7 MIGs @ 10 GB each                 | Up to 7 MIGs @ 12 GB each
Form Factor                      | SXM                                       | PCIe, dual-slot, air-cooled
Interconnect                     | NVIDIA NVLink™: 900 GB/s; PCIe Gen5: 128 GB/s | NVIDIA NVLink: 600 GB/s; PCIe Gen5: 128 GB/s
Server Options                   | NVIDIA HGX H100, Partner and NVIDIA-Certified Systems™ with 4 or 8 GPUs; NVIDIA DGX H100 with 8 GPUs | Partner and NVIDIA-Certified Systems with 1–8 GPUs
NVIDIA AI Enterprise             | Add-on                                    | Included

* With sparsity
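The starred Tensor Core figures assume 2:4 structured sparsity, which NVIDIA quotes as doubling dense throughput; the dense rates are therefore half the listed values. A quick conversion for the H100 NVL column (the dictionary values below are taken from the table; the halving rule is the stated sparsity assumption):

```python
# Convert the starred (sparsity-enabled) H100 NVL Tensor Core rates
# to dense rates: with 2:4 structured sparsity, sparse = 2 x dense.
sparse_tflops_nvl = {
    "TF32": 835,
    "BF16": 1671,
    "FP16": 1671,
    "FP8": 3341,
}
dense_tflops_nvl = {fmt: rate / 2 for fmt, rate in sparse_tflops_nvl.items()}
print(dense_tflops_nvl)
# {'TF32': 417.5, 'BF16': 835.5, 'FP16': 835.5, 'FP8': 1670.5}
```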

Contact an Elmark specialist

Questions? Need advice? Call or write to us!