The NVIDIA H100 PCIe Tensor Core GPU will join the Scaleway ecosystem in 2023
Scaleway announces today that the NVIDIA H100 PCIe Tensor Core GPU will join its cloud ecosystem by the end of 2023.
With artificial intelligence (AI) usage currently booming - ChatGPT is the fastest-growing consumer app in history - it’s essential to stay ahead of the curve… without excessive costs for users, or for the planet.
Indeed, the H100 PCIe Tensor Core GPU, with its 80GB of VRAM, delivers approximately 6x the peak compute throughput of the A100, according to NVIDIA. This is notably because each Streaming Multiprocessor is 2x faster, thanks to its new fourth-generation Tensor Cores; the new FP8 format and its associated Transformer Engine provide another 2x improvement; and increased clock frequencies in the Hopper architecture deliver a further, approximately 1.3x, performance gain.
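As a rough sanity check of how these stated factors compound (a simplification that assumes they multiply independently; real-world speedups depend heavily on the workload):

```python
# Illustrative compounding of NVIDIA's stated per-component speedup
# factors for H100 vs. A100. This is a back-of-the-envelope sketch,
# not a benchmark.
sm_tensor_core_speedup = 2.0    # faster 4th-gen Tensor Cores per SM
fp8_transformer_engine = 2.0    # FP8 format + Transformer Engine
clock_frequency_gain = 1.3      # higher Hopper clock frequencies

combined = sm_tensor_core_speedup * fp8_transformer_engine * clock_frequency_gain
print(f"Combined speedup: ~{combined:.1f}x")
```

This multiplies out to roughly 5.2x; further architectural improvements account for the remainder of the approximately 6x figure NVIDIA cites.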
This increased performance comes with ambitious energy efficiency improvements, largely thanks to the chip’s finer manufacturing process. This means, as NVIDIA puts it, that “H100 enables companies to slash costs for deploying AI, delivering the same AI performance with 3.5x more energy efficiency and 3x lower total cost of ownership, while using 5x fewer server nodes over the previous generation.”
Specifically, Scaleway will offer the H100 PCIe Tensor Core GPU, with its 80GB of super-fast HBM2e memory, within its complete cloud ecosystem by the end of 2023. This is an ideal configuration for applications that scale to 1 or 2 GPUs at a time, for uses like training large Deep Learning models for a faster time to ROI, or optimizing GPU deployments on Kubernetes.
Users interested in accessing NVIDIA H100 PCIe Tensor Core GPU performance in Scaleway data centers can sign up for more information today.