
Flexible sizing

Scale your machine learning training with a flexible on-demand GPU cluster. Choose the exact capacity you need, from 2 to 127 nodes; don’t overcommit, and pay only for what you use.

Top-tier performance

Train your models with NVIDIA H100 Tensor Core GPUs and Spectrum-X interconnects, ensuring seamless, high-performance distributed AI training with zero interruptions.

No long-term commitment

Use the cluster for as long as you need – from one week to a few months. You decide when to start and stop, without the burden of long-term contracts. Ideal for temporary or bursty AI workloads.

Tech specs

From 16 to 504 GPUs to support your development

Reserve a cluster sized to your needs, from 16 to 504 GPUs, to secure your access to NVIDIA H100 Tensor Core GPUs.

Fast networking and GPU-to-GPU communication for distributed training

NVIDIA HGX H100 with NVLink and Spectrum-X networking addresses the key communication bottleneck between GPUs, making it one of the top solutions on the market for running distributed training.
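As an illustration of the kind of workload this interconnect serves, here is a minimal sketch of multi-node data-parallel training with PyTorch's NCCL backend, which uses NVLink within a node and the cluster fabric between nodes. The model, training loop, and launch parameters are placeholders for illustration only, not a Scaleway-specific API.

```python
# Minimal multi-node data-parallel training sketch (PyTorch DDP + NCCL).
# Assumes it is launched with torchrun, which sets RANK, LOCAL_RANK and WORLD_SIZE.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # NCCL routes GPU-to-GPU traffic over NVLink inside a node
    # and over the cluster network across nodes.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)  # placeholder model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(100):  # placeholder training loop with random data
        x = torch.randn(32, 1024, device=local_rank)
        loss = model(x).square().mean()
        optimizer.zero_grad()
        loss.backward()  # gradients are all-reduced across every GPU here
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Such a script would typically be started once per node with something like `torchrun --nnodes=<node-count> --nproc_per_node=8 --rdzv_backend=c10d --rdzv_endpoint=<head-node>:29500 train.py`, where the head-node address and port are placeholders for your own cluster configuration.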

Private and secure environment

NVIDIA Spectrum-X, the latest networking technology developed by NVIDIA, enables us to build multi-tenant clusters hosted in the same adiabatic data center.

Use cases

Are clusters too big a step? Maybe start with a GPU Instance.