

Flexible sizing

Scale your machine learning training with a flexible on-demand GPU cluster. Choose the exact capacity you need, from 2 to 127 nodes, so you don't overcommit and pay only for what you use.

Top-tier performance

Train your models on NVIDIA H100 Tensor Core GPUs with NVIDIA Spectrum-X interconnects for seamless, high-performance distributed AI training without interruptions.

No long-term commitment

Use the cluster for as long as you need, from one week to a few months. You decide when to start and stop, without the burden of long-term contracts. Ideal for temporary or bursty AI workloads.

Tech specs

From 16 to 504 GPUs to support your development

Reserve a cluster sized to your needs, from 16 to 504 GPUs, to secure your access to NVIDIA H100 Tensor Core GPUs.

Fast networking and GPU-to-GPU communication for distributed training

NVIDIA HGX H100 with NVLink and Spectrum-X networking addresses the key communication bottleneck between GPUs and is one of the top solutions on the market for running distributed training.
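To make the GPU-to-GPU communication concrete, here is a minimal sketch of multi-node data-parallel training. PyTorch, its NCCL backend, and the torchrun launcher are illustrative assumptions on our part, not tools named on this page; any framework able to use the cluster's NVLink and Spectrum-X fabric follows the same pattern.

```python
# Minimal multi-node data-parallel training sketch (PyTorch + NCCL, assumed stack).
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE for every process it launches;
    # the NCCL backend then routes GPU-to-GPU traffic over the fastest available
    # path (NVLink inside a node, the cluster network between nodes).
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy model and optimizer; swap in a real architecture for actual training.
    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(100):
        x = torch.randn(32, 1024, device=f"cuda:{local_rank}")
        loss = model(x).pow(2).mean()
        optimizer.zero_grad()
        loss.backward()  # gradients are all-reduced across all GPUs here
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

On an HGX H100 node with 8 GPUs, a script like this could be started on every node with something along the lines of `torchrun --nnodes=<number of nodes> --nproc_per_node=8 --rdzv_backend=c10d --rdzv_endpoint=<head node>:29500 train.py`, where the file name, head-node address and port are placeholders.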

Private and secure environment

NVIDIA Spectrum-X, the latest networking technology developed by NVIDIA, enables us to build secure multi-tenant clusters hosted in the same adiabatic data center.

Use cases

Is a cluster too big a step? Maybe start with a GPU Instance

H100 PCIe GPU Instance

€2.73/hour (~€1,992.90/month)

Accelerate your model training and inference with the most high-end AI chip on the market!

Learn more

L40S GPU Instance

€1.40/hour (~€1,022/month)

Accelerate the next generation of AI-enabled applications with the universal L40S GPU Instance, faster than the L4 and cheaper than the H100 PCIe.

Learn more

L4 GPU Instance

€0.75/hour (~€548/month)

Optimize the costs of your AI infrastructure with a versatile entry-level GPU.

Learn more
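For reference, the monthly estimates on these cards appear to be consistent with roughly 730 hours of continuous use per month; the 730-hour figure is our own reading of the numbers, not a rate stated by Scaleway:

\[
\begin{aligned}
2.73~\text{EUR/hour} \times 730~\text{hours/month} &= 1992.90~\text{EUR/month}\\
1.40~\text{EUR/hour} \times 730~\text{hours/month} &= 1022~\text{EUR/month}\\
0.75~\text{EUR/hour} \times 730~\text{hours/month} &\approx 548~\text{EUR/month}
\end{aligned}
\]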