GPU-powered infrastructure

Our comprehensive lineup of NVIDIA GPUs, including the P100, L4, L40S, H100, and GH200, covers a wide range of computing needs. Harness the speed and efficiency of Graphics Processing Units (GPUs) for parallelized workloads, whether on instances or supercomputers.

Available zones:
Paris: PAR1, PAR2

Available GPU Instances


H100 PCIe GPU Instance

€2.52/hour (~€1387/month)

Accelerate your model training and inference with the most advanced AI chip on the market.

Launch your H100 PCIe GPU


Render GPU Instance

€1.24/hour (~€891/month)

Dedicated Tesla P100s for all your Machine Learning and Artificial Intelligence needs.

Launch your Render GPU


L4 GPU Instance

Available in Q1 2024

Optimize the cost of your AI infrastructure with a versatile entry-level GPU.

Pre-register your interest


L40S GPU Instance

Available in H1 2024

Expand capacity for AI or run a mix of workloads, including visual computing, with the universal L40S GPU.

Pre-register your interest

Available GPU-powered infrastructure

Nabu 2023


Build the next Foundation Model with Nabu 2023, one of the fastest and most energy-efficient supercomputers in the world.

Tell us more about your needs

Jero 2023


Fine-tune Transformers models and deploy them on Jero 2023, the 2-DGX AI supercomputer that can scale up to 16 nodes.

Tell us more about your needs

Grace Hopper

Available in 2024

The NVIDIA GH200 Grace Hopper Superchip combines the NVIDIA Grace CPU and the H100 Tensor Core GPU for an order-of-magnitude performance leap for large-scale AI and HPC.

Pre-register your interest

Choose the right machine

Render GPU Instance
- NVIDIA GPU: P100 16GB PCIe 3
- NVIDIA architecture: Pascal (2016)
- Type: Instances
- Performance (FP16 Tensor Core training): no Tensor Cores (FP16 not stable)
- Specifications: 10 vCPU (Skylake), 400GB NVMe, Boot on Block, 1 Gbps
- Price: €1.24/hour (~€891/month)
- Use cases: best price/performance ratio for computer vision, 3D graphics, image/video encoding/decoding (~4K), medium DL model training
- Not made for: LLMs

L4 GPU Instance
- NVIDIA GPU: L4 24GB PCIe 4
- NVIDIA architecture: Ada Lovelace (2022)
- Type: Instances
- Performance (FP16 Tensor Core training): up to 242 TFLOPS
- Specifications: under construction
- Price: coming soon
- Format & features: Multi-GPU (under construction)
- Use cases: best price/performance ratio for inference of S/M/L DL models, small LLM fine-tuning (PEFT), 3D graphics, image/video encoding/decoding (8K)
- Not made for: LLM training

L40S GPU Instance
- NVIDIA GPU: L40S 48GB PCIe 4
- NVIDIA architecture: Ada Lovelace (2022)
- Type: Instances
- Performance (FP16 Tensor Core training): up to 362 TFLOPS
- Specifications: under construction
- Price: coming soon
- Format & features: Multi-GPU (under construction)
- Use cases: large DL model training and inference, medium LLM fine-tuning (PEFT) and inference, 3D graphics, image/video processing (8K encoding/decoding)

H100 PCIe GPU Instance
- NVIDIA GPU: H100 80GB PCIe 5
- NVIDIA architecture: Hopper (2022)
- Type: Instances
- Performance (FP16 Tensor Core training): up to 1,513 TFLOPS
- Specifications: 24 vCPU (Zen4), 240GB DDR5 RAM, 3TB NVMe scratch, Boot on Block, 10 Gbps
- Price: €1.90/hour (~€1387/month)
- Format & features: Multi-GPU (up to 2), Multi-Instance GPU (MIG)
- Use cases: extra-large DL model training and inference, large LLM fine-tuning (PEFT) and inference
- Not made for: 3D graphics

Jero & Nabu 2023
- NVIDIA GPU: H100 80GB Tensor Core SXM5
- NVIDIA architecture: Hopper (2022)
- Type: Supercomputer
- Performance (FP16 Tensor Core training): up to 2,010 PFLOPS
- Specifications: up to 14,224 CPU cores (Zen4), 254TB RAM, DDN low-latency storage, 400 Gbps
- Price: depending on your project
- Format & features: up to 127 DGX servers, customizable for your project
- Use cases: extra-large DL model training and inference, large LLM training
- Not made for: 3D graphics

GH200 Grace Hopper™
- NVIDIA GPU: NVIDIA GH200 Grace Hopper™ Superchip
- NVIDIA architecture: GH200 Grace Hopper™ architecture
- Type: Instance to supercomputer
- Performance (FP16 Tensor Core training): up to 989 peak TFLOPS per GH200
- Specifications: GH200 Superchip with 72 Arm Neoverse V2 cores, 480GB of LPDDR5X DRAM, and 96GB of HBM3 GPU memory, fully merged into up to 576GB of globally usable memory; 1.92TB of scratch storage; up to 25 Gbps of networking performance
- Price: coming soon
- Format & features: single chip up to DGX GH200 architecture (for larger setups, contact us)
- Use cases: extra-large LLM and DL model inference
- Not made for: 3D graphics, training
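The FP16 column above comes down to Tensor Core support: the Pascal-era P100 (compute capability 6.0) predates Tensor Cores, while Ada Lovelace parts (L4/L40S, 8.9) and Hopper parts (H100/GH200, 9.0) have them. A minimal sketch, assuming PyTorch on the instance, of gating mixed-precision training on the device's compute capability (the helper name is our own, not part of any API):

```python
def supports_fp16_tensor_cores(major: int, minor: int) -> bool:
    """Return True if a CUDA compute capability includes FP16 Tensor Cores.

    Tensor Cores first shipped with Volta (7.0). Pascal cards such as the
    P100 (6.0) only have regular FP16 units, which is why FP16 training is
    flagged as unstable on the Render GPU Instance above.
    """
    return (major, minor) >= (7, 0)


# Compute capabilities of the GPUs in this lineup.
LINEUP = {"P100": (6, 0), "L4": (8, 9), "L40S": (8, 9), "H100": (9, 0)}

for name, cc in LINEUP.items():
    print(f"{name}: FP16 Tensor Cores = {supports_fp16_tensor_cores(*cc)}")
```

On a live instance, `torch.cuda.get_device_capability()` returns the same `(major, minor)` pair, so this check can decide whether to enable FP16 autocast or fall back to FP32 on a Render (P100) instance.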
Start your GPU now