Our comprehensive lineup of NVIDIA GPUs, including the P100, the H100 Tensor Core GPU, the L4 Tensor Core GPU, the L40S and the GH200, covers a wide range of computing needs. Harness the speed and efficiency of GPUs for parallelized workloads, on Instances or on supercomputers.
€2.52/hour (~€1,387/month)
Accelerate your model training and inference with the highest-end AI chip on the market!
€1.24/hour (~€891/month)
Dedicated Tesla P100s for all your Machine Learning & Artificial Intelligence needs.
€0.75/hour (~€548/month)
Optimize the costs of your AI infrastructure with a versatile entry-level GPU.
€1.40/hour (~€1,022/month)
Accelerate the next generation of AI-enabled applications with the universal L40S GPU Instance, faster than the L4 and cheaper than the H100 PCIe.
127 NVIDIA DGX H100
Build the next Foundation Model with Nabu 2023, one of the fastest and most energy-efficient supercomputers in the world.
2 NVIDIA DGX H100
Fine-tune Transformer models and deploy them on Jero 2023, the 2-DGX AI supercomputer that can scale up to 16 nodes.
Available in 2024
The NVIDIA GH200 Grace Hopper Superchip combines the NVIDIA Grace CPU and the H100 Tensor Core GPU for an order-of-magnitude performance leap in large-scale AI and HPC.
GPU | GPU memory (per GPU) | FP16 peak performance | Recommended for | Price |
---|---|---|---|---|
2x NVIDIA H100 Tensor Core GPU | 80GB | 3,026 TFLOPS | 70B LLM Fine-Tuning / Inference | €5.04/hour |
1x NVIDIA H100 Tensor Core GPU | 80GB | 1,513 TFLOPS | 7B LLM Fine-Tuning / Inference | €2.52/hour |
8x NVIDIA L40S GPU | 48GB | 2,896 TFLOPS | Fine-Tuning / Inference of GenAI (image, video) models up to 70B | €11.20/hour |
4x NVIDIA L40S GPU | 48GB | 1,448 TFLOPS | Inference of Mixtral 8x22B | €5.60/hour |
2x NVIDIA L40S GPU | 48GB | 724 TFLOPS | 7B LLM Inference | €2.80/hour |
1x NVIDIA L40S GPU | 48GB | 362 TFLOPS | Image & Video Encoding (8K) | €1.40/hour |
8x NVIDIA L4 Tensor Core GPU | 24GB | 1,936 TFLOPS | 70B LLM Inference | €6.00/hour |
4x NVIDIA L4 Tensor Core GPU | 24GB | 968 TFLOPS | 7B LLM Inference | €3.00/hour |
2x NVIDIA L4 Tensor Core GPU | 24GB | 484 TFLOPS | Video Encoding (8K) | €1.50/hour |
1x NVIDIA L4 Tensor Core GPU | 24GB | 242 TFLOPS | Image Encoding (8K) | €0.75/hour |
1x NVIDIA P100 | 16GB | 19 TFLOPS | Image / Video Encoding (4K) | €1.24/hour |
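For quick budgeting from the hourly rates above, a minimal sketch of the conversion to a monthly estimate. The ~730 hours/month figure (24 h × ~30.4 days) is our assumption based on how the monthly figures on this page appear to be derived; actual billing follows the provider's own accounting.

```python
# Rough monthly cost estimate from hourly GPU prices.
# Assumes ~730 billable hours per month (24 h x ~30.4 days).

HOURS_PER_MONTH = 730

# Hourly rates in EUR, taken from the pricing table above
# (single-GPU configurations).
hourly_rates_eur = {
    "H100 Tensor Core": 2.52,
    "L40S": 1.40,
    "L4 Tensor Core": 0.75,
    "P100": 1.24,
}

def monthly_cost(hourly_rate: float, hours: int = HOURS_PER_MONTH) -> float:
    """Return the estimated monthly cost in euros, rounded to cents."""
    return round(hourly_rate * hours, 2)

for name, rate in hourly_rates_eur.items():
    print(f"{name}: ~{monthly_cost(rate):.2f} EUR/month")
```

For example, the €0.75/hour L4 Instance comes out at roughly €548/month under this assumption, matching the figure quoted above.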
Kubernetes Kapsule to optimize costs
Deploy and Scale your infrastructure with Kubernetes.
Object Storage to store and prepare your data
Deploy models with NVIDIA Triton Inference Server on Scaleway Object Storage
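A minimal launch sketch: Triton Inference Server can read its model repository directly from an S3-compatible Object Storage bucket. The bucket name (`my-models`), region (`fr-par`), and image tag are placeholder assumptions; credentials are passed through the standard AWS environment variables that Triton's S3 support expects.

```shell
# Export S3 credentials for the Object Storage bucket (placeholders).
export AWS_ACCESS_KEY_ID=<your-access-key>
export AWS_SECRET_ACCESS_KEY=<your-secret-key>

# Launch Triton with a model repository on an S3-compatible endpoint.
# For non-AWS endpoints, Triton accepts the
# s3://<endpoint>/<bucket>/<path> repository form.
docker run --gpus all --rm \
  -p 8000:8000 -p 8001:8001 -p 8002:8002 \
  -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY \
  nvcr.io/nvidia/tritonserver:24.01-py3 \
  tritonserver \
    --model-repository=s3://https://s3.fr-par.scw.cloud:443/my-models/repo
```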
Docker AI Images to speed up your tasks
Launch the container of your choice step by step.
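As a first step, a hedged sketch of launching a GPU-enabled container; the CUDA image tag is an assumption, and the host is assumed to have the NVIDIA Container Toolkit installed.

```shell
# Verify the GPU is visible from inside a container.
# --gpus all exposes every GPU on the Instance to the container.
docker run --gpus all --rm \
  nvidia/cuda:12.2.0-base-ubuntu22.04 \
  nvidia-smi
```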