
High-performance GPU Instances

High-performance GPUs on Scaleway's cloud to accelerate your model training and inference cost-effectively.

Get started with €100 free credit when you create a Business account.

Cloud GPU made for superior performance

Access NVIDIA's most powerful cloud GPU Instances and push the boundaries of performance for your AI models. These powerhouses leverage NVLink, NVIDIA's low-latency GPU-to-GPU interconnect, delivering the massive bandwidth and multi-node scalability required for the training and inference of advanced Large Language Models (LLMs).

Train & scale

B300-SXM

Ideal for superior LLM performance and real-time reasoning.

From €7.5/GPU/hour¹

Deploy this GPU
  • NVIDIA Blackwell (2024)
  • 8 GPUs NVIDIA B300-SXM
  • 288 GB VRAM (HBM3e, 7.7 TB/s)
  • 224 vCPUs (Xeon 6)
  • 3,840 GB DDR5 RAM
  • 23.3 TB ephemeral Scratch NVMe
  • 99.5% SLA
  • HDS compliant

Billed per minute
¹ Price and specs for 8 GPUs

Train & scale

H100-SXM

Ideal for LLM fine-tuning and inference of larger LLMs.

From €2.88/GPU/hour¹

Deploy this GPU
  • NVIDIA Hopper (2022)
  • 2-8 GPUs NVIDIA H100-SXM
  • 80 GB VRAM (HBM3, 3.35 TB/s)
  • 32-128 vCPUs (Sapphire Rapids)
  • 240-960 GB DDR5 RAM
  • 3.2-12.8 TB ephemeral Scratch NVMe
  • 99.5% SLA
  • HDS compliant

Billed per minute
¹ Price and specs for 8 GPUs

Best-in-class performance without breaking the bank

Scale your AI production efficiently with our versatile GPU lineup. Whether you need cost-effective fine-tuning for 7B-70B LLMs, high-throughput GenAI, or low-latency inference, these GPU Instances offer the right balance of performance and price. Drive value across a wide range of workloads at a predictable cost.
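To pick a GPU from this lineup, a common back-of-envelope check is whether a model's weights fit in a single card's VRAM. The sketch below is an illustrative heuristic (bytes per parameter by precision, plus an assumed ~20% overhead for activations and KV cache), not an official Scaleway sizing guide; the VRAM figures come from the cards on this page.

```python
# Rough VRAM sizing for LLM inference -- a back-of-envelope heuristic,
# not an official sizing guide. Overhead factor is an assumption.
BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

# Per-GPU VRAM (GB) from the lineup on this page.
GPU_VRAM_GB = {"L4": 24, "L40S": 48, "H100": 80}

def fits(model_params_b: float, precision: str, gpu: str,
         overhead: float = 1.2) -> bool:
    """True if the weights (plus ~20% assumed overhead) fit on one GPU."""
    needed_gb = model_params_b * BYTES_PER_PARAM[precision] * overhead
    return needed_gb <= GPU_VRAM_GB[gpu]

# A 7B model in int8 needs roughly 7 * 1.0 * 1.2 = 8.4 GB -> fits on an L4.
print(fits(7, "int8", "L4"))     # True
print(fits(70, "fp16", "H100"))  # False: ~168 GB, needs multiple GPUs
```

Anything that does not fit on one card points toward the multi-GPU SXM Instances, where NVLink handles the GPU-to-GPU traffic.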

Fine-tune

H100 PCIe

Ideal for 7B LLM fine-tuning and inference.

From €2.73/GPU/hour

Deploy this GPU
  • NVIDIA Hopper (2022)
  • 1-2 GPUs NVIDIA H100 (PCIe 5)
  • 80 GB VRAM (HBM2e, 2 TB/s)
  • 24-48 vCPUs (AMD Zen 4)
  • 240-480 GB DDR5 RAM
  • 3-6 TB ephemeral Scratch NVMe
  • 99.5% SLA
  • HDS compliant

Billed per minute

Inference

L40S

Ideal for graphics, inference, and GenAI.

From €1.14/GPU/hour

Deploy this GPU
  • NVIDIA Ada Lovelace (2022)
  • 1-8 GPUs NVIDIA L40S (PCIe 4)
  • 48 GB VRAM (GDDR6, 864 GB/s)
  • 8-64 vCPUs (AMD Zen 3)
  • 96-768 GB DDR4 RAM
  • 1.6-12.8 TB ephemeral Scratch NVMe
  • 99.5% SLA
  • HDS compliant

Billed per minute

Inference

L4

Ideal for image, video, and LLM inference.

From €0.75/GPU/hour

Deploy this GPU
  • NVIDIA Ada Lovelace (2022)
  • 1-8 GPUs NVIDIA L4 (PCIe 4)
  • 24 GB VRAM (GDDR6, 300 GB/s)
  • 8-64 vCPUs (AMD Zen 3)
  • 48-384 GB DDR4 RAM
  • 99.5% SLA
  • HDS compliant

Billed per minute

Use cases

Find the perfect GPU configuration for your use case and budget

| GPU | VRAM | RAM | Performance | Recommended for | Price per hour |
| --- | --- | --- | --- | --- | --- |
| 8x NVIDIA B300-SXM | 8x 288 GB | 3,840 GB DDR5 | 8x 1,979 TFLOPS | Large-scale AI model training & inference workloads | €60.00 /hour |
| 8x NVIDIA H100-SXM | 8x 80 GB | 960 GB DDR5 | 8x 1,979 TFLOPS | LLM fine-tuning & inference | €23.03 /hour |
| 4x NVIDIA H100-SXM | 4x 80 GB | 480 GB DDR5 | 4x 1,979 TFLOPS | LLM fine-tuning & inference | €11.61 /hour |
| 2x NVIDIA H100-SXM | 2x 80 GB | 240 GB DDR5 | 2x 1,979 TFLOPS | LLM fine-tuning & inference | €6.02 /hour |
| 1x NVIDIA H100 | 80 GB | 240 GB DDR5 | 1,513 TFLOPS | 7B LLM fine-tuning & inference | €2.73 /hour |
| 8x NVIDIA L40S | 8x 48 GB | 768 GB DDR4 | 8x 362 TFLOPS | 70B text-to-image model fine-tuning & inference | €11.20 /hour |
| 4x NVIDIA L40S | 4x 48 GB | 384 GB DDR4 | 4x 362 TFLOPS | 7B text-to-image model fine-tuning & inference | €5.60 /hour |
| 2x NVIDIA L40S | 2x 48 GB | 192 GB DDR4 | 2x 362 TFLOPS | GenAI (image/video) | €2.80 /hour |
| 1x NVIDIA L40S | 48 GB | 96 GB DDR4 | 362 TFLOPS | GenAI (image/video) | €1.40 /hour |
| 8x NVIDIA L4 | 8x 24 GB | 384 GB DDR4 | 8x 242 TFLOPS | 70B LLM inference | €6.00 /hour |
| 4x NVIDIA L4 | 4x 24 GB | 192 GB DDR4 | 4x 242 TFLOPS | 7B LLM inference | €3.00 /hour |
| 2x NVIDIA L4 | 2x 24 GB | 96 GB DDR4 | 2x 242 TFLOPS | Video encoding (8K) | €1.50 /hour |
| 1x NVIDIA L4 | 24 GB | 24 GB DDR4 | 242 TFLOPS | Image encoding (8K) | €0.75 /hour |

Why choose Scaleway?

Grow flexibly with a no lock-in European cloud

The most complete cloud ecosystem in Europe, from Bare Metal to serverless and everything in between.

Excellent price/performance ratio

Our wide range of services is designed to sustain your growth cost-effectively, at all stages of development.

Leverage multi-cloud solutions

Our products are compatible with market standards so that you can enjoy the freedom of no lock-in.

Sustainable by design

100% of electricity consumed in our data centers comes from renewable energy. Decommissioned hardware is securely reused & recycled.

Start free trial with a Business account