
Scaleway Documentation

Everything you need to make the most of Scaleway’s products.

Frequently searched:

Getting started

Take a console tour

Learn how to create an account and make the most of the Scaleway console.

Start tour

Understand quotas

Discover Organization quotas and learn how to increase them.

Explore quotas

Launch your first Instance

Create your first Instance and start running your applications.

Get started

Secure your server

Add an extra layer of safety to your server and resources with VPC.

Discover VPC
Discover the Scaleway console

Get an overview of the Scaleway console with our step-by-step interactive demos.

See all demos

Tutorials

Installing PgBouncer on Ubuntu/Debian

Read more

Configuring a GitHub Actions Runner on a Mac mini for enhanced CI/CD

Read more

Configuring an Nginx HTTPS Reverse Proxy on Ubuntu Bionic

Read more

Load Testing with Vegeta

Read more
API Documentation

Discover our API and DevTools and check out integration tools for Scaleway products.

Go to API Documentation

Changelog

  • Key Manager

  • GPU Instances

    We are excited to announce expanded availability for L4 Instances, our most versatile and cost-effective GPU offering.

    L4 GPUs are now available in a second Availability Zone in Paris (par-1, in addition to par-2), making it easier to build highly available inference infrastructure for your projects.

    As a reminder, L4 GPU Instances are also available in the Warsaw (waw-2) Availability Zone.

    Key features include:

    • Nvidia L4 24 GB (Ada Lovelace architecture)
    • 4th generation Tensor cores
    • 4th generation RT cores (graphics capability)
    • Available in 1, 2, 4, or 8 GPU configurations
  • GPU Instances

    Following the launch of our H100-SXM GPU Instances — delivering industry-leading conversational AI performance and accelerating large language models (LLMs) — we’re pleased to announce the availability of new 2-GPU and 4-GPU configurations.

    With NVLink GPU-to-GPU communication, the 4-GPU option unlocks even greater possibilities and higher performance for your deployments. Now available in the Paris (par-2) Availability Zone.

    Key features include:

    • Nvidia H100 SXM 80 GB (Hopper architecture)
    • 4th generation Tensor cores
    • 4th generation NVLink, which offers 900 GB/s of GPU-to-GPU interconnect
    • Transformer Engine
    • Available now in 2, 4, and 8 GPU configurations per VM (additional stock deployments ongoing)
View the full changelog
Questions?

Visit our Help Center and find the answers to your most frequent questions.

Visit Help Center