
Build, train, and deploy your AI with the sovereign European cloud

Scale your AI projects seamlessly with Scaleway, Europe’s trusted cloud provider. Explore our comprehensive and sustainable AI solutions.

Get started with €100 free credit when you create a Business account.

Why power your AI projects with Scaleway?

Boost innovation sustainably: up to 50% less power

Scaleway's DC5 (par2) is one of Europe's greenest data centers, with a PUE of 1.16 (vs. the 1.55 industry average). It slashes energy use by 30-50% compared to traditional data centers.

Keep sensitive data in Europe

Scaleway stores all of its data in Europe, so it is not subject to any extraterritorial legislation and is fully compliant with the principles of the GDPR.

Benefit from a complete cloud ecosystem

We offer the full range of cloud services, from data collection and model creation to infrastructure development and delivery to end customers, and everything in between.

Take control of your costs and optimize your resources

Optimize your performance and costs with scalable resources and transparent pricing, freeing up budget for innovation and growth.

From Infrastructure-as-a-Service to Managed solutions, we've got you covered

  • GPU Instances – Parallel computing power when you need it

    Need occasional access to powerful GPU Instances for training or inference? Our range of NVIDIA GPU Instances gives you the flexibility to scale up as needed, perfect for specific workloads without investing in permanent infrastructure (see the sanity-check sketch after the instance cards below).

H100 PCIe GPU Instance

€2.73/hour (~€1,993/month)

Accelerate your model training and inference with one of the most advanced AI chips on the market!

Learn more

RENDER GPU

€1.24/hour (~€891/month)

Dedicated Tesla P100s for all your Machine Learning & Artificial Intelligence needs.

Learn more

L4 GPU Instance

€0.75/hour (~€548/month)

Optimize the costs of your AI infrastructure with a versatile entry-level GPU.

Learn more

L40S GPU Instance

€1.40/hour (~€1,022/month)

Accelerate the next generation of AI-enabled applications with the universal L40S GPU Instance, faster than the L4 and cheaper than the H100 PCIe.

Learn more
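
As a quick sanity check once one of the GPU Instances above is provisioned, the minimal PyTorch sketch below confirms that the NVIDIA GPU is visible and times a batch of half-precision matrix multiplications. It is an illustrative snippet rather than a Scaleway-specific procedure, and it assumes a CUDA-enabled PyTorch build is already installed on the Instance.

    # Sanity check for a freshly provisioned GPU Instance (H100 PCIe, L40S, L4 or RENDER).
    # Assumes a CUDA-enabled PyTorch build is installed; nothing here is Scaleway-specific.
    import time
    import torch

    if not torch.cuda.is_available():
        raise SystemExit("No CUDA device visible - check the NVIDIA driver on the Instance.")

    print("GPU:", torch.cuda.get_device_name(0))

    # Time a batch of large FP16 matrix multiplications as a rough throughput check.
    a = torch.randn(8192, 8192, device="cuda", dtype=torch.float16)
    b = torch.randn(8192, 8192, device="cuda", dtype=torch.float16)
    torch.cuda.synchronize()

    start = time.perf_counter()
    for _ in range(10):
        a @ b
    torch.cuda.synchronize()
    elapsed = time.perf_counter() - start

    # Each 8192 x 8192 matmul costs roughly 2 * 8192^3 floating-point operations.
    tflops = (2 * 8192**3 * 10) / elapsed / 1e12
    print(f"Sustained throughput: ~{tflops:.1f} TFLOPS")

The same script runs unchanged on any of the Instances listed above, which makes it a convenient way to compare price and performance, say an L4 against an H100 PCIe, before settling on a configuration.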
  • Model-as-a-Service for simplified AI deployment

    Deploy models without the hassle of managing infrastructure. Access pre-configured, serverless endpoints featuring the most popular AI models, billed per 1M tokens, or choose hourly-billed dedicated infrastructure for more security and more predictable costs (see the example sketch after this subsection).

Managed Inference

Serve Generative AI models and answer prompts from European end users securely, on dedicated infrastructure billed by the hour.

Discover more

Generative APIs

Access pre-configured, serverless endpoints featuring the most popular AI models, all hosted in secure European data centers and priced per 1M tokens.

Sign up for the discovery
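
To make the pay-per-token model above concrete, here is a minimal Python sketch of how such a serverless endpoint is typically consumed through an OpenAI-compatible client. The base URL, model identifier and environment variable shown are assumptions made for the example, not confirmed values; check the Generative APIs documentation for the actual endpoint and model names.

    # Minimal sketch of calling a serverless Generative APIs endpoint.
    # base_url and model are illustrative assumptions - see the Generative APIs docs
    # for the real endpoint and model identifiers. Requires `pip install openai`.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://api.scaleway.ai/v1",    # assumed OpenAI-compatible endpoint
        api_key=os.environ["SCW_SECRET_KEY"],     # Scaleway API secret key
    )

    response = client.chat.completions.create(
        model="llama-3.1-8b-instruct",            # illustrative model identifier
        messages=[{"role": "user", "content": "Summarise the GDPR in one sentence."}],
        max_tokens=128,
    )

    print(response.choices[0].message.content)
    # Serverless endpoints are billed per 1M tokens; the token counts used for
    # billing are reported back in response.usage.

A Managed Inference deployment typically exposes the same chat-style interface on its own dedicated, hourly-billed endpoint, so client code like the above can usually be reused by pointing base_url at the deployment's URL.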
  • Flexible Clusters to match your evolving AI needs

    When you need scalable resources for training or developing large models, our clusters provide the flexibility to adapt to your demands with or without long-term commitments. Choose between on-demand access for short-term needs or a custom-built solution for sustained, risk-free support.

On-Demand Cluster

Rent an On-Demand Cluster for a week, with no commitment, to unlock your team's ability to train or build large models efficiently. Explore your options and find the perfect setup before committing.

Learn more

Custom-built Clusters

Design the solution you need to support your development for years to come. Choose the GPU, the storage, and the interconnect solution; we do the rest. Focus on OPEX while we handle CAPEX.

Learn more

Successful projects powered by Scaleway's infrastructure

Moshi from Kyutai

Moshi, Kyutai's revolutionary AI voice assistant, brings unprecedented vocal capabilities. Trained on Scaleway's high-performance Cluster and served on our L4 GPU Instances, Moshi excels at conveying emotions and accents with 300x codec compression. This setup enabled Moshi to process 70 different emotions and accents with ultra-low latency, allowing for seamless, human-like conversations. Thanks to this high-performance environment, Kyutai was able to achieve this breakthrough.