Scaleway Managed Inference brings sovereign, open source AI to all in Europe

Today, we’re proud to announce the public beta of Scaleway Managed Inference, a powerful new service designed to democratize access to generative AI while ensuring technological sovereignty by reducing dependence on non-European actors. Its mission is to make state-of-the-art AI models accessible to everyone - whether you're an enterprise, a startup, or a public administration - with the assurance that no data ever leaves Europe. Managed Inference is a scalable, easy-to-use product that eliminates the complexities of creating and deploying your own AI models.

Forget the tech headaches of sourcing, building, quantizing, and debugging large language models – focus on what you do best: innovating for your users. Scaleway’s inference product grows with your needs, offers a user-friendly interface, delivers lightning-fast, low-latency inference, integrates fully with our existing cloud services, and ensures top-notch data confidentiality within a sovereign European framework.

Unlike US or Asian providers, our product ensures that your data stays in Europe, adhering to the highest standards of continental data protection law, including GDPR. Furthermore, Managed Inference’s use of open source AI models means your projects remain fully transparent and accountable. The service was designed as a drop-in replacement for the OpenAI APIs, allowing a seamless transition for applications that already use its client libraries.
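To illustrate what "drop-in replacement" means in practice, here is a minimal sketch of building an OpenAI-compatible chat-completions request. The endpoint URL and model name below are placeholders for illustration only - substitute the dedicated endpoint and model shown in your Scaleway Console. Applications already using OpenAI client libraries would simply point their base URL at that endpoint.

```python
import json

# Placeholder endpoint for illustration; use the URL from your Console.
ENDPOINT = "https://<your-deployment>.example.com/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "<your-model>") -> dict:
    """Build a chat-completions payload in the OpenAI-compatible schema."""
    return {
        "model": model,
        "messages": [
            {"role": "user", "content": prompt},
        ],
        "max_tokens": 256,
    }

# The payload serializes to the same JSON body an OpenAI client would POST.
payload = build_chat_request("Summarize GDPR in one sentence.")
body = json.dumps(payload)
```

Because the request and response schemas match, switching providers is a configuration change rather than a rewrite.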

We've built this new service using our technical infrastructure already validated for AI by leaders such as Mistral and Kyutai. Our infrastructure is hosted entirely in Europe, ensuring full control and compliance with European regulations.

Since early 2024, we’ve deployed hundreds of generative AI models and delivered generations, working hand-in-hand with select partners from the public sector, corporates, AI consultancies, and some of the very best startups hosted at Scaleway. The input of France’s Ministry of the Interior, Free, Capgemini, and Veepee - just to name a few - has helped us shape this new Managed Inference service.

Now, your real-world experience and feature requests are welcomed to keep refining this service: we're looking forward to hearing from the whole Scaleway community.

By using this product in beta, you'll get early access to the latest AI models, influence its development, and receive hands-on support from the product team.

Ready to jump in? Want to refine your RAG-based application with us? Log in to the Scaleway Console and start transforming your AI apps today.

Scaleway is committed to empowering ALL European entities with cutting-edge AI tools, regardless of your AI expertise or team size. Our mission is to build a robust and sovereign AI ecosystem that benefits all Europeans.

Currently, we are focusing on dedicated endpoints and have a limited catalog of open-weight models, starting with large language models and embeddings. The service is available in the FR-PAR region first. We will be gradually adding more models and features based on feedback.

When it comes to pricing, we understand the importance of budget considerations. A key advantage of using Scaleway Managed Inference is the unlimited tokens available with each plan. No matter how extensive your usage, the price remains predictable and fixed, unlike Serverless products, where costs can fluctuate. This predictability is crucial for budget management and strategic planning.

Here’s a sneak peek:

Model | Quantization | GPU | Price | Approx. per month

More models and conditions available on this page.

We can’t wait for you to join us on this exciting journey. Thank you for your support and participation in the Scaleway Managed Inference public beta. We invite you to share your experience through #inference-beta on Slack and help shape the future of a sovereign, European AI together!

Need assistance? Check out our Quickstart Guide for all the details. Our team is ready to assist you on Slack every step of the way.
