Quickstart
Learn how to create, connect to, and delete a Managed Inference endpoint in a few steps.
Dive into seamless language processing with our easy-to-use LLM endpoints. Perfect for everything from data analysis to creative tasks, all with clear pricing.
Core concepts that give you a better understanding of Scaleway Managed Inference.
Check our guides about creating and managing Managed Inference endpoints.
Guides to help you choose a Managed Inference endpoint and understand pricing and advanced configuration.
Learn how to create and manage your Scaleway Managed Inference endpoints through the API.
All new public endpoints for Managed Inference deployments use the scaleway.com domain. Legacy endpoints on the scw.cloud domain will remain functional until their deployment is deleted.
You can now deploy your own secure Pixtral-12b-2409 model with Scaleway Managed Inference.
With a privacy-focused, fully managed stack, this solution opens up new possibilities for applications that require both textual and visual understanding, while keeping your images secure.
Read our dedicated documentation to get started with Pixtral.
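As an illustrative sketch, a request to a vision-capable model such as Pixtral-12b-2409 follows the OpenAI-style multimodal message format: image parts are passed alongside text as `image_url` content blocks. The image URL below is a placeholder, and the model name is an assumption about how the deployment is labeled.

```python
import json

# Placeholder image URL -- replace with an image reachable by your deployment.
IMAGE_URL = "https://example.com/receipt.png"

# OpenAI-style multimodal message: a text part plus an image part,
# as accepted by vision-capable models like Pixtral-12b-2409.
message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What is written on this receipt?"},
        {"type": "image_url", "image_url": {"url": IMAGE_URL}},
    ],
}

# Chat Completions request body; "pixtral-12b-2409" is assumed to be the
# model name exposed by your deployment.
body = {"model": "pixtral-12b-2409", "messages": [message]}
print(json.dumps(body, indent=2))
```

Send this body as JSON to your deployment's OpenAI-compatible `/v1/chat/completions` route with your usual authentication headers.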
All AI models can now reliably generate JSON output when required. The Chat Completions API supports the response_format parameter, which can be set to either json_object or json_schema, following OpenAI's specifications exactly.
Refer to our documentation for code examples, including usage with Pydantic models or manual schema definitions.
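As a minimal sketch of manual schema definition (the dedicated documentation also covers Pydantic models), here is how a json_schema response format can be attached to a Chat Completions request body. The model name and schema are placeholders, not values from a real deployment.

```python
import json

# Hand-written JSON schema describing the structure the model must return.
schema = {
    "name": "invoice",
    "schema": {
        "type": "object",
        "properties": {
            "total": {"type": "number"},
            "currency": {"type": "string"},
        },
        "required": ["total", "currency"],
    },
}

# Chat Completions request body: response_format with type "json_schema"
# constrains the model's output to match the schema above.
body = {
    "model": "my-deployed-model",  # assumption: your deployment's model name
    "messages": [
        {
            "role": "user",
            "content": "Extract the invoice total from: 'Total due: 42.50 EUR'",
        }
    ],
    "response_format": {"type": "json_schema", "json_schema": schema},
}
print(json.dumps(body, indent=2))
```

For looser constraints, setting response_format to `{"type": "json_object"}` only guarantees syntactically valid JSON, without enforcing a particular structure.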
Visit our Help Center to find answers to your most frequent questions.