Using the Chat API
Scaleway Generative APIs are designed as a drop-in replacement for the OpenAI APIs. If you have an LLM-driven application that uses one of OpenAI's client libraries, you can easily configure it to point to Scaleway's Chat API, and get your existing applications running using open-weight instruct models hosted at Scaleway.
Create chat completion
Creates a model response for the given chat conversation.
Request sample:
```bash
curl --request POST \
  --url https://api.scaleway.ai/v1/chat/completions \
  --header "Authorization: Bearer ${SCW_SECRET_KEY}" \
  --header "Content-Type: application/json" \
  --data '{
    "model": "llama-3.1-8b-instruct",
    "messages": [
      {
        "role": "system",
        "content": "<string>"
      },
      {
        "role": "user",
        "content": "<string>"
      }
    ],
    "max_tokens": integer,
    "temperature": float,
    "top_p": float,
    "presence_penalty": float,
    "stop": "<string>",
    "stream": boolean
  }'
```
Headers
For information about the required headers, see the Using Generative APIs page.
Body
Required parameters
| Param | Type | Description |
|---|---|---|
| messages | array of objects | A list of messages comprising the conversation so far. |
| model | string | The name of the model to query. |
Our Chat API is OpenAI compatible. Refer to OpenAI’s API reference for detailed information on usage.
Supported parameters
- `temperature`
- `top_p`
- `max_tokens`
- `stream`
- `stream_options`
- `presence_penalty`
- `response_format` (for more information, see How to use structured outputs)
- `logprobs`
- `stop`
- `seed`
- `tools` (for more information, see How to use function calling)
- `tool_choice` (for more information, see How to use function calling)
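As a sketch of how several of these parameters combine in one request body, the snippet below builds a payload that requests JSON output via `response_format` alongside `temperature`, `seed`, and `stop`. The schema and values are illustrative, and the `json_schema` shape follows the OpenAI convention; see How to use structured outputs for Scaleway-specific details:

```python
import json

# Illustrative request body combining several supported parameters.
payload = {
    "model": "llama-3.1-8b-instruct",
    "messages": [
        {"role": "user", "content": "Extract the city and country from: 'Paris, France'."}
    ],
    "temperature": 0.2,  # low randomness suits extraction tasks
    "seed": 42,          # best-effort reproducibility across calls
    "stop": "\n\n",      # stop generating at the first blank line
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "location",
            "schema": {
                "type": "object",
                "properties": {
                    "city": {"type": "string"},
                    "country": {"type": "string"},
                },
                "required": ["city", "country"],
            },
        },
    },
}

# The body must serialize to valid JSON before being POSTed
# to https://api.scaleway.ai/v1/chat/completions.
body = json.dumps(payload)
```

Any parameter from the supported list above can be added to the same payload; unsupported parameters are listed below.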
Unsupported parameters
- `frequency_penalty`
- `n`
- `top_logprobs`
- `logit_bias`
- `user`
If you have a use case requiring one of these unsupported parameters, contact us via Slack using the #ai channel.
Going further
- Python code examples to query text models using Scaleway's Chat API
- How to use structured outputs with the `response_format` parameter
- How to use function calling with the `tools` and `tool_choice` parameters