Using Chat API
Scaleway Generative APIs are designed as a drop-in replacement for the OpenAI APIs. If you have an LLM-driven application that uses one of OpenAI's client libraries, you can easily configure it to point to the Scaleway Chat API and get your existing application running on open-weight instruct models hosted at Scaleway.
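For example, an application built on OpenAI's Python client only needs a different base URL and API key. The sketch below is a minimal illustration, assuming the `openai` package is installed and your Scaleway secret key is exported as `SCW_SECRET_KEY`; the base URL and model name are taken from the request sample further down this page.

```python
# Minimal sketch: point the OpenAI Python client at the Scaleway Chat API.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.scaleway.ai/v1",   # Scaleway Chat API endpoint
    api_key=os.environ["SCW_SECRET_KEY"],    # Scaleway secret key used as the bearer token
)

response = client.chat.completions.create(
    model="llama-3.1-8b-instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say hello in one sentence."},
    ],
)
print(response.choices[0].message.content)
```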
Create chat completion
Creates a model response for the given chat conversation.
Request sample:
```bash
curl --request POST \
  --url https://api.scaleway.ai/v1/chat/completions \
  --header "Authorization: Bearer ${SCW_SECRET_KEY}" \
  --header "Content-Type: application/json" \
  --data '{
    "model": "llama-3.1-8b-instruct",
    "messages": [
      {"role": "system", "content": "<string>"},
      {"role": "user", "content": "<string>"}
    ],
    "max_tokens": integer,
    "temperature": float,
    "top_p": float,
    "presence_penalty": float,
    "stop": "<string>",
    "stream": boolean
  }'
```
Headers
Find the required headers on this page.
Body
Required parameters
| Param | Type | Description |
|---|---|---|
| messages* | array of objects | A list of messages comprising the conversation so far. |
| model* | string | The name of the model to query. |
Our Chat API is OpenAI-compatible. Refer to OpenAI's API reference for more detailed information on usage.
Supported parameters
- temperature
- top_p
- max_tokens
- stream
- stream_options
- presence_penalty
- response_format
- logprobs
- stop
- seed
- tools
- tool_choice
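As an illustration of how several of the supported parameters combine, the hedged sketch below streams a completion with `temperature`, `top_p`, `max_tokens`, `stop`, and `seed` set. It reuses the client configured in the earlier example; the parameter values are illustrative, not recommendations.

```python
# Hedged sketch: streaming completion with several supported sampling parameters.
stream = client.chat.completions.create(
    model="llama-3.1-8b-instruct",
    messages=[{"role": "user", "content": "List three French cheeses."}],
    max_tokens=256,
    temperature=0.7,
    top_p=0.9,
    stop=["\n\n"],
    seed=42,          # makes sampling more reproducible across calls
    stream=True,      # tokens are returned incrementally as chunks
)

for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```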
Unsupported parameters
- frequency_penalty
- n
- top_logprobs
- logit_bias
- user
If you have a use case requiring one of these unsupported parameters, please contact us on Slack in the #ai channel.
Going further
- Python code examples to query text models using Scaleway’s Chat API.
- How to use structured outputs with the `response_format` parameter
- How to use function calling with `tools` and `tool_choice`
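As a minimal sketch of the OpenAI-compatible function-calling format using the `tools` and `tool_choice` parameters, the example below defines a hypothetical `get_weather` function (not a Scaleway-provided tool); see the pages linked above for full details.

```python
# Hedged sketch of function calling; get_weather is a hypothetical example tool.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="llama-3.1-8b-instruct",
    messages=[{"role": "user", "content": "What is the weather in Paris?"}],
    tools=tools,
    tool_choice="auto",   # let the model decide whether to call the tool
)
print(response.choices[0].message.tool_calls)
```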