Note
Go further with Python code examples to query text models using Scaleway’s Chat API.
Scaleway Generative APIs are designed as a drop-in replacement for the OpenAI APIs. If you have an LLM-driven application that uses one of OpenAI’s client libraries, you can easily configure it to point to the Scaleway Chat API and run your existing application against open-weight instruct models hosted at Scaleway.
Creates a model response for the given chat conversation.
Request sample:
```bash
curl --request POST \
  --url https://api.scaleway.ai/v1/chat/completions \
  --header "Authorization: Bearer ${SCW_SECRET_KEY}" \
  --header "Content-Type: application/json" \
  --data '{
    "model": "llama-3.1-8b-instruct",
    "messages": [
      {"role": "system", "content": "<string>"},
      {"role": "user", "content": "<string>"}
    ],
    "max_tokens": <integer>,
    "temperature": <float>,
    "top_p": <float>,
    "presence_penalty": <float>,
    "stop": "<string>",
    "stream": <boolean>
  }'
```
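The same request can be made from Python using only the standard library (a sketch; the prompt and generation values are illustrative, and the request is only sent when `SCW_SECRET_KEY` is present in the environment):

```python
# Sketch of the request sample above, built with Python's standard
# library. The payload mirrors the curl example; illustrative values
# stand in for the <integer>/<float>/<boolean> placeholders.
import json
import os
import urllib.request

payload = {
    "model": "llama-3.1-8b-instruct",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Say hello."},
    ],
    "max_tokens": 64,
    "temperature": 0.7,
    "top_p": 0.9,
    "stream": False,
}

secret_key = os.environ.get("SCW_SECRET_KEY")
if secret_key:
    req = urllib.request.Request(
        "https://api.scaleway.ai/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {secret_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```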
Find the required headers on this page.
| Param | Type | Description |
|---|---|---|
| messages* | array of objects | A list of messages comprising the conversation so far. |
| model* | string | The name of the model to query. |
Our Chat API is OpenAI-compatible. Refer to OpenAI’s API reference for detailed usage information.
If you have a use case requiring one of these unsupported parameters, please contact us in the #ai channel on Slack.