
How to query reasoning models

Scaleway's Generative APIs service allows users to interact with language models benefiting from additional reasoning capabilities.

A reasoning model is a language model that is capable of carrying out multiple inference steps and systematically verifying intermediate results before producing answers. You can specify how much effort it should put into reasoning via dedicated parameters, and access reasoning content in its outputs. Even with default parameters, such models are designed to perform better on reasoning tasks like maths and logic problems than non-reasoning language models.

Language models supporting the reasoning feature include gpt-oss-120b. See Supported Models for a full list.

You can interact with reasoning models either through the console playground or programmatically via the API, as described below.

Before you start

To complete the actions presented below, you must have:

  • A Scaleway account logged into the console
  • Owner status or IAM permissions allowing you to perform actions in the intended Organization
  • A valid API key for API authentication
  • Python 3.7+ installed on your system

Querying reasoning language models via the playground

Accessing the playground

Scaleway provides a web playground for instruct-based models hosted on Generative APIs.

  1. Navigate to Generative APIs under the AI section of the Scaleway console side menu. The list of models you can query displays.
  2. Click the name of the chat model you want to try. Alternatively, click the more icon next to the chat model, then click Try model in the menu. Ensure that you choose a model with reasoning capabilities.

The web playground displays.

Using the playground

  1. Enter a prompt at the bottom of the page, or use one of the suggested prompts in the conversation area.
  2. Edit the parameters listed in the right column, for example the default temperature for more or less randomness in the outputs.
  3. Switch models at the top of the page, to observe the capabilities of chat models offered via Generative APIs.
  4. Click View code to get code snippets configured according to your settings in the playground.
Note

You cannot currently set values for parameters such as reasoning_effort, or access reasoning metadata in the model's output, via the console playground. Query the models programmatically as shown below in order to access the full reasoning feature set.

Querying reasoning language models via API

You can query models programmatically using your favorite tools or languages. In the example that follows, we will use the OpenAI Python client.

Chat Completions API or Responses API?

Both the Chat Completions API and the Responses API allow you to access and control reasoning for supported models. Scaleway's support for the Responses API is currently in beta.

Note, however, that the Responses API was introduced in part to better support reasoning workflows, among other tasks. It provides richer support for reasoning than Chat Completions, for example by including chain-of-thought reasoning content in its responses.

For more information on Chat Completions versus Responses API, see the information provided in the querying language models documentation.

Installing the OpenAI SDK

Install the OpenAI SDK using pip:

pip install openai

Initializing the client

Initialize the OpenAI client with your base URL and API key:

from openai import OpenAI

# Initialize the client with your base URL and API key
client = OpenAI(
    base_url="https://api.scaleway.ai/v1",  # Scaleway's Generative APIs service URL
    api_key="<SCW_SECRET_KEY>"  # Your unique API secret key from Scaleway
)

Generating a chat completion with reasoning

You can now create a chat completion with reasoning, using either the Chat Completions API or the Responses API.
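As a minimal sketch using the Chat Completions API (the parameter names follow the OpenAI Chat Completions schema, and the prompt and helper function are purely illustrative), you can reuse the `client` initialized above and set `reasoning_effort` to `"low"`, `"medium"`, or `"high"`:

```python
def build_reasoning_request(prompt: str, effort: str = "medium") -> dict:
    # Assemble kwargs for a Chat Completions call to a reasoning model.
    # reasoning_effort trades reasoning depth (and token cost) for speed.
    return {
        "model": "gpt-oss-120b",
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": effort,  # "low" | "medium" | "high"
    }

# Requires the client initialized above and a valid API key; uncomment to run:
# response = client.chat.completions.create(
#     **build_reasoning_request("Is 3511 prime?", effort="low")
# )
# print(response.choices[0].message.content)  # the final answer
```

Exactly where the reasoning content appears in the response depends on the model and API used; check the response object returned for your model.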

Exceptions and legacy models

Some legacy models such as deepseek-r1-distill-llama-70b do not output reasoning data as described above, but make it available in the content field of the response inside special tags, as shown in the example below:

response.content = "<think> The user asks for questions about mathematics (...) </think>  Answer is 42."

The reasoning content is inside the <think>...</think> tags, and you can parse the response accordingly to access such content. There is, however, a known bug that can lead the model to omit the opening <think> tag, so we suggest taking care when parsing such outputs.
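A hedged sketch of such parsing, tolerating the known missing-opening-tag bug (the helper function name is our own, not part of any SDK):

```python
import re

def split_reasoning(content: str) -> tuple[str, str]:
    """Split a legacy model's output into (reasoning, answer).

    Handles both '<think>...</think> answer' and the known bug where
    the opening <think> tag is omitted ('... </think> answer').
    """
    match = re.search(r"(?:<think>)?(.*?)</think>\s*(.*)", content, re.DOTALL)
    if match:
        return match.group(1).strip(), match.group(2).strip()
    return "", content.strip()  # no reasoning tags present

print(split_reasoning(
    "<think> The user asks for questions about mathematics (...) </think>  Answer is 42."
))
# ('The user asks for questions about mathematics (...)', 'Answer is 42.')
```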

Note that the reasoning_effort parameter is not available for this model.

Impact on token generation

Reasoning models generate reasoning tokens, which are billable. These generally appear in the model's output as part of the reasoning content. To limit the generation of reasoning tokens, you can adjust the reasoning effort and max completion/output tokens parameters. Alternatively, use a non-reasoning model to avoid generating reasoning tokens and the associated billing.
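If the API reports reasoning tokens separately (the OpenAI usage schema, for example, exposes `usage.completion_tokens_details.reasoning_tokens`), you can estimate what share of your billable output tokens was spent on reasoning. A hypothetical sketch, with made-up numbers:

```python
def reasoning_share(completion_tokens: int, reasoning_tokens: int) -> float:
    # Fraction of billable completion tokens spent on reasoning
    # rather than on the final answer.
    if completion_tokens <= 0:
        return 0.0
    return reasoning_tokens / completion_tokens

# In practice the counts come from response.usage; these values are illustrative:
print(reasoning_share(completion_tokens=400, reasoning_tokens=300))  # 0.75
```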

Still need help?

Create a support ticket