A batch is an asynchronous process that executes multiple requests. It takes a file containing the requests to perform as input, and writes the results to an output file.
Files are stored within Scaleway Object Storage.
See How to use batch processing for code snippets using the openai Python client.
List batches
List batches including their properties and status.
path Parameters
project_id: The ID of the Project you want to target. If this value is not provided, your default Project will be used.
Specifying this value allows you to limit access through IAM policies, or to allocate consumption and billing to a specific project.
query Parameters
after: Pagination cursor; its value should be a batch UUID. When the response spans multiple pages, provide the last batch UUID obtained in the previous request to fetch the next page.
limit: Maximum number of batches to retrieve.
List batches › Responses
object: Type of response object, always set to list.
List of batches.
first_id: UUID of the first batch in the response.
last_id: UUID of the last batch in the response.
has_more: Whether there are more results to retrieve that were not returned by this query.
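The `after` cursor and `has_more` flag together let a client walk every page. A minimal pagination sketch in plain Python, with a stubbed fetch function standing in for the actual HTTP call (the real call would hit the list endpoint with the `after` and `limit` query parameters above):

```python
def iter_batches(fetch_page, limit=50):
    """Yield every batch, following the `after` cursor until has_more is False.

    fetch_page(after, limit) must return a dict shaped like the list
    response above: {"object": "list", "data": [...], "first_id": ...,
    "last_id": ..., "has_more": ...}.
    """
    after = None
    while True:
        page = fetch_page(after=after, limit=limit)
        yield from page["data"]
        if not page["has_more"]:
            break
        after = page["last_id"]  # cursor for the next request


# Stub standing in for the real API call, for illustration only.
def fake_fetch(after=None, limit=2):
    batches = [{"id": f"b{i}"} for i in range(5)]
    start = 0 if after is None else next(
        i + 1 for i, b in enumerate(batches) if b["id"] == after)
    chunk = batches[start:start + limit]
    return {
        "object": "list",
        "data": chunk,
        "first_id": chunk[0]["id"] if chunk else None,
        "last_id": chunk[-1]["id"] if chunk else None,
        "has_more": start + limit < len(batches),
    }


ids = [b["id"] for b in iter_batches(fake_fetch, limit=2)]
# ids == ["b0", "b1", "b2", "b3", "b4"]
```

Passing `last_id` back as `after` is what makes pagination resumable: each request picks up exactly where the previous page ended.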
Create a batch
Process multiple requests asynchronously in batch.
path Parameters
project_id: The ID of the Project you want to target. If this value is not provided, your default Project will be used.
Specifying this value allows you to limit access through IAM policies, or to allocate consumption and billing to a specific project.
Create a batch › Request Body
completion_window: Time range during which the batch should be processed. Currently only 24h is supported.
endpoint: Path used to process requests in the batch. Currently /v1/chat/completions, /v1/responses, /v1/embeddings and /v1/audio/transcriptions are supported.
input_file_id: URL of the file in Scaleway Object Storage. The file should contain all requests to process, in JSONL format.
Results will be stored within the same bucket and folder, and named (unknown)-output.jsonl and (unknown)-error.jsonl.
See How to use batch processing for code snippets using the openai Python client.
output_expires_after: Expiration rules for the output and error files generated by the batch.
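The file referenced by input_file_id holds one request per line. A minimal sketch of building such a file, assuming the OpenAI-compatible batch input shape (custom_id, method, url, body); the field names and the model name used here are illustrative, not confirmed by this reference:

```python
import json

# Two chat-completion requests in the assumed OpenAI-compatible batch
# input shape; "my-model" is a placeholder model name.
requests = [
    {
        "custom_id": f"request-{i}",   # echoed back in the output file
        "method": "POST",
        "url": "/v1/chat/completions",  # must match the batch's endpoint
        "body": {
            "model": "my-model",
            "messages": [{"role": "user", "content": prompt}],
        },
    }
    for i, prompt in enumerate(["Hello!", "Summarize JSONL in one line."])
]

# JSONL: one JSON object per line, no enclosing array.
jsonl = "\n".join(json.dumps(r) for r in requests)
with open("batch-input.jsonl", "w") as f:
    f.write(jsonl + "\n")
```

The resulting batch-input.jsonl would then be uploaded to Object Storage, and its URL passed as input_file_id when creating the batch.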
Create a batch › Responses
id: UUID of the batch.
object: Type of batch object, always set to batch.
endpoint: Path used to process requests in the batch.
model: Model used to process the batch.
Error object
input_file_id: URL of the input file.
completion_window: Time range during which the batch should be processed.
status: Status of the batch.
output_file_id: URL of the output file.
error_file_id: URL of the error file.
created_at: Timestamp when the batch was created (Unix format, in seconds).
in_progress_at: Timestamp when the batch processing started (Unix format, in seconds).
expires_at: Timestamp when the batch will expire (Unix format, in seconds).
finalizing_at: Timestamp when the batch started finalizing (Unix format, in seconds).
completed_at: Timestamp when the batch was completed (Unix format, in seconds).
failed_at: Timestamp when the batch failed (Unix format, in seconds).
expired_at: Timestamp when the batch expired (Unix format, in seconds).
cancelling_at: Timestamp when the batch started cancelling (Unix format, in seconds).
cancelled_at: Timestamp when the batch was cancelled (Unix format, in seconds).
Number of requests by status.
Usage information generated by this request, either in tokens or duration depending on how the model is billed.
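A hedged sketch of the create call over plain HTTP using only the standard library; the base URL, secret, and input-file URL are placeholders, and the exact path may differ from your deployment:

```python
import json
import urllib.request

BASE_URL = "https://api.example.com/v1"  # placeholder; use your provider's base URL
API_KEY = "SCW_SECRET_KEY"               # placeholder secret

# Request body fields from the "Create a batch" section above.
payload = {
    "input_file_id": "https://my-bucket.example.com/batch-input.jsonl",  # placeholder
    "endpoint": "/v1/chat/completions",
    "completion_window": "24h",  # only supported value
}

req = urllib.request.Request(
    f"{BASE_URL}/batches",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# Sending is left out here; in practice:
# with urllib.request.urlopen(req) as resp:
#     batch = json.load(resp)
#     print(batch["id"], batch["status"])
```

The response is the batch object described above; its id is what subsequent Get and Cancel calls take as batch_id.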
Get a batch
Retrieve a batch's properties and status.
path Parameters
project_id: The ID of the Project you want to target. If this value is not provided, your default Project will be used.
Specifying this value allows you to limit access through IAM policies, or to allocate consumption and billing to a specific project.
batch_id: UUID of the batch.
Get a batch › Responses
id: UUID of the batch.
object: Type of batch object, always set to batch.
endpoint: Path used to process requests in the batch.
model: Model used to process the batch.
Error object
input_file_id: URL of the input file.
completion_window: Time range during which the batch should be processed.
status: Status of the batch.
output_file_id: URL of the output file.
error_file_id: URL of the error file.
created_at: Timestamp when the batch was created (Unix format, in seconds).
in_progress_at: Timestamp when the batch processing started (Unix format, in seconds).
expires_at: Timestamp when the batch will expire (Unix format, in seconds).
finalizing_at: Timestamp when the batch started finalizing (Unix format, in seconds).
completed_at: Timestamp when the batch was completed (Unix format, in seconds).
failed_at: Timestamp when the batch failed (Unix format, in seconds).
expired_at: Timestamp when the batch expired (Unix format, in seconds).
cancelling_at: Timestamp when the batch started cancelling (Unix format, in seconds).
cancelled_at: Timestamp when the batch was cancelled (Unix format, in seconds).
Number of requests by status.
Usage information generated by this request, either in tokens or duration depending on how the model is billed.
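Since batches run asynchronously, the usual pattern is to poll the Get endpoint until the batch reaches a terminal state. A sketch of that loop, with a stub standing in for the retrieve call; the set of terminal statuses is inferred from the timestamp fields above and is an assumption:

```python
import time

# Assumed terminal statuses, inferred from the completed_at, failed_at,
# expired_at and cancelled_at fields of the batch object.
TERMINAL = {"completed", "failed", "expired", "cancelled"}


def wait_for_batch(get_batch, batch_id, interval=0.0, max_polls=100):
    """Poll get_batch(batch_id) until the batch's status is terminal."""
    for _ in range(max_polls):
        batch = get_batch(batch_id)
        if batch["status"] in TERMINAL:
            return batch
        time.sleep(interval)
    raise TimeoutError(f"batch {batch_id} not done after {max_polls} polls")


# Stub: pretends the batch finishes on the third poll.
_statuses = iter(["validating", "in_progress", "completed"])


def fake_get(batch_id):
    return {"id": batch_id, "status": next(_statuses)}


done = wait_for_batch(fake_get, "batch-123")
# done["status"] == "completed"
```

In real use, `interval` would be set to several seconds or minutes, since batches may take up to the full completion_window to process.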
Cancel a batch
When a batch is cancelled, results that were already processed are stored in the corresponding output.jsonl and error.jsonl files, while the remaining requests will not be processed.
path Parameters
project_id: The ID of the Project you want to target. If this value is not provided, your default Project will be used.
Specifying this value allows you to limit access through IAM policies, or to allocate consumption and billing to a specific project.
batch_id: UUID of the batch.
Cancel a batch › Responses
id: UUID of the batch.
object: Type of batch object, always set to batch.
endpoint: Path used to process requests in the batch.
model: Model used to process the batch.
Error object
input_file_id: URL of the input file.
completion_window: Time range during which the batch should be processed.
status: Status of the batch.
output_file_id: URL of the output file.
error_file_id: URL of the error file.
created_at: Timestamp when the batch was created (Unix format, in seconds).
in_progress_at: Timestamp when the batch processing started (Unix format, in seconds).
expires_at: Timestamp when the batch will expire (Unix format, in seconds).
finalizing_at: Timestamp when the batch started finalizing (Unix format, in seconds).
completed_at: Timestamp when the batch was completed (Unix format, in seconds).
failed_at: Timestamp when the batch failed (Unix format, in seconds).
expired_at: Timestamp when the batch expired (Unix format, in seconds).
cancelling_at: Timestamp when the batch started cancelling (Unix format, in seconds).
cancelled_at: Timestamp when the batch was cancelled (Unix format, in seconds).
Number of requests by status.
Usage information generated by this request, either in tokens or duration depending on how the model is billed.
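A hedged sketch of the cancel call over plain HTTP; the base URL and secret are placeholders, and the path shown follows the OpenAI-compatible convention (POST /v1/batches/{id}/cancel), which is an assumption rather than something stated in this reference:

```python
import urllib.request

BASE_URL = "https://api.example.com/v1"  # placeholder base URL
API_KEY = "SCW_SECRET_KEY"               # placeholder secret
batch_id = "batch-123"                   # placeholder batch UUID

# Assumed OpenAI-compatible cancel path: POST /v1/batches/{id}/cancel.
req = urllib.request.Request(
    f"{BASE_URL}/batches/{batch_id}/cancel",
    headers={"Authorization": f"Bearer {API_KEY}"},
    method="POST",
)

# Sending is omitted here; in practice urllib.request.urlopen(req) would
# return the batch object, with status moving to cancelling and then
# cancelled (see cancelling_at and cancelled_at above).
```

Requests already processed before the cancellation remain available in the output and error files, as described above.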