Our scalable object storage service is based on the S3 protocol, the de facto standard for object storage, created by Amazon for its own storage service. Object Storage officially supports a subset of S3. This cloud-based storage is designed to be highly available and durable.
Yes, Object Storage is available in our nl-ams (Amsterdam, The Netherlands) and fr-par (Paris, France) regions.
You can make up to 1,000 requests per second to write to and read from a bucket. If you need to increase that limit, contact our support team.
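As a sketch of how a client might stay within this limit, here is a generic retry loop with exponential backoff and jitter. `ThrottledError` is a placeholder, standing in for whatever exception your S3 SDK raises on a throttled (429 or 503 "SlowDown") response:

```python
import random
import time

class ThrottledError(Exception):
    """Placeholder for your SDK's throttling exception (e.g. a 503 SlowDown)."""

def with_backoff(request_fn, max_retries=5, base_delay=0.5):
    """Call request_fn, retrying with exponential backoff plus jitter
    whenever ThrottledError is raised."""
    for attempt in range(max_retries):
        try:
            return request_fn()
        except ThrottledError:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error to the caller
            # Sleep 0.5 s, 1 s, 2 s, ... plus random jitter so that many
            # clients do not all retry at the same instant.
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

The jitter term matters when many workers hit the limit at once: without it, they all back off and retry in lockstep, reproducing the spike.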
The service is accessible through our console for simple operations.
We provide an S3-compatible API for programmatic access or for usage with any compatible software.
We also provide an integrated UI in the console for convenience. However, browsing effectively unlimited storage from a web interface has inherent limits, so we made some engineering trade-offs. For these reasons, we recommend using dedicated tools such as s3cmd to manage large data sets.
You can store any kind of data, structured or not, in any format (documents, images, videos, databases, binaries, archives, backups, web content and assets…).
You can monitor your Object Storage consumption directly from the Scaleway Console. Metrics such as storage and bucket usage help you estimate your consumption.
We currently support a subset of S3 operations. The exhaustive list is available on the S3 Object Storage API page.
There is no limit on the total volume of data in a bucket, or on the number of objects it can contain. Nevertheless, for the best possible experience, we advise splitting your data across several buckets once a bucket holds more than 500,000 objects.
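One way to follow this advice programmatically is to shard objects across several buckets by hashing the object key. This is a sketch, not an official feature; the bucket names are illustrative:

```python
import hashlib

def shard_bucket(key: str, base_name: str, num_shards: int = 4) -> str:
    """Deterministically map an object key to one of num_shards buckets,
    e.g. media-0 .. media-3. Every client computes the same shard for the
    same key, so objects can always be found again."""
    digest = hashlib.sha256(key.encode("utf-8")).digest()
    shard = int.from_bytes(digest[:4], "big") % num_shards
    return f"{base_name}-{shard}"
```

Because the hash spreads keys roughly evenly, four shards keep each bucket below the 500,000-object mark for around two million objects in total.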
You can create up to 200 buckets per account across all regions, and there is no limit on storage usage. As for the rate limit, contact our support team if you need it increased.
Bucket names are unique across our whole platform. This means that if a bucket with a given name already exists in one region, that name cannot be reused in another.
The "Bucket already exists" error message is triggered when the name you intended to use for your bucket is already reserved by another user (or by yourself).
When a bucket is deleted, its name becomes available again, and anybody can reuse it on a first-come, first-served basis.
Note: We recommend using non-conspicuous names for your buckets.
Bucket names must be unique and must comply with S3 bucket naming conventions.
For buckets whose name contains a dot (.), you must use the path-style (canonical) URL. The subdomain form will not work: for a bucket named assets.personalproject, for example, https://assets.personalproject.s3.nl-ams.scw.cloud fails because the wildcard SSL certificate *.s3.nl-ams.scw.cloud only covers a single subdomain level.
In addition, when accessing buckets over HTTPS, the wildcard SSL certificate only matches bucket names that do not contain periods. Using dots in bucket names therefore impacts access to your bucket via HTTPS; we recommend using dashes instead.
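Such constraints can be checked up front. The sketch below encodes the common S3 naming rules (3 to 63 characters; lowercase letters, digits, hyphens and dots; starting and ending with a letter or digit) plus the no-dots recommendation above; treat it as illustrative rather than an authoritative validator:

```python
import re

# Common S3 bucket-naming rules (check the provider's documentation for
# the authoritative list): 3-63 chars, lowercase letters, digits,
# hyphens and dots, starting and ending with a letter or digit.
_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name: str) -> bool:
    return bool(_NAME_RE.match(name)) and ".." not in name

def is_https_friendly(name: str) -> bool:
    """Names without dots work with the *.s3.<region>.scw.cloud wildcard cert."""
    return is_valid_bucket_name(name) and "." not in name
```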
Bucket ownership is not transferable. A bucket is owned by the account which created it.
Unfortunately, recovering a deleted bucket is not possible. Recovering the objects of a deleted bucket is not possible either, even if versioning was enabled on that bucket.
The Object Storage service was not designed to be used as a CDN and is not fine-tuned for this kind of usage, so we do not recommend it.
You can still use Object Storage as a backend for a CDN. All you need is to put a caching layer in front of it.
You can access your files via HTTPS by creating a public link from the control panel. Click on the file name and enable the public link by clicking on the corresponding button.
You can use Object Storage as a file system with FUSE-based projects such as s3fs, s3ql, or goofys.
Yes, this is allowed with tools like s3cmd, but only for the cache-control header or headers prefixed by x-amz-meta-.
In order to make an object public, click on Visibility in the drop-down menu, set it to Public and save the setting.
If you upload a file by using the CLI, you can make it public by using the parameter:
Note: Objects can only be made public or private one at a time. This action cannot be performed on a whole bucket, which is private by default. Allowing public access on a bucket only enables a public listing of the objects stored in it.
Object Storage supports multipart upload. We recommend uploading in chunks, with a limit of 1,000 parts per upload and 5 TB per object.
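Given these limits, the minimum part size for a given object follows from a ceiling division. A small helper for illustration (check your tool's own part-size bounds as well):

```python
def min_part_size(object_size: int, max_parts: int = 1000) -> int:
    """Smallest part size (in bytes) that fits object_size into at most
    max_parts parts, computed with ceiling division."""
    return -(-object_size // max_parts)
```

For example, uploading a 10 GiB object within the 1,000-part limit requires parts of at least 10,737,419 bytes, i.e. just over 10 MiB.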
What you’re asking for are federated tokens. We don’t support these for the moment as we want to keep tokens simple.
You can create a token in the Credentials section of the management console. To connect to Object Storage, use the Access-Key and Secret-Key displayed during token creation.
Note: The Secret-Key is only displayed once. Take note of it and keep it safe, as it cannot be recovered if you lose it.
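As an example, a minimal ~/.s3cfg for s3cmd using these credentials might look like the following. The endpoint assumes the fr-par region, and the key values are placeholders to substitute with your own:

```ini
# Example ~/.s3cfg -- substitute the Access-Key and Secret-Key shown
# at token creation; adjust the region in host_base/host_bucket.
[default]
access_key = <ACCESS-KEY>
secret_key = <SECRET-KEY>
host_base = s3.fr-par.scw.cloud
host_bucket = %(bucket)s.s3.fr-par.scw.cloud
use_https = True
```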
Object Storage is designed to provide 99.999% durability of objects over a given year. This durability level corresponds to an average annual expected loss of 0.001% of objects. In addition, the service is covered by SLAs to ensure data availability and durability.
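To put that figure in perspective, 0.001% of one million objects works out to an expected loss of about ten objects per year:

```python
# 99.999% durability implies an expected annual loss of 0.001% of objects.
durability = 0.99999
objects = 1_000_000
expected_annual_loss = objects * (1 - durability)
# roughly 10 objects per million per year
```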
As with any environment, the best practice is to have a backup and to put in place safeguards against malicious or accidental deletion.
A full list of features is available on the S3 API operations page.