Our scalable object storage service is based on the S3 protocol, the de facto standard for object storage created by Amazon for its own storage service. Object Storage officially supports a subset of S3. This cloud-based storage is designed to be highly available and durable.
Object Storage is currently available in our nl-ams (Amsterdam, The Netherlands) Availability Zone and will be available in other geographical locations soon.
Object Storage general availability is scheduled for February 1st, 2019. Pricing starts at €5 and includes:
Network transit to Scaleway instances and Dedibox dedicated servers remains free within the region used. Inter-region traffic (AMS/PAR) is billed at €0.01/GB.
The cost of the Object Storage service is calculated as follows:
Data transfer towards the Internet is billed at €0.02 per GB transferred beyond the included transfer amount.
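As a minimal sketch of the egress billing rule above (€0.02 per GB beyond the included transfer), the included amount is left as a parameter here because the exact figure is not stated in this section:

```python
# Illustrative egress cost sketch, assuming the €0.02/GB overage rate
# stated above. The included transfer amount is an assumption passed
# in by the caller, not a documented value.
def egress_cost_eur(transferred_gb: float, included_gb: float,
                    rate_eur_per_gb: float = 0.02) -> float:
    billable_gb = max(0.0, transferred_gb - included_gb)
    return round(billable_gb * rate_eur_per_gb, 2)
```

For example, transferring 150 GB with 100 GB included would bill 50 GB of overage, i.e. €1.00.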
You can make up to 250 read and write requests per second to a bucket. If you need a higher limit, you can contact our support team.
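When a client approaches the 250 req/s limit, requests may start failing; a common client-side mitigation is retrying with exponential backoff. Here is a minimal sketch; `request_fn` stands in for any S3 call, and the backoff parameters are illustrative assumptions:

```python
import random
import time

# Sketch of retry-with-backoff for rate-limited requests. The attempt
# count and delays are illustrative, not documented values.
def call_with_backoff(request_fn, max_attempts=5, base_delay=0.2):
    for attempt in range(max_attempts):
        try:
            return request_fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Exponential backoff with jitter spreads retries out so
            # many clients do not retry in lockstep.
            time.sleep(base_delay * (2 ** attempt) * random.uniform(0.5, 1.5))
```

In practice you would catch only the throttling-related error raised by your S3 client rather than a bare `Exception`.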
The service is accessible through our console for simple operations.
We provide an S3-compatible API for programmatic access or for usage with any compatible software.
We also provide a convenient, integrated UI in the console.
However, browsing virtually unlimited storage from a web interface is not easy, so we made some engineering trade-offs:
For all these reasons, we recommend using dedicated tools like s3cmd to manage large data sets.
You can store any kind of data, in any format (documents, images, videos and much more).
Once the service reaches general availability, you will be able to monitor your service directly from the Scaleway Console. Metrics such as storage and bandwidth usage will help you estimate your bucket consumption.
We currently support a subset of S3 operations. The exhaustive list is available on the S3 Object Storage API page.
There is no limit on the total volume of data in a bucket or on the number of objects it can contain. However, operations on a bucket may slow down when it holds a very large number of objects.
You can create up to 100 buckets per user and there is no storage usage limitation per bucket.
Bucket names are unique across our whole platform.
The "Bucket already exists" error message is triggered when the name you intended to use for your bucket is already reserved by another user (or by yourself).
When deleting a bucket, its name becomes available again. Anybody can reuse it on a first-come, first-served principle.
Note: We recommend using inconspicuous, hard-to-guess names for your buckets.
Bucket names are unique and can consist of the 26 lowercase letters, a to z, and the 10 digits, 0 to 9.
When using buckets with Secure Sockets Layer (SSL), the SSL wildcard certificate only matches buckets whose names do not contain periods. We therefore recommend avoiding dots in bucket names, as they will impact HTTPS access to your bucket; use dashes instead.
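The naming rules above can be checked client-side before attempting to create a bucket. This sketch accepts lowercase letters, digits, and dashes, and rejects dots because of the SSL wildcard issue described above; the 3–63 character length bound is an assumption borrowed from the common S3 convention, not a limit stated here:

```python
import re

# Sketch of a bucket-name validator based on the rules above.
# Lowercase letters, digits, and dashes; must start and end with a
# letter or digit. The 3-63 length bound is an assumed S3 convention.
_BUCKET_NAME = re.compile(r"^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$")

def is_valid_bucket_name(name: str) -> bool:
    return bool(_BUCKET_NAME.fullmatch(name))
```

A name like `my-backups-2019` passes, while `My.Bucket` fails on both the uppercase letters and the dot.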
Bucket ownership is not transferable. A bucket is owned by the account which created it.
Unfortunately, recovering a deleted bucket is not possible. Recovering the objects in a deleted bucket is also impossible, even if versioning was enabled on that bucket.
The Object Storage service was not designed to be used as a CDN and is not fine-tuned for this kind of usage, so we do not recommend it.
You can still use Object Storage as a backend for a CDN; all you need is a caching layer in front of it.
You can access your files via HTTPS by creating a public link from the control panel. Click on the file name and enable the public link by clicking on the corresponding button.
You can use Object Storage as a file-system with fuse based project such as s3fs.
Yes, this is allowed with tools like s3cmd, but only for cache-control or headers carrying a dedicated prefix.
To make an object public, click on Visibility in the drop-down menu, set it to Public, and save the setting.
If you upload a file by using the CLI, you can make it public by using the parameter:
Note: You can only make objects public or private one at a time; this action cannot be performed on a whole bucket, which is private by default. Allowing public access on a bucket only allows a public listing of the objects stored in it.
Object Storage supports multipart upload. We recommend uploading in chunks, with a limit of 1,000 chunks per upload and 5 TB per object.
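The two limits above together constrain your chunk size: for a given object, the chunks must be large enough that the upload fits in 1,000 parts. This is plain arithmetic, sketched here:

```python
import math

MAX_PARTS = 1000                 # per-upload chunk limit stated above
MAX_OBJECT_BYTES = 5 * 1024**4   # 5 TB per-object ceiling stated above

def min_chunk_size(object_bytes: int) -> int:
    """Smallest chunk size that keeps an upload within MAX_PARTS parts."""
    if object_bytes > MAX_OBJECT_BYTES:
        raise ValueError("object exceeds the 5 TB limit")
    return math.ceil(object_bytes / MAX_PARTS)
```

For instance, a 50 GB object needs chunks of at least ~51 MB to stay within 1,000 parts; most S3 tools let you set this via a chunk- or part-size option.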
What you’re asking for are federated tokens. We don’t support these for the moment as we want to keep tokens simple.
You can create a token in the Credentials section of the management console. To connect to Object Storage, use the Access-Key and Secret-Key displayed during token creation.
Note: The Secret-Key is only shown once. Take note of it and keep it safe, as it cannot be recovered if you lose it.
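The Secret-Key is never sent over the wire: S3-compatible clients use it to sign requests. As a sketch, here is the standard AWS Signature Version 4 signing-key derivation (the date and region values in the usage example are illustrative, not credentials from this service):

```python
import hashlib
import hmac

# Standard AWS Signature Version 4 signing-key derivation: the
# Secret-Key only seeds this HMAC chain and is never transmitted.
def sigv4_signing_key(secret_key: str, datestamp: str,
                      region: str, service: str = "s3") -> bytes:
    def sign(key: bytes, msg: str) -> bytes:
        return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()
    k_date = sign(("AWS4" + secret_key).encode("utf-8"), datestamp)
    k_region = sign(k_date, region)
    k_service = sign(k_region, service)
    return sign(k_service, "aws4_request")
```

The resulting key then signs a canonical representation of each request; in practice your S3 client or SDK does all of this for you.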
The official kickoff date will be announced soon.
You can either delete your data before general availability or continue to use the service, in which case you will be charged the regular pricing.
Object Storage is designed to provide 99.999% durability of objects over a given year, which corresponds to an average annual expected loss of 0.001% of objects. In addition, the service is covered by SLAs that ensure data availability and durability.
As with any environment, the best practice is to have a backup and to put in place safeguards against malicious or accidental deletion.
A full list of features is available on the S3 API operations page.