SIS stands for Scaleway Infinite Storage and is our scalable object storage service, whose API is 100% compatible with Amazon’s S3. This cloud-based storage is designed to be highly available and durable.
SIS is open to the public but we’re currently scaling up our infrastructure to handle the load generated by all our new users.
IMPORTANT: Bucket creation is temporarily deactivated starting 2015-09-03 to preserve the quality of service for current users. You will be notified once everything’s back to normal. In the meantime, users who created a bucket before that restriction can continue to use their buckets as usual.
All your resources and data will be left untouched. You should have received an e-mail with detailed instructions on the migration process.
Network traffic in and out of SIS (ingress and egress) is unlimited, unmetered and free of charge.
Only the storage is invoiced.
HTTP requests are unlimited, unmetered and free of charge, including but not limited to:
Only the storage is invoiced.
We still need to fine-tune some details about our object storage service. In the meantime, the service is not yet accounted for on invoices and is free of charge.
There is no precise ETA for now, but you’ll be notified well in advance once the details are sorted out.
Because the service is currently free of charge, your SIS usage is not measured yet.
The service is accessible through our console for simple operations.
We provide an S3-compatible API for programmatic access or for usage with any compatible software.
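For example, a tool like s3cmd (mentioned later in this FAQ) can be pointed at any S3-compatible endpoint through its configuration file. This is only a sketch: the host names below are placeholders, not the actual SIS endpoint.

```ini
; ~/.s3cfg — minimal s3cmd configuration for an S3-compatible service.
; The host values are placeholders; substitute the real SIS endpoint.
[default]
access_key = YOUR_ACCESS_KEY
secret_key = YOUR_SECRET_KEY
host_base = sis.example.com
host_bucket = %(bucket)s.sis.example.com
```

With such a configuration in place, standard commands like `s3cmd ls` or `s3cmd put` operate against the configured endpoint instead of Amazon’s.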
We provide you with a comfortable and integrated UI in the console for convenience.
But it’s not easy to browse infinite storage from the web, so we came up with some engineering trade-offs:
You can store any kind of data, in any format. Be it documents, images, videos, …
Data uploaded to SIS must be stored in a bucket, so you need to create one before uploading content.
Once uploaded to a bucket, a file is referred to as an object.
The console UI allows you to organize the objects in a bucket into folders. We chose to present it that way for convenience, but this is somewhat misleading: under the hood, data is not organized as a tree structure.
In fact, the full path from the bucket to an object is treated as a single unique key. Objects are stored flat in a bucket. That’s why you should not use the object storage as a file-system.
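The flat key model can be illustrated with a short sketch (plain Python, no real API calls): what the console shows as folders is just a delimiter-based grouping of key prefixes, computed at listing time.

```python
# Objects in a bucket form a flat key -> data mapping; "folders" are an
# illusion produced by grouping keys on the "/" delimiter when listing.
bucket = {
    "photos/2015/beach.jpg": b"...",
    "photos/2015/city.jpg": b"...",
    "photos/readme.txt": b"...",
    "notes.txt": b"...",
}

def list_prefix(bucket, prefix="", delimiter="/"):
    """Emulate an S3-style listing: return (objects, common_prefixes)."""
    objects, prefixes = [], set()
    for key in bucket:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        if delimiter in rest:
            # Everything up to the first delimiter looks like a "folder".
            prefixes.add(prefix + rest.split(delimiter, 1)[0] + delimiter)
        else:
            objects.append(key)
    return sorted(objects), sorted(prefixes)

print(list_prefix(bucket))             # top-level view
print(list_prefix(bucket, "photos/"))  # inside the "photos" folder
```

Note that no directory object exists anywhere: deleting every key under a prefix makes the “folder” disappear, because it was never stored in the first place.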
There is no limit on the total volume of data of a bucket, or the number of objects it can contain.
There is still an individual object size limit of 5 terabytes.
Everything you store in our object storage is replicated three times to avoid data loss and to ensure high availability.
Bucket names are unique across our whole platform.
The “Bucket already exists” error message is triggered when the name you intended to use for your bucket is already reserved by another user (or by yourself).
On deletion, a bucket makes its name available again, so anybody can reuse it on a first-come, first-served basis.
IMPORTANT: To avoid leaking information to other users, we recommend using non-conspicuous names (UUIDs and the like) for your buckets.
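For instance, a name built from a random UUID reveals nothing about its contents and is very unlikely to collide with an existing bucket. A minimal Python sketch, using only the standard-library `uuid` module (the `prefix` parameter is our own illustrative choice):

```python
import uuid

def random_bucket_name(prefix="b"):
    """Generate a non-conspicuous, S3-compatible bucket name.

    S3-style bucket names must be lowercase, 3-63 characters long and
    start with a letter or digit; a hex UUID satisfies all of that.
    """
    name = prefix + "-" + uuid.uuid4().hex
    assert 3 <= len(name) <= 63
    return name

print(random_bucket_name())  # e.g. "b-3f2c0a9d5e6b4a7c8d9e0f1a2b3c4d5e"
```

Keep a mapping from these opaque names to their purpose on your side, since the name itself no longer documents anything.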
Bucket ownership is not transferable.
A bucket is owned by the account which created it.
Unfortunately not, as your actions are applied right away.
The object storage service was not designed to be used as a CDN: it is not fine-tuned for this kind of usage. We do not recommend it.
You can still use SIS as a backend for a CDN. All you need is to put a caching layer in front of it. See how we did it in our Varnish tutorial.
There are some FUSE-based projects out there that wrap object storage APIs and present them as file-systems. While this might somehow work for tiny data sets and light usage, it is far from scalable.
SIS was not designed with this kind of workload in mind. The underlying data structures are not organized like those of typical file-systems: we do not recommend using our object storage as such.
If you want to use SIS for file sharing, you should instead rely on tools we already tested:
SIS supports multipart upload. We recommend uploading by chunks, within a limit of 100 chunks per upload.
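Given the 100-chunk limit above, the smallest workable chunk size for a given upload is easy to compute. A plain-Python sketch (no API calls; the 100-chunk constant comes from this FAQ):

```python
import math

MAX_CHUNKS = 100  # per-upload chunk limit mentioned above

def chunk_size(total_size, max_chunks=MAX_CHUNKS):
    """Smallest chunk size (in bytes) that fits the upload in max_chunks."""
    return math.ceil(total_size / max_chunks)

# A 5 GiB file split into at most 100 chunks needs chunks of ~51 MiB.
five_gib = 5 * 1024**3
print(chunk_size(five_gib), "bytes per chunk")
```

In practice you would round this up to a convenient size (e.g. a whole number of mebibytes) rather than use the exact minimum.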
Yes, this is allowed with tools like s3cmd, but only for certain headers such as cache-control or those prefixed by x-amz-meta-.
What you’re asking for are federated tokens. We don’t support these for the moment, as we want to keep tokens simple. We still plan to allow fine-tuning of tokens in the future.