How to transfer Object Storage buckets to the new storage platform
Since January 27, 2022, the Object Storage platform has been migrated to our internally developed backend: Hive. We use robust Erasure Code (6+3) to protect the data stored on Scaleway Object Storage. This mechanism now extends from a single data center to three Availability Zones (AZ), meaning the 6+3 fragments are spread across three independent Availability Zones within one region.
In summary, the main advantage of Multi-AZ is that it ensures the resilience of your data storage, even in the event of a total loss of an AZ. Scaleway Object Storage is built entirely on Scaleway’s technical stack, which guarantees true sovereignty at all levels, from infrastructure to software. Thanks to our new backend, you can store millions of objects in a single bucket and scale your infrastructure, without friction and without limit.
You can easily transfer the contents of your existing buckets to newly created buckets in the `STANDARD` class in the Paris region using the AWS CLI, to benefit from the improvements of the new platform.
You may need certain IAM permissions to carry out some of the actions described on this page.
- Click on Object Storage in the Storage section of the Scaleway console. The list of your Object Storage buckets displays.
- Click + Create bucket to create a new `STANDARD` class bucket in the Paris region.
  Note: To benefit from the Multi-AZ infrastructure in the Paris region, choose the `STANDARD` storage class. More information is available in our blog.
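If you prefer to stay in the terminal, the target bucket can also be created with the AWS CLI. This is a sketch, assuming your AWS CLI profile is already configured with your Scaleway credentials and the Paris endpoint (`https://s3.fr-par.scw.cloud`); `BUCKET-TARGET` is a placeholder for your bucket name:

```shell
# Create the target bucket in the Paris region (fr-par).
# Assumes the active AWS CLI profile points at the Scaleway
# S3-compatible endpoint, e.g. https://s3.fr-par.scw.cloud
aws s3api create-bucket \
  --bucket BUCKET-TARGET \
  --region fr-par
```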
- Run a full sync of the two buckets:
  ```
  aws s3 sync s3://BUCKET-SOURCE s3://BUCKET-TARGET
  ```
  Note: Depending on the number of objects in your bucket, this step may take a while.
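Before moving on, you may want a quick sanity check that both buckets hold the same number of objects. One possible sketch (a count comparison only, not a byte-level verification):

```shell
# Compare object counts between source and target buckets.
# Matching counts suggest the sync is complete; mismatches
# mean objects are still missing from the target.
aws s3 ls s3://BUCKET-SOURCE --recursive | wc -l
aws s3 ls s3://BUCKET-TARGET --recursive | wc -l
```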
- Run a second full sync to ensure that new objects that might have been added to the source bucket in the meantime are also copied.
- Point your application to the new bucket.
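How you point your application at the new bucket depends on your stack. With the common pattern of reading the bucket name from an environment variable, the switch is a configuration change; `S3_BUCKET` here is an assumed variable name, not part of any Scaleway convention:

```shell
# If your application reads its bucket name from an environment
# variable (S3_BUCKET is an assumed name for illustration),
# switching to the new bucket is a one-line config change:
export S3_BUCKET=BUCKET-TARGET
```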
- Perform a recursive copy from the old bucket to the new bucket to ensure that new objects added in the meantime are also copied:
  ```
  aws s3 cp s3://BUCKET-SOURCE s3://BUCKET-TARGET --recursive
  ```
  Tip: Do not run a `sync` in this step, to avoid new objects added to the new bucket being deleted.
- The old bucket is ready to be deleted.
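Once you have confirmed that your application works correctly against the new bucket, the old bucket and its contents can be removed with the AWS CLI. This permanently deletes all remaining objects, so double-check the bucket name before running it:

```shell
# Remove the old bucket and everything it contains. Irreversible:
# --force deletes all objects before removing the bucket itself.
aws s3 rb s3://BUCKET-SOURCE --force
```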