
Migrating Scaleway Object Storage data with Rclone

Reviewed on 23 September 2024 • Published on 20 March 2019
  • Rclone
  • object-storage

Rclone provides a modern alternative to rsync. The tool communicates with any Amazon S3-compatible cloud storage provider as well as other storage platforms and can be used to migrate data from one bucket to another, even if those buckets are in different regions.

Before you start

To complete the actions presented below, you must have:

  • A Scaleway account logged into the console
  • Owner status or IAM permissions allowing you to perform actions in the intended Organization
  • An SSH key
  • A valid API key
  • An Instance running Ubuntu 20.04 (Focal Fossa)
  • At least 2 Object Storage buckets

Installing Rclone

  1. Connect to your server as root via SSH.
  2. Update the APT package cache and the software already installed on the Instance.
    apt update && apt upgrade -y
  3. Download and install Rclone with the following sequence of commands:
    wget https://downloads.rclone.org/rclone-current-linux-amd64.zip
    apt install zip
    unzip rclone-current-linux-amd64.zip
    cd rclone*linux-amd64/
    mv rclone /usr/bin/
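
You can verify the installation by printing the version of the binary you just installed:

rclone version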

Configuring Rclone

  1. Begin Rclone configuration with the following command:

    rclone config

    If you do not have any existing remotes, the following output displays:

    2021/01/18 16:03:28 NOTICE: Config file "/root/.config/rclone/rclone.conf" not found - using defaults
    No remotes found - make a new one
    n) New remote
    s) Set configuration password
    q) Quit config

    If you have previously configured Rclone, you may see slightly different output; this does not affect the following steps.

  2. Type n to make a new remote. You are then prompted to type a name - here we type remote-sw-paris:

    n/s/q> n
    name> remote-sw-paris

    The following output displays:

    Type of storage to configure.
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
    1 / 1Fichier
    \ "fichier"
    2 / Alias for an existing remote
    \ "alias"
    3 / Amazon Drive
    \ "amazon cloud drive"
    4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, Tencent COS, etc)
    \ "s3"
    5 / Backblaze B2
    \ "b2"
    6 / Box
    \ "box"
    7 / Cache a remote
    \ "cache"
    8 / Citrix Sharefile
    \ "sharefile"
    9 / Dropbox
    \ "dropbox"
    10 / Encrypt/Decrypt a remote
    \ "crypt"
    11 / FTP Connection
    \ "ftp"
    12 / Google Cloud Storage (this is not Google Drive)
    \ "google cloud storage"
    13 / Google Drive
    \ "drive"
    14 / Google Photos
    \ "google photos"
    15 / Hubic
    \ "hubic"
    16 / In memory Object Storage system.
    \ "memory"
    17 / Jottacloud
    \ "jottacloud"
    18 / Koofr
    \ "koofr"
    19 / Local Disk
    \ "local"
    20 / Mail.ru Cloud
    \ "mailru"
    21 / Mega
    \ "mega"
    22 / Microsoft Azure Blob Storage
    \ "azureblob"
    23 / Microsoft OneDrive
    \ "onedrive"
    24 / OpenDrive
    \ "opendrive"
    25 / OpenStack Swift (Rackspace Cloud Files, Memset Memstore, OVH)
    \ "swift"
    26 / Pcloud
    \ "pcloud"
    27 / Put.io
    \ "putio"
    28 / QingCloud Object Storage
    \ "qingstor"
    29 / SSH/SFTP Connection
    \ "sftp"
    30 / Sugarsync
    \ "sugarsync"
    31 / Tardigrade Decentralized Cloud Storage
    \ "tardigrade"
    32 / Transparently chunk/split large files
    \ "chunker"
    33 / Union merges the contents of several upstream fs
    \ "union"
    34 / Webdav
    \ "webdav"
    35 / Yandex Disk
    \ "yandex"
    36 / http Connection
    \ "http"
    37 / premiumize.me
    \ "premiumizeme"
    38 / seafile
    \ "seafile"
    Storage>
  3. Type s3 and hit enter to confirm this storage type. The following output displays:

    Choose your S3 provider.
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
    1 / Amazon Web Services (AWS) S3
    \ "AWS"
    2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun
    \ "Alibaba"
    3 / Ceph Object Storage
    \ "Ceph"
    4 / Digital Ocean Spaces
    \ "DigitalOcean"
    5 / Dreamhost DreamObjects
    \ "Dreamhost"
    6 / IBM COS S3
    \ "IBMCOS"
    7 / Minio Object Storage
    \ "Minio"
    8 / Netease Object Storage (NOS)
    \ "Netease"
    9 / Scaleway Object Storage
    \ "Scaleway"
    10 / StackPath Object Storage
    \ "StackPath"
    11 / Tencent Cloud Object Storage (COS)
    \ "TencentCOS"
    12 / Wasabi Object Storage
    \ "Wasabi"
    13 / Any other S3 compatible provider
    \ "Other"
  4. Type Scaleway and hit enter to confirm this S3 provider. The following output displays:

    Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
    Only applies if access_key_id and secret_access_key is blank.
    Enter a boolean value (true or false). Press Enter for the default ("false").
    Choose a number from below, or type in your own value
    1 / Enter AWS credentials in the next step
    \ "false"
    2 / Get AWS credentials from the environment (env vars or IAM)
    \ "true"
    env_auth>
  5. Type false and hit enter to provide your credentials in the next step.

    The following output displays:

    AWS Access Key ID.
    Leave blank for anonymous access or runtime credentials.
    Enter a string value. Press Enter for the default ("").
    access_key_id>
  6. Enter your API access key and hit enter.

    The following output displays:

    AWS Secret Access Key (password)
    Leave blank for anonymous access or runtime credentials.
    Enter a string value. Press Enter for the default ("").
    secret_access_key>
  7. Enter your API secret key and hit enter.

    The following output displays:

    Region to connect to.
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
    1 / Amsterdam, The Netherlands
    \ "nl-ams"
    2 / Paris, France
    \ "fr-par"
    region>
  8. Enter your chosen region and hit enter. Here we choose fr-par.

    The following output displays:

    Endpoint for Scaleway Object Storage.
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
    1 / Amsterdam Endpoint
    \ "s3.nl-ams.scw.cloud"
    2 / Paris Endpoint
    \ "s3.fr-par.scw.cloud"
    endpoint>
  9. Enter your chosen endpoint and hit enter. Here we choose s3.fr-par.scw.cloud.

    The following output displays:

    Canned ACL used when creating buckets and storing or copying objects.
    This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.
    For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl
    Note that this ACL is applied when server side copying objects as S3
    doesn't copy the ACL from the source but rather writes a fresh one.
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
    1 / Owner gets FULL_CONTROL. No one else has access rights (default).
    \ "private"
    2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
    \ "public-read"
    / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
    3 | Granting this on a bucket is generally not recommended.
    \ "public-read-write"
    4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
    \ "authenticated-read"
    / Object owner gets FULL_CONTROL. Bucket owner gets READ access.
    5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
    \ "bucket-owner-read"
    / Both the object owner and the bucket owner get FULL_CONTROL over the object.
    6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
    \ "bucket-owner-full-control"
    acl>
  10. Enter your chosen ACL and hit enter. Here we choose private (1).

    The following output displays:

    The storage class to use when storing new objects in S3.
    Enter a string value. Press Enter for the default ("").
    Choose a number from below, or type in your own value
    1 / Default
    \ ""
    2 / The Standard class for any upload; suitable for on-demand content like streaming or CDN.
    \ "STANDARD"
    3 / Archived storage; prices are lower, but it needs to be restored first to be accessed.
    \ "GLACIER"
    storage_class>
  11. Enter your chosen storage class and hit enter. Here we choose STANDARD (2).

    The following output displays:

    Edit advanced config? (y/n)
    y) Yes
    n) No (default)
    y/n>
  12. Type n and hit enter. A summary of your config displays:

    Remote config
    --------------------
    [remote-sw-paris]
    type = s3
    provider = Scaleway
    env_auth = false
    access_key_id = <ACCESS-KEY>
    secret_access_key = <SECRET-KEY>
    region = fr-par
    endpoint = s3.fr-par.scw.cloud
    acl = private
    storage_class = STANDARD
    --------------------
    y) Yes this is OK (default)
    e) Edit this remote
    d) Delete this remote
    y/e/d>
  13. Type y to confirm that the remote configuration is OK, and hit enter.

    The following output displays:

    Current remotes:

    Name              Type
    ====              ====
    remote-sw-paris   s3
    e) Edit existing remote
    n) New remote
    d) Delete remote
    r) Rename remote
    c) Copy remote
    s) Set configuration password
    q) Quit config
    e/n/d/r/c/s/q> q
  14. Type q to quit the config, and hit enter.

Note

If you want to be able to transfer data to or from a bucket in a different region than the one you just set up, repeat steps 1-14 to set up a new remote in the required region, entering the matching region and endpoint at steps 8 and 9. Similarly, you may wish to set up a new remote for a different Object Storage provider.
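
You can list the remotes configured on your machine at any time with:

rclone listremotes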

Migrating data

Two commands can be used to migrate data from one backend to another.

  • The copy command copies data from source to destination.
rclone copy --progress <SOURCE_BACKEND>:<SOURCE_PATH> <DEST_BACKEND>:<DEST_PATH>

For example, the following command copies data from a bucket named my-first-bucket in the remote-sw-paris remote backend that we previously set up, to another bucket named my-second-bucket in the same remote backend. The --progress flag allows us to follow the progress of the transfer:

rclone copy --progress remote-sw-paris:my-first-bucket remote-sw-paris:my-second-bucket
  • The sync command copies data from one backend to another, but also deletes objects in the destination that are not present in the source:
rclone sync --progress <SOURCE_BACKEND>:<SOURCE_PATH> <DEST_BACKEND>:<DEST_PATH>

The following command copies data from a bucket named my-first-bucket in the remote-sw-paris remote backend that we previously set up, to another bucket named my-third-bucket in a different remote backend that we configured for the nl-ams region and named remote-sw-ams. It also deletes any data present in my-third-bucket that isn’t also present in my-first-bucket:

rclone sync --progress remote-sw-paris:my-first-bucket remote-sw-ams:my-third-bucket
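
Since sync deletes data in the destination, it can be prudent to preview a transfer first with Rclone's --dry-run flag, which reports what would be copied and deleted without changing anything:

rclone sync --dry-run remote-sw-paris:my-first-bucket remote-sw-ams:my-third-bucket

After a transfer completes, you can check the result by listing the objects in the destination bucket, for example with rclone ls remote-sw-ams:my-third-bucket.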
Note

This migration may incur costs from the Object Storage provider you are migrating from, as some providers bill egress bandwidth.

For further information, and to discover the other commands available, refer to the official Rclone S3 documentation.

Transferring data to Scaleway Glacier

When you copy or sync, you can specify the storage class to which you wish to transfer your data.

At Scaleway you can choose from two classes:

  • STANDARD: The Standard class for any upload; suitable for on-demand content like streaming or CDN.
  • GLACIER: Archived, long-term retention storage.

If the storage class is not specified, the data will be transferred as STANDARD by default.

To transfer data to the Scaleway Glacier class, add --s3-storage-class=GLACIER to your command, as follows:

rclone copy --progress --s3-storage-class=GLACIER <SOURCE_BACKEND>:<SOURCE_PATH> <DEST_BACKEND>:<DEST_PATH>

You can verify the storage class of the transferred data by accessing your bucket on the Scaleway console.
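
For example, a minimal sketch assuming the remote-sw-paris remote configured above and a hypothetical archive bucket named my-archive-bucket:

rclone copy --progress --s3-storage-class=GLACIER remote-sw-paris:my-first-bucket remote-sw-paris:my-archive-bucket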

Offsite backup

For offsite backups, the sync command should be preferred over copy: it keeps the destination an exact mirror of the source by also deleting objects that no longer exist in the source.

First, create an rclone.conf file defining a remote for each provider and region involved:

[other-provider]
type = s3
provider = <PROVIDER_NAME>
env_auth = false
access_key_id = xxxx
secret_access_key = xxxx
endpoint = <OTHER_S3_PROVIDER_ENDPOINT>
region = <S3_REGION>
location_constraint = <S3_REGION>

[scw-fr-par]
type = s3
provider = Scaleway
env_auth = false
access_key_id = xxxx
secret_access_key = xxxx
endpoint = https://s3.fr-par.scw.cloud
region = fr-par

[scw-nl-ams]
type = s3
provider = Scaleway
env_auth = false
access_key_id = xxxx
secret_access_key = xxxx
endpoint = https://s3.nl-ams.scw.cloud
region = nl-ams

Then sync buckets from your other Object Storage provider to Scaleway (nl-ams), and from Scaleway (fr-par) to Scaleway (nl-ams):

# sync from the other Object Storage provider to Scaleway (nl-ams)
rclone --config rclone.conf sync other-provider:my-bucket/my-key-prefix scw-nl-ams:my-bucket/my-key-prefix
# sync from Scaleway (fr-par) to Scaleway (nl-ams)
rclone --config rclone.conf sync scw-fr-par:my-bucket/my-key-prefix scw-nl-ams:my-bucket/my-key-prefix
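
To make this a recurring offsite backup, you can schedule the sync with cron. Below is a minimal sketch, assuming the rclone.conf shown above is stored at /root/rclone.conf and a nightly run at 02:00 is wanted; the file name is hypothetical:

# /etc/cron.d/rclone-offsite-backup (hypothetical file name)
# min hour day month weekday user command
0 2 * * * root rclone --config /root/rclone.conf sync scw-fr-par:my-bucket/my-key-prefix scw-nl-ams:my-bucket/my-key-prefix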