Data Migration C14 Classic

C14 Classic Migration - Overview

The C14 Classic storage service is reaching its end of life soon. You therefore need to migrate your data to the new storage platform, C14 Cold Storage. C14 Cold Storage can be used with any S3-compatible tool through a storage class called Glacier and is fully integrated into our Object Storage.

You can migrate your data from the existing C14 Classic platform to the new C14 Cold Storage in a few simple steps, either manually using rclone or via the command line interface c14-cli.
Rclone is a modern alternative to rsync and can automate the data migration using its copy function.

You may run rclone either on your local computer or on a Virtual Instance to improve transfer speeds.

Requirements

  • A Dedibox account with archived data on the C14 Classic service
  • A Scaleway Elements account with an Object Storage bucket
  • rclone and/or c14-cli installed on your local computer or on a Virtual Instance

Data Migration using Rclone

Rclone is open-source software available for various operating systems. It can be installed on your local computer in a few easy steps:

Note: Make sure to install the latest version of rclone to transfer your data.

Installing Rclone on Linux and xBSD

Download and install rclone on Linux and xBSD systems using the following command:

curl https://rclone.org/install.sh | sudo bash

Installing Rclone on macOS

Download and install rclone on macOS using the Homebrew package manager:

brew install rclone

If you are using Windows, you can download and unpack the rclone archive manually on your local computer.
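
Whatever the installation method, you can confirm that rclone is installed and check which version you are running:

rclone version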

Preparing the C14 Classic Archive

Important: Operations on your C14 archives are billed depending on your price plan.

1. Log into your Scaleway Dedibox console.

2. Click Storage > C14 to enter the C14 section of the console.

3. A list of your safes displays. Click Manage next to the safe you want to configure.

4. The list of archives stored in the selected safe displays. Click See Details next to the archive you want to migrate.

5. The archive overview page displays. Click Unarchive to unarchive your data into a temporary transit space.

6. Select and enter the options for unarchiving:

  • File transfer protocol: Select SSH
  • Select your authorized SSH key: (Optional) Select one or several of your registered SSH keys.
  • Your archive key: The AES-256-CBC encryption key used to encrypt your data.

Click Unarchive to launch the unarchive and decryption process.

Note: Unarchiving may take a while, depending on the amount of data stored in your archive.

7. Once the data is unarchived, the status of the archive changes to Temporary Space open. Click See details to retrieve the connection parameters.

8. The SSH credentials are displayed on the archive information page.

Take note of the SSH username, password, hostname, and port, as they are required in the next step.
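
If you want to confirm that the temporary space is reachable, you can open a plain SFTP session with these credentials; for example, with the hostname and port used in the configuration example below:

sftp -P 40557 c14ssh@12be9c0c-23b8-4938-893f-fdd81a259800.buffer.c14.io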

Configuring Rclone for C14 Classic

rclone supports SSH/SFTP file transfers. To use it for the data migration, create a new remote endpoint:

$ rclone config
2020/05/11 12:02:51 NOTICE: Config file "./.config/rclone/rclone.conf" not found - using defaults
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n  <- Type "n"

name> c14-classic <- Type "c14-classic" or any name for the configuration

Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / 1Fichier
   \ "fichier"
 2 / Alias for an existing remote
   \ "alias"
 3 / Amazon Drive
   \ "amazon cloud drive"
 4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)
   \ "s3"
[...]
29 / SSH/SFTP Connection
   \ "sftp"
[...]
Storage> sftp  <- Type "sftp"

Configure the SFTP remote part as follows:

** See help for sftp backend at: https://rclone.org/sftp/ **

SSH host to connect to
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Connect to example.com
   \ "example.com"
host> 12be9c0c-23b8-4938-893f-fdd81a259800.buffer.c14.io  <- Enter the hostname of the temporary C14 space
SSH username, leave blank for current username, bene
Enter a string value. Press Enter for the default ("").
user> c14ssh <- Type "c14ssh"
SSH port, leave blank to use default (22)
Enter a string value. Press Enter for the default ("").
port> 40557 <- Enter the SSH port
SSH password, leave blank to use ssh-agent.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank (default)
y/g/n> y  <- Type "y" and enter the SSH password and its confirmation when prompted to do so
Enter the password:
password:
Confirm the password:
password:

Path to PEM-encoded private key file, leave blank or set key-use-agent to use ssh-agent.
Enter a string value. Press Enter for the default ("").
key_file>  <- Leave empty and press Enter


The passphrase to decrypt the PEM-encoded private key file.

Only PEM encrypted key files (old OpenSSH format) are supported. Encrypted keys
in the new OpenSSH format can't be used.
y) Yes type in my own password
g) Generate random password
n) No leave this optional password blank (default)
y/g/n> <- Leave empty and press Enter


When set forces the usage of the ssh-agent.

When key-file is also set, the ".pub" file of the specified key-file is read and only the associated key is
requested from the ssh-agent. This allows to avoid `Too many authentication failures for *username*` errors
when the ssh-agent contains many keys.
Enter a boolean value (true or false). Press Enter for the default ("false").
key_use_agent> <- Leave empty and press Enter


Enable the use of insecure ciphers and key exchange methods.

This enables the use of the following insecure ciphers and key exchange methods:

- aes128-cbc
- aes192-cbc
- aes256-cbc
- 3des-cbc
- diffie-hellman-group-exchange-sha256
- diffie-hellman-group-exchange-sha1

Those algorithms are insecure and may allow plaintext data to be recovered by an attacker.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
 1 / Use default Cipher list.
   \ "false"
 2 / Enables the use of the aes128-cbc cipher and diffie-hellman-group-exchange-sha256, diffie-hellman-group-exchange-sha1 key exchange.
   \ "true"
use_insecure_cipher> <- Leave empty and press Enter


Disable the execution of SSH commands to determine if remote file hashing is available.
Leave blank or set to false to enable hashing (recommended), set to true to disable hashing.
Enter a boolean value (true or false). Press Enter for the default ("false").
disable_hashcheck>  <- Leave empty and press Enter
Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n <- type "n"

Remote config
--------------------
[c14-classic]
type = sftp
host = 12be9c0c-23b8-4938-893f-fdd81a259800.buffer.c14.io
user = c14ssh
port = 40557
pass = *** ENCRYPTED ***
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y <- type "y"
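
Optionally, verify that the new remote works by listing the files in the temporary space:

$ rclone ls c14-classic:/buffer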

Preparing the Object Storage Bucket

1. Log into your Scaleway Elements console and click Object Storage in the Storage section of the side menu.

2. Click + Create a bucket to create a new Object Storage bucket.

3. Enter a name for the bucket, choose the desired geographical region, and set the bucket visibility. Then click Create a bucket to launch its creation.

4. To create an API key for your Project, click the Credentials tab of the Project dashboard.

5. Click Generate new API Key. A pop-up appears, giving you the option of adding the API key purpose (for internal organization). Click Generate API Key to proceed.

6. The API key information displays. Take note of the Access Key and Secret Key, as you need them in the next step. Click OK to close the pop-up.

Configuring Rclone for Object Storage

rclone supports S3 file transfers. Configure a new S3 endpoint by re-running rclone config:

$ rclone config
Current remotes:

Name                 Type
====                 ====
c14-classic          sftp

e) Edit existing remote
n) New remote
d) Delete remote
r) Rename remote
c) Copy remote
s) Set configuration password
q) Quit config
e/n/d/r/c/s/q> n  <- type "n"

name> object-storage  <- Enter a name for the new endpoint

Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / 1Fichier
   \ "fichier"
 2 / Alias for an existing remote
   \ "alias"
 3 / Amazon Drive
   \ "amazon cloud drive"
 4 / Amazon S3 Compliant Storage Provider (AWS, Alibaba, Ceph, Digital Ocean, Dreamhost, IBM COS, Minio, etc)
   \ "s3"
[...]
Storage> s3 <- type "s3"

** See help for s3 backend at: https://rclone.org/s3/ **

Choose your S3 provider.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Amazon Web Services (AWS) S3
   \ "AWS"
[...]
11 / Any other S3 compatible provider
   \ "Other"
provider> Other <- type "Other"

Get AWS credentials from runtime (environment variables or EC2/ECS meta data if no env vars).
Only applies if access_key_id and secret_access_key is blank.
Enter a boolean value (true or false). Press Enter for the default ("false").
Choose a number from below, or type in your own value
 1 / Enter AWS credentials in the next step
   \ "false"
 2 / Get AWS credentials from the environment (env vars or IAM)
   \ "true"
env_auth> false <- type "false"

AWS Access Key ID.
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
access_key_id> ACCESSKEY <- Enter your Access Key

AWS Secret Access Key (password)
Leave blank for anonymous access or runtime credentials.
Enter a string value. Press Enter for the default ("").
secret_access_key> SECRETKEY <- Enter your Secret Key

Region to connect to.
Leave blank if you are using an S3 clone and you don't have a region.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Use this if unsure. Will use v4 signatures and an empty region.
   \ ""
 2 / Use this only if v4 signatures don't work, eg pre Jewel/v10 CEPH.
   \ "other-v2-signature"
region> fr-par <- Enter either "fr-par" for buckets in Paris or "nl-ams" for buckets in Amsterdam

Endpoint for S3 API.
Required when using an S3 clone.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
endpoint> https://s3.fr-par.scw.cloud <- Enter "https://s3.fr-par.scw.cloud" for buckets in Paris or "https://s3.nl-ams.scw.cloud" for buckets in Amsterdam


Location constraint - must be set to match the Region.
Leave blank if not sure. Used when creating buckets only.
Enter a string value. Press Enter for the default ("").
location_constraint> fr-par  <- Enter either "fr-par" for buckets in Paris or "nl-ams" for buckets in Amsterdam

Canned ACL used when creating buckets and storing or copying objects.

This ACL is used for creating objects and if bucket_acl isn't set, for creating buckets too.

For more info visit https://docs.aws.amazon.com/AmazonS3/latest/dev/acl-overview.html#canned-acl

Note that this ACL is applied when server side copying objects as S3
doesn't copy the ACL from the source but rather writes a fresh one.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Owner gets FULL_CONTROL. No one else has access rights (default).
   \ "private"
 2 / Owner gets FULL_CONTROL. The AllUsers group gets READ access.
   \ "public-read"
   / Owner gets FULL_CONTROL. The AllUsers group gets READ and WRITE access.
 3 | Granting this on a bucket is generally not recommended.
   \ "public-read-write"
 4 / Owner gets FULL_CONTROL. The AuthenticatedUsers group gets READ access.
   \ "authenticated-read"
   / Object owner gets FULL_CONTROL. Bucket owner gets READ access.
 5 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ "bucket-owner-read"
   / Both the object owner and the bucket owner get FULL_CONTROL over the object.
 6 | If you specify this canned ACL when creating a bucket, Amazon S3 ignores it.
   \ "bucket-owner-full-control"
acl> private <- type "private"

Edit advanced config? (y/n)
y) Yes
n) No (default)
y/n> n <- type "n"

Remote config
--------------------
[object-storage]
type = s3
provider = Other
env_auth = false
access_key_id = ACCESSKEY
secret_access_key = SECRETKEY
region = fr-par
endpoint = https://s3.fr-par.scw.cloud
location_constraint = fr-par
acl = private
--------------------
y) Yes this is OK (default)
e) Edit this remote
d) Delete this remote
y/e/d> y <- type "y"

Once the configuration is complete, quit rclone by pressing q.
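
You can verify the new S3 remote by listing the buckets in your project:

$ rclone lsd object-storage: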

Transferring Data

Launch the data migration by running the rclone copy command. Your data is available in the /buffer folder on C14 Classic. Use the following command to transfer it into the Object Storage bucket c14-coldstorage (replace the name with your own bucket), using the Glacier storage class. Note that Object Storage enforces a maximum size of 5 TB per object, so make sure to split very large files before transferring them (see the sketch after the command).

$ rclone copy --progress --s3-storage-class=GLACIER --s3-chunk-size=20M c14-classic:/buffer object-storage:c14-coldstorage
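
If a single file exceeds the 5 TB object limit, split it before uploading it to Object Storage, for example after downloading it locally. A minimal sketch using GNU split; the file name bigfile.tar is a placeholder:

split -b 1T bigfile.tar bigfile.tar.part-   # produces bigfile.tar.part-aa, -ab, ...
cat bigfile.tar.part-* > bigfile.tar        # reassembles the file after retrieval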

Note: You may run rclone on a Virtual Instance to improve the transfer speeds if you have large amounts of data to transfer.
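
After the copy finishes, you can verify the transfer by comparing source and destination. The --one-way flag checks that every file on the source exists on the destination:

$ rclone check --one-way c14-classic:/buffer object-storage:c14-coldstorage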

Once you have completed the transfer of your data and checked that the migration was successful, you can delete your C14 Classic archives from the Scaleway Dedibox console. If you have any questions regarding the migration of your data, contact our technical assistance by opening a ticket from your Scaleway console.

Data Migration using C14-CLI

To ease the migration process, we updated c14-cli with a new subcommand.

Requirements are the same as before, except:

  • An Object Storage bucket can be created beforehand or using c14-cli.
  • rclone has to be installed, but no configuration is needed.

To install c14-cli, simply download the latest version of the binary corresponding to your architecture from the releases page.

Example on macOS:

> wget https://github.com/scaleway/c14-cli/releases/download/v0.5.0/c14-darwin-amd64
> chmod +x c14-darwin-amd64
> mv c14-darwin-amd64 /usr/local/bin/c14
> c14
Usage: c14 [OPTIONS] COMMAND [arg...]

Interact with C14 from the command line.

Options:
  -D, --debug        Enable debug mode
  -V, --verbose      Enable verbose mode

Commands:
    create    Create a new archive
    files     List the files of an archive
    freeze    Lock an archive
    help      Help of the c14 command line
    login     Log in to Online API
    ls        Displays the archives
    rename    Rename an archive
    remove    Remove an archive
    unfreeze  Unlock an archive
    upload    Upload your file or directory into an archive
    verify    Schedules a verification of the files on an archive's location
    bucket    Display all information of bucket
    version   Show the version information
    download  Download your file or directory from an archive
    migrate   Migration helper to S3 Cold Storage

Run 'c14 COMMAND --help' for more information on a command.

Logging into C14 and Unfreezing Data

1. Log in to C14:

$ c14 login
Please opens this link with your browser: https://console.online.net/oauth/v2/device/usercode
Then copy paste the code XXXXXX

2. In the following example, we want to migrate the archive docs, with UUID 2c34b3f1-e0f8-4ced-b695-c9935bba3185:

> c14 ls
NAME                STATUS              UUID
docs                active              2c34b3f1-e0f8-4ced-b695-c9935bba3185
archive             busy                3a04741f-f75f-49db-a64d-c5179c7fa092

3. Open the archive by unfreezing it:

> c14 unfreeze 2c34b3f1-e0f8-4ced-b695-c9935bba3185
2c34b3f1-e0f8-4ced-b695-c9935bba3185[==================================] 100.00%

Migrating Data

Data migration is done using the migrate subcommand:

> c14 migrate
Usage: c14 migrate [OPTIONS] [ACTION] [ARCHIVE]

Migrate an archive to Cold Storage

[ACTION] is one of [precheck, generate-rclone-config, rclone-sync]

Options:
  -h, --help=false      Print usage
  --s3-access-key=""    aws_access_key_id
  --s3-bucket=""        Destination bucket name
  --s3-create-bucket=false Create the destination bucket if it does not exist
  --s3-prefix=""        Prefix in destination bucket
  --s3-profile=""       aws_profile
  --s3-secret-key=""    aws_secret_access_key

Examples:
        $ c14 migrate --s3-access-key xxx --s3-secret-key yyy precheck d28d0f7b-4524-4f7c-a7a3-7341503e9110
        $ c14 migrate --s3-profile scw-par generate-rclone-config d28d0f7b-4524-4f7c-a7a3-7341503e9110

1. Run precheck. It makes sure of a few things:

  • There are no files bigger than 5 TB in the archive (a hard limit of Object Storage)
  • The Object Storage credentials are valid
  • The destination bucket exists

> c14 migrate --s3-profile default precheck 2c34b3f1-e0f8-4ced-b695-c9935bba3185
Using AWS profile default
Making sure all files are < 5 TB for S3 compatibility
Checking S3 API credentials...
Checking if S3 migration destination bucket c14-7f4bc270-d03a-4650-b8ac-e79f8e73d279 exists...
You can use --create-bucket to automatically create the bucket
All good!

Important: Since we already have a valid awscli profile named "default" on this machine, we can use --s3-profile default. Otherwise, specify the S3 credentials directly using --s3-access-key xxx --s3-secret-key yyy.
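
For reference, such a profile is defined in the ~/.aws/credentials file and looks like this (placeholder keys):

[default]
aws_access_key_id = ACCESSKEY
aws_secret_access_key = SECRETKEY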

2. Create the destination bucket (--s3-create-bucket). By default it is named after the C14 safe UUID, unless a name is specified with --s3-bucket.

> c14 migrate --s3-profile default --s3-create-bucket --s3-bucket migration-c14-test precheck 2c34b3f1-e0f8-4ced-b695-c9935bba3185
Using AWS profile default
Making sure all files are < 5 TB for S3 compatibility
Checking S3 API credentials...
Checking if S3 migration destination bucket migration-c14-test exists...
Creating bucket...
Waiting for bucket "migration-c14-test" to be created...
Bucket "migration-c14-test" successfully created
All good!

3. Now that both C14 and Object Storage are ready, you can generate the rclone config:

> c14 migrate --s3-profile default --s3-create-bucket --s3-bucket migration-c14-test generate-rclone-config 2c34b3f1-e0f8-4ced-b695-c9935bba3185
Using AWS profile default
Converting SFTP password for rclone config
Writing config file to /Users/cloudrider/rclone-c14-migration_7f4bc270-d03a-4650-b8ac-e79f8e73d279_2c34b3f1-e0f8-4ced-b695-c9935bba3185.conf
The following config file has been generated:
[c14]
type = sftp
host = xxx.buffer.c14.io
port = 60351
user = c14ssh
pass = xxx
md5sum_command = md5sum
sha1sum_command = sha1sum

[default]
type = s3
provider = Scaleway
env_auth =  true
region = fr-par
endpoint = https://s3.fr-par.scw.cloud
storage_class = GLACIER

The configuration contains two remote endpoints: C14 through SFTP and Object Storage through S3. The SFTP password is automatically converted to the rclone format, and the S3 remote uses the GLACIER storage class.
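
If you prefer to run the transfer yourself, point rclone at the generated config file; for example, with the file path, bucket name, and archive UUID from the example above:

> rclone --config=/Users/cloudrider/rclone-c14-migration_7f4bc270-d03a-4650-b8ac-e79f8e73d279_2c34b3f1-e0f8-4ced-b695-c9935bba3185.conf sync c14:/buffer/ default:migration-c14-test/2c34b3f1-e0f8-4ced-b695-c9935bba3185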

4. Either run rclone manually using the generated rclone config (as shown above), or use c14-cli directly:

> c14 migrate --s3-profile default --s3-create-bucket --s3-bucket migration-c14-test rclone-sync 2c34b3f1-e0f8-4ced-b695-c9935bba3185
Using AWS profile default
Checking if rclone is installed
rclone executable is in /usr/local/bin/rclone
Running sync
Running command: rclone --config=/Users/cloudrider/rclone-c14-migration_7f4bc270-d03a-4650-b8ac-e79f8e73d279_2c34b3f1-e0f8-4ced-b695-c9935bba3185.conf sync c14:/buffer/ default:migration-c14-test/2c34b3f1-e0f8-4ced-b695-c9935bba3185 --log-level=INFO
2020/05/26 10:48:23 INFO  : dir2/screenshot_2020-05-05_10-11-55@2x.png: Copied (new)
2020/05/26 10:48:23 INFO  : S3 bucket migration-c14-test path 2c34b3f1-e0f8-4ced-b695-c9935bba3185: Waiting for checks to finish
2020/05/26 10:48:23 INFO  : S3 bucket migration-c14-test path 2c34b3f1-e0f8-4ced-b695-c9935bba3185: Waiting for transfers to finish
2020/05/26 10:48:23 INFO  : IMG_20190903_133904.jpg: Copied (new)
2020/05/26 10:48:23 INFO  : dir2/screenshot_2020-05-05_14-02-24@2x.png: Copied (new)
2020/05/26 10:48:23 INFO  : dir2/screenshot_2020-05-05_13-27-47@2x.png: Copied (new)
2020/05/26 10:48:24 INFO  : dir2/screenshot_2020-05-05_13-27-48@2x.png: Copied (new)
2020/05/26 10:48:24 INFO  : dir2/screenshot_2020-05-05_17-25-58@2x.png: Copied (new)
2020/05/26 10:48:24 INFO  : dir2/screenshot_2020-05-05_16-08-30@2x.png: Copied (new)
2020/05/26 10:48:24 INFO  : dir2/screenshot_2020-05-05_17-32-25@2x.png: Copied (new)
2020/05/26 10:48:24 INFO  : dir1/screenshot_2020-05-06_16-53-53@2x.png: Copied (new)
2020/05/26 10:48:24 INFO  : Waiting for deletions to finish
2020/05/26 10:48:24 INFO  :
Transferred:       14.198M / 14.198 MBytes, 100%, 8.213 MBytes/s, ETA 0s
Transferred:           10 / 10, 100%
Elapsed time:         1.7s


Sync done.
Please freeze or delete your archive once you made sure it migrated properly.
To freeze the archive, run: c14 freeze 2c34b3f1-e0f8-4ced-b695-c9935bba3185

5. Once the migration is complete, make sure to freeze the archive again:

> c14 freeze 2c34b3f1-e0f8-4ced-b695-c9935bba3185
330606b4-0492-4441-8004-8f3c9f804c30[==================================] 100.00%

6. Once you have made sure that everything has been properly migrated to Cold Storage, you can delete the C14 archive.
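
For example, using the remove subcommand shown in the command list above:

> c14 remove 2c34b3f1-e0f8-4ced-b695-c9935bba3185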
