How to use Object Storage with AWS-CLI

Object Storage AWS-CLI Overview

Object Storage allows you to store any kind of object (documents, images, videos, etc.) and retrieve them at a later time from anywhere.

For instance, you can store images and make them accessible over HTTP. You can use the control panel to manage your storage, and several tools exist to interact with Object Storage from the command line.

Requirements

Retrieving S3 Credentials

To retrieve your credentials, refer to S3 credentials.

Installing AWS-CLI

The AWS-CLI is an open source tool built on top of the AWS SDK for Python (Boto) that provides commands for interacting with AWS services. With minimal configuration, you can use most of the functionality available in the AWS Management Console from the command line.

To interact with Object Storage, aws-cli and awscli-plugin-endpoint need to be installed. The awscli-plugin-endpoint plugin makes it easier to point the AWS-CLI at third-party, S3-compatible providers.

1 . Install awscli and awscli-plugin-endpoint using pip

pip3 install awscli
pip3 install awscli-plugin-endpoint

2 . Create the file ~/.aws/config with the following content

[plugins]
endpoint = awscli_plugin_endpoint

[default]
region = nl-ams
s3 =
  endpoint_url = https://s3.nl-ams.scw.cloud
  signature_version = s3v4
  max_concurrent_requests = 1000
  max_queue_size = 10000
  multipart_threshold = 50MB
  # Edit the multipart_chunksize value according to the sizes of the files you want to upload. The present configuration allows uploads of up to 10 GB (1,000 parts * 10 MB). For example, setting it to 5 GB allows you to upload files of up to 5 TB (1,000 parts * 5 GB).
  multipart_chunksize = 10MB
s3api =
  endpoint_url = https://s3.nl-ams.scw.cloud

Important: Set the endpoint_url and region to match the geographical region of your bucket: nl-ams for buckets in Amsterdam, The Netherlands, or fr-par for buckets in Paris, France.
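The relationship between multipart_chunksize and the largest uploadable object can be sketched in a few lines of Python. The 1,000-part limit used below is an assumption derived from the arithmetic in the configuration comment above (1,000 * 10 MB = 10 GB), not an aws-cli setting:

```python
# Maximum object size for a multipart upload is bounded by
# chunk size * maximum number of parts.
MAX_PARTS = 1000  # assumed per-upload part limit, implied by the comment above

def max_object_size(chunk_size_mb: int) -> int:
    """Return the largest uploadable object size in MB for a given chunk size."""
    return chunk_size_mb * MAX_PARTS

# 10 MB chunks -> 10,000 MB (10 GB), matching the default configuration above
print(max_object_size(10))
# 5 GB chunks (5 * 1024 MB) -> roughly 5 TB
print(max_object_size(5 * 1024))
```

Increasing multipart_chunksize therefore raises the maximum object size, at the cost of larger retries when an individual part upload fails.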

3 . Insert your Scaleway Credentials in the ~/.aws/credentials file

[default]
aws_access_key_id=<ACCESS_KEY>
aws_secret_access_key=<SECRET_KEY>

4 . Test your configuration

aws s3 ls

To configure another S3 client, you can follow the instructions shown on the following page, hosted on our Object Storage infrastructure.

Creating a Bucket

1 . Create a bucket

aws s3 mb s3://$BucketName

2 . Upload your files as objects in your bucket

aws s3 cp $FileName s3://$BucketName

By default, each object takes the name of the file it was uploaded from, but it can be renamed

aws s3 cp $FileName s3://$BucketName/$ObjectName

3 . List your bucket(s)

aws s3 ls
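Bucket creation fails when the name does not follow the S3 naming rules (3 to 63 characters; lowercase letters, digits, dots, and hyphens; starting and ending with a letter or digit). A simplified sanity check can be sketched as follows; the check_bucket_name helper is illustrative and not part of aws-cli:

```python
import re

# Simplified S3 bucket naming rules: 3-63 chars, lowercase letters,
# digits, dots and hyphens, starting and ending with a letter or digit.
BUCKET_NAME_RE = re.compile(r"^[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]$")

def check_bucket_name(name: str) -> bool:
    """Return True if `name` is a syntactically plausible S3 bucket name."""
    return bool(BUCKET_NAME_RE.match(name))

print(check_bucket_name("my-bucket"))   # valid
print(check_bucket_name("My_Bucket"))   # invalid: uppercase and underscore
```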

Enabling Bucket Versioning

Once enabled, versioning supports the retrieval of objects that have been deleted or overwritten. It is a means of keeping multiple variants of an object in the same bucket. With versioning enabled, you can list archived versions of an object or permanently delete an archived version. Note that once versioning is enabled, you cannot return your bucket to an unversioned state.

Important: If you enable versioning, you will have several versions of your objects. Listing the objects in your bucket will only display the latest version of each object. Keep in mind that each version of an object stored in your bucket takes disk space and is billed at €0.01/GB.

To enable versioning:

1 . Enable versioning on your bucket

aws s3api put-bucket-versioning --bucket $BucketName --versioning-configuration Status=Enabled

2 . Add a test file to test the feature out

aws s3 cp test s3://$BucketName

3 . List the versions of the object in the bucket

aws s3api list-object-versions --bucket $BucketName

which returns

{
    "Versions": [
        {
            "Key": "test",
            "Owner": {
                "ID": "4e129ace-76cf-47ef-8894-bbaf2e8ee480:4e129ace-76cf-47ef-8894-bbaf2e8ee480",
                "DisplayName": "4e129ace-76cf-47ef-8894-bbaf2e8ee480:4e129ace-76cf-47ef-8894-bbaf2e8ee480"
            },
            "LastModified": "2018-11-26T15:12:34.000Z",
            "VersionId": "1543245154846491",
            "Size": 20,
            "IsLatest": true,
            "StorageClass": "STANDARD",
            "ETag": "\"0137d789c45cf8374605b69579b93640\""
        }
    ]
}

4 . Add a new version of the test file to the same bucket

aws s3 cp $FileName s3://$BucketName

5 . List the versions of the object in the bucket

aws s3api list-object-versions --bucket $BucketName

which returns

{
    "Versions": [
        {
            "Size": 15,
            "Owner": {
                "ID": "4e129ace-76cf-47ef-8894-bbaf2e8ee480:4e129ace-76cf-47ef-8894-bbaf2e8ee480",
                "DisplayName": "4e129ace-76cf-47ef-8894-bbaf2e8ee480:4e129ace-76cf-47ef-8894-bbaf2e8ee480"
            },
            "VersionId": "1543245250165373",
            "ETag": "\"d0e08e00cf91fb4911aa7def0f0c9f46\"",
            "IsLatest": true,
            "LastModified": "2018-11-26T15:14:10.000Z",
            "StorageClass": "STANDARD",
            "Key": "test"
        },
        {
            "Size": 20,
            "Owner": {
                "ID": "4e129ace-76cf-47ef-8894-bbaf2e8ee480:4e129ace-76cf-47ef-8894-bbaf2e8ee480",
                "DisplayName": "4e129ace-76cf-47ef-8894-bbaf2e8ee480:4e129ace-76cf-47ef-8894-bbaf2e8ee480"
            },
            "VersionId": "1543245154.00000",
            "ETag": "\"0137d789c45cf8374605b69579b93640\"",
            "IsLatest": false,
            "LastModified": "2018-11-26T15:14:10.000Z",
            "StorageClass": "STANDARD",
            "Key": "test"
        }
    ]
}
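The JSON responses above can be post-processed with a few lines of Python, for example to separate the current version of an object from its archived versions. The sketch below uses an abridged copy of the sample response above as input:

```python
import json

# Abridged list-object-versions response, taken from the output above
response = json.loads("""
{
  "Versions": [
    {"Key": "test", "VersionId": "1543245250165373", "IsLatest": true,  "Size": 15},
    {"Key": "test", "VersionId": "1543245154.00000", "IsLatest": false, "Size": 20}
  ]
}
""")

# Split the versions on the IsLatest flag returned by the API
latest = [v for v in response["Versions"] if v["IsLatest"]]
archived = [v for v in response["Versions"] if not v["IsLatest"]]

print([v["VersionId"] for v in latest])
print([v["VersionId"] for v in archived])
```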

Managing Objects within a Bucket

1 . Download an object in a bucket

aws s3 cp s3://$BucketName/$ObjectName .

2 . List the object(s) of your bucket

aws s3 ls s3://$BucketName

3 . (Optional) Copy an object from one bucket to another bucket:

aws s3 cp s3://$BucketName/$ObjectName s3://$BucketCopy

4 . (Optional) Copy a whole bucket to the local host, or to another bucket

aws s3 cp s3://$BucketName . --recursive
aws s3 cp s3://$BucketName s3://$BucketCopy --recursive

Moving Objects between Buckets

Moving a Local File to a Bucket

aws s3 mv $FileName s3://$BucketName

Moving an Object from a Bucket to the Host

aws s3 mv s3://$BucketName/$ObjectName .

Moving an Object from one Bucket to another Bucket

aws s3 mv s3://$BucketName/$ObjectName s3://$BucketCopy

In the same way as aws s3 cp, you can use --recursive

aws s3 mv s3://$BucketName . --recursive

Synchronizing Buckets

Synchronizing a directory with a bucket

aws s3 sync . s3://$BucketName

Synchronizing two buckets together

aws s3 sync s3://$BucketName s3://$BucketCopy

Deleting Objects and Buckets

Deleting an object

aws s3 rm s3://$BucketName/$ObjectName

Deleting all objects from a bucket

aws s3 rm s3://$BucketName --recursive

Deleting a bucket. Note that a bucket must be empty before it can be deleted.

aws s3 rb s3://$BucketName

If the bucket is not empty, you can use the same command with the --force option. This command deletes all the objects in the bucket and then deletes the bucket itself.

aws s3 rb s3://$BucketName --force

Batch deleting all the objects and versions in a bucket.

aws s3api list-object-versions --bucket bucket_name | jq '.Versions | to_entries[] | .value | .Key, .VersionId' | xargs -n2 sh -c 'aws s3api delete-object --bucket bucket_name --key $0 --version-id $1'
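The jq pipeline above can also be expressed in Python. The sketch below only extracts the (key, version-id) pairs from a list-object-versions response using an abridged sample; it does not call the API itself, and note that a complete cleanup of a versioned bucket would also need to process any DeleteMarkers entries in the response:

```python
import json

# Abridged list-object-versions response used as sample input
response = json.loads("""
{"Versions": [
  {"Key": "test", "VersionId": "1543245250165373"},
  {"Key": "test", "VersionId": "1543245154.00000"}
]}
""")

# Build the (key, version-id) pairs that delete-object expects
pairs = [(v["Key"], v["VersionId"]) for v in response.get("Versions", [])]
for key, version_id in pairs:
    # Each pair corresponds to one call of:
    # aws s3api delete-object --bucket $BucketName --key <key> --version-id <version_id>
    print(key, version_id)
```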

Setting Tags on Buckets

1 . PUT a tag set on a bucket:

aws s3api put-bucket-tagging --bucket mybucket --tagging 'TagSet=[{Key=client,Value=scaleway}]'

2 . GET the tag set of a bucket:

aws s3api get-bucket-tagging --bucket mybucket
{
    "TagSet": [
        {
            "Key": "client",
            "Value": "scaleway"
        }
    ]
}

3 . DELETE the tag set of a bucket:

aws s3api delete-bucket-tagging --bucket mybucket

4 . GETting the tag set of a bucket without a tagging configuration returns an error message:

aws s3api get-bucket-tagging --bucket mybucket
An error occurred (NoSuchTagSet) when calling the GetBucketTagging operation: There is no tag set associated with the bucket or object.
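The --tagging shorthand used above maps onto the same TagSet structure the API returns. A small helper to build that structure from a plain Python dict can be sketched as follows; tag_set is an illustrative helper, not an aws-cli or boto3 function:

```python
def tag_set(tags: dict) -> dict:
    """Build an S3 TagSet structure from a dict of tag names to values."""
    return {"TagSet": [{"Key": k, "Value": v} for k, v in tags.items()]}

# Mirrors: --tagging 'TagSet=[{Key=client,Value=scaleway}]'
print(tag_set({"client": "scaleway"}))
```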

Setting Tags on Objects

1 . PUT a tag set on an object:

aws s3api put-object-tagging --bucket mybucket --key myfile.txt --tagging 'TagSet=[{Key=client,Value=scaleway},{Key=service,Value=objectstorage}]'

2 . GET the tag set of an object:

aws s3api get-object-tagging --bucket mybucket --key myfile.txt
{
    "TagSet": [
        {
            "Key": "client",
            "Value": "scaleway"
        },
        {
            "Key": "service",
            "Value": "objectstorage"
        }
    ],
    "VersionId": "1552299728854249"
}

3 . DELETE the tag set of an object:

aws s3api delete-object-tagging --bucket mybucket --key myfile.txt

Setting ACL

1 . Retrieve your organization ID from the Scaleway console, under your username > account in the top right corner

2 . Grant full control to all users of the same organization on a bucket

aws s3api put-bucket-acl --bucket $BucketName --grant-full-control id=$ORG_ID:$ORG_ID

3 . Grant full control to all users of the same organization on an object

aws s3api put-object-acl --bucket $BucketName --key $ObjectName --grant-full-control id=$ORG_ID:$ORG_ID

Configuring multiple Profiles

It is possible to manage buckets in multiple regions from the same computer with aws-cli by creating different profiles.

1 . Open the configuration file ~/.aws/config and edit it by adding a new profile for the fr-par region:

[plugins]
endpoint = awscli_plugin_endpoint

[default]
region = nl-ams
s3 =
  endpoint_url = https://s3.nl-ams.scw.cloud
  signature_version = s3v4
  max_concurrent_requests = 1000
  max_queue_size = 10000
  multipart_threshold = 50MB
  # Edit the multipart_chunksize value according to the sizes of the files you want to upload. The present configuration allows uploads of up to 10 GB (1,000 parts * 10 MB). For example, setting it to 5 GB allows you to upload files of up to 5 TB (1,000 parts * 5 GB).
  multipart_chunksize = 10MB
s3api =
  endpoint_url = https://s3.nl-ams.scw.cloud

[profile fr-par]
region = fr-par
s3 =
  endpoint_url = https://s3.fr-par.scw.cloud
  signature_version = s3v4
  max_concurrent_requests = 1000
  max_queue_size = 10000
  multipart_threshold = 50MB
  # Edit the multipart_chunksize value according to the sizes of the files you want to upload. The present configuration allows uploads of up to 10 GB (1,000 parts * 10 MB). For example, setting it to 5 GB allows you to upload files of up to 5 TB (1,000 parts * 5 GB).
  multipart_chunksize = 10MB
s3api =
  endpoint_url = https://s3.fr-par.scw.cloud

2 . Add authentication information for the new region in the file ~/.aws/credentials:

[default]
aws_access_key_id=<ACCESS_KEY>
aws_secret_access_key=<SECRET_KEY>
[fr-par]
aws_access_key_id=<ACCESS_KEY>
aws_secret_access_key=<SECRET_KEY>

Important: <ACCESS_KEY> and <SECRET_KEY> can have the same value for both regions.

3 . Add the --profile fr-par option to commands for actions to be performed in the Paris region. Otherwise, the default configuration is used and commands are executed in the Amsterdam region.

Important: It is not possible to have buckets with the same name in both regions.
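Profile selection can be illustrated with Python's standard configparser. The sample string below keeps only the region keys from the configuration above (aws-cli uses its own parser internally, so this is just a sketch of the lookup):

```python
import configparser

# Minimal ~/.aws/config content mirroring the two profiles above
SAMPLE = """
[default]
region = nl-ams

[profile fr-par]
region = fr-par
"""

config = configparser.ConfigParser()
config.read_string(SAMPLE)

# --profile fr-par selects the [profile fr-par] section;
# no --profile option falls back to [default]
print(config["default"]["region"])
print(config["profile fr-par"]["region"])
```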

Setting CORS configuration

Cross-origin resource sharing (CORS) is a browser security mechanism that governs how two domains communicate with each other through the browser. For a cross-origin request, the web browser sends a header:

Origin: http://URL_OF_THE_SITE

The browser then expects the remote server to reply with a header authorizing the URL of the originating site:

Access-Control-Allow-Origin: http://URL_OF_THE_SITE

CORS allows the user to configure this behavior and a number of other security-related options, such as which HTTP methods (PUT, GET, DELETE, etc.) and which headers are accepted.
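The browser's decision can be sketched as a small matcher over a CORS rule set. This simulates the evaluation locally on a simplified rule structure; the actual check is performed by the browser and the Object Storage platform:

```python
# Simplified CORS evaluation: a request is allowed if some rule
# lists its origin (or "*") and its HTTP method.
CORS_RULES = [
    {
        "AllowedOrigins": ["http://www.example.com"],
        "AllowedMethods": ["GET", "HEAD", "POST", "PUT", "DELETE"],
    }
]

def is_allowed(origin: str, method: str, rules=CORS_RULES) -> bool:
    """Return True if any rule allows this origin/method pair."""
    for rule in rules:
        origin_ok = "*" in rule["AllowedOrigins"] or origin in rule["AllowedOrigins"]
        method_ok = method in rule["AllowedMethods"]
        if origin_ok and method_ok:
            return True
    return False

print(is_allowed("http://www.example.com", "GET"))   # allowed by the rule above
print(is_allowed("http://evil.example", "GET"))      # origin not whitelisted
```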

PUT bucket CORS

1 . The CORS configuration must be provided as a JSON file. Create the file with a text editor:

nano cors.json

2 . Put the configuration into the file:

{
  "CORSRules": [
    {
      "AllowedOrigins": ["http://www.example.com"],
      "AllowedHeaders": ["*"],
      "AllowedMethods": ["GET", "HEAD", "POST", "PUT", "DELETE"],
      "MaxAgeSeconds": 3000,
      "ExposeHeaders": ["Etag"]
    }
  ]
}

3 . Save the file and send the configuration to the bucket:

aws s3api put-bucket-cors --bucket bucketname --cors-configuration file://cors.json

GET bucket CORS

1 . Retrieve the CORS configuration of the bucket by running the following command:

aws s3api get-bucket-cors --bucket bucketname

2 . The command will return the CORS configuration of the bucket:

{
    "CORSRules": [
        {
            "AllowedOrigins": [
                "http://www.example.com"
            ],
            "AllowedMethods": [
                "GET",
                "HEAD",
                "POST",
                "PUT",
                "DELETE"
            ],
            "ExposeHeaders": [
                "Etag"
            ],
            "AllowedHeaders": [
                "*"
            ],
            "MaxAgeSeconds": 3000
        }
    ]
}

If no CORS rules are defined for the bucket, an error message is displayed:

An error occurred (NoSuchCORSConfiguration) when calling the GetBucketCors operation: The CORS configuration does not exist

DELETE bucket CORS

1 . To delete the CORS configuration of a bucket, run the following command:

aws s3api delete-bucket-cors --bucket bucketname

2 . No output is returned if the operation is successful.

OPTIONS CORS

1 . To determine whether the actual request is supported for a specific origin, HTTP method, and headers, a browser can send a preflight request to the Object Storage platform:

curl -X OPTIONS -H "Origin: http://www.example.com" http://bucketname.s3.nl-ams.scw.cloud/index.html -H "Access-Control-Request-Method: GET"

2 . If everything is working as expected, the Object Storage platform will send a confirmation:

HTTP/1.1 200 OK


3 . In case the CORS request is not allowed, an error message is displayed:

CORSResponse: This CORS request is not allowed. This is usually because the evaluation of Origin, request method / Access-Control-Request-Method or Access-Control-Request-Headers are not whitelisted by the resource's CORS spec.


For more information, you can refer to the official AWS S3 and S3api documentation.
