Important: This documentation applies only to C1 Bare Metal compute instances. For the latest generation of Virtual Cloud Instances refer to the Block Storage Volumes documentation.
This page shows how to connect an additional volume manually on C1 Bare Metal compute instances.
Requirements
- You have an account and are logged into console.scaleway.com
- You have configured your SSH Key
- You have a running Bare Metal instance
- Your instance has additional volumes
When you create a C1 instance with additional storage, the storage device is connected automatically at instance boot.
If you want to avoid this behavior and connect additional storage devices manually, execute the following command on your instance (Ubuntu only): echo manual > /etc/init/nbd-add-extra-volumes.override
There are three steps to connect a storage device manually:
- Retrieve the IP address and port of the NBD server exporting your volume from the instance metadata
- Connect the device with the NBD client
- Format and mount the volume
Important: The maximum number of volumes attached to a Bare Metal instance is limited to 15 devices.
The NBD client requires the IP address and the port number of the NBD server exporting your volume.
These settings are available when you are logged into your instance by running the scw-metadata
command:
NAME=scw-unruffled-babbage
TAGS=0
STATE_DETAIL=booted
HOSTNAME=scw-unruffled-babbage
PUBLIC_IP='DYNAMIC IP ADDRESS'
PUBLIC_IP_DYNAMIC=False
[...]
BOOTSCRIPT='KERNEL TITLE DEFAULT DTB ID INITRD BOOTCMDARGS ARCHITECTURE ORGANIZATION PUBLIC'
BOOTSCRIPT_KERNEL=http://169.254.42.24/kernel/x86_64-mainline-lts-4.9-4.9.93-rev1/vmlinuz-4.9.93
BOOTSCRIPT_TITLE='x86_64 mainline 4.9.93 rev1'
BOOTSCRIPT_DEFAULT=False
BOOTSCRIPT_DTB=''
BOOTSCRIPT_ID=15fbd2f7-a0f9-412b-8502-6a44da8d98b8
BOOTSCRIPT_INITRD=http://169.254.42.24/initrd/initrd-Linux-x86_64-v3.14.6.gz
BOOTSCRIPT_BOOTCMDARGS='LINUX_COMMON scaleway boot=local nbd.max_part=16'
BOOTSCRIPT_ARCHITECTURE=x86_64
BOOTSCRIPT_ORGANIZATION=11111111-1111-1111-1111-111111111111
BOOTSCRIPT_PUBLIC=True
PRIVATE_IP=10.1.87.115
VOLUMES=0
VOLUMES_0='NAME MODIFICATION_DATE EXPORT_URI VOLUME_TYPE CREATION_DATE STATE ORGANIZATION SERVER ID SIZE'
VOLUMES_0_NAME=snapshot-de728daa-0bf6-4c64-abf5-a9477e791c83-2019-03-05_10:13
VOLUMES_0_MODIFICATION_DATE=2019-03-12T11:13:40.819486+00:00
-> VOLUMES_0_EXPORT_URI=nbd://10.1.130.237:6272
VOLUMES_0_VOLUME_TYPE=l_ssd
VOLUMES_0_CREATION_DATE=2019-03-12T11:13:40.569925+00:00
VOLUMES_0_STATE=available
VOLUMES_0_ORGANIZATION=04dcf44f-a6ca-4e69-a74c-f0c557d87d79
VOLUMES_0_SERVER='ID NAME'
VOLUMES_0_SERVER_ID=6de091ca-efeb-46e0-b406-6fed0741feed
VOLUMES_0_SERVER_NAME=scw-unruffled-babbage
VOLUMES_0_ID=ca369d02-0061-4549-9392-d81fe80e9ed3
VOLUMES_0_SIZE=50000000000
IPV6='NETMASK GATEWAY ADDRESS'
IPV6_NETMASK=127
IPV6_GATEWAY=2001:bc8:4400:2000::5d32
IPV6_ADDRESS=2001:bc8:4400:2000::5d33
[...]
VOLUMES_0 / VOLUMES_0_* always refer to the root volume of the server, which is connected and mounted automatically at boot time.
VOLUMES_[1-15] / VOLUMES_[1-15]_* are the additional volumes attached to the server.
VOLUMES_[1-15]_EXPORT_URI (for example nbd://10.1.0.44:4321) contains the IP address and the port number of the NBD server exporting your volume.
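The EXPORT_URI value can be split into the address and port that the NBD client expects using plain shell parameter expansion. A minimal sketch, using a hypothetical endpoint value in place of the real metadata:

```shell
#!/bin/sh
# Hypothetical EXPORT_URI value as it would be returned by scw-metadata
uri="nbd://10.1.0.44:4321"

hostport="${uri#nbd://}"   # strip the scheme -> 10.1.0.44:4321
ip="${hostport%:*}"        # address part     -> 10.1.0.44
port="${hostport##*:}"     # port part        -> 4321
echo "$ip $port"
```

The two resulting values are exactly the arguments nbd-client takes, as shown in the next step.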
An instance of the NBD client must be started for each storage device to import.
For instance:
root@c1-X-Y-Z-T:~# nbd-client 10.1.0.44 4321 /dev/nbd1
Negotiation: ..size = 9536MB
bs=1024, sz=9999998976 bytes
root@c1-X-Y-Z-T:~# fdisk -l -u /dev/nbd1
Disk /dev/nbd1: 100.0 GB, 99999997952 bytes
255 heads, 63 sectors/track, 12157 cylinders, total 195312496 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
In the above example, nbd-client 10.1.0.44 4321 /dev/nbd1
connects to the NBD server.
The output of the fdisk -l -u /dev/nbd1
command confirms that the storage device /dev/nbd1
is successfully attached to the server.
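The per-volume connection can be sketched as a loop that walks the VOLUMES_[1-15]_EXPORT_URI metadata entries and derives the nbd-client invocation for each one. The metadata sample below is hard-coded for illustration; on a real instance you would capture it with metadata="$(scw-metadata)" and execute the printed commands (or run nbd-client directly):

```shell
#!/bin/sh
# Sample metadata; on the instance, use: metadata="$(scw-metadata)"
metadata="VOLUMES_0_EXPORT_URI=nbd://10.1.130.237:6272
VOLUMES_1_EXPORT_URI=nbd://10.1.0.44:4321"

# VOLUMES_0 is the root volume and is already connected; start at index 1.
for i in 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15; do
  uri=$(printf '%s\n' "$metadata" | grep "^VOLUMES_${i}_EXPORT_URI=" | cut -d= -f2)
  [ -n "$uri" ] || continue
  hostport=${uri#nbd://}
  # Print the command instead of running it, so the sketch is safe to dry-run:
  echo "nbd-client ${hostport%:*} ${hostport##*:} /dev/nbd${i}"
done
```

With the sample metadata above, the loop prints a single command for the one additional volume, nbd-client 10.1.0.44 4321 /dev/nbd1, matching the example shown earlier.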
If the new volume has never been formatted, you need to format the volume using mkfs
before you can mount it.
For instance, the following command creates an ext4
file system on the volume:
root@c1-X-Y-Z-T:~# mkfs -t ext4 /dev/nbd1
mke2fs 1.42.9 (4-Feb-2014)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
610800 inodes, 2441406 blocks
122070 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2503999488
75 block groups
32768 blocks per group, 32768 fragments per group
8144 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
To mount the device manually as /mnt/data, run the following commands:
root@c1-X-Y-Z-T:~# mkdir -p /mnt/data
root@c1-X-Y-Z-T:~# mount /dev/nbd1 /mnt/data
root@c1-X-Y-Z-T:~# ls -la /mnt/data/
total 24
drwxr-xr-x 3 root root 4096 Jan 1 00:07 .
drwxr-xr-x 3 root root 4096 Jan 1 00:07 ..
drwx------ 2 root root 16384 Jan 1 00:07 lost+found
To mount the additional volume automatically, you can create a systemd mount unit that mounts your volume during the boot of your instance.
If not yet done, create the directory into which you want to mount your volume: mkdir -p /mnt/data
As the volume is empty by default, you have to create a file system before you can use it. To format it with an ext4
file system, use the following command: mkfs -t ext4 /dev/nbd1
To get the UUID
of your volume, run the command blkid
and take note of the ID, as you will need it in the next step.
Create or edit the file that corresponds to the path of your mount point (nano /etc/systemd/system/mnt-data.mount)
and edit it as follows. The file name of the unit must correspond to the path where you mount the volume (/mnt/data
⇒ mnt-data.mount
):
[Unit]
Description=Mount NBD Volume at boot
[Mount]
What=UUID=16575a81-bb2c-46f3-9ad8-3bbe20157f7c
Where=/mnt/data
Type=ext4
Options=defaults
[Install]
WantedBy=multi-user.target
Replace the UUID
with the ID of your volume that you noted from blkid.
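Because the volume is reached over the network, the mount can only succeed once nbd-client has connected the device. As a sketch, the unit can declare this ordering explicitly; the unit name nbd-volumes.service below is hypothetical and stands for whatever mechanism connects your NBD devices at boot:

```ini
[Unit]
Description=Mount NBD Volume at boot
# nbd-volumes.service is a hypothetical service that runs nbd-client for
# the additional volumes; the mount is ordered after it and the network.
After=network-online.target nbd-volumes.service
Requires=nbd-volumes.service

[Mount]
What=UUID=16575a81-bb2c-46f3-9ad8-3bbe20157f7c
Where=/mnt/data
Type=ext4
Options=defaults

[Install]
WantedBy=multi-user.target
```

Without such an ordering, systemd may attempt the mount before the NBD device exists and fail at boot.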
Now reload systemd: systemctl daemon-reload
Launch the script to mount the volume: systemctl start mnt-data.mount
Finally enable the script to mount your volume automatically during boot: systemctl enable mnt-data.mount
Your volume will be mounted automatically after a reboot. You can run the df -h
command to list all your devices and their mount points:
root@scw-65acb0:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 1009M 0 1009M 0% /dev
tmpfs 203M 12M 191M 6% /run
/dev/nbd0 46G 454M 43G 2% /
tmpfs 1011M 0 1011M 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 1011M 0 1011M 0% /sys/fs/cgroup
/dev/nbd1 46G 52M 44G 1% /mnt/data