
Configuring the Second Network Interface on a C2 BareMetal Cloud Instance

C2M and C2L instances come with two 2.5Gbit/s interfaces. By default, all the traffic is routed through eth0.

This tutorial explains how to configure routing so that eth1 carries internal traffic while eth0 carries internet traffic (and NBD connections).

Requirements

- A C2M or C2L instance (these come with two network interfaces)
- Root access to the instance, for example over SSH

Routing network traffic depending on its destination

Your volumes are attached to your instance over the network, using the NBD protocol.

To prevent your NBD devices from being disconnected later in this tutorial, you must explicitly route their traffic through eth0.

Note: The network interface names may differ depending on the OS running on your Cloud Server. Run ifconfig to determine your server's interface names.
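
If your image ships iproute2, the ip tool gives the same information in brief form. The output below is trimmed and reuses this tutorial's example addresses; yours will differ:

$> ip -br addr show
lo               UNKNOWN        127.0.0.1/8
eth0             UP             10.1.169.93/31
eth1             UP             10.1.169.95/31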

1. Get the NBD server IP address:

$> ps auxf | grep xnbd-client
root      1830  0.0  0.0   1772    64 ?        S    09:01   0:00 @xnbd-client --blocksize 4096 --retry=900 10.1.133.45 4896 /dev/nbd0
root      4265  0.0  0.0  12956   964 pts/0    S+   09:08   0:00          \_ grep --color=auto xnbd-client

Here, the NBD server is 10.1.133.45.
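
If you prefer to extract the address programmatically, a one-liner such as the following can be used. It is only a sketch and assumes the server IP appears on the xnbd-client command line, as in the output above:

# a sketch: print the server IP of every running xnbd-client process
$> ps auxww | grep '[x]nbd-client' | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}'
10.1.133.45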

2. Show the default routes:

$> ip route
default via 10.1.169.92 dev eth0
10.1.169.92/31 dev eth0  proto kernel  scope link  src 10.1.169.93
10.1.169.94/31 dev eth1  proto kernel  scope link  src 10.1.169.95

The eth0 gateway is 10.1.169.92 (second line), the eth1 gateway is 10.1.169.94 (third line).
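
The gateways can also be read directly from the routing table. The commands below are a sketch; they assume that, as on these instances, each interface sits on a /31 whose first address is the gateway:

# eth0 gateway, taken from the default route
$> ip route show dev eth0 | awk '/^default/ {print $3}'
10.1.169.92
# eth1 gateway, first address of the /31 attached to eth1
$> ip route show dev eth1 | awk '{print $1}' | cut -d/ -f1
10.1.169.94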

3. Route the NBD connection explicitly through eth0:

# replace 10.1.133.45 with nbd server IP
# replace 10.1.169.92 with eth0 gateway IP
$> ip route add 10.1.133.45/32 via 10.1.169.92
$> ip route
default via 10.1.169.92 dev eth0
10.1.133.45 via 10.1.169.92 dev eth0 # the new route
10.1.169.92/31 dev eth0  proto kernel  scope link  src 10.1.169.93
10.1.169.94/31 dev eth1  proto kernel  scope link  src 10.1.169.95

If you have several NBD connections, repeat this step for each of them.
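
To script this for several connections, a loop like the one below can help. It is only a sketch: it assumes every NBD server IP appears on an xnbd-client command line, and 10.1.169.92 must be replaced with your eth0 gateway IP:

# a sketch: add an eth0 route for every NBD server used by a running xnbd-client
$> for srv in $(ps auxww | grep '[x]nbd-client' | grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' | sort -u); do
     ip route add "$srv"/32 via 10.1.169.92
   done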

Internal traffic uses the subnets 10.0.0.0/8 and 169.254.0.0/16.

4. Add the routes:

# replace 10.1.169.94 with eth1 gateway IP
$> ip route add 10.0.0.0/8 via 10.1.169.94
$> ip route add 169.254.0.0/16 via 10.1.169.94
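
Routes added with ip route are lost on reboot. On a Debian or Ubuntu image using ifupdown, one way to make them persistent is to append post-up lines to the stanzas in /etc/network/interfaces. The snippet below is only a sketch: the iface definitions are placeholders for your image's existing configuration, and the addresses reuse this tutorial's example values:

# /etc/network/interfaces (sketch, Debian/Ubuntu with ifupdown)
# keep your image's existing iface configuration and only add the post-up lines
iface eth0 inet dhcp
    post-up ip route add 10.1.133.45/32 via 10.1.169.92   # NBD server via eth0
iface eth1 inet dhcp
    post-up ip route add 10.0.0.0/8 via 10.1.169.94       # internal traffic via eth1
    post-up ip route add 169.254.0.0/16 via 10.1.169.94   # metadata API via eth1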

Verifying the configuration

1. To ensure your NBD connection is still valid, try to create a file:

$> touch testfile

2. If the file was created successfully, remove it with rm testfile.

3. Check that you can still reach the metadata API:

$> curl http://169.254.42.42
{"api": "api-metadata", "description": "Metadata API, just query http://169.254.42.42/conf or http://169.254.42.42/conf?format=json to get info about yourself"}

4. Ensure you are really using eth1 for internal traffic:

$> ip route get 169.254.42.42
169.254.42.42 via 10.1.169.94 dev eth1  src 10.1.169.95
    cache
$> ip route get 10.0.1.2 # 10.0.1.2 is a random IP address in 10.0.0.0/8
10.0.1.2 via 10.1.169.94 dev eth1  src 10.1.169.95
    cache
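
You can also check that traffic to the NBD server itself still leaves through eth0. Replace 10.1.133.45 with your NBD server IP; the output below reuses this tutorial's example addresses:

$> ip route get 10.1.133.45
10.1.133.45 via 10.1.169.92 dev eth0  src 10.1.169.93
    cache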
