Terraform: how to init your infrastructure
If you want to quickly and easily set up a cloud infrastructure, one of the best ways to do it is to create a Terraform repository. Learn the basics to get your infrastructure started with Terraform.
We are going to cover some new Terraform concepts to help you manage your infrastructure easily. And since we are continuously working on our Terraform provider, I'm also excited to cover some new products that will strengthen your architecture.
First, we will use our first repository as a base to set up our new infrastructure. We will transform our main directory into sub-sections, with multiple modules inside one big Scaleway module. This structure makes it possible to deploy all the infrastructure you need in one click. And if you don't need a product (a database, for example), you simply won't call its sub-module in your main.tf.
You can find this second, more developed repository right here.
Before starting on the project, you need to have an account and your credentials all set up. You'll also need to install Terraform on the server you are using, or locally, with the latest version of the Scaleway Terraform provider.
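Here is a minimal sketch of what the provider.tf could look like. The "scaleway/scaleway" source is the official provider; the zone and region values are just example defaults, and credentials are best supplied through the SCW_ACCESS_KEY, SCW_SECRET_KEY, and SCW_DEFAULT_PROJECT_ID environment variables rather than hardcoded:

terraform {
  required_version = ">= 0.13"
  required_providers {
    scaleway = {
      source = "scaleway/scaleway"
    }
  }
}

# Credentials are read from the SCW_* environment variables.
# The zone and region below are example defaults.
provider "scaleway" {
  zone   = "fr-par-1"
  region = "fr-par"
}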
The HashiCorp documentation defines a module as a container for multiple resources that are used together. More concretely, a module is a small portion of your code that you put in a dedicated directory so you can reuse it later. It allows you to organize and encapsulate your configuration, reuse it across projects, and keep your deployments consistent.
In this tutorial, I will take one specific product (in this case, an instance) and turn it into a module. Then, we will deploy our infrastructure.
We are going to deploy the basics of an infrastructure that lets you build any kind of application: an instance, a database, a Kubernetes cluster, and a load balancer. We'll wrap everything in a Private Network, behind a Public Gateway, for safety reasons. However, the integration of Kubernetes clusters inside a VPC is not available yet, so the cluster will live outside of our private network.
The first change is that we will modify the tree structure of the repository. Instead of having everything on the same level, with one file per product/feature, we are going to divide our resources into sub-directories, each with the same layout.
We are going from a flat layout, where everything sits at the same level with one file per product (database.tf, kapsule.tf, and so on), to this:
tree
.
├── LICENSE
├── README.md
├── Terraform_Module_Scaleway_Schema.webp
├── backend.tf
├── main.tf
├── module
│   ├── database
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   ├── provider.tf
│   │   └── variables.tf
│   ├── instance
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   ├── provider.tf
│   │   └── variables.tf
│   ├── kapsule
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   ├── provider.tf
│   │   └── variables.tf
│   ├── loadbalancer
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   ├── provider.tf
│   │   └── variables.tf
│   └── vpc
│       ├── main.tf
│       ├── outputs.tf
│       ├── provider.tf
│       └── variables.tf
├── provider.tf
├── terraform.tfvars
└── variables.tf

6 directories, 28 files
Although it looks more complicated at first glance, this is an easier and more scalable way to deploy and maintain your infrastructure. In fact, your root main.tf will only be used to call the resources you need, by sourcing the right modules.
Your variables and outputs will live next to their dedicated resources, in the right module. And if you need global variables, you will still have the root variables.tf file.
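As a sketch, that root variables.tf can declare the globals that every module call below receives; env, region, and zone are the ones used throughout this repository, and the defaults here are only illustrative:

variable "env" {
  type        = string
  description = "Environment name, used as a prefix for resource names"
  default     = "dev"
}

variable "region" {
  type        = string
  description = "Scaleway region to deploy into"
  default     = "fr-par"
}

variable "zone" {
  type        = string
  description = "Scaleway zone to deploy into"
  default     = "fr-par-1"
}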
Finally, if you want to deploy new resources, you simply create a new sub-directory; and you can call any module as many times as you need, as shown below.
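For example, here is a hypothetical second call of the instance module that deploys a smaller server for a staging environment; only the module name and the values change:

# Hypothetical second call of the same module, for a staging server.
module "instance_staging" {
  source              = "./module/instance"
  instance_size_in_gb = 10
  instance_type       = "DEV1-S"
  instance_image      = "ubuntu_focal"
  volume_size_in_gb   = 10
  volume_type         = "l_ssd"
  tags                = ["terraform instance", "staging"]
  private_network_id  = module.vpc.private_network_id
}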
Let’s take a deep dive into our new main.tf, where all the logic is going to be gathered.
We went from this:
resource "scaleway_instance_ip" "public_ip" {}

resource "scaleway_instance_volume" "scw-instance" {
  size_in_gb = 30
  type       = "l_ssd"
}

resource "scaleway_instance_server" "scw-instance" {
  type                  = "DEV1-L"
  image                 = "ubuntu_focal"
  tags                  = ["terraform instance", "scw-instance"]
  ip_id                 = scaleway_instance_ip.public_ip.id
  additional_volume_ids = [scaleway_instance_volume.scw-instance.id]

  root_volume {
    # A DEV1-L instance has 80 GB of local storage; after subtracting the
    # 30 GB additional l_ssd volume, the root volume must be 50 GB.
    size_in_gb = 50
  }
}
to this:
module "instance" {
  source              = "./module/instance"
  instance_size_in_gb = var.instance_size_in_gb
  instance_type       = var.instance_type
  instance_image      = var.instance_image
  volume_size_in_gb   = var.volume_size_in_gb
  volume_type         = var.volume_type
  tags                = var.tags
  private_network_id  = module.vpc.private_network_id
}

module "database" {
  source                         = "./module/database"
  rdb_instance_node_type         = var.rdb_instance_node_type
  rdb_instance_engine            = var.rdb_instance_engine
  rdb_is_ha_cluster              = var.rdb_is_ha_cluster
  rdb_disable_backup             = var.rdb_disable_backup
  rdb_instance_volume_type       = var.rdb_instance_volume_type
  rdb_instance_volume_size_in_gb = var.rdb_instance_volume_size_in_gb
  rdb_user_root_password         = var.rdb_user_root_password
  rdb_user_scaleway_db_password  = var.rdb_user_scaleway_db_password
  instance_ip_addr               = module.instance.instance_ip_addr
  private_network_id             = module.vpc.private_network_id
  user_name                      = var.user_name
  zone                           = var.zone
  region                         = var.region
  env                            = var.env
}

module "kapsule" {
  source                  = "./module/kapsule"
  kapsule_cluster_version = var.kapsule_cluster_version
  kapsule_pool_size       = var.kapsule_pool_size
  kapsule_pool_min_size   = var.kapsule_pool_min_size
  kapsule_pool_max_size   = var.kapsule_pool_max_size
  kapsule_pool_node_type  = var.kapsule_pool_node_type
  cni                     = var.cni
  zone                    = var.zone
  region                  = var.region
  env                     = var.env
}

module "loadbalancer" {
  source             = "./module/loadbalancer"
  lb_size            = var.lb_size
  inbound_port       = var.inbound_port
  forward_port       = var.forward_port
  forward_protocol   = var.forward_protocol
  private_network_id = module.vpc.private_network_id
  zone               = var.zone
  region             = var.region
  env                = var.env
}

module "vpc" {
  source              = "./module/vpc"
  public_gateway_dhcp = var.public_gateway_dhcp
  public_gateway_type = var.public_gateway_type
  bastion_port        = var.bastion_port
  zone                = var.zone
  region              = var.region
  env                 = var.env
}
As you can see, we deleted the instance resources from our previous example and replaced them with module calls. Our main.tf is now our "call center": it calls all the modules we need to deploy our infrastructure.
If we compare it with our previous repository, the logic of the resources has simply moved to module/instance/main.tf.
Also, I advise you to turn every argument into a variable. First, it adds granularity to your project: you will be able to deploy multiple resources easily, with different configurations. Second, it gives you an overview of your whole configuration at a glance.
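As an illustration, the instance module's variables.tf can then declare each of these arguments; the types and defaults below are assumptions mirroring the values from the first example, and the remaining volume, tags, and private network variables follow the same pattern:

variable "instance_type" {
  type        = string
  description = "Commercial type of the instance"
  default     = "DEV1-L"
}

variable "instance_image" {
  type        = string
  description = "Image used to boot the instance"
  default     = "ubuntu_focal"
}

variable "instance_size_in_gb" {
  type        = number
  description = "Size of the instance's root volume, in GB"
  default     = 50
}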
The main.tf from your instance’s module will look quite similar to the previous version (I just added the private network to my instance).
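Here is a sketch of what module/instance/main.tf could look like, assuming instance_size_in_gb is the root volume size and volume_size_in_gb the additional volume size, as in the first example; the private_network block is the one addition that attaches the server to our VPC:

resource "scaleway_instance_ip" "public_ip" {}

resource "scaleway_instance_volume" "scw-instance" {
  size_in_gb = var.volume_size_in_gb
  type       = var.volume_type
}

resource "scaleway_instance_server" "scw-instance" {
  type                  = var.instance_type
  image                 = var.instance_image
  tags                  = var.tags
  ip_id                 = scaleway_instance_ip.public_ip.id
  additional_volume_ids = [scaleway_instance_volume.scw-instance.id]

  root_volume {
    size_in_gb = var.instance_size_in_gb
  }

  # The addition compared to the previous version: attach the server
  # to the private network created by the vpc module.
  private_network {
    pn_id = var.private_network_id
  }
}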
Then, I simply applied the same logic to all my previous files (database.tf, kapsule.tf, etc.) and created a module for each product.
All the values still live in the tfvars file, which gives you a clear overview of the resources you have deployed. And if you want to switch to a bigger database, for example, you simply update this file: there's no need to modify the structure of your architecture or touch the resource itself.
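For instance, here is a hypothetical excerpt of the terraform.tfvars; scaling the database up or down is a one-value change (the node type and engine names are examples):

# Database configuration: upgrading to a bigger node type is a one-line change.
rdb_instance_node_type         = "db-dev-s" # switch to e.g. "db-gp-xs" to scale up
rdb_instance_engine            = "PostgreSQL-14"
rdb_is_ha_cluster              = false
rdb_instance_volume_type       = "bssd"
rdb_instance_volume_size_in_gb = 10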
Scaleway launched VPC at the end of 2021, and for the last six months we have continuously added features to our network ecosystem. Here is the example I used to deploy our VPC:
# Private Network creation
resource "scaleway_vpc_private_network" "scaleway_pn" {
  name = "${var.env}-private_network"
}

# DHCP
resource "scaleway_vpc_public_gateway_dhcp" "scaleway_dhcp" {
  subnet             = var.public_gateway_dhcp
  push_default_route = var.dhcp_push_default_route
}

# Public Gateway
resource "scaleway_vpc_public_gateway_ip" "scaleway" {}

resource "scaleway_vpc_public_gateway" "scaleway_pg" {
  name            = "${var.env}-public_gateway"
  type            = var.public_gateway_type
  ip_id           = scaleway_vpc_public_gateway_ip.scaleway.id
  bastion_enabled = var.bastion_enabled
  bastion_port    = var.bastion_port
}

# Routing
resource "scaleway_vpc_gateway_network" "scaleway" {
  gateway_id         = scaleway_vpc_public_gateway.scaleway_pg.id
  private_network_id = scaleway_vpc_private_network.scaleway_pn.id
  dhcp_id            = scaleway_vpc_public_gateway_dhcp.scaleway_dhcp.id
  cleanup_dhcp       = var.cleanup_dhcp
  enable_masquerade  = var.enable_masquerade

  depends_on = [
    scaleway_vpc_public_gateway.scaleway_pg,
    scaleway_vpc_public_gateway_ip.scaleway,
    scaleway_vpc_private_network.scaleway_pn,
  ]
}
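One small but necessary piece, sketched here: module/vpc/outputs.tf has to expose the private network ID so the root main.tf can wire it into the other modules as module.vpc.private_network_id:

output "private_network_id" {
  description = "ID of the private network, consumed by the other modules"
  value       = scaleway_vpc_private_network.scaleway_pn.id
}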
With our current infrastructure in place, you can see that we have implemented our virtual private cloud:
You can now run a secure infrastructure thanks to the combination of a Public Gateway and Private Networks.
I did not go through all the code in this repository or every specificity: for example, I did not detail how to secure sensitive information or how to pass outputs between modules. The examples above speak for themselves.
However, you now have a fully functional module that lets you rapidly deploy a fully usable infrastructure to launch your first project. At the end of the day, all you have to update is your tfvars file. This repository can serve as a guideline for anyone looking to write and maintain their Terraform infrastructure.
More products and features are coming in the next few months: we're preparing Terraform integration for Serverless, CaaS, and more new products, so stay tuned!