How to create and manage a GPU Instance
Scaleway GPU Instances are virtual compute instances equipped with dedicated high-end Nvidia graphics processing units (GPUs). They are ideal for data processing, artificial intelligence, rendering and video encoding. After you have created your GPU Instance, you can connect to it via SSH and run one of our ready-made Docker images to access a preinstalled environment with all your favorite AI libraries and tools. In addition, GPU Instances have all the features of our regular Instances, including flexible IPs, Security Groups, Private Networks, backups and more. When you are done using your GPU Instance, you can easily delete it from the Scaleway console.
You may need certain IAM permissions to carry out some actions described on this page. This means:
- you are the Owner of the Scaleway Organization in which the actions will be carried out, or
- you are an IAM user of the Organization, with a policy granting you the necessary permission sets

Before you start, make sure that:
- You have an account and are logged into the Scaleway console
- You have created your SSH key and added it to your account
How to create a GPU Instance
- Click Instances in the Compute section of the side menu. The Instance dashboard displays.
- Click Create Instance. The Instance creation wizard displays.
- Complete the following steps in the wizard:
- Choose an Availability Zone, which is the geographical region where your GPU Instance will be deployed. GPU Instances are currently available in the PAR-1 and PAR-2 Availability Zones; availability depends on the GPU Instance type.
- Select GPU as the Instance type, and then choose the type of GPU Instance you want. Different types have different prices, processing power, memory, storage options and bandwidth.
- Choose an Image to run on your GPU Instance. The following images are available, depending on your GPU Instance type:
- Ubuntu Jammy GPU OS 12: our latest GPU OS image, with preinstalled Nvidia drivers and an Nvidia Docker environment. Other software is not preinstalled, as you are expected to use Docker to launch a working environment suitable for your needs. You can build your own container, use an official Docker image, or choose from our range of Scaleway Docker images, each of which has CUDA installed and is customized for different purposes.
- Add Volumes. This is an optional step. Volumes are storage spaces used by your Instances. You can keep the default minimum local storage, add extra Block Storage volumes, or reduce the local storage of your GPU Instance. You can also choose which volume to run the OS from.
- Enter a Name for your GPU Instance, or leave the randomly generated name in place. Optionally, you can also add tags to help you organize your GPU Instance.
- Click Advanced options if you want to configure a flexible IP, a local bootscript or a cloud-init configuration. Otherwise, leave these options at their default values.
- Verify the SSH keys that will give you access to your GPU Instance.
- Verify the Estimated Cost of your GPU Instance, based on the specifications you chose.
- Click Create Instance to finish. The creation of your GPU Instance begins, and you are informed when the GPU Instance is ready.
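If you prefer to create your GPU Instance from the command line instead of the console, the Scaleway CLI provides an equivalent. The sketch below is minimal and illustrative: the Instance type (GPU-3070-S), image label (ubuntu_jammy_gpu_os_12), name, and zone are assumptions to replace with your own choices, and the exact argument syntax may vary between CLI versions.

```
# Create a GPU Instance with the Scaleway CLI (all values below are placeholders)
scw instance server create \
  type=GPU-3070-S \
  image=ubuntu_jammy_gpu_os_12 \
  name=my-gpu-instance \
  ip=new \
  zone=fr-par-2
```

You can then run `scw instance server list zone=fr-par-2` to check that the new Instance reaches the running state.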
How to connect to a GPU Instance
See our documentation on how to connect to your Instance via SSH.
Once you have connected via SSH, you can launch a Docker container to start working on your AI projects.
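For example, a quick way to confirm that the GPU is visible and start a containerized environment looks like the following. This is a minimal sketch: the CUDA image tag is an illustrative public image, not necessarily one of the Scaleway Docker images mentioned above.

```
# Check that the Nvidia driver detects the GPU
nvidia-smi

# Run an interactive container with access to all GPUs (illustrative image tag)
docker run --rm -it --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```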
How to use Instance features
For instructions on using any type of GPU Instance feature, including flexible IPs, placement groups, Private Networks, backups, and much more, check out our full Instance how-to documentation.
How to delete a GPU Instance
See our documentation on how to delete your Instance.
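If you manage your Instances from the command line, the Scaleway CLI offers an equivalent; the sketch below is illustrative, and the Instance ID and zone are placeholders. Check `scw instance server terminate --help` for options that also remove attached volumes and flexible IPs.

```
# Stop and delete the GPU Instance (placeholder ID and zone)
scw instance server terminate <instance-id> zone=fr-par-2
```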