Scaleway GPU Instances are virtual compute instances equipped with dedicated high-end Nvidia graphics processing units (GPUs). They are ideal for data processing, artificial intelligence, rendering and video encoding. After you have created your GPU Instance, you can connect to it via SSH and run one of our ready-made Docker images to access a preinstalled environment with all your favorite AI libraries and tools. In addition, GPU Instances have all the features of our regular Instances, including flexible IPs, Security Groups, Private Networks, backups and more. When you are done using your GPU Instance, you can easily delete it from the Scaleway console.
Click Instances in the Compute section of the side menu. The Instances page displays.
Click Create an Instance. The Instance creation wizard displays.
Complete the following steps in the wizard:
- Choose an Availability Zone, which is the geographical region where your GPU Instance will be deployed. GPU Instances are currently available in PAR-1 and/or PAR-2, depending on the GPU Instance type.
- Select GPU as the Instance type, and then choose the type of GPU Instance you want. Different types have different prices, processing power, memory, storage options and bandwidth.
- Choose an Image to run on your GPU Instance. The following images are available:
- Ubuntu Focal GPU OS11: our latest GPU OS image, with preinstalled Nvidia drivers and an Nvidia Docker environment. Other software is not pre-installed, as you are expected to use Docker to launch a working environment suitable for your needs. You can build your own container, use an official Docker image, or choose from our range of Scaleway Docker images, each of which has CUDA installed and is customised for different purposes.
- Ubuntu Bionic ML 10.1: a legacy GPU OS image, with preinstalled Nvidia drivers, CUDA 10.1, an Nvidia Docker environment and a ready-to-use “ai” conda environment (preinstalled libraries include Numpy, pandas, scikit-learn, Tensorflow, Pytorch, Jax and Rapids).
- Ubuntu Bionic ML 9.2: a legacy GPU OS image, with preinstalled Nvidia drivers, CUDA 9.2, an Nvidia Docker environment and a ready-to-use “ai” conda environment (preinstalled libraries include Numpy, pandas, scikit-learn and Pytorch).
- Add Volumes. This is an optional step. Volumes are storage spaces used by your Instances. You can leave the default settings of a minimum local storage, or choose to add more Block Storage and/or reduce the Local Storage of your GPU Instance. You can also choose which volume to run the OS from.
- Enter a Name for your GPU Instance, or leave the randomly generated name in place. Optionally, you can also add tags to help you organize your GPU Instance.
- Click Advanced Options if you want to configure a flexible IP, a local bootscript or a cloud-init configuration. Otherwise, leave these options at their default values.
- Verify the SSH Keys that will give you access to your GPU Instance.
- Verify the Estimated Cost of your GPU Instance, based on the specifications you chose.
Click Create a new Instance to finish. The creation of your GPU Instance begins, and you are informed when the GPU Instance is ready.
See our documentation on how to connect to your Instance via SSH.
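As a quick sketch (assuming your SSH key is loaded and the image uses the default root user), the connection command looks like this, with <ip-address> replaced by your GPU Instance's IP address:
ssh root@<ip-address>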
To access a pre-installed working environment with all your favourite Python packages, you need to launch a Docker container.
If you created your GPU Instance with one of the legacy Ubuntu Bionic ML OS images, once you connect to your Instance you are already in a pre-installed ready-to-use Python environment, managed with conda. You do not need to follow the steps for launching a Docker container, and can get right to work.
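As an illustration, you can inspect this environment with standard conda commands (a sketch; the “ai” environment name comes from the image descriptions above):
conda env list # list available environments; “ai” should be among them
conda list # show the packages installed in the active environment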
Choose one of our Docker AI images (e.g. Tensorflow, Pytorch, Jax) based on your needs.
Run the following command to launch the Docker container. In the following example, we launch a container based on the Tensorflow image:
docker run --runtime=nvidia -it --rm -p 8888:8888 -p 6006:6006 rg.fr-par.scw.cloud/scw-ai/tensorflow:latest /bin/bash
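For reference, here is what each part of that command does (an annotated sketch of the same command; the options are standard Docker flags):
# --runtime=nvidia : expose the host GPU to the container via the Nvidia runtime
# -it : run an interactive session attached to a terminal
# --rm : remove the container automatically when it exits
# -p 8888:8888 : map Jupyter Lab's port to your Instance
# -p 6006:6006 : map Tensorboard's port to your Instance
docker run --runtime=nvidia -it --rm -p 8888:8888 -p 6006:6006 rg.fr-par.scw.cloud/scw-ai/tensorflow:latest /bin/bash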
When you run your Docker container as shown above, the container launches and you are taken to its ai directory, where the pipenv virtual environment is already activated. You can get right to work!
Use the command pipenv graph to see a list of all installed packages and their versions, as well as all the dependencies of each package. For more help with pipenv, see our dedicated documentation.
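For example, typical pipenv commands inside the activated environment look like this (a sketch; requests stands in for any extra package you might want):
pipenv graph # show installed packages and their dependency tree
pipenv install requests # add an extra package to the environment
Note that because the container was started with --rm, any packages you install this way are lost when the container exits.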
Some applications, such as Jupyter Lab, Tensorboard and Code Server, require a browser to run. You can launch these from the ai virtual environment of your Docker container, and view them in the browser of your local machine. This is possible because you can add port mapping arguments when launching a container with the docker run command. In our example, we added the port mapping arguments -p 8888:8888 -p 6006:6006 when we launched our container, mapping 8888:8888 for Jupyter Lab and 6006:6006 for Tensorboard. Code Server runs in Jupyter Lab via Jupyter Hub, so does not need port mapping in this case. You can add other port mapping arguments for other applications as you wish.
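For example, to expose an additional web application listening on port 8080 inside the container (8080 is a hypothetical example port, not something the image requires), you would add one more -p argument:
docker run --runtime=nvidia -it --rm -p 8888:8888 -p 6006:6006 -p 8080:8080 rg.fr-par.scw.cloud/scw-ai/tensorflow:latest /bin/bash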
Launch an application. Here, we launch Jupyter Lab:
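Assuming the standard Jupyter launcher is available in the ai environment (which the server log below suggests), the command is:
jupyter lab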
Within the output, you should see something similar to the following:
[I 2022-04-06 11:38:40.554 ServerApp] Serving notebooks from local directory: /home/jovyan/ai
[I 2022-04-06 11:38:40.554 ServerApp] Jupyter Server 1.15.6 is running at:
[I 2022-04-06 11:38:40.554 ServerApp] http://7d783f7cf615:8888/lab?token=e0c21db2665ac58c3cf124abf43927a9d27a811449cb356b
[I 2022-04-06 11:38:40.555 ServerApp] or http://127.0.0.1:8888/lab?token=e0c21db2665ac58c3cf124abf43927a9d27a811449cb356b
[I 2022-04-06 11:38:40.555 ServerApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
Tip: Jupyter Lab is launched automatically when you run any Scaleway container image. You will see a message upon startup telling you how to access the notebook in your browser. To override this automatic launch, add /bin/bash to the end of your docker run command, e.g. docker run --runtime=nvidia -it --rm -p 8888:8888 -p 6006:6006 rg.fr-par.scw.cloud/scw-ai/pytorch:latest /bin/bash. This preempts the launch of Jupyter Lab at container startup, replacing it with the specified command, in this case a bash shell.
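If you later want to start Jupyter Lab manually from that bash shell, the standard launcher should work (a sketch; depending on the image's Jupyter configuration, you may need to bind to all interfaces so the mapped port stays reachable):
jupyter lab --ip 0.0.0.0 --port 8888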
On your local computer, open a browser window and enter the following URL. Replace <ip-address> with the IP address of your Scaleway GPU Instance, and <my-token> with the token displayed in the last lines of terminal output after launching Jupyter Lab:
http://<ip-address>:8888/lab?token=<my-token>
You can find the IP address of your Instance in the Scaleway console. In the side menu, click Instances to see a list of your Instances. The IP address of each Instance is shown in this list.
Jupyter Lab now displays in your browser. You can use the Notebook, Console, or other features as required:
You can display the GPU Dashboard in Jupyter Lab to view information about CPU and GPU resource usage. This is accessed via the System Dashboards icon in the left side menu (the third icon from the top).
Use CTRL+C in the terminal window of your GPU Instance / Docker container to close down the Jupyter server when you’ve finished.
When you are in the activated pipenv virtual environment, your command line prompt will normally be prefixed by the name of the environment. Here, for example, from (ai) jovyan@d73f1fa6bf4d:~/ai we see that we are in the activated ai environment, and from jovyan@d73f1fa6bf4d:~/ai that we are in the ~/ai directory of our container.
Run the following command to leave the pre-installed virtual environment:
exit
You are now outside the pre-installed virtual environment.
Type exit again to exit the Docker container.
Your prompt should now look similar to the following. You are still connected to your GPU Instance, but you have left the Docker container:
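(A hypothetical example; the hostname will match your own Instance.)
root@my-gpu-instance:~#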
Type exit once more to disconnect from your GPU Instance.