
GPU Render Instances

Dedicated Tesla P100s for all your Machine Learning & Artificial Intelligence needs.

Accelerate data processing

Process large videos & images with ease or run GPU-intensive Machine Learning models.


Deploy GPU nodes directly from Kubernetes Kapsule or use the NVIDIA Container Toolkit.
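For the Kubernetes route, a pod is scheduled onto a GPU node by requesting the nvidia.com/gpu resource exposed by NVIDIA's device plugin. A minimal sketch (the pod name and image tag are illustrative, not from this page):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cuda-smoke-test        # illustrative name
spec:
  restartPolicy: Never
  containers:
    - name: cuda
      image: nvidia/cuda:11.0-base   # illustrative image tag
      command: ["nvidia-smi"]        # print the visible GPU as a smoke test
      resources:
        limits:
          nvidia.com/gpu: 1          # ask the scheduler for one GPU
```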

Easy to use

Two pre-loaded Ubuntu distributions for Machine Learning.

Available zones: Paris (PAR 1, PAR 2)

Technical specifications

  • GPU: Dedicated NVIDIA Tesla P100 16GB PCIe

  • Processor frequency: 2.40 GHz

  • Memory:

  • Bandwidth: 1 Gbit/s

  • Processor: 10 Intel Xeon Gold 6148 cores

  • GPU memory: 16GB CoWoS HBM2

  • Memory type: DDR4-2666

  • Storage: Local Storage or Block Storage on demand

Use cases

GPU Instances have been designed to train complex models at high speed so you can improve your algorithms’ predictions and decisions. The dedicated NVIDIA Tesla P100 makes them particularly well-suited for Neural Networks and Deep Learning applications.

GPU Instances allow you to manipulate large datasets and extract the meaningful information you are looking for at high speed. They help data scientists summarize and classify non-structured data.

GPU Instances can speed up ultra-high-definition video encoding and render 3D models at high speed. Optimize the cost and duration of your post-production needs, whether they are one-off or regular.

Cloud ecosystem

A complete cloud ecosystem

Grow flexibly with a no lock-in European cloud

The most complete cloud ecosystem in Europe, from Bare Metal to serverless and everything in between.

Excellent price/performance ratio

Our wide range of services is designed to sustain your growth cost-effectively, at all stages of development.

Leverage multi-cloud solutions

Our products are compatible with market standards so that you can enjoy the freedom of no lock-in.

Sustainable by design

100% of electricity consumed in our data centers comes from renewable energy. Decommissioned hardware is securely reused & recycled.

Get started with tutorials


Integrated with Kapsule & Registry at €1.06/hour ex. VAT

Go to pricing

Frequently asked questions

Scaleway offers the world’s best price/performance ratio for NVIDIA P100 GPUs: €1/hour excl. VAT, capped at €500/month excl. VAT. This price includes one dedicated NVIDIA Tesla P100 GPU, 10 Intel Xeon Gold vCPUs, one IP address, and 400GB of SSD storage. You also benefit from free unlimited transfers: inbound and outbound traffic for these virtual machines is not charged.
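Using the FAQ's figures (€1/hour excl. VAT, capped at €500/month excl. VAT; rates may change), the monthly bill for an instance can be estimated with a simple helper. The function name and defaults below are illustrative:

```python
def monthly_cost(hours: float, hourly_rate: float = 1.0, monthly_cap: float = 500.0) -> float:
    """Estimate the monthly cost (EUR, excl. VAT) of a GPU Instance.

    Billing is per hour and capped at the stated monthly maximum;
    figures are taken from the FAQ above and may change.
    """
    return min(hours * hourly_rate, monthly_cap)

# A full 30-day month (720 hours) hits the €500 cap:
print(monthly_cost(720))  # 500.0
# A 3-day training run (72 hours) is billed hourly:
print(monthly_cost(72))   # 72.0
```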

The NVIDIA Pascal architecture enables the Tesla P100 to deliver superior performance for HPC and hyperscale workloads. With more than 21 teraFLOPS of 16-bit floating-point (FP16) performance, Pascal is optimized to drive exciting new possibilities in Deep Learning applications. Pascal also delivers over 5 and 10 teraFLOPS of double- and single-precision performance for HPC workloads.
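The quoted figures follow Pascal's rough rule of thumb that halving the precision doubles the peak throughput; as a back-of-the-envelope check on the numbers above (a sanity relation, not an official specification):

```latex
\mathrm{FP16} \approx 2 \times \mathrm{FP32} \approx 4 \times \mathrm{FP64}
\quad\Longrightarrow\quad
21\ \text{TFLOPS} \approx 2 \times 10.5\ \text{TFLOPS} \approx 4 \times 5.25\ \text{TFLOPS}
```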

Our outstanding GPUs are even more valuable when used with our various cloud services. Benefit from our Block Storage offers and our S3-compatible Object Storage, offering 75GB of storage for free every month. Discover our free and managed Kubernetes Kapsule control plane, allowing you to easily create your autoscaled CPU and GPU clusters.

Read our article on How to deploy Kubeflow on Kubernetes Kapsule

Our Ubuntu ML (Machine Learning) images are Ubuntu Bionic images pre-packaged with the most popular tools, frameworks, and libraries, such as CUDA, Conda, TensorFlow, Keras, RAPIDS, JAX, and several NLP and visualization tools.

In addition to “Ubuntu ML” images, you can use almost every other image that Scaleway provides for General Purpose Instances. You can also bring your own images.

If you want to use our “Ubuntu ML” images without Conda, you can reclaim some disk space by running conda deactivate, then conda env remove -n ai.