Scaleway

Accelerate data processing

Process large videos & images with ease or run GPU-intensive Machine Learning models.

Container-ready

Deploy GPU nodes directly from Kubernetes Kapsule or use the NVIDIA Container Toolkit.
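
For illustration, the sketch below requests one GPU from a Kapsule node pool through the standard nvidia.com/gpu device-plugin resource, using the official Kubernetes Python client. The pod name, namespace, and container image tag are illustrative placeholders, and it assumes your kubeconfig already points at a Kapsule cluster with a GPU pool attached.

    # Minimal sketch: schedule a pod on a GPU node via the nvidia.com/gpu resource.
    # Assumes `pip install kubernetes` and a kubeconfig for a Kapsule cluster
    # that already has a GPU node pool. Names and image tag are illustrative.
    from kubernetes import client, config

    config.load_kube_config()  # reads ~/.kube/config

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="gpu-smoke-test"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="cuda",
                    image="nvidia/cuda:12.2.0-base-ubuntu22.04",  # illustrative tag
                    command=["nvidia-smi"],  # prints the GPU seen inside the container
                    resources=client.V1ResourceRequirements(
                        limits={"nvidia.com/gpu": "1"}  # ask the scheduler for one GPU
                    ),
                )
            ],
        ),
    )

    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)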

Easy to use

Two pre-loaded Ubuntu distributions for Machine Learning.

Technical specifications

  • GPU: Dedicated NVIDIA Tesla P100 16GB PCIe

  • Processor Frequency: 2.40 GHz

  • Memory: 42GB

  • Bandwidth: 1 Gbit/s

  • Processor: 10 Intel Xeon Gold 6148 cores

  • GPU Memory: 16GB CoWoS HBM2

  • Memory Type: DDR4-2666

  • Storage: Local Storage or Block Storage on demand

Use cases

Artificial Intelligence & Machine Learning

GPU Instances have been designed to train complex models at high speed so you can improve your algorithms’ predictions and decisions. The dedicated NVIDIA Tesla P100 makes them particularly well-suited for Neural Networks and Deep Learning applications.
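
As a rough illustration (not part of the product specification), the sketch below trains a small Keras classifier; TensorFlow places the work on the Tesla P100 automatically when a GPU is visible. It assumes TensorFlow is installed, as on the Ubuntu ML images.

    # Minimal sketch: train a small Keras model on the Instance's GPU.
    # Assumes TensorFlow is installed, as on Scaleway's Ubuntu ML images.
    import tensorflow as tf

    # TensorFlow runs ops on the Tesla P100 automatically when one is visible.
    print("GPUs visible:", tf.config.list_physical_devices("GPU"))

    # Toy dataset and model, just to exercise the GPU.
    (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
    x_train = x_train.astype("float32") / 255.0

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=1, batch_size=256)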

A complete cloud ecosystem

Grow flexibly with a no lock-in European cloud

The most complete cloud ecosystem in Europe, from Bare Metal to serverless and everything in between.

Excellent price/performance ratio

Our wide range of services is designed to sustain your growth cost-effectively, at all stages of development.

Leverage multi-cloud solutions

Our products are compatible with market standards so that you can enjoy the freedom of no lock-in.

Sustainable by design

100% of electricity consumed in our data centers comes from renewable energy. Decommissioned hardware is securely reused & recycled.

Get started with tutorials

Frequently asked questions

What are the advantages of NVIDIA P100 GPUs?

The NVIDIA Pascal architecture enables the Tesla P100 to deliver superior performance for HPC and hyperscale workloads. With more than 21 teraFLOPS of 16-bit floating-point (FP16) performance, Pascal is optimized to drive exciting new possibilities in Deep Learning applications. Pascal also delivers over 5 teraFLOPS of double-precision and over 10 teraFLOPS of single-precision performance for HPC workloads.
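
To benefit from that FP16 throughput in practice, frameworks expose mixed-precision modes. The sketch below shows one hedged example using the Keras mixed-precision API; it assumes TensorFlow 2.4 or later, which may differ from the version shipped on a given image.

    # Minimal sketch: run most layers in float16 to use the P100's FP16 path.
    # Assumes TensorFlow >= 2.4; the exact version on your image may differ.
    import tensorflow as tf

    tf.keras.mixed_precision.set_global_policy("mixed_float16")

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(512, activation="relu", input_shape=(784,)),
        # Keep the output layer in float32 for numerically stable softmax.
        tf.keras.layers.Dense(10, activation="softmax", dtype="float32"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

    print("Layer compute dtype:", model.layers[0].compute_dtype)  # float16
    print("Variable dtype:", model.layers[0].dtype)               # float32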

What is included in a Scaleway “Ubuntu ML” image?

Our Ubuntu ML (for Machine Learning) images are Ubuntu Bionic images pre-packaged with the most popular tools, frameworks, and libraries, such as CUDA, Conda, TensorFlow, Keras, RAPIDS, JAX, and several NLP and visualization tools.
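
As a quick sanity check after booting an Ubuntu ML Instance, you can confirm that the pre-installed frameworks see the GPU. The snippet below is a minimal sketch; the exact library versions on the image may vary.

    # Minimal sketch: confirm the bundled frameworks detect the Tesla P100.
    # Library versions on the image may differ; adjust imports accordingly.
    import tensorflow as tf
    import jax

    print("TensorFlow built with CUDA:", tf.test.is_built_with_cuda())
    print("TensorFlow GPUs:", tf.config.list_physical_devices("GPU"))
    print("JAX devices:", jax.devices())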

Am I required to use a Scaleway "Ubuntu ML" image?

In addition to the “Ubuntu ML” images, you can use almost any other image that Scaleway provides for General Purpose Instances. You can also bring your own images.

If you want to use our “Ubuntu ML” images without Conda, you can reclaim some disk space by running conda deactivate, then conda env remove -n ai.
