AI Supercomputers

Build the next Foundation Model with one of the fastest and most energy-efficient supercomputers in the world.

The most powerful AI-training hardware on the market

Train AI models at an unprecedented pace using Scaleway's AI supercomputers. With lightning-fast NVIDIA H100 Tensor Core GPUs, a non-blocking NVIDIA Quantum-2 InfiniBand networking platform, and high-performance DDN storage, these machines can effortlessly scale to hundreds or thousands of nodes, addressing the most significant challenges of the next generation of AI applications.

Hosted in Europe

Maintain total control of your AI journey thanks to Scaleway's guarantee of European data sovereignty. Our comprehensive storage solutions ensure your data and innovations remain out of reach of extraterritorial legislation throughout the machine learning lifecycle.

In one of Europe's most eco-friendly data centers

Housed in the eco-friendly DC5 data center and built on power-efficient H100 chips, Scaleway's AI Supercomputers deliver outstanding AI performance per watt and a reduced total cost of ownership.

Leaders of the AI industry are using these Supercomputers

Mistral AI

"We're currently working on Scaleway SuperPod, which is performing exceptionally well." On the Master Stage at ai-PULSE 2023, Arthur Mensch explained how Mistral 7B is available on large hyperscalers such as Scaleway and how businesses are using it to replace their current APIs.

Nabu 2023

  • CPU: Dual Intel® Xeon® Platinum 8480C processors, 112 cores total per node

  • Total CPU cores: 14,224 cores

  • GPU: 1,016 NVIDIA H100 Tensor Core GPUs (SXM5)

  • Total GPU memory: 81,280 GB

  • Processor frequency: up to 3.80 GHz

  • Total RAM: 254 TB

  • Storage type: 1.8 PB of DDN a3i low-latency storage

  • Storage bandwidth: 2.7 TB/s read and 1.95 TB/s write

  • Inter-GPU bandwidth: InfiniBand 400 Gb/s

Jero 2023

  • CPU: Dual Intel® Xeon® Platinum 8480C processors, 112 cores total per node

  • Total CPU cores: 224 cores

  • GPU: 16 NVIDIA H100 Tensor Core GPUs (SXM5)

  • Total GPU memory: 1,280 GB

  • Processor frequency: up to 3.80 GHz

  • Total RAM: 4 TB

  • Storage type: 64 TB of DDN a3i low-latency storage

  • Inter-GPU bandwidth: InfiniBand 400 Gb/s

Built with the most advanced technologies for AI

    NVIDIA H100 Tensor Core GPUs, the best engines for AI

    Our supercomputers, Nabu and Jero 2023, are built from NVIDIA DGX H100 systems equipped with 80 GB NVIDIA H100 Tensor Core GPUs (SXM5). They reach lightning-fast multi-node scaling for AI thanks to their latest-generation GPUs:

    • Hopper architecture
    • Chip with 80 billion transistors spread over an area of 814 mm²
    • 4th-generation Tensor Cores, up to 6x faster than the A100's Tensor Cores
    • Transformer Engine delivering up to 30x faster AI inference on language models compared to the prior-generation A100 (see the sketch after this list)
    • 2nd-generation secure MIG (Multi-Instance GPU), with up to 7 secure tenants per GPU
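
    As an illustration of the Transformer Engine in practice, here is a minimal sketch of running a layer in FP8 on an H100. It assumes NVIDIA's open-source Transformer Engine Python package (transformer_engine, shipped in NVIDIA's NGC PyTorch containers) is installed; the layer sizes and batch size are arbitrary example values.

```python
# Minimal FP8 sketch with NVIDIA Transformer Engine (assumes the
# transformer_engine package is available, e.g. from NGC containers).
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Delayed-scaling FP8 recipe; E4M3 is one of the FP8 formats Hopper supports.
fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.E4M3)

layer = te.Linear(4096, 4096, bias=True).cuda()   # example dimensions
x = torch.randn(16, 4096, device="cuda")

# Inside this context, supported layers execute their matmuls on the
# H100's FP8 Tensor Cores via the Transformer Engine.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    y = layer(x)

print(y.shape)  # torch.Size([16, 4096])
```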

    NVIDIA ConnectX-7 and Quantum-2 networks for seamless scalability

    Thanks to the InfiniBand NDR interconnect (400 Gb/s per link), each 8-GPU compute node offers 3.2 Tb/s of bandwidth to every other node on a fully non-blocking network architecture.

    GPUDirect RDMA technology accelerates direct GPU-to-GPU communication across all nodes of the cluster over InfiniBand, enabling:

    • 15% faster Deep Learning recommendations,
    • 17% faster for NLP,
    • 15% faster for fluid dynamics simulations,
    • 36% lower power consumption.
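
    To give a concrete sense of how training code exercises this fabric, below is a minimal sketch of a cross-node all-reduce with PyTorch's NCCL backend; NCCL picks up GPUDirect RDMA over InfiniBand automatically when it is available. It assumes the script is started by a launcher such as torchrun or Slurm's srun, which sets the usual rendezvous environment variables.

```python
# Minimal multi-node all-reduce sketch (assumes a launcher such as torchrun
# or srun has set RANK, WORLD_SIZE, LOCAL_RANK, MASTER_ADDR and MASTER_PORT).
import os
import torch
import torch.distributed as dist

def main():
    # NCCL transparently uses GPUDirect RDMA over InfiniBand when available.
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(int(os.environ.get("LOCAL_RANK", 0)))

    # Each rank contributes a 1 GiB tensor; the sum is exchanged across every GPU.
    payload = torch.ones(256 * 1024 * 1024, device="cuda")
    dist.all_reduce(payload, op=dist.ReduceOp.SUM)
    torch.cuda.synchronize()

    if dist.get_rank() == 0:
        print(f"all-reduce completed across {dist.get_world_size()} GPUs")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```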

    DDN Storage made for HPC and co-developed with NVIDIA for artificial intelligence

    The AI Supercomputers benefit from DDN a3i storage optimized for ultra-fast computing, delivering over:

    • 2.7 TB/s of read throughput
    • 1.9 TB/s of write throughput
    • 15 GB/s of write throughput per DGX system

    This bandwidth lets the DDN storage absorb regular training checkpoints for added resilience.
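
    As a simple illustration of how that bandwidth gets used, here is a minimal periodic-checkpointing sketch; the mount point /mnt/ddn and the 1,000-step interval are hypothetical example values, not Scaleway defaults.

```python
# Minimal periodic-checkpoint sketch; /mnt/ddn is a hypothetical mount point
# for the DDN a3i filesystem, not a documented Scaleway path.
import os
import torch

CHECKPOINT_DIR = "/mnt/ddn/checkpoints"

def save_checkpoint(model, optimizer, step):
    os.makedirs(CHECKPOINT_DIR, exist_ok=True)
    torch.save(
        {
            "step": step,
            "model": model.state_dict(),
            "optimizer": optimizer.state_dict(),
        },
        os.path.join(CHECKPOINT_DIR, f"ckpt_{step:08d}.pt"),
    )

# Inside the training loop, for example every 1,000 steps:
# if step % 1000 == 0:
#     save_checkpoint(model, optimizer, step)
```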

    SLURM for comprehensive management

    Benefit from comprehensive management of the supercomputer with SLURM, an open-source cluster management and job scheduling system for Linux clusters.
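
    For instance, a training script launched with srun inside a Slurm allocation can read its placement directly from the environment variables Slurm exports to each task; the sketch below uses only standard Slurm variables and contains no Scaleway-specific settings.

```python
# Minimal sketch of a task inspecting the environment Slurm gives it
# (assumes the script is started with `srun python train.py` inside an sbatch job).
import os

rank       = int(os.environ["SLURM_PROCID"])   # global task index across the job
world_size = int(os.environ["SLURM_NTASKS"])   # total number of tasks
local_rank = int(os.environ["SLURM_LOCALID"])  # task index on the local node
node_id    = int(os.environ["SLURM_NODEID"])   # node index within the allocation

print(f"task {rank}/{world_size}: local task {local_rank} on node {node_id}")
```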

    Numerous AI applications and use cases

    Generative AI

    Generative AI creates new content such as images, text, audio, or code. It autonomously produces novel and coherent outputs, expanding the realm of AI-generated content beyond replication or prediction.
    With models and algorithms specialized in:

    • Image generation
    • Text generation with Transformer models, also called LLMs (Large Language Models), such as GPT-2 (see the sketch after this list)
    • Code generation
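
    As a small, self-contained example of the text-generation use case above, the sketch below loads the public GPT-2 checkpoint with the Hugging Face transformers library; the prompt is arbitrary and the model is the openly available one, not a Scaleway-hosted model.

```python
# Minimal text-generation sketch with the public GPT-2 checkpoint
# (assumes the Hugging Face transformers package is installed).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Foundation models trained on H100 clusters can", max_new_tokens=40)
print(result[0]["generated_text"])
```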