
L40S GPU Instance

Accelerate the next generation of AI-enabled applications with the universal L40S GPU Instance, faster than L4 and cheaper than H100 PCIe.

Universal usage

The L40S GPU Instance offers unparalleled performance across a spectrum of tasks, from generative AI, LLM inference, and small-model training and fine-tuning, to 3D graphics, rendering, and video applications.

Cost-effective scalability

Starting at €1.4/hour for 1 GPU with 48GB of GPU memory and available in 4 different formats (1, 2, 4, 8 GPUs), the L40S GPU Instance enables cost-efficient scaling according to workload demands, ensuring optimal resource utilization on top of high-performance capability.

K8s compatibility

Seamlessly integrate the L40S GPU Instance into your existing infrastructure with Kubernetes support, streamlining deployment and management of AI workloads while maintaining scalability and flexibility.
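In practice, scheduling a workload onto a GPU Instance works like any GPU pod in Kubernetes. Below is a minimal sketch, assuming a Kapsule (or any other Kubernetes) cluster with the NVIDIA device plugin installed; the pod name and container image are placeholders, not Scaleway-specific values.

```python
import json

# Minimal pod manifest requesting NVIDIA GPUs via the standard Kubernetes
# device-plugin resource "nvidia.com/gpu". The pod name and image are
# placeholders for illustration, not Scaleway-specific values.
def gpu_pod_manifest(name: str, image: str, gpus: int = 1) -> dict:
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [
                {
                    "name": name,
                    "image": image,
                    # The device plugin schedules the pod onto a GPU node.
                    "resources": {"limits": {"nvidia.com/gpu": str(gpus)}},
                }
            ],
            "restartPolicy": "Never",
        },
    }

if __name__ == "__main__":
    # Emit JSON you could pipe to `kubectl apply -f -`.
    print(json.dumps(gpu_pod_manifest("l40s-test", "nvcr.io/nvidia/pytorch:24.01-py3"), indent=2))
```
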


L40S GPU technical specifications

  • GPU: NVIDIA L40S GPU
  • GPU memory: 48GB GDDR6 (864GB/s)
  • Processor: 8 vCPUs AMD EPYC 7413
  • Processor frequency: 2.65 GHz
  • Memory: 96GB of RAM
  • Memory type: DDR4
  • Network bandwidth: 2.5 Gbps
  • Storage: 1.6TB of Scratch Storage and additional Block Storage
  • Cores: 4th-generation Tensor Cores, 3rd-generation RT Cores

Ideal use cases with the L40S GPU Instance

LLM fine-tuning & training

Use H100 PCIe GPU Instances for medium- to large-scale foundation model training, but harness the L40S's capabilities to fine-tune small LLMs in hours and train them in days.

  • An infrastructure powered by L40S GPUs can train models in days
    Training Llama 2-7B (100B tokens) requires 64 L40S GPUs and takes 2.9 days (versus 1 day with H100 NVLink GPUs, as on Nabu2023)
  • Fine-tune models in hours
    Fine-tuning Llama 2-70B SFT (1T tokens) requires 64 L40S GPUs and takes 8.2 hours (versus 2.5 hours with H100 NVLink GPUs, as on Nabu2023)

Source: NVIDIA L40S Product Deck, October 2023
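The figures above can be reproduced with simple back-of-envelope arithmetic. The sketch below derives an implied per-GPU throughput from the quoted Llama 2-7B numbers (100B tokens, 64 GPUs, 2.9 days) and uses it to estimate training time at other GPU counts; it is illustrative only, not a benchmark.

```python
# Back-of-envelope training-time arithmetic behind the figures above.
# The per-GPU throughput is derived from the quoted numbers, so it is
# illustrative, not a measured benchmark.
SECONDS_PER_DAY = 86_400

def tokens_per_sec_per_gpu(tokens: float, gpus: int, days: float) -> float:
    """Implied per-GPU throughput from a (tokens, gpus, days) data point."""
    return tokens / (gpus * days * SECONDS_PER_DAY)

def estimate_days(tokens: float, gpus: int, throughput: float) -> float:
    """Estimated wall-clock training time in days at a given GPU count."""
    return tokens / (gpus * throughput) / SECONDS_PER_DAY

if __name__ == "__main__":
    tp = tokens_per_sec_per_gpu(100e9, 64, 2.9)
    print(f"{tp:,.0f} tokens/s per GPU")
    # Doubling the GPU count roughly halves the time (ignoring scaling losses).
    print(f"{estimate_days(100e9, 128, tp):.2f} days on 128 GPUs")
```
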

"We have been using an L40S Instance, and it's a fantastic alternative to H100 PCIe considering the price point and speed."
Wilson Wongso, Machine Learning Engineer at Bookbot

Give the L40S GPU Instance a try today

Scale your infrastructure effortlessly

Choose your Instance's format

With four flexible formats, including 1, 2, 4, and 8 GPU options, you can now easily scale your infrastructure according to your specific requirements.

Instance name | Number of GPUs | TFLOPS FP16 Tensor Cores | VRAM | Price per hour | Price per minute
L40S-1-48G | 1× NVIDIA L40S GPU | 362 TFLOPS | 48GB | €1.4/hour | €0.0235/min
L40S-2-48G | 2× NVIDIA L40S GPUs | 724 TFLOPS | 2× 48GB | €2.8/hour | €0.047/min
L40S-4-48G | 4× NVIDIA L40S GPUs | 1,448 TFLOPS | 4× 48GB | €5.6/hour | €0.094/min
L40S-8-48G | 8× NVIDIA L40S GPUs | 2,896 TFLOPS | 8× 48GB | €11.2/hour | €0.188/min
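As a quick sanity check on the table, the hourly rates can be turned into a tiny cost helper. This is an illustrative sketch, not an official Scaleway pricing API.

```python
# Hourly rates from the table above (EUR). Illustrative helper only,
# not an official Scaleway pricing API.
HOURLY_RATES_EUR = {
    "L40S-1-48G": 1.4,
    "L40S-2-48G": 2.8,
    "L40S-4-48G": 5.6,
    "L40S-8-48G": 11.2,
}

def compute_cost(instance: str, hours: float) -> float:
    """Return the compute cost in EUR, rounded to the cent."""
    return round(HOURLY_RATES_EUR[instance] * hours, 2)

if __name__ == "__main__":
    # A 10-hour fine-tuning run on the 2-GPU format:
    print(compute_cost("L40S-2-48G", 10))  # 28.0
```
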

Build and monitor a flexible and secured cloud infrastructure powered by GPU


Benefit from a complete cloud ecosystem

Kubernetes Kapsule

Match any growth in resource needs effortlessly with an easy-to-use managed Kubernetes, with an optional dedicated control plane for high-performance container management.

Learn more

Load Balancer

Distribute workloads across multiple servers with Load Balancer to ensure continued availability and avoid servers being overloaded.

Learn more

Frequently asked questions

What is included in the Instance price?

The GPU Instance price includes the vCPUs, the RAM needed for optimal performance, and 1.6TB of Scratch Storage. It does not include Block Storage or a Flexible IP.
Before launching the L40S GPU Instance, we strongly recommend provisioning an extra Block Storage volume, as Scratch Storage is ephemeral and disappears when you switch off the machine. The purpose of Scratch Storage is to speed up the transfer of your datasets to the GPU.
For more information about how to use Scratch Storage, follow the guide.
If in doubt about the price, use the calculator; that's what it's for!
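The staging step described above (copying datasets from persistent Block Storage onto the fast but ephemeral Scratch Storage before training) can be sketched as follows. The mount points in the usage comment are assumptions for illustration; check your Instance for the actual paths.

```python
import shutil
from pathlib import Path

# Stage a dataset from persistent Block Storage onto fast but ephemeral
# Scratch Storage before training. Paths in the usage example below
# ("/mnt/block", "/scratch") are assumptions, not Scaleway-documented
# mount points.
def stage_to_scratch(src: str, dst: str) -> Path:
    """Copy a dataset directory to scratch, returning the scratch path."""
    target = Path(dst) / Path(src).name
    if not target.exists():          # skip the copy if already staged
        shutil.copytree(src, target)
    return target

# Typical use on an Instance:
#   data_dir = stage_to_scratch("/mnt/block/imagenet", "/scratch")
#   ...point your training data loader at data_dir...
```
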

What are the differences between L40S-1-48G, L40S-2-48G, L40S-4-48G and L40S-8-48G?

These are four formats of the same Instance embedding NVIDIA L40S GPUs.

  • L40S-1-48G embeds 1 NVIDIA L40S GPU, offering 48GB of GPU memory.
  • L40S-2-48G embeds 2 NVIDIA L40S GPUs, offering 2× 48GB of GPU memory.
  • L40S-4-48G embeds 4 NVIDIA L40S GPUs, offering 4× 48GB of GPU memory.
  • L40S-8-48G embeds 8 NVIDIA L40S GPUs, offering 8× 48GB of GPU memory.

Can I use MIG to get the most out of my GPU?

NVIDIA Multi-Instance GPU (MIG) is a technology introduced by NVIDIA to enhance the utilization and flexibility of its data center GPUs, designed specifically for virtualization and multi-tenant environments. This feature is available on the H100 PCIe GPU Instance but not on the L40S GPU Instance. However, users can benefit from Kubernetes Kapsule compatibility to optimize their infrastructure.

Learn more

How to choose the right GPU for my workload?

There are many criteria to take into account when choosing the right GPU Instance:

  • Workload requirements
  • Performance requirements
  • GPU type
  • GPU memory
  • CPU and RAM
  • GPU driver and software compatibility
  • Scaling

For more guidance, read the dedicated documentation on this topic.
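For the GPU-memory criterion in particular, a rough rule of thumb is to pick the smallest format whose total VRAM fits your working set (weights, optimizer state, activations) with some headroom. The sketch below is an illustrative heuristic, not a Scaleway recommendation; the 1.2× headroom factor is an assumption.

```python
# Illustrative heuristic for the GPU-memory criterion: pick the smallest
# L40S format whose total VRAM fits the working set with headroom.
# The 1.2 headroom factor is an assumption, not a Scaleway recommendation.
FORMATS_VRAM_GB = [
    ("L40S-1-48G", 48),
    ("L40S-2-48G", 96),
    ("L40S-4-48G", 192),
    ("L40S-8-48G", 384),
]

def pick_format(working_set_gb: float, headroom: float = 1.2) -> str:
    needed = working_set_gb * headroom
    for name, vram in FORMATS_VRAM_GB:
        if vram >= needed:
            return name
    raise ValueError("working set too large for a single L40S Instance")

if __name__ == "__main__":
    # e.g. a 7B-parameter model in FP16 (~14GB weights) plus training state:
    print(pick_format(30))   # L40S-1-48G
    print(pick_format(70))   # L40S-2-48G
```

Note that multi-GPU formats only help if your framework can shard or replicate the model across GPUs; a single tensor still has to fit on one 48GB card.
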