Universal usage
The L40S GPU Instance offers strong performance across a spectrum of tasks, from generative AI, LLM inference, and small-model training and fine-tuning to 3D graphics, rendering, and video applications.
Accelerate the next generation of AI-enabled applications with the universal L40S GPU Instance, faster than the L4 and cheaper than the H100 PCIe.
Starting at €1.4/hour for 1 GPU with 48GB of GPU memory and available in 4 different formats (1, 2, 4, 8 GPUs), the L40S GPU Instance enables cost-efficient scaling according to workload demands, ensuring optimal resource utilization on top of high-performance capability.
Seamlessly integrate the L40S GPU Instance into your existing infrastructure with Kubernetes support, streamlining deployment and management of AI workloads while maintaining scalability and flexibility.
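As a sketch of what that Kubernetes integration looks like in practice, scheduling a GPU workload usually comes down to requesting the GPU resource in the pod spec so the scheduler places the pod on a GPU node. The pod name and container image below are placeholders, not a Scaleway-specific configuration:

```yaml
# Illustrative pod spec: request one NVIDIA GPU via the standard
# device-plugin resource so the pod lands on an L40S node.
apiVersion: v1
kind: Pod
metadata:
  name: llm-inference                        # placeholder name
spec:
  containers:
    - name: inference
      image: my-registry/llm-server:latest   # placeholder image
      resources:
        limits:
          nvidia.com/gpu: 1                  # standard NVIDIA GPU resource
```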
GPU: NVIDIA L40S GPU
GPU memory: 48GB GDDR6 (864GB/s)
Processor: 8 vCPUs, AMD EPYC 7413
Processor frequency: 2.65 GHz
Memory: 96GB of RAM
Memory type: DDR4
Network bandwidth: 2.5 Gbps
Storage: 1.6TB of scratch storage, plus additional Block Storage
Cores: 4th-generation Tensor Cores, 3rd-generation RT Cores
Use H100 PCIe GPU Instances for medium- to large-scale foundation model training, but harness the L40S's capabilities to fine-tune small LLMs in hours and train them in days.
Source: NVIDIA L40S Product Deck, October 2023
The L40S GPU Instance, with a single GPU, delivers inference performance that lets you put the main Llama 2-7B use cases into production efficiently:
* Serving a Llama 2-7B model in FP8
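Purely as an illustration, an FP8 Llama 2-7B endpoint of this kind could be served with an open-source inference server such as vLLM on a single GPU; the model ID and flags below are assumptions, not a Scaleway-specific setup:

```shell
# Hypothetical launch of an OpenAI-compatible vLLM server with FP8
# quantization on one L40S GPU. Model ID and flags are illustrative;
# check the vLLM documentation for the options your version supports.
python -m vllm.entrypoints.openai.api_server \
  --model meta-llama/Llama-2-7b-chat-hf \
  --quantization fp8 \
  --tensor-parallel-size 1
```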
Rendering is essential for bringing visual elements together, from lighting effects to textures, to generate the final result. Leveraging the Lovelace architecture, the L40S GPU Instance demonstrates superior performance compared to the Ampere architecture, particularly in rendering and graphics tasks. Some concrete results for you to compare:
Source: NVIDIA L40S Product Deck, October 2023
Choose your Instance's format
With four flexible formats, including 1, 2, 4, and 8 GPU options, you can now easily scale your infrastructure according to your specific requirements.
| Instance name | Number of GPUs | FP16 Tensor Core TFLOPS | VRAM | Price per hour | Price per minute |
|---|---|---|---|---|---|
| L40S-1-48G | 1 NVIDIA L40S GPU | 362 TFLOPS | 48GB | €1.4/hour | €0.0235/min |
| L40S-2-48G | 2 NVIDIA L40S GPUs | 724 TFLOPS | 2x 48GB | €2.8/hour | €0.047/min |
| L40S-4-48G | 4 NVIDIA L40S GPUs | 1,448 TFLOPS | 4x 48GB | €5.6/hour | €0.094/min |
| L40S-8-48G | 8 NVIDIA L40S GPUs | 2,896 TFLOPS | 8x 48GB | €11.2/hour | €0.188/min |
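To make the scaling trade-off concrete, here is a minimal sketch that estimates a workload's cost from the per-hour prices listed above (the prices are taken from this page and may change):

```python
# Per-hour prices in euros for each L40S Instance format, as listed in
# the table above (subject to change).
HOURLY_PRICE_EUR = {
    "L40S-1-48G": 1.4,
    "L40S-2-48G": 2.8,
    "L40S-4-48G": 5.6,
    "L40S-8-48G": 11.2,
}

def estimate_cost(instance: str, hours: float) -> float:
    """Return the estimated cost in euros of running `instance` for `hours`."""
    return round(HOURLY_PRICE_EUR[instance] * hours, 2)

# Example: a 36-hour fine-tuning run on the single-GPU format.
print(estimate_cost("L40S-1-48G", 36))  # → 50.4
```

Because billing is per hour, doubling the GPU count doubles the hourly rate, so a job that parallelizes well costs roughly the same on any format while finishing sooner on the larger ones.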
Match any growth in resource needs effortlessly with easy-to-use managed Kubernetes and a dedicated control plane for high-performance container management.
Distribute workloads across multiple servers with a Load Balancer to ensure continued availability and avoid overloading any single server.
Secure your cloud resources with ease on a resilient regional network.