H100-SXM GPU Instance
Ideal for LLM fine-tuning and inference of larger LLMs.
- NVIDIA Hopper (2022)
- 2-8 NVIDIA H100-SXM GPUs
- 80 GB VRAM per GPU (HBM3, 3.35 TB/s)
- 32-128 vCPUs (Sapphire Rapids)
- 240-960 GB RAM
- 3.2-12.8 TB ephemeral Scratch NVMe
- 99.5% SLA
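
Once an instance is up, the GPU count and per-GPU VRAM from the specs above can be confirmed from inside it. A minimal sketch, assuming PyTorch with CUDA support is installed on the instance:

```python
# Minimal sketch: check visible GPU count and per-GPU VRAM against
# the spec card. Assumes PyTorch with CUDA support is installed.
import torch

assert torch.cuda.is_available(), "no CUDA devices visible"

count = torch.cuda.device_count()
print(f"GPUs visible: {count}")  # expect 2-8 H100-SXM

for i in range(count):
    props = torch.cuda.get_device_properties(i)
    vram_gb = props.total_memory / 1024**3
    print(f"cuda:{i} {props.name}: {vram_gb:.0f} GB VRAM")  # expect ~80 GB
```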
From €6.018/hour
(billed per minute)
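
Per-minute billing means short jobs pay only for the minutes they run. A minimal sketch of the proration arithmetic, assuming the hourly rate is divided evenly across 60 minutes (the actual rounding rules are an assumption):

```python
# Minimal sketch of per-minute proration, assuming cost = minutes * rate / 60;
# actual invoicing rules may differ.
HOURLY_RATE_EUR = 6.018  # starting price from the card above

def cost_eur(minutes_used: int) -> float:
    """Cost in euros for a given number of billed minutes."""
    return minutes_used * HOURLY_RATE_EUR / 60

print(f"90 minutes: EUR {cost_eur(90):.3f}")  # EUR 9.027
```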

