
Managed Inference footprint calculation

Important

The calculations on this page take into account all the elements described on the Environmental Footprint calculation breakdown page. Refer to that page for a full breakdown of the Environmental Footprint calculation performed at Scaleway.

Calculation aspects

Deploy a model

The environmental footprint of Scaleway's Managed Inference product is calculated by aggregating the impact of all underlying resources dedicated to your inference deployment (GPU Instances and all resources needed to make the product work).

Since Managed Inference is built on top of other Scaleway products (Instances, Block Storage, Object Storage, Kubernetes), our methodology relies on the sum of these individual components.

The Managed Inference carbon footprint is the sum of the impact of:

  • Nodes: The nodes are based on Scaleway GPU Instances, so we apply the Instance environmental footprint methodology. Each node corresponds to a specific Scaleway Instance type (e.g., an H100-SXM-4 inference node uses an H100-SXM-4 Instance). If you deploy several nodes, the node impact is multiplied by the number of nodes.

  • Inference infrastructure: To deploy a model on Managed Inference, we deploy a complex Kubernetes-based infrastructure to which the nodes are attached. All elements needed to deploy this infrastructure are taken into account in our calculation.

  • Control plane: The control plane represents the shared infrastructure managed by Scaleway to orchestrate, monitor, and maintain your Managed Inference deployment. We allocate a fixed share of the global control plane's power consumption and manufacturing impact to each active inference node.
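
The aggregation above can be summarized as: (node impact × number of nodes) + inference infrastructure impact + allocated control plane share. The following is a minimal sketch of that sum; the function name and all numeric values are hypothetical placeholders, not published Scaleway figures, which come from the Environmental Footprint calculation breakdown methodology.

```python
def managed_inference_footprint(node_impact_kgco2e: float,
                                node_count: int,
                                infrastructure_impact_kgco2e: float,
                                control_plane_share_kgco2e: float) -> float:
    """Sum the impact of the dedicated nodes, the inference infrastructure,
    and the allocated share of the shared control plane."""
    return (node_impact_kgco2e * node_count
            + infrastructure_impact_kgco2e
            + control_plane_share_kgco2e)

# Illustrative (made-up) values for a two-node deployment:
footprint = managed_inference_footprint(
    node_impact_kgco2e=12.0,          # impact of one GPU Instance node
    node_count=2,                     # node impact scales linearly with node count
    infrastructure_impact_kgco2e=1.5, # Kubernetes-based inference infrastructure
    control_plane_share_kgco2e=0.3,   # fixed share of the shared control plane
)
print(f"{footprint:.1f} kgCO2e")      # 25.8 kgCO2e
```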

Import a model

When you import a model, it is stored in Object Storage. The storage footprint, and therefore the impact, depends on the size of the model.
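
As a rough illustration of that proportionality, the sketch below scales a hypothetical per-GB impact factor by the model size; the factor is a made-up placeholder, not a published Scaleway value.

```python
IMPACT_PER_GB_KGCO2E = 0.001  # hypothetical Object Storage allocation factor

def imported_model_storage_footprint(model_size_gb: float) -> float:
    """Object Storage impact grows linearly with the stored model size."""
    return model_size_gb * IMPACT_PER_GB_KGCO2E

print(imported_model_storage_footprint(14.0))  # e.g. a ~14 GB model
```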
