# Managed Inference footprint calculation

## Calculation aspects

### Deploy a model
The environmental footprint of Scaleway's Managed Inference product is calculated by aggregating the impact of all underlying resources dedicated to your inference: the GPU Instances themselves and every supporting resource needed to make the product work.
Since Managed Inference is built on top of other Scaleway products (Instances, Block Storage, Object Storage, Kubernetes), our methodology relies on the sum of these individual components.
The Managed Inference carbon footprint is the sum of the impact of:

- **Nodes**: The nodes are based on Scaleway GPU Instances, so we apply the Instance environmental footprint methodology. Each node corresponds to a specific Scaleway Instance type (e.g., an `H100-SXM-4` node uses an `H100-SXM-4` Instance). If you choose to add several nodes, your inference runs on several nodes, and the node impact is therefore multiplied by the number of nodes.
- **Inference infrastructure**: To deploy a model on Managed Inference, we deploy a Kubernetes-based infrastructure to which the nodes are attached. All elements needed to deploy this infrastructure are taken into account in our calculation.
- **Control plane**: The control plane is the shared infrastructure managed by Scaleway to orchestrate, monitor, and maintain your inference deployment. We allocate a fixed share of the global control plane's power consumption and manufacturing impact to each active node.
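The aggregation described above can be sketched as follows. This is an illustrative model only, not Scaleway's actual implementation: the function name, parameters, and example values are hypothetical placeholders.

```python
# Illustrative sketch (not Scaleway's actual code): summing the three
# components of the Managed Inference footprint for one deployment.

def inference_footprint(node_impact: float,
                        node_count: int,
                        infrastructure_impact: float,
                        control_plane_share: float) -> float:
    """Total footprint = nodes + inference infrastructure + control plane.

    node_impact           -- footprint of one node (from the Instance methodology)
    node_count            -- number of nodes in the deployment
    infrastructure_impact -- Kubernetes-based inference infrastructure
    control_plane_share   -- fixed control plane share allocated per active node
    """
    return (node_impact * node_count          # node impact scales with node count
            + infrastructure_impact           # shared inference infrastructure
            + control_plane_share * node_count)  # fixed share per active node

# Example with placeholder values (e.g. kgCO2e over a billing period):
total = inference_footprint(node_impact=10.0, node_count=2,
                            infrastructure_impact=1.5,
                            control_plane_share=0.5)
# 10.0 * 2 + 1.5 + 0.5 * 2 = 22.5
```

The key point the sketch captures is that the node impact and the control plane share are both multiplied by the number of nodes, while the inference infrastructure is counted once per deployment.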
### Import a model
When you import a model, it is stored in Object Storage, so its footprint is calculated with the Object Storage methodology and scales with the size of the model.
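As a minimal sketch, assuming a linear relationship between stored size and impact (the per-gigabyte factor below is a hypothetical placeholder, not a published Scaleway figure):

```python
# Illustrative sketch: the storage footprint of an imported model is
# proportional to its size in Object Storage.

def imported_model_footprint(model_size_gb: float,
                             impact_per_gb: float) -> float:
    """Storage footprint of an imported model (assumed linear in size)."""
    return model_size_gb * impact_per_gb

# e.g. a 14 GB model with a hypothetical per-GB impact factor:
footprint = imported_model_footprint(model_size_gb=14.0, impact_per_gb=0.02)
# 14.0 * 0.02 = 0.28
```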