Serverless Containers
Are applications deployed on Serverless Containers stateless?
Yes, all applications deployed on Serverless Containers are stateless. This means the server does not store any state about the client session. Instead, the session data is stored on the client and passed to the server as needed.
How am I billed for Serverless Containers?
Principles
Serverless Containers are billed on a pay-as-you-go basis, strictly based on resource consumption (memory and vCPU).
- Memory consumption: obtained by multiplying the chosen memory tier by the container run duration.
- vCPU consumption: obtained by multiplying the chosen vCPU tier by the container run duration.
- Ephemeral storage: free of charge; the maximum ephemeral storage available depends on the allocated memory.
Prices
- Memory consumption: €0.10 per 100 000 GB-s, and we provide a 400 000 GB-s free tier per account and per month.

Memory | Price per second |
---|---|
128 MB | €0.000000125 |
256 MB | €0.00000025 |
512 MB | €0.0000005 |
1024 MB | €0.000001 |
2048 MB | €0.000002 |
3072 MB | €0.000003 |
4096 MB | €0.000004 |

- vCPU consumption: €1.00 per 100 000 vCPU-s, and we provide a 200 000 vCPU-s free tier per account per month.

vCPU | Price per second |
---|---|
0.07 vCPU | €0.0000007 |
0.14 vCPU | €0.0000014 |
0.28 vCPU | €0.0000028 |
0.56 vCPU | €0.0000056 |
1.12 vCPU | €0.0000112 |
Example
Criteria | Value |
---|---|
Monthly duration | 30 000 000 s |
Amount of memory allocated | 128 MB |
Amount of vCPU allocated | 70 mvCPU |
Free tier | Yes |
Price calculation
- Memory consumption
  - Allocated memory conversion: 128 MB = 0.125 GB
  - Resource consumption: 30 000 000 s * 0.125 GB = 3 750 000 GB-s
  - Free tier: 400 000 GB-s
  - Billed resources: 3 750 000 - 400 000 = 3 350 000 GB-s
  - Cost: 3 350 000 GB-s * €0.000001 = €3.35
- vCPU consumption
  - Allocated vCPU conversion: 70 mvCPU = 0.070 vCPU
  - Resource consumption: 30 000 000 s * 0.070 vCPU = 2 100 000 vCPU-s
  - Free tier: 200 000 vCPU-s
  - Billed resources: 2 100 000 - 200 000 = 1 900 000 vCPU-s
  - Cost: 1 900 000 vCPU-s * €0.00001 = €19.00
Monthly Cost: €22.35
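As a quick check, here is a minimal Python sketch that reproduces the calculation above. The prices and free tiers are hardcoded from the tables in this FAQ; always refer to the current pricing for up-to-date values.

```python
# Sketch of the billing example above. Prices and free tiers are taken from
# the tables in this FAQ; check the pricing page for current values.
MEMORY_PRICE_PER_GBS = 0.10 / 100_000   # € per GB-s
VCPU_PRICE_PER_VCPUS = 1.00 / 100_000   # € per vCPU-s
MEMORY_FREE_TIER = 400_000              # GB-s per account per month
VCPU_FREE_TIER = 200_000                # vCPU-s per account per month

def monthly_cost(duration_s: int, memory_mb: int, mvcpu: int) -> float:
    """Estimate the monthly cost of a container given its allocated resources."""
    memory_gbs = duration_s * (memory_mb / 1024)   # e.g. 128 MB -> 0.125 GB
    vcpu_s = duration_s * (mvcpu / 1000)           # e.g. 70 mvCPU -> 0.07 vCPU
    billed_memory = max(memory_gbs - MEMORY_FREE_TIER, 0)
    billed_vcpu = max(vcpu_s - VCPU_FREE_TIER, 0)
    return billed_memory * MEMORY_PRICE_PER_GBS + billed_vcpu * VCPU_PRICE_PER_VCPUS

print(round(monthly_cost(30_000_000, 128, 70), 2))  # 22.35
```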
How can I select the right resources (vCPU/RAM/ephemeral storage) for Serverless Containers?
Insufficient vCPU, RAM, or ephemeral storage can cause containers to go into an error state, so make sure to provision enough resources for your container.
We recommend starting with high values, using metrics to monitor the resource usage of your container, and then adjusting the values accordingly.
How can I reduce the cold starts of Serverless Containers?
- Optimize the startup: Cold starts can be caused by loading a large number of dependencies and opening many resources at startup. Ensure that your code avoids heavy computations or long-running initialization at startup, and limit the number of libraries loaded (see the sketch after this list).
- Keep your container warm: You can use CRON triggers at certain intervals to keep your container warm, or set the min-scale parameter to 1 when required.
- Increase resources: Adding more vCPU and RAM can help significantly reduce the cold starts of your container.
- Use sandbox v2: We recommend using sandbox v2 (advanced settings) to reduce cold starts.
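To illustrate the first point, here is a minimal Python sketch (standard library only; the names are illustrative and not a Scaleway API) that defers heavy, optional initialization to the first request instead of paying for it at startup:

```python
# Minimal sketch: anything executed at import time adds to the cold start,
# so heavy, optional initialization is deferred to the first request.
# sqlite3 stands in for a heavier dependency; the function names are illustrative.
import functools

@functools.lru_cache(maxsize=1)
def get_db():
    import sqlite3                      # imported on first use, not at startup
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE IF NOT EXISTS hits (n INTEGER)")
    return conn

def handle_request() -> str:
    db = get_db()                       # cached after the first call
    db.execute("INSERT INTO hits VALUES (1)")
    return "ok"
```

Anything your application strictly needs before serving traffic should still run before the port is bound (see the healthcheck question below).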
What are the limitations of Serverless Containers?
Refer to our dedicated page about Serverless Containers limitations and configuration restrictions for more information.
Where should I host my container images for deployment?
Scaleway's Container Registry allows seamless integration with Serverless Containers and Jobs at a competitive price. Serverless products support external public registries (such as Docker Hub), but we do not recommend using them due to uncontrolled rate limiting, which can lead to failures when starting resources, unexpected usage conditions, and pricing changes.
How can I copy an image from an external registry to Scaleway Container Registry?
You can copy an image from an external registry using the Docker CLI, or open source third-party tools such as Skopeo. Refer to the dedicated documentation for more information.
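For example, here is a minimal Python sketch that drives the Docker CLI to perform the copy. The source image, registry endpoint, and namespace below are illustrative, and you must be logged in to the target registry before pushing.

```python
# Sketch of copying a public image into a Scaleway Container Registry namespace
# by driving the Docker CLI. The registry endpoint and namespace below are
# illustrative; replace them with your own, and run docker login against the
# target registry before pushing.
import subprocess

SOURCE = "docker.io/library/nginx:latest"                  # external image
TARGET = "rg.fr-par.scw.cloud/my-namespace/nginx:latest"   # illustrative target

for cmd in (
    ["docker", "pull", SOURCE],
    ["docker", "tag", SOURCE, TARGET],
    ["docker", "push", TARGET],
):
    subprocess.run(cmd, check=True)
```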
Can I whitelist the IPs of my containers?
Serverless Containers does not yet support Private Networks. However, you can use the Scaleway IP ranges defined at https://www.scaleway.com/en/peering/ on Managed Databases and other products that allow IP filtering.
How can I deploy my Containers?
There are several ways to deploy containers. Refer to the dedicated documentation to determine the best method for your use case.
Which protocols are supported by Serverless Containers?
Serverless Containers use the http1 protocol by default, but some services (e.g., gRPC) only support http2.
Protocol switching is available in the Console under the Advanced options section of the Deployment tab.
Why does my gRPC container not respond?
Containers use http1 by default, yet the gRPC protocol requires http2. You can upgrade the protocol to http2 (h2c).
How does the Serverless Containers healthcheck work?
A Serverless Container is set to ready once the specified port is correctly bound to the container, and it then starts receiving traffic. If your application needs to perform some tasks before receiving traffic (e.g. connecting to a database), it is important to run them before binding to the port, i.e. before starting the web server.
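For example, here is a minimal Python sketch (standard library only; the connect_to_database function is an illustrative placeholder, and reading the port from a PORT environment variable is an assumption) that runs its initialization before binding the port:

```python
# Minimal sketch: run initialization first, then bind the port.
# connect_to_database is an illustrative placeholder; the PORT environment
# variable fallback is an assumption.
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

def connect_to_database():
    # Open connections, run migrations, warm caches, etc.
    pass

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    connect_to_database()                      # 1. everything needed before traffic
    port = int(os.environ.get("PORT", 8080))   # 2. only then bind the port:
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()  # the container becomes ready here
```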
For now, the HEALTHCHECK Docker directive has no impact on container readiness. In the future, the healthcheck will be customizable for your applications.
How can I configure access to a Private Network?
Scaleway Serverless Containers does not currently support Scaleway VPC or Private Networks, though this feature is under development.
To add network restrictions on your resource, consult the list of prefixes used at Scaleway. Serverless resources do not have dedicated or predictable IP addresses.
How can I attach Block Storage to a Serverless Container?
Scaleway Serverless Containers do not currently support attaching Block Storage. These containers are designed to be stateless, meaning they do not retain data between invocations. For persistent storage, we recommend using external solutions like Scaleway Object Storage.
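As an illustration, here is a minimal Python sketch that persists data to Scaleway Object Storage through its S3-compatible API. It assumes the boto3 package is installed, and the endpoint, region, bucket name, and credentials shown are placeholders.

```python
# Minimal sketch, assuming the boto3 package and Scaleway Object Storage
# credentials; the endpoint, region, bucket, and keys below are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.fr-par.scw.cloud",  # S3-compatible endpoint
    region_name="fr-par",
    aws_access_key_id="SCW_ACCESS_KEY",
    aws_secret_access_key="SCW_SECRET_KEY",
)

# Persist data produced by the (stateless) container to an external store.
s3.put_object(Bucket="my-bucket", Key="results/run-1.json", Body=b"{}")
```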
Why does my container have an instance running after deployment, even with min-scale 0?
Currently, a new container instance will always start after each deployment, even if there is no traffic and the minimum scale is set to 0. This behavior is not configurable at this time.