Cloud FinOps: how to optimize your cloud bill

Hana Khelifa
7 min read

Should early-stage startups focus on optimizing cloud spending or growth? I recently read an article on TechCrunch that said:

In early product or go-to-market stages, optimizing cloud spending should be the last thing on a founder’s mind besides utilizing as much cloud resource credits as possible
—Shomik Ghosh, partner at Boldstart Ventures

But the issue isn’t as black and white as you might think.

There are other priorities when building a product before you get to cloud cost optimization. But grabbing as many cloud credits as you can possibly get may leave your whole enterprise on a rocky foundation: it can lead you to rely on beefy Instances that are way too large for your actual needs (because, why not? You have the credits, and you’d rather have too much than too little). But once those credits run out, you are left with an enormous machine, in terms of resources, that will probably be under-utilized.

Without proper planning, credits can become something of a drug – your product will need more and more over time, without regard for the actual business requirements. Long-term optimization initiatives need to happen before they run out, so you aren’t left craving them in production.

Of course, hyper-focusing on optimization is a job in and of itself, one most early-stage startups can’t afford to spend time on. But there are best practices you can implement to build an efficient baseline and guidelines for your future growth. Just like your product, your infrastructure will evolve. And that evolution will require continuous optimization.

When it comes to product, startups constantly test, iterate, learn, get feedback, and improve. You should bring that same mentality to your infrastructure. When it comes to optimizing your monthly cloud bill, you can take action across your organization and leverage automation tools, Serverless, and your storage to help close costly loopholes.

1. Encourage collaborative behavior

The first step to optimizing your bill is being able to predict and fully understand it, which boils down to three pillars:

  • Visibility
  • Control
  • Accountability

Multiple tools are available to enhance the visibility and governance of your infrastructure layers. But the real first factor is people. You won’t be able to implement a successful and optimized strategy without a commitment from yourself and your team that ensures visibility, accountability, and control over your infrastructure.

Consult with all the teams who utilize cloud resources to understand how they use your current cloud services, their needs, and their frustrations. Which resources or services are actually unused, unnecessary, or unloved?

Once you understand more about the current situation, consider how you can track idle resources. After all, you’ll need to understand what your team is currently using to avoid paying unnecessarily. Another couple of questions to consider: What’s the process when it comes time to shut down a resource? And can anyone spin up an Instance, or is it limited to your platform team?
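Tracking idle resources can start very simply: pull utilization metrics and flag anything sitting below a floor you choose. Here is a minimal sketch of that idea; the inventory data and the 10% CPU threshold are illustrative assumptions, not figures from any real account.

```python
# Sketch: flag potentially idle Instances from their average CPU utilization.
# The inventory and the 10% threshold below are illustrative assumptions.

IDLE_CPU_THRESHOLD = 10.0  # percent; tune this to your own workloads

def find_idle_instances(instances):
    """Return the names of Instances whose average CPU sits below the threshold."""
    return [
        inst["name"]
        for inst in instances
        if inst["avg_cpu_percent"] < IDLE_CPU_THRESHOLD
    ]

# In practice this data would come from your monitoring stack.
inventory = [
    {"name": "web-1", "avg_cpu_percent": 46.0},
    {"name": "staging-db", "avg_cpu_percent": 2.5},
    {"name": "old-demo", "avg_cpu_percent": 0.4},
]

print(find_idle_instances(inventory))  # ['staging-db', 'old-demo']
```

A report like this, reviewed regularly with the teams that own each resource, is often enough to catch forgotten staging machines before they linger for months.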

Keep in mind, cloud optimization is an always-present, evolving effort, so you need all parties on-board with the culture of cost awareness.

Build collective patterns, techniques, and blueprints for effective deployment in your organization, and share that knowledge. Even the smallest changes can add up to a big win.

2. Get comfortable with your cost visibility tools

Once you understand your costs and, importantly, their origins, you can start putting controls in place to optimize spending. To help with this, we provide cost management tools to give you the visibility and insights you need to keep up.

Estimated Cost Calculator

Before you finalize an order in the console, the Estimated Cost Calculator is displayed. It appears whenever you create a resource and shows you how much the resource will cost over a set period of time.
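The arithmetic behind such an estimate is simple enough to reproduce yourself when budgeting. The €0.01/hour price below is a made-up placeholder, not an actual Scaleway tariff:

```python
# Sketch of the projection an estimated-cost view performs: hourly price
# times hours in the billing period. The price here is a made-up example.

def estimated_cost(hourly_price_eur, hours):
    """Projected cost in euros over the given number of hours."""
    return round(hourly_price_eur * hours, 2)

# A 30-day month is 720 hours.
print(estimated_cost(0.01, 720))  # 7.2
```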

Estimated Cost view

The Current Consumption Dashboard

The Current Consumption dashboard is the first thing you see when you log in to your Scaleway account. It helps monitor and correct anomalies. The dashboard shows your resources by product category (Compute, Network, Storage, etc.) so you can easily keep track of everything.

Current Consumption view

Billing alert

You can set up an alarm system to get notified as soon as your consumption exceeds a defined amount. You can opt to receive the alert by SMS, email or API webhook.

The budget is the limit of your expenses, in euros. The threshold is a percentage of this limit. You define both in advance in the console.

For example, if you have defined a budget of 1000€, you can configure your billing alert so you receive an email notification once you have consumed 50% of this budget. In this case, you’ll receive an email once you consume 500€ of resources.
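The budget/threshold logic above is worth sanity-checking when you set up several alert levels at once. A tiny sketch, using the article’s 1000€ budget (the 80% and 100% thresholds are extra illustrative levels):

```python
# Worked example of budget/threshold alerts: at which euro amounts does
# each alert fire? The 80% and 100% levels are illustrative additions.

def alert_amounts(budget_eur, thresholds_percent):
    """Euro amounts at which each alert triggers."""
    return [budget_eur * t / 100 for t in thresholds_percent]

print(alert_amounts(1000, [50, 80, 100]))  # [500.0, 800.0, 1000.0]
```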

Billing Alerts view

3. Rightsize your compute resources

Rightsizing refers to the process of finding an optimal cloud configuration that maximizes your resources at the lowest cost. In other words, it is about finding unused or idle resources. Every cloud provider offers a large range of Instances suited for a variety of workloads.

The process of matching Instance sizes to your workload performance is challenging, and you may not get it right the first time – kind of like when you have to choose the proper Tupperware for leftovers. You’ll need your application up and running to really understand the real workload before you adjust. But don’t forget to do regular checks to ensure you aren’t leaving compute power idle because you’ve committed to oversized Instances.

One of the most effective ways to build an optimized infrastructure when it comes to cost and profitability is to proactively monitor your compute utilization, and rightsize it when needed. To make that as convenient as possible, start automating your provisioning as soon as you can.
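One way to make those regular checks concrete is to turn observed utilization into a suggested size. The sketch below keeps roughly 30% headroom above the observed peak; both the headroom figure and the example numbers are assumptions for illustration, not a rule.

```python
# Minimal rightsizing sketch: given current vCPU count and observed peak
# CPU utilization, suggest a vCPU count with ~30% headroom. The headroom
# factor and the example figures are illustrative assumptions.
import math

HEADROOM = 1.3  # keep ~30% spare capacity above the observed peak

def suggested_vcpus(current_vcpus, peak_cpu_percent):
    needed = current_vcpus * (peak_cpu_percent / 100) * HEADROOM
    return max(1, math.ceil(needed))

# 16 vCPUs peaking at 20% utilization -> ~4.16 vCPUs needed -> suggest 5.
print(suggested_vcpus(16, 20))  # 5
```

Run against your monitoring data, a heuristic like this quickly surfaces the Instances where you are paying for compute you never touch.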

Leveraging infrastructure-as-code to do so, with Terraform for example, is one of the principles of running efficiently on the cloud, and it will help you avoid unnecessary manual tasks (and errors).
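As a sketch of what that looks like in practice, here is a minimal Terraform declaration of an Instance. The resource name and the Instance type are illustrative; the point is that resizing becomes a one-line, reviewable change instead of a manual console click.

```hcl
# Sketch: an Instance declared as code, so its size lives in version control.
# Name, type, and image below are illustrative choices.
resource "scaleway_instance_server" "app" {
  name  = "app-server"
  type  = "DEV1-S"     # rightsizing later is a one-line change here
  image = "ubuntu_jammy"
}
```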

4. Serverless makes pay-per-use possible

Serverless Functions

Serverless is a model in which the code is executed on-demand. That code can represent an entire application, but the most common use of Serverless Functions is to integrate application functionalities as a unit. You can add multiple functions to your application or your software and use the same function in various applications.

Serverless's "on-demand" functionality enables infrastructures to be more flexible, and you only pay for what you use. You send us your code in the form of a function, and we set it up and scale it for you when needed.

For example, let’s say you have an application where the user needs to upload an image. You could set up triggers to execute a function whenever the user uploads an image. With Serverless Functions, that function stops running as soon as the task is complete, and gets triggered again when a new image is uploaded.

Without Serverless Functions, you would need all the parts of your application to run constantly, instead of configuring specific components to run only when needed, thus saving resources.
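A minimal sketch of the upload-triggered function described above. The handler shape follows the common `handle(event, context)` convention for Python serverless functions; the event payload layout and the thumbnail logic are stubbed assumptions, since the real image processing is beside the point.

```python
# Sketch of an upload-triggered serverless function. The event payload
# shape and the "thumbnail" stub are illustrative assumptions.

def make_thumbnail(image_name):
    # Placeholder for real image processing (e.g. with Pillow).
    return f"thumb_{image_name}"

def handle(event, context):
    # We assume the trigger delivers the uploaded object's name in the
    # event body; the exact shape depends on how the trigger is wired.
    image_name = event["body"]["object"]
    thumbnail = make_thumbnail(image_name)
    return {"statusCode": 200, "body": {"created": thumbnail}}

# Simulating one invocation locally:
print(handle({"body": {"object": "cat.png"}}, None))
# {'statusCode': 200, 'body': {'created': 'thumb_cat.png'}}
```

Between invocations, nothing runs and nothing is billed, which is exactly the pay-per-use property the section describes.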

Serverless Containers

Serverless Containers provides a scalable managed compute platform to run your application. We will set up your containers so they can run your application no matter the language of your app or library.

You can focus on building applications and accelerating deployment and leave the container deployment, scheduling, and cluster configuration to Serverless Containers.

As you can easily add or remove resources, you'll have absolute control over your resource consumption, and therefore your bill. Containers are only executed when an event is triggered, allowing users to optimize and save money when no code is running.

5. Adapt the storage to the use case

We often see organizations using the same storage solution across their application to simplify the setup of their infrastructure. In fact, multiple types of storage exist, each with its own set of characteristics, latency, and pricing.

For instance, if you want to send data to a deep archive, you may want to consider Cold Storage. That’s an Object Storage solution for data you wouldn't need immediate access to – there’s often a certain amount of latency to the first byte. For example, the Scaleway Glacier storage requires 24-48 hours of latency to the first byte. On the other hand, you pay only for what you store, with no minimum commitments, and it has the lowest storage cost for archived data.

Adapting your storage solution to your use case, especially when you handle a massive amount of data, can lead to significant cost savings.
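A back-of-the-envelope comparison makes the savings tangible. The per-GB prices below are hypothetical placeholders, not real Scaleway pricing; plug in the current price list before making any decision.

```python
# Hot vs. archive storage for rarely-touched data. Prices are hypothetical
# placeholders, NOT real Scaleway tariffs.

HOT_EUR_PER_GB_MONTH = 0.012      # hypothetical "standard" object storage
ARCHIVE_EUR_PER_GB_MONTH = 0.002  # hypothetical cold/archive class

def monthly_cost(gb, eur_per_gb):
    return round(gb * eur_per_gb, 2)

archive_gb = 5000  # e.g. old logs and backups you almost never read
print(monthly_cost(archive_gb, HOT_EUR_PER_GB_MONTH))      # 60.0
print(monthly_cost(archive_gb, ARCHIVE_EUR_PER_GB_MONTH))  # 10.0
```

Even with made-up numbers, the shape of the result holds: for data you rarely read, the archive class wins by a wide margin, at the cost of retrieval latency.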

Local Storage vs Block Storage

6. Autoscale on Kubernetes

Building an application that aims to scale? This is where Kubernetes, and more specifically, Kubernetes Autoscaling, comes in handy.

Kubernetes provides multiple layers of autoscaling functionality: Pod-based scaling with the Horizontal Pod Autoscaler (HPA) and the Vertical Pod Autoscaler (VPA), as well as node-based with the Cluster Autoscaler. It automatically scales up your cluster whenever needed, and scales it back down when the load is lower. These layers ensure each pod and cluster has the right performance to serve your current needs.

The Autoscaling feature provided by Kubernetes Kapsule allows you to set parameters to manage your costs and configure your cluster with all the flexibility you need.
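As a concrete sketch of pod-based scaling, here is a Horizontal Pod Autoscaler manifest targeting 70% average CPU across 2 to 10 replicas. The Deployment name, replica bounds, and CPU target are illustrative choices, not recommendations.

```yaml
# Sketch of an HPA: keep average CPU around 70%, scaling the "web"
# Deployment between 2 and 10 replicas. Names and numbers are illustrative.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

Combined with the Cluster Autoscaler at the node level, this means you pay for extra capacity only while the load actually requires it.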

7. Get hyper-personalized infrastructure optimization

The advice we’ve listed so far could be implemented by anyone looking to optimize their infrastructure, but all architectures are unique and, at a certain point, require a more personalized approach. With that reality in mind, we updated our support plan to include an architectural review by a Solutions Architect. This technical review has plenty of benefits, most notably showing you how to optimize your cloud operations for cost-efficiency.

8. Leverage Scaleway's Compute ranges

If you are looking for the proper ally to deploy your project on, you might want to look into our range of Virtual Instances. We divided it into four main categories to help you navigate among all the different specs.


The Instances from the Learning range are perfect for small workloads and simple applications. They are built to host small internal applications, staging environments, or low-traffic web servers.


The Cost-Optimized range balances compute, memory, and networking resources. These Instances can be used for a wide range of workloads: scaling a development and testing environment, but also Content Management Systems (CMS) or microservices. They're also a good default choice if you need help determining which Instance type is best for your application.


The Production-Optimized range, which includes Enterprise Instances, offers the highest consistent performance per core to support real-time applications. In addition, their computing power makes them generally more robust for compute-intensive workloads.


Expanding the Production-Optimized range, the Workload-Optimized range will be launched in the near future. It will provide the same highest consistent performance as the Production-Optimized Instances, but with the added flexibility of additional vCPU:RAM ratios, so they fit your application’s requirements perfectly without wasting any vCPU or GB of RAM resources.

You can find more information, and our roadmap of upcoming products for 2023, right here.

Final thoughts

Optimizing your cloud operations does not have to be complicated, but it requires a disciplined approach that establishes good rightsizing habits and analytics-driven insights and action.

And obviously, one of the best ways to optimize your cloud bill is to choose a cloud provider with a compelling price-performance ratio.
