As Scaleway’s ecosystem grows, we and our customers alike are building increasingly complex cloud architectures on Scaleway services. As a result, building, securing, and running APIs at scale is becoming a time-consuming job for developers. We have decided to begin developing an API Gateway that relieves developers of infrastructure and security management, so that they can focus on API development.
To help us build the best product we can, we are running an early access phase for the future API Gateway product; you can think of it as a pre-beta of sorts. This means we’ll be releasing an initial prototype and developing in the open, taking on internal and external feedback.
We are making the early access API Gateway, including a CLI and some basic configuration and management utilities, available on GitHub. We are looking for a group of engaged users to help us in this process, to try out the prototype, provide feedback on its functional scope, and let us know what the final product should look like.
What we mean by “API Gateway”
An API gateway is a system that provides a single entry point to a distributed application, which may itself be made up of multiple underlying services. The API gateway centralizes the configuration, routing, and security needed to handle all incoming requests, freeing the underlying components from these concerns.
Concretely, an API gateway may provide multiple services:
- Routing: receiving requests from a single entry point and forwarding them to the relevant upstream system (e.g., a container or a serverless function)
- High availability and scaling: load-balancing requests over a resilient, distributed set of gateway nodes
- Authentication: verifying that the sender of an API request is who they claim to be
- Authorization: integrating with underlying systems’ own authorization mechanisms to permit or deny access to certain endpoints at the highest level
- Centralized configuration: handling cross-cutting concerns such as Cross-Origin Resource Sharing (CORS) and Transport Layer Security (TLS), avoiding the need to configure these on each individual endpoint
- Monitoring and alerting: giving a complete overview of the status of the traffic into the system, including error rates and latencies
- Rate limiting: protecting the underlying application from malicious users and smoothing spikes in traffic
- Protocol adaptation: supporting multiple protocols and clients through the single gateway and translating between protocols (e.g., between REST and gRPC)
- Load balancing: balancing traffic across multiple replicas of an upstream system
- Caching: reducing load on upstream services and reducing latency for API calls
We have spent the last few months investigating API gateways and finding an approach that we think can work for both our external and internal users. We followed a few key principles:
- clearly separate key requirements from nice-to-haves;
- use open-source technologies wherever possible;
- take advantage of the Scaleway ecosystem for deployment and scaling.
In addition, we recognize that an API gateway has a broad set of features and can be a daunting prospect to configure. For this reason, we have focused on building something that makes it easy to do the basics (routing, authentication, monitoring, and alerting) while giving users the flexibility to opt in to more complex features (authorization, protocol adaptation, etc.).
Building our API Gateway prototype
We have settled on an approach of running Kong Gateway on Scaleway Serverless Containers backed by a Scaleway Managed Relational Database. We have built a command-line interface (CLI) that provides a “one-click” deployment, along with some utilities for configuring and managing the gateway.
Kong Gateway is a popular open-source API gateway with production-grade performance and a rich plugin ecosystem. Kong fits our requirements perfectly from both a technical and product standpoint: we can offer an easy-to-use standard Kong deployment using Scaleway infrastructure that can be integrated with the rest of the Scaleway ecosystem while giving users access to the full Kong ecosystem to customize their own gateways as they see fit.
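To make this concrete, here is a minimal sketch of what a Kong declarative configuration for a single routed, rate-limited service can look like. The service name, upstream URL, and limits below are illustrative examples, not defaults of the early access product:

```yaml
# kong.yml — minimal Kong declarative config (3.x format).
# All names, URLs, and limits are hypothetical.
_format_version: "3.0"
services:
  - name: orders-api
    url: https://orders.internal.example.com   # upstream, e.g. a Serverless Container
    routes:
      - name: orders-route
        paths:
          - /orders
    plugins:
      - name: rate-limiting    # bundled Kong plugin
        config:
          minute: 60           # at most 60 requests per minute per client
          policy: local
```

In a database-backed deployment like the one described here, the same objects are typically created through Kong’s Admin API rather than a static file, but the shape of the configuration is the same.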
The early access gateway includes the CLI with one-click deployment and route management, as well as extended support for configuring CORS and JWT.
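In plain Kong terms, the CORS and JWT features map onto the bundled `cors` and `jwt` plugins. As a sketch, enabling them in declarative form can look like the following, with illustrative origins and route names:

```yaml
# Plugin entries referencing gateway objects by name; all values are examples.
plugins:
  - name: cors
    config:
      origins:
        - https://app.example.com   # browser origin allowed to call the API
      methods:
        - GET
        - POST
  - name: jwt              # require a valid JWT on this route only
    route: orders-route
```

Note that Kong’s `jwt` plugin also expects consumers with JWT credentials to be configured; the exact interface the early access CLI exposes for this may differ.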
Moreover, the CLI deployment automatically configures the Scaleway Cockpit and deploys all other resources to the user’s Scaleway account. Users also have direct access to the Kong Admin API via a private Serverless Container, letting them configure Kong itself and install plugins from the Kong plugin library.
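Because the Admin API is reachable (via the private container), gateway objects can also be managed with plain HTTP calls. A sketch, where the Admin API host and all object names are placeholders, and 8001 is Kong’s default Admin API port:

```shell
# Create a service pointing at an upstream (host and names are placeholders)
curl -X POST http://<admin-api-host>:8001/services \
  --data name=orders-api \
  --data url=https://orders.internal.example.com

# Attach a route to the service
curl -X POST http://<admin-api-host>:8001/services/orders-api/routes \
  --data name=orders-route \
  --data "paths[]=/orders"

# Enable a plugin from the Kong plugin library on the service
curl -X POST http://<admin-api-host>:8001/services/orders-api/plugins \
  --data name=rate-limiting \
  --data config.minute=60
```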
Next steps: developing a cloud-ready architecture
Our prototype is currently just that, a prototype. To release a fully managed cloud-scale product, we need to improve the current architecture in several ways:
The current deployment runs on Serverless Containers, which support auto-scaling by default. However, container cold starts prevent us from scaling down to zero, as they would introduce unacceptable latency for an API gateway. For now, we keep at least one container running at all times, but this is not in line with a true serverless product. We will be working on reducing these cold starts to make scale-to-zero feasible.
The Scaleway Serverless SQL Database product is still in development, but once it’s ready, we would like to move from a managed relational database to a serverless database. This will make the storage layer of the gateway scale-to-zero, in addition to the compute layer running on Serverless Containers.
The added latency from routing requests through an API gateway needs to be absolutely minimal, so we will be profiling and optimizing the request flow from the Scaleway load balancer to the underlying containers.
Multi-region and AZ
Kong’s architecture allows for distribution across multiple regions and AZs, connected via a single database. To improve resiliency and performance, we will be working on distributing the gateway components across multiple Scaleway regions and AZs.
In addition to adding scale-to-zero on the gateway components, we will be looking to further optimize the resources allocated to both the gateway containers and the underlying database. Clearly, we want to maintain performance and scalability but need to avoid allocating idle resources.
Deep integration with the Cloud ecosystem
Since our users will be routing API requests to other Cloud resources hosted by Scaleway, we want to create a great user experience (UX) for cross-product functionality between the API gateway and other Cloud resources, such as compute, storage, messaging, and CDN products.
The path from early access to fully-managed API Gateway
The early access version of the gateway offers some of the “must-have” features of an API gateway. However, in addition to the architectural shortcomings listed above, it is still far from a fully-managed service in a product sense. The final API Gateway product should be an easy-to-use solution that developers can trust to serve high traffic for their business-critical applications.
From a User Experience (UX) standpoint, the very first version of the final product should allow developers to easily secure APIs and route them toward any Scaleway resource without worrying about the underlying server infrastructure. We want to be clear that we’re not there yet, hence the early access version.
The fully managed Scaleway API Gateway will also integrate tightly with other Scaleway ecosystem products, such as Serverless Functions, Containers, and Storage, so that creating an API and routing it to these resources can be done in one click.
The final product will also be accompanied by documentation, example use cases, and recommended best practices. For a product as complex and business-critical as an API gateway, documentation is essential for users to get the most out of their gateways and the rest of the Scaleway ecosystem. This documentation will take the form of “getting started” guides, example applications, and open-source templates.
Early access vs. a “normal” beta
This phase of the API Gateway’s development is open to users much earlier than our standard betas. A beta is usually a near-final version of the product, missing only a few finishing touches. An early-access product, by contrast, is something we are developing in the open; it may change significantly before the beta and General Availability.
The objective of the early access is to get user feedback on the functional scope and desired user experience. The current early access version of the API Gateway is our best guess at what the basic features of the final product will look like, and it may even meet the needs of some non-critical use cases already. However, we estimate that a full release of the final API Gateway product is roughly a year away, in the first half of 2024.
In general, early access is great for users for several reasons:
- they can tell us what they need from a much earlier stage, from the smallest documentation point to the biggest functional scope decision;
- they can start using the product early, allowing them to do hands-on experiments integrating with their own systems;
- they can review, fork, and contribute to the code, allowing them to add customizations and features.
Early access is also great for us because:
- we know that what we’re building fits our users’ needs, rather than discovering nasty surprises at the beta stage;
- we can give users a deeper insight into our architectural decisions and codebase, something we can’t do when developing other proprietary products;
- having users on the product from very early on avoids “big bang” releases, since usage grows gradually.