Public connectivity - best practices

Reviewed on 30 September 2024 • Published on 30 September 2024

This document sets out best practices for securing and optimizing public connectivity for your Scaleway resources.

Public vs private connectivity

Public vs private connectivity defines how resources are accessed and exposed over networks.

  • Public connectivity: Your resource has a public IP address and is reachable over the public internet. Anyone with the right credentials can access the resource via its public IP address, for example over an SSH connection, or by entering the IP address in a browser to retrieve any content it serves over HTTP.
  • Private connectivity: Your resource is reachable over an attached Private Network. The resource has a private IP address, but it can only be accessed via this address from within the VPC of the Private Network. Such a resource may or may not also have a public IP address.

Effectively managing IP addresses

Flexible IP addresses: definition

Public connectivity for Instances, Elastic Metal, Load Balancers and Public Gateways is facilitated by a flexible IP address.

  • A flexible IP address is a public IP address that you can attach and detach from the resource at will.
  • If you detach it, it returns to the pool of flexible IP addresses kept in your account for that product, and you can attach it to a different resource (or reattach it to the same one as before), as shown in the sketch after this list.
  • Flexible IP addresses are scoped to a single product and a single Availability Zone (AZ). If you create a flexible IP address for an Instance in PAR-2, you can move it to a different Instance in PAR-2, but not to an Instance in PAR-1, nor to an Elastic Metal server in any AZ.
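
As an illustration, the short Python sketch below lists the flexible IPs held in your account for one zone and detaches one of them using the Instance API over HTTPS. The endpoint paths, payload fields and response shape are assumptions based on the public API reference, so check the current API documentation before relying on them.

```python
# Minimal sketch: listing and detaching an Instance flexible IP with the
# Scaleway Instance API. Paths and payloads are assumptions drawn from the
# public API reference -- verify against the current documentation.
import os
import requests

API = "https://api.scaleway.com/instance/v1/zones/fr-par-2"  # flexible IPs are zone-scoped
HEADERS = {"X-Auth-Token": os.environ["SCW_SECRET_KEY"]}     # your API secret key

# List the flexible IPs held in your account for this zone.
ips = requests.get(f"{API}/ips", headers=HEADERS).json()["ips"]
for ip in ips:
    print(ip["id"], ip["address"], "attached to:", ip.get("server"))

# Detach a flexible IP from its current Instance; it returns to your pool
# and can be attached to any other Instance in the same zone.
ip_id = ips[0]["id"]
requests.patch(f"{API}/ips/{ip_id}", headers=HEADERS, json={"server": None})
```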

Other resource types generally facilitate public connectivity in other ways, e.g. via public endpoints that cannot be modified by the user. Public connectivity may be mandatory with no option to deactivate (e.g. for Apple Silicon), or optional (e.g. for Managed Database). See the specific documentation for the product in question to find out more.

Exploiting the benefits of flexible IPs

As flexible IP addresses can be moved between resources, they provide the following advantages:

  • Seamless failover and disaster recovery: If your Instance, for example, goes down, you can move its public IP to a different Instance in the same AZ to ensure the service remains available (see the sketch after this list).
  • Zero downtime during maintenance and migration: When you need to carry out updates, migrations or maintenance on a resource, you can temporarily move its public IP to a backup resource to avoid disruption for users.
  • IP retention and consistent endpoints: Maintaining the same IP avoids the need for frequent DNS or firewall rule updates, and makes it easier to manage your dynamic cloud environments.
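
As a hedged illustration of the failover pattern described above, the sketch below polls a health endpoint behind the flexible IP and, if the primary Instance stops responding, re-points the IP at a standby Instance in the same AZ. The API endpoint and payload are assumptions to verify against the Instance API reference; the URL and IDs are placeholders.

```python
# Hedged sketch of a failover loop: if the primary Instance stops answering
# health checks, re-point its flexible IP at a standby Instance in the same AZ.
import os
import time
import requests

API = "https://api.scaleway.com/instance/v1/zones/fr-par-2"
HEADERS = {"X-Auth-Token": os.environ["SCW_SECRET_KEY"]}

PRIMARY_URL = "http://203.0.113.10/healthz"          # health endpoint behind the flexible IP
FLEXIBLE_IP_ID = "11111111-2222-3333-4444-555555555555"   # placeholder
STANDBY_SERVER_ID = "66666666-7777-8888-9999-000000000000"  # placeholder

def primary_healthy() -> bool:
    try:
        return requests.get(PRIMARY_URL, timeout=3).status_code == 200
    except requests.RequestException:
        return False

while True:
    if not primary_healthy():
        # Attach the flexible IP to the standby Instance; clients keep using
        # the same public IP, so no DNS change is needed.
        requests.patch(
            f"{API}/ips/{FLEXIBLE_IP_ID}",
            headers=HEADERS,
            json={"server": STANDBY_SERVER_ID},
        )
        break
    time.sleep(10)
```

In practice you would run such a watcher from a machine that does not share the primary's failure domain, so that the failover logic stays available when the primary goes down.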

In the future, look out for even more improvements to our flexible IP offering, such as the ability to move flexible IPs between different types of products, and to manage all your public flexible IPs from your IPAM dashboard.

Limiting public connectivity, prioritizing Private Networks

We strongly recommend that you disable public connectivity on all of your Scaleway resources unless it is absolutely required. Attaching resources to Private Networks and limiting their communication to these networks brings the following advantages:

  • Minimized attack surface: Without a public IP address, the resource is not exposed directly to the internet, decreasing the risk of DDoS or brute force attacks, or unauthorized access.
  • Reduced cost: Public (flexible) IP addresses are billed, whereas Private Networks and the private IP addresses that attach resources to Private Networks are free of charge (except for Elastic Metal servers).
  • Improved latency: Communication between resources over a Private Network is generally faster, as it does not need to be routed through the public internet.

Depending on the resource type, public connectivity can be disabled by:

  • Toggling off Public connectivity when creating the resource
  • Detaching an existing flexible IP address (after resource creation)
  • Deactivating public connectivity (after resource creation)
Note

For some products, e.g. Apple Silicon, public connectivity cannot be disabled at any stage, and for other resources, e.g. Managed Database for Redis, public connectivity options cannot be modified after resource creation. Check the documentation for your specific product to learn more.
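
Where a product does allow it, disabling public connectivity at creation time is usually a matter of omitting the public IP and attaching the resource to a Private Network instead. The sketch below shows the idea for an Instance; the field names (dynamic_ip_required, private_nics) and payload shape are assumptions based on the Instance API reference, and the image, project and Private Network IDs are placeholders.

```python
# Hedged sketch: creating an Instance with no public connectivity, then
# attaching it to a Private Network. Verify field names and endpoints against
# the current Instance API reference before use.
import os
import requests

API = "https://api.scaleway.com/instance/v1/zones/fr-par-2"
HEADERS = {"X-Auth-Token": os.environ["SCW_SECRET_KEY"]}

server = requests.post(
    f"{API}/servers",
    headers=HEADERS,
    json={
        "name": "private-only-instance",
        "project": "<project-id>",            # placeholder
        "commercial_type": "DEV1-S",
        "image": "<image-id>",                # placeholder
        "dynamic_ip_required": False,         # do not reserve a public (flexible) IP
    },
).json()["server"]

# Attach the Instance to an existing Private Network via a private NIC.
requests.post(
    f"{API}/servers/{server['id']}/private_nics",
    headers=HEADERS,
    json={"private_network_id": "<private-network-id>"},   # placeholder
)
```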

Favor resources such as Public Gateways and Load Balancers to provide access to the public internet over the Private Network. This allows Instances and other attached resources to send and receive packets to the internet through a single, secure point of access. You can use the Public Gateway’s SSH bastion feature to connect to your resource via its private IP address.
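
For example, once the bastion is enabled on your Public Gateway, you can open an SSH session to an Instance's private IP address by jumping through the gateway's public IP. The sketch below simply wraps the standard ssh -J (ProxyJump) invocation; port 61000 is the documented default for the bastion at the time of writing, but check your gateway's configuration, and all addresses shown are placeholders.

```python
# Hedged sketch: reaching an Instance over its private IP by jumping through a
# Public Gateway's SSH bastion.
import subprocess

GATEWAY_PUBLIC_IP = "203.0.113.50"   # public IP of the Public Gateway (placeholder)
PRIVATE_IP = "172.16.4.2"            # private IP of the target Instance (placeholder)
USER = "root"

subprocess.run(
    [
        "ssh",
        "-J", f"bastion@{GATEWAY_PUBLIC_IP}:61000",  # jump through the bastion
        f"{USER}@{PRIVATE_IP}",
    ],
    check=True,
)
```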

Find out more about how to get the most from Private Networks in our dedicated documentation.

Implementing security controls

Different products offer different security features and controls to help place limits and restrictions on the traffic arriving over your resource’s public interface. We strongly recommend that you implement all available measures to minimize security risk and optimize the security of your resource. Some of the available security controls are listed below.

Instances: Security groups

Security groups act as firewalls, filtering public internet traffic on your Instances. They can be stateful or stateless, and allow you to create rules to drop or allow public traffic to and from your Instance. Find out how to create and configure security groups.
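
As a hedged sketch, the example below creates a stateful security group whose default policy drops all inbound public traffic, then adds a single rule allowing SSH from one trusted range. The endpoints and field names are assumptions based on the Instance API reference; verify them against the current documentation, and replace the placeholder IDs and ranges with your own values.

```python
# Hedged sketch: a stateful security group that drops all inbound public
# traffic except SSH from one admin IP range.
import os
import requests

API = "https://api.scaleway.com/instance/v1/zones/fr-par-2"
HEADERS = {"X-Auth-Token": os.environ["SCW_SECRET_KEY"]}

group = requests.post(
    f"{API}/security_groups",
    headers=HEADERS,
    json={
        "name": "ssh-only",
        "project": "<project-id>",            # placeholder
        "description": "Drop all inbound traffic except SSH",
        "stateful": True,
        "inbound_default_policy": "drop",     # default: drop inbound public traffic
        "outbound_default_policy": "accept",
    },
).json()["security_group"]

# Allow SSH (TCP/22) from a single trusted range only.
requests.post(
    f"{API}/security_groups/{group['id']}/rules",
    headers=HEADERS,
    json={
        "direction": "inbound",
        "action": "accept",
        "protocol": "TCP",
        "dest_port_from": 22,
        "ip_range": "198.51.100.0/24",        # placeholder admin range
    },
)
```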

Load Balancers: ACLs

Access Control Lists (ACLs) allow you to control traffic arriving at your Load Balancer's frontend, and set conditions to allow traffic to pass to the backend, deny traffic from passing to the backend, or redirect traffic. Conditions can be set based on the traffic's source IP address and/or HTTP path and header, or you can choose to carry out unconditional actions. ACLs allow you to build extra security into your Load Balancer, as well as letting you redirect traffic, for example from HTTP to HTTPS.

Learn how to use the ACL feature in our dedicated how-to and go deeper with our reference documentation.
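
For illustration, the sketch below adds an ACL to a Load Balancer frontend that denies traffic from a blocked subnet before it reaches the backend. The zoned /lb/v1 path, payload shape and field names are assumptions drawn from the Load Balancer API reference, so confirm them in the current documentation; the frontend ID and subnet are placeholders.

```python
# Hedged sketch: an ACL that denies traffic from a blocked subnet at the
# Load Balancer frontend.
import os
import requests

API = "https://api.scaleway.com/lb/v1/zones/fr-par-1"
HEADERS = {"X-Auth-Token": os.environ["SCW_SECRET_KEY"]}
FRONTEND_ID = "<frontend-id>"   # placeholder

requests.post(
    f"{API}/frontends/{FRONTEND_ID}/acls",
    headers=HEADERS,
    json={
        "name": "deny-blocked-subnet",
        "index": 0,                                # evaluated before higher indexes
        "action": {"type": "deny"},                # block matching traffic
        "match": {"ip_subnet": ["192.0.2.0/24"]},  # placeholder source range
    },
)
```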

Other controls

For resources such as Instances and Elastic Metal servers, you may wish to manually deploy third-party solutions in front of your public services to enhance security, for example:

  • Deploying a reverse proxy, e.g. Nginx (/tutorials/nginx-reverse-proxy/), and configuring it to enforce rate limits and throttle traffic. This helps to prevent abuse and DDoS attacks on your public-facing services (see the sketch after this list).
  • Installing a Web Application Firewall that can filter out malicious traffic such as requests containing attack patterns, or requests from blacklisted IPs.
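
To make the rate-limiting idea concrete, here is a minimal, application-level token-bucket sketch. In practice you would normally configure this in the reverse proxy itself (for example with Nginx's limit_req module); this Python version only illustrates the principle of refusing requests that exceed a per-client budget.

```python
# Minimal illustration of rate limiting with a token bucket.
import time

class TokenBucket:
    """Allow at most `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens based on the time elapsed since the last request.
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # over the limit: the caller should reject, e.g. with HTTP 429

# One bucket per client IP, e.g. 5 requests/second with bursts of up to 10.
buckets: dict[str, TokenBucket] = {}

def is_allowed(client_ip: str) -> bool:
    bucket = buckets.setdefault(client_ip, TokenBucket(rate=5, capacity=10))
    return bucket.allow()
```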

Handling traffic surges

Exposing your resource to the public internet can present risks of unexpected traffic surges. These may be malicious DDoS attacks, or legitimate surges that are simply the result of high demand. If correct mechanisms are not put in place to deal with high load, you risk facing downtime, service unavailability and performance degradation. A number of possibilities exist to help you handle this scenario:

Autoscaling

Scaleway currently offers Autoscaling in Public Beta. Autoscaling allows you to dynamically adjust the number of Instances within a given Instance group based on defined scaling policies. Scaling actions (scale up or down) are triggered when the monitored metric crosses the thresholds configured in your policies. Check out the API documentation.

Load Balancer

Placing a Scaleway Load Balancer in front of your backend servers allows you to expose multiple Instances through a single public IP. The Load Balancer distributes workload across the servers in the backend pool, ensuring scalable and continuously available applications, even during heavy traffic. You can manually add and remove servers from the backend pool as necessary, and configure the best balancing method for your particular needs. Find out more in the Load Balancer documentation.

Edge Services

Available for Load Balancers and Object Storage buckets, Scaleway Edge Services provides a caching service to reduce load on your origin. This means that content can be served directly to users from Edge Services’ servers, instead of from your Load Balancer or Object Storage bucket. Learn more about Edge Services.

Kubernetes Kapsule

Hosting your containerized application in a managed Kubernetes cluster brings many benefits in terms of scaling and managing fluctuating demand. Kubernetes can automatically adjust the number of running resources within defined limits, based on current demand. It also offers self-healing capabilities in the case of node failure. Find out more in the Scaleway Kubernetes documentation.

Monitoring and alerting via Scaleway Cockpit

We recommend that you use Scaleway Cockpit to monitor your resources. Cockpit stores metrics, logs and traces and provides a dedicated dashboarding system on Grafana for easy visualization. Different metrics are available for different resource types, with network traffic metrics available for many, enabling you to monitor connections over the public interface. You can also configure managed and pre-configured alerts for your resources, to receive warnings for potentially abnormal behavior or unusual network activity.

Read more about Scaleway Cockpit.
