
Easily improve your speed and latency

Boost the performance and scalability of your applications with a powerful and managed in-memory caching solution.

A managed caching service in a few clicks

Stay focused on your core values: we handle the mundane tasks like updates, setup, and settings.

Open-source and managed by Scaleway

Keep your customers’ data in Europe with open-source solutions hosted in our regions in Paris, Amsterdam, and Warsaw.

*Redis is a registered trademark of Redis Ltd. Any rights therein are reserved to Redis Ltd. Any use by Scaleway is for referential purposes only and does not indicate any sponsorship, endorsement or affiliation between Redis and Scaleway.

Technical Specifications

  • Redis®: version 6.2

  • Memory: 2 to 64 GB DDR4

  • High availability: available with 3- to 6-node clusters

  • Processor: 2 to 16 vCPUs (AMD EPYC™ 7000 series)

  • Security and administration: Private Networks

  • Isolation: 1 node = 1 dedicated VM

Be faster than your competitors with Scaleway

[Benchmark chart: Redis® Ops/sec, Scaleway vs. DigitalOcean vs. OVH, October 2022]

Internal testing conducted by Scaleway in September 2022, in a stress-test scenario monitoring operations per second for 50 clients over four threads issuing GET and SET requests at a 1:10 ratio on 32-byte objects with Redis®.

Engineered for performance

The caching power of Redis®

Leverage our in-memory Managed Database for Redis® to store copies of your most frequently used data, like user data, sessions, and API responses.

Horizontal and vertical upscaling made easy

Your production environments can benefit from our high availability and managed clusters with 3 to 6 nodes. Scale up horizontally or vertically in just a few clicks.

Integrated and automated

Easily integrate with your workflow and other cloud products such as Private Networks. Compatible with Scaleway's Terraform provider plugin, validated by HashiCorp.

The Benefits of Choosing Scaleway

24/7 Support

Enjoy peace of mind with our 24/7 customer support. We ensure your infrastructure is always up and running.

Enriched experience

We offer a new experience with API access, Linux distributions, an intuitive console, and Terraform.

Easy-to-use console

Our user interface was created with developers in mind, to give you the best and most enjoyable experience managing your cloud projects.

True cloud ecosystem

Our cloud products are designed & built to work together, offering you a seamless, world-class cloud experience.


Frequently asked questions

Why is it ideal for Cache usage?

Based on Remote Dictionary Server technology, Scaleway Database for Redis® stores your data in the RAM of the underlying machine rather than on a disk (SSD/HDD). In other words, every request to read, insert, or update data in the database can be served from the fastest storage closest to your compute resource: memory.

Traditional databases like MySQL or PostgreSQL store data on a disk, which inevitably introduces I/O operations and adds latency to every operation. Redis® is known for delivering millisecond response times and high performance for millions of requests per second, empowering demanding workloads. Combining a powerful in-memory data store such as Redis® with managed resource setup, security, scaling, and maintenance makes Scaleway Database for Redis® a handy solution for improving the responsiveness of your application.

One of the most common ways to implement a cache is to store frequently accessed data in Redis® (and therefore in memory) and serve your application's requests from there whenever the data is available, instead of from your traditional database. If the data is not in memory, it can always be retrieved from the primary database.
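A minimal cache-aside sketch in Python, assuming the redis-py client; the endpoint, port, and password are placeholders for your own Database Instance settings, and fetch_user_from_primary_db is a hypothetical stand-in for a query against your SQL database.

import json
import redis  # redis-py client: pip install redis

# Placeholder connection details: substitute your Database Instance endpoint.
cache = redis.Redis(
    host="my-redis-instance.example",
    port=6379,
    password="my-secret-password",
    decode_responses=True,
)

CACHE_TTL = 300  # seconds an entry stays in memory before it expires


def fetch_user_from_primary_db(user_id: int) -> dict:
    # Hypothetical stand-in for a query against your primary SQL database.
    return {"id": user_id, "name": "example"}


def get_user(user_id: int) -> dict:
    # Cache-aside read: try Redis first, fall back to the primary database.
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)  # cache hit: served straight from memory

    user = fetch_user_from_primary_db(user_id)  # cache miss: query the database
    cache.set(key, json.dumps(user), ex=CACHE_TTL)  # populate the cache with a TTL
    return user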

What’s the difference between High Availability (HA) and Cluster mode?

A Redis® cluster contains a minimum of 3 nodes and up to 6 nodes. Each node contains a source and a replica. The cluster nodes use hash partitioning to split the keyspace into key slots. Each replica copies the data of a specific source and can be reassigned to replicate another source or be elected as a source node if needed. This is much better for scaling, as operations are spread across multiple nodes instead of going through a single entry point.

Two-node High Availability configurations are available with Redis® Database Instances. This configuration type allows you to create a standby node with an up-to-date replica of the database. If the main node fails for any reason, the standby can take over requests, reducing downtime.

The HA standby node is linked to the main node, using asynchronous replication. Asynchronous replication allows you to maintain good performance.
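To see which role a node currently holds and how far replication has progressed, you can query INFO replication from any Redis® client. The sketch below uses redis-py with placeholder connection details.

import redis  # redis-py client: pip install redis

# Placeholder endpoint: point this at your Database Instance.
r = redis.Redis(host="my-redis-instance.example", port=6379, password="my-secret-password")

info = r.info("replication")  # same data as the INFO replication command
print(info["role"])           # "master" on the main node, "slave" on the standby replica
if info["role"] == "master":
    print(info["connected_slaves"])    # number of attached replicas
    print(info["master_repl_offset"])  # replication offset the standby is catching up to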

What is the logic behind the cluster mode?

A Redis® cluster contains a minimum of 3 nodes and up to 6 nodes. Each node contains a source and a replica. The cluster nodes use hash partitioning to split the keyspace into 16,384 key slots. Each source in the cluster is responsible for a subset of those slots, and each replica copies the data of one of the sources. For example, on a three-node Redis® Database Instance cluster, each of the three Instances hosts a source and a replica of one of the other nodes’ sources. If one of the sources fails, the remaining nodes hold a vote and elect the replica that will be promoted as the new source. When the failing source rejoins the cluster, it automatically becomes a replica and begins copying the data of another node’s source.

You can scale your cluster horizontally up to six nodes. Below is an example of a configuration for a three-node cluster:

Instance A contains hash slots from 0 to 5,500
Instance B contains hash slots from 5,501 to 11,000
Instance C contains hash slots from 11,001 to 16,383
Each of the three Instances acts as a primary node and replicates one of the others as a secondary node.
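To illustrate how keys map to the slot ranges above, here is a small Python sketch of the CRC16-based slot calculation used by Redis® clusters (CRC16 of the key modulo 16,384); the owning_instance helper is hypothetical and simply mirrors the example ranges listed above.

def crc16_xmodem(data: bytes) -> int:
    # CRC16 (XMODEM variant) used by Redis® cluster key hashing.
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc


def key_slot(key: str) -> int:
    # HASH_SLOT = CRC16(key) mod 16384, as defined by the Redis® cluster spec.
    # If the key uses a hash tag such as "user:{42}:cart", only the part inside
    # the braces is hashed, so related keys land in the same slot.
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end > start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384


def owning_instance(key: str) -> str:
    # Map a slot to the three-node example layout described above (hypothetical helper).
    slot = key_slot(key)
    if slot <= 5500:
        return "Instance A"
    if slot <= 11000:
        return "Instance B"
    return "Instance C"


print(key_slot("user:1000"), owning_instance("user:1000"))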
