Choosing between shared and dedicated resources
When creating a Cloud Essentials for OpenSearch deployment, selecting the appropriate node type is crucial for optimizing performance and cost. Two CPU provisioning models are available: shared and dedicated vCPUs.
Understanding the difference between these two models is key to creating a deployment that matches your needs.
Comparison of shared and dedicated offers
Feature | Shared vCPU | Dedicated vCPU |
---|---|---|
CPU access | Physical cores shared across multiple deployments | Exclusive access to physical CPU cores |
Isolation | Strong virtual isolation, no data sharing between deployments | Full physical resource isolation |
Performance consistency | Variable – depends on other workloads on the host | High – consistent and predictable performance |
Resource contention risk | Possible during peak usage | None |
Latency sensitivity | Not suitable for latency-sensitive apps | Ideal for latency-critical applications |
Cost | Lower | Higher |
Use case | Dev/staging, personal projects, blogs, low-traffic sites | Production apps, eCommerce, CI/CD, ML, real-time processing |
Best for | Non-critical or experimental workloads | Business-critical, latency-sensitive or high-performance workloads |
Shared offers
Nodes with shared vCPUs are cost-effective compute units whose CPU resources are shared among multiple deployments. Each deployment gets its own vCPUs, but those vCPUs are scheduled on physical cores that also serve other deployments on the same host.
As a result, deployments compete for physical CPU time: during peak demand from other deployments on the same host machine, your workloads might temporarily slow down due to CPU contention (also known as "CPU steal").
While physical CPU threads are shared between deployments, vCPUs are dedicated to each deployment, and no data can be shared or accessed between deployments through this setup.
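To see CPU steal in practice, the snippet below offers a minimal sketch, assuming a Linux node you can open a shell on: it samples the `steal` counter in `/proc/stat` twice and reports the share of CPU time reclaimed by the hypervisor, the same value `top` shows in its `%st` column.

```python
# Minimal sketch: estimate CPU steal on a Linux node by sampling /proc/stat.
# Assumes shell access to the node; the result matches top's %st column.
import time

def read_cpu_times():
    """Return the aggregate CPU counters from the first line of /proc/stat."""
    with open("/proc/stat") as f:
        fields = f.readline().split()[1:]  # drop the leading "cpu" label
    return [int(v) for v in fields]

def steal_percent(interval=5.0):
    """Percentage of CPU time stolen by the hypervisor over `interval` seconds."""
    before = read_cpu_times()
    time.sleep(interval)
    after = read_cpu_times()
    deltas = [b - a for a, b in zip(before, after)]
    total = sum(deltas)
    return 100.0 * deltas[7] / total if total else 0.0  # field 8 of /proc/stat is "steal"

if __name__ == "__main__":
    print(f"CPU steal over the sampling window: {steal_percent():.2f}%")
```

A steal percentage that stays consistently above zero on a shared offer indicates that neighboring deployments are competing for the same physical cores.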
Typical use cases
- Development and staging environments
- Small and non-critical production environments
- Applications tolerant to occasional performance variability
- Experimental or proof-of-concept projects
- Small-scale applications with limited traffic
Summary
- Shared offers provide an affordable solution for non-critical workloads.
- CPU performance is less predictable and may fluctuate depending on neighboring workloads ("noisy neighbors").
- During peak usage, your workloads might experience temporary slowdowns due to CPU steal.
Dedicated offers
Nodes with dedicated vCPUs provide exclusive access to physical CPU cores, which ensures consistent and predictable performance at all times. Dedicated offers are ideal for applications that require high CPU utilization and low latency.
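One way to observe this difference yourself, as a minimal sketch not tied to any particular offer, is to time the same CPU-bound task repeatedly and compare the run-to-run spread: on a dedicated node the spread should stay small, while on a busy shared host it can widen noticeably.

```python
# Minimal sketch: measure run-to-run variability of a fixed CPU-bound task.
# A wide gap between the median and p95 timings suggests CPU contention.
import statistics
import time

def cpu_bound_task(n=200_000):
    """A deterministic, CPU-only workload used as the timing probe."""
    total = 0
    for i in range(n):
        total += i * i
    return total

def measure(runs=50):
    durations = []
    for _ in range(runs):
        start = time.perf_counter()
        cpu_bound_task()
        durations.append(time.perf_counter() - start)
    return durations

if __name__ == "__main__":
    d = measure()
    print(f"median: {statistics.median(d) * 1e3:.2f} ms")
    print(f"p95:    {sorted(d)[int(len(d) * 0.95)] * 1e3:.2f} ms")
    print(f"stdev:  {statistics.stdev(d) * 1e3:.2f} ms")
```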
Typical use cases
- Production applications with high CPU demands
- Machine learning and scientific computing
- Real-time data processing and analytics
- High-traffic, high-utilization environments
Summary
- Dedicated vCPU allocation ensures consistent and predictable performance.
- No risk of performance degradation due to neighboring workloads.
- Dedicated offers are more expensive than shared vCPU deployments, but offer guaranteed CPU performance.
Choosing the right configuration
Choose a shared vCPU offer if:
- You are running non-critical or experimental workloads
- Budget is a priority over performance consistency
Choose a dedicated vCPU offer if:
- Your application requires stable, predictable CPU performance
- You are in a production environment with strict performance requirements
Consider your needs and workload requirements to choose the best vCPU provisioning option for your Cloud Essentials for OpenSearch deployment.
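To ground that decision in data, you can look at how much CPU your existing cluster actually consumes. The sketch below assumes the standard OpenSearch nodes stats API is reachable; the cluster URL and credentials are placeholders you would replace with your own. It queries `_nodes/stats/os` and prints per-node CPU utilization and load average.

```python
# Minimal sketch: inspect per-node CPU utilization of an existing OpenSearch
# cluster via the nodes stats API, to help judge whether dedicated vCPUs are
# warranted. The cluster URL and credentials below are placeholders.
import requests

CLUSTER_URL = "https://my-opensearch.example.com:9200"  # placeholder
AUTH = ("admin", "change-me")                           # placeholder

def node_cpu_report():
    resp = requests.get(f"{CLUSTER_URL}/_nodes/stats/os", auth=AUTH)
    resp.raise_for_status()
    for node in resp.json()["nodes"].values():
        cpu = node["os"]["cpu"]
        print(f"{node['name']}: cpu={cpu['percent']}% "
              f"load_1m={cpu.get('load_average', {}).get('1m')}")

if __name__ == "__main__":
    node_cpu_report()
```

Sustained high CPU utilization, or a load average close to the node's vCPU count, is a sign that a dedicated offer is worth considering.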