Canary Deployment is a progressive release strategy where a new application version is gradually introduced to a small subset of users or traffic. This incremental rollout allows teams to monitor system behavior, collect metrics, and ensure stability before a full-scale deployment. The approach significantly reduces risk, especially in production environments.
Argo Rollouts is a Kubernetes-native progressive delivery controller that facilitates advanced deployment strategies. It extends Kubernetes capabilities by offering features like Canary, Blue-Green Deployments, Analysis Runs, Experimentation, and Automated Rollbacks. It integrates with observability stacks for metric-based health checks and provides CLI and dashboard-based control over application delivery.
Key Concepts:
- Rollout: A custom resource definition (CRD) in Kubernetes that replaces standard Deployment resources, enabling advanced deployment strategies such as blue-green and canary.
- Canary Steps: A series of incremental traffic shifting actions, such as directing 25%, then 50% of traffic to the new version.
- Pause Steps: Introduce wait intervals for manual or automatic validation before progressing to the next canary step.
Argo Rollouts supports the canary deployment strategy to roll out a Deployment and control traffic through the Gateway API plugin. In ACP, you can use ALB as the Gateway API provider to implement traffic control for Argo Rollouts.
Start by defining the "stable" version of your application. This is the current version that users will access. Create a Kubernetes Deployment with the appropriate number of replicas, the container image version (e.g., `hello:1.23.1`), and proper labels such as `app=web`.
Use the following YAML:
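The manifest below is a minimal sketch; the Deployment name `web`, the replica count, and the container port are assumptions, while the image and labels follow the values above:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    app: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: hello:1.23.1    # stable image version
          ports:
            - containerPort: 80  # port assumed
```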
Explanation of YAML fields:
- `apiVersion`: The version of the Kubernetes API used to create the resource.
- `kind`: Specifies that this is a Deployment resource.
- `metadata.name`: The name of the Deployment.
- `spec.replicas`: Number of desired pod replicas.
- `spec.selector.matchLabels`: Defines how the Deployment finds which pods to manage.
- `template.metadata.labels`: Labels applied to pods, used by Services to select them.
- `spec.containers`: The containers to run in each pod.
- `containers.name`: Name of the container.
- `containers.image`: Docker image to run.
- `containers.ports.containerPort`: Port exposed by the container.

Apply the configuration using `kubectl`:
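For example, assuming the manifest is saved as `web-deployment.yaml`:

```bash
kubectl apply -f web-deployment.yaml
```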
This sets up the production environment.
Alternatively, you can use a Helm chart to create the deployments and services.
Create a Kubernetes Service that exposes the stable deployment. This Service forwards traffic to the pods of the stable version based on matching labels. Initially, the Service selector targets pods labeled with `app=web`.
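A minimal sketch; the Service name `web-stable` (referenced later by the Rollout's `stableService`) and the ports are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-stable
spec:
  selector:
    app: web            # initially selects the pods of the stable version
  ports:
    - protocol: TCP
      port: 80          # port exposed by the Service
      targetPort: 80    # container port receiving the traffic
```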
Explanation of YAML fields:
- `apiVersion`: The version of the Kubernetes API used to create the Service.
- `kind`: Specifies this resource is a Service.
- `metadata.name`: Name of the Service.
- `spec.selector`: Identifies pods to route traffic to, based on labels.
- `ports.protocol`: The protocol used (TCP).
- `ports.port`: Port exposed by the Service.
- `ports.targetPort`: The port on the container to which the traffic is directed.

Apply it using:
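For example, assuming the manifest is saved as `web-stable-service.yaml`:

```bash
kubectl apply -f web-stable-service.yaml
```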
This allows traffic to be routed to the stable deployment.
Create a Kubernetes Service that exposes the canary deployment. This Service forwards traffic to the pods of the canary version based on matching labels. Initially, the Service selector also targets pods labeled with `app=web`.
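A minimal sketch; the Service name `web-canary` (referenced later by the Rollout's `canaryService`) and the ports are assumptions:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-canary
spec:
  selector:
    app: web            # same selector initially; the Rollout controller later narrows it to the canary pods
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
```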
Explanation of YAML fields:
- `apiVersion`: The version of the Kubernetes API used to create the Service.
- `kind`: Specifies this resource is a Service.
- `metadata.name`: Name of the Service.
- `spec.selector`: Identifies pods to route traffic to, based on labels.
- `ports.protocol`: The protocol used (TCP).
- `ports.port`: Port exposed by the Service.
- `ports.targetPort`: The port on the container to which the traffic is directed.

Apply it using:
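For example, assuming the manifest is saved as `web-canary-service.yaml`:

```bash
kubectl apply -f web-canary-service.yaml
```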
This allows traffic to be routed to the canary deployment.
Use `example.com` as the domain to access the service, and create the gateway to expose the service with this domain:
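A sketch of the Gateway, together with an HTTPRoute that binds the domain to the stable service (the Rollout's Gateway API plugin will later adjust this route). The resource names, GatewayClass, and ports here are assumptions:

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-gateway
spec:
  gatewayClassName: exclusive-gateway   # replace with the GatewayClass provided by your ALB
  listeners:
    - name: http
      protocol: HTTP
      port: 80
      hostname: example.com
      allowedRoutes:
        namespaces:
          from: All
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: web-route                       # referenced by the Rollout's gatewayAPI plugin config below
spec:
  parentRefs:
    - name: example-gateway
  hostnames:
    - example.com
  rules:
    - backendRefs:
        - name: web-stable              # all traffic goes to the stable service before a rollout starts
          port: 80
```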
Use the command:
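For example, assuming the manifests are saved as `gateway.yaml`:

```bash
kubectl apply -f gateway.yaml
```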
The gateway will be allocated an external IP address. Get the IP address from the `status.addresses` entry of type `IPAddress` in the Gateway resource.
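For example (gateway name assumed):

```bash
kubectl get gateway example-gateway \
  -o jsonpath='{.status.addresses[?(@.type=="IPAddress")].value}'
```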
Configure the domain in your DNS server to resolve to the IP address of the gateway, then verify the DNS resolution with the command:
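For example, with `dig`:

```bash
dig +short example.com
```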
It should return the address of the gateway.
Outside the cluster, use the following command to access the service via the domain:
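For example, with `curl`:

```bash
curl http://example.com
```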
Or you can access http://example.com in the browser.
Next, create the `Rollout` resource from Argo Rollouts with the `Canary` strategy.
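A sketch of such a Rollout; the Rollout name and replica count are assumptions, and the service and route names match the assumed names used above:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: web-rollout
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web                    # must match the Deployment's pod template labels
  workloadRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web                     # migrate the existing Deployment to this Rollout
    scaleDown: onsuccess          # scale the Deployment down once the Rollout's pods are healthy
  strategy:
    canary:
      stableService: web-stable   # updated by the controller to select stable pods
      canaryService: web-canary   # updated by the controller to select canary pods
      steps:
        - setWeight: 50           # send 50% of the traffic to the canary
        - pause: {}               # pause indefinitely until promoted
      trafficRouting:
        plugins:
          argoproj-labs/gatewayAPI:
            httpRoute: web-route  # HTTPRoute whose backend weights the plugin manages
            namespace: default
```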
Explanation of YAML fields:
- `spec.selector`: Label selector for pods. Existing ReplicaSets whose pods are selected by this will be the ones affected by this Rollout. It must match the pod template's labels.
- `workloadRef`: Specifies the workload reference and scale-down strategy to apply the Rollout to.
- `scaleDown`: Specifies if the workload (Deployment) is scaled down after migrating to the Rollout. The possible options are `never`, `onsuccess`, and `progressively`.
- `strategy`: The rollout strategy; supports the `BlueGreen` and `Canary` strategies.
- `canary`: The `Canary` rollout strategy definition.
- `canaryService`: Reference to a Service which the controller will update to select canary pods. Required for traffic routing.
- `stableService`: Reference to a Service which the controller will update to select stable pods. Required for traffic routing.
- `steps`: Defines the sequence of steps to take during an update of the canary. Skipped upon the initial deploy of a Rollout.
- `setWeight`: Sets the traffic ratio of the canary ReplicaSet.
- `pause`: Pauses the rollout indefinitely or for a time. Supported units: s, m, h. `{}` means indefinitely.
- `plugin`: Executes the configured traffic-routing plugin; here it is configured with the `gatewayAPI` plugin.

Apply it with:
This sets up the Rollout for the Deployment with the `Canary` strategy. During an update it will first set the weight to 50 and wait to be promoted, so 50% of the traffic is forwarded to the canary service. After the Rollout is promoted, the weight is set to 100 and all traffic is forwarded to the canary service. Finally, the canary version becomes the new stable version.
After the `Rollout` is created, Argo Rollouts creates a new ReplicaSet with the same pod template as the Deployment. Once the pods of the new ReplicaSet are healthy, the Deployment is scaled down to 0.
Use the following command to ensure the pods are running properly:
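For example:

```bash
kubectl get pods -l app=web
```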
Next, prepare the new version of the application as the canary version. Update the Deployment `web` with the new image version (e.g., `hello:1.23.2`). Use the command:
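One way to do this, assuming the container in the Deployment is named `web`:

```bash
kubectl set image deployment/web web=hello:1.23.2
```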
This sets up the new application version for testing.
The Rollout will create a new ReplicaSet to manage the canary pods, and 50% of the traffic will be forwarded to the canary pods. Use the following command to verify:
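For example, with the Argo Rollouts kubectl plugin (Rollout name assumed):

```bash
kubectl argo rollouts get rollout web-rollout
```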
At this point, there are 3 pods running, covering both the stable and canary versions. The weight is 50, so 50% of the traffic is forwarded to the canary service, and the rollout process is paused, waiting to be promoted.
If you deployed the application with a Helm chart, use Helm to upgrade the application to the canary version.
When accessing http://example.com, 50% of the traffic is forwarded to the canary service, so you should see differing responses from the URL.
When the canary version has been tested successfully, you can promote the rollout to switch all traffic to the canary pods. Use the following command:
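Assuming the Rollout is named `web-rollout`:

```bash
kubectl argo rollouts promote web-rollout
```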
To verify that the rollout is complete:
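For example:

```bash
kubectl argo rollouts get rollout web-rollout
```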
If the stable image is updated to `hello:1.23.2` and the ReplicaSet of revision 1 is scaled down to 0, the rollout is complete.
When accessing http://example.com, 100% of the traffic is forwarded to the new (canary) version.
If you find that the canary version has problems during the rollout process, you can abort the rollout to switch all traffic back to the stable service. Use the command:
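Assuming the Rollout is named `web-rollout`:

```bash
kubectl argo rollouts abort web-rollout
```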
To verify the results:
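For example, check the Rollout status and confirm that the canary ReplicaSet has been scaled down:

```bash
kubectl argo rollouts get rollout web-rollout
kubectl get rs -l app=web
```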
When accessing http://example.com, 100% of the traffic is forwarded back to the stable service.