Deployments


Understanding Deployments

Refer to the official Kubernetes documentation: Deployments

A Deployment is a higher-level Kubernetes workload resource used to declaratively manage and update the Pod replicas of your applications. It provides a robust and flexible way to define how your application should run, including how many replicas to maintain and how to perform rolling updates safely.

A Deployment is an object in the Kubernetes API that manages Pods and ReplicaSets. When you create a Deployment, Kubernetes automatically creates a ReplicaSet, which is then responsible for maintaining the specified number of Pod replicas.

Deployments provide the following capabilities:

  • Declarative Management: Define the desired state of your application, and Kubernetes automatically ensures the cluster's actual state matches the desired state.
  • Version Control and Rollback: Track each revision of a Deployment and easily roll back to a previous stable version if issues arise.
  • Zero-Downtime Updates: Gradually update your application using a rolling update strategy without service interruption.
  • Self-Healing: Deployments automatically replace Pod instances if they crash, are terminated, or are removed from a node, ensuring the specified number of Pods are always available.

How it works:

  1. You define the desired state of your application through a Deployment (e.g., which image to use, how many replicas to run).
  2. The Deployment creates a ReplicaSet to ensure the specified number of Pods are running.
  3. The ReplicaSet creates and manages the actual Pod instances.
  4. When you update a Deployment (e.g., change the image version), the Deployment creates a new ReplicaSet and gradually replaces the old Pods with new ones according to the predefined rolling update strategy until all new Pods are running, then it removes the old ReplicaSet.
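
Assuming the example nginx-deployment defined later in this page, this chain can be observed directly with kubectl (a minimal sketch):

# Deployment -> ReplicaSet -> Pods
kubectl get deployment nginx-deployment    # the desired state you declared
kubectl get replicaset -l app=nginx        # one ReplicaSet per Deployment revision
kubectl get pods -l app=nginx              # Pods managed by the current ReplicaSet

# After an image update, the old ReplicaSet is kept (scaled down to 0) so you can roll back
kubectl rollout history deployment/nginx-deployment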

Creating Deployments

Creating a Deployment by using CLI

Prerequisites

  • Ensure you have kubectl configured and connected to your cluster.

YAML file example

# example-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment # Name of the Deployment
  labels:
    app: nginx # Labels for identification and selection
spec:
  replicas: 3 # Desired number of Pod replicas
  selector:
    matchLabels:
      app: nginx # Selector to match Pods managed by this Deployment
  template:
    metadata:
      labels:
        app: nginx # Pod's labels, must match selector.matchLabels
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2 # Container image
          ports:
            - containerPort: 80 # Container exposed port
          resources: # Resource limits and requests
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 200m
              memory: 256Mi

Creating a Deployment via YAML

# Step 1: Create Deployment via yaml
kubectl apply -f example-deployment.yaml

# Step 2: Check the Deployment status
kubectl get deployment nginx-deployment # View Deployment
kubectl get pod -l app=nginx # View Pods created by this Deployment

Creating a Deployment by using web console

Prerequisites

Obtain the image address. Images can come from an image repository integrated by the platform administrator through the toolchain, or from a third-party platform's image repository.

  • For the former, the Administrator typically assigns the image repository to your project, and you can use the images within it. If the required image repository is not found, please contact the Administrator for allocation.

  • For the latter, ensure that the current cluster can pull images directly from that repository.

Procedure - Configure Basic Info

  1. In Container Platform, navigate to Workloads > Deployments in the left sidebar.

  2. Click on Create Deployment.

  3. Select or Input an image, and click Confirm.

INFO

Note: When using images from an image repository integrated with the web console, you can filter images by Already Integrated. The integration project name, for example images (docker-registry-projectname), includes the project name in this web console (projectname) and the project name in the image repository (containers).

  4. In the Basic Info section, configure declarative parameters for Deployment workloads:

    Replicas: Defines the desired number of Pod replicas in the Deployment (default: 1). Adjust based on workload requirements.

    More > Update Strategy: Configures the rollingUpdate strategy for zero-downtime deployments.
    Max surge (maxSurge):
    • Maximum number of Pods that can exceed the desired replica count during an update.
    • Accepts absolute values (e.g., 2) or percentages (e.g., 20%).
    • Percentage calculation: ceil(current_replicas × percentage).
    • Example: a computed value of 4.1 (e.g., 41% of 10 replicas) rounds up to 5.
    Max unavailable (maxUnavailable):
    • Maximum number of Pods that can be temporarily unavailable during an update.
    • Percentage values cannot exceed 100%.
    • Percentage calculation: floor(current_replicas × percentage).
    • Example: a computed value of 4.9 (e.g., 49% of 10 replicas) rounds down to 4.
    Notes:
    1. Default values: maxSurge=1, maxUnavailable=1 if not explicitly set.
    2. Non-running Pods (e.g., in Pending/CrashLoopBackOff states) are considered unavailable.
    3. Simultaneous constraints:
    • maxSurge and maxUnavailable cannot both be 0 or 0%.
    • If percentage values resolve to 0 for both parameters, Kubernetes forces maxUnavailable=1 to ensure update progress.
    Example:
    For a Deployment with 10 replicas:
    • maxSurge=2 → up to 10 + 2 = 12 Pods exist during the update.
    • maxUnavailable=3 → at least 10 - 3 = 7 Pods remain available.
    • This ensures availability while allowing a controlled rollout.
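
These console settings map to the Deployment's .spec.strategy field. A minimal YAML sketch for the 10-replica example above (values are illustrative):

spec:
  replicas: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2          # at most 10 + 2 = 12 Pods exist during the update
      maxUnavailable: 3    # at least 10 - 3 = 7 Pods remain available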

Procedure - Configure Pod

Note: In mixed-architecture clusters, when deploying single-architecture images, ensure proper Node Affinity Rules are configured so Pods are scheduled onto compatible nodes.

  1. In the Pod section, configure container runtime parameters and lifecycle management:

    Volumes: Mount persistent volumes into containers. Supported volume types include PVC, ConfigMap, Secret, emptyDir, hostPath, and so on. For implementation details, see Volume Mounting Guide.
    Pull Secret: Required only when pulling images from third-party registries (via manual image URL input).
    Note: the Secret provides authentication when pulling images from a secured registry.
    Close Grace Period: Duration (default: 30s) allowed for a Pod to complete graceful shutdown after receiving the termination signal.
    - During this period, the Pod completes in-flight requests and releases resources.
    - Setting 0 forces immediate deletion (SIGKILL), which may cause request interruptions.
    (A Pod template sketch covering these and the following Pod-level settings appears after the Network Configuration step.)
  2. Node Affinity Rules

    More > Node Selector: Constrain Pods to nodes with specific labels (e.g., kubernetes.io/os: linux).
    Node OS Selector
    More > Affinity: Define fine-grained scheduling rules based on Pods already running on nodes.
    Affinity Types:
    • Pod Affinity: Schedule new Pods to nodes hosting specific Pods (same topology domain).
    • Pod Anti-affinity: Prevent co-location of new Pods with specific Pods.
    Enforcement Modes:
    • requiredDuringSchedulingIgnoredDuringExecution: Pods are scheduled only if rules are satisfied.
    • preferredDuringSchedulingIgnoredDuringExecution: Prefer nodes that meet the rules, but allow exceptions.
    Configuration Fields:
    • topologyKey: Node label defining topology domains (default: kubernetes.io/hostname).
    • labelSelector: Filters target Pods using label queries.
  3. Network Configuration

    • Kube-OVN

      Bandwidth Limits: Enforce QoS for Pod network traffic:
      • Egress rate limit: Maximum outbound traffic rate (e.g., 10Mbps).
      • Ingress rate limit: Maximum inbound traffic rate.
      Subnet: Assign IPs from a predefined subnet pool. If unspecified, the namespace's default subnet is used.
      Static IP Address: Bind persistent IP addresses to Pods:
      • Multiple Pods across Deployments can claim the same IP, but only one Pod can use it concurrently.
      • Critical: Number of static IPs must be ≥ Pod replica count.

    • Calico

      Static IP Address: Assign fixed IPs with strict uniqueness:
      • Each IP can be bound to only one Pod in the cluster.
      • Critical: Static IP count must be ≥ Pod replica count.
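
Outside the console form, these Pod-level settings correspond to fields of the Pod template. A minimal sketch assuming the nginx example (subnet and static IP assignment are handled by the platform's CNI configuration and are omitted here):

spec:
  template:
    spec:
      terminationGracePeriodSeconds: 30        # Close Grace Period
      nodeSelector:
        kubernetes.io/os: linux                # Node Selector
      affinity:
        podAntiAffinity:                       # spread replicas across nodes
          requiredDuringSchedulingIgnoredDuringExecution:
            - topologyKey: kubernetes.io/hostname
              labelSelector:
                matchLabels:
                  app: nginx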

Procedure - Configure Containers

  1. In the Container section, refer to the following instructions to configure the relevant information.

    Resource Requests & Limits:
    • Requests: Minimum CPU/memory required for container operation.
    • Limits: Maximum CPU/memory allowed during container execution. For unit definitions, see Resource Units.
    Namespace overcommit ratio:
    • Without overcommit ratio:
      If namespace resource quotas exist: container requests/limits inherit namespace defaults (modifiable).
      No namespace quotas: no defaults; set requests manually.
    • With overcommit ratio:
      Requests are auto-calculated as Limits / Overcommit ratio (immutable).
    Constraints:
    • Request ≤ Limit ≤ Namespace quota maximum.
    • Overcommit ratio changes require Pod recreation to take effect.
    • Overcommit ratio disables manual request configuration.
    • No namespace quotas → no container resource constraints.

    Extended Resources: Configure cluster-available extended resources (e.g., vGPU, pGPU).

    Volume Mounts: Persistent storage configuration. See Storage Volume Mounting Instructions.
    Operations:
    • Existing pod volumes: Click Add.
    • No pod volumes: Click Add & Mount.
    Parameters:
    • mountPath: Container filesystem path (e.g., /data).
    • subPath: Relative file/directory path within the volume.
      For ConfigMap/Secret: select a specific key.
    • readOnly: Mount as read-only (default: read-write).
    See Kubernetes Volumes.

    Ports: Expose container ports.
    Example: Expose TCP port 6379 with name redis.
    Fields:
    • protocol: TCP/UDP
    • port: Exposed port (e.g., 6379)
    • name: DNS-compliant identifier (e.g., redis)

    Startup Commands & Arguments: Override the default ENTRYPOINT/CMD:
    Example 1: Execute top -b
    - Command: ["top", "-b"]
    - OR Command: ["top"], Args: ["-b"]
    Example 2: Output $MESSAGE:
    /bin/sh -c "while true; do echo $(MESSAGE); sleep 10; done"
    See Defining Commands.

    More > Environment Variables:
    • Static values: direct key-value pairs.
    • Dynamic values: reference ConfigMap/Secret keys, Pod fields (fieldRef), resource metrics (resourceFieldRef).
    Note: Environment variables override image/configuration file settings.

    More > Referenced ConfigMaps: Inject an entire ConfigMap/Secret as environment variables. Supported Secret types: Opaque, kubernetes.io/basic-auth.

    More > Health Checks:
    • Liveness Probe: Detect container health (restart if failing).
    • Readiness Probe: Detect service availability (remove from endpoints if failing).
    See Health Check Parameters.

    More > Log Files: Configure log paths:
    - Default: collect stdout.
    - File patterns: e.g., /var/log/*.log.
    Requirements:
    • Storage driver overlay2: supported by default.
    • devicemapper: manually mount an EmptyDir to the log directory.
    • Windows nodes: ensure the parent directory is mounted (e.g., c:/a for c:/a/b/c/*.log).

    More > Exclude Log Files: Exclude specific logs from collection (e.g., /var/log/aaa.log).

    More > Execute before Stopping: Execute commands before container termination.
    Example: echo "stop"
    Note: Command execution time must be shorter than the Pod's terminationGracePeriodSeconds.

    (A consolidated container spec sketch illustrating several of these fields appears after this procedure.)
  2. Click Add Container (upper right) OR Add Init Container.

    See Init Containers. Init Containers:

    1. Start before app containers (sequential execution).
    2. Release resources after completion.
    3. Deletion allowed when:
      • Pod has >1 app container AND ≥1 init container.
      • Not allowed for single-app-container pods.
  3. Click Create.
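
As referenced above, a minimal container spec sketch combining several fields from this procedure (the ConfigMap name nginx-config and the probe paths are illustrative assumptions):

containers:
  - name: nginx
    image: nginx:1.14.2
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 200m
        memory: 256Mi
    ports:
      - name: http
        containerPort: 80
        protocol: TCP
    env:
      - name: MESSAGE
        value: "hello"                    # static environment variable
    envFrom:
      - configMapRef:
          name: nginx-config              # hypothetical ConfigMap injected as env variables
    livenessProbe:
      httpGet:
        path: /                           # illustrative endpoint
        port: 80
      initialDelaySeconds: 10
    readinessProbe:
      httpGet:
        path: /
        port: 80
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo stop"]   # Execute before Stopping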

Reference Information

Storage Volume Mounting Instructions

Persistent Volume Claim: Binds an existing PVC to request persistent storage.

Note: Only bound PVCs (with associated PV) are selectable. Unbound PVCs will cause pod creation failures.

ConfigMap: Mounts full/partial ConfigMap data as files:
  • Full ConfigMap: Creates files named after keys under mount path
  • Subpath selection: Mount specific key (e.g., my.cnf)

Secret: Mounts full/partial Secret data as files:
  • Full Secret: Creates files named after keys under mount path
  • Subpath selection: Mount specific key (e.g., tls.crt)

Ephemeral Volumes: Cluster-provisioned temporary volume with features:
  • Dynamic provisioning
  • Lifecycle tied to pod
  • Supports declarative configuration

Use Case: Temporary data storage. See Ephemeral Volumes.

Empty Directory: Ephemeral storage sharing between containers in the same pod:
  - Created on node when pod starts
  - Deleted with pod removal

Use Case: Inter-container file sharing, temporary data storage. See EmptyDir.

Host Path: Mounts host machine directory (must start with /, e.g., /volumepath).
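
A minimal sketch of how these volume types appear in a Deployment's Pod template (the ConfigMap name nginx-config is an illustrative assumption):

spec:
  template:
    spec:
      volumes:
        - name: config
          configMap:
            name: nginx-config        # hypothetical ConfigMap
        - name: cache
          emptyDir: {}                # Empty Directory
      containers:
        - name: nginx
          image: nginx:1.14.2
          volumeMounts:
            - name: config
              mountPath: /etc/nginx/conf.d
              readOnly: true
            - name: cache
              mountPath: /cache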

Health Checks

Managing Deployments

Managing a Deployment by using CLI

Viewing a Deployment

  • Check that the Deployment was created:
kubectl get deployments
  • Get details of your Deployment:
kubectl describe deployments

Updating a Deployment

Follow the steps given below to update your Deployment:

  1. Let's update the nginx Pods to use the nginx:1.16.1 image.
kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.16.1

or use the following command:

kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1

Alternatively, you can edit the Deployment and change .spec.template.spec.containers[0].image from nginx:1.14.2 to nginx:1.16.1:

kubectl edit deployment/nginx-deployment
  2. To see the rollout status, run:
kubectl rollout status deployment/nginx-deployment
  • Run kubectl get rs to see that the Deployment updated the Pods by creating a new ReplicaSet and scaling it up to 3 replicas, as well as scaling down the old ReplicaSet to 0 replicas.
kubectl get rs
  • Running get pods should now show only the new Pods:
kubectl get pods

Scaling a Deployment

You can scale a Deployment by using the following command:

kubectl scale deployment/nginx-deployment --replicas=10
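
To verify the result (assuming the nginx-deployment example):

kubectl get deployment nginx-deployment   # READY should report 10/10 once scaling completes
kubectl get pods -l app=nginx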

Rolling Back a Deployment

  • Suppose that you made a typo while updating the Deployment, by putting the image name as nginx:1.161 instead of nginx:1.16.1:
kubectl set image deployment/nginx-deployment nginx=nginx:1.161
  • The rollout gets stuck. You can verify it by checking the rollout status:
kubectl rollout status deployment/nginx-deployment
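
To recover, roll the Deployment back to a previous revision:

# Press Ctrl-C to stop the rollout status watch, then inspect the revision history
kubectl rollout history deployment/nginx-deployment

# Roll back to the previous revision (or use --to-revision=<n> for a specific one)
kubectl rollout undo deployment/nginx-deployment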

Deleting a Deployment

Deleting a Deployment will also delete its managed ReplicaSets and all associated Pods.

kubectl delete deployment <deployment-name>

Managing a Deployment by using web console

Viewing a Deployment

You can view a Deployment to get information about your application.

  1. In Container Platform, navigate to Workloads > Deployments.
  2. Locate the Deployment you wish to view.
  3. Click the deployment name to see the Details, Topology, Logs, Events, Monitoring, etc.

Updating a Deployment

  1. In Container Platform, navigate to Workloads > Deployments.
  2. Locate the Deployment you wish to update.
  3. In the Actions drop-down menu, select Update to view the Edit Deployment page.

Deleting a Deployment

  1. In Container Platform, navigate to Workloads > Deployments.
  2. Locate the Deployment you wish to delete.
  3. In the Actions drop-down menu, click Delete and confirm.

Troubleshooting by using CLI

When a Deployment encounters issues, here are some common troubleshooting methods.

Check Deployment status

kubectl get deployment nginx-deployment
kubectl describe deployment nginx-deployment # View detailed events and status

Check ReplicaSet status

kubectl get rs -l app=nginx
kubectl describe rs <replicaset-name>

Check Pod status

kubectl get pods -l app=nginx
kubectl describe pod <pod-name>

View Logs

kubectl logs <pod-name> -c <container-name> # View logs for a specific container
kubectl logs <pod-name> --previous         # View logs for the previously terminated container

Enter Pod for debugging

kubectl exec -it <pod-name> -- /bin/bash # Enter the container shell

Check Health configuration

Ensure livenessProbe and readinessProbe are correctly configured and that your application's health check endpoints respond properly. See Troubleshooting probe failures.
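
For reference, a minimal probe configuration sketch (the /healthz path is an illustrative assumption; use your application's actual endpoints):

livenessProbe:
  httpGet:
    path: /healthz          # hypothetical health endpoint
    port: 80
  initialDelaySeconds: 15
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /healthz
    port: 80
  failureThreshold: 3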

Check Resource Limits

Ensure container resource requests and limits are reasonable and that containers are not being killed due to insufficient resources.
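
A few ways to confirm whether resources are the cause (kubectl top requires metrics-server to be installed in the cluster):

# Look for OOMKilled or repeated restarts in the container's last state
kubectl describe pod <pod-name> | grep -i -A 3 "last state"

# Compare actual usage against requests and limits
kubectl top pod -l app=nginx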