Pods

Understanding Pods

Refer to the official Kubernetes documentation: Pod

A Pod is the smallest deployable unit of computing that you can create and manage in Kubernetes. A Pod (as in a pod of whales or a pea pod) is a group of one or more containers (such as Docker containers), with shared storage and network resources, and a specification for how to run the containers. Pods are the fundamental building blocks on which all higher-level controllers (like Deployments, StatefulSets, DaemonSets) are built.

YAML file example

pod-example.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-nginx-pod
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx:latest # The container image to use.
      ports:
        - containerPort: 80 # Container ports exposed.
      resources: # Defines CPU and memory requests and limits for the container.
        requests:
          cpu: '100m'
          memory: '128Mi'
        limits:
          cpu: '200m'
          memory: '256Mi'
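
Because the containers in a Pod share the same network namespace and can mount shared volumes, a Pod may also run several cooperating containers. The following is a minimal sketch of such a Pod (the Pod name, sidecar image, and mount paths are illustrative only):

multi-container-pod-example.yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-sidecar-pod
  labels:
    app: nginx
spec:
  volumes:
    - name: shared-logs # emptyDir volume shared by both containers.
      emptyDir: {}
  containers:
    - name: nginx
      image: nginx:latest
      ports:
        - containerPort: 80
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx # nginx writes its log files here.
    - name: log-reader # Sidecar container that reads the same volume.
      image: busybox:latest
      command: ['sh', '-c', 'tail -F /logs/access.log']
      volumeMounts:
        - name: shared-logs
          mountPath: /logs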

Managing a Pod by using the CLI

While Pods are often managed by higher-level controllers, direct kubectl operations on Pods are useful for troubleshooting, inspection, and ad-hoc tasks.
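
For ad-hoc tasks, the example manifest shown earlier can be applied directly with kubectl (this assumes it is saved locally as pod-example.yaml):

kubectl apply -f pod-example.yaml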

Viewing a Pod

  • To list all Pods in the current namespace:

    kubectl get pods
  • To list all Pods across all namespaces:

    kubectl get pods --all-namespaces
    # Or a shorter version:
    kubectl get pods -A
  • To get detailed information about a specific Pod:

    kubectl describe pod <pod-name> -n <namespace>
    
    # Example
    kubectl describe pod my-nginx-pod -n default
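  • To list Pods matching a label selector (for example, the app=nginx label from the manifest above; adjust it to match your own Pods):

    kubectl get pods -l app=nginx -n <namespace>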

Viewing Pod Logs

  • To view the logs of a container within a Pod (useful for debugging):

    kubectl logs <pod-name> -n <namespace>
  • If a Pod has multiple containers, you must specify the container name (see the example after this list):

    kubectl logs <pod-name> -c <container-name> -n <namespace>
  • To follow the logs (stream new logs as they appear):

    kubectl logs -f <pod-name> -n <namespace>
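
For example, with the hypothetical multi-container Pod my-sidecar-pod sketched earlier, the sidecar container's logs can be followed like this:

kubectl logs -f my-sidecar-pod -c log-reader -n default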

Executing Commands in a Pod

To execute a command inside a specific container within a Pod (useful for debugging, like accessing a shell):

kubectl exec -it <pod-name> -n <namespace> -- <command>

# Example (to get a shell):
kubectl exec -it my-nginx-pod -n default -- /bin/bash
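
If the image does not include bash (as is common for minimal images such as busybox or alpine), /bin/sh is the usual fallback; a single command can also be run without opening an interactive shell:

# Fallback shell (assuming the image ships /bin/sh):
kubectl exec -it my-nginx-pod -n default -- /bin/sh

# Run a one-off command non-interactively:
kubectl exec my-nginx-pod -n default -- nginx -v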

Port Forwarding to a Pod

To forward a local port to a port on a Pod, allowing direct access to a service running inside the Pod from your local machine (useful for testing or direct access without exposing the service externally):

kubectl port-forward <pod-name> <local-port>:<pod-port> -n <namespace>

# Example
kubectl port-forward my-nginx-pod 8080:80 -n default

After running this command, you can access the Nginx web server running in my-nginx-pod by visiting localhost:8080 in your web browser.
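
A quick way to verify the forwarding is to query the local port from another terminal while the port-forward command above is still running:

curl http://localhost:8080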

Deleting a Pod

  • To delete a specific Pod:

    kubectl delete pod <pod-name> -n <namespace>
    
    # Example
    kubectl delete pod my-nginx-pod -n default
  • To delete multiple Pods by their names:

    kubectl delete pod <pod-name-1> <pod-name-2> -n <namespace>
  • To delete Pods based on a label selector (e.g., delete all Pods with the label app=nginx):

    kubectl delete pods -l app=nginx -n <namespace>

Managing a Pod by using the web console

Viewing a Pod

The platform interface provides a range of information about Pods for quick reference.

Procedure

  1. Go to Container Platform and navigate to Workloads > Pods in the left sidebar.

  2. Locate the Pod you wish to view.

  3. Click the Pod name to view its Details, YAML, Configuration, Logs, Events, Monitoring, and other tabs.

Pod Parameters

Below are some parameter explanations:

Resource Requests & Limits
  Resource Requests and Limits define the CPU and memory consumption boundaries for the containers within a Pod, which aggregate to form the Pod's overall resource profile. These values are used by the Kubernetes scheduler to place Pods on Nodes efficiently and by the kubelet to enforce resource governance.
  • Requests: The minimum guaranteed CPU/memory required for a container to be scheduled and run. This value is used by the Kubernetes scheduler to decide which Node a Pod can run on.
  • Limits: The maximum CPU/memory a container is allowed to consume during its execution. Exceeding the CPU limit results in throttling, while exceeding the memory limit leads to the container being terminated (Out Of Memory, OOM killed).
  For detailed unit definitions (e.g., m for milliCPU, Mi for mebibytes), refer to Resource Units.

  Pod-Level Resource Calculation Logic
  The effective CPU and memory Requests and Limits of a Pod are derived from its individual container specifications. The calculation method for Requests and Limits is the same; the logic below uses Limit values as an example.

  When a Pod contains only standard containers (business containers), the Pod's effective CPU/Memory Limit is the sum of the CPU/Memory Limits of all containers in the Pod.
  Example: If a Pod includes two containers with CPU/Memory Limits of 100m/100Mi and 50m/200Mi respectively, the Pod's aggregated CPU/Memory Limit is 150m/300Mi.

  When a Pod contains both initContainers and standard containers, the Pod's CPU/Memory Limits are calculated as follows:
  1. Determine the maximum CPU/Memory Limit among all initContainers.
  2. Calculate the sum of the CPU/Memory Limits of all standard containers.
  3. Take, separately for CPU and for memory, the maximum of the results from step 1 and step 2.
  Calculation Example: If a Pod contains two initContainers with CPU/Memory Limits of 100m/200Mi and 200m/100Mi, the maximum effective Limit across the initContainers is 200m/200Mi. If the same Pod also contains two standard containers with CPU/Memory Limits of 100m/100Mi and 50m/200Mi, the aggregated Limit of the standard containers is 150m/300Mi. The Pod's comprehensive Limit is therefore max(200m, 150m) for CPU and max(200Mi, 300Mi) for memory, i.e., 200m/300Mi. A YAML sketch of this example follows the parameter list below.

Source
  The Kubernetes workload controller that manages this Pod's life cycle, such as a Deployment, StatefulSet, DaemonSet, or Job.

Restart
  The number of times the containers within the Pod have restarted since the Pod started. A high restart count often indicates an issue with the application or its environment.

Node
  The name of the Kubernetes Node where the Pod is currently scheduled and running.

Service Account
  A Service Account is a Kubernetes object that provides an identity for processes running inside a Pod, allowing them to authenticate to the Kubernetes API server. This field is typically visible only when the currently logged-in user has the platform administrator or platform auditor role, which enables viewing the Service Account's YAML definition.
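
To make the calculation above concrete, here is a minimal sketch of a Pod matching the numbers in the example (container names, images, and commands are illustrative only):

apiVersion: v1
kind: Pod
metadata:
  name: limit-calculation-example
spec:
  initContainers:
    - name: init-1
      image: busybox:latest
      command: ['sh', '-c', 'echo init-1 done']
      resources:
        limits:
          cpu: '100m'
          memory: '200Mi'
    - name: init-2
      image: busybox:latest
      command: ['sh', '-c', 'echo init-2 done']
      resources:
        limits:
          cpu: '200m'
          memory: '100Mi'
  containers:
    - name: app-1
      image: nginx:latest
      resources:
        limits:
          cpu: '100m'
          memory: '100Mi'
    - name: app-2
      image: busybox:latest
      command: ['sh', '-c', 'sleep 3600']
      resources:
        limits:
          cpu: '50m'
          memory: '200Mi'
# Effective Pod-level Limits:
#   CPU:    max(max(100m, 200m), 100m + 50m)      = 200m
#   Memory: max(max(200Mi, 100Mi), 100Mi + 200Mi) = 300Mi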

Deleting a Pod

Deleting Pods may affect the operation of computing components; please proceed with caution.

Use Cases

  • Restore a Pod to its desired state promptly: If a Pod remains in a state that affects business operations, such as Pending or CrashLoopBackOff, manually deleting it after addressing the reported error can help it quickly return to its desired state, such as Running. In this case, the deleted Pod is rebuilt on the current node or rescheduled to another node.

  • Resource cleanup for operations management: Some Pods reach a stage where they no longer change, and such Pods often accumulate in large numbers, complicating the management of other Pods. Pods to be cleaned up may include those in the Evicted status due to insufficient node resources, or those in the Completed status left behind by recurring scheduled tasks. In this case, the deleted Pods are not recreated.

    Note: For scheduled tasks, if you need to check the logs of each task execution, it is not recommended to delete the corresponding Pods in the Completed status.

Procedure

  1. Go to Container Platform.

  2. In the left navigation bar, click Workloads > Pods.

  3. (Delete individually) Click ⋮ on the right side of the Pod to be deleted, select Delete, and confirm.

  4. (Delete in bulk) Select the Pods to be deleted, click Delete above the list, and confirm.