Refer to the official Kubernetes documentation: Pod
A Pod is the smallest deployable unit of computing that you can create and manage in Kubernetes. A Pod (as in a pod of whales or a pea pod) is a group of one or more containers (such as Docker containers), with shared storage and network resources, and a specification for how to run the containers. Pods are the fundamental building blocks on which all higher-level controllers (like Deployments, StatefulSets, DaemonSets) are built.
While Pods are often managed by higher-level controllers, direct kubectl operations on Pods are useful for troubleshooting, inspection, and ad-hoc tasks.
To list all Pods in the current namespace:
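For example:

```shell
kubectl get pods
```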
To list all Pods across all namespaces:
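For example, using either the long or short form of the flag:

```shell
kubectl get pods --all-namespaces
# or the short form:
kubectl get pods -A
```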
To get detailed information about a specific Pod:
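For example, with `my-nginx-pod` as the Pod name:

```shell
kubectl describe pod my-nginx-pod
```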
To stream logs from a container within a Pod (useful for debugging):
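For example:

```shell
kubectl logs my-nginx-pod
```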
If a Pod has multiple containers, you must specify the container name:
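For example, assuming a container named `nginx` inside `my-nginx-pod`:

```shell
kubectl logs my-nginx-pod -c nginx
```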
To follow the logs (stream new logs as they appear):
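For example:

```shell
kubectl logs -f my-nginx-pod
```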
To execute a command inside a specific container within a Pod (useful for debugging, like accessing a shell):
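For example, assuming the container image ships a `/bin/sh` shell:

```shell
kubectl exec -it my-nginx-pod -- /bin/sh
# For a multi-container Pod, add: -c <container-name>
```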
To forward a local port to a port on a Pod, allowing direct access to a service running inside the Pod from your local machine (useful for testing or direct access without exposing the service externally):
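For example, forwarding local port 8080 to container port 80 (the assumed Nginx listen port) of `my-nginx-pod`:

```shell
kubectl port-forward my-nginx-pod 8080:80
```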
After running this command, you can access the Nginx web server running in my-nginx-pod by visiting localhost:8080 in your web browser.
To delete a specific Pod:
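For example:

```shell
kubectl delete pod my-nginx-pod
```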
To delete multiple Pods by their names:
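For example, with `my-nginx-pod` and `my-redis-pod` as placeholder names:

```shell
kubectl delete pods my-nginx-pod my-redis-pod
```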
To delete Pods based on a label selector (e.g., delete all Pods with the label app=nginx):
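For example:

```shell
kubectl delete pods -l app=nginx
```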
The platform interface provides a range of information about Pods for quick reference.
In the Container Platform, navigate to Workloads > Pods in the left sidebar.
Locate the Pod you wish to view.
Click the Pod name to see the Details, YAML, Configuration, Logs, Events, Monitoring, and other tabs.
Below are some parameter explanations:
Parameter | Description |
---|---|
Resource Requests & Limits | Resource Requests and Limits define the CPU and memory consumption boundaries for Containers within a Pod, which then aggregate to form the Pod's overall resource profile. These values are crucial for the Kubernetes scheduler to efficiently place Pods on Nodes and for the kubelet to enforce resource governance. For the resource units (e.g., m for milliCPU, Mi for mebibytes), refer to Resource Units. Pod-level resource calculation logic: the Pod's effective CPU and memory Requests and Limits are derived from its individual container specifications; the calculation for Requests and Limits is analogous, so Limits are used as the example here. When a Pod contains only standard containers (business containers), the Pod's effective CPU/Memory Limit is the sum of the CPU/Memory Limits of all containers in the Pod. For example, if a Pod includes two containers with CPU/Memory Limits of 100m/100Mi and 50m/200Mi respectively, the Pod's aggregated CPU/Memory Limit is 150m/300Mi. When a Pod contains both initContainers and standard containers, the Pod's effective CPU/Memory Limit is the larger of the highest Limit among the initContainers and the sum of the Limits of all standard containers. |
Source | The Kubernetes workload controller that manages this Pod's life cycle, such as a Deployment, StatefulSet, DaemonSet, or Job. |
Restart | The number of times the Container within the Pod has restarted since the Pod was started. A high restart count often indicates an issue with the application or its environment. |
Node | The name of the Kubernetes Node where the Pod is currently scheduled and running. |
Service Account | A Service Account is a Kubernetes object that provides an identity for processes and services running inside a Pod, allowing them to authenticate and access the Kubernetes APIServer. This field is typically visible only when the currently logged-in user has the platform administrator role or the platform auditor role, enabling the viewing of the Service Account's YAML definition. |
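To illustrate the aggregation logic described above, the following is a minimal sketch that creates a throwaway two-container Pod with the example Limits from the table (100m/100Mi and 50m/200Mi). The Pod name `limits-demo`, the container names, and the images are placeholders chosen for illustration and are assumed to be available in your environment.

```shell
# Create a two-container Pod whose Limits sum to 150m CPU / 300Mi memory.
# All names and images below are placeholders for illustration only.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: limits-demo
spec:
  containers:
  - name: app
    image: nginx
    resources:
      limits:
        cpu: 100m
        memory: 100Mi
  - name: sidecar
    image: busybox
    command: ["sleep", "3600"]
    resources:
      limits:
        cpu: 50m
        memory: 200Mi
EOF

# Print each container's Limits; the Pod-level values aggregate to
# 100m + 50m = 150m CPU and 100Mi + 200Mi = 300Mi memory.
kubectl get pod limits-demo -o jsonpath='{range .spec.containers[*]}{.name}{": "}{.resources.limits}{"\n"}{end}'
```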
Deleting Pods may affect the operation of running business components; please proceed with caution.
Restore Pods to their desired state promptly: If a Pod remains in a state that affects business operations, such as `Pending` or `CrashLoopBackOff`, manually deleting it after addressing the reported error can help it quickly return to its desired state, such as `Running`. The deleted Pod will be rebuilt on the current node or rescheduled to another one.
Resource cleanup for operations management: Some Pods reach a final stage where they no longer change, and they often accumulate in large numbers, complicating the management of other Pods. Pods to be cleaned up may include those in the `Evicted` status due to insufficient node resources, or those in the `Completed` status produced by recurring scheduled tasks. In this case, the deleted Pods will no longer exist (a kubectl sketch for both scenarios follows the note below).
Note: For scheduled tasks, if you need to check the logs of each task execution, it is not recommended to delete the corresponding `Completed` Pods.
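For reference, the same recovery and cleanup operations can also be performed with kubectl. The commands below are a sketch: `my-nginx-pod` is a placeholder name, and the field-selector commands act on all Pods in the current namespace whose phase matches.

```shell
# List Pods and identify any stuck in an undesired state (e.g. Pending, CrashLoopBackOff).
kubectl get pods

# After addressing the underlying error, delete the stuck Pod so its controller rebuilds it.
kubectl delete pod my-nginx-pod

# Clean up finished Pods: Completed Pods have phase Succeeded; Evicted Pods have phase Failed.
kubectl delete pods --field-selector=status.phase=Succeeded
kubectl delete pods --field-selector=status.phase=Failed
```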
Go to Container Platform.
In the left navigation bar, click Workloads > Pods.
(Delete individually) Click the ⋮ on the right side of the Pod to be deleted > Delete, and confirm.
(Delete in bulk) Select the Pods to be deleted, click Delete above the list, and confirm.