Creating Services

In Kubernetes, a Service is a method for exposing a network application that is running as one or more Pods in your cluster.

Why Service is Needed

  1. Pods have their own IPs, but:

    • Pod IPs are not stable (they change if the Pod is recreated).

    • Directly accessing Pods becomes unreliable.

  2. A Service solves this by providing:

    • A stable IP and DNS name.

    • Automatic load balancing to the matching Pods.

Example of a ClusterIP-type Service:

# simple-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  1. The available type values and their behaviors are ClusterIP, NodePort, LoadBalancer, and ExternalName.
  2. The set of Pods targeted by a Service is usually determined by a selector that you define.
  3. port is the port on which the Service is exposed within the cluster.
  4. targetPort binds the Service to the Pod's containerPort. You can also reference a named port (port.name) defined on the Pod's container, as shown in the sketch after this list.
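
For example, a minimal sketch (the Pod, container, and port names are illustrative) of a Service whose targetPort references a named container port instead of a number:

# named-port-example.yaml (illustrative)
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  labels:
    app.kubernetes.io/name: MyApp
spec:
  containers:
    - name: app
      image: nginx:1.25
      ports:
        - name: http-web        # named container port
          containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: http-web      # refers to the port name rather than the number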

Headless Services

Sometimes you don't need load-balancing and a single Service IP. In this case, you can create what are termed headless Services by explicitly setting the cluster IP (.spec.clusterIP) to None:

spec:
  clusterIP: None

Headless Services are useful when:

  • You want to discover individual Pod IPs, not just a single service IP.

  • You need direct connections to each Pod (e.g., for clustered databases such as Cassandra).

  • You’re using StatefulSets where each Pod must have a stable DNS name.
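
A complete headless Service might look like the following sketch (the Service name and selector are illustrative):

# headless-service.yaml (illustrative)
apiVersion: v1
kind: Service
metadata:
  name: my-headless-service
spec:
  clusterIP: None            # makes the Service headless
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

DNS queries for such a Service return the IPs of the individual Pods instead of a single virtual IP.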

Creating a service by using the web console

  1. Go to Container Platform.

  2. In the left navigation bar, click Network > Services.

  3. Click Create Service.

  4. Refer to the following instructions to configure the relevant parameters.

    • Virtual IP Address: If enabled, a ClusterIP is allocated for this Service and can be used for service discovery within the cluster. If disabled, a headless Service is created, which is usually used by StatefulSets.
    • Type:
      • ClusterIP: Exposes the Service on a cluster-internal IP. Choosing this value makes the Service reachable only from within the cluster.
      • NodePort: Exposes the Service on each Node's IP at a static port (the NodePort).
      • ExternalName: Maps the Service to the contents of the externalName field (for example, to the hostname api.foo.bar.example).
      • LoadBalancer: Exposes the Service externally using an external load balancer. Kubernetes does not directly offer a load-balancing component; you must provide one, or you can integrate your Kubernetes cluster with a cloud provider.
    • Target Component:
      • Workload: The Service forwards requests to a specific workload, matched by labels such as project.cpaas.io/name: projectname and service.cpaas.io/name: deployment-name.
      • Virtualization: The Service forwards requests to a specific virtual machine or virtual machine group.
      • Label Selector: The Service forwards requests to any workload that carries the specified labels, for example environment: release.
    • Port: Configures the port mapping for this Service. In the following example, other Pods within the cluster can call this Service via the virtual IP (if enabled) and TCP port 80; the access requests are forwarded to the externally exposed TCP port 6379 (or the named port redis) of the target component's Pods.
      • Protocol: The protocol used by the Service. Supported protocols: TCP, UDP, HTTP, HTTP2, HTTPS, gRPC.
      • Service Port: The port number exposed by the Service within the cluster, that is, port, e.g., 80.
      • Container Port: The target port number (or name) that the service port maps to, that is, targetPort, e.g., 6379 or redis.
      • Service Port Name: Generated automatically in the format <protocol>-<service port>-<container port>, for example tcp-80-6379 or tcp-80-redis.
    • Session Affinity: Session affinity based on the source IP address (ClientIP). If enabled, all requests from the same IP address are kept on the same backend during load balancing, ensuring that requests from the same client are forwarded to the same Pod (see the YAML sketch after these steps).
  5. Click Create.
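
For reference, the Session Affinity parameter above corresponds to the sessionAffinity field in a Service manifest; a minimal sketch:

# session-affinity-example.yaml (illustrative)
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  sessionAffinity: ClientIP   # keep requests from the same client IP on the same backend Pod
  selector:
    app.kubernetes.io/name: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80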

Creating a service by using the CLI

Create a Service from a manifest file, for example the simple-service.yaml shown above:

kubectl apply -f simple-service.yaml

Alternatively, create a Service that exposes an existing Deployment named my-app:

kubectl expose deployment my-app \
  --port=80 \
  --target-port=8080 \
  --name=test-service \
  --type=NodePort \
  -n p1-1
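
To verify the result, you can inspect the Service that was created (using the namespace p1-1 from the command above):

kubectl get service test-service -n p1-1
kubectl describe service test-service -n p1-1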

Example: Accessing an Application Within the Cluster

# access-internal-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-clusterip
spec:
  type: ClusterIP
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
  1. Apply this YAML:
kubectl apply -f access-internal-demo.yaml
  2. Start another Pod:
kubectl run test-pod --rm -it --image=busybox -- /bin/sh
  3. Access the nginx-clusterip Service from inside the test-pod Pod:
wget -qO- http://nginx-clusterip
# or use the DNS record created automatically by Kubernetes: <service-name>.<namespace>.svc.cluster.local
wget -qO- http://nginx-clusterip.default.svc.cluster.local

You should see an HTML response containing text like "Welcome to nginx!".
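
If the request fails, a quick way to check whether the Service has selected the Pods is to list its endpoints (assuming the resources were created in the default namespace):

kubectl get endpoints nginx-clusterip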

Example: Accessing an Application from Outside the Cluster

# access-external-demo.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080
  1. Apply this YAML:
kubectl apply -f access-external-demo.yaml
  2. Check the Pods:
kubectl get pods -l app=nginx -o wide
  3. curl the Service through any node's IP and the NodePort (30080 in this example):
curl http://{NodeIP}:{nodePort}

You should see an HTML response containing text like "Welcome to nginx!".
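
If you are unsure which node IP to use, you can list the node addresses first (a sketch; whether the INTERNAL-IP or EXTERNAL-IP column applies depends on where you are calling from and on your network setup):

kubectl get nodes -o wide
curl http://<node-ip>:30080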

Of course, it is also possible to access the application from outside the cluster by creating a Service of type LoadBalancer.

Note: Make sure a load balancer implementation (for example, a cloud provider integration) is configured for the cluster beforehand.

# access-external-demo-with-loadbalancer.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb-service
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
  1. Apply this YAML:
kubectl apply -f access-external-demo-with-loadbalancer.yaml
  2. Get the external IP address:
kubectl get svc nginx-lb-service
NAME               TYPE           CLUSTER-IP   EXTERNAL-IP     PORT(S)        AGE
nginx-lb-service   LoadBalancer   10.0.2.57    34.122.45.100   80:30005/TCP   30s

EXTERNAL-IP is the address you use to access the application from outside the cluster, for example with curl or from a browser:

curl http://34.122.45.100

You should see an HTML response containing text like "Welcome to nginx!".

If EXTERNAL-IP stays in the pending state, no load balancer implementation is currently available in the cluster.
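
To investigate, you can inspect the Service's events (a sketch):

kubectl describe svc nginx-lb-service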

Example: ExternalName Type of Service

# external-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: my-external-service
  namespace: default
spec:
  type: ExternalName
  externalName: example.com
  1. Apply this YAML:
kubectl apply -f external-service.yaml
  2. Try to resolve the name from inside a Pod in the cluster:
kubectl run test-pod --rm -it --image=busybox -- sh

then:

nslookup my-external-service.default.svc.cluster.local

You'll see that the name resolves (via a CNAME record) to example.com.

LoadBalancer Type Service Annotations

AWS EKS Cluster

For detailed explanations of the EKS LoadBalancer Service annotations, please refer to the Annotation Usage Documentation.

• service.beta.kubernetes.io/aws-load-balancer-type
  • Value: external (use the official AWS LoadBalancer Controller).
  • Description: Specifies the controller for the LoadBalancer type.
    Note: Please contact the platform administrator in advance to deploy the AWS LoadBalancer Controller.
• service.beta.kubernetes.io/aws-load-balancer-nlb-target-type
  • Value:
    • instance: Traffic will be sent to the pods via NodePort.
    • ip: Traffic routes directly to the pods (the cluster must use Amazon VPC CNI).
  • Description: Specifies how traffic reaches the pods.
• service.beta.kubernetes.io/aws-load-balancer-scheme
  • Value:
    • internal: Private network.
    • internet-facing: Public network.
  • Description: Specifies whether to use a private network or a public network.
• service.beta.kubernetes.io/aws-load-balancer-ip-address-type
  • Value:
    • ipv4
    • dualstack
  • Description: Specifies the supported IP address stack.
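
As an illustration, a LoadBalancer Service using these annotations might look like the following sketch (the values shown are just one possible combination):

# aws-lb-service.yaml (illustrative)
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-ip-address-type: ipv4
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80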

Huawei Cloud CCE Cluster

For detailed explanations of the CCE LoadBalancer Service annotations, please refer to the Annotation Usage Documentation.

• kubernetes.io/elb.id
  • Value: The ID of the cloud load balancer.
  • Description: Uses an existing cloud load balancer; the ID must refer to one that already exists.
• kubernetes.io/elb.autocreate
  • Value (example): {"type":"public","bandwidth_name":"cce-bandwidth-1551163379627","bandwidth_chargemode":"bandwidth","bandwidth_size":5,"bandwidth_sharetype":"PER","eip_type":"5_bgp","available_zone":["cn-north-4b"],"l4_flavor_name":"L4_flavor.elb.s1.small"}
    Note: Please read the Filling Instructions first and adjust the example parameters as needed.
  • Description: Creates a new cloud load balancer.
• kubernetes.io/elb.subnet-id
  • Value: The ID of the subnet where the cluster is located.
  • Description: When the Kubernetes version is 1.11.7-r0 or lower, this field must be filled in when creating a new cloud load balancer.
• kubernetes.io/elb.class
  • Value:
    • union: Shared load balancing.
    • performance: Exclusive load balancing, only supported in Kubernetes 1.17 and above.
  • Description: Specifies the type of the new cloud load balancer to be created; refer to Differences Between Exclusive and Shared Elastic Load Balancing.
• kubernetes.io/elb.enterpriseID
  • Description: Specifies the enterprise project to which the newly created cloud load balancer belongs.
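
For example, a sketch of a LoadBalancer Service that asks CCE to create a new exclusive load balancer (the autocreate value is taken from the example above; adjust it for your environment and consult the provider documentation before use):

# cce-lb-service.yaml (illustrative)
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    kubernetes.io/elb.class: performance
    kubernetes.io/elb.autocreate: '{"type":"public","bandwidth_name":"cce-bandwidth-1551163379627","bandwidth_chargemode":"bandwidth","bandwidth_size":5,"bandwidth_sharetype":"PER","eip_type":"5_bgp","available_zone":["cn-north-4b"],"l4_flavor_name":"L4_flavor.elb.s1.small"}'
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80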

Azure AKS Cluster

For detailed explanations of the AKS LoadBalancer Service annotations, please refer to the Annotation Usage Documentation.

• service.beta.kubernetes.io/azure-load-balancer-internal
  • Value:
    • true: Private network.
    • false: Public network.
  • Description: Specifies whether to use a private network or a public network.
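
For example, a sketch of an internal (private network) LoadBalancer Service on AKS:

# aks-internal-lb-service.yaml (illustrative)
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80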

Google GKE Cluster

For detailed explanations of the GKE LoadBalancer Service annotations, please refer to the Annotation Usage Documentation.

• networking.gke.io/load-balancer-type
  • Value: Internal
  • Description: Specifies the use of a private network.
• cloud.google.com/l4-rbs
  • Value: enabled
  • Description: Defaults to public. If this parameter is configured, traffic will route directly to the pods.
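
For example, a sketch of an internal LoadBalancer Service on GKE using the annotation above:

# gke-internal-lb-service.yaml (illustrative)
apiVersion: v1
kind: Service
metadata:
  name: my-service
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80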