Configuring Resource Quotas for Pipeline Components

Overview

Configure resource quotas for the Pipeline component and for the Pods created by its TaskRuns.

Use Cases

  • Adjust the resource quotas for the Pipeline component
  • Set default resource quotas for the init containers and containers created by TaskRuns

Prerequisites

Resource Configuration Guidelines

Before configuring resource quotas:

  • Assess your cluster's available resources and capacity
  • Consider your workload characteristics and performance requirements
  • Start with conservative values and adjust based on monitoring data
  • Test configurations in non-production environments first
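
To gauge available headroom before choosing values, you can check each node's allocatable capacity and, if the metrics-server is installed, current usage:

$ kubectl describe nodes | grep -A 5 'Allocated resources'
$ kubectl top nodes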

Steps

Step 1

Edit the TektonConfig resource

$ kubectl edit tektonconfigs.operator.tekton.dev config
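
If you prefer a non-interactive change (for example, in automation), the same edit can be applied with kubectl patch. A minimal sketch, assuming a patch file named quota-patch.yaml that contains the spec.pipeline.options fragment shown in Step 2:

$ kubectl patch tektonconfigs.operator.tekton.dev config --type merge --patch-file quota-patch.yaml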

Step 2

WARNING

Modifying the configuration may trigger a rolling update of the component Pods, which can cause brief service unavailability. Perform this change during a maintenance window or another low-traffic period.

Example of modifying the spec.pipeline.options configuration (both the configMaps and deployments sections):

apiVersion: operator.tekton.dev/v1alpha1
kind: TektonConfig
metadata:
  name: config
spec:
  pipeline:
    options:
      disabled: false

      configMaps:
        config-defaults:
          data:
            # Add default container resource quotas
            # Adjust the values below according to your cluster's resource capacity and workload requirements
            default-container-resource-requirements: |
              place-scripts: # updates resource requirements of a 'place-scripts' container
                requests:
                  memory: "<MEMORY_REQUEST>"  # e.g., "128Mi"
                  cpu: "<CPU_REQUEST>"        # e.g., "250m"
                limits:
                  memory: "<MEMORY_LIMIT>"    # e.g., "512Mi"
                  cpu: "<CPU_LIMIT>"          # e.g., "500m"

              prepare: # updates resource requirements of a 'prepare' container
                requests:
                  memory: "<MEMORY_REQUEST>"  # e.g., "128Mi"
                  cpu: "<CPU_REQUEST>"        # e.g., "250m"
                limits:
                  memory: "<MEMORY_LIMIT>"    # e.g., "256Mi"
                  cpu: "<CPU_LIMIT>"          # e.g., "500m"

              working-dir-initializer: # updates resource requirements of a 'working-dir-initializer' container
                requests:
                  memory: "<MEMORY_REQUEST>"  # e.g., "128Mi"
                  cpu: "<CPU_REQUEST>"        # e.g., "250m"
                limits:
                  memory: "<MEMORY_LIMIT>"    # e.g., "512Mi"
                  cpu: "<CPU_LIMIT>"          # e.g., "500m"

              prefix-scripts: # updates resource requirements of containers whose names start with 'scripts-'
                requests:
                  memory: "<MEMORY_REQUEST>"  # e.g., "128Mi"
                  cpu: "<CPU_REQUEST>"        # e.g., "250m"
                limits:
                  memory: "<MEMORY_LIMIT>"    # e.g., "512Mi"
                  cpu: "<CPU_LIMIT>"          # e.g., "500m"

              prefix-sidecar-scripts: # updates resource requirements of containers whose names start with 'sidecar-scripts-'
                requests:
                  memory: "<MEMORY_REQUEST>"  # e.g., "128Mi"
                  cpu: "<CPU_REQUEST>"        # e.g., "250m"
                limits:
                  memory: "<MEMORY_LIMIT>"    # e.g., "512Mi"
                  cpu: "<CPU_LIMIT>"          # e.g., "500m"

              sidecar-tekton-log-results: # updates resource requirements of a 'sidecar-tekton-log-results' container
                requests:
                  memory: "<MEMORY_REQUEST>"  # e.g., "128Mi"
                  cpu: "<CPU_REQUEST>"        # e.g., "100m"
                limits:
                  memory: "<MEMORY_LIMIT>"    # e.g., "256Mi"
                  cpu: "<CPU_LIMIT>"          # e.g., "250m"

      deployments:
        # Adjust the resource values below according to your cluster's capacity and performance requirements
        tekton-pipelines-controller:
          spec:
            replicas: <REPLICA_COUNT>  # e.g., 1
            template:
              spec:
                containers:
                  - name: tekton-pipelines-controller
                    resources:
                      requests:
                        cpu: <CPU_REQUEST>        # e.g., "500m"
                        memory: <MEMORY_REQUEST>  # e.g., "512Mi"
                      limits:
                        cpu: <CPU_LIMIT>          # e.g., "1"
                        memory: <MEMORY_LIMIT>    # e.g., "1Gi"

        tekton-pipelines-remote-resolvers:
          spec:
            replicas: <REPLICA_COUNT>  # e.g., 1
            template:
              spec:
                containers:
                  - name: controller
                    resources:
                      requests:
                        cpu: <CPU_REQUEST>        # e.g., "200m"
                        memory: <MEMORY_REQUEST>  # e.g., "256Mi"
                      limits:
                        cpu: <CPU_LIMIT>          # e.g., "500m"
                        memory: <MEMORY_LIMIT>    # e.g., "512Mi"

        tekton-pipelines-webhook:
          spec:
            replicas: <REPLICA_COUNT>  # e.g., 1
            template:
              spec:
                containers:
                  - name: webhook
                    resources:
                      requests:
                        cpu: <CPU_REQUEST>        # e.g., "500m"
                        memory: <MEMORY_REQUEST>  # e.g., "256Mi"
                      limits:
                        cpu: <CPU_LIMIT>          # e.g., "1"
                        memory: <MEMORY_LIMIT>    # e.g., "500Mi"

        tekton-events-controller:
          spec:
            replicas: <REPLICA_COUNT>  # e.g., 1
            template:
              spec:
                containers:
                  - name: tekton-events-controller
                    resources:
                      requests:
                        cpu: <CPU_REQUEST>        # e.g., "100m"
                        memory: <MEMORY_REQUEST>  # e.g., "100Mi"
                      limits:
                        cpu: <CPU_LIMIT>          # e.g., "200m"
                        memory: <MEMORY_LIMIT>    # e.g., "256Mi"
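
In addition to exact container names and the prefix- keys shown above, default-container-resource-requirements also accepts a default key that applies to any container not matched by a more specific entry; an exact name takes precedence over a prefix match, which takes precedence over default. An illustrative fragment (values are placeholders, not recommendations):

            default-container-resource-requirements: |
              default:
                requests:
                  memory: "64Mi"
                  cpu: "50m"
                limits:
                  memory: "256Mi"
                  cpu: "250m"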

Step 3

Submit the configuration and wait for the Pods to update.

$ kubectl get pods -n tekton-pipelines -w

NAME                                                    READY   STATUS    RESTARTS   AGE
tekton-pipelines-controller-648d87488b-fq9bc            1/1     Running   0          2m21s
tekton-pipelines-remote-resolvers-79554f5959-cbm6x      1/1     Running   0          2m21s
tekton-pipelines-webhook-5cd9847998-864zf               1/1     Running   0          2m20s
tekton-events-controller-5c97b7554c-m59m6               1/1     Running   0          2m21s
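
Alternatively, you can block until each Deployment finishes rolling out instead of watching the Pod list, for example:

$ kubectl rollout status deployment/tekton-pipelines-controller -n tekton-pipelines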

Operation Result

You can see that the resource quota configurations for the Pipeline-related components have taken effect.

$ kubectl get deployments.apps -n tekton-pipelines tekton-pipelines-controller tekton-pipelines-remote-resolvers tekton-pipelines-webhook tekton-events-controller -o yaml | grep 'resources:' -A 6

          resources:
            limits:
              cpu: "1"
              memory: 1Gi
            requests:
              cpu: 500m
              memory: 512Mi
--
          resources:
            limits:
              cpu: 500m
              memory: 512Mi
            requests:
              cpu: 200m
              memory: 256Mi
--
          resources:
            limits:
              cpu: "1"
              memory: 500Mi
            requests:
              cpu: 500m
              memory: 256Mi
--
          resources:
            limits:
              cpu: 200m
              memory: 256Mi
            requests:
              cpu: 100m
              memory: 100Mi

Verifying Pod Resource Quota Configuration

Create a TaskRun

$ cat <<'EOF' | kubectl create -f -
apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: hello
  namespace: default
spec:
  taskSpec:
    steps:
      - name: hello
        image: alpine
        command: ["echo", "hello"]
EOF
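
The ConfigMap defaults apply only where a step does not declare its own requirements. In the Tekton v1 API, a step's computeResources field takes precedence over the config-defaults values; a sketch with illustrative values (the TaskRun name hello-with-resources is hypothetical):

apiVersion: tekton.dev/v1
kind: TaskRun
metadata:
  name: hello-with-resources
  namespace: default
spec:
  taskSpec:
    steps:
      - name: hello
        image: alpine
        command: ["echo", "hello"]
        computeResources:
          requests:
            cpu: 100m
            memory: 64Mi
          limits:
            cpu: 250m
            memory: 128Mi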

Wait for the TaskRun to Complete

$ kubectl get taskruns.tekton.dev -n default hello

NAME    SUCCEEDED   REASON      STARTTIME   COMPLETIONTIME
hello   True        Succeeded   2m41s       2m28s

View the Pod Resource Quota Configuration

$ kubectl get pods -n default hello-pod -o yaml

apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
  namespace: default
spec:
  containers:
    - image: alpine
      name: step-hello
      resources: {}
  initContainers:
    - name: prepare
      resources:
        limits:
          cpu: 100m
          memory: 256Mi
        requests:
          cpu: 50m
          memory: 64Mi

You can see that the resource quota for the prepare init container in the Pod matches the defaults configured in the config-defaults ConfigMap.
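
To extract just the prepare init container's resources rather than reading the full Pod YAML, a JSONPath query also works:

$ kubectl get pod hello-pod -n default -o jsonpath='{.spec.initContainers[?(@.name=="prepare")].resources}'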
