About gateways

A gateway is a dedicated Envoy proxy deployment paired with a Kubernetes Service, operating at the edge of the service mesh. It enables fine-grained control over traffic that enters or leaves the mesh. In Alauda Service Mesh, gateways are installed via gateway injection.

About gateway injection

Gateway injection uses the same mechanism as sidecar injection to deploy the Envoy proxy into gateway pods. To deploy a gateway:

  1. Create a Kubernetes Deployment and a corresponding Service in a namespace visible to the Istio control plane.
  2. Annotate and label the deployment so the Istio control plane injects an Envoy proxy configured as a gateway.
  3. Apply Istio's Gateway and VirtualService resources to control ingress or egress traffic.
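
After the gateway workload is running, a Gateway resource binds listeners to it and a VirtualService routes the traffic. The following is a minimal sketch of such a pair for an injected ingress gateway; the istio: <gateway_name> selector matches the label used later in this guide, and the hostname, backend service, and port shown here are placeholder values:

  Example Gateway and VirtualService
  apiVersion: networking.istio.io/v1beta1
  kind: Gateway
  metadata:
    name: <gateway_name>
    namespace: <gateway_namespace>
  spec:
    selector:
      istio: <gateway_name>   # must match the label on the injected gateway pods
    servers:
      - port:
          number: 80
          name: http
          protocol: HTTP
        hosts:
          - "www.example.com"
  ---
  apiVersion: networking.istio.io/v1beta1
  kind: VirtualService
  metadata:
    name: <gateway_name>-routes
    namespace: <gateway_namespace>
  spec:
    hosts:
      - "www.example.com"
    gateways:
      - <gateway_name>
    http:
      - route:
          - destination:
              host: <service_name>.<app_namespace>.svc.cluster.local
              port:
                number: 8080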

Linux Kernel Compatibility Notice

For nodes running Linux kernel versions earlier than 4.11 (e.g., CentOS 7), additional configuration is required prior to gateway installation.

INFO

Skip this section if your kernel version is 4.11 or later.
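
You can check the kernel version of your nodes through the Kubernetes API, for example:

  kubectl get nodes -o custom-columns='NAME:.metadata.name,KERNEL:.status.nodeInfo.kernelVersion'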

Prerequisites

  • jq is installed locally. It is used to build the JSON patch in the following steps.

Procedure

  1. Create a file named gateway-injection-template.txt that contains the following gateway injection template. It is the default template with two modifications for older kernels, which are explained in the notes after the template.

    gateway-injection-template.txt
    {{- $containers := list }}
    {{- range $index, $container := .Spec.Containers }}{{ if not (eq $container.Name "istio-proxy") }}{{ $containers = append $containers $container.Name }}{{end}}{{- end}}
    metadata:
      labels:
        service.istio.io/canonical-name: {{ index .ObjectMeta.Labels `service.istio.io/canonical-name` | default (index .ObjectMeta.Labels `app.kubernetes.io/name`) | default (index .ObjectMeta.Labels `app`) | default .DeploymentMeta.Name  | quote }}
        service.istio.io/canonical-revision: {{ index .ObjectMeta.Labels `service.istio.io/canonical-revision` | default (index .ObjectMeta.Labels `app.kubernetes.io/version`) | default (index .ObjectMeta.Labels `version`) | default "latest"  | quote }}
      annotations:
        istio.io/rev: {{ .Revision | default "default" | quote }}
        {{- if ge (len $containers) 1 }}
        {{- if not (isset .ObjectMeta.Annotations `kubectl.kubernetes.io/default-logs-container`) }}
        kubectl.kubernetes.io/default-logs-container: "{{ index $containers 0 }}"
        {{- end }}
        {{- if not (isset .ObjectMeta.Annotations `kubectl.kubernetes.io/default-container`) }}
        kubectl.kubernetes.io/default-container: "{{ index $containers 0 }}"
        {{- end }}
        {{- end }}
    spec:
      securityContext:
      {{- if .Values.gateways.securityContext }}
        {{- toYaml .Values.gateways.securityContext | nindent 4 }}
      {{- else }}
        sysctls: []  
        capabilities:
          add: [CAP_NET_BIND_SERVICE]  
      {{- end }}
      containers:
      - name: istio-proxy
      {{- if contains "/" (annotation .ObjectMeta `sidecar.istio.io/proxyImage` .Values.global.proxy.image) }}
        image: "{{ annotation .ObjectMeta `sidecar.istio.io/proxyImage` .Values.global.proxy.image }}"
      {{- else }}
        image: "{{ .ProxyImage }}"
      {{- end }}
        ports:
        - containerPort: 15090
          protocol: TCP
          name: http-envoy-prom
        args:
        - proxy
        - router
        - --domain
        - $(POD_NAMESPACE).svc.{{ .Values.global.proxy.clusterDomain }}
        - --proxyLogLevel={{ annotation .ObjectMeta `sidecar.istio.io/logLevel` .Values.global.proxy.logLevel }}
        - --proxyComponentLogLevel={{ annotation .ObjectMeta `sidecar.istio.io/componentLogLevel` .Values.global.proxy.componentLogLevel }}
        - --log_output_level={{ annotation .ObjectMeta `sidecar.istio.io/agentLogLevel` .Values.global.logging.level }}
      {{- if .Values.global.sts.servicePort }}
        - --stsPort={{ .Values.global.sts.servicePort }}
      {{- end }}
      {{- if .Values.global.logAsJson }}
        - --log_as_json
      {{- end }}
      {{- if .Values.global.proxy.lifecycle }}
        lifecycle:
          {{ toYaml .Values.global.proxy.lifecycle | indent 6 }}
      {{- end }}
        securityContext:
          runAsUser: {{ .ProxyUID | default "1337" }}
          runAsGroup: {{ .ProxyGID | default "1337" }}
        env:
        - name: PILOT_CERT_PROVIDER
          value: {{ .Values.global.pilotCertProvider }}
        - name: CA_ADDR
        {{- if .Values.global.caAddress }}
          value: {{ .Values.global.caAddress }}
        {{- else }}
          value: istiod{{- if not (eq .Values.revision "") }}-{{ .Values.revision }}{{- end }}.{{ .Values.global.istioNamespace }}.svc:15012
        {{- end }}
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: INSTANCE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        - name: HOST_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: ISTIO_CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
        - name: PROXY_CONFIG
          value: |
                {{ protoToJSON .ProxyConfig }}
        - name: ISTIO_META_POD_PORTS
          value: |-
            [
            {{- $first := true }}
            {{- range $index1, $c := .Spec.Containers }}
              {{- range $index2, $p := $c.Ports }}
                {{- if (structToJSON $p) }}
                {{if not $first}},{{end}}{{ structToJSON $p }}
                {{- $first = false }}
                {{- end }}
              {{- end}}
            {{- end}}
            ]
        - name: GOMEMLIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.memory
        - name: GOMAXPROCS
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
        {{- if .CompliancePolicy }}
        - name: COMPLIANCE_POLICY
          value: "{{ .CompliancePolicy }}"
        {{- end }}
        - name: ISTIO_META_APP_CONTAINERS
          value: "{{ $containers | join "," }}"
        - name: ISTIO_META_CLUSTER_ID
          value: "{{ valueOrDefault .Values.global.multiCluster.clusterName `Kubernetes` }}"
        - name: ISTIO_META_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: ISTIO_META_INTERCEPTION_MODE
          value: "{{ .ProxyConfig.InterceptionMode.String }}"
        {{- if .Values.global.network }}
        - name: ISTIO_META_NETWORK
          value: "{{ .Values.global.network }}"
        {{- end }}
        {{- if .DeploymentMeta.Name }}
        - name: ISTIO_META_WORKLOAD_NAME
          value: "{{ .DeploymentMeta.Name }}"
        {{ end }}
        {{- if and .TypeMeta.APIVersion .DeploymentMeta.Name }}
        - name: ISTIO_META_OWNER
          value: kubernetes://apis/{{ .TypeMeta.APIVersion }}/namespaces/{{ valueOrDefault .DeploymentMeta.Namespace `default` }}/{{ toLower .TypeMeta.Kind}}s/{{ .DeploymentMeta.Name }}
        {{- end}}
        {{- if .Values.global.meshID }}
        - name: ISTIO_META_MESH_ID
          value: "{{ .Values.global.meshID }}"
        {{- else if (valueOrDefault .MeshConfig.TrustDomain .Values.global.trustDomain) }}
        - name: ISTIO_META_MESH_ID
          value: "{{ (valueOrDefault .MeshConfig.TrustDomain .Values.global.trustDomain) }}"
        {{- end }}
        {{- with (valueOrDefault .MeshConfig.TrustDomain .Values.global.trustDomain)  }}
        - name: TRUST_DOMAIN
          value: "{{ . }}"
        {{- end }}
        {{- range $key, $value := .ProxyConfig.ProxyMetadata }}
        - name: {{ $key }}
          value: "{{ $value }}"
        {{- end }}
        {{with .Values.global.imagePullPolicy }}imagePullPolicy: "{{.}}"{{end}}
        readinessProbe:
          httpGet:
            path: /healthz/ready
            port: 15021
          initialDelaySeconds: {{.Values.global.proxy.readinessInitialDelaySeconds }}
          periodSeconds: {{ .Values.global.proxy.readinessPeriodSeconds }}
          timeoutSeconds: 3
          failureThreshold: {{ .Values.global.proxy.readinessFailureThreshold }}
        volumeMounts:
        - name: workload-socket
          mountPath: /var/run/secrets/workload-spiffe-uds
        - name: credential-socket
          mountPath: /var/run/secrets/credential-uds
        {{- if eq .Values.global.caName "GkeWorkloadCertificate" }}
        - name: gke-workload-certificate
          mountPath: /var/run/secrets/workload-spiffe-credentials
          readOnly: true
        {{- else }}
        - name: workload-certs
          mountPath: /var/run/secrets/workload-spiffe-credentials
        {{- end }}
        {{- if eq .Values.global.pilotCertProvider "istiod" }}
        - mountPath: /var/run/secrets/istio
          name: istiod-ca-cert
        {{- end }}
        - mountPath: /var/lib/istio/data
          name: istio-data
        # SDS channel between istioagent and Envoy
        - mountPath: /etc/istio/proxy
          name: istio-envoy
        - mountPath: /var/run/secrets/tokens
          name: istio-token
        {{- if .Values.global.mountMtlsCerts }}
        # Use the key and cert mounted to /etc/certs/ for the in-cluster mTLS communications.
        - mountPath: /etc/certs/
          name: istio-certs
          readOnly: true
        {{- end }}
        - name: istio-podinfo
          mountPath: /etc/istio/pod
      volumes:
      - emptyDir: {}
        name: workload-socket
      - emptyDir: {}
        name: credential-socket
      {{- if eq .Values.global.caName "GkeWorkloadCertificate" }}
      - name: gke-workload-certificate
        csi:
          driver: workloadcertificates.security.cloud.google.com
      {{- else}}
      - emptyDir: {}
        name: workload-certs
      {{- end }}
      # SDS channel between istioagent and Envoy
      - emptyDir:
          medium: Memory
        name: istio-envoy
      - name: istio-data
        emptyDir: {}
      - name: istio-podinfo
        downwardAPI:
          items:
            - path: "labels"
              fieldRef:
                fieldPath: metadata.labels
            - path: "annotations"
              fieldRef:
                fieldPath: metadata.annotations
      - name: istio-token
        projected:
          sources:
          - serviceAccountToken:
              path: istio-token
              expirationSeconds: 43200
              audience: {{ .Values.global.sds.token.aud }}
      {{- if eq .Values.global.pilotCertProvider "istiod" }}
      - name: istiod-ca-cert
      {{- if eq (.Values.pilot.env).ENABLE_CLUSTER_TRUST_BUNDLE_API true }}
        projected:
          sources:
          - clusterTrustBundle:
              name: istio.io:istiod-ca:root-cert
              path: root-cert.pem
      {{- else }}
        configMap:
          name: istio-ca-root-cert
      {{- end }}
      {{- end }}
      {{- if .Values.global.mountMtlsCerts }}
      # Use the key and cert mounted to /etc/certs/ for the in-cluster mTLS communications.
      - name: istio-certs
        secret:
          optional: true
          {{ if eq .Spec.ServiceAccountName "" }}
          secretName: istio.default
          {{ else -}}
          secretName: {{  printf "istio.%s" .Spec.ServiceAccountName }}
          {{  end -}}
      {{- end }}
      {{- if .Values.global.imagePullSecrets }}
      imagePullSecrets:
        {{- range .Values.global.imagePullSecrets }}
        - name: {{ . }}
        {{- end }}
      {{- end }}
    1. The sysctls entry is removed because the net.ipv4.ip_unprivileged_port_start sysctl is not available on older Linux kernels.
    2. CAP_NET_BIND_SERVICE is added to the list of capabilities so that the gateway can listen on ports below 1024.
  2. Patch the gateway injection template into the Istio resource named default:

    TEMPLATE_CONTENT=$(cat gateway-injection-template.txt)
    PATCH_DATA=$(jq -n \
      --arg template "${TEMPLATE_CONTENT}" \
      '{
        "spec": {
          "values": {
            "sidecarInjectorWebhook": {
              "templates": {
                "gateway": $template
              }
            }
          }
        }
      }')
    # Finally apply the patch to the Istio resource named `default`:
    kubectl patch istio default --type=merge -p "${PATCH_DATA}"
  3. Wait for the control plane to return the Ready status condition by running the following command:

    kubectl wait --for condition=Ready istio/default --timeout=3m
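
    Optionally, you can read back the field that the patch wrote to confirm that the gateway template is stored in the Istio resource (a quick sanity check; it assumes the Istio resource is named default, as above):

    kubectl get istio default -o jsonpath='{.spec.values.sidecarInjectorWebhook.templates.gateway}' | head -n 5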

Installing a gateway via gateway injection

This procedure explains how to install a gateway by using gateway injection.

INFO

The following procedure applies to both ingress and egress gateway deployments.

Prerequisites

  • Alauda Service Mesh v2 Operator is installed.
  • An Istio control plane is deployed.

Procedure

  1. Create a namespace for the gateway:

    kubectl create namespace <gateway_namespace>

    NOTE

    Install the gateway and the Istio control plane in different namespaces.

    You can install the gateway in a dedicated gateway namespace. This approach allows the gateway to be shared by many applications operating in different namespaces. Alternatively, you can install the gateway in an application namespace. In this approach, the gateway acts as a dedicated gateway for the application in that namespace.

  2. Create a YAML file named secret-reader.yaml that defines the service account, role, and role binding for the gateway deployment. These resources allow the gateway to read Kubernetes secrets, which is required for obtaining TLS credentials.

    secret-reader.yaml
    apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: secret-reader
      namespace: <gateway_namespace>
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: secret-reader
      namespace: <gateway_namespace>
    rules:
      - apiGroups: [""]
        resources: ["secrets"]
        verbs: ["get", "watch", "list"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name:  secret-reader
      namespace: <gateway_namespace>
    roleRef:
      apiGroup: rbac.authorization.k8s.io
      kind: Role
      name: secret-reader
    subjects:
      - kind: ServiceAccount
        name: secret-reader
        namespace: <gateway_namespace>
  3. Apply the YAML file by running the following command:

    kubectl apply -f secret-reader.yaml
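
    Optionally, confirm that the service account is allowed to read secrets in the gateway namespace (a quick check using kubectl auth can-i):

    kubectl auth can-i get secrets -n <gateway_namespace> --as=system:serviceaccount:<gateway_namespace>:secret-reader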
  4. Create a YAML file named gateway-deployment.yaml that defines the Kubernetes Deployment object for the gateway.

    gateway-deployment.yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: <gateway_name>
      namespace: <gateway_namespace>
    spec:
      selector:
        matchLabels:
          istio: <gateway_name>
      template:
        metadata:
          annotations:
            inject.istio.io/templates: gateway
          labels:
            istio: <gateway_name>
            sidecar.istio.io/inject: "true"
        spec:
          containers:
            - name: istio-proxy
              image: auto
              securityContext:
                capabilities:
                  drop:
                    - ALL
                allowPrivilegeEscalation: false
                privileged: false
                readOnlyRootFilesystem: true
                runAsNonRoot: true
              ports:
                - containerPort: 15090
                  protocol: TCP
                  name: http-envoy-prom
              resources:
                limits:
                  cpu: 2000m
                  memory: 1024Mi
                requests:
                  cpu: 100m
                  memory: 128Mi
          serviceAccountName: secret-reader
    1. Indicates that the Istio control plane uses the gateway injection template instead of the default sidecar template.
    2. Ensure that a unique label is set for the gateway deployment. A unique label is required so that Istio Gateway resources can select gateway workloads.
    3. Enables gateway injection by setting the sidecar.istio.io/inject label to true. If the name of the Istio resource is not default, you must use the istio.io/rev: <istio_revision> label instead, where the revision is the active revision of the Istio resource; see the sketch after this list.
    4. Sets the image field to auto so that the image automatically updates each time the pod starts.
    5. Sets the serviceAccountName to the name of the ServiceAccount created previously.
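
    For reference, the following sketch shows only the pod template metadata of the same deployment when the Istio resource uses a revision other than default; replace <istio_revision> with the active revision and leave the rest of the deployment unchanged:

      template:
        metadata:
          annotations:
            inject.istio.io/templates: gateway
          labels:
            istio: <gateway_name>
            istio.io/rev: <istio_revision>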
  5. Apply the YAML file by running the following command:

    kubectl apply -f gateway-deployment.yaml
  6. Verify that the gateway Deployment rollout was successful by running the following command:

    kubectl rollout status deployment/<gateway_name> -n <gateway_namespace>

    You should see output similar to the following:

    Example output

    Waiting for deployment "<gateway_name>" rollout to finish: 0 of 1 updated replicas are available...
    deployment "<gateway_name>" successfully rolled out
  7. Create a YAML file named gateway-service.yaml that contains the Kubernetes Service object for the gateway.

    gateway-service.yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: <gateway_name>
      namespace: <gateway_namespace>
    spec:
      type: ClusterIP
      selector:
        istio: <gateway_name>
      ports:
        - name: status-port
          port: 15021
          protocol: TCP
          targetPort: 15021
        - name: http2
          port: 80
          protocol: TCP
          targetPort: 80
        - name: https
          port: 443
          protocol: TCP
          targetPort: 443
    1. When you set spec.type to ClusterIP, the gateway Service object can be accessed only from within the cluster. If the gateway must handle ingress traffic from outside the cluster, set spec.type to LoadBalancer.
    2. Set the selector to the unique label or set of labels specified in the pod template of the gateway deployment that you previously created.
  8. Apply the YAML file by running the following command:

    kubectl apply -f gateway-service.yaml
  9. Verify that the gateway service is targeting the endpoints of the gateway pods by running the following command:

    kubectl get endpoints <gateway_name> -n <gateway_namespace>

    You should see output similar to the following example:

    Example output

    NAME              ENDPOINTS                             AGE
    <gateway_name>    10.131.0.181:8080,10.131.0.181:8443   1m
  10. Optional: Create a YAML file named gateway-hpa.yaml that defines a horizontal pod autoscaler for the gateway. The following example sets the minimum number of replicas to 2 and the maximum to 5, and scales up when average CPU utilization exceeds 80% of the CPU resource limit specified in the pod template of the gateway deployment.

    gateway-hpa.yaml
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: <gateway_name>
      namespace: <gateway_namespace>
    spec:
      minReplicas: 2
      maxReplicas: 5
      metrics:
      - resource:
          name: cpu
          target:
            averageUtilization: 80
            type: Utilization
        type: Resource
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: <gateway_name>
    1. Set spec.scaleTargetRef.name to the name of the gateway deployment previously created.
  11. Optional: Apply the YAML file by running the following command:

    kubectl apply -f gateway-hpa.yaml
  12. Optional: Create a YAML file named gateway-pdb.yaml that defines a pod disruption budget for the gateway. The following example allows gateway pods to be evicted only when at least one healthy gateway pod remains on the cluster after the eviction.

    gateway-pdb.yaml
    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: <gateway_name>
      namespace: <gateway_namespace>
    spec:
      minAvailable: 1
      selector:
        matchLabels:
          istio: <gateway_name>
    1. Set the spec.selector.matchLabels to the unique label or set of labels specified in the pod template of the gateway deployment previously created.
  13. Optional: Apply the YAML file by running the following command:

    kubectl apply -f gateway-pdb.yaml
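
    Optionally, verify that the horizontal pod autoscaler and the pod disruption budget exist for the gateway:

    kubectl get hpa,pdb -n <gateway_namespace>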