Using OCI Connector to Deploy Workloads in a Secretless Way
In a Kubernetes cluster, pulling images from private registries typically requires distributing registry credentials to namespaces, which increases the risk of credential leakage.
The OCI Connector provides a secretless solution by acting as a proxy for the registry. This allows users to access private registries without storing long-term passwords or robot tokens in every namespace, thereby maximizing credential security.
This guide demonstrates how to use the OCI Connector to deploy workloads that need to pull images from private OCI registries. The OCI Connector functions as a reverse proxy between your Kubernetes cluster and the OCI registry, handling authentication and image retrieval.
Feature Overview
When deploying workloads with the OCI Connector, keep the following key points in mind:
- To enable image pulls via the OCI Connector proxy in the Kubernetes runtime, you must configure the ConnectorOCI to expose its service using either NodePort or Ingress. Refer to the Installation Guide for detailed setup instructions.
- The image address specified in your workload will be automatically rewritten to point to the OCI Connector proxy. Since the proxy uses HTTP, you must configure the container runtime to allow insecure registries (an illustrative runtime configuration follows this list).
- The OCI Connector acts as a reverse proxy for the OCI registry, handling authentication and image pulls on behalf of your workloads.
- The OCI Connector proxy authenticates clients using their service account tokens, ensuring that only authorized workloads can access the specified connector.
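For illustration only: assuming your nodes run containerd with the registry `config_path` set to `/etc/containerd/certs.d`, and the proxy is exposed at 192.168.x.x:31567 (substitute the address reported in your connector status), a per-node host configuration could look like this:
# Illustrative sketch: 192.168.x.x:31567 stands in for your connector proxy address.
sudo mkdir -p "/etc/containerd/certs.d/192.168.x.x:31567"
cat <<'EOF' | sudo tee "/etc/containerd/certs.d/192.168.x.x:31567/hosts.toml"
server = "http://192.168.x.x:31567"

[host."http://192.168.x.x:31567"]
  capabilities = ["pull", "resolve"]
EOF
If your nodes run Docker instead, adding the proxy address to `insecure-registries` in `/etc/docker/daemon.json` serves the same purpose. In either case, consult your container runtime's documentation for the authoritative procedure.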
Prerequisites
- ConnectorsCore is installed: Ensure that ConnectorsCore is deployed and running in your cluster.
- ConnectorOCI is installed and exposed externally: Deploy ConnectorOCI and expose it outside the cluster via NodePort or Ingress. See the Installation Guide for details.
- Access to a private OCI registry: You must have valid credentials and access to the target registry.
- kubectl configured: Ensure `kubectl` is installed and configured to access your cluster.
With the OCI Connector, you can securely pull images from private registries in your Kubernetes cluster without storing credentials on the client side. This approach ensures that sensitive credentials are managed centrally and never exposed to individual workloads or users.
The OCI Connector enables seamless, credential-free image pulls for pods by proxying authentication and image requests through a secure, centralized service.
Overview
The process involves several key steps:
- Creating a Connector resource that defines the connection to your OCI registry.
- Setting up authentication secrets.
- Creating a ServiceAccount token for internal authentication.
- Configuring image pull secrets.
- Deploying the workload with the appropriate annotations.
Operational Steps
Step 1: Create Connector
First, create a namespace for the demo:
kubectl create namespace oci-connector-demo
Next, create a Connector resource that defines the connection to your OCI registry. This resource will manage authentication and proxy operations.
kubectl apply -n oci-connector-demo -f - <<'EOF'
apiVersion: connectors.alauda.io/v1alpha1
kind: Connector
metadata:
  name: harbor
spec:
  address: https://harbor.example.com # Replace with your OCI registry address
  auth:
    name: tokenAuth
    secretRef:
      name: harbor-secret
  connectorClassName: oci
---
apiVersion: v1
kind: Secret
metadata:
  name: harbor-secret
type: cpaas.io/distribution-registry-token
stringData: # Replace with actual authentication information
  username: robot$demo
  password: robotsecret
EOF
Recommendation: Use a robot account for registry access instead of an admin account if your OCI registry supports it.
Explanation:
- The `Connector` resource defines the connection parameters, including the registry address and authentication method.
- The `Secret` resource stores the registry credentials securely.
- `connectorClassName: oci` specifies that this is an OCI-type connector.
- The `tokenAuth` authentication method is used for token-based authentication.
Now we have created a connector in the namespace `oci-connector-demo`, and the connector status is `Ready`.
$ kubectl get connector.connectors -n oci-connector-demo
NAME CLASS ADDRESS READY REASON AGE
harbor oci https://harbor.example.com True 30s
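If the connector does not become Ready, describing it shows the condition details:
kubectl describe connector.connectors harbor -n oci-connector-demo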
Step 2: Create a ServiceAccount Token
Generate a token for the default ServiceAccount in your namespace:
$ kubectl create token default -n oci-connector-demo
eyJhbGciOiJSUzI1NiIsImtpZCI6xxxximJ0VNVV_GblYvYy3dg...
This token will be used for pulling images through the connector proxy. The token of any ServiceAccount that has permission to access the connector can be used as the pull secret for the pod.
For more information on Connector resource permissions, see Connector Scope Permissions.
Explanation:
- This token will be used to authenticate requests to the connector proxy.
- The token is scoped to the specific namespace (`oci-connector-demo`).
- Store this token securely, as it will be used in the next step.
Note: ServiceAccount tokens have an expiration time (default: 1 hour). You can use the `--duration` flag to extend the expiration, as shown below. For more details, see the Kubernetes documentation.
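For example, to request a token that stays valid for 24 hours (the maximum accepted duration depends on your cluster's API server settings):
kubectl create token default -n oci-connector-demo --duration=24h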
Step 3: Create an Image Pull Secret
Create a Docker registry secret using the ServiceAccount token:
kubectl create secret docker-registry oci-connector-secret \
--docker-server="192.168.x.x:31567" \
--docker-username="u" \
--docker-password="eyJhbGciOiJSUzI1NiIsImtpZCI6xxxximJ0VNVV_GblYvYy3dg" \
--docker-email=xxx@xxxx \
--namespace=oci-connector-demo
Explanation:
- The `docker-server` points to the connector proxy service address in the cluster. You can get the address from the connector status:
$ kubectl get connector.connectors harbor -n oci-connector-demo -o jsonpath='{.status.proxy.httpAddress.url}'
http://192.168.x.x:31567/namespaces/oci-connector-demo/connectors/harbor
- The `docker-username` is set to "u" (a placeholder username).
- The `docker-password` uses the ServiceAccount token from the previous step.
- The `docker-email` can be any valid email address (used for Docker registry compatibility).
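To confirm that the secret encodes the expected proxy address and token, you can decode its `.dockerconfigjson` payload with standard kubectl:
# Prints the docker config JSON containing the server, username, and token
kubectl get secret oci-connector-secret -n oci-connector-demo -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d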
Step 4: Patch ServiceAccount with Image Pull Secret
Attach the image pull secret to the ServiceAccount so that pods using this ServiceAccount can automatically use the secret for image pulling.
kubectl patch serviceaccount default -n oci-connector-demo -p "{\"imagePullSecrets\": [{\"name\": \"oci-connector-secret\"}]}"
Explanation:
- This command adds `oci-connector-secret` to the `imagePullSecrets` list of the default ServiceAccount.
- Any pod using this ServiceAccount will automatically use the secret for image pulls, so you do not need to specify it in each pod definition.
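You can confirm that the patch took effect by reading the ServiceAccount back:
# Should print: [{"name":"oci-connector-secret"}]
kubectl get serviceaccount default -n oci-connector-demo -o jsonpath='{.imagePullSecrets}'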
Step 5: Deploy the Workload
Create a workload (Pod in this example) with the necessary annotations to use the OCI connector.
kubectl apply -n oci-connector-demo -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: oci-connector-pod
  annotations:
    connectors.cpaas.io/connector: oci-connector-demo/harbor
  labels:
    connectors.cpaas.io/proxy-inject: "true"
spec:
  containers:
  - name: oci-connector-container
    image: harbor.example.com/oci-connector-demo:v1
  serviceAccountName: default
EOF
- The `connectors.cpaas.io/connector` annotation specifies which connector to use (`namespace/connector-name`).
- The `connectors.cpaas.io/proxy-inject: "true"` label enables proxy injection for this pod.
- The `image` field should specify your original image address.
- `serviceAccountName: default` ensures the pod uses the ServiceAccount with the image pull secret.
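The same mechanism should carry over to higher-level workloads: the annotation and label simply move into the pod template. Below is a minimal sketch of a Deployment, assuming the same connector and image as above (the Deployment name and app label are illustrative):
kubectl apply -n oci-connector-demo -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oci-connector-deployment   # illustrative name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oci-connector-demo
  template:
    metadata:
      labels:
        app: oci-connector-demo
        connectors.cpaas.io/proxy-inject: "true"   # enables proxy injection for the pods
      annotations:
        connectors.cpaas.io/connector: oci-connector-demo/harbor   # namespace/connector-name
    spec:
      serviceAccountName: default
      containers:
      - name: oci-connector-container
        image: harbor.example.com/oci-connector-demo:v1
EOF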
Step 6: Verify Pod Status
After the pod is created, you can see that the image address in the pod spec has been rewritten to the connector proxy address.
$ kubectl get pod oci-connector-pod -n oci-connector-demo -o jsonpath='{.spec.containers[0].image}'
# Example output:
# 192.168.x.x:31567/namespaces/oci-connector-demo/connectors/harbor/oci-connector-demo:v1
Then, check that the pod is running and has successfully pulled the image through the connector proxy:
kubectl get pod oci-connector-pod -n oci-connector-demo
Expected Output:
NAME READY STATUS RESTARTS AGE
oci-connector-pod 1/1 Running 0 2m
Troubleshooting
Common Issues
- Pod stuck in ImagePullBackOff:
  - Ensure the connector is properly configured and running.
  - Verify the ServiceAccount token is valid and not expired.
  - Confirm the image pull secret is correctly attached to the ServiceAccount.
- Authentication failures:
  - Check the registry credentials in the `harbor-secret`.
  - Ensure the connector address is accessible.
  - Verify the image repository path matches your actual repository.
- Proxy injection not working:
  - Confirm the `connectors.cpaas.io/proxy-inject: "true"` label is present.
  - Check that the connector annotation is correctly formatted.
  - Ensure the connector exists in the specified namespace.
Verification Commands
# Check connector status
kubectl get connector.connectors harbor -n oci-connector-demo
# Verify secret exists
kubectl get secret oci-connector-secret -n oci-connector-demo
# Check ServiceAccount configuration
kubectl get serviceaccount default -n oci-connector-demo -o yaml
# View pod events for troubleshooting
kubectl describe pod oci-connector-pod -n oci-connector-demo
Conclusion
You have now completed the process of using the OCI Connector to deploy workloads in a secretless way. This approach enhances security by centralizing credential management and eliminating the need to distribute sensitive information to individual workloads.