Alauda Build of Kiali
Using Alauda Build of Kiali
The Alauda Build of Kiali provides observability and visualization capabilities for applications deployed within the service mesh. After adding an application to the mesh, the Alauda Build of Kiali can be used to inspect traffic flow and monitor mesh behavior.
About Kiali
The Alauda Build of Kiali is derived from the open source Kiali project and serves as the management console for Alauda Service Mesh.
It provides:
- Visualization of mesh topology and real-time traffic flow
- Insight into application health status and performance metrics
- Centralized access to configuration and validation tools
- Integration with Grafana for metric dashboards
- Support for distributed tracing via Jaeger or OpenTelemetry
These capabilities enable users to diagnose service behavior, identify potential issues, and optimize mesh configuration from a unified interface.
Installing Alauda Build of Kiali
The following steps show how to install Alauda Build of Kiali.
Prerequisites
- You are logged in to the Alauda Container Platform web console as cluster-admin.
Procedure
- In the Alauda Container Platform web console, navigate to Administrator.
- Select Marketplace > OperatorHub.
- Search for Alauda Build of Kiali.
- Locate Alauda Build of Kiali and click to select it.
- Click Install, then click Confirm to install the Operator.
Verification
Verify that the Operator installation status is reported as Succeeded in the Installation Info section.
Configuring Monitoring with Kiali
The following steps show how to integrate the Alauda Build of Kiali with user-workload monitoring.
Prerequisites
- The Alauda Build of Kiali Operator is installed.
Procedure
Retrieve the CA certificate for Alauda Container Platform from the Global cluster:
NOTE
Run the following command in the Global cluster.
# CA certificate for ACP - base64-encoded
kubectl -ncpaas-system get secret dex.tls -o jsonpath='{.data.ca\.crt}'
The output is a base64-encoded certificate. Store this value for use in later steps.
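As a quick sanity check, the stored value should decode to a PEM certificate block. The snippet below is illustrative only: it round-trips a placeholder string instead of a real cluster secret, so the variable name SAMPLE_CA and its content are stand-ins.

```shell
# Illustrative only: round-trip a placeholder the same way the real
# base64-encoded CA value from the dex.tls Secret would be decoded.
SAMPLE_CA=$(printf '%s' '-----BEGIN CERTIFICATE-----' | base64)
printf '%s' "$SAMPLE_CA" | base64 -d
# prints: -----BEGIN CERTIFICATE-----
```

A real value decodes the same way; if `base64 -d` reports an error or the output does not start with `-----BEGIN CERTIFICATE-----`, re-run the retrieval command.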
Retrieve platform configuration from the business cluster:
export PLATFORM_URL=$(kubectl -nkube-public get configmap global-info -o jsonpath='{.data.platformURL}')
export CLUSTER_NAME=$(kubectl -nkube-public get configmap global-info -o jsonpath='{.data.clusterName}')
export ALB_CLASS_NAME=$(kubectl -nkube-public get configmap global-info -o jsonpath='{.data.systemAlbIngressClassName}')
export OIDC_ISSUER=$(kubectl -nkube-public get configmap global-info -o jsonpath='{.data.oidcIssuer}')
export OIDC_CLIENT_ID=$(kubectl -nkube-public get configmap global-info -o jsonpath='{.data.oidcClientID}')
export OIDC_CLIENT_SECRET=$(kubectl -nkube-public get configmap global-info -o jsonpath='{.data.oidcClientSecret}')
export MONITORING_URL=$(kubectl get feature monitoring -o jsonpath='{.spec.accessInfo.database.address}')
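Because a mistyped jsonpath expression yields an empty string rather than an error, it can help to confirm that every export above actually returned a value. The loop below is a sketch; the variable list simply mirrors the exports above.

```shell
# Warn about any variable that came back empty (usually caused by a
# mismatched configmap key in the jsonpath expression).
for v in PLATFORM_URL CLUSTER_NAME ALB_CLASS_NAME OIDC_ISSUER OIDC_CLIENT_ID OIDC_CLIENT_SECRET MONITORING_URL; do
  eval "val=\${$v}"
  if [ -z "$val" ]; then
    echo "missing: $v"
  fi
done
```

No output means all variables are populated.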
Create a Secret named kiali in the istio-system namespace for OpenID authentication:
kubectl create secret generic kiali --from-literal="oidc-secret=$OIDC_CLIENT_SECRET" -nistio-system
Example output:
secret/kiali created
Create a Secret for monitoring database credentials:
SECRET_NAME=$(kubectl get feature monitoring -o jsonpath='{.spec.accessInfo.database.basicAuth.secretName}')
AUTH_USERNAME=$(kubectl -ncpaas-system get secret "$SECRET_NAME" -o jsonpath="{.data.username}" | base64 -d)
AUTH_PASSWORD=$(kubectl -ncpaas-system get secret "$SECRET_NAME" -o jsonpath="{.data.password}" | base64 -d)
kubectl create secret generic "kiali-monitoring-basic-auth" \
--from-literal="username=$AUTH_USERNAME" \
--from-literal="password=$AUTH_PASSWORD" \
-n istio-system
Example output:
secret/kiali-monitoring-basic-auth created
Create a file named kiali.yaml with the following content. Replace placeholder values as needed:
kiali.yaml
apiVersion: kiali.io/v1alpha1
kind: Kiali
metadata:
  name: kiali
  namespace: istio-system
spec:
  server:
    web_port: 443
    web_root: /clusters/${CLUSTER_NAME}/kiali
  auth:
    openid:
      api_proxy: ${PLATFORM_URL}/kubernetes/${CLUSTER_NAME}
      api_proxy_ca_data: ${PLATFORM_CA}
      insecure_skip_verify_tls: true
      issuer_uri: ${OIDC_ISSUER}
      client_id: ${OIDC_CLIENT_ID}
      username_claim: email
    strategy: openid
  deployment:
    view_only_mode: false
    replicas: 1
    resources:
      requests:
        cpu: "100m"
        memory: "64Mi"
      limits:
        memory: "1Gi"
    ingress:
      enabled: true
      class_name: ${ALB_CLASS_NAME}
  external_services:
    grafana:
      enabled: false # Grafana is no longer bundled in ACP, so it is disabled by default
    prometheus:
      # Only required in a multi-cluster mesh
      query_scope:
        mesh_id: <mesh_id>
      auth:
        type: basic
        username: secret:kiali-monitoring-basic-auth:username
        password: secret:kiali-monitoring-basic-auth:password
        insecure_skip_verify: true
      # Only required when using VictoriaMetrics; remove this field when using Prometheus
      thanos_proxy:
        enabled: true
        retention_period: 30d
        scrape_interval: 60s
      url: ${MONITORING_URL}
  kiali_feature_flags:
    ui_defaults:
      i18n:
        language: en
        show_selector: true
- web_port is the port for accessing the Kiali dashboard.
- web_root is the path under the platform URL for accessing the Kiali dashboard.
- api_proxy points to erebus, which maps ACP user tokens to Kubernetes tokens.
- api_proxy_ca_data is the base64-encoded CA certificate used by erebus.
- issuer_uri is the OIDC issuer URL for dex.
- client_id is the OIDC client ID for dex.
- replicas specifies the number of replicas for the Kiali deployment; it should be at least 2 in production environments.
- class_name is the ingress class name for the Kiali ingress.
- <mesh_id> is required only in a multi-cluster mesh and should match .spec.values.global.meshId in the Istio resource.
- username references the monitoring basic-auth username stored in the kiali-monitoring-basic-auth Secret.
- password references the monitoring basic-auth password stored in the kiali-monitoring-basic-auth Secret.
- thanos_proxy is required only when using VictoriaMetrics; remove this field when using Prometheus.
- url is the monitoring endpoint for Prometheus or VictoriaMetrics.
- i18n specifies the default language and whether to show the language selector.
Render the manifest with envsubst and apply the configuration:
# Replace <platform-ca> with the real base64-encoded CA certificate saved previously.
export PLATFORM_CA=<platform-ca>
envsubst < kiali.yaml | kubectl apply -f -
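To see what the substitution step does, here is a minimal, self-contained sketch using a shell heredoc, which expands ${VAR} placeholders the same way envsubst does for exported variables; the value demo-cluster is a stand-in, and the other variables behave the same way.

```shell
# Stand-in demonstration: ${CLUSTER_NAME} is replaced with the exported
# value, everything else in the manifest passes through unchanged.
export CLUSTER_NAME=demo-cluster
cat <<EOF
web_root: /clusters/${CLUSTER_NAME}/kiali
EOF
# prints: web_root: /clusters/demo-cluster/kiali
```

If any placeholder survives in the rendered output, the corresponding variable was not exported before running envsubst.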
Access the Kiali console:
When the Kiali resource is ready, access the Kiali dashboard at <platform-url>/clusters/<cluster>/kiali.
Integrating distributed tracing platform with Alauda Build of Kiali
After integration with a distributed tracing platform, the Alauda Build of Kiali enables visualization of request traces directly in the Kiali console. These traces provide insight into inter-service communication within the service mesh and can help identify latency, failures, or bottlenecks in request paths.
This capability supports the analysis of request flow behavior, aiding in root cause identification and performance optimization across services in the mesh.
Prerequisites
- Alauda Service Mesh is installed.
- Distributed tracing platform such as Alauda Build of Jaeger is installed and successfully configured.
Procedure
- Update the Kiali resource spec configuration for tracing:
Example Kiali resource spec configuration for tracing:
spec:
  external_services:
    tracing:
      # Only required in a multi-cluster mesh
      query_scope:
        istio.mesh_id: <mesh_id>
      enabled: true
      provider: jaeger
      use_grpc: true
      internal_url: "http://jaeger-prod-query.cpaas-system:16685/jaeger"
      # (Optional) Public-facing URL of Jaeger
      # external_url: "http://my-jaeger-host/jaeger"
      # When external_url is not defined, disable_version_check should be set to true
      disable_version_check: true
- <mesh_id> is required only in a multi-cluster mesh and should match .spec.values.global.meshId in the Istio resource.
- enabled specifies whether tracing is enabled.
- provider specifies the tracing provider (jaeger or tempo).
- internal_url specifies the internal URL for the Jaeger API.
- Save the updated spec in kiali_cr.yaml.
- Run the following command to apply the configuration:
kubectl -nistio-system patch kiali kiali --type merge -p "$(cat kiali_cr.yaml)"
Example output:
kiali.kiali.io/kiali patched
Verification
- Navigate to the Kiali UI.
- Navigate to a workload's Traces tab to view traces in the Kiali UI.