Install

The platform's logging system consists of two plugins: Alauda Container Platform Log Collector and Alauda Container Platform Log Storage. This chapter describes how to install both plugins.

WARNING
  1. The global cluster can query log data stored on any workload cluster within the platform. Ensure that the global cluster can reach port 11780 on the workload cluster (see the connectivity check below).

  2. The Alauda Container Platform Log Storage with Clickhouse plugin requires the ClickHouse operator. Before installing the plugin, ensure that the ClickHouse operator has been uploaded to the cluster.
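
For a quick connectivity check, run something like the following from a node in the global cluster. This is a minimal sketch: it assumes nc (netcat) is available on the node, and <workload-node-ip> is a placeholder for the address of a workload cluster node.

# Probe port 11780 on a workload cluster node from the global cluster.
# Prints "succeeded" and exits 0 if the port is reachable.
nc -zv <workload-node-ip> 11780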

Installation Planning

The Alauda Container Platform Log Storage plugins can be installed in any cluster, and log collection in any cluster can be pointed at any cluster's log storage component as its storage backend.

Therefore, before installing the log storage plugin, plan which cluster and nodes the log storage components will run on.

  • Avoid deploying log storage plugins in the global cluster. Instead, deploy them in workload clusters to ensure management cluster failures do not disrupt log-based issue resolution.

  • Prioritize centralizing logs to a single log storage cluster. If log volume exceeds maximum capacity thresholds, distribute logs across multiple storage clusters.

  • Deploy at least one log storage instance per network zone to aggregate logs locally, minimizing cross-data-center public network traffic (which incurs high costs and latency).

  • Dedicate exclusive nodes to log storage; avoid co-deploying it with other applications or platform components. Log storage requires high I/O throughput and is sensitive to interference from co-located workloads (see the node-isolation sketch after this list).

  • Mount dedicated SSD disks for log storage to significantly enhance performance.
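
One way to reserve nodes is to taint them so that only workloads with a matching toleration are scheduled there. This is an illustrative sketch using standard kubectl; the dedicated=log-storage key/value is an assumption, not a platform convention, and <node-name> is a placeholder.

# Prevent ordinary workloads from scheduling onto the log storage node.
kubectl taint nodes <node-name> dedicated=log-storage:NoSchedule
# Label the node so log storage components can be steered to it via node selection.
kubectl label nodes <node-name> dedicated=log-storage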

Install Alauda Container Platform Log Storage with ElasticSearch via console

  1. Navigate to App Store Management > Cluster Plugin and select the target cluster.

  2. In the Plugins tab, click the action button to the right of Alauda Container Platform Log Storage with ElasticSearch > Install.

  3. Refer to the following instructions to configure relevant parameters.

    Parameter | Description
    Connect External Elasticsearch | Keep this switch off to install the log storage components within the platform.
    Component Installation Settings | LocalVolume: log data is stored in a local storage path on the selected nodes. The log components are bound directly to local storage, which avoids accessing storage over the network and provides better storage performance.
      StorageClass: storage resources are created dynamically from a storage class. This is more flexible: when multiple storage classes are defined for the cluster, administrators can select the appropriate storage class for the log components per usage scenario, reducing the impact of host failures on storage. However, StorageClass performance may be affected by network bandwidth and latency, and high availability relies on the redundancy mechanisms of the storage backend.
    Retention Period | The maximum time that logs, events, and audit data are retained on the cluster. Data exceeding the retention period is cleaned up automatically.
      Tip: you may back up data that needs to be retained long-term. If you need assistance, contact technical support.
  4. Click Install.

Install Alauda Container Platform Log Storage with ElasticSearch via YAML

1. Check available versions

Ensure the plugin has been published by checking for its ModulePlugin and ModuleConfig resources in the global cluster:


# kubectl get moduleplugin | grep logcenter
logcenter                       30h
# kubectl get moduleconfig | grep logcenter
logcenter-v4.1.0                30h

This indicates that the ModulePlugin logcenter exists in the cluster and version v4.1.0 is published.
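
To confirm the published version programmatically, you can read it from the ModuleConfig. A sketch, assuming the resource exposes the version at .spec.version (as referenced in the field table below):

kubectl get moduleconfig logcenter-v4.1.0 -o jsonpath='{.spec.version}'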

2. Create a ModuleInfo

Create a ModuleInfo resource to install the plugin; the spec.config section carries the same parameters you would otherwise set in the console:

apiVersion: cluster.alauda.io/v1alpha1
kind: ModuleInfo
metadata:
  annotations:
    cpaas.io/display-name: logcenter
    cpaas.io/module-name: '{"en": "Alauda Container Platform Log Storage for Elasticsearch", "zh": "Alauda Container Platform Log Storage for Elasticsearch"}'
  labels:
    cpaas.io/cluster-name: go
    cpaas.io/module-name: logcenter
    cpaas.io/module-type: plugin
    cpaas.io/product: Platform-Center
  name: <cluster>-log-center
spec:
  config:
    clusterView:
      isPrivate: "true"
    components:
      elasticsearch:
        address: ""
        basicAuthSecretName: ""
        hostpath: /cpaas/data/elasticsearch
        httpPort: 9200
        install: true
        k8sNodes:
        - 192.168.139.75
        masterK8sNodes: []
        masterReplicas: 0
        masterResources:
          limits:
            cpu: "2"
            memory: 4Gi
          requests:
            cpu: 200m
            memory: 256Mi
        masterStorageSize: 5
        nodeReplicas: 1
        nodeStorageSize: 200
        resources:
          limits:
            cpu: "4"
            memory: 4Gi
          requests:
            cpu: "1"
            memory: 1Gi
        tcpPort: 9300
        type: single
      kafka:
        address: ""
        auth: true
        basicAuthSecretName: ""
        exporterPort: 9308
        install: true
        k8sNodes:
        - 192.168.139.75
        port: 9092
        storageSize: 10
        tls: true
        zkElectPort: 3888
        zkExporterPort: 9141
        zkLeaderPort: 2888
        zkPort: 2181
      kibana:
        install: false
      storageClassConfig:
        name: elasticsearch-local-log-sc
        type: LocalVolume
      zookeeper:
        storageSize: 1
    ttl:
      audit: 180
      event: 180
      logKubernetes: 7
      logPlatform: 7
      logSystem: 7
      logWorkload: 7
  version: v4.1.0
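
Assuming the manifest above is saved as logcenter-moduleinfo.yaml (an illustrative filename), apply it against the global cluster:

kubectl apply -f logcenter-moduleinfo.yaml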

YAML field reference:

Field path | Description
metadata.labels.cpaas.io/cluster-name | Target cluster name where the plugin is installed.
metadata.name | Temporary ModuleInfo name; the platform will rename it after creation.
spec.config.clusterView.isPrivate | Visibility setting for cluster view.
spec.config.components.elasticsearch.address | External Elasticsearch address; leave empty to use platform-installed Elasticsearch.
spec.config.components.elasticsearch.basicAuthSecretName | Secret name for external Elasticsearch basic auth; leave empty for platform Elasticsearch.
spec.config.components.elasticsearch.hostpath | Data path for Elasticsearch.
spec.config.components.elasticsearch.httpPort | Elasticsearch HTTP port, default 9200.
spec.config.components.elasticsearch.install | Whether to install Elasticsearch via the platform; set to false when using external Elasticsearch.
spec.config.components.elasticsearch.k8sNodes | Node IP list for Elasticsearch Data when using LocalVolume.
spec.config.components.elasticsearch.masterK8sNodes | Node IP list for Elasticsearch Master (large scale with LocalVolume only).
spec.config.components.elasticsearch.masterReplicas | Replica count for Elasticsearch Master (large scale only).
spec.config.components.elasticsearch.masterResources | Resource requests/limits for Elasticsearch Master (large scale only).
spec.config.components.elasticsearch.masterStorageSize | Storage size for Elasticsearch Master (large scale only).
spec.config.components.elasticsearch.nodeReplicas | Replica count for Elasticsearch Data.
spec.config.components.elasticsearch.nodeStorageSize | Storage size for Elasticsearch Data (Gi).
spec.config.components.elasticsearch.resources | Resource requests/limits for Elasticsearch Data.
spec.config.components.elasticsearch.tcpPort | Internal transport port for the Elasticsearch cluster, default 9300.
spec.config.components.elasticsearch.type | Elasticsearch cluster size: single, normal, or big.
spec.config.components.kafka.address | External Kafka address; leave empty to use platform-installed Kafka.
spec.config.components.kafka.auth | Enable Kafka authentication, default true.
spec.config.components.kafka.basicAuthSecretName | Secret name for external Kafka auth; leave empty for platform Kafka.
spec.config.components.kafka.exporterPort | Kafka Exporter port, default 9308.
spec.config.components.kafka.install | Whether to install Kafka via the platform; set to false when using external Kafka.
spec.config.components.kafka.k8sNodes | Node IP list for Kafka when using LocalVolume.
spec.config.components.kafka.port | Kafka exposed port, default 9092.
spec.config.components.kafka.storageSize | Kafka storage size (Gi).
spec.config.components.kafka.tls | Enable TLS for Kafka, default true.
spec.config.components.kafka.zkElectPort | Zookeeper election port, default 3888.
spec.config.components.kafka.zkExporterPort | Zookeeper Exporter port, default 9141.
spec.config.components.kafka.zkLeaderPort | Zookeeper leader/follower communication port, default 2888.
spec.config.components.kafka.zkPort | Zookeeper client port, default 2181.
spec.config.components.kibana.install | Whether to install Kibana; Kibana is deprecated, set to false.
spec.config.components.storageClassConfig.name | For LocalVolume, typically elasticsearch-local-log-sc; for StorageClass, set to the class name.
spec.config.components.storageClassConfig.type | Storage type: LocalVolume or StorageClass.
spec.config.components.zookeeper.storageSize | Zookeeper storage size (Gi).
spec.config.ttl.audit | Retention days for audit data.
spec.config.ttl.event | Retention days for event data.
spec.config.ttl.logKubernetes | Retention days for Kubernetes logs.
spec.config.ttl.logPlatform | Retention days for platform logs.
spec.config.ttl.logSystem | Retention days for system logs.
spec.config.ttl.logWorkload | Retention days for workload logs.
spec.version | Plugin version to install; must match .spec.version in the ModuleConfig.
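
For example, to have the plugin provision storage from an existing storage class instead of local volumes, the storageClassConfig stanza would look like this (a sketch; <storage-class-name> is a placeholder for a StorageClass that exists in the cluster):

    components:
      storageClassConfig:
        # StorageClass mode: volumes are provisioned dynamically from the named class.
        type: StorageClass
        name: <storage-class-name>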

3. Verify installation

Since the ModuleInfo name changes upon creation, locate the resource via label to check the plugin status and version:

kubectl get moduleinfo -l cpaas.io/module-name=logcenter
NAME                                             CLUSTER         MODULE      DISPLAY_NAME   STATUS    TARGET_VERSION   CURRENT_VERSION   NEW_VERSION
global-e671599464a5b1717732c5ba36079795          global          logcenter   logcenter      Running   v4.0.12          v4.0.12           v4.0.12

Field explanations:

  • NAME: ModuleInfo resource name
  • CLUSTER: Cluster where the plugin is installed
  • MODULE: Plugin name
  • DISPLAY_NAME: Display name of the plugin
  • STATUS: Installation status; Running means successfully installed and running
  • TARGET_VERSION: Intended installation version
  • CURRENT_VERSION: Version before installation
  • NEW_VERSION: Latest available version for installation
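
Installation takes a few minutes. To follow the status transition until STATUS reaches Running, watch the same label selector:

kubectl get moduleinfo -l cpaas.io/module-name=logcenter -w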

Install Alauda Container Platform Log Storage with Clickhouse via console

  1. Navigate to App Store Management > Cluster Plugin and select the target cluster.

  2. In the Plugins tab, click the action button to the right of Alauda Container Platform Log Storage with Clickhouse > Install.

  3. Refer to the following instructions to configure relevant parameters.

    Parameter | Description
    Component Installation Settings | LocalVolume: log data is stored in a local storage path on the selected nodes. The log components are bound directly to local storage, which avoids accessing storage over the network and provides better storage performance.
      StorageClass: storage resources are created dynamically from a storage class. This is more flexible: when multiple storage classes are defined for the cluster, administrators can select the appropriate storage class for the log components per usage scenario, reducing the impact of host failures on storage. However, StorageClass performance may be affected by network bandwidth and latency, and high availability relies on the redundancy mechanisms of the storage backend.
    Retention Period | The maximum time that logs, events, and audit data are retained on the cluster. Data exceeding the retention period is cleaned up automatically.
      Tip: you may back up data that needs to be retained long-term. If you need assistance, contact technical support.
  4. Click Install.

Install Alauda Container Platform Log Storage with Clickhouse via YAML

1. Check available versions

Ensure the plugin has been published by checking for its ModulePlugin and ModuleConfig resources in the global cluster:


# kubectl get moduleplugin | grep logclickhouse
logclickhouse                       30h
# kubectl get moduleconfig | grep logclickhouse
logclickhouse-v4.1.0                30h

This indicates that the ModulePlugin logclickhouse exists in the cluster and version v4.1.0 is published.
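
Per the warning at the top of this chapter, the ClickHouse operator must already be present in the target cluster before the plugin is installed. A quick, non-authoritative check is to look for the CRDs the operator registers:

# An empty result suggests the ClickHouse operator has not been uploaded/installed.
kubectl get crd | grep -i clickhouse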

2. Create a ModuleInfo

Create a ModuleInfo resource to install the plugin; the spec.config section carries the same parameters you would otherwise set in the console:

apiVersion: cluster.alauda.io/v1alpha1
kind: ModuleInfo
metadata:
  name: global-logclickhouse
  labels:
    cpaas.io/cluster-name: global
    cpaas.io/module-name: logclickhouse
    cpaas.io/module-type: plugin
spec:
  version: v4.1.0
  config:
    components:
      storageClassConfig:
        type: LocalVolume
        name: ""
      clickhouse:
        resources:
          limits:
            cpu: "2"
            memory: 4Gi
          requests:
            cpu: 200m
            memory: 256Mi
        k8sNodes:
          - xxx.xxx.xxx.xx
        hostpath: /cpaas/data/clickhouse
        nodeReplicas: 1
        nodeStorageSize: 200
        type: single
      razor:
        resources:
          limits:
            cpu: "2"
            memory: 1Gi
          requests:
            cpu: 10m
            memory: 256Mi
      vector:
        resources:
          limits:
            cpu: "4"
            memory: 1Gi
          requests:
            cpu: 10m
            memory: 256Mi
    ttl:
      audit: 180
      event: 180
      logKubernetes: 7
      logPlatform: 7
      logSystem: 7
      logWorkload: 7
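
As with the Elasticsearch variant, save the manifest (here logclickhouse-moduleinfo.yaml, an illustrative filename) and apply it against the global cluster; a server-side dry run first will surface schema errors without creating anything:

kubectl apply --dry-run=server -f logclickhouse-moduleinfo.yaml
kubectl apply -f logclickhouse-moduleinfo.yaml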

YAML field reference (ClickHouse):

Field path | Description
metadata.name | ModuleInfo name. Recommended format: <target-cluster>-logclickhouse.
metadata.labels.cpaas.io/cluster-name | Target cluster where the plugin is installed.
metadata.labels.cpaas.io/module-name | Must be logclickhouse.
metadata.labels.cpaas.io/module-type | Must be plugin.
spec.version | Plugin version to install; must match .spec.version in the ModuleConfig.
spec.config.components.storageClassConfig.type | Storage type for ClickHouse data: LocalVolume or StorageClass.
spec.config.components.storageClassConfig.name | StorageClass name when type is StorageClass; keep empty for LocalVolume.
spec.config.components.clickhouse.resources | Resource requests/limits for ClickHouse.
spec.config.components.clickhouse.k8sNodes | Node IP list for ClickHouse when using LocalVolume.
spec.config.components.clickhouse.hostpath | Local path for ClickHouse data when using LocalVolume.
spec.config.components.clickhouse.nodeReplicas | Replica count when using StorageClass.
spec.config.components.clickhouse.nodeStorageSize | Storage size for ClickHouse data (Gi).
spec.config.components.clickhouse.type | Cluster size: single, normal, or big.
spec.config.components.razor.resources | Resource requests/limits for Razor.
spec.config.components.vector.resources | Resource requests/limits for Vector.
spec.config.ttl.audit | Retention days for audit data.
spec.config.ttl.event | Retention days for event data.
spec.config.ttl.logKubernetes | Retention days for Kubernetes logs.
spec.config.ttl.logPlatform | Retention days for platform logs.
spec.config.ttl.logSystem | Retention days for system logs.
spec.config.ttl.logWorkload | Retention days for workload logs.

3. Verify installation

Since the ModuleInfo name changes upon creation, locate the resource via label to check the plugin status and version:

kubectl get moduleinfo -l cpaas.io/module-name=logclickhouse
NAME                                      CLUSTER   MODULE          DISPLAY_NAME    STATUS    TARGET_VERSION   CURRENT_VERSION   NEW_VERSION
global-e671599464a5b1717732c5ba36079795   global    logclickhouse   logclickhouse   Running   v4.0.12          v4.0.12           v4.0.12

Field explanations:

  • NAME: ModuleInfo resource name
  • CLUSTER: Cluster where the plugin is installed
  • MODULE: Plugin name
  • DISPLAY_NAME: Display name of the plugin
  • STATUS: Installation status; Running means successfully installed and running
  • TARGET_VERSION: Intended installation version
  • CURRENT_VERSION: Version before installation
  • NEW_VERSION: Latest available version for installation

Install Alauda Container Platform Log Collector via console

  1. Navigate to App Store Management > Cluster Plugin and select the target cluster.

  2. In the Plugins tab, click the action button to the right of Alauda Container Platform Log Collector > Install.

  3. Select the Storage Cluster (where Alauda Container Platform Log Storage has been installed) and click Select/Deselect log types to set the scope of log collection in the cluster.

  4. Click Install.

Install Alauda Container Platform Log Collector via YAML

1. Check available versions

Ensure the plugin has been published by checking for its ModulePlugin and ModuleConfig resources in the global cluster:


# kubectl get moduleplugin | grep logagent
logagent                       30h
# kubectl get moduleconfig | grep logagent
logagent-v4.1.0                30h

This indicates that the ModulePlugin logagent exists in the cluster and version v4.1.0 is published.

2. Create a ModuleInfo

Create a ModuleInfo resource to install the plugin; the spec.config section carries the same parameters you would otherwise set in the console:

apiVersion: cluster.alauda.io/v1alpha1
kind: ModuleInfo
metadata:
  annotations:
    cpaas.io/display-name: logagent
    cpaas.io/module-name: '{"en": "Alauda Container Platform Log Collector", "zh": "Alauda Container Platform Log Collector"}'
  labels:
    cpaas.io/cluster-name: go
    cpaas.io/module-name: logagent
    cpaas.io/module-type: plugin
    cpaas.io/product: Platform-Center
    logcenter.plugins.cpaas.io/cluster: go
  name: <cluster>-log-agent
spec:
  config:
    crossClusterDependency:
      logcenter: go
      logclickhouse: null
    dataSource:
      audit: true
      event: true
      kubernetes: true
      platform: false
      system: false
      workload: true
    storage:
      type: Elasticsearch
  version: v4.1.0
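
The example above ships logs to an Elasticsearch-based storage cluster named go. To target a ClickHouse storage cluster instead, swap the crossClusterDependency fields and the storage type (a sketch derived from the field reference below; <clickhouse-cluster> is a placeholder for the storage cluster name):

spec:
  config:
    crossClusterDependency:
      logcenter: null                      # not used with ClickHouse storage
      logclickhouse: <clickhouse-cluster>  # name of the ClickHouse storage cluster
    storage:
      type: Clickhouse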

YAML field reference (Log Collector):

Field path | Description
metadata.annotations.cpaas.io/display-name | Plugin display name.
metadata.annotations.cpaas.io/module-name | Plugin i18n name JSON string.
metadata.labels.cpaas.io/cluster-name | Target cluster where the plugin is installed.
metadata.labels.cpaas.io/module-name | Must be logagent.
metadata.labels.cpaas.io/module-type | Must be plugin.
metadata.labels.cpaas.io/product | Product identifier, typically Platform-Center.
metadata.labels.logcenter.plugins.cpaas.io/cluster | Storage cluster name to which logs are pushed.
metadata.name | Temporary ModuleInfo name; the platform will rename it after creation.
spec.config.crossClusterDependency.logcenter | Name of the Elasticsearch-based log storage cluster.
spec.config.crossClusterDependency.logclickhouse | Set to null when using Elasticsearch storage; otherwise set to the ClickHouse cluster name.
spec.config.dataSource.audit | Collect audit logs.
spec.config.dataSource.event | Collect event logs.
spec.config.dataSource.kubernetes | Collect Kubernetes logs.
spec.config.dataSource.platform | Collect platform logs.
spec.config.dataSource.system | Collect system logs.
spec.config.dataSource.workload | Collect workload logs.
spec.config.storage.type | Storage type: Elasticsearch or Clickhouse.
spec.version | Plugin version to install.
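
To confirm which log types the collector was configured to collect, you can read the dataSource block back from the applied resource (the field path matches the manifest above):

kubectl get moduleinfo -l cpaas.io/module-name=logagent -o jsonpath='{.items[0].spec.config.dataSource}'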

3. Verify installation

Since the ModuleInfo name changes upon creation, locate the resource via label to check the plugin status and version:

kubectl get moduleinfo -l cpaas.io/module-name=logagent
NAME                                      CLUSTER   MODULE     DISPLAY_NAME   STATUS    TARGET_VERSION   CURRENT_VERSION   NEW_VERSION
global-e671599464a5b1717732c5ba36079795   global    logagent   logagent       Running   v4.0.12          v4.0.12           v4.0.12

The columns have the same meanings as in the verification steps above.