The platform's logging system consists of two plugins: Alauda Container Platform Log Collector and Alauda Container Platform Log Storage. This chapter describes how to install both plugins.
The `global` cluster can query log data stored on any workload cluster within the platform. Ensure that the `global` cluster can access port 11780 of each workload cluster.
The Alauda Container Platform Log Storage with Clickhouse plugin depends on the ClickHouse operator. Before installing the plugin, make sure the ClickHouse operator has been uploaded to the cluster.
The Alauda Container Platform Log Storage plugin can be installed in any cluster, and log collection in any cluster can select any installed log storage component as its storage backend.
Therefore, before installing the log storage plugin, plan the cluster and nodes where the log storage component will be deployed.
Avoid deploying log storage plugins in the global cluster. Instead, deploy them in workload clusters to ensure management cluster failures do not disrupt log-based issue resolution.
Prioritize centralizing logs to a single log storage cluster. If log volume exceeds maximum capacity thresholds, distribute logs across multiple storage clusters.
Deploy at least one log storage instance per network zone to aggregate logs locally, minimizing cross-data-center public network traffic (which incurs high costs and latency).
Dedicate exclusive nodes to log storage and avoid co-deploying it with other applications or platform components. Log storage requires high I/O throughput and is sensitive to interference from co-located workloads.
Mount dedicated SSD disks for log storage to significantly enhance performance.
Navigate to App Store Management > Cluster Plugin and select the target cluster.
In the Plugins tab, click the action button to the right of Alauda Container Platform Log Storage with ElasticSearch > Install.
Refer to the following instructions to configure relevant parameters.
Parameter | Description |
---|---|
Connect External Elasticsearch | Keep this switch off to install the platform-provided log storage; enable it only when connecting an external Elasticsearch cluster. |
Component installation Settings | **LocalVolume**: Local storage; log data is stored in the local storage path of the selected nodes. The advantage of this method is that the log component is directly bound to local storage, eliminating the need to access storage over the network and providing better storage performance. **StorageClass**: Dynamically create storage resources using storage classes to store log data. The advantage of this method is greater flexibility; when multiple storage classes are defined for the cluster, administrators can select the appropriate storage class for the log components based on usage scenarios, reducing the impact of host malfunction on storage. However, the performance of StorageClass may be affected by factors such as network bandwidth and latency, and it relies on the redundancy mechanisms provided by the storage backend to achieve high availability. |
Retention Period | The maximum time logs, events, and audit data can be retained on the cluster. Data exceeding the retention period will be automatically cleaned up. Tip: You may back up data that needs to be retained for a long time. If you need assistance, please contact technical support personnel. |
Click Install.
Ensure the plugin has been published by checking for the ModulePlugin and ModuleConfig resources in the `global` cluster:
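The original command listing is not reproduced here; as a minimal sketch, assuming kubectl access to the `global` cluster, you can check with:

```bash
# Run against the global cluster
kubectl get moduleplugin logcenter
# The published version appears in the matching ModuleConfig entry
kubectl get moduleconfig | grep logcenter
```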
This indicates that the ModulePlugin `logcenter` exists in the cluster and version `v4.1.0` is published.
Create a ModuleInfo resource to install the plugin without any configuration parameters:
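For illustration, here is a minimal ModuleInfo sketch for an Elasticsearch-based installation using LocalVolume. The apiVersion, cluster name, node IPs, sizes, and retention values are assumptions and must be adapted to your environment; see the field reference below for the full list of fields.

```yaml
# Sketch only — apiVersion and all values are assumptions; verify the CRD group with:
#   kubectl api-resources | grep -i moduleinfo
apiVersion: cluster.alauda.io/v1alpha1
kind: ModuleInfo
metadata:
  name: cluster1-logcenter              # temporary name; the platform renames it after creation
  labels:
    cpaas.io/cluster-name: cluster1     # target cluster for the plugin
    cpaas.io/module-name: logcenter
    cpaas.io/module-type: plugin
spec:
  version: v4.1.0                       # must match .spec.version in ModuleConfig
  config:
    components:
      storageClassConfig:
        type: LocalVolume               # or StorageClass
        name: elasticsearch-local-log-sc
      elasticsearch:
        install: true                   # false when connecting an external Elasticsearch
        type: single                    # single / normal / big
        k8sNodes:
          - 192.168.1.10                # Data node IPs (LocalVolume only)
        nodeReplicas: 1
        nodeStorageSize: 200            # Gi
        httpPort: 9200
        tcpPort: 9300
      kafka:
        install: true
        auth: true
        tls: true
        storageSize: 10                 # Gi
      kibana:
        install: false                  # Kibana is deprecated
    ttl:
      audit: 180                        # retention in days; illustrative values
      event: 180
      logKubernetes: 7
      logPlatform: 7
      logSystem: 7
      logWorkload: 7
```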
YAML field reference:
Field path | Description |
---|---|
metadata.labels.cpaas.io/cluster-name | Target cluster name where the plugin is installed. |
metadata.name | Temporary ModuleInfo name; the platform will rename it after creation. |
spec.config.clusterView.isPrivate | Visibility setting for cluster view. |
spec.config.components.elasticsearch.address | External Elasticsearch address; leave empty to use platform-installed Elasticsearch. |
spec.config.components.elasticsearch.basicAuthSecretName | Secret name for external Elasticsearch basic auth; leave empty for platform Elasticsearch. |
spec.config.components.elasticsearch.hostpath | Data path for Elasticsearch. |
spec.config.components.elasticsearch.httpPort | Elasticsearch HTTP port, default 9200. |
spec.config.components.elasticsearch.install | Whether to install Elasticsearch via platform; set to false when using external Elasticsearch. |
spec.config.components.elasticsearch.k8sNodes | Node IP list for Elasticsearch Data when using LocalVolume. |
spec.config.components.elasticsearch.masterK8sNodes | Node IP list for Elasticsearch Master (large scale with LocalVolume only). |
spec.config.components.elasticsearch.masterReplicas | Replica count for Elasticsearch Master (large scale only). |
spec.config.components.elasticsearch.masterResources | Resource requests/limits for Elasticsearch Master (large scale only). |
spec.config.components.elasticsearch.masterStorageSize | Storage size for Elasticsearch Master (large scale only). |
spec.config.components.elasticsearch.nodeReplicas | Replica count for Elasticsearch Data. |
spec.config.components.elasticsearch.nodeStorageSize | Storage size for Elasticsearch Data (Gi). |
spec.config.components.elasticsearch.resources | Resource requests/limits for Elasticsearch Data. |
spec.config.components.elasticsearch.tcpPort | Internal transport port for Elasticsearch cluster, default 9300. |
spec.config.components.elasticsearch.type | Elasticsearch cluster size: single/normal/big. |
spec.config.components.kafka.address | External Kafka address; leave empty to use platform-installed Kafka. |
spec.config.components.kafka.auth | Enable Kafka authentication, default true. |
spec.config.components.kafka.basicAuthSecretName | Secret name for external Kafka auth; leave empty for platform Kafka. |
spec.config.components.kafka.exporterPort | Kafka Exporter port, default 9308. |
spec.config.components.kafka.install | Whether to install Kafka via platform; set to false when using external Kafka. |
spec.config.components.kafka.k8sNodes | Node IP list for Kafka when using LocalVolume. |
spec.config.components.kafka.port | Kafka exposed port, default 9092. |
spec.config.components.kafka.storageSize | Kafka storage size (Gi). |
spec.config.components.kafka.tls | Enable TLS for Kafka, default true. |
spec.config.components.kafka.zkElectPort | Zookeeper election port, default 3888. |
spec.config.components.kafka.zkExporterPort | Zookeeper Exporter port, default 9141. |
spec.config.components.kafka.zkLeaderPort | Zookeeper leader/follower communication port, default 2888. |
spec.config.components.kafka.zkPort | Zookeeper client port, default 2181. |
spec.config.components.kibana.install | Whether to install Kibana; Kibana is deprecated, set to false. |
spec.config.components.storageClassConfig.name | For LocalVolume, typically `elasticsearch-local-log-sc`; for StorageClass, set to the class name. |
spec.config.components.storageClassConfig.type | Storage type: LocalVolume/StorageClass. |
spec.config.components.zookeeper.storageSize | Zookeeper storage size (Gi). |
spec.config.ttl.audit | Retention days for audit data. |
spec.config.ttl.event | Retention days for event data. |
spec.config.ttl.logKubernetes | Retention days for Kubernetes logs. |
spec.config.ttl.logPlatform | Retention days for platform logs. |
spec.config.ttl.logSystem | Retention days for system logs. |
spec.config.ttl.logWorkload | Retention days for workload logs. |
spec.version | Specifies the plugin version to install, must match .spec.version in ModuleConfig. |
Since the ModuleInfo name changes upon creation, locate the resource via label to check the plugin status and version:
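For example, a minimal lookup (a sketch, assuming kubectl access to the `global` cluster; replace `<target-cluster>` with the cluster where the plugin was installed):

```bash
# Lists the ModuleInfo resources for the target cluster; pick the logcenter entry
kubectl get moduleinfo -l cpaas.io/cluster-name=<target-cluster> | grep logcenter
```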
Field explanations:

- `NAME`: ModuleInfo resource name
- `CLUSTER`: Cluster where the plugin is installed
- `MODULE`: Plugin name
- `DISPLAY_NAME`: Display name of the plugin
- `STATUS`: Installation status; `Running` means successfully installed and running
- `TARGET_VERSION`: Intended installation version
- `CURRENT_VERSION`: Version before installation
- `NEW_VERSION`: Latest available version for installation

Navigate to App Store Management > Cluster Plugin and select the target cluster.
In the Plugins tab, click the action button to the right of Alauda Container Platform Log Storage with Clickhouse > Install.
Refer to the following instructions to configure relevant parameters.
Parameter | Description |
---|---|
Component installation Settings | **LocalVolume**: Local storage; log data is stored in the local storage path of the selected nodes. The advantage of this method is that the log component is directly bound to local storage, eliminating the need to access storage over the network and providing better storage performance. **StorageClass**: Dynamically create storage resources using storage classes to store log data. The advantage of this method is greater flexibility; when multiple storage classes are defined for the cluster, administrators can select the appropriate storage class for the log components based on usage scenarios, reducing the impact of host malfunction on storage. However, the performance of StorageClass may be affected by factors such as network bandwidth and latency, and it relies on the redundancy mechanisms provided by the storage backend to achieve high availability. |
Retention Period | The maximum time logs, events, and audit data can be retained on the cluster. Data exceeding the retention period will be automatically cleaned up. Tip: You may back up data that needs to be retained for a long time. If you need assistance, please contact technical support personnel. |
Click Install.
Ensure the plugin has been published by checking for the ModulePlugin and ModuleConfig resources in the `global` cluster:
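As with the Elasticsearch plugin, a minimal check (a sketch, assuming kubectl access to the `global` cluster):

```bash
kubectl get moduleplugin logclickhouse
kubectl get moduleconfig | grep logclickhouse
```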
This indicates that the ModulePlugin `logclickhouse` exists in the cluster and version `v4.1.0` is published.
Create a ModuleInfo resource to install the plugin without any configuration parameters:
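A minimal ModuleInfo sketch for a ClickHouse-based installation using LocalVolume follows. The apiVersion, names, node IPs, host path, sizes, and retention values are illustrative assumptions; see the field reference below.

```yaml
# Sketch only — apiVersion and all values are assumptions; adapt before use.
apiVersion: cluster.alauda.io/v1alpha1
kind: ModuleInfo
metadata:
  name: cluster1-logclickhouse          # recommended format: <target-cluster>-logclickhouse
  labels:
    cpaas.io/cluster-name: cluster1     # target cluster for the plugin
    cpaas.io/module-name: logclickhouse
    cpaas.io/module-type: plugin
spec:
  version: v4.1.0                       # must match .spec.version in ModuleConfig
  config:
    components:
      storageClassConfig:
        type: LocalVolume               # or StorageClass (then set name)
        name: ""
      clickhouse:
        type: single                    # single / normal / big
        k8sNodes:
          - 192.168.1.20                # ClickHouse node IPs (LocalVolume only)
        hostpath: /cpaas/data/clickhouse  # assumed local data path
        nodeStorageSize: 200            # Gi
    ttl:
      audit: 180                        # retention in days; illustrative values
      event: 180
      logKubernetes: 7
      logPlatform: 7
      logSystem: 7
      logWorkload: 7
```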
YAML field reference (ClickHouse):
Field path | Description |
---|---|
metadata.name | ModuleInfo name. Recommended format: `<target-cluster>-logclickhouse`. |
metadata.labels.cpaas.io/cluster-name | Target cluster where the plugin is installed. |
metadata.labels.cpaas.io/module-name | Must be `logclickhouse`. |
metadata.labels.cpaas.io/module-type | Must be `plugin`. |
spec.version | Plugin version to install; must match .spec.version in ModuleConfig. |
spec.config.components.storageClassConfig.type | Storage type for ClickHouse data: `LocalVolume` or `StorageClass`. |
spec.config.components.storageClassConfig.name | StorageClass name when type is `StorageClass`; keep empty for `LocalVolume`. |
spec.config.components.clickhouse.resources | Resource requests/limits for ClickHouse. |
spec.config.components.clickhouse.k8sNodes | Node IP list for ClickHouse when using `LocalVolume`. |
spec.config.components.clickhouse.hostpath | Local path for ClickHouse data when using `LocalVolume`. |
spec.config.components.clickhouse.nodeReplicas | Replica count when using `StorageClass`. |
spec.config.components.clickhouse.nodeStorageSize | Storage size for ClickHouse data (Gi). |
spec.config.components.clickhouse.type | Cluster size: `single`, `normal`, or `big`. |
spec.config.components.razor.resources | Resource requests/limits for Razor. |
spec.config.components.vector.resources | Resource requests/limits for Vector. |
spec.config.ttl.audit | Retention days for audit data. |
spec.config.ttl.event | Retention days for event data. |
spec.config.ttl.logKubernetes | Retention days for Kubernetes logs. |
spec.config.ttl.logPlatform | Retention days for platform logs. |
spec.config.ttl.logSystem | Retention days for system logs. |
spec.config.ttl.logWorkload | Retention days for workload logs. |
Since the ModuleInfo name changes upon creation, locate the resource via label to check the plugin status and version:
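For example (a sketch, assuming kubectl access to the `global` cluster):

```bash
# The cpaas.io/module-name label narrows the result to the ClickHouse log storage plugin
kubectl get moduleinfo -l cpaas.io/cluster-name=<target-cluster>,cpaas.io/module-name=logclickhouse
```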
Field explanations:

- `NAME`: ModuleInfo resource name
- `CLUSTER`: Cluster where the plugin is installed
- `MODULE`: Plugin name
- `DISPLAY_NAME`: Display name of the plugin
- `STATUS`: Installation status; `Running` means successfully installed and running
- `TARGET_VERSION`: Intended installation version
- `CURRENT_VERSION`: Version before installation
- `NEW_VERSION`: Latest available version for installation

Navigate to App Store Management > Cluster Plugin and select the target cluster.
In the Plugins tab, click the action button to the right of Alauda Container Platform Log Collector > Install.
Select the Storage Cluster (where Alauda Container Platform Log Storage has been installed) and click Select/Deselect log types to set the scope of log collection in the cluster.
Click Install.
Ensure the plugin has been published by checking for the ModulePlugin and ModuleConfig resources in the `global` cluster:
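A minimal check (a sketch, assuming kubectl access to the `global` cluster):

```bash
kubectl get moduleplugin logagent
kubectl get moduleconfig | grep logagent
```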
This indicates that the ModulePlugin `logagent` exists in the cluster and version `v4.1.0` is published.
Create a ModuleInfo resource to install the plugin without any configuration parameters:
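A minimal ModuleInfo sketch for the log collector follows. The apiVersion, cluster names, and data source selections are assumptions; the i18n display-name annotation is omitted for brevity. See the field reference below.

```yaml
# Sketch only — apiVersion and values are assumptions; adapt before use.
apiVersion: cluster.alauda.io/v1alpha1
kind: ModuleInfo
metadata:
  name: cluster1-logagent                   # temporary name; the platform renames it after creation
  annotations:
    cpaas.io/display-name: logagent
  labels:
    cpaas.io/cluster-name: cluster1         # cluster where the collector is installed
    cpaas.io/module-name: logagent
    cpaas.io/module-type: plugin
    cpaas.io/product: Platform-Center
    logcenter.plugins.cpaas.io/cluster: cluster2   # storage cluster that receives the logs
spec:
  version: v4.1.0
  config:
    crossClusterDependency:
      logcenter: cluster2                   # Elasticsearch-based storage cluster
      logclickhouse: null                   # set to the ClickHouse cluster name instead when using ClickHouse
    dataSource:
      audit: true
      event: true
      kubernetes: true
      platform: true
      system: true
      workload: true
    storage:
      type: Elasticsearch                   # or Clickhouse
```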
YAML field reference (Log Collector):
Field path | Description |
---|---|
metadata.annotations.cpaas.io/display-name | Plugin display name. |
metadata.annotations.cpaas.io/module-name | Plugin i18n name JSON string. |
metadata.labels.cpaas.io/cluster-name | Target cluster where the plugin is installed. |
metadata.labels.cpaas.io/module-name | Must be `logagent`. |
metadata.labels.cpaas.io/module-type | Must be `plugin`. |
metadata.labels.cpaas.io/product | Product identifier, typically `Platform-Center`. |
metadata.labels.logcenter.plugins.cpaas.io/cluster | Storage cluster name to which logs are pushed. |
metadata.name | Temporary ModuleInfo name; the platform will rename it after creation. |
spec.config.crossClusterDependency.logcenter | Name of the Elasticsearch-based log storage cluster. |
spec.config.crossClusterDependency.logclickhouse | Set to null when using Elasticsearch storage; otherwise set to ClickHouse cluster name. |
spec.config.dataSource.audit | Collect audit logs. |
spec.config.dataSource.event | Collect event logs. |
spec.config.dataSource.kubernetes | Collect Kubernetes logs. |
spec.config.dataSource.platform | Collect platform logs. |
spec.config.dataSource.system | Collect system logs. |
spec.config.dataSource.workload | Collect workload logs. |
spec.config.storage.type | `Elasticsearch` or `Clickhouse`. |
spec.version | Plugin version to install. |
Since the ModuleInfo name changes upon creation, locate the resource via label to check the plugin status and version:
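For example (a sketch, assuming kubectl access to the `global` cluster):

```bash
# The cpaas.io/module-name label narrows the result to the log collector plugin
kubectl get moduleinfo -l cpaas.io/cluster-name=<target-cluster>,cpaas.io/module-name=logagent
```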