Logs generated by the platform are stored in the log storage component, but the retention period is relatively short. Enterprises with strict compliance requirements typically need to retain logs for longer to meet audit demands, and the cost of storage is another key concern.
For these scenarios, the platform provides a log archiving solution that transfers logs to external NFS or object storage.

To archive logs to NFS, prepare the following resources:
Resource | Description |
---|---|
NFS | Set up the NFS service in advance and determine the NFS path to be mounted. |
Kafka | Obtain the Kafka service address in advance. |
Image Address | Use the CLI tool in the global cluster to run the following commands (also shown after this table). Get the alpine image address: `kubectl get daemonset nevermore -n cpaas-system -o jsonpath='{.spec.template.spec.initContainers[0].image}'`; get the razor image address: `kubectl get deployment razor -n cpaas-system -o jsonpath='{.spec.template.spec.containers[0].image}'` |
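For convenience, the two commands from the table can be run as-is in the global cluster's CLI tool:

```shell
# Alpine image address (from the nevermore DaemonSet):
kubectl get daemonset nevermore -n cpaas-system -o jsonpath='{.spec.template.spec.initContainers[0].image}'

# Razor image address (from the razor Deployment):
kubectl get deployment razor -n cpaas-system -o jsonpath='{.spec.template.spec.containers[0].image}'
```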
Click Cluster Management > Clusters in the left navigation bar.
Click the action button on the right side of the cluster to which the logs will be transferred, then select CLI Tool.
Modify the YAML according to the parameter descriptions below (an illustrative sketch follows the table), paste the modified code into the opened CLI Tool command line, and press Enter to execute it.
Resource Type | Field Path | Description |
---|---|---|
ConfigMap | data.export.yml.output.compression | Compression applied to archived log files; supported values are none (no compression), zlib, and gzip. |
ConfigMap | data.export.yml.output.file_type | File type of the exported logs; supports txt, csv, and json. |
ConfigMap | data.export.yml.output.max_size | Maximum size of a single archive file, in MB. When a file exceeds this value, it is automatically compressed and archived according to the compression field. |
ConfigMap | data.export.yml.scopes | Scope of the log transfer; currently supported log types are system logs, application logs, Kubernetes logs, and product logs. |
Deployment | spec.template.spec.containers[0].command[7] | Kafka service address. |
Deployment | spec.template.spec.volumes[3].hostPath.path | NFS path to be mounted. |
Deployment | spec.template.spec.initContainers[0].image | Alpine image address. |
Deployment | spec.template.spec.containers[0].image | Razor image address. |
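The following is a minimal sketch of where the fields above sit in the YAML. Only the field paths in the table come from the platform; the resource names, namespace, labels, and all placeholder values are illustrative assumptions. Modify the template provided by the platform rather than replacing it with this sketch.

```yaml
# Sketch only: everything not listed in the table above is an assumption.
apiVersion: v1
kind: ConfigMap
metadata:
  name: log-exporter                # hypothetical name
  namespace: cpaas-system
data:
  export.yml: |
    output:
      compression: gzip             # none | zlib | gzip
      file_type: json               # txt | csv | json
      max_size: 200                 # MB; larger files are compressed and archived
    scopes:                         # log types to transfer (values illustrative)
      - system
      - application
      - kubernetes
      - product
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: log-exporter                # hypothetical name
  namespace: cpaas-system
spec:
  selector:
    matchLabels:
      app: log-exporter
  template:
    metadata:
      labels:
        app: log-exporter
    spec:
      initContainers:
        - name: init
          image: <alpine image address>     # initContainers[0].image
      containers:
        - name: exporter
          image: <razor image address>      # containers[0].image
          command:
            # ... elements 0-6 come from the platform template; the 8th
            # element (index 7) is the Kafka service address:
            - <kafka-host:9092>
      volumes:
        # ... volumes 0-2 come from the platform template; the 4th volume
        # (index 3) is the NFS mount:
        - name: archive
          hostPath:
            path: /cpaas/log-archive        # NFS path to be mounted
```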
Once the container status changes to Running, you can view the continuously archived log files under the mounted NFS path.
To archive logs to object storage (S3), prepare the following resources:

Resource | Description |
---|---|
S3 Storage | Prepare the S3 storage service address in advance, obtain the access_key_id and secret_access_key values, and create the bucket where the logs will be stored. |
Kafka | Obtain the Kafka service address in advance. |
Image Address | Use the CLI tool in the global cluster to run the following commands (the same as in the NFS prerequisites). Get the alpine image address: `kubectl get daemonset nevermore -n cpaas-system -o jsonpath='{.spec.template.spec.initContainers[0].image}'`; get the razor image address: `kubectl get deployment razor -n cpaas-system -o jsonpath='{.spec.template.spec.containers[0].image}'` |
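The Secret fields described in the parameter table below expect base64-encoded values. Assuming a POSIX shell, the obtained credentials can be encoded like this (the values are placeholders):

```shell
# -n prevents a trailing newline from being encoded along with the value
echo -n '<your access_key_id>' | base64
echo -n '<your secret_access_key>' | base64
```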
Click Cluster Management > Clusters in the left navigation bar.
Click the action button on the right side of the cluster to which the logs will be transferred, then select CLI Tool.
Modify the YAML according to the parameter descriptions below (an illustrative sketch follows the table), paste the modified code into the opened CLI Tool command line, and press Enter to execute it.
Resource Type | Field Path | Description |
---|---|---|
Secret | data.access_key_id | The base64-encoded value of the obtained access_key_id (see the encoding example above). |
Secret | data.secret_access_key | The base64-encoded value of the obtained secret_access_key (see the encoding example above). |
ConfigMap | data.export.yml.output.compression | Compression applied to archived log files; supported values are none (no compression), zlib, and gzip. |
ConfigMap | data.export.yml.output.file_type | File type of the exported logs; supports txt, csv, and json. |
ConfigMap | data.export.yml.output.max_size | Maximum size of a single archive file, in MB. When a file exceeds this value, it is automatically compressed and archived according to the compression field. |
ConfigMap | data.export.yml.scopes | Scope of the log transfer; currently supported log types are system logs, application logs, Kubernetes logs, and product logs. |
ConfigMap | data.export.yml.output.s3.bucket_name | Bucket name. |
ConfigMap | data.export.yml.output.s3.endpoint | S3 storage service address. |
ConfigMap | data.export.yml.output.s3.region | Region information for the S3 storage service. |
Deployment | spec.template.spec.containers[0].command[7] | Kafka service address. |
Deployment | spec.template.spec.volumes[3].hostPath.path | Local path to mount, used for temporarily staging log files. Files are automatically deleted after they are synchronized to S3 storage. |
Deployment | spec.template.spec.initContainers[0].image | Alpine image address. |
Deployment | spec.template.spec.containers[0].image | Razor image address. |
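As with NFS, the following is a minimal illustrative sketch of the S3-specific fields. Only the field paths in the table come from the platform; the names, namespace, and placeholder values are assumptions. The Deployment is modified the same way as in the NFS variant, except that volumes[3].hostPath.path points to a local staging directory.

```yaml
# Sketch only: everything not listed in the table above is an assumption.
apiVersion: v1
kind: Secret
metadata:
  name: log-exporter-s3         # hypothetical name
  namespace: cpaas-system
data:
  access_key_id: <base64 of access_key_id>            # see encoding example above
  secret_access_key: <base64 of secret_access_key>
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: log-exporter            # hypothetical name
  namespace: cpaas-system
data:
  export.yml: |
    output:
      compression: gzip         # none | zlib | gzip
      file_type: json           # txt | csv | json
      max_size: 200             # MB
      s3:
        bucket_name: log-archive                    # bucket created in advance
        endpoint: http://s3.example.internal:9000   # S3 storage service address
        region: us-east-1                           # region of the S3 service
    scopes:                     # log types to transfer (values illustrative)
      - system
      - application
      - kubernetes
      - product
```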
Once the container status changes to Running, you can view the continuously archived log files in the bucket; a quick way to spot-check is shown below.
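You can list the bucket with any S3-compatible client. For example, assuming the AWS CLI is configured with the same credentials (the bucket name and endpoint are placeholders matching the sketch above):

```shell
aws s3 ls s3://log-archive/ --recursive --endpoint-url http://s3.example.internal:9000
```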