The DeepFlow open-source project aims to provide deep observability for complex cloud-native and AI applications. DeepFlow implements Zero Code data collection with eBPF for metrics, distributed tracing, request logs, and function profiling, and further integrates SmartEncoding to achieve Full Stack correlation and efficient access to all observability data. With DeepFlow, cloud-native and AI applications automatically gain deep observability, relieving developers of the heavy burden of continually instrumenting code and giving DevOps/SRE teams monitoring and diagnostic capabilities that cover everything from code to infrastructure.
eBPF is a secure and efficient technology for extending kernel functionality by running programs in a sandbox, a revolutionary alternative to the traditional methods of modifying kernel source code and writing kernel modules. eBPF programs are event-driven: when the kernel or a user program passes an eBPF hook point, the eBPF program loaded at that point is executed. The Linux kernel predefines a series of commonly used hook points, and you can also dynamically add custom hook points in the kernel and in applications using kprobe and uprobe technology. Thanks to Just-in-Time (JIT) compilation, eBPF code can run as efficiently as native kernel code and kernel modules; thanks to the verification mechanism, eBPF code runs safely without crashing the kernel or entering infinite loops.
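As a minimal illustration of the kprobe mechanism described above (not part of DeepFlow itself), the following bpftrace one-liner attaches a small eBPF program to a kernel function. It assumes bpftrace is installed, root privileges, and a recent kernel in which `do_sys_openat2` exists:

```bash
# Attach an eBPF program to a kprobe on do_sys_openat2 and print the name
# of every process that opens a file. Press Ctrl-C to stop tracing.
bpftrace -e 'kprobe:do_sys_openat2 { printf("%s opened a file\n", comm); }'
```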
DeepFlow consists of two components, Agent and Server. An Agent runs on each K8s node, legacy host, and cloud host, and is responsible for AutoMetrics and AutoTracing data collection for all application processes on that host. The Server runs in a K8s cluster and provides agent management, tag injection, data ingestion, and query services.
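Once deployed, a quick way to see both components is to list the pods. This sketch assumes the `deepflow` namespace used by the upstream Helm chart; the namespace may differ for this plugin:

```bash
# Agent pods run one per node; Server pods run inside the cluster.
# The -o wide flag shows which node each pod is scheduled on.
kubectl -n deepflow get pods -o wide
```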
The eBPF capabilities (AutoTracing, AutoProfiling) in DeepFlow have the following kernel version requirements:
| Architecture | Distribution | Kernel Version | kprobe | Golang uprobe | OpenSSL uprobe | perf |
|---|---|---|---|---|---|---|
| X86 | CentOS 7.9 | 3.10.0 [1] | Y | Y [2] | Y [2] | Y |
| X86 | RedHat 7.6 | 3.10.0 [1] | Y | Y [2] | Y [2] | Y |
| X86 | * | 4.9-4.13 | Y | | | |
| X86 | * | 4.14 [3] | Y | Y [2] | | Y |
| X86 | * | 4.15 | Y | Y [2] | | Y |
| X86 | * | 4.16 | Y | Y | | Y |
| X86 | * | 4.17+ | Y | Y | Y | Y |
| ARM | CentOS 8 | 4.18 | Y | Y | Y | Y |
| ARM | EulerOS | 5.10+ | Y | Y | Y | Y |
| ARM | KylinOS V10 SP2 | 4.19.90-25.24+ | Y | Y | Y | Y |
| ARM | KylinOS V10 SP3 | 4.19.90-52.24+ | Y | Y | Y | Y |
| ARM | Other Distributions | 5.8+ | Y | Y | Y | Y |
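To determine which row of the table applies, inspect the kernel version of each node. The `kubectl` variant below reads the version from the node status and assumes you have cluster access:

```bash
# Kernel version of the current host:
uname -r

# Kernel versions of all K8s nodes at once:
kubectl get nodes -o custom-columns=NAME:.metadata.name,KERNEL:.status.nodeInfo.kernelVersion
```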
Additional notes on kernel versions:
RedHat's statement:

> The eBPF in Red Hat Enterprise Linux 7.6 is provided as Tech Preview and thus doesn't come with full support and is not suitable for deployment in production. It is provided with the primary goal to gain wider exposure, and potentially move to full support in the future. eBPF in Red Hat Enterprise Linux 7.6 is enabled only for tracing purposes, which allows attaching eBPF programs to probes, tracepoints and perf events.
MySQL and ClickHouse in DeepFlow require Persistent Volume storage provisioned through a Storage Class.
For more information on storage configuration, please refer to the Storage documentation.
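Before installing, you can confirm that a suitable Storage Class exists (and which one is the default); the MySQL and ClickHouse Persistent Volume Claims will be bound through it:

```bash
# List available Storage Classes; the default one is marked "(default)".
kubectl get storageclass
```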
1. Visit the Custom Portal to download the DeepFlow package. If you don't have access to the Custom Portal, contact technical support.
2. Use the violet tool to publish the package to the platform. For detailed instructions on using this tool, refer to the CLI documentation.
1. Navigate to Administrator > Marketplace > Cluster Plugins.
2. Search for "Alauda Container Platform Observability with DeepFlow" in the plugin list.
3. Click Install to open the installation configuration page.
4. Fill in the configuration parameters as needed. For detailed explanations of each parameter, refer to the table below.
5. Wait for the plugin state to change to Installed. You can optionally verify the workloads from the command line, as shown after this list.
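As a sketch of a command-line verification, assuming the plugin deploys its workloads into a `deepflow` namespace (adjust the namespace to your environment):

```bash
# Block until all DeepFlow pods report Ready, or fail after 5 minutes.
kubectl -n deepflow wait --for=condition=Ready pods --all --timeout=300s
```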
Table: Configuration Parameters
| Parameter | Optional | Description |
|---|---|---|
| Replicas | No | The number of replicas for the ClickHouse server and the DeepFlow server. An odd number greater than or equal to 3 is recommended to ensure high availability. |
| Storage Class | Yes | The Storage Class used to create Persistent Volumes for MySQL and ClickHouse. If not set, the default Storage Class is used. |
| MySQL Storage Size | No | The size of the persistent volume for MySQL. |
| ClickHouse Storage Size | No | The size of the persistent volume for ClickHouse. |
| ClickHouse Data Storage Size | No | The size of the persistent volume for ClickHouse data. |
| Username | No | The username for the Grafana web console. |
| Password | No | The password for the Grafana web console. It is strongly recommended to change this password after the first login. |
| Confirm Password | No | Confirmation of the password for the Grafana web console. |
| Ingress Class Name | Yes | The Ingress Class used to create the Ingress for the Grafana web console. If not set, the default Ingress Class is used. |
| Ingress Path | No | The Ingress serving path for the Grafana web console. |
| Ingress TLS Secret Name | Yes | The name of the TLS secret used by the Ingress for the Grafana web console. |
| Ingress Hosts | Yes | The host list used by the Ingress for the Grafana web console. |
| Agent Group Configuration | No | The configuration of the default DeepFlow agent group. |
You can access the Grafana web UI via the hosts and serving path specified in the Ingress configuration, and log in with the configured username and password.
It is highly recommended to change the password after the first login.
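For example, with a purely hypothetical Ingress host of `grafana.example.com` and serving path `/grafana`, the console would be reachable at `https://grafana.example.com/grafana`. A quick reachability check:

```bash
# HEAD request to the Grafana login page; -k skips TLS verification,
# which is useful when the Ingress uses a self-signed certificate.
curl -kI https://grafana.example.com/grafana/login
```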