Alert Management

Function Overview

The alert management function of the platform aims to help users comprehensively monitor and promptly detect system anomalies. By utilizing pre-installed system alerts and flexible custom alert capabilities, combined with standardized alert templates and a tiered management mechanism, it provides a complete alert solution for operation and maintenance personnel.

Whether it's platform administrators or business personnel, they can conveniently configure and manage alert policies within their respective permission scopes for effective monitoring of platform resources.

Key Features

  • Built-in System Alert Policies: A rich set of alert rules is preset for global and workload clusters, based on common fault-diagnosis practices.
  • Custom Alert Rules: Supports the creation of alert rules based on various data sources, including preset monitoring indicators, custom monitoring indicators, black-box monitoring items, platform log data, and platform event data.
  • Alert Template Management: Supports the creation and management of standardized alert templates for quick application to similar resources.
  • Alert Notification Integration: Supports the push of alert information to operation and maintenance personnel through various channels.
  • Alert View Isolation: Distinguishes between platform management alerts and business alerts, ensuring that personnel in different roles focus on their respective alert information.
  • Real-time Alert Viewing: Provides a real-time alert view that aggregates the number of resources currently in an alert state together with detailed alert information.
  • Alert History Viewing: Supports the viewing of historical alert records over a period, facilitating the analysis of recent monitoring alert conditions by operation and maintenance personnel and administrators.

Functional Advantages

  • Comprehensive Monitoring Coverage: Supports monitoring of various resource types such as clusters, nodes, and computing components, and comes with rich built-in system alert policies that can be used without additional configuration.
  • Efficient Alert Management: Standardized configurations through alert templates enhance operational efficiency, and the separation of alert views makes it easier for personnel in different roles to quickly locate relevant alerts.
  • Timely Problem Detection: Alert notifications are triggered automatically to ensure problems are detected in time, with multi-channel alert pushing that supports proactive problem avoidance.
  • Robust Permission Management: Strict access control for alert policies ensures that alert information is secure and manageable.

Creating Alert Policies via UI

Prerequisites

  • A notification policy is configured (if you need to configure automatic alert notifications).
  • Monitoring components are installed in the target cluster (required when creating alert policies using monitoring indicators).
  • Log storage components and log collection components are installed in the target cluster (required when creating alert policies using logs and events).

Procedures

  1. Navigate to Operation and Maintenance Center > Alerts > Alert Policies.
  2. Click Create Alert Policy.
  3. Configure basic information.

Selecting Alert Type

Resource Alert

  • Alert types categorized by resource type (e.g., deployment status under a namespace).
  • Resource selection description (see the annotation example after this list):
    • If no resource is selected, the scope defaults to Any, and newly added resources are associated automatically.
    • When Select All is chosen, the policy applies only to the currently selected resources.
    • When multiple namespaces are selected, resource names support regular expressions (e.g., cert.*).
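
For reference, these selections end up as annotations on the generated PrometheusRule (see the CLI examples later in this section). A hypothetical selection of two Deployments across all namespaces might be stored roughly as follows; the resource names are illustrative:

alert.cpaas.io/kind: Deployment          # resource type selected for the alert
alert.cpaas.io/name: "nginx-a|nginx-b"   # multiple resources are separated by |; Any is represented by .*
alert.cpaas.io/namespace: ".*"           # Any namespace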

Event Alert

  • Alert types categorized by specific events (e.g., abnormal Pod status).
  • By default, all resources under the specified resource type are selected, and newly added resources are associated automatically.

Configuring Alert Rules

Click Add Alert Rule and configure the following parameters based on the alert type:

Resource Alert Parameters

  • Expression: Monitoring metric algorithm in Prometheus format, e.g., rate(node_network_receive_bytes{instance="$server",device!~"lo"}[5m]).
  • Metric Unit: Custom monitoring metric unit; can be entered manually or selected from the platform's preset units.
  • Legend Parameter: Controls the name of the curve in the chart, formatted as {{.LabelName}}, e.g., {{.hostname}}.
  • Time Range: Time window for log/event queries.
  • Log Content: Query fields for log content (e.g., Error); multiple query fields are combined with OR.
  • Event Reason: Query fields for event reasons (Reason, e.g., BackOff, Pulling, Failed); multiple query fields are combined with OR.
  • Trigger Condition: Consists of a comparison operator, an alert threshold, and an optional duration. Whether an alert is triggered depends on how the real-time value, log count, or event count compares with the threshold, and on how long the real-time value stays within the alert threshold range.
  • Alert Level: One of four levels: Critical, Serious, Warning, and Info. Set a reasonable alert level according to the business impact of the corresponding resources.
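
For example, the Expression shown above combined with a Trigger Condition of "> 1000 for 5 minutes" would be rendered into the generated PrometheusRule roughly as follows; the threshold and duration here are illustrative, and the platform generates the actual rule:

expr: rate(node_network_receive_bytes{instance="$server",device!~"lo"}[5m]) > 1000  # expression compared against the threshold
for: 5m  # duration the condition must hold before the alert fires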

Event Alert Parameters

  • Time Range: Time window for event queries.
  • Event Monitoring Item: Supports monitoring by event level or event reason; multiple fields are combined with OR.
  • Trigger Condition: Comparison judgement based on the event count.
  • Alert Level: Same definition as the resource alert levels.

Other Configurations

  1. Select one or more created notification policies.
  2. Configure the alert sending intervals (see the annotation note after this list).
    • Global: Use the platform default configuration.
    • Custom: Different sending intervals can be set per alert level.
    • When Do Not Repeat is selected, notifications are sent only when the alert is triggered and when it recovers.
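
For reference, the sending intervals configured here appear on the generated PrometheusRule as the alert.cpaas.io/repeat-config annotation (see the CLI examples below). A custom configuration in which Critical alerts notify only on trigger and recovery ("never" repeat) might look like this; the other intervals are illustrative:

alert.cpaas.io/repeat-config: '{"Critical":"never","High":"5m","Medium":"10m","Low":"30m"}'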

Additional Notes

  1. In the More options of the alert rule, labels and annotations can be set (see the sketch below).
  2. Refer to the Prometheus Alerting Rules documentation for how to configure labels and annotations.
  3. Note: Do not use the $value variable in labels, as this may cause alert exceptions.
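
As a minimal sketch of what such a rule might look like (the rule name, the team label, and the summary annotation are hypothetical examples, not platform requirements), note that $value is referenced only in annotations, never in labels:

- alert: example-rule              # hypothetical rule name
  expr: up == 0                    # illustrative expression
  for: 1m
  labels:
    team: platform-ops             # custom label; do not reference $value here
  annotations:
    alert_current_value: "{{ $value }}"                  # $value is safe in annotations
    summary: "Instance {{ $labels.instance }} is down"   # human-readable description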

Creating Resource Alerts via CLI

Prerequisites

  • A notification policy is configured (if you need to configure automatic alert notifications).
  • Monitoring components are installed in the target cluster (required when creating alert policies using monitoring indicators).
  • Log storage components and log collection components are installed in the target cluster (required when creating alert policies using logs and events).

Procedures

  1. Create a new YAML configuration file named example-alerting-rule.yaml.
  2. Add a PrometheusRule resource to the YAML file and submit it. The following example creates an alert policy named policy:
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  annotations:
    alert.cpaas.io/cluster: global      # The name of the cluster where the alert is located
    alert.cpaas.io/kind: Cluster        # The type of resource
    alert.cpaas.io/name: global         # The resource object, supporting single, multiple (separated by |), or any (.*)
    alert.cpaas.io/namespace: cpaas-system  # The namespace where the alert object is located, supporting single, multiple (separated by |), or any (.*)
    alert.cpaas.io/notifications: '["test"]'
    alert.cpaas.io/repeat-config: '{"Critical":"never","High":"5m","Medium":"5m","Low":"5m"}'
    alert.cpaas.io/rules.description: "{}"
    alert.cpaas.io/rules.disabled: "[]"
    alert.cpaas.io/subkind: ""
    cpaas.io/description: ""
    cpaas.io/display-name: policy      # The display name of the alert policy
  labels:
    alert.cpaas.io/owner: System
    alert.cpaas.io/project: cpaas-system
    cpaas.io/source: Platform
    prometheus: kube-prometheus
    rule.cpaas.io/cluster: global
    rule.cpaas.io/name: policy
    rule.cpaas.io/namespace: cpaas-system
  name: policy
  namespace: cpaas-system
spec:
  groups:
    - name: general # alert rule name
      rules:
        - alert: cluster.pod.status.phase.not.running-tx1ob-e998f0b94854ee1eade5ae79279e005a
          annotations:
            alert_current_value: "{{ $value }}"  # Notification of the current value, keep as default
          expr: (count(min by(pod)(kube_pod_container_status_ready{}) !=1) or on() vector(0))>2
          for: 30s # Duration
          labels:
            alert_cluster: global # The name of the cluster where the alert is located
            alert_for: 30s # Duration
            alert_indicator: cluster.pod.status.phase.not.running  # The name of the alert rule indicator (for a custom monitoring indicator, set this to custom)
            alert_indicator_aggregate_range: "30"  # The aggregation time for the alert rule, in seconds
            alert_indicator_blackbox_name: "" # Black-box monitoring item name
            alert_indicator_comparison: ">"  # The comparison method for the alert rule
            alert_indicator_query: ""  # Query for the logs of the alert rule (only for log alerts)
            alert_indicator_threshold: "2"  # The threshold for the alert rule
            alert_indicator_unit: ""  # The indicator unit for the alert rule
            alert_involved_object_kind: Cluster   # The type of the object to which the alert rule belongs: Cluster|Node|Deployment|Daemonset|Statefulset|Middleware|Microservice|Storage|VirtualMachine
            alert_involved_object_name: global   # The name of the object to which the alert rule belongs
            alert_involved_object_namespace: ""  # The namespace of the object to which the alert rule belongs
            alert_name: cluster.pod.status.phase.not.running-tx1ob  # The name of the alert rule
            alert_namespace: cpaas-system  # The namespace where the alert rule is located
            alert_project: cpaas-system  # The project name of the object to which the alert rule belongs
            alert_resource: policy # The name of the alert policy where the alert rule is located
            alert_source: Platform # The data type of the alert policy: Platform - platform data; Business - business data
            severity: High # The severity level of the alert rule: Critical - critical; High - serious; Medium - warning; Low - info
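
Assuming the manifest above was saved as example-alerting-rule.yaml, it can be submitted and checked with standard kubectl commands, for example:

kubectl apply -f example-alerting-rule.yaml
kubectl get prometheusrule policy -n cpaas-system   # confirm the alert policy was created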

Creating Event Alerts via CLI

Prerequisites

  • A notification policy is configured (if you need to configure automatic alert notifications).
  • Monitoring components are installed in the target cluster (required when creating alert policies using monitoring indicators).
  • Log storage components and log collection components are installed in the target cluster (required when creating alert policies using logs and events).

Procedures

  1. Create a new YAML configuration file named example-alerting-rule.yaml.
  2. Add a PrometheusRule resource to the YAML file and submit it. The following example creates an alert policy named policy2:
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  annotations:
    alert.cpaas.io/cluster: global
    alert.cpaas.io/events.scope: '[{"names":["argocd-gitops-redis-ha-haproxy"],"kind":"Deployment","operator":"=","namespaces":["*"]}]'
      # names: The resource names for the event alert; operator is ignored if names is empty.
      # kind: The type of resource that triggers the event alert.
      # namespaces: The namespaces of the resources that trigger the event alert. An empty array indicates a non-namespaced resource; ["*"] indicates all namespaces.
      # operator: Selector, one of =, !=, =~, !~.
    alert.cpaas.io/kind: Event   # The type of alert, Event (event alert)
    alert.cpaas.io/name: "" # Used for resource alerts; remains empty for event alerts
    alert.cpaas.io/namespace: cpaas-system 
    alert.cpaas.io/notifications: '["acp-qwtest"]'
    alert.cpaas.io/repeat-config: '{"Critical":"never","High":"5m","Medium":"5m","Low":"5m"}'
    alert.cpaas.io/rules.description: "{}"
    alert.cpaas.io/rules.disabled: "[]"
    cpaas.io/description: ""
    cpaas.io/display-name: policy2
  labels:
    alert.cpaas.io/owner: System
    alert.cpaas.io/project: cpaas-system
    cpaas.io/source: Platform
    prometheus: kube-prometheus
    rule.cpaas.io/cluster: global
    rule.cpaas.io/name: policy2
    rule.cpaas.io/namespace: cpaas-system
  name: policy2
  namespace: cpaas-system
spec:
  groups:
    - name: general
      rules:
        - alert: cluster.event.count-6sial-34c9a378e3b6dda8401c2d728994ce2f
          # 6sial-34c9a378e3b6dda8401c2d728994ce2f can be customized to ensure uniqueness
          annotations:
            alert_current_value: "{{ $value }}"  # Notification of the current value, keep as default
          expr: round(((avg
            by(kind,namespace,name,reason)(increase(cpaas_event_count{namespace=~".*",id="policy2-cluster.event.count-6sial"}[300s])))
            + (avg
            by(kind,namespace,name,reason)(abs(increase(cpaas_event_count{namespace=~".*",id="policy2-cluster.event.count-6sial"}[300s])))))
            / 2)>2
          # In the id value, policy2 must be the name of the alert policy, and 6sial must match the suffix of the preceding alert rule name
          for: 15s # Duration
          labels:
            alert_cluster: global # The name of the cluster where the alert is located
            alert_for: 15s # Duration
            alert_indicator: cluster.event.count  # The name of the alert rule indicator (for a custom monitoring indicator, set this to custom)
            alert_indicator_aggregate_range: "300" # The aggregation time for the alert rule, in seconds
            alert_indicator_blackbox_name: "" 
            alert_indicator_comparison: ">" # The comparison method for the alert rule
            alert_indicator_event_reason: ScalingReplicaSet # Event reason.
            alert_indicator_threshold: "2" # The threshold for the alert rule
            alert_indicator_unit: pieces # The indicator unit for the alert rule; remains unchanged for event alerts
            alert_involved_object_kind: Event
            alert_involved_object_options: Single
            alert_name: cluster.event.count-6sial # The name of the alert rule
            alert_namespace: cpaas-system # The namespace where the alert rule is located
            alert_project: cpaas-system # The project name of the object to which the alert rule belongs
            alert_repeat_interval: 5m
            alert_resource: policy2 # The name of the alert policy where the alert rule is located
            alert_source: Platform # The data type of the alert policy: Platform - platform data; Business - business data
            severity: High # The severity level of the alert rule: Critical - critical; High - serious; Medium - warning; Low - info
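
As with the resource alert example, the manifest can be submitted and verified with standard kubectl commands, for example:

kubectl apply -f example-alerting-rule.yaml
kubectl get prometheusrule policy2 -n cpaas-system -o yaml   # inspect the generated annotations and rules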

Creating Alert Policies via Alert Templates

Alert templates are a combination of alert rules and notification policies targeting similar resources. With alert templates, you can quickly and easily create alert policies for clusters, nodes, or computing components on the platform.

Prerequisites

  • A notification policy is configured (if you need to configure automatic alert notifications).
  • Monitoring components are installed in the target cluster (required when creating alert policies using monitoring indicators).

Procedures

Creating Alert Template

  1. In the left navigation bar, click Operation and Maintenance Center > Alerts > Alert Templates.
  2. Click Create Alert Template.
  3. Configure the basic information of the alert template.
  4. In the Alert Rules section, click Add Alert Rule and add alert rules according to the parameter descriptions below:
    • Expression: Monitoring metric algorithm in Prometheus format, e.g., rate(node_network_receive_bytes{instance="$server",device!~"lo"}[5m]).
    • Metric Unit: Custom monitoring metric unit; can be entered manually or selected from the platform's preset units.
    • Legend Parameter: Controls the name of the curve in the chart, formatted as {{.LabelName}}, e.g., {{.hostname}}.
    • Time Range: Time window for log/event queries.
    • Log Content: Query fields for log content (e.g., Error); multiple query fields are combined with OR.
    • Event Reason: Query fields for event reasons (Reason, e.g., BackOff, Pulling, Failed); multiple query fields are combined with OR.
    • Trigger Condition: Consists of a comparison operator, an alert threshold, and an optional duration.
    • Alert Level: One of four levels: Critical, Serious, Warning, and Info. Set a reasonable alert level according to the business impact of the corresponding resources.
  5. Click Create.

Creating Alert Policies Using Alert Templates

  1. In the left navigation bar, click Operation and Maintenance Center > Alerts > Alert Policies. Tip: You can switch the target cluster through the top navigation bar.
  2. Click the expand button next to the Create Alert Policy button > Create Alert Policy from Template.
  3. Configure the parameters, referring to the descriptions below:
    • Template Name: The name of the alert template to use. Templates are categorized by cluster, node, and computing component. After selecting a template, you can view the alert rules, notification policies, and other information defined in it.
    • Resource Type: Select whether the template is an alert policy template for a Cluster, Node, or Computing Component; the corresponding resource names will be displayed.
  4. Click Create.

Setting Silence for Alerts

Silencing is supported for cluster, node, and computing component alerts. By setting silence for a specific alert policy, none of the rules under that policy send notification messages when triggered during the silence period. Both permanent silence and custom silence times can be set.

For example, during a platform upgrade or maintenance window, many resources may report abnormal statuses and trigger numerous alerts, so operation and maintenance personnel keep receiving alert notifications until the work is completed. Setting silence on the affected alert policies prevents this.

Note: When the silence end time is reached, the silence setting is cleared automatically.

Setting via UI

  1. In the left navigation bar, click Operation and Maintenance Center > Alerts > Alert Policies.

  2. Click the operation button on the right side of the alert policy to be silenced > Set Silence.

  3. Toggle the Alert Silence switch on.

    Tip: This switch controls whether the silence setting takes effect. To cancel silence, simply turn off the switch.

  4. Configure relevant parameters according to the descriptions below:

    Tip: If no silence range or resource name is selected, it defaults to Any, meaning that resources deleted from or added to the scope later will have silence removed or applied accordingly; if Select All is chosen, silence applies only to the currently selected resource range, and subsequent Delete/Add resource actions will not be processed.

    • Silence Range: The scope of resources for which the silence setting takes effect.
    • Resource Name: The name of the resource object targeted by the silence setting.
    • Silence Time: The time range for alert silence. The alert enters the silence state at the start of the silence time; if the alert policy is still in an alert state or triggers alerts after the silence end time, alert notifications resume. Permanent: the silence setting lasts until the alert policy is deleted. Custom: set the start and end time of silence yourself, with an interval of at least 5 minutes.
  5. Click Set.

    Tip: From the moment silence is set until the silence start time, the silence status of the alert policy is Silence Waiting; during this period, alerts triggered by rules in the policy still send notifications. From the silence start time until the end time, the status is Silencing, and alerts triggered by rules in the policy do not send notifications.

Setting via CLI

  1. Specify the resource name of the alert policy you want to silence and execute the following command:
kubectl edit PrometheusRule <alert-policy-name> -n <namespace>
  2. Modify the resource as shown in the example below to add the silence annotations, then save and submit.
---
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  annotations:
    alert.cpaas.io/cluster: global
    alert.cpaas.io/kind: Node
    alert.cpaas.io/name: 0.0.0.0
    alert.cpaas.io/namespace: cpaas-system
    alert.cpaas.io/notifications: "[]"
    alert.cpaas.io/rules.description: "{}"
    alert.cpaas.io/rules.disabled: "[]"
    alert.cpaas.io/rules.version: "23"
    alert.cpaas.io/silence.config: '{"startsAt":"2025-02-08T08:01:37Z","endsAt":"2025-02-22T08:01:37Z","creator":"leizhu@alauda.io","resources":{"nodes":[{"name":"192.168.36.11","ip":"192.168.36.11"},{"name":"192.168.36.12","ip":"192.168.36.12"},{"name":"192.168.36.13","ip":"192.168.36.13"}]}}'
      # The silence configuration for node-level alert policies, including start time, end time, creator, etc.; if the silence range includes specific nodes, please append the resources.node information as shown above. If you need silence for all resources, you do not need the resources field.
    # alert.cpaas.io/silence.config: '{"startsAt":"2025-02-08T08:04:50Z","endsAt":"2199-12-31T00:00:00Z","creator":"leizhu@alauda.io","name":["alb-operator-ctl","apollo"],"namespace":["cpaas-system"]}'
      # The silence configuration for workload-level alert policies, including start time, end time, creator, etc.; if the silence range includes specific workloads, please append name and namespace information as shown above. If you need silence for all resources, you do not need the name and namespace fields.
      # Setting the endsAt field to 2199-12-31T00:00:00Z indicates permanent silence.
    alert.cpaas.io/subkind: ""
    cpaas.io/creator: leizhu@alauda.io
    cpaas.io/description: ""
    cpaas.io/display-name: policy3
    cpaas.io/updated-at: "2025-02-08T08:01:42Z"
  labels:
  # Remaining fields omitted for brevity
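
If you prefer not to open an interactive editor, the same silence annotation can also be applied in one step with kubectl annotate; the policy name, namespace, creator, and time range below are placeholders:

kubectl annotate prometheusrule <alert-policy-name> -n cpaas-system --overwrite \
  alert.cpaas.io/silence.config='{"startsAt":"2025-02-08T08:01:37Z","endsAt":"2025-02-22T08:01:37Z","creator":"admin@example.com"}'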

Recommendations for Configuring Alert Rules

More alert rules do not always equate to better outcomes. Redundant or complex alert rules can lead to alert storms and increase your maintenance burden. It is recommended that you read the following guidelines before configuring alert rules to ensure that custom rules can achieve their intended purposes while remaining efficient.

  • Use the Fewest New Rules Possible: Create only those rules that meet your specific requirements. By using the fewest number of rules, you can create a more manageable and centralized alert system in the monitoring environment.
  • Focus on Symptoms Rather than Causes: Create rules that notify users of symptoms rather than the root causes of those symptoms. This ensures that when relevant symptoms occur, users can receive alerts and may investigate the root causes that triggered the alerts. Using this strategy can significantly reduce the total number of rules you need to create.
  • Plan and Assess Your Needs Before Making Changes: First, clarify which symptoms are important and what actions you want users to take when these symptoms occur. Then evaluate existing rules to decide if you can modify them to achieve your objectives without creating new rules for each symptom. By modifying existing rules and carefully creating new ones, you can help simplify the alert system.
  • Provide Clear Alert Messages: When you create alert messages, include descriptions of symptoms, possible causes, and recommended actions. The information included should be clear, concise, and provide troubleshooting procedures or links to additional relevant information. Doing so helps users quickly assess situations and respond appropriately.
  • Set Severity Levels Reasonably: Assign severity levels to your rules to indicate how users should respond when symptoms trigger alerts. For instance, classify alerts with a severity level of Critical, signaling that immediate action is required from relevant personnel. By establishing severity levels, you can help users decide how to respond upon receiving alerts and ensure prompt responses to urgent issues.