
Notifications

Notifications keep your team informed, aid troubleshooting, and integrate with your on-call and incident management workflows. Configure notification channels directly within your check rules and synthetic checks, or use label-based alert routing for more complex scenarios.

Configure Notification Channels

When setting up a check rule or synthetic check, you can assign notification channels to receive alerts if the check fails. Select from your existing notification channels or create a new one.

[Screenshot: Configuring a notification channel in a check rule or synthetic check]

Choose from the currently supported notification integrations below to configure how you receive alerts.

Notification Details

When configuring notifications, you define the content that will be sent to your team (see the sketch after this list):

  • Summary: A brief, readable summary of the issue, like "High CPU usage detected on server-1."
  • Description: Detailed information about the issue, such as "CPU usage has exceeded 90% for more than 5 minutes on instance server-1."
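
As a purely illustrative sketch (not the Dash0 API or notification format), the snippet below shows how these two fields might end up in a delivered message; the function and argument names are hypothetical.

    # Illustrative sketch only (not the Dash0 API or notification format).
    # The function name is hypothetical; the text values are the examples above.
    def render_notification(summary: str, description: str) -> str:
        # A channel integration would typically show the summary as the title
        # and the description as the body of the message.
        return f"{summary}\n\n{description}"

    print(render_notification(
        "High CPU usage detected on server-1.",
        "CPU usage has exceeded 90% for more than 5 minutes on instance server-1.",
    ))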

Labels & Annotations

Labels

  • Purpose: Labels are key-value pairs that categorize and provide metadata for the alert. They define important aspects like the source of the alert, its priority, and contextual information that can help in filtering, routing, and silencing alerts.
  • Examples: Common labels might include:
    • priority: Defines the urgency level of the alert, such as p1, p2 or p3.
    • alertname: A unique name for the alert rule, like HighCPUUsage or MemoryLeak.
    • instance: Identifies the instance where the alert originated, like server-1 or node-xyz.
    • service: Indicates the job or service name associated with the alert, like web-service or database-service.

Annotations

  • Purpose: Annotations provide descriptive, human-readable information about the alert. They contain details that aid in understanding and troubleshooting the alert, often presented to the user in notification messages.
  • Examples: Common annotations might include:
    • message: Detailed information about the issue, such as "CPU usage has exceeded 90% for more than 5 minutes on instance server-1."
    • runbook_url: A link to documentation or a playbook on how to respond to the alert.
  • Note: The Summary and Description fields described above are effectively annotations; the sketch below shows how they sit alongside labels.
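
To make the distinction concrete, here is a minimal Python sketch of how a single alert's labels and annotations might look, reusing the example values above. The dictionary layout is purely illustrative and not a Dash0 payload format; the runbook URL is a placeholder.

    # Minimal sketch, not a Dash0 payload format: one hypothetical alert
    # carrying the example labels and annotations described above.
    alert = {
        # Labels: key-value metadata used for filtering, routing, and silencing.
        "labels": {
            "priority": "p1",
            "alertname": "HighCPUUsage",
            "instance": "server-1",
            "service": "web-service",
        },
        # Annotations: human-readable details shown in notification messages.
        # The configurable Summary and Description are effectively annotations.
        "annotations": {
            "summary": "High CPU usage detected on server-1.",
            "description": "CPU usage has exceeded 90% for more than 5 minutes on instance server-1.",
            "runbook_url": "https://example.com/runbooks/high-cpu",  # placeholder URL
        },
    }

    # Routing looks at labels; the message content comes from annotations.
    print(alert["labels"]["priority"], "->", alert["annotations"]["summary"])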

Routing with labels

Similar to Prometheus Alertmanager, Dash0 can use labels to determine how alerts are routed to different notification channels. Labels can be attached to check rules and synthetic checks and then referenced in conditions on notification channels.

[Screenshot: Routing conditions on notification channels]

Adding labels to check rules

When configuring a check rule, you can add additional labels as key–value pairs. These labels are included in the metadata of the failed check and can be used for routing.

Adding labels to synthetic checks

Synthetic checks also support custom labels as key–value pairs. These labels are likewise included in the metadata of the synthetic check and can be used for routing.

Labels are arbitrary metadata, but they become powerful when combined with routing rules, enabling you to direct alerts to specific teams, environments, or escalation paths.

Defining notification conditions

In the Notification Channels settings, you can define conditions to control which failed checks trigger notifications for a given channel.

  • Each condition is built from one or more filters (for example: team = SRE).
  • Multiple filters inside a single condition are combined with a logical AND.
  • Adding multiple conditions creates a logical OR between them; the sketch after the examples below illustrates this matching logic.

This allows fine-grained control over which teams, services, or environments should receive specific alerts.

Examples:

  • A condition with team=SRE will only notify about checks labeled with team: SRE.
  • A channel with conditions team=SRE OR team=DEV will receive notifications for both teams.
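
The matching semantics described above (AND within a condition, OR across conditions) can be summarized in a few lines of Python. This is a conceptual sketch of the documented behavior, not Dash0's implementation; the function names are hypothetical.

    # Conceptual sketch of the routing semantics described above, not
    # Dash0's implementation. A channel notifies when ANY of its conditions
    # matches, and a condition matches when ALL of its filters match.
    def condition_matches(condition: dict, labels: dict) -> bool:
        # All filters inside one condition are ANDed together.
        return all(labels.get(key) == value for key, value in condition.items())

    def channel_notifies(conditions: list[dict], labels: dict) -> bool:
        # Multiple conditions on a channel are ORed together. A channel with
        # no conditions receives every alert (as in the Slack example below).
        return not conditions or any(condition_matches(c, labels) for c in conditions)

    # Channel configured with "team=SRE OR team=DEV":
    conditions = [{"team": "SRE"}, {"team": "DEV"}]
    print(channel_notifies(conditions, {"team": "SRE", "service": "web-service"}))  # True
    print(channel_notifies(conditions, {"team": "PLATFORM"}))                       # False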

Routing by Severity

You can route notifications based on the severity of the failed check using the dash0.failed_check.max_status attribute. This enables you to route critical alerts to high-priority channels while routing warnings to less urgent destinations.

Example: Tiered Alerting

A common pattern is to send all alerts to a team channel for visibility while reserving on-call notifications for critical issues (see the sketch after this list):

  • Slack: No severity filter (receives all alerts)
  • PagerDuty: dash0.failed_check.max_status=critical (receives only critical alerts)
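
Expressed as data, this tiered setup might look like the sketch below, which treats the dash0.failed_check.max_status attribute like any other routing key. The structure is illustrative only and not a Dash0 configuration format.

    # Illustrative only, not a Dash0 configuration format: one broad channel
    # and one channel restricted to critical severity, as described above.
    channels = [
        {"name": "team-slack", "conditions": []},  # no filter: receives all alerts
        {"name": "on-call-pagerduty",
         "conditions": [{"dash0.failed_check.max_status": "critical"}]},
    ]

    # Route two hypothetical failed checks, one degraded and one critical.
    for failed_check in (
        {"dash0.failed_check.max_status": "degraded"},
        {"dash0.failed_check.max_status": "critical"},
    ):
        for channel in channels:
            notified = not channel["conditions"] or any(
                all(failed_check.get(k) == v for k, v in cond.items())
                for cond in channel["conditions"]
            )
            print(failed_check["dash0.failed_check.max_status"], channel["name"], notified)
    # degraded: only team-slack is notified; critical: both channels are notified.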

How max status routing works

The dash0.failed_check.max_status attribute reflects the highest severity a failed check has reached during its cycle, not its current state. Once a check escalates to critical, all subsequent notifications route to channels matching that severity, even if the check temporarily returns to a degraded state.

This behavior ensures, in the example above, that resolution notifications reach every channel that received alerts. If PagerDuty was notified when the check became critical, it will also receive the notification when the check resolves. The table below, and the sketch that follows it, walk through several example progressions.

Severity Progression            | Slack (all) | PagerDuty (critical only)
--------------------------------|-------------|--------------------------
Degraded → Degraded → Degraded  | ✓ ✓ ✓       | ✗ ✗ ✗
Degraded → Critical → Critical  | ✓ ✓ ✓       | ✗ ✓ ✓
Degraded → Critical → Degraded  | ✓ ✓ ✓       | ✗ ✓ ✓
Critical → Critical → Critical  | ✓ ✓ ✓       | ✓ ✓ ✓
Critical → Degraded → Critical  | ✓ ✓ ✓       | ✓ ✓ ✓
Critical → Degraded → Degraded  | ✓ ✓ ✓       | ✓ ✓ ✓
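
The sketch below reproduces the routing decisions in the table by tracking the highest severity seen so far. It is a conceptual illustration of the behavior described above, not Dash0 code; the severity ordering and channel setup are assumptions matching the tiered example (Slack receives everything, PagerDuty only critical).

    # Conceptual illustration of max-status routing (not Dash0 code).
    # Severity order and channel conditions are assumptions matching the
    # tiered example above (Slack: all alerts, PagerDuty: critical only).
    SEVERITY_ORDER = {"degraded": 1, "critical": 2}

    def route(progression: list[str]) -> list[tuple[bool, bool]]:
        """For each state in a check's cycle, return (slack_notified, pagerduty_notified)."""
        decisions = []
        max_status = None
        for status in progression:
            # max_status reflects the highest severity reached so far,
            # not the current state of the check.
            if max_status is None or SEVERITY_ORDER[status] > SEVERITY_ORDER[max_status]:
                max_status = status
            slack = True                           # no filter: always notified
            pagerduty = max_status == "critical"   # only once the check has reached critical
            decisions.append((slack, pagerduty))
        return decisions

    print(route(["degraded", "critical", "degraded"]))
    # [(True, False), (True, True), (True, True)]: matches the third table row above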

Last updated: December 8, 2025