Synthetic Monitoring
Dash0’s Synthetic Monitoring view gives you proactive, automated insights into the availability and performance of your websites and APIs. By running scripted checks at regular intervals from multiple global locations, Synthetic Monitoring ensures that outages, latency spikes, and functional regressions are caught before they affect real users.
The visualizations provide high-level overviews of uptime and response times, while detailed run data and error analysis help you quickly isolate root causes.
Key Responsibilities
- Run availability and performance checks on endpoints
- Collect latency, status code, and error data on every run
- Execute tests from multiple global regions
- Provide historical trends and failure drill-downs
- Trigger alerts via Dash0’s alerting system
Synthetic Checks
A Synthetic Check in Dash0 defines the rules for monitoring an endpoint or workflow. Checks combine a target, assertions, and scheduling to continuously validate the availability and performance of your system.
Each check includes the following elements:
- Target: The endpoint under test. For HTTP checks this includes the method (GET, POST, etc.) and the full URL. Example: POST https://api.eu-west-1.aws.dash0-dev.com/api/logs?dataset=default
- Assertions: Conditions that must hold true for the check to be considered successful. Typical assertions include:
  - HTTP status = 200
  - Timing: response < 5000 ms
  - Timing: response < 2000 ms
  Assertions can be critical (hard failures) or degraded (warnings).
- Retry Policy: Configures retries for transient failures. If retries are not set, the first failure marks the run as failed.
- Scheduling: Defines how often and from which locations the check runs. You can select multiple regions to measure performance globally. Example: every 1 minute from Brussels (BE) and Melbourne (AU).
- Status Indicators: The check overview shows:
  - Uptime over the selected time window (e.g., 7 days).
  - Average Duration (latency across successful runs).
  - Last Check (time since the last execution).
  - Up for (continuous uptime streak).
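To make these elements concrete, here is a minimal Python sketch that models a check definition as a plain data structure. The field names and the severity assignments are illustrative assumptions and do not reflect Dash0's actual configuration schema.

```python
from dataclasses import dataclass

# Illustrative model of a synthetic check definition.
# Field names are hypothetical and do not mirror Dash0's configuration schema.

@dataclass
class Assertion:
    kind: str          # e.g. "http_status" or "timing_response"
    operator: str      # e.g. "=", "<"
    value: int         # e.g. 200, or a threshold in milliseconds
    severity: str      # "critical" (hard failure) or "degraded" (warning)

@dataclass
class SyntheticCheck:
    method: str                        # HTTP method of the target
    url: str                           # full URL under test
    assertions: list[Assertion]
    max_retries: int = 0               # 0 means the first failure fails the run
    interval_seconds: int = 60         # how often the check runs
    locations: tuple[str, ...] = ()    # probe regions

check = SyntheticCheck(
    method="POST",
    url="https://api.eu-west-1.aws.dash0-dev.com/api/logs?dataset=default",
    assertions=[
        Assertion("http_status", "=", 200, "critical"),
        # Mapping the two timing thresholds to critical/degraded is an assumption.
        Assertion("timing_response", "<", 5000, "critical"),
        Assertion("timing_response", "<", 2000, "degraded"),
    ],
    max_retries=1,
    interval_seconds=60,
    locations=("Brussels (BE)", "Melbourne (AU)"),
)
```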
Uptime
The uptime bar shows a pass/fail result for each run over time. A green segment indicates success, while red segments indicate failed assertions. This gives an immediate sense of stability and outage windows.
Duration Breakdown
The duration graph decomposes the total request time into its network phases:
- DNS: Time to resolve hostname
- Connect: TCP connection setup time
- SSL: TLS handshake duration
- Request: Time to send the request payload
- Response: Time to first byte (TTFB) and response body transfer
This breakdown highlights bottlenecks (e.g., high SSL times due to certificate negotiation, or slow backend response times).
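If you want to reproduce this breakdown outside of Dash0, libcurl exposes cumulative timers for the same phases. The sketch below uses the pycurl library and is an independent illustration of the decomposition, not Dash0's probe implementation (libcurl does not report the request send phase separately, so it is omitted).

```python
from io import BytesIO
import pycurl

# Reproduce the DNS / connect / SSL / response breakdown with libcurl timers.
# Illustration only; not Dash0's probe implementation.
def phase_breakdown(url: str) -> dict[str, float]:
    buffer = BytesIO()
    c = pycurl.Curl()
    c.setopt(pycurl.URL, url)
    c.setopt(pycurl.WRITEDATA, buffer)
    c.perform()

    dns = c.getinfo(pycurl.NAMELOOKUP_TIME)       # DNS resolution
    connect = c.getinfo(pycurl.CONNECT_TIME)      # DNS + TCP connect
    tls = c.getinfo(pycurl.APPCONNECT_TIME)       # DNS + TCP + TLS handshake
    ttfb = c.getinfo(pycurl.STARTTRANSFER_TIME)   # time to first byte
    total = c.getinfo(pycurl.TOTAL_TIME)          # full transfer
    c.close()

    # libcurl timers are cumulative, so each phase is the delta to the previous one.
    return {
        "dns_ms": dns * 1000,
        "connect_ms": (connect - dns) * 1000,
        "ssl_ms": (tls - connect) * 1000,
        "response_wait_ms": (ttfb - tls) * 1000,
        "body_transfer_ms": (total - ttfb) * 1000,
    }

print(phase_breakdown("https://example.com"))
```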
Run Detail View
Each execution of a synthetic check can be inspected in detail to understand exactly how it performed. The Check Run Detail view provides a breakdown of all relevant information:
Assertions
- Shows the configured conditions for the check (e.g., status code = 200, response time < 300 ms).
- Each assertion is marked as:
  - PASSED – the condition was met.
  - DEGRADED – thresholds were exceeded but not critically failed.
  - FAILED – the check did not meet a critical requirement.
- This allows quick triage between functional failures and performance regressions.
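As a rough illustration of how these states could roll up into a run outcome, the sketch below treats any failed critical assertion as FAILED, any failed degraded assertion as DEGRADED, and everything else as PASSED. This ordering is an assumption made for illustration, not Dash0's exact evaluation logic.

```python
# Hypothetical outcome derivation from assertion results (not Dash0's actual logic):
# a failed critical assertion => FAILED, a failed degraded assertion => DEGRADED,
# otherwise => PASSED.
def run_outcome(results: list[dict]) -> str:
    if any(not r["passed"] and r["severity"] == "critical" for r in results):
        return "FAILED"
    if any(not r["passed"] and r["severity"] == "degraded" for r in results):
        return "DEGRADED"
    return "PASSED"

results = [
    {"name": "status code = 200", "severity": "critical", "passed": True},
    {"name": "response time < 300 ms", "severity": "degraded", "passed": False},
]
print(run_outcome(results))  # DEGRADED
```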
Timeline
- Displays a waterfall visualization of the HTTP request lifecycle:
  - DNS resolution
  - TCP connection establishment
  - SSL/TLS handshake
  - Request send time
  - Response wait time (TTFB)
- Total duration is shown at the top, with each step contributing to the overall runtime.
- This breakdown makes it easy to pinpoint whether delays are caused by network, TLS, or backend response times.
Request Details
- Headers: Lists all request headers sent (e.g., User-Agent, Accept, Authorization).
- Method & URL: The exact HTTP method and endpoint being tested.
- Payload: For POST/PUT requests, the body content is available for inspection.
- This transparency ensures you know exactly what was sent during the run.
Response Details
- Status code: Returned by the server (e.g., 200, 500).
- Headers: Key response headers like cache-control, server, and content-type.
- Body: The raw payload returned, where available.
- Useful for verifying that the backend is returning the expected content or metadata.
Error Information
- If the check fails, error attributes like error.type and error.message are displayed.
- These highlight network errors (e.g., timeout, TLS error) or application-level failures (e.g., HTTP 500).
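To give an intuition for how these attributes might be populated, the following sketch maps common Python networking exceptions onto the error.type categories used in this document. The mapping itself is an assumption for illustration.

```python
import socket
import ssl

# Illustrative mapping of raised exceptions to error.type / error.message values.
# Category names follow the examples in this document; the mapping is an assumption.
def classify_error(exc: Exception) -> dict[str, str]:
    if isinstance(exc, socket.gaierror):
        error_type = "dns_error"
    elif isinstance(exc, ssl.SSLError):
        error_type = "network_error"   # TLS handshake failures
    elif isinstance(exc, (socket.timeout, ConnectionError)):
        error_type = "network_error"
    else:
        error_type = "http_error"      # e.g. an unexpected HTTP 5xx surfaced as an exception
    return {"error.type": error_type, "error.message": str(exc)}

print(classify_error(socket.timeout("request timed out")))
# {'error.type': 'network_error', 'error.message': 'request timed out'}
```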
Built-in Metrics
Dash0 automatically captures a set of HTTP timing and execution counters for every synthetic check. These metrics allow you to analyze network performance at a granular level and build alerts or dashboards around them.
Metric | Type | Description |
---|---|---|
dash0.synthetic_check.http.connection.duration | Histogram | Time taken to establish the TCP connection with the target host. High values may indicate network congestion or server saturation. |
dash0.synthetic_check.http.dns.duration | Histogram | Time spent resolving the DNS hostname of the target. Useful to detect DNS misconfigurations or slow upstream resolvers. |
dash0.synthetic_check.http.request.duration | Histogram | Time taken to send the full HTTP request to the server. Large values can suggest client-side delays or issues pushing payloads. |
dash0.synthetic_check.http.response.duration | Histogram | Time from sending the request until the first byte of the response is received (“TTFB” – time to first byte). Elevated values point to backend slowness. |
dash0.synthetic_check.http.ssl.duration | Histogram | Duration of the TLS/SSL handshake. Spikes can indicate certificate issues or overloaded TLS termination. |
dash0.synthetic_check.http.total.duration | Histogram | End-to-end duration of the entire check, from DNS resolution through response reception. This is the key metric for overall availability and latency SLOs. |
dash0.synthetic_check.runs | Sum | Total number of synthetic check executions, including successes and failures. Useful for error-rate calculations and coverage validation. |
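Because dash0.synthetic_check.runs counts successes and failures alike, an error rate follows from splitting the counter by outcome. A minimal sketch of that arithmetic, assuming the two counts have already been queried for the window of interest:

```python
# Error rate from run counts over a time window (counts assumed to be queried already).
failed_runs = 3
total_runs = 1440          # e.g. one run per minute over 24 hours

error_rate = failed_runs / total_runs
availability = 1 - error_rate
print(f"error rate: {error_rate:.2%}, availability: {availability:.2%}")
# error rate: 0.21%, availability: 99.79%
```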
Built-in Attributes
In addition to timing metrics, every synthetic check run emits a rich set of attributes. These provide context for filtering, grouping, and troubleshooting.
1. Synthetic Check Attributes
These attributes make it possible to filter dashboards and alerts by check ID, location, or failure type. For example, you can compare performance between probe regions, or build an alert that triggers only on failed critical assertions.
Attribute | Description |
---|---|
dash0.synthetic_check.attempt | Sequential attempt number for this check run. Useful for correlating retries. |
dash0.synthetic_check.attempt_id | Unique identifier for a single attempt within a run. |
dash0.synthetic_check.run_id | Unique identifier for the full check run. Links together all attempts and collected telemetry. |
dash0.synthetic_check.location | Region or probe location from which the check was executed (e.g., us-east, eu-central). |
dash0.synthetic_check.failed_critical_assertions | Count of failed assertions marked as critical in the check definition. |
dash0.synthetic_check.failed_degraded_assertions | Count of failed assertions marked as degraded in the check definition. |
dash0.synthetic_check.passed_critical_assertions | Count of passed assertions marked critical. |
dash0.synthetic_check.passed_degraded_assertions | Count of passed assertions marked degraded. |
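As an example of the grouping these attributes enable, the sketch below sums failed critical assertions per probe location from a few hypothetical run records (the data is made up for illustration).

```python
from collections import defaultdict

# Hypothetical run records carrying the attributes above (data made up for illustration).
runs = [
    {"dash0.synthetic_check.location": "eu-central", "dash0.synthetic_check.failed_critical_assertions": 0},
    {"dash0.synthetic_check.location": "us-east", "dash0.synthetic_check.failed_critical_assertions": 2},
    {"dash0.synthetic_check.location": "us-east", "dash0.synthetic_check.failed_critical_assertions": 1},
]

failures_by_location: dict[str, int] = defaultdict(int)
for run in runs:
    failures_by_location[run["dash0.synthetic_check.location"]] += run[
        "dash0.synthetic_check.failed_critical_assertions"
    ]

print(dict(failures_by_location))  # {'eu-central': 0, 'us-east': 3}
```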
2. Error Attributes
Use these fields to classify and group failures. For instance, a spike in dns_error types suggests an upstream DNS problem, while http_error may point to application-level issues.
Attribute | Description |
---|---|
error.message | Human-readable error message (e.g., timeout, TLS handshake failed). |
error.type | Categorical error type (e.g., network_error, dns_error, http_error). |
3. HTTP Request and Response Attributes
Synthetic Monitoring captures both the request sent and the response received.
Request Attributes
- http.request.method: HTTP verb used (GET, POST, PUT, DELETE).
- http.request.body: Payload of the request (if applicable).
- http.request.header.*: All request headers, including:
  - accept, content-type, user-agent
  - authorization (masked or redacted for security)
  - traceparent (for OpenTelemetry correlation)
- http.request.resend_count: Number of retries performed for the request.
Response Attributes
- http.response.status_code: HTTP status code returned by the server.
- http.response.body: Body of the HTTP response (can be logged or validated in assertions).
- http.response.header.*: All response headers, including:
  - cache-control, content-type, date, server
  - strict-transport-security, x-content-type-options, content-security-policy
  - access-control-allow-origin (CORS)
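The note that authorization is masked can be illustrated with a small redaction step: sensitive values are replaced before headers are recorded as http.request.header.* attributes. The sketch below is a generic illustration of that idea; the set of sensitive headers and the attribute flattening are assumptions, not Dash0's implementation.

```python
# Generic illustration of header redaction before recording http.request.header.* attributes.
# The set of sensitive headers and the flattening scheme are assumptions.
SENSITIVE_HEADERS = {"authorization", "cookie", "proxy-authorization"}

def to_header_attributes(headers: dict[str, str]) -> dict[str, str]:
    attributes = {}
    for name, value in headers.items():
        key = f"http.request.header.{name.lower()}"
        attributes[key] = "***" if name.lower() in SENSITIVE_HEADERS else value
    return attributes

print(to_header_attributes({
    "User-Agent": "synthetic-probe/1.0",
    "Accept": "application/json",
    "Authorization": "Bearer secret-token",
}))
```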
Correlation with Backend Spans
Synthetic Monitoring runs are automatically linked to backend telemetry via Dash0’s tracing system. Each synthetic request generates spans that can be correlated with downstream service spans.
This allows you to:
- Filter all spans originating from Synthetic Checks by using dash0.trace.origin.type = SYNTHETIC_CHECK.
- View end-to-end traces: from the synthetic probe through the API gateway, into backend services, and down to database queries.
- Identify root cause: determine whether slowness comes from the network, the API layer, or a specific backend dependency.
For example:
- A synthetic run shows degraded latency.
- Drilling down into the trace reveals that an internal database query took 500 ms.
- Dash0’s UI lets you pivot from the synthetic check detail directly into the backend trace for full visibility.
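The traceparent request header mentioned earlier is what ties a probe's request to backend spans. As a generic sketch of the mechanism (standard OpenTelemetry Python usage, not Dash0's probe implementation), a client starts a span and injects the W3C trace context into the outgoing request headers so that downstream services join the same trace:

```python
import urllib.request

from opentelemetry import trace
from opentelemetry.propagate import inject
from opentelemetry.sdk.trace import TracerProvider

# Sketch of W3C trace-context propagation, the mechanism behind traceparent correlation.
# Generic OpenTelemetry usage; not Dash0's probe implementation.
trace.set_tracer_provider(TracerProvider())
tracer = trace.get_tracer("synthetic-probe-example")

with tracer.start_as_current_span("synthetic-check-run"):
    headers: dict[str, str] = {}
    inject(headers)  # adds a traceparent header for the current span context

    request = urllib.request.Request("https://example.com", headers=headers)
    with urllib.request.urlopen(request) as response:
        # Backend spans created while handling this request share the same trace ID,
        # which is what makes the end-to-end drill-down possible.
        print(response.status, headers.get("traceparent"))
```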
Alerting
Synthetic Monitoring integrates directly with Dash0 Notification Channels.
- Define degraded and critical thresholds for response time or uptime.
- Trigger alerts on consecutive failures or sustained latency.
- Configure notification channels (Slack, email, PagerDuty, webhooks).