Ingress controllers sit at one of the most critical junctions in Kubernetes: the edge of your cluster. They terminate TLS, route requests to the right backend, and in many ways act as the front door to your services. If the ingress fails, your users cannot reach you. If it slows down, everything behind it feels slower. And if it remains opaque, diagnosing issues at the edge becomes guesswork.
This post is part of a series exploring the state of observability for Kubernetes ingress controllers with OpenTelemetry. In the first installment, I showed how to make ingress-NGINX fully observable using OpenTelemetry and Dash0. Now we turn our attention to Contour, which uses Envoy under the hood and is gaining popularity with teams adopting the Gateway API.
Contour has steadily gained traction because it offers a modern, CRD-driven architecture and a clean separation of control and data planes. Many teams see it as a more forward-looking option than ingress-NGINX. But like any ingress controller, it sits at perhaps the most sensitive layer of your platform: the point where outside requests enter your system. That makes observability not just useful, but essential.
Just like ingress-NGINX, Contour offers native OpenTelemetry support for distributed tracing. Logs and metrics are not OpenTelemetry-native, so the OpenTelemetry Collector becomes essential: it scrapes Prometheus metrics, parses logs, enriches them with trace context, and sends all three signals to Dash0 where they are correlated into a single picture. That pattern - tracing native; logs and metrics via the Collector - recurs across ingress controllers, and it’s exactly how we’ll wire up Contour.
If you’d like to follow along with the examples in this post, a full demo is available in the dash0-examples repository, which spins up a kind cluster and configures everything for you.
Tracing with Envoy through Contour
Tracing is the one signal that works out of the box with OpenTelemetry. Because Contour delegates its data plane to Envoy, and Envoy has mature OpenTelemetry support, enabling tracing doesn’t require custom builds, sidecars, or patches. Instead, you just tell Contour where to send spans and define how they should be tagged.
This matters because the ingress layer often marks the true start of a distributed trace. A request may pass through multiple services and databases downstream, but everything begins when Envoy receives the first byte. Without ingress-level spans, you only see what happens inside your cluster. With them, you capture the whole journey end-to-end.
The process involves two CRDs:
- An ExtensionService that points to the Collector.
- A ContourConfiguration that enables tracing and references that service.
Note: an official Contour Helm chart hasn’t been published yet (there is a tracking PR), so these examples use the Bitnami-hosted Contour Helm chart. This is far from ideal given the upcoming deprecation of Bitnami artifacts.
First, define the ExtensionService. It provides a logical endpoint for Envoy to send spans to the Collector over OTLP with HTTP/2 cleartext (h2c):
```yaml
apiVersion: projectcontour.io/v1alpha1
kind: ExtensionService
metadata:
  name: otel-collector
  namespace: opentelemetry
spec:
  protocol: h2c
  services:
    - name: otel-collector
      port: 4317
```
Next, configure tracing in Contour. The ContourConfiguration tells Envoy to emit spans, what service name to use, and what additional context to capture:
```yaml
apiVersion: projectcontour.io/v1alpha1
kind: ContourConfiguration
metadata:
  name: contourconfig-contour
  namespace: projectcontour
  labels:
    gateway.networking.k8s.io/gateway-name: contour
    projectcontour.io/owning-gateway-name: contour
spec:
  tracing:
    serviceName: "contour-gateway"
    includePodDetail: true
    maxPathTagLength: 256
    extensionService:
      namespace: opentelemetry
      name: otel-collector
    customTags:
      - tagName: "user-agent"
        requestHeaderName: "User-Agent"
      - tagName: "x-request-id"
        requestHeaderName: "X-Request-ID"
      - tagName: "environment"
        literal: "demo"
```
Once applied, Envoy will begin emitting spans for every request it processes. If the request carries a traceparent header, Envoy will join that trace; if not, it will create a new root span. Either way, in Dash0 you will see ingress-level spans correlated with the rest of your distributed traces.
For full details, see the official tracing documentation.
Correlating Envoy logs with traces
Tracing may be OpenTelemetry-native, but logs are not. Envoy emits access logs in configurable formats, and by default those logs aren’t tied to traces. To make them useful, you need to add trace context.
The easiest way is to extend Envoy’s default JSON log format with the traceparent header. That way, every access log line includes the data needed to connect it to a trace.
Here’s the full logging configuration:
```yaml
spec:
  envoy:
    logging:
      accessLogFormat: envoy
      accessLogFormatString: |
        {"start_time": "%START_TIME%", "method": "%REQ(:METHOD)%", "path": "%REQ(X-ENVOY-ORIGINAL-PATH?:PATH)%", "protocol": "%PROTOCOL%", "response_code": %RESPONSE_CODE%, "response_flags": "%RESPONSE_FLAGS%", "bytes_received": %BYTES_RECEIVED%, "bytes_sent": %BYTES_SENT%, "duration": %DURATION%, "upstream_service_time": "%RESP(X-ENVOY-UPSTREAM-SERVICE-TIME)%", "x_forwarded_for": "%REQ(X-FORWARDED-FOR)%", "user_agent": "%REQ(USER-AGENT)%", "request_id": "%REQ(X-REQUEST-ID)%", "authority": "%REQ(:AUTHORITY)%", "upstream_host": "%UPSTREAM_HOST%", "traceparent": "%REQ(traceparent)%"}
```
A log entry might then look like this:
```json
{
  "start_time": "2025-09-22T13:58:18.832Z",
  "method": "GET",
  "path": "/api/data",
  "protocol": "HTTP/1.1",
  "response_code": 200,
  "user_agent": "curl/7.85.0",
  "traceparent": "00-45bf23abca694fb8f7665b30b70a4c59-ba1e94cebfeb7067-01"
}
```
Using OTTL to extract trace context
Now that the logs include a traceparent field, we need a way to parse that field and populate the correct trace and span IDs. This is where OTTL (the OpenTelemetry Transformation Language) comes in. If you are interested in learning more about OTTL, check out this guide.
OTTL is a small, query-like language built into the Collector. It lets you read and manipulate telemetry data as it flows through pipelines: parsing JSON bodies, matching values with regex, transforming attributes, and setting special fields like trace_id and span_id.
In our case, we use OTTL in a transform processor to:
- Parse the log body as JSON.
- Extract the traceparent string.
- Slice out the 32-character trace ID and 16-character span ID from the string (offsets illustrated below).
- Set those values on the log record so they can be used for correlation in Dash0.
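For example, in the traceparent value 00-45bf23abca694fb8f7665b30b70a4c59-ba1e94cebfeb7067-01 from the log entry above, the trace ID is the 32-character segment starting at character offset 3 (45bf23abca694fb8f7665b30b70a4c59) and the span ID is the 16-character segment starting at offset 36 (ba1e94cebfeb7067).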
Here’s the actual transform from the example repo:
```yaml
transform/trace-extract:
  log_statements:
    - context: log
      statements:
        - set(attributes["parsed_body"], ParseJSON(body)) where IsString(body) and IsMatch(body, "^\\{.*\\}$") and IsMatch(body, ".*traceparent.*")
        - set(trace_id.string, Substring(attributes["parsed_body"]["traceparent"], 3, 32)) where attributes["parsed_body"]["traceparent"] != nil and IsMatch(attributes["parsed_body"]["traceparent"], "^00-[a-f0-9]{32}-[a-f0-9]{16}-[0-9a-f]{2}$")
        - set(span_id.string, Substring(attributes["parsed_body"]["traceparent"], 36, 16)) where attributes["parsed_body"]["traceparent"] != nil and IsMatch(attributes["parsed_body"]["traceparent"], "^00-[a-f0-9]{32}-[a-f0-9]{16}-[0-9a-f]{2}$")
        - set(attributes["parsed_body"]["trace_id"], trace_id.string) where attributes["parsed_body"]["traceparent"] != nil and trace_id.string != nil
        - set(attributes["parsed_body"]["span_id"], span_id.string) where attributes["parsed_body"]["traceparent"] != nil and span_id.string != nil
        - set(body, attributes["parsed_body"]) where attributes["parsed_body"]["traceparent"] != nil
```
Note: Envoy doesn’t expose trace_id and span_id tokens directly; the only trace context available in the logs is the traceparent header. We also found that OTTL’s TraceID() and SpanID() functions don’t work with runtime strings - they expect compile-time literals. The workaround is to set trace_id.string and span_id.string, which Dash0 accepts for correlation.
This correlation makes debugging much faster. When you see a failed span in Dash0, you can jump directly to the corresponding access log entry and inspect not only the status code but also flags, headers, and upstream host details. In practice, this means a trace that looks “slow” or “broken” can often be explained by a single correlated log line.
Collecting metrics from Envoy and Contour
Metrics are the third piece. Neither Envoy nor Contour emits OpenTelemetry metrics natively; both expose Prometheus endpoints.
Envoy’s metrics surface is huge. In our demo, we scraped 470 metrics covering request rates, error codes, latency distributions, retries, cluster health, and more. Contour exposes just ~14 metrics, focused on proxies, DAG rebuilds, and update status.
The two sets complement each other: Envoy provides breadth, covering nearly every detail of data plane behavior, while Contour provides focus, surfacing the control plane’s ability to keep routes and proxies healthy. Taken together, they give you both the wide and narrow lenses needed to operate an ingress at scale.
The Collector can scrape both with a Prometheus receiver:
```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: 'contour-controller'
          static_configs:
            - targets: ['contour.projectcontour.svc:8000']
        - job_name: 'contour-envoy'
          static_configs:
            - targets: ['envoy.projectcontour.svc:8002']
```
Routing metrics through the Collector ensures they’re converted to OTLP and exported consistently alongside logs and traces.
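As a minimal sketch of that wiring - with a batch processor and an exporter named otlp/dash0 standing in as placeholders rather than the exact component names from the demo repo - the metrics pipeline in the Collector’s service section might look like this:
```yaml
service:
  pipelines:
    metrics:
      receivers: [prometheus]    # the scrape config shown above
      processors: [batch]        # assumed batch processor
      exporters: [otlp/dash0]    # placeholder name for an OTLP exporter pointed at Dash0
```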
Deploying the OpenTelemetry Collector
The Collector is the glue that ties the signals together. In this setup, just like with ingress-NGINX, it runs in two forms: a DaemonSet and a Deployment. Both are deployed using the OpenTelemetry Helm chart, which provides the CRDs, default values, and flexibility to configure receivers, processors, and exporters.
Collector as a DaemonSet
The DaemonSet ensures there’s a Collector pod on every node. This local presence allows it to read container logs directly from the node filesystem. In our setup, the DaemonSet runs the filelog receiver for Envoy access logs and applies the OTTL transforms that extract trace and span IDs. It then enriches log records with Kubernetes attributes and exports them to Dash0.
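Here is a condensed sketch of what that DaemonSet configuration can look like; the log file glob, the otlp/dash0 exporter name, and the environment variables are illustrative assumptions, and transform/trace-extract is the processor shown earlier:
```yaml
receivers:
  filelog:
    # Tail Envoy access logs from the node filesystem (glob is an assumption;
    # adjust it to match the Envoy pods in the projectcontour namespace)
    include:
      - /var/log/pods/projectcontour_envoy-*/envoy/*.log

processors:
  k8sattributes: {}   # enrich log records with pod, namespace, and node metadata
  batch: {}

exporters:
  otlp/dash0:
    endpoint: ${env:DASH0_ENDPOINT}                    # placeholder Dash0 OTLP endpoint
    headers:
      Authorization: Bearer ${env:DASH0_AUTH_TOKEN}    # placeholder auth token

service:
  pipelines:
    logs:
      receivers: [filelog]
      processors: [transform/trace-extract, k8sattributes, batch]
      exporters: [otlp/dash0]
```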
Collector as a Deployment
The Deployment has two responsibilities. First, it provides a central OTLP gRPC endpoint where Envoy sends its spans: it runs the OTLP receiver, batches spans, enriches them with metadata, and exports them to Dash0. Second, it runs the Prometheus receiver to scrape both Envoy and Contour metrics. This avoids duplication and centralizes metric collection.
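A minimal sketch of the Deployment side - again with otlp/dash0 as a placeholder exporter, and reusing the Prometheus scrape config shown earlier - pairs the OTLP receiver for Envoy’s spans with the metrics pipeline:
```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317   # the port the ExtensionService points Envoy at

processors:
  k8sattributes: {}
  batch: {}

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [k8sattributes, batch]
      exporters: [otlp/dash0]    # placeholder exporter, as above
    # plus the metrics pipeline shown earlier, scraping Envoy and Contour
```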
Putting it together
Together, these two Collector modes ensure all three signals reach Dash0:
- Traces flow directly from Envoy to the central Deployment.
- Metrics are scraped once centrally.
- Logs are tailed locally on each node, enriched with trace context, and exported.
It’s the same pattern used for ingress-NGINX, but adapted for Contour and Envoy. The common takeaway is that splitting responsibilities gives you the best of both worlds: node-local log collection and centralized trace + metric aggregation.
Final thoughts
Contour confirms the pattern we saw with ingress-NGINX: tracing is OpenTelemetry-native, while logs and metrics require the Collector. The Collector unifies these signals, scraping Prometheus metrics, parsing JSON logs, extracting trace_id/span_id from traceparent, and exporting everything over OTLP.
This blog is the second in our series on ingress controllers and OpenTelemetry. By repeating this exercise across ingress controllers, we start to map the current state of OpenTelemetry adoption in the Kubernetes ecosystem. We see clear progress on tracing, slower uptake for metrics and logs, and the Collector bridging the gaps in the meantime.
If you want to try the Contour setup yourself, the full implementation is available here: https://github.com/dash0hq/dash0-examples/tree/main/contour
And if you prefer not to build dashboards from scratch, there’s a ready-to-use Contour integration in the Dash0 Hub with a prebuilt dashboard and setup guidance.
For more background, see the official Contour docs on tracing support, Envoy metrics, and Contour metrics.
With OpenTelemetry and Dash0, the edge of your cluster doesn’t have to be a black box. It can be just as observable - and just as reliable - as the services it protects.