In this series, we’ve been exploring the state of OpenTelemetry support for ingress controllers. We began with Ingress-NGINX, where tracing is native but logs and metrics still need help from the OpenTelemetry Collector. Then we turned to Contour, which benefits from Envoy’s mature tracing but, like Ingress-NGINX, still requires scraping and parsing for logs and metrics.
Now, in this third installment, we focus on Traefik. Unlike the others, Traefik goes a step further: it supports OpenTelemetry traces and metrics natively and can export logs directly over OTLP, though that feature is currently experimental. The result is one of the most OpenTelemetry-friendly ingress controllers available today.
In this post, we’ll explore how Traefik’s built-in telemetry works in practice, how the OpenTelemetry Collector ties the signals together, and how Dash0 provides a unified view across traces, metrics, and logs. Along the way, we’ll lean on the ready-to-run demo in the dash0-examples repository.
Why Traefik is different
Traefik takes a different path than ingress controllers based on NGINX or Envoy. It was designed from the ground up for cloud-native environments, with dynamic service discovery, middleware chains, and CRD-driven configuration. That philosophy extends to its observability model. Tracing is built in and exported over OTLP without requiring patches or sidecar modules. Metrics are also first-class, following OpenTelemetry’s HTTP semantic conventions while exposing additional Traefik-specific series. Logs can now be exported over OTLP as well, though this capability is experimental and not yet directly exposed in the Helm chart values.
This support is improving rapidly thanks to community involvement. My colleague Michele Mancioppi has worked with the Traefik team to refine its OpenTelemetry implementation, ensuring spans and metrics follow semantic conventions and correlate seamlessly in platforms like Dash0.
Tracing at the edge
Distributed tracing begins at the moment a request enters your system. If the ingress is left out, you lose visibility into TLS negotiation, routing, and middleware delays. Traefik solves this by letting you configure OpenTelemetry tracing directly in its static configuration:
```yaml
tracing:
  otlp:
    http:
      endpoint: http://otel-collector.opentelemetry.svc.cluster.local:4318
```
Note: Examples here use OTLP/HTTP for clarity, but Traefik also supports OTLP/gRPC. Switching is as simple as replacing `http` with `grpc` in the configuration and adjusting the port.
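For reference, a minimal sketch of the gRPC variant might look like this (the endpoint reuses the in-cluster Collector assumed throughout this post):

```yaml
tracing:
  otlp:
    grpc:
      # OTLP/gRPC conventionally listens on port 4317 (OTLP/HTTP uses 4318).
      # gRPC endpoints are written as host:port, without a URL scheme.
      endpoint: otel-collector.opentelemetry.svc.cluster.local:4317
      # Assumes a plaintext in-cluster Collector; enable TLS as needed.
      insecure: true
```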
This setup instructs Traefik to generate spans for each request and send them over OTLP/HTTP to a Collector in the cluster. If a request arrives with a `traceparent` header, Traefik continues that trace. If not, it creates a new root span. Either way, you get ingress-level visibility. The spans carry attributes such as HTTP method, request path, status code, and client IP address, all aligned with OpenTelemetry’s semantic conventions.
In recent versions, you can control span volume with the `traceVerbosity` setting. When set to `minimal`, Traefik emits one server and one client span per request, keeping service graphs clean. When set to `detailed`, it also emits middleware and internal spans for deeper visibility. We advise using the minimal verbosity unless you need that extra detail.
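As a sketch, the setting slots into the same static configuration; treat this as illustrative, since the exact key placement may vary between Traefik releases:

```yaml
tracing:
  # Emit one server span and one client span per request; switch to
  # "detailed" to also capture middleware and internal spans.
  traceVerbosity: minimal
  otlp:
    http:
      endpoint: http://otel-collector.opentelemetry.svc.cluster.local:4318
```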
Metrics without scraping
Metrics are often the second signal you enable after tracing. With Ingress-NGINX and Contour, they require Prometheus scraping. Traefik simplifies this by pushing metrics directly in OTLP format:
```yaml
metrics:
  otlp:
    enabled: true
    http:
      endpoint: http://otel-collector.opentelemetry.svc.cluster.local:4318
    addEntryPointsLabels: true
    addRoutersLabels: true
    addServicesLabels: true
```
Once enabled, Traefik emits both OpenTelemetry semantic HTTP metrics and Traefik-specific metrics. The semantic series, such as request duration histograms, use standardized attributes and align directly with other services instrumented via OpenTelemetry. The Traefik series (`traefik_*`) cover configuration reloads, entrypoint throughput, open connections, and backend health. Together, they give you both a standards-based view and a controller-aware perspective.
By toggling the `addEntryPointsLabels`, `addRoutersLabels`, and `addServicesLabels` flags, you can enrich these metrics with labels that make them far more useful for troubleshooting and SLO tracking. For example, you can break down latency by router, measure throughput per service, or isolate behavior by entrypoint. The trade-off is higher cardinality in very large clusters, but for most teams the insight outweighs the cost.
Because Traefik exports metrics natively as OTLP and the Traefik team thoughtfully added the `k8s.pod.uid` resource attribute out of the box, the Collector simply enriches them with Kubernetes metadata and sends them on to Dash0. If you want to know more about how to make the most of resource attributes on Kubernetes, we have a handy guide.
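For illustration, a minimal `k8sattributes` processor configuration that uses that attribute for pod association could look like the following; the extracted metadata list is just an example:

```yaml
processors:
  k8sattributes:
    # Associate incoming telemetry with its pod via the k8s.pod.uid
    # resource attribute that Traefik already sets.
    pod_association:
      - sources:
          - from: resource_attribute
            name: k8s.pod.uid
    extract:
      metadata:
        - k8s.namespace.name
        - k8s.pod.name
        - k8s.deployment.name
        - k8s.node.name
```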
Structured logs over OTLP
Traefik can export logs directly over OTLP. This feature is still experimental, but it offers a compelling path: logs, traces, and metrics all flow through the same OTLP pipeline without the need to tail container stdout.
When OTLP logs are enabled, Traefik automatically includes the `trace_id` and `span_id` in each log record. This means correlation with traces works immediately: you don’t need to parse JSON or manually map fields. In Dash0, you can move seamlessly from a trace to its log entry or from a log entry back to its trace.
Here’s how the demo configures OTLP logs:
```yaml
additionalArguments:
  - "--experimental.otlplogs=true"
  - "--log.level=INFO"
  - "--log.otlp.endpoint=http://otel-collector.opentelemetry.svc.cluster.local:4318"
  - "--log.otlp.insecure=true"
  - "--accesslog=true"
  - "--accesslog.format=json"
  - "--accesslog.bufferingSize=0"
  - "--accesslog.otlp.endpoint=http://otel-collector.opentelemetry.svc.cluster.local:4318"
  - "--accesslog.otlp.insecure=true"
```
Notice that this uses `additionalArguments`. The Traefik Helm chart already provides native values for tracing and metrics (such as `tracing.otlp.*` and `metrics.otlp.*`), but not yet for logs. Until support is added, the log flags must be passed manually.
Stable fallback: Filelog tailing
If you prefer to avoid experimental features, there is a stable fallback. Traefik can still write JSON logs to stdout, which include `TraceId` and `SpanId`. A Collector DaemonSet can then tail these logs with the `filelog` receiver, parse the JSON, and map `TraceId` and `SpanId` to OpenTelemetry’s `trace_id` and `span_id`. This approach is more traditional but provides the same correlation, with the advantage of relying only on well-established features.
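As a sketch, the DaemonSet Collector’s `filelog` receiver could look like the following; the log path and the `container` parsing step are assumptions that depend on your container runtime and pod naming:

```yaml
receivers:
  filelog:
    # Path is an assumption; adjust to where your runtime writes pod logs.
    include:
      - /var/log/pods/*traefik*/*/*.log
    operators:
      # Strip the container runtime's log wrapping (CRI-O/containerd/docker).
      - type: container
      # Parse Traefik's JSON access-log line into attributes.
      - type: json_parser
        parse_from: body
      # Promote Traefik's TraceId/SpanId fields to the OpenTelemetry
      # trace context so log-trace correlation works downstream.
      - type: trace_parser
        trace_id:
          parse_from: attributes.TraceId
        span_id:
          parse_from: attributes.SpanId
```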
The role of the Collector
Even with Traefik’s strong OpenTelemetry support, the Collector remains central. It receives spans, metrics, and logs, enriches them with Kubernetes metadata, and exports everything consistently to Dash0.
In the OTLP logs setup, a single Collector Deployment is enough, since Traefik sends all three signals over OTLP. In the fallback setup, you add a DaemonSet to tail logs while keeping the central Deployment for traces and metrics.
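To make that concrete, here is a minimal sketch of the central Deployment’s pipeline configuration; the Dash0 endpoint and the `DASH0_AUTH_TOKEN` environment variable are placeholders, and the real values come from your Dash0 organization settings:

```yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318
processors:
  # Kubernetes metadata enrichment, configured as sketched earlier.
  k8sattributes: {}
exporters:
  otlphttp/dash0:
    # Placeholder endpoint and token; substitute your Dash0 org's values.
    endpoint: https://ingress.dash0.com
    headers:
      Authorization: "Bearer ${env:DASH0_AUTH_TOKEN}"
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [k8sattributes]
      exporters: [otlphttp/dash0]
    metrics:
      receivers: [otlp]
      processors: [k8sattributes]
      exporters: [otlphttp/dash0]
    logs:
      receivers: [otlp]
      processors: [k8sattributes]
      exporters: [otlphttp/dash0]
```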
How Traefik compares
With three controllers examined, some clear differences emerge. Tracing is available pretty much across the board: Ingress-NGINX, Contour, and Traefik all support it natively, making it possible to capture ingress spans. Metrics are a more varied topic. Ingress-NGINX and Contour expose them in Prometheus format, which remains the standard in most Kubernetes environments. Traefik builds on that foundation by also exporting metrics directly as OTLP and aligning them with OpenTelemetry’s semantic conventions, while still supporting Prometheus for teams that prefer it.
Logs remain the trickiest signal to standardize. Ingress-NGINX requires customizing log formats to inject trace context, Contour relies on parsing `traceparent` headers, and Traefik has taken the next step by adding an experimental OTLP log exporter. While still early, this feature makes log-trace correlation automatic and points toward a cleaner future.
Taken together, the picture reflects how OpenTelemetry adoption has unfolded in cloud native. Tracing support arrived first and is now table stakes. Metrics are catching up, with Prometheus continuing to play a central role and OTLP emerging alongside it. Logs are still the final frontier, but Traefik shows where things are heading: all three signals - traces, metrics, and logs - flowing natively over OpenTelemetry, reducing the need for custom glue and making correlation a built-in capability rather than an afterthought.
Final thoughts
Traefik demonstrates what ingress observability can look like when OpenTelemetry is embraced directly. Spans, metrics, and logs align with the standard, and correlation becomes seamless. Compared to Ingress-NGINX and Contour, the setup feels cleaner, with less reliance on scraping or parsing.
For platform engineers, that simplicity matters. Observability should feel invisible but never absent. Traefik shows that the edge of your cluster can be just as observable as the services behind it.
If you’d like to try it out yourself, check the dash0-examples repository. It includes a demo cluster with Traefik configured for tracing, metrics, and logs, plus Collector manifests to wire everything up. Once the data flows into Dash0, you can use the Dash0 Integration Hub to enable a prebuilt dashboard, seamless log-trace correlation, and confidence that the very first hop in your system is no longer a black box.
With OpenTelemetry and Dash0, Traefik doesn’t just route traffic - it tells the full story of every request from the moment it arrives.