Last updated: July 21, 2025
Mastering the OpenTelemetry Debug Exporter
The debug exporter is one of the most useful components in the OpenTelemetry Collector. Its function is quite simple: it prints your telemetry data (traces, metrics, logs) directly to the console.
Think of it as the `print` statement for your entire observability pipeline, making it an indispensable tool for development, testing, and troubleshooting.
In this guide, you will learn how to use it effectively to solve common problems and verify your Collector configuration.
Quick start: see it in action
To get started, add the `debug` exporter to your Collector's configuration and enable it in the pipelines you want to inspect:

otelcol.yaml
```yaml
exporters:
  debug:
    # verbosity can be 'basic', 'normal', or 'detailed'
    verbosity: detailed

service:
  pipelines:
    logs:
      exporters: [debug]
    traces:
      exporters: [debug]
    metrics:
      exporters: [debug]
```
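If you don't already have a Collector running, one way to try this locally is a quick sketch with the contrib distribution's Docker image. The image tag and config path below assume the `otel/opentelemetry-collector-contrib` image, and your config file must also define at least one receiver (such as the OTLP receiver used later in this article) for the pipelines to start:

```bash
# Mount your config over the image's default config file and run the Collector.
# Ports 4317/4318 are the default OTLP gRPC/HTTP ports, in case your config uses the otlp receiver.
docker run --rm \
  -v "$(pwd)/otelcol.yaml:/etc/otelcol-contrib/config.yaml" \
  -p 4317:4317 -p 4318:4318 \
  otel/opentelemetry-collector-contrib:0.129.0
```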
Viewing the output
The `debug` exporter writes all output to the Collector's standard error stream. To see it, you'll need to check the logs of the running Collector process, which depends on your deployment environment.
If you're running the Collector in Docker, you can view the live output by using the `docker logs` command with the `-f` flag to follow the log stream:

```bash
docker logs -f <your-collector-container-name>
```
For Kubernetes deployments, the process is very similar. You'll use `kubectl logs` to stream the logs from the specific Collector pod:

```bash
kubectl logs -f <your-collector-pod-name>
```
Finally, if you have the Collector running as a service on a bare metal machine or a virtual machine using systemd, you can tail the logs using journalctl:
```bash
journalctl -u <your-collector-service-name> -f
```
Configuring the verbosity levels
You can tune the exporter's output to fit your needs. The `verbosity` setting is the most consequential one, as it controls how much information is printed to your console. It supports three levels: `basic`, `normal`, and `detailed`.
`basic` verbosity
This level prints a single-line summary for each batch of data, confirming that data is flowing and showing a simple count of items. It's a quick way to check that your pipeline is connected without flooding the console:
```text
2025-07-14T17:12:25.325Z info Logs {"resource": {"service.instance.id": "534d43cd-7eab-4864-92ec-bc17b60939eb", "service.name": "otelcol-contrib", "service.version": "0.129.0"}, "otelcol.component.id": "debug", "otelcol.component.kind": "exporter", "otelcol.signal": "logs", "resource logs": 1, "log records": 4}
2025-07-14T17:12:26.328Z info Traces {"resource": {"service.instance.id": "534d43cd-7eab-4864-92ec-bc17b60939eb", "service.name": "otelcol-contrib", "service.version": "0.129.0"}, "otelcol.component.id": "debug", "otelcol.component.kind": "exporter", "otelcol.signal": "traces", "resource spans": 1, "spans": 15}
```
`normal` verbosity
This level offers a middle ground, providing a compact, structured view of your telemetry. It typically shows one line per span or log, including key identifiers but omitting the full data structure:
```text
2025-07-14T17:21:47.809Z info Logs {"resource": {"service.instance.id": "02385bc3-1ed7-4c61-936c-7d68267484aa", "service.name": "otelcol-contrib", "service.version": "0.129.0"}, "otelcol.component.id": "debug", "otelcol.component.kind": "exporter", "otelcol.signal": "logs", "resource logs": 1, "log records": 2}
2025-07-14T17:21:47.810Z info ResourceLog #0 [https://opentelemetry.io/schemas/1.26.0] host.name=node-1 k8s.container.name=otelgen k8s.namespace.name=default k8s.pod.name=otelgen-pod-14dc7ea0 service.name=otelgen
ScopeLog #0 otelgen
Log 20: Info phase: finish worker_id=20 service.name=otelgen trace_id=da8e1bbf91ce4184a7c6bda4b7b3cf59 span_id=6278766fa4d3c9bd trace_flags=01 phase=finish http.method=POST http.status_code=403 http.target=/api/v1/resource/20 k8s.pod.name=otelgen-pod-5dbeae56 k8s.namespace.name=default k8s.container.name=otelgen
Log 21: Debug phase: start worker_id=21 service.name=otelgen trace_id=14594f5d5fe78ff2dbeabf97484d7353 span_id=1b76c4e022eab7fd trace_flags=01 phase=start http.method=GET http.status_code=400 http.target=/api/v1/resource/21 k8s.pod.name=otelgen-pod-38472413 k8s.namespace.name=default k8s.container.name=otelgen
{"resource": {"service.instance.id": "02385bc3-1ed7-4c61-936c-7d68267484aa", "service.name": "otelcol-contrib", "service.version": "0.129.0"}, "otelcol.component.id": "debug", "otelcol.component.kind": "exporter", "otelcol.signal": "logs"}
```
`detailed` verbosity
This is the most verbose level and your best friend for debugging. It prints the full, unabridged data model for every signal, exactly as the Collector sees it:
```text
ResourceLog #0
Resource SchemaURL: https://opentelemetry.io/schemas/1.26.0
Resource attributes:
     -> host.name: Str(node-1)
     -> k8s.container.name: Str(otelgen)
     -> k8s.namespace.name: Str(default)
     -> k8s.pod.name: Str(otelgen-pod-ab06ca8b)
     -> service.name: Str(otelgen)
ScopeLogs #0
ScopeLogs SchemaURL:
InstrumentationScope otelgen
LogRecord #0
ObservedTimestamp: 2025-07-06 11:21:57.085421018 +0000 UTC
Timestamp: 2025-07-06 11:21:57.085420886 +0000 UTC
SeverityText: Error
SeverityNumber: Error(17)
Body: Str(Log 3: Error phase: finish)
Attributes:
     -> worker_id: Str(3)
     -> service.name: Str(otelgen)
     -> trace_id: Str(46287c1c7b7eebea22af2b48b97f4a49)
     -> span_id: Str(f5777521efe11f94)
     -> trace_flags: Str(01)
     -> phase: Str(finish)
     -> http.method: Str(PUT)
     -> http.status_code: Int(403)
     -> http.target: Str(/api/v1/resource/3)
     -> k8s.pod.name: Str(otelgen-pod-8f215fc5)
     -> k8s.namespace.name: Str(default)
     -> k8s.container.name: Str(otelgen)
Trace ID:
Span ID:
Flags: 0
```
This is the verbosity we'll use throughout this article, as it's the most useful for inspecting telemetry attributes and verifying processor modifications.
How to read the `debug` output
The `detailed` verbosity output presents a structured representation of your telemetry data, and it generally follows a Resource -> Scope -> Record hierarchy. Let's break down what to look for.
Resource vs Record attributes
OpenTelemetry data has two main locations for attributes:
- Resource attributes: These are broad attributes describing the entity that produced the data (e.g., `service.name`, `k8s.pod.name`, `host.arch`). They apply to all logs, traces, and metrics in that batch, and they are defined once at the top.
- Record attributes: These are specific to a single LogRecord, Span, or Metric DataPoint (such as `http.response.status_code`, `url.path`, or a custom business attribute).

This distinction is critical. For example, if your observability backend isn't correctly categorizing logs by Kubernetes namespace, the `debug` output might reveal why:

```text
ResourceLog #0
Resource SchemaURL: https://opentelemetry.io/schemas/1.26.0
Resource attributes:                          // <-- No k8s.namespace.name here
     -> service.name: Str(checkout-service)
ScopeLogs #0
...
LogRecord #0
ObservedTimestamp: 2025-07-15 09:58:30.123456789 +0000 UTC
...
Body: Str(Failed to process payment)
Attributes:
     -> trace_id: Str(a1b2c3d4...)
     -> customer_id: Str(4815162342)
     -> k8s.namespace.name: Str(production)   // <-- It's here, at the Record level instead of Resource
```

The `k8s.namespace.name` attribute exists, but it's on the individual Record. According to the OpenTelemetry Semantic Conventions, it should be a Resource attribute. The fix is to use a processor, like the transform processor, to move the attribute from the Record up to the Resource level.
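A minimal sketch of that fix with the transform processor could look like the following; the statements assume the attribute key from the example above, so adapt them to your own data:

```yaml
processors:
  transform:
    log_statements:
      - context: log
        statements:
          # Copy the namespace from the log record up to its resource...
          - set(resource.attributes["k8s.namespace.name"], attributes["k8s.namespace.name"]) where attributes["k8s.namespace.name"] != nil
          # ...then remove the record-level copy to avoid duplication.
          - delete_key(attributes, "k8s.namespace.name")
```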
Data types matter
The `debug` output explicitly tells you the data type of every field: `Str()`, `Int()`, `Bool()`, `Map()`, etc. Processors are often strict about these types, so you may see unexpected results if you don't pay attention to them.

For example, imagine you want to move trace context from a log's attributes to the correct top-level fields. The `debug` output shows they are all strings:

```text
LogRecord #0
[...]
Attributes:
     -> trace_id: Str(9ab4c4fb62d43c6c1bf1d986d1e85758)
     -> span_id: Str(6dd880726d5c05ee)
     -> trace_flags: Str(01)
Trace ID:
Span ID:
Flags: 0
```

An incorrect transform statement might try to set the fields directly, which will fail silently because the top-level fields expect different data types (byte slices and integers):
otelcol.yaml
```yaml
processors:
  transform:
    log_statements:
      - context: log
        statements:
          - set(trace_id, attributes["trace_id"])    # Expects byte slice, gets string
          - set(span_id, attributes["span_id"])      # Expects byte slice, gets string
          - set(flags, attributes["trace_flags"])    # Expects int, gets string
```

The correct solution is to use the appropriate type conversion functions and setters provided by the processor:
otelcol.yaml
```yaml
processors:
  transform:
    log_statements:
      - context: log
        statements:
          - set(trace_id.string, attributes["trace_id"])
          - set(span_id.string, attributes["span_id"])
          - set(flags, Int(attributes["trace_flags"]))
```

This works and produces the expected output:
```text
LogRecord #0
[...]
Attributes:
     -> trace_id: Str(9ab4c4fb62d43c6c1bf1d986d1e85758)
     -> span_id: Str(6dd880726d5c05ee)
     -> trace_flags: Str(01)
Trace ID: 9ab4c4fb62d43c6c1bf1d986d1e85758
Span ID: 6dd880726d5c05ee
Flags: 1
```

Debugging processors with chained pipelines
The most common and critical use for the `debug` exporter is to verify that your processors are behaving correctly. Are they adding the right attributes? Are they dropping data you want to keep?

To do this effectively, you need to see the telemetry data before and after it passes through a processor. One way to achieve this is with chained pipelines, where one pipeline shows the raw data and a second one shows the processed result.
This pattern uses named instances of the debug exporter (`debug/raw` and `debug/processed`) and an internal OTLP receiver/exporter pair to pass data between pipelines:

otelcol.yaml
```yaml
receivers:
  otlp: # Receives data from your application
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
  otlp/internal: # Receives data from the first pipeline
    protocols:
      grpc:
        endpoint: 0.0.0.0:4316 # Internal communication port

processors:
  k8sattributes:
    # [...]

exporters:
  debug/raw:
    verbosity: detailed
  debug/processed:
    verbosity: detailed
  otlp/internal: # Sends data to the second pipeline
    endpoint: 127.0.0.1:4316
    tls:
      insecure: true

service:
  pipelines:
    logs/raw:
      receivers: [otlp]
      exporters: [debug/raw, otlp/internal] # export the data to the 2nd pipeline
    logs/processed:
      receivers: [otlp/internal] # receive the raw data
      processors: [k8sattributes, ...] # add your processors
      exporters: [debug/processed] # see the processed output
```

With this configuration, you'll see two distinct outputs for each piece of telemetry. First, the output from `debug/raw` shows the data as it arrived, with minimal resource attributes:
```text
2025-07-15T06:53:43.100Z info ResourceLog #0
Resource SchemaURL: https://opentelemetry.io/schemas/1.26.0
Resource attributes:
     -> host.name: Str(node-1)
     -> service.name: Str(otelgen)
[...]
```

Next, the output from `debug/processed` shows the same data, now enriched by the `k8sattributes` processor:

```text
2025-07-15T06:53:43.269Z info ResourceLog #0
Resource SchemaURL: https://opentelemetry.io/schemas/1.26.0
Resource attributes:
     -> host.name: Str(node-1)
     -> k8s.container.name: Str(otelgen)
     -> k8s.namespace.name: Str(default)
     -> k8s.pod.name: Str(otelgen-pod-24c23f5a)
     -> service.name: Str(otelgen)
[...]
```

To distinguish between the "raw" pipeline and the "processed" one in the console, check the `otelcol.component.id` field in the log line preceding the `debug` exporter output, which will be either `debug/raw` or `debug/processed` (or see an easier way below):

```text
2025-07-15T11:29:34.956Z info Logs {"resource": {..., "otelcol.component.id": "debug/processed", ...}
```

By comparing the before and after states, you get undeniable proof of what your processor is doing.
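As an aside, if you'd rather avoid the loopback over OTLP/gRPC, the Collector's forward connector can chain the two pipelines in-process. This is only a sketch of the same idea, reusing the component names from the config above and assuming the forward connector is included in your Collector distribution:

```yaml
connectors:
  forward: # passes data from one pipeline to another without a network hop

service:
  pipelines:
    logs/raw:
      receivers: [otlp]
      exporters: [debug/raw, forward]
    logs/processed:
      receivers: [forward]
      processors: [k8sattributes]
      exporters: [debug/processed]
```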
Some tips for easier troubleshooting
Using the debug exporter is easy, but using it effectively requires you to avoid common traps.
1. Always sample in high-volume environments
When debugging a high-traffic pipeline, the debug output can be overwhelming. To make the output manageable, the exporter offers two sampling options:
- `sampling_initial` (default: 2): The number of telemetry records to log per second before sampling kicks in.
- `sampling_thereafter` (default: 1): Defines the rate for records after the initial burst. A value of 100 means only one out of every 100 records will be logged.
otelcol.yaml
```yaml
exporters:
  debug:
    verbosity: detailed
    sampling_initial: 5
    sampling_thereafter: 100
```
2. Distinguish pipelines with color
If you're using multiple `debug` instances in your pipelines, their text output can blend together. To make them easy to distinguish, you can switch the Collector's logger to JSON and use `jq` to parse and colorize the output.
First, enable JSON logging in your Collector configuration:
otelcol.yaml
```yaml
service:
  telemetry:
    logs:
      encoding: json
```
With this setting, the Collector wraps each debug entry in a JSON object. The raw, multi-line debug output gets packed into a single `msg` field, while the pipeline identifier is stored in the `otelcol.component.id` field:
```json
{
  "level": "info",
  "ts": "2025-07-15T11:37:34.257Z",
  "msg": "ResourceLog #0\nResource SchemaURL: ... (full debug output) ...",
  "resource": {
    "service.instance.id": "c4f4782d-7983-473e-8f9c-b935daa58e94",
    [...]
  },
  "otelcol.component.id": "debug/raw",
  "otelcol.component.kind": "exporter",
  "otelcol.signal": "logs"
}
```
You can then pipe the Collector's logs to the following `jq` command to create a clean, color-coded, and readable view:

```bash
docker logs otel-collector -f | jq -r '
  if .["otelcol.component.id"] == "debug/raw" then
    "\u001b[1;33m[FROM: raw]\n---\n\(.msg)\u001b[0m"
  elif .["otelcol.component.id"] == "debug/processed" then
    "\u001b[1;36m[FROM: processed]\n---\n\(.msg)\u001b[0m"
  else
    .
  end'
```
This `jq` script reconstructs the log output on the fly. The `-r` (raw output) flag is essential for correctly processing newlines and color codes. The script uses an `if/elif/else` block to check the `otelcol.component.id`, then constructs a new colored string using ANSI escape codes (e.g., `\u001b[1;33m` for bold yellow) and string interpolation (`\(.msg)`). The final `else .` clause ensures that other log lines from the Collector continue to be printed as before.
The result is a color-coded output that’s significantly easier to distinguish.
3. Check your upstream sources if no data is logged
If the `debug` exporter isn't showing any output, use a minimal pipeline first (without processors) to prove that data is actually arriving:

otelcol.yaml
```yaml
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
```
If you deploy this and still see nothing, the problem is almost always upstream. Check your application's instrumentation, the receiver's configuration, and any network policies or firewalls between your application and the Collector.
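One way to rule out the application side entirely is to send a hand-crafted test span straight to the Collector. The snippet below is a sketch that assumes the OTLP receiver has the `http` protocol enabled on port 4318; if you only enabled `grpc`, add `http:` under `protocols` first. The trace and span IDs are arbitrary placeholder values:

```bash
# Send a single minimal span via OTLP/HTTP; it should appear in the debug output
curl -X POST http://localhost:4318/v1/traces \
  -H "Content-Type: application/json" \
  -d '{
    "resourceSpans": [{
      "resource": { "attributes": [{ "key": "service.name", "value": { "stringValue": "debug-test" } }] },
      "scopeSpans": [{ "spans": [{
        "traceId": "5b8efff798038103d269b633813fc60c",
        "spanId": "eee19b7ec3c1b174",
        "name": "debug-test-span",
        "kind": 1,
        "startTimeUnixNano": "1752570000000000000",
        "endTimeUnixNano": "1752570001000000000"
      }] }]
    }]
  }'
```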
Final thoughts
Ultimately, the goal of any observability pipeline is to send high-quality, reliable, and well-structured data to a powerful backend. Using the `debug` exporter to confirm your data is correct before it leaves the Collector is currently the best way to do this.
Once your data is clean and your attributes are correct, the final step is sending it to an OpenTelemetry-native platform like Dash0 that can help you transform it into actionable insights.
