Last updated: July 21, 2025

Mastering the OpenTelemetry Debug Exporter

The debug exporter is one of the most useful components in the OpenTelemetry Collector. Its function is quite simple: it prints your telemetry data (traces, metrics, logs) directly to the console.

Think of it as the print statement for your entire observability pipeline, making it an indispensable tool for development, testing, and troubleshooting.

In this guide, you will learn how to use it effectively to solve common problems and verify your Collector configuration.

Quick start: see it in action

To get started, add the debug exporter to your Collector’s configuration and enable it in the pipelines you want to inspect:

otelcol.yaml
exporters:
  debug:
    # verbosity can be 'basic', 'normal', or 'detailed'
    verbosity: detailed

service:
  pipelines:
    logs:
      exporters: [debug]
    traces:
      exporters: [debug]
    metrics:
      exporters: [debug]

Viewing the output

The debug exporter writes all output to the Collector’s standard error stream. To see it, you’ll need to check the logs of the running Collector process, which depends on your deployment environment.

If you’re running the Collector in Docker, you can view the live output by using the docker logs command with the -f flag to follow the log stream:

docker logs -f <your-collector-container-name>
[Image: OpenTelemetry Collector debug exporter log output]

For Kubernetes deployments, the process is very similar. You’ll use kubectl logs to stream the logs from the specific Collector pod:

kubectl logs -f <your-collector-pod-name>

Finally, if you have the Collector running as a service on a bare metal machine or a virtual machine using systemd, you can tail the logs using journalctl:

journalctl -u <your-collector-service-name> -f

Configuring the verbosity levels

You can tune the exporter’s output to fit your needs. The verbosity setting is the most consequential, as it controls how much information is printed to the console. It accepts three levels: basic, normal, and detailed.

basic verbosity

This level prints a single-line summary for each batch of data, confirming that data is flowing and showing a simple count of items. Use it to quickly verify that your pipeline is connected without flooding the console:

2025-07-14T17:12:25.325Z info Logs {"resource": {"service.instance.id": "534d43cd-7eab-4864-92ec-bc17b60939eb", "service.name": "otelcol-contrib", "service.version": "0.129.0"}, "otelcol.component.id": "debug", "otelcol.component.kind": "exporter", "otelcol.signal": "logs", "resource logs": 1, "log records": 4}
2025-07-14T17:12:26.328Z info Traces {"resource": {"service.instance.id": "534d43cd-7eab-4864-92ec-bc17b60939eb", "service.name": "otelcol-contrib", "service.version": "0.129.0"}, "otelcol.component.id": "debug", "otelcol.component.kind": "exporter", "otelcol.signal": "traces", "resource spans": 1, "spans": 15}

normal verbosity

This level offers a middle ground, providing a compact, structured view of your telemetry. It typically shows one line per span or log, including key identifiers but omitting the full data structure:

2025-07-14T17:21:47.809Z info Logs {"resource": {"service.instance.id": "02385bc3-1ed7-4c61-936c-7d68267484aa", "service.name": "otelcol-contrib", "service.version": "0.129.0"}, "otelcol.component.id": "debug", "otelcol.component.kind": "exporter", "otelcol.signal": "logs", "resource logs": 1, "log records": 2}
2025-07-14T17:21:47.810Z info ResourceLog #0 [https://opentelemetry.io/schemas/1.26.0] host.name=node-1 k8s.container.name=otelgen k8s.namespace.name=default k8s.pod.name=otelgen-pod-14dc7ea0 service.name=otelgen
ScopeLog #0 otelgen
Log 20: Info phase: finish worker_id=20 service.name=otelgen trace_id=da8e1bbf91ce4184a7c6bda4b7b3cf59 span_id=6278766fa4d3c9bd trace_flags=01 phase=finish http.method=POST http.status_code=403 http.target=/api/v1/resource/20 k8s.pod.name=otelgen-pod-5dbeae56 k8s.namespace.name=default k8s.container.name=otelgen
Log 21: Debug phase: start worker_id=21 service.name=otelgen trace_id=14594f5d5fe78ff2dbeabf97484d7353 span_id=1b76c4e022eab7fd trace_flags=01 phase=start http.method=GET http.status_code=400 http.target=/api/v1/resource/21 k8s.pod.name=otelgen-pod-38472413 k8s.namespace.name=default k8s.container.name=otelgen
{"resource": {"service.instance.id": "02385bc3-1ed7-4c61-936c-7d68267484aa", "service.name": "otelcol-contrib", "service.version": "0.129.0"}, "otelcol.component.id": "debug", "otelcol.component.kind": "exporter", "otelcol.signal": "logs"}

detailed verbosity

This is the most verbose level and your best friend for debugging. It prints the full, unabridged data model for every signal, exactly as the Collector sees it:

ResourceLog #0
Resource SchemaURL: https://opentelemetry.io/schemas/1.26.0
Resource attributes:
-> host.name: Str(node-1)
-> k8s.container.name: Str(otelgen)
-> k8s.namespace.name: Str(default)
-> k8s.pod.name: Str(otelgen-pod-ab06ca8b)
-> service.name: Str(otelgen)
ScopeLogs #0
ScopeLogs SchemaURL:
InstrumentationScope otelgen
LogRecord #0
ObservedTimestamp: 2025-07-06 11:21:57.085421018 +0000 UTC
Timestamp: 2025-07-06 11:21:57.085420886 +0000 UTC
SeverityText: Error
SeverityNumber: Error(17)
Body: Str(Log 3: Error phase: finish)
Attributes:
-> worker_id: Str(3)
-> service.name: Str(otelgen)
-> trace_id: Str(46287c1c7b7eebea22af2b48b97f4a49)
-> span_id: Str(f5777521efe11f94)
-> trace_flags: Str(01)
-> phase: Str(finish)
-> http.method: Str(PUT)
-> http.status_code: Int(403)
-> http.target: Str(/api/v1/resource/3)
-> k8s.pod.name: Str(otelgen-pod-8f215fc5)
-> k8s.namespace.name: Str(default)
-> k8s.container.name: Str(otelgen)
Trace ID:
Span ID:
Flags: 0

This is the verbosity level we’ll use throughout this article, as it’s the most useful for inspecting telemetry attributes and verifying processor modifications.

How to read the debug output

The detailed verbosity output presents a structured representation of your telemetry data, and it generally follows a Resource -> Scope -> Record hierarchy. Let’s break down what to look for.

Resource vs Record attributes

OpenTelemetry data has two main locations for attributes:

  • Resource attributes: These are broad attributes describing the entity that produced the data (e.g., service.name, k8s.pod.name, host.arch). They apply to all logs, traces, and metrics in that batch and they are defined once at the top.
  • Record attributes: These are specific to a single LogRecord, Span, or Metric DataPoint (such as http.response.status_code, url.path, or a custom business attribute).

This distinction is critical. For example, if your observability backend isn’t correctly categorizing logs by Kubernetes namespace, the debug output might reveal why:

ResourceLog #0
Resource SchemaURL: https://opentelemetry.io/schemas/1.26.0
Resource attributes: // <-- No k8s.namespace.name here
-> service.name: Str(checkout-service)
ScopeLogs #0
...
LogRecord #0
ObservedTimestamp: 2025-07-15 09:58:30.123456789 +0000 UTC
...
Body: Str(Failed to process payment)
Attributes:
-> trace_id: Str(a1b2c3d4...)
-> customer_id: Str(4815162342)
-> k8s.namespace.name: Str(production) // <-- It's here, at the Record level instead of Resource

The k8s.namespace.name attribute exists, but it’s on the individual Record. According to OpenTelemetry Semantic Conventions, it should be a Resource attribute. The fix is to use a processor, like the transform processor, to move the attribute from the Record up to the Resource level.
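Here is a minimal sketch of that fix with the transform processor. It assumes the attribute name from the example above (k8s.namespace.name); adapt the statements to your own data:

otelcol.yaml
processors:
  transform:
    log_statements:
      - context: log
        statements:
          # Copy the attribute from the log record up to its resource
          - set(resource.attributes["k8s.namespace.name"], attributes["k8s.namespace.name"]) where attributes["k8s.namespace.name"] != nil
          # Then drop the record-level copy to avoid duplication
          - delete_key(attributes, "k8s.namespace.name")

You can verify the result the same way: with detailed verbosity, the attribute should now appear under Resource attributes rather than in the LogRecord’s Attributes.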

Data types matter

The debug output explicitly tells you the data type of every field: Str(), Int(), Bool(), Map(), etc. Processors are often strict about these types, so you may see unexpected results if you don’t pay attention to them.

For example, imagine you want to move trace context from a log’s attributes to the correct top-level fields. The debug output shows they are all strings:

LogRecord #0
[...]
Attributes:
-> trace_id: Str(9ab4c4fb62d43c6c1bf1d986d1e85758)
-> span_id: Str(6dd880726d5c05ee)
-> trace_flags: Str(01)
Trace ID:
Span ID:
Flags: 0

An incorrect transform statement might try to set the fields directly, which will fail silently because the top-level fields expect different data types (byte slices and integers):

otelcol.yaml
processors:
  transform:
    log_statements:
      - context: log
        statements:
          - set(trace_id, attributes["trace_id"]) # Expects byte slice, gets string
          - set(span_id, attributes["span_id"]) # Expects byte slice, gets string
          - set(flags, attributes["trace_flags"]) # Expects int, gets string

The correct solution is to use the appropriate type conversion functions and setters provided by the processor:

otelcol.yaml
processors:
  transform:
    log_statements:
      - context: log
        statements:
          - set(trace_id.string, attributes["trace_id"])
          - set(span_id.string, attributes["span_id"])
          - set(flags, Int(attributes["trace_flags"]))

This works and produces the expected output:

LogRecord #0
[...]
Attributes:
-> trace_id: Str(9ab4c4fb62d43c6c1bf1d986d1e85758)
-> span_id: Str(6dd880726d5c05ee)
-> trace_flags: Str(01)
Trace ID: 9ab4c4fb62d43c6c1bf1d986d1e85758
Span ID: 6dd880726d5c05ee
Flags: 1

Debugging processors with chained pipelines

The most common and critical use for the debug exporter is to verify that your processors are behaving correctly. Are they adding the right attributes? Are they dropping data you want to keep?

To do this effectively, you need to see the telemetry data before and after it passes through a processor. One way to achieve this is with chained pipelines, where one pipeline shows the raw data and a second one shows the processed result.

This pattern uses named instances of the debug exporter (debug/raw and debug/processed) and an internal OTLP receiver/exporter pair to pass data between pipelines.

otelcol.yaml
receivers:
  otlp: # Receives data from your application
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
  otlp/internal: # Receives data from the first pipeline
    protocols:
      grpc:
        endpoint: 0.0.0.0:4316 # Internal communication port

processors:
  k8sattributes:
    # [...]

exporters:
  debug/raw:
    verbosity: detailed
  debug/processed:
    verbosity: detailed
  otlp/internal: # Sends data to the second pipeline
    endpoint: 127.0.0.1:4316
    tls:
      insecure: true

service:
  pipelines:
    logs/raw:
      receivers: [otlp]
      exporters: [debug/raw, otlp/internal] # export the data to the 2nd pipeline
    logs/processed:
      receivers: [otlp/internal] # receive the raw data
      processors: [k8sattributes, ...] # add your processors
      exporters: [debug/processed] # see the processed output

With this configuration, you’ll see two distinct outputs for each piece of telemetry. First, the output from debug/raw shows the data as it arrived, with minimal resource attributes:

2025-07-15T06:53:43.100Z info ResourceLog #0
Resource SchemaURL: https://opentelemetry.io/schemas/1.26.0
Resource attributes:
-> host.name: Str(node-1)
-> service.name: Str(otelgen)
[...]

Next, the output from debug/processed shows the same data, now enriched by the k8sattributes processor:

2025-07-15T06:53:43.269Z info ResourceLog #0
Resource SchemaURL: https://opentelemetry.io/schemas/1.26.0
Resource attributes:
-> host.name: Str(node-1)
-> k8s.container.name: Str(otelgen)
-> k8s.namespace.name: Str(default)
-> k8s.pod.name: Str(otelgen-pod-24c23f5a)
-> service.name: Str(otelgen)
[...]

To distinguish between the “raw” pipeline and the “processed” one in the console, check the otelcol.component.id field in the log line that precedes each debug exporter output; it will be either debug/raw or debug/processed (or see an easier way below):

2025-07-15T11:29:34.956Z info Logs {"resource": {..., "otelcol.component.id": "debug/processed", ...}

By comparing the before and after states, you get undeniable proof of what your processor is doing.
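As a side note, if your Collector build includes the forward connector (it ships with the official core and contrib distributions), you can chain the two pipelines without opening an internal OTLP port. A rough sketch of the same before/after setup, reusing the receiver, processor, and debug exporters defined above:

otelcol.yaml
connectors:
  forward:

service:
  pipelines:
    logs/raw:
      receivers: [otlp]
      exporters: [debug/raw, forward] # print the raw view, then hand the data to the next pipeline
    logs/processed:
      receivers: [forward] # receives everything logs/raw exported
      processors: [k8sattributes]
      exporters: [debug/processed] # print the processed view

Either approach works; the connector simply avoids the loopback network hop and the extra port.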

Some tips for easier troubleshooting

Using the debug exporter is easy, but using it effectively requires you to avoid common traps.

1. Always sample in high-volume environments

When debugging a high-traffic pipeline, the debug output can be overwhelming. To make the output manageable, the exporter offers two sampling options:

  1. sampling_initial (default: 2): The number of telemetry records to log per second before sampling begins.
  2. sampling_thereafter (default: 1): The sampling rate applied after the initial burst. A value of 100 means only one out of every 100 records will be logged.
otelcol.yaml
exporters:
  debug:
    verbosity: detailed
    sampling_initial: 5
    sampling_thereafter: 100

2. Distinguish pipelines with color

If you’re using multiple debug instances in your pipelines, their text output can blend together. To make them easy to distinguish, you can switch the Collector’s logger to JSON and use jq to parse and colorize the output.

First, enable JSON logging in your Collector configuration:

otelcol.yaml
service:
  telemetry:
    logs:
      encoding: json

With this setting, the Collector wraps each debug entry in a JSON object. The raw, multi-line debug output gets packed into a single msg field, while the pipeline identifier is stored in the otelcol.component.id field:

{
  "level": "info",
  "ts": "2025-07-15T11:37:34.257Z",
  "msg": "ResourceLog #0\nResource SchemaURL: ... (full debug output) ...",
  "resource": {
    "service.instance.id": "c4f4782d-7983-473e-8f9c-b935daa58e94",
    [...]
  },
  "otelcol.component.id": "debug/raw",
  "otelcol.component.kind": "exporter",
  "otelcol.signal": "logs"
}

You can then pipe the Collector’s logs to the following jq command to create a clean, color-coded, and readable view:

docker logs otel-collector -f | jq -r '
  if .["otelcol.component.id"] == "debug/raw" then
    "\u001b[1;33m[FROM: raw]\n---\n\(.msg)\u001b[0m"
  elif .["otelcol.component.id"] == "debug/processed" then
    "\u001b[1;36m[FROM: processed]\n---\n\(.msg)\u001b[0m"
  else
    .
  end
'

This jq script reconstructs the log output on the fly. The -r (raw output) flag is essential for correctly processing newlines and color codes. The script uses an if/elif/else block to check the otelcol.component.id, then constructs a new colored string using ANSI escape codes (e.g., \u001b[1;33m for bold yellow) and string interpolation (\(.msg)). The final else . clause ensures that other log lines from the collector continue to be printed as before.

[Image: Making the debug exporter output easier to read]

The result is a color-coded output that’s significantly easier to distinguish.

3. Check your upstream sources if no data is logged

If the debug exporter isn’t showing any output, use a minimal pipeline first (without processors) to prove that data is actually arriving:

otelcol.yaml
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]

If you deploy this and still see nothing, the problem is almost always upstream. Be sure to check your application’s instrumentation, the receiver’s configuration, and any network policies or firewalls between your application and the Collector.
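If nothing obvious turns up, it can also help to raise the Collector’s own log level so that receiver and connection errors become visible in the logs you are already tailing. A small sketch (this tunes the Collector’s internal telemetry, not the debug exporter itself):

otelcol.yaml
service:
  telemetry:
    logs:
      level: debug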

Final thoughts

Ultimately, the goal of any observability pipeline is to send high-quality, reliable, and well-structured data to a powerful backend. Using the debug exporter to confirm your data is correct before it leaves the Collector is currently the best way to do this.

[Image: Tracing view in Dash0]

Once your data is clean and your attributes are correct, the final step is sending it to an OpenTelemetry-native platform like Dash0 that can help you transform it into actionable insights.

Try it today by signing up for a free trial.

Authors
Ayooluwa Isaiah