Last updated: August 11, 2025

A Guide to OpenTelemetry Environment Variables

Getting the most out of OpenTelemetry isn't just about instrumenting your code, but also about controlling how that instrumentation behaves once your application is running.

Environment variables give you a powerful way to do exactly that. With a few well-chosen settings, you can direct where your telemetry goes, decide how much to collect, and fine-tune performance without touching a single line of code.

This guide focuses on the OpenTelemetry SDK—the part of OTel that runs inside your application and generates telemetry data. It's separate from the OpenTelemetry Collector, which has its own configuration for processing and exporting.

Here, we'll break down the most useful SDK environment variables, explain what each one does, and show practical examples for setting them in different environments.

Let's get started!

Start with the four variables that matter most

If you remember nothing else about OpenTelemetry environment variables, remember these four. They form the core of any reliable telemetry setup.

1. OTEL_SERVICE_NAME

This is the single most important variable. It defines the logical name of your application or service, and every trace, metric, and log from the SDK carries this name.

OpenTelemetry service name in Dash0 interface

In your observability backend, it's the primary key for filtering, grouping, and searching. Without it, your telemetry is anonymous. With it, you can answer questions like, “What's the error rate for user-service?”

```bash
export OTEL_SERVICE_NAME="authentication-service"
```
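How you set the variable depends on where the process runs. As one common example, here's how you might pass it to a containerized application (the image name `my-app:latest` is a placeholder):

```bash
# Pass the service name into a container at launch time
# ("my-app:latest" is a placeholder image name)
docker run -e OTEL_SERVICE_NAME="authentication-service" my-app:latest
```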

2. OTEL_RESOURCE_ATTRIBUTES

Resource attributes add context to your telemetry by describing the environment where your code is running. They're key-value pairs that capture details such as environment type, service version, cloud region, or any other metadata that helps you understand and filter your data.

OpenTelemetry Resource attributes in Dash0

With these attributes, you can quickly answer questions like:

  • Is this issue happening in production or only in staging?
  • Is it affecting all regions, or just eu-west-1?
  • Did the error start after we deployed version 1.3.0?
```bash
export OTEL_RESOURCE_ATTRIBUTES="deployment.environment=production,service.version=1.2.0,cloud.region=us-east-1"
```

Attributes are provided as a comma-separated list of key=value pairs. Keys should follow the OpenTelemetry semantic conventions where they apply, for consistency across services, but you can also include custom keys relevant to your business.

Note that OTEL_SERVICE_NAME is actually a shorthand for OTEL_RESOURCE_ATTRIBUTES=service.name=my-service. If both are set, OTEL_SERVICE_NAME takes precedence.
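To see that precedence in action, the snippet below sets service.name both ways; per the specification, the value from OTEL_SERVICE_NAME wins (the service names here are illustrative):

```bash
# service.name appears in both variables; OTEL_SERVICE_NAME takes
# precedence, so the SDK reports "checkout-service", not "ignored-name"
export OTEL_RESOURCE_ATTRIBUTES="service.name=ignored-name,deployment.environment=staging"
export OTEL_SERVICE_NAME="checkout-service"
```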

3. OTEL_EXPORTER_OTLP_ENDPOINT

This variable tells the OpenTelemetry SDK where to send telemetry data. The OpenTelemetry Protocol (OTLP) is the modern, recommended format, and can be used over either gRPC or HTTP.

```bash
# Send data via gRPC to a local Collector
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
```

The value should point to an OpenTelemetry Collector (recommended) or an observability backend that will receive the data. The default ports are 4317 for gRPC and 4318 for HTTP, but your deployment may use different ones, so adjust accordingly.

Exporting data to Dash0 over gRPC

For HTTP only, the SDK appends the signal-specific path to this base URL. Assuming OTEL_EXPORTER_OTLP_ENDPOINT is set to http://localhost:4318, the final destinations would be: traces sent to http://localhost:4318/v1/traces, metrics to http://localhost:4318/v1/metrics, and logs to http://localhost:4318/v1/logs.

Overriding the destinations per signal

You can also set separate endpoints for traces, metrics, and logs. These override the base OTEL_EXPORTER_OTLP_ENDPOINT for their respective signals:

```bash
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="https://jaeger.example.com:4317"
export OTEL_EXPORTER_OTLP_METRICS_ENDPOINT="http://localhost:9090/api/v1/otlp/v1/metrics"
```

This is useful when different signals need to go to different systems. For example, you might send traces to Jaeger and metrics to a Prometheus-compatible store as shown above. Logs would continue going to the destination defined by OTEL_EXPORTER_OTLP_ENDPOINT.

Note that the default signal path (/v1/traces, /v1/metrics) is not automatically added to the signal-specific variables (for HTTP) so the specified URL is used as-is.
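For example, to route traces over HTTP to a local Collector via the signal-specific variable, you must include the /v1/traces path yourself:

```bash
# Signal-specific endpoints are used as-is, so the path must be explicit
export OTEL_EXPORTER_OTLP_TRACES_ENDPOINT="http://localhost:4318/v1/traces"
```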

4. OTEL_SDK_DISABLED

Setting this variable to true completely disables the OpenTelemetry SDK at runtime so that no telemetry is generated, and the performance overhead drops to nearly zero.

This is especially useful in production if you suspect instrumentation is contributing to an outage or performance issue. It lets you switch off telemetry for a specific service without rebuilding or redeploying your application; restarting it with the variable set is enough:

```bash
export OTEL_SDK_DISABLED=true
```

Remember to set it back to false (or remove the variable entirely) to resume normal telemetry collection.

Fine-tuning the delivery of your telemetry

Once you've told the SDK where to send data with OTEL_EXPORTER_OTLP_ENDPOINT (and its per-signal variants), you can fine-tune how that data is sent with the following variables:

OTEL_EXPORTER_OTLP_PROTOCOL

This variable sets the protocol for all telemetry data, unless a per-signal override is provided. Valid values are:

  • grpc for protobuf-encoded data using gRPC wire format over an HTTP/2 connection.
  • http/protobuf for protobuf-encoded data over an HTTP connection.
  • http/json for OTLP/JSON-encoded data over an HTTP connection.

For example:

```bash
export OTEL_EXPORTER_OTLP_PROTOCOL=grpc
```

You can also choose different protocols for traces, metrics, and logs individually:

```bash
export OTEL_EXPORTER_OTLP_TRACES_PROTOCOL=http/protobuf
export OTEL_EXPORTER_OTLP_METRICS_PROTOCOL=grpc
export OTEL_EXPORTER_OTLP_LOGS_PROTOCOL=http/json
```

If no per-signal protocol is specified, the SDK falls back to the value of OTEL_EXPORTER_OTLP_PROTOCOL.

Note that http/json is significantly less efficient than grpc or http/protobuf and should be avoided for high-volume production telemetry.
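Keep the protocol and the endpoint port consistent with each other. Assuming a Collector listening on the default ports, a common pairing looks like this:

```bash
# gRPC typically pairs with port 4317...
export OTEL_EXPORTER_OTLP_PROTOCOL=grpc
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"

# ...while http/protobuf typically pairs with port 4318:
# export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
# export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4318"
```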

OTEL_EXPORTER_OTLP_HEADERS

This variable adds custom metadata headers to every OTLP request sent by the SDK, most often for authentication. Multiple headers can be set by separating them with commas:

```bash
export OTEL_EXPORTER_OTLP_HEADERS="api-key=YOUR_SECRET_TOKEN,tenant-id=acme"
```

If you're sending data to the OpenTelemetry Collector, these headers can be read and validated by authentication extensions. In multi-tenant environments, custom headers can also help route telemetry to the correct account or storage bucket.

OpenTelemetry also provides signal-specific header variables:

```bash
export OTEL_EXPORTER_OTLP_TRACES_HEADERS="<headers>"
export OTEL_EXPORTER_OTLP_METRICS_HEADERS="<headers>"
export OTEL_EXPORTER_OTLP_LOGS_HEADERS="<headers>"
```

Controlling what data is sent

By default, the OpenTelemetry SDK uses the OTLP exporter for all signal types. You can override this with environment variables to choose a different exporter, disable a signal entirely, or even send the same signal to multiple destinations.

```bash
export OTEL_TRACES_EXPORTER="otlp,zipkin"
export OTEL_METRICS_EXPORTER="prometheus"
export OTEL_LOGS_EXPORTER="console"
```

Each variable accepts a single value or a comma-separated list to enable multiple exporters for the same signal (if supported by the SDK). Some of the most common values are listed below:

| Value | Description | Applies to |
| --- | --- | --- |
| otlp | Sends data using the OpenTelemetry Protocol. | Traces, Metrics, Logs |
| zipkin | Sends trace data to Zipkin (protobuf format by default). | Traces only |
| prometheus | Exposes metrics in Prometheus scrape format. | Metrics only |
| console | Writes data to standard output (mainly for debugging). | Traces, Metrics, Logs |
| none | Disables automatic exporting for that signal. | Traces, Metrics, Logs |

In production, the recommended pattern is to export all signals via OTLP to a Collector. The Collector can then fan out to multiple destinations and perform any required transformations or filtering.
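A minimal version of that recommended setup might look like the following, assuming a Collector reachable at collector.internal (a placeholder hostname):

```bash
# Send every signal via OTLP to a single Collector, which then
# fans out to the actual backends and applies any processing
export OTEL_TRACES_EXPORTER="otlp"
export OTEL_METRICS_EXPORTER="otlp"
export OTEL_LOGS_EXPORTER="otlp"
export OTEL_EXPORTER_OTLP_ENDPOINT="http://collector.internal:4317"
```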

Managing trace sampling

You can also control trace sampling directly through environment variables, without changing application code.

OTEL_TRACES_SAMPLER

Sets the sampling strategy. Common options include:

  • always_on: Sample all traces.
  • always_off: Sample no traces.
  • traceidratio: Sample a fixed percentage of traces based on the trace ID.
  • parentbased_traceidratio: Follow the parent span's sampling decision, but use a ratio for new root spans.
```bash
export OTEL_TRACES_SAMPLER="parentbased_traceidratio"
```

OTEL_TRACES_SAMPLER_ARG

Provides an argument to the chosen sampler. For ratio-based samplers, this is a number between 0.0 (0%) and 1.0 (100%).

```bash
export OTEL_TRACES_SAMPLER_ARG="0.1"
```

If OTEL_TRACES_SAMPLER_ARG is omitted, the SDK uses its default value, which is typically 1.0 (100%).
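Putting the two variables together, a typical head-sampling setup that keeps roughly 10% of new traces while respecting upstream sampling decisions would be:

```bash
# Follow the parent span's decision; sample 10% of new root traces
export OTEL_TRACES_SAMPLER="parentbased_traceidratio"
export OTEL_TRACES_SAMPLER_ARG="0.1"
```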

For more control, you may use tail-based sampling in the OpenTelemetry Collector.

Fine-tuning batches, queues, and limits

The OpenTelemetry SDK batches telemetry before exporting it to keep network usage efficient. It also enforces limits on the size and number of attributes, events, and links to prevent runaway memory use. You can adjust all of these settings with environment variables.

Controlling how spans are batched

If your service produces a lot of spans, the Batch Span Processor (BSP) collects them in memory and sends them in batches. These variables control when and how those batches are sent:

  • OTEL_BSP_SCHEDULE_DELAY: How long (in milliseconds) to wait between consecutive exports. The default is 5000 (5 seconds).
  • OTEL_BSP_EXPORT_TIMEOUT: How long to wait for an export to complete before giving up. The default is 30000 (30 seconds).
  • OTEL_BSP_MAX_QUEUE_SIZE: Maximum number of spans that can be held in memory; once the queue is full, further spans are dropped. The default is 2048.
  • OTEL_BSP_MAX_EXPORT_BATCH_SIZE: Maximum number of spans sent in one export. Default is 512. Must be less than or equal to the queue size.
```bash
# Example: Export every 2 seconds, up to 256 spans per batch
export OTEL_BSP_SCHEDULE_DELAY=2000
export OTEL_BSP_MAX_EXPORT_BATCH_SIZE=256
```

Controlling how logs are batched

Logs use a similar Batch LogRecord Processor (BLRP), with separate controls so you can tune them independently:

  • OTEL_BLRP_SCHEDULE_DELAY: The delay between consecutive log exports. Default is 1000 (1 second).
  • OTEL_BLRP_EXPORT_TIMEOUT: Timeout for exporting a batch of logs. Default is 30000 (30 seconds).
  • OTEL_BLRP_MAX_QUEUE_SIZE: Max number of log records kept in memory. Default is 2048.
  • OTEL_BLRP_MAX_EXPORT_BATCH_SIZE: Max number of log records per batch. Default is 512.
```bash
# Example: Export logs every 500ms, up to 100 per batch
export OTEL_BLRP_SCHEDULE_DELAY=500
export OTEL_BLRP_MAX_EXPORT_BATCH_SIZE=100
```

Limiting attributes

Attributes give context to telemetry, but too many (or very large ones) can cause performance issues. These variables set global caps:

  • OTEL_ATTRIBUTE_VALUE_LENGTH_LIMIT: Maximum length of any attribute value. Default is unlimited.
  • OTEL_ATTRIBUTE_COUNT_LIMIT: Maximum number of attributes allowed on a single entity. Default is 128.

Span-specific limits

Spans can also have their own limits, allowing finer control over what they contain:

  • OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT: Max length for any span attribute value. Default is unlimited.
  • OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT: Max number of attributes per span. Default is 128.
  • OTEL_SPAN_EVENT_COUNT_LIMIT: Max number of events a span can have. Default is 128.
  • OTEL_SPAN_LINK_COUNT_LIMIT: Max number of links per span. Default is 128.
  • OTEL_EVENT_ATTRIBUTE_COUNT_LIMIT: Max number of attributes on a span event. Default is 128.
  • OTEL_LINK_ATTRIBUTE_COUNT_LIMIT: Max number of attributes on a span link. Default is 128.
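For instance, to guard against oversized spans in a high-throughput service, you might cap span attribute value lengths and counts (the numbers below are illustrative, not recommendations):

```bash
# Truncate span attribute values longer than 1024 characters
export OTEL_SPAN_ATTRIBUTE_VALUE_LENGTH_LIMIT=1024
# Keep at most 64 attributes per span
export OTEL_SPAN_ATTRIBUTE_COUNT_LIMIT=64
```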

Log record limits

Logs have similar controls for attributes:

  • OTEL_LOGRECORD_ATTRIBUTE_VALUE_LENGTH_LIMIT: Max length of any log record attribute value. Default is unlimited.
  • OTEL_LOGRECORD_ATTRIBUTE_COUNT_LIMIT: Max number of attributes per log record. Default is 128.

By tuning these batching intervals and limits, you can make sure your telemetry is sent efficiently without overloading your network, backend, or application memory. In high-throughput services, a few well-chosen values here can make a big difference in stability and cost.

Final thoughts

Environment variables give you a flexible way to control your telemetry without touching application code. When you understand how to use them for context, exporters, sampling, and performance tuning, you can shape an observability setup that's efficient, reliable, and cost-effective.

For the full list of options and their exact behavior, see the OpenTelemetry SDK environment variable specification.

Authors
Ayooluwa Isaiah