Last updated: October 13, 2025
Getting Started with the OpenTelemetry OTLP/HTTP Exporter
The OpenTelemetry Collector is the foundation of modern observability pipelines, and the OTLP/HTTP exporter is one of the simplest and most interoperable ways to move telemetry data. It uses plain HTTP rather than gRPC, making it easier to integrate with a wide range of backends and environments—especially where proxies, firewalls, or strict network policies make gRPC less practical.
This guide explores the configuration of the OTLP/HTTP exporter in detail, starting from the essentials and progressing to advanced tuning for security, reliability, and performance.
By the end, you'll know how to confidently configure the OTLP/HTTP exporter for both simple local agents and large-scale production pipelines.
Quick start: sending traces to Jaeger over HTTP
Like the gRPC exporter, the OTLP/HTTP exporter primarily needs two pieces of information: where to send the data (`endpoint`) and how to secure the connection (`tls`).
Here's a minimal working example using Docker Compose. This setup includes:
- telemetrygen for generating test traces.
- The OpenTelemetry Collector configured with an OTLP/HTTP exporter instance.
- A Jaeger instance for visualization.
```yaml
# docker-compose.yml
services:
  otelcol:
    image: otel/opentelemetry-collector-contrib:0.137.0
    volumes:
      - ./otelcol.yaml:/etc/otelcol-contrib/config.yaml
  jaeger:
    image: jaegertracing/jaeger:2.10.0
    container_name: jaeger
    ports:
      - 16686:16686
  telemetrygen:
    image: ghcr.io/open-telemetry/opentelemetry-collector-contrib/telemetrygen:v0.137.0
    command:
      [
        "traces",
        "--otlp-endpoint",
        "http://otelcol:4318",
        "--rate",
        "10",
        "--duration",
        "1h",
      ]
```
Then create the OpenTelemetry Collector configuration file in the same directory:
```yaml
# otelcol.yaml
receivers:
  otlp:
    protocols:
      http:

exporters:
  otlphttp:
    endpoint: http://jaeger:4318
    tls:
      insecure: true

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlphttp]
```
Run the setup with:
```bash
docker compose up
```
Once all containers are running, open http://localhost:16686 and select the `telemetrygen` service to confirm traces are arriving via HTTP.
Setting up the OTLP/HTTP exporter
The OTLP/HTTP exporter supports a similar configuration structure to the gRPC variant but operates over plain HTTP and can target separate endpoints per signal type.
endpoint
The `endpoint` defines the base URL for all signals (e.g., https://example.com:4318). The exporter automatically appends the appropriate path based on the signal:
- `/v1/traces` for traces
- `/v1/metrics` for metrics
- `/v1/logs` for logs
```yaml
exporters:
  otlphttp:
    endpoint: https://collector.example.com:4318
```
If you need to direct different signals to different URLs, you can override the default with signal-specific options:
```yaml
exporters:
  otlphttp:
    traces_endpoint: https://traces.example.com:4318/v1/traces
    metrics_endpoint: https://metrics.example.com:4318/v1/metrics
    logs_endpoint: https://logs.example.com:4318/v1/logs
```
compression
By default, `gzip` compression is enabled. You can disable or change it if needed:
```yaml
exporters:
  otlphttp:
    compression: none
```
Compression can significantly reduce bandwidth usage. For most cases, keeping `gzip` enabled is recommended.
encoding
The exporter supports two encodings for payloads:
- `proto` (default): efficient and compact; best for production.
- `json`: human-readable but larger in size; useful for debugging or when the backend only accepts JSON.
```yaml
exporters:
  otlphttp:
    encoding: json
```
Securing the connection with TLS
HTTP connections can and should be secured using TLS. The configuration follows the same structure as the gRPC exporter's `tls` block.
In most production setups, you'll send telemetry to an HTTPS endpoint using a certificate signed by a trusted CA:
```yaml
exporters:
  otlphttp:
    endpoint: https://secure-endpoint.example.com:4318
```
If your collectors communicate internally and use self-signed certificates, you can provide custom certificates for verification:
```yaml
exporters:
  otlphttp:
    endpoint: https://internal-gateway:4318
    tls:
      ca_file: /etc/ssl/certs/ca.pem
      cert_file: /etc/ssl/certs/client.pem
      key_file: /etc/ssl/private/client.key
```
For details on TLS configuration, see the TLS configuration guide.
Timeout and buffer settings
Unlike gRPC, HTTP requests are one-shot operations rather than persistent connections. This means timeouts and buffer sizes play a bigger role in reliability.
timeout
Defines the maximum time an HTTP request can take before it's aborted. Default: `30s`.
```yaml
exporters:
  otlphttp:
    timeout: 15s
```
Reducing this value can help detect unresponsive backends faster, while longer timeouts are better for slow or high-latency networks.
`read_buffer_size` and `write_buffer_size`
These control the underlying TCP buffer sizes for the HTTP client. The defaults work well for most environments, but you can tune them for very high throughput or constrained systems.
```yaml
exporters:
  otlphttp:
    write_buffer_size: 1048576 # 1 MB
```
Building resilient pipelines with retries and queuing
The OTLP/HTTP exporter uses the same exporterhelper framework as the gRPC exporter. This means it includes the same reliability mechanisms for handling failures.
retry_on_failure
When a request fails, the exporter retries automatically using exponential backoff. Only transient errors such as HTTP 429 or 503 trigger retries.
```yaml
exporters:
  otlphttp:
    retry_on_failure:
      enabled: true
      initial_interval: 5s
      max_interval: 30s
      max_elapsed_time: 300s
```
Set `max_elapsed_time: 0` for indefinite retries, though this can risk backpressure if the backend stays down for long periods.
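For example, a minimal sketch of indefinite retries, keeping the default backoff cap so retry attempts stay spaced out:

```yaml
exporters:
  otlphttp:
    retry_on_failure:
      enabled: true
      max_interval: 30s
      # 0 removes the overall deadline: failed batches are retried until they succeed.
      max_elapsed_time: 0
```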
sending_queue
The sending queue buffers batches of telemetry in memory before sending them, preventing data loss during temporary slowdowns or retries.
```yaml
exporters:
  otlphttp:
    sending_queue:
      enabled: true
      queue_size: 5000
      num_consumers: 10
```
This queue ensures that new data is held temporarily rather than dropped if the backend slows down.
Persistent queue for restarts
To survive Collector restarts, use a persistent queue with a storage extension such as `file_storage`:
```yaml
extensions:
  file_storage:
    directory: /var/lib/otelcol/storage

exporters:
  otlphttp:
    sending_queue:
      enabled: true
      queue_size: 5000
      # References the file_storage extension declared above.
      storage: file_storage

service:
  # The extension must also be enabled here to be loaded.
  extensions: [file_storage]
```
When the Collector restarts, unsent telemetry will be reloaded and exported automatically.
Performance tuning and scalability
Although HTTP is less efficient than gRPC for high-throughput environments, careful tuning helps minimize overhead and maintain throughput.
Parallel pipelines
Because HTTP requests are independent, scaling horizontally with multiple exporters or Collector instances is often more effective than tuning a single one. This is particularly useful when exporting to backends that impose per-connection rate limits.
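One way to apply this within a single Collector is to give each signal its own exporter instance, so each gets its own queue and pool of consumers. The following is a sketch with placeholder endpoints:

```yaml
exporters:
  otlphttp/traces:
    endpoint: https://traces.example.com:4318
  otlphttp/metrics:
    endpoint: https://metrics.example.com:4318

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlphttp/traces]
    metrics:
      receivers: [otlp]
      exporters: [otlphttp/metrics]
```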
Connection reuse
HTTP exporters in the Collector automatically reuse TCP connections using Go's default transport settings, reducing the overhead of connection setup. You can further improve efficiency by ensuring your backends support HTTP keep-alive.
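For finer control over connection reuse, the exporter also exposes the Collector's shared HTTP client settings for idle connections. Treat the values below as illustrative rather than recommended defaults:

```yaml
exporters:
  otlphttp:
    endpoint: https://collector.example.com:4318
    # Keep idle connections open so consecutive exports reuse them.
    max_idle_conns: 100
    max_idle_conns_per_host: 10
    idle_conn_timeout: 90s
```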
Load balancing
Load balancing is handled outside the Collector for HTTP exporters, typically using DNS or a reverse proxy. Using a domain name that resolves to multiple IPs allows standard client-side balancing without additional configuration.
Monitoring exporter health
You can monitor OTLP/HTTP exporter performance through internal Collector metrics, the same way as for the gRPC exporter:
- `otelcol_exporter_queue_size`: Current queue occupancy.
- `otelcol_exporter_send_failed_<signal>`: Number of failed sends.
- `otelcol_exporter_sent_<signal>`: Successfully delivered telemetry.
- `otelcol_exporter_enqueue_failed_<signal>`: Dropped telemetry due to full queue.
These metrics provide visibility into the exporter's stability, allowing you to detect bottlenecks or configuration issues early.
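A common pattern is to have the Collector scrape its own internal metrics and ship them through the same exporter. The sketch below assumes the default internal Prometheus endpoint on 127.0.0.1:8888; adjust the target if you have customized the Collector's telemetry settings.

```yaml
receivers:
  prometheus/internal:
    config:
      scrape_configs:
        - job_name: otelcol
          scrape_interval: 10s
          static_configs:
            - targets: ["127.0.0.1:8888"]

service:
  pipelines:
    metrics/internal:
      receivers: [prometheus/internal]
      exporters: [otlphttp]
```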
Final thoughts
The OTLP/HTTP exporter is the most compatible and straightforward way to ship telemetry data across diverse environments. It works anywhere HTTP can reach, avoids gRPC's connection complexity, and integrates easily with proxies and load balancers.
While it trades some efficiency for compatibility, the exporter includes all the same resilience features as its gRPC counterpart—TLS security, retries, queuing, and persistent buffering. With careful tuning, it delivers reliable, secure telemetry at scale.
Once the data reaches your backend, the true value begins: transforming raw telemetry into actionable insight. When paired with an OpenTelemetry-native platform like Dash0, the OTLP/HTTP exporter helps you maintain full context and observability across your entire stack.
