Last updated: September 1, 2025
A Practical Guide to the OpenTelemetry OTLP Receiver
The OpenTelemetry Collector is the mission control for your observability data. It's where all your metrics, traces, and logs come together to be processed and sent to their final destinations.
At the very front door of the pipeline is the OTLP receiver. This is where your applications and other OpenTelemetry-instrumented services send their telemetry data. If the OTLP receiver isn't set up correctly, your entire observability pipeline suffers, which is why mastering this component matters.
Before we dig into the receiver itself, it's worth understanding the protocol it speaks: OpenTelemetry Protocol (OTLP). OTLP is the standard way OpenTelemetry sends telemetry data. It defines how your data is encoded, transported, and delivered from your applications to wherever they need to go.
OTLP supports two main transport options:
- gRPC: A high-performance, open-source RPC framework that's often preferred for its efficiency and low latency, especially in internal network communications.
- HTTP: A more universally compatible choice that uses plain HTTP with Protocol Buffers or JSON payloads. This works well for browser-based instrumentation and other situations where gRPC isn't supported.
The OTLP receiver can handle both, making it a flexible and essential part of any OpenTelemetry setup. In this guide, we'll walk through exactly how it works, the options you have for configuring it, and the best practices that will keep your telemetry pipeline fast, secure, and rock-solid.
Let's get started!
Quick start: seeing it in action
Getting the OTLP receiver running is straightforward. All you need to do is define it in the `receivers` section of your Collector configuration. By default, it will enable both gRPC and HTTP protocols on their standard ports:
```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:
```
This minimal configuration sets up:
- A gRPC endpoint listening on `localhost:4317`.
- An HTTP endpoint listening on `localhost:4318`.
Once that's in place, any OpenTelemetry-instrumented application can send traces, metrics, and logs to these endpoints. For example, pointing an exporter at `http://localhost:4318` will send data straight to the Collector's OTLP HTTP endpoint.
To confirm that data is actually coming through, you can pair the OTLP receiver with the debug exporter:
```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  debug:
    verbosity: detailed # See full telemetry data structure

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
    metrics:
      receivers: [otlp]
      exporters: [debug]
    logs:
      receivers: [otlp]
      exporters: [debug]
```
With this setup, any telemetry data received by the `otlp` receiver will be printed to your Collector's standard error stream, so you can inspect the incoming data and confirm successful ingestion before moving on to more advanced configurations.
Configuring the OTLP receiver
The OTLP receiver offers a variety of configuration options to fine-tune its behavior, security, and performance. These settings are organized under the `protocols` section, with separate configurations for `grpc` and `http`.
Common endpoint configuration
For both gRPC and HTTP, the `endpoint` setting specifies the `host:port` where the receiver listens for incoming telemetry data:
```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "localhost:4317"
      http:
        endpoint: "localhost:4318"
```
By default, the receiver binds to `localhost`, which works well for local development. However, in environments with non-standard networking, such as Docker containers or Kubernetes pods, `localhost` may not be reachable from other services. In those cases, you'll need to bind the receiver to a service DNS name or pod IP so it can accept external connections.
With the correct endpoint configuration, your OTLP receiver will be reachable from the parts of your infrastructure that need to send telemetry—whether that's inside a cluster or across network boundaries.
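For example, in a containerized deployment you might bind the receiver to all interfaces so other pods or containers can reach it. This is a minimal sketch; the `0.0.0.0` address is illustrative and should be narrowed to a specific interface or service address where your environment allows it:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        # Listen on all interfaces so other pods/containers can connect.
        endpoint: "0.0.0.0:4317"
      http:
        endpoint: "0.0.0.0:4318"
```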
Always keep security best practices in mind when setting your endpoints, and avoid exposing the receiver more broadly than necessary.
Advanced http configuration
The `http` protocol provides a flexible, widely compatible way to receive telemetry. It's a good choice for browser-based applications, clients that can't use gRPC, or situations where traffic flows through a standard web proxy.
Custom URL paths
By default, the Collector expects traces, metrics, logs, and profiles at the standard OTLP paths (`/v1/traces`, `/v1/metrics`, `/v1/logs`, `/v1/profiles`).
You can override these paths to fit your infrastructure:
```yaml
receivers:
  otlp:
    protocols:
      http:
        traces_url_path: "/api/checkout-service/v1/traces"
        metrics_url_path: "/api/checkout-service/v1/metrics"
        logs_url_path: "/api/checkout-service/v1/logs"
        profiles_url_path: "/api/checkout-service/v1/profiles"
```
Custom paths are often needed when the Collector runs behind an API Gateway that routes requests based on URL patterns. Without them, telemetry from your application's SDK might be blocked or misrouted. They're also useful for supporting older or non-standard clients with fixed endpoint paths you can't change.
Once you customize the paths, your application must send data to the matching URLs. For example, in a Node.js service:
```javascript
import { OTLPTraceExporter } from "@opentelemetry/exporter-trace-otlp-http";

const traceExporter = new OTLPTraceExporter({
  url: "http://your-api-gateway.com/api/checkout-service/v1/traces",
});
```
Configuring Cross-Origin Resource Sharing (CORS)
If your OpenTelemetry instrumentation runs in the browser or your web application sends telemetry directly to the Collector, you may need to configure CORS to avoid same-origin policy errors.
The safest approach is to explicitly allow only the domains that should send telemetry, following the principle of least privilege:
```yaml
receivers:
  otlp:
    protocols:
      http:
        cors:
          allowed_origins:
            - http://localhost:8080 # Allow requests from a specific origin
            - https://*.mycompany.com # Allow from a subdomain with a wildcard
          allowed_headers:
            # Allow additional custom headers outside the default safelist: https://developer.mozilla.org/en-US/docs/Glossary/CORS-safelisted_request_header
            - "X-Custom-Header"
          max_age: 3600 # Cache preflight responses for 1 hour
```
While this works in the Collector, production setups often delegate CORS handling to a reverse proxy or API gateway, which also handles TLS termination, rate limiting, and WAF policies.
Managing CORS at the proxy level keeps the Collector's configuration lean and ensures it only receives trusted, pre-filtered traffic. In that case, the `cors` section can be removed from the Collector entirely.
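As a rough sketch of that pattern in Kubernetes, the ingress-nginx controller can handle CORS via annotations before traffic ever reaches the Collector. The annotations are ingress-nginx specific, and the hostname and backend service name (`otel-collector`) are placeholders for your own setup:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: otel-collector-ingress
  annotations:
    # Handle CORS at the edge instead of in the Collector.
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "https://app.mycompany.com"
spec:
  rules:
    - host: telemetry.mycompany.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: otel-collector
                port:
                  number: 4318 # Collector's OTLP HTTP port
```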
Configuring gRPC settings
For internal, high-throughput scenarios, gRPC is often the best choice because of its efficiency. To run it reliably, though, you need to manage connections carefully and set limits that protect the Collector from excessive load.
Two key settings help prevent resource exhaustion and out-of-memory errors. The first, `max_recv_msg_size_mib`, controls the maximum size (in megabytes) of a single gRPC message the receiver will accept.
The default is 4 MB, which is fine for most workloads, but you can increase it if clients need to send larger batches, provided the Collector has enough memory to handle them.
The second, `max_concurrent_streams`, limits how many simultaneous request streams a single client connection can keep open. Many environments leave this effectively unlimited, but in large-scale deployments it can be useful to lower the value so that no single client can monopolize server resources.
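Here's a sketch that combines both settings; the values are illustrative and should be tuned to your workload and the memory available to the Collector:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        # Accept individual gRPC messages up to 16 MiB (default is 4 MiB).
        max_recv_msg_size_mib: 16
        # Cap the number of concurrent streams per client connection.
        max_concurrent_streams: 100
```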
Ensuring connection reliability with keepalives
Long-lived gRPC connections are efficient but can be terminated by network middleboxes (firewalls, load balancers) that close idle connections. The `keepalive` settings provide a robust mechanism to prevent this, but it's a two-part system: the server actively checks on clients, and it enforces rules on how clients can check on it.
1. Proactive server pinging (`server_parameters`)
The server can periodically ping connected clients to keep idle connections alive.
- `time` specifies how long the server waits after no activity before sending a keepalive ping. The default is two hours, which is too long for many environments. Choose a value shorter than your network's idle connection timeout.
- `timeout` defines how long the server waits for a ping response before considering the connection dead and closing it.
You can also configure connection lifecycle limits:
- `max_connection_idle` closes connections that have been inactive for the specified duration.
- `max_connection_age` forces a connection to close after a set maximum lifespan, regardless of activity.
- `max_connection_age_grace` provides a grace period for in-flight requests before closing a connection that has reached its maximum age.
2. Enforcing client behavior (`enforcement_policy`)
These settings protect your Collector from being overwhelmed by misconfigured or aggressive clients that send too many of their own keepalive pings.
- `min_time` sets the minimum allowed interval between client pings. If a client pings more frequently, the connection is closed.
- `permit_without_stream` allows idle clients to send keepalives even when there are no active requests, which can be useful for intermittent workloads.
Below is a sample configuration for a Collector behind a firewall that closes idle connections after 90 seconds:
```yaml
receivers:
  otlp:
    protocols:
      grpc:
        keepalive:
          enforcement_policy:
            min_time: 30s
            permit_without_stream: true
          server_parameters:
            time: 60s
            timeout: 15s
            max_connection_idle: 10m
            max_connection_age: 1h
            max_connection_age_grace: 30s
```
Configuring OTLP receiver compression
The receiver's role in compression is to decompress incoming telemetry before it moves through the pipeline. This reduces network bandwidth at the cost of some CPU usage on the Collector. The configuration and behavior differ between the HTTP and gRPC protocols, so let's look at them both below.
HTTP decompression
For the HTTP protocol, you explicitly configure which compression algorithms the receiver will accept through the `compression_algorithms` list in your configuration.
When a request arrives, the receiver checks the `Content-Encoding` header. If the encoding matches one of the allowed algorithms, the body is decompressed before it's processed. If it doesn't match, the request is rejected.
```yaml
receivers:
  otlp:
    protocols:
      http:
        # Accept and decompress uncompressed ("") or gzip-compressed data.
        compression_algorithms: ["", "gzip"] # Default: ["", "gzip", "zstd", "zlib", "snappy", "deflate", "lz4"]
```
gRPC decompression
With gRPC, no explicit configuration is required. The underlying gRPC implementation automatically detects and decompresses data sent with supported algorithms such as `gzip`, `zstd`, or `snappy`. The process is transparent to both the client and the Collector.
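The compression choice is made on the sending side. As a quick sketch, an agent-tier Collector forwarding data to a gateway can request `zstd` compression on its OTLP exporter; the gateway endpoint below is a placeholder:

```yaml
exporters:
  otlp:
    # Placeholder address of an upstream gateway Collector.
    endpoint: otel-gateway.example.com:4317
    # The exporter compresses; the receiving OTLP receiver decompresses transparently.
    compression: zstd
```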
Performance impact of decompression
Although the client chooses the compression algorithm, its impact is felt on the receiver, so understanding these trade-offs is important when sizing your Collector.
`gzip` is the most common choice, as it offers good compression ratios and a predictable CPU cost, though the decompression overhead is moderate.
For greater efficiency, handling `zstd` data results in significantly lower CPU usage for the same amount of information, making it an excellent choice for resource-conscious collectors.
At the extreme end of CPU optimization, `snappy` imposes the lowest processing overhead, but it achieves much lower compression ratios. As a result, network throughput requirements are higher.
Uncompressed data removes the CPU cost entirely, but this comes at the expense of maximum possible bandwidth usage. The Collector must be provisioned to handle the full raw data rate.
Securing the OTLP receiver
The OTLP receiver supports Transport Layer Security (TLS) for encryption and mutual TLS (mTLS) for two-way authentication. The configuration is simple, but the real challenge lies in managing certificates securely and automatically.
Enabling TLS requires a server certificate and private key. For mTLS, you also supply a Certificate Authority (CA) file to validate client certificates:
```yaml
receivers:
  otlp:
    protocols:
      http:
        tls:
          cert_file: /etc/pki/tls.crt
          key_file: /etc/pki/tls.key
          # For mTLS, add the client CA file
          client_ca_file: /etc/pki/client_ca.crt
```
While the YAML is straightforward, production deployments should automate certificate issuance, rotation, and distribution.
In Kubernetes, for example, cert-manager can request certificates from sources like Let's Encrypt, store them in a Kubernetes Secret, and refresh them automatically. The Collector pod mounts this Secret as a volume, ensuring certificates rotate without manual intervention or downtime.
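As a rough sketch of that pattern, a cert-manager `Certificate` resource requests a key pair and keeps it renewed in a Secret that the Collector pod mounts; the issuer, DNS name, and Secret name here are placeholders:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: otel-collector-tls
spec:
  # cert-manager writes and renews the key pair in this Secret,
  # which the Collector pod mounts (e.g. at /etc/pki).
  secretName: otel-collector-tls
  dnsNames:
    - otel-collector.observability.svc.cluster.local
  issuerRef:
    name: internal-ca-issuer # Placeholder issuer
    kind: ClusterIssuer
```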
An alternative approach is to terminate TLS before traffic reaches the Collector. In this pattern, a public-facing load balancer or ingress controller handles encryption and authentication, then forwards decrypted traffic over a private, trusted network. This simplifies the Collector's configuration while keeping security at the network edge.
Configuring authentication
After encrypting traffic with TLS, the next security layer is authentication, which verifies the identity of the client sending data. The Collector uses extensions to handle this. You define the authenticator in the `extensions` section and then apply it to a receiver via the `auth` setting.
Here's an example that uses the `basicauth` extension with a standard `htpasswd` password file. When applied to both the gRPC and HTTP protocols, it ensures that all incoming requests must present valid credentials:
```yaml
extensions:
  basicauth: # Example: using a basicauth extension
    htpasswd:
      file: /etc/otelcol/users.htpasswd

receivers:
  otlp/auth:
    protocols:
      grpc:
        auth:
          authenticator: basicauth
      http:
        auth:
          authenticator: basicauth
```
The Collector also supports other authenticators, such as Bearer Token for token-based security and OIDC for integration with identity providers.
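For token-based security, a setup might look like the following sketch using the `bearertokenauth` extension from the Collector contrib distribution. Treat the field names as an assumption to verify against the extension's documentation, and note that the token value is a placeholder:

```yaml
extensions:
  bearertokenauth:
    # Incoming requests must present this token in their Authorization header.
    # Placeholder value; load it from a secret in real deployments.
    token: "replace-with-a-long-random-token"

receivers:
  otlp:
    protocols:
      http:
        auth:
          authenticator: bearertokenauth
```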
Enriching your telemetry with connection metadata
The OTLP receiver can capture connection-level details and make them available for processing. By enabling `include_metadata: true`, the receiver stores metadata such as HTTP headers in the telemetry context, allowing processors to use this information for enrichment.
One common approach is to pair this with the attributes processor to add useful fields to your telemetry data:
```yaml
receivers:
  otlp:
    protocols:
      http:
        include_metadata: true # necessary for the `metadata` context

processors:
  attributes/enrichment:
    actions:
      # Add the client's real IP address from a load balancer header.
      - key: client.ip
        from_context: metadata.x-forwarded-for
        action: insert
      # Add a tenant ID from a custom header for multi-tenant systems.
      - key: tenant.id
        from_context: metadata.x-tenant-id
        action: insert
```
This pattern allows you to capture details like the originating client IP, tenant identifiers, or other custom headers, and propagate them through your telemetry for filtering, routing, or analysis downstream.
Final thoughts
The OTLP receiver is more than a simple ingestion point. It's the gatekeeper of your observability pipeline, making sure telemetry reaches the Collector reliably, securely, and efficiently. Mastering its configuration gives you a strong foundation for consistent, trustworthy data collection.
Once your OTLP receiver is tuned and your data is flowing smoothly, the next step is to put that data to work. An OpenTelemetry-native platform like Dash0 can take this rich, standardized telemetry and turn it into actionable insights, helping you troubleshoot faster and understand your systems in greater depth.
Take control of your observability data and start your free Dash0 trial today.
