Last updated: July 28, 2025

Mastering the OpenTelemetry OTLP Receiver

The OpenTelemetry Collector is the central hub for collecting, processing, and exporting your observability data.

At the very front door of this powerful pipeline sits the OpenTelemetry Protocol (OTLP) Receiver. Its role is to accept telemetry data from your applications and other OpenTelemetry-instrumented services over gRPC or HTTP.

Understanding and correctly configuring the OTLP Receiver is paramount, as it dictates how your data enters the Collector, impacting everything from network performance and security to the overall reliability of your telemetry pipeline.

In this comprehensive guide, we’ll delve into the intricacies of the OTLP Receiver, exploring its capabilities, configuration options, and best practices to ensure a robust and efficient data flow.

Let's get started!

What is OTLP?

Before diving into the receiver itself, it’s essential to grasp OTLP. The OpenTelemetry Protocol (OTLP) is a standardized protocol for transmitting telemetry data.

It defines the encoding, transport, and delivery mechanism for traces, metrics, and logs generated by OpenTelemetry SDKs and other compatible systems.

OTLP supports two primary transport mechanisms:

  • gRPC: A high-performance, open-source universal RPC framework. It’s often preferred for its efficiency and low latency, especially in internal network communications.
  • HTTP/JSON: A more widely compatible option that leverages standard HTTP and JSON encoding. This is particularly useful for web-based clients, browser-based instrumentation, or environments where gRPC might be challenging to implement.

The OTLP Receiver in the OpenTelemetry Collector is designed to speak both these dialects, making it a versatile and indispensable component for any OpenTelemetry deployment.

Quick start: seeing it in action

To get your OTLP Receiver up and running, simply define it in the receivers section of your Collector’s configuration. By default, both gRPC and HTTP protocols are enabled on their respective standard ports.

```yaml
# otelcol.yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:
```

This minimal configuration sets up:

  • A gRPC endpoint listening on localhost:4317.
  • An HTTP/JSON endpoint listening on localhost:4318.

Your OpenTelemetry-instrumented applications can now send traces, metrics, and logs to these endpoints. For example, an application configured to export to http://localhost:4318 will send data to the Collector’s OTLP HTTP endpoint.
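
If the sender is itself another Collector (for example, an agent forwarding to a gateway), the client side is simply an otlphttp exporter pointed at the receiver's endpoint. A minimal sketch, assuming the gateway Collector runs on localhost:

```yaml
# Agent-side (sending) Collector -- a sketch, separate from the receiver config above
exporters:
  otlphttp:
    endpoint: http://localhost:4318 # Per-signal paths like /v1/traces are appended automatically
```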

To verify that data is flowing into your Collector, combine the OTLP Receiver with the debug exporter:

```yaml
# otelcol.yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  debug:
    verbosity: detailed # See full telemetry data structure

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
    metrics:
      receivers: [otlp]
      exporters: [debug]
    logs:
      receivers: [otlp]
      exporters: [debug]
```

With this setup, any telemetry data received by the otlp receiver will be printed to your Collector’s standard error stream so that you can inspect the incoming data and confirm successful ingestion.

Configuring the OTLP receiver

The OTLP Receiver offers a wide range of configuration options to fine-tune its behavior, security, and performance. These settings are nested under the protocols section for grpc and http individually.

Common endpoint configuration

For both gRPC and HTTP protocols, the endpoint setting allows you to specify the host:port where the receiver will listen for incoming data:

```yaml
# otelcol.yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317" # Listen on all network interfaces for gRPC
      http:
        endpoint: "0.0.0.0:4318" # Listen on all network interfaces for HTTP
```

Using 0.0.0.0 binds the receiver to all available network interfaces, making it accessible from other machines in your network. Always consider security best practices when setting your endpoints.
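
If you only need to accept traffic from known networks, it's safer to bind to a specific address. In Kubernetes, for example, a common pattern is to bind to the pod's own IP via environment-variable substitution; a sketch, assuming a MY_POD_IP variable injected through the downward API:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "${env:MY_POD_IP}:4317" # Bind only to the pod's IP, not all interfaces
      http:
        endpoint: "${env:MY_POD_IP}:4318"
```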

HTTP/JSON specifics

The HTTP/JSON endpoint provides additional flexibility, particularly concerning URL paths and Cross-Origin Resource Sharing (CORS).

Custom URL paths

You can customize the specific URL paths for different signal types (traces, metrics, logs, profiles). This can be useful for routing or integrating with specific client configurations:

```yaml
# otelcol.yaml
receivers:
  otlp:
    protocols:
      http:
        traces_url_path: "/my-app/v1/traces"     # Default is /v1/traces
        metrics_url_path: "/my-app/v1/metrics"   # Default is /v1/metrics
        logs_url_path: "/my-app/v1/logs"         # Default is /v1/logs
        profiles_url_path: "/my-app/v1/profiles" # Default is /v1/profiles
```

When sending data from an otlphttp exporter or a similar client, ensure its endpoint settings match these customized paths, as shown in the sketch below.
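
For instance, the otlphttp exporter can override the full per-signal URLs so that they line up with the customized paths. A sketch, assuming the Collector is reachable at collector.example.com:

```yaml
exporters:
  otlphttp:
    endpoint: https://collector.example.com:4318
    traces_endpoint: https://collector.example.com:4318/my-app/v1/traces
    metrics_endpoint: https://collector.example.com:4318/my-app/v1/metrics
    logs_endpoint: https://collector.example.com:4318/my-app/v1/logs
```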

CORS (Cross-Origin Resource Sharing)

For browser-based OpenTelemetry instrumentation or web applications sending data directly to the Collector, you’ll likely need to configure CORS. This prevents browsers from blocking requests due to same-origin policy restrictions:

```yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: "0.0.0.0:4318"
        cors:
          allowed_origins:
            - "https://*.my-company.com" # Example origin; replace with your web app's domain
          allowed_headers:
            - "Content-Type"
          max_age: 7200 # Cache preflight responses for two hours
```

Important: Avoid using a plain ["*"] for allowed_origins if Access-Control-Allow-Credentials: true is implied or configured, as browsers will disallow it for security reasons. Instead, specify protocols like ["https://*", "http://*"] to allow any origin.
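
If you genuinely need to accept requests from any origin, that wildcard form could look like this:

```yaml
receivers:
  otlp:
    protocols:
      http:
        cors:
          allowed_origins:
            - "https://*"
            - "http://*"
```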

gRPC specifics

The gRPC protocol offers configurations primarily focused on connection management and buffering.

  • balancer_name: Controls client-side load balancing. The default changed from pick_first to round_robin in v0.103.0; you can revert to pick_first if needed.
  • max_concurrent_streams: Limits the number of concurrent gRPC streams.
  • max_recv_msg_size_mib: Sets the maximum incoming message size in MiB.
  • read_buffer_size and write_buffer_size: Control the gRPC transport’s buffer sizes.
  • keepalive: Configures parameters for gRPC keep-alive pings to prevent idle connections from being closed.
```yaml
# otelcol.yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: "0.0.0.0:4317"
        balancer_name: pick_first # Restore the pre-v0.103.0 behavior
        max_recv_msg_size_mib: 100 # Allow incoming messages up to 100 MiB
        keepalive:
          enforcement_policy:
            min_time: 30s # Clients must wait at least 30s between keepalive pings
            permit_without_stream: true # Allow pings even when no streams are active
          server_parameters:
            max_connection_idle: 5m # Close connections idle for 5 minutes
            time: 1m # Ping idle clients every minute
            timeout: 20s # Drop the connection if a ping isn't acknowledged within 20s
```

Compression configuration

Both gRPC and HTTP protocols support various compression algorithms to reduce network bandwidth usage for telemetry data.

For HTTP, you can specify the list of compression_algorithms the server will accept. For gRPC, there is nothing to list on the receiver side: compression is chosen by the client (typically via an exporter's compression setting), and the receiver's gRPC server decompresses supported encodings automatically.

```yaml
# otelcol.yaml
receivers:
  otlp:
    protocols:
      http:
        # Advertise which encodings the HTTP server will accept from clients
        compression_algorithms: ["gzip", "zstd"]
      grpc:
        # No explicit list is needed for gRPC: the server transparently
        # decompresses encodings (e.g. gzip, snappy, zstd) chosen by the client.
```

The OpenTelemetry Collector documentation provides benchmarks comparing gzip, snappy, and zstd for different payload sizes. Key takeaways:

  • gzip: Good all-rounder with reasonable compression and performance. It’s the only required compression algorithm for OTLP servers.
  • snappy: Fastest compression speed, but lower compression ratios. Useful if your Collector is CPU-bound and has a very fast network.
  • zstd: Often offers the best compression ratio while maintaining good speed.

Choose your compression based on your network bandwidth constraints, CPU utilization of the Collector, and whether your clients (e.g., SDKs) and other components in your pipeline support the chosen algorithm.

Disabling compression entirely (setting it to none on the client) can also be beneficial if your network link is very fast and CPU is the bottleneck.
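
On the sending side, compression is configured on the exporter rather than the receiver. A rough sketch, assuming a receiving Collector at collector.example.com:

```yaml
# Client-side (exporter) configuration on the sending application or Collector
exporters:
  otlp:
    endpoint: collector.example.com:4317
    compression: zstd # or gzip, snappy, none
  otlphttp:
    endpoint: https://collector.example.com:4318
    compression: gzip # Must be an algorithm the receiver accepts
```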

Securing the OTLP receiver with TLS/mTLS

The OTLP receiver supports Transport Layer Security (TLS) for encrypting communication and Mutual TLS (mTLS) for client authentication.

Basic TLS

To enable basic TLS, requiring clients to communicate over HTTPS or gRPC over TLS, you need to provide a server certificate and private key.

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        tls:
          cert_file: /etc/ssl/certs/server.crt  # Path to server certificate
          key_file: /etc/ssl/private/server.key # Path to server private key
      http:
        tls:
          cert_file: /etc/ssl/certs/server.crt
          key_file: /etc/ssl/private/server.key
```

Mutual TLS (mTLS)

For enhanced security, mTLS ensures that both the client and the server authenticate each other using certificates. To enable mTLS on the receiver, in addition to the server’s certificate and key, you must provide a client_ca_file which contains the CA certificate used to sign client certificates.

```yaml
# otelcol.yaml
receivers:
  otlp/mtls: # A named instance for mTLS
    protocols:
      grpc:
        tls:
          cert_file: /etc/ssl/certs/server.crt
          key_file: /etc/ssl/private/server.key
          client_ca_file: /etc/ssl/certs/client_ca.crt # CA used to verify client certificates
          client_ca_file_reload: true # Reload the client CA file if it changes
```

You can also configure min_version and max_version to constrain the accepted TLS protocol versions, and cipher_suites to restrict the accepted cryptographic suites.
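
A sketch of what tightening those settings could look like (the values here are illustrative, not recommendations):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        tls:
          cert_file: /etc/ssl/certs/server.crt
          key_file: /etc/ssl/private/server.key
          min_version: "1.2" # Reject anything older than TLS 1.2
          max_version: "1.3"
          cipher_suites: # Applies to TLS 1.2 only; TLS 1.3 suites are fixed
            - TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
```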

Trusted Platform Module (TPM)

For highly secure environments, the OTLP receiver can be configured to load TLS private keys from a Trusted Platform Module (TPM) using TSS2 format.

```yaml
# otelcol.yaml
receivers:
  otlp/tpm:
    protocols:
      grpc:
        tls:
          cert_file: /etc/ssl/certs/server.crt
          key_file: /path/to/server-tss2.key # This key (TSS2 format) is loaded via the TPM
          tpm:
            enabled: true
            path: /dev/tpmrm0
            owner_auth: "myownerauth"
```

This is an advanced feature primarily for specialized hardware security requirements.

Configuring authentication

Beyond TLS, you can integrate external authentication extensions with the OTLP receiver to control access based on identity. This is done via the auth section, referencing a named authenticator extension defined in your Collector’s extensions section.

```yaml
# otelcol.yaml
extensions:
  basicauth/server: # Example: the basicauth extension from collector-contrib
    htpasswd:
      file: /etc/otelcol/users.htpasswd

receivers:
  otlp/auth:
    protocols:
      grpc:
        auth:
          authenticator: basicauth/server # Reference the authenticator extension
      http:
        auth:
          authenticator: basicauth/server
          request_params: ["api_key"] # Extract the 'api_key' query param into the auth context

service:
  extensions: [basicauth/server] # The extension must be enabled here to take effect
```

Common server authenticators include Basic Auth, Bearer Token, and OIDC extensions. You can also configure request_params for HTTP authentication to extract values from URL query parameters into the authentication context.
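
As another illustration, a bearer-token setup with the bearertokenauth extension from collector-contrib could look roughly like this (the token value is a placeholder; in practice you'd load it from a secret):

```yaml
extensions:
  bearertokenauth:
    token: "my-secret-token" # Placeholder value

receivers:
  otlp/bearer:
    protocols:
      http:
        auth:
          authenticator: bearertokenauth

service:
  extensions: [bearertokenauth]
```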

For more advanced attribute manipulation based on authentication context (e.g., adding client_ip from X-Forwarded-For headers), combine include_metadata: true on the OTLP receiver with the attributes processor:

```yaml
# otelcol.yaml
receivers:
  otlp:
    protocols:
      http:
        include_metadata: true # Required to expose HTTP headers to later components

processors:
  attributes:
    actions:
      - key: client.address
        from_context: metadata.x-forwarded-for # Extract from the X-Forwarded-For header
        action: upsert
```
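
For the attributes processor to have any effect, it also needs to be part of the relevant pipeline; a minimal sketch:

```yaml
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [attributes]
      exporters: [debug] # Or your real exporter
```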

OTLP receiver tips and best practices

  • Secure your endpoints: Always use TLS/mTLS in production environments, and consider adding authentication extensions for stricter access control.
  • Choose compression wisely: While compression saves bandwidth, it consumes CPU. Balance these factors based on your infrastructure. For very high-throughput setups, you might even consider disabling compression if the network is not a bottleneck but CPU is.
  • Endpoint strategy: For deployments within the same cluster, gRPC is generally preferred for its performance. For external clients or browser-based instrumentation, HTTP/JSON is typically more suitable.
  • Leverage the debug exporter: As highlighted in our guide on the debug exporter, it is your best friend for validating that data is correctly arriving at the OTLP Receiver and for inspecting its structure. If you’re not seeing data in your backend, the debug exporter is the first tool to check whether the OTLP Receiver is receiving anything at all.
  • Check upstream: If the OTLP Receiver isn’t showing any data (via the debug exporter), the problem is almost certainly upstream. Verify your application’s OpenTelemetry SDK configuration, network connectivity, and firewall rules between your application and the Collector.

Final thoughts

The OpenTelemetry OTLP Receiver is more than just a data entry point; it’s the gatekeeper of your observability pipeline, ensuring that your valuable telemetry data enters the Collector reliably, securely, and efficiently. By mastering its configuration, you lay a solid foundation for robust data collection and subsequent processing.

Once your OTLP Receiver is configured and data is flowing cleanly into the Collector, the next logical step is to send it to an OpenTelemetry-native platform like Dash0.

Such platforms are designed to ingest this rich, standardized data, transforming it into actionable insights that empower you to understand and troubleshoot your systems with unprecedented clarity.

Take control of your observability data and try Dash0 today by signing up for a free trial.

Authors
Ayooluwa Isaiah