Last updated: October 12, 2025
Collecting Prometheus Metrics with the OpenTelemetry Collector
Observability in modern systems often starts with Prometheus. It's the tool most developers and operators rely on to scrape and expose metrics from their applications and infrastructure.
But as teams adopt OpenTelemetry for unified telemetry collection, the question now becomes: how do you bring all that Prometheus data into the OpenTelemetry world?
That's where the Prometheus receiver comes in. It lets the OpenTelemetry Collector act like a Prometheus server: it scrapes any Prometheus-compatible endpoint, converts the metrics into the OpenTelemetry Protocol (OTLP), and sends them through the Collector's pipelines for further processing and export.
In other words, you get all the power of Prometheus’s scraping and discovery mechanisms, combined with OpenTelemetry’s flexibility and interoperability. This guide walks you through how it works in practice, from setting up simple scrapes to scaling collection efficiently in production.
Quick start: scraping your first target
A good way to get familiar with the Prometheus receiver is to start small and have it scrape the Collector's own metrics endpoint. This gives you a simple, self-contained "hello world" setup.
First, make sure the Collector exposes its own metrics in Prometheus format. You can do that in the `service::telemetry::metrics` section of your configuration:
```yaml
# otelcol.yaml
service:
  telemetry:
    metrics:
      readers:
        - pull:
            exporter:
              prometheus:
                host: "0.0.0.0"
                port: 8888

  pipelines:
    metrics:
      receivers: [prometheus]
      processors: [batch]
      exporters: [debug] # Use the debug exporter to view the output
```
Next, configure the Prometheus receiver to scrape that endpoint:
```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: otel-collector
          scrape_interval: 10s
          static_configs:
            - targets: ["0.0.0.0:8888"] # Scrape the Collector's own metrics
```
Keep in mind that this setup is mainly for demonstration, since the Collector can already expose its own metrics in OTLP format using its `otlp` exporter.
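For comparison, here is a rough sketch of pushing the Collector's internal metrics over OTLP instead of exposing a Prometheus endpoint; the endpoint below is a placeholder, and the exact schema may vary between Collector versions:

```yaml
# otelcol.yaml (internal telemetry pushed via OTLP instead of a Prometheus pull endpoint)
service:
  telemetry:
    metrics:
      readers:
        - periodic:
            exporter:
              otlp:
                protocol: http/protobuf
                endpoint: http://localhost:4318 # placeholder OTLP endpoint
```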
When you run the Collector with this configuration, the debug exporter will print the scraped metrics to your console.
You'll see OTLP-formatted metrics that were scraped from the Prometheus endpoint, similar to the example below:
```text
[...]
Metric #4
Descriptor:
     -> Name: otelcol_process_cpu_seconds_total
     -> Description: Total CPU user and system time in seconds [alpha]
     -> Unit:
     -> DataType: Sum
     -> IsMonotonic: true
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
StartTimestamp: 2025-10-12 03:44:48.252 +0000 UTC
Timestamp: 2025-10-12 03:45:58.041 +0000 UTC
Value: 0.360000
[...]
```
Understanding the Prometheus receiver configuration
The Prometheus receiver's strength comes from how closely it mirrors Prometheus's own configuration model. You can take the same configuration that would normally live in a `prometheus.yml` file and place it under the `config:` key in your Collector setup. This makes it simple to migrate existing Prometheus setups.
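For instance, a scrape job you already have in `prometheus.yml` (the job name and target here are placeholders):

```yaml
# prometheus.yml (existing Prometheus setup)
scrape_configs:
  - job_name: my-app # placeholder job name
    static_configs:
      - targets: ["my-app:9100"] # placeholder target
```

can be nested under the receiver's `config:` key essentially unchanged:

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: my-app
          static_configs:
            - targets: ["my-app:9100"]
```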
At the center of this configuration are the `scrape_configs` and `metric_relabel_configs` sections. Together, they define what gets scraped, how it's labeled, and which metrics make it into your pipeline.
scrape_configs
This section defines what the receiver scrapes and how it does it. Each entry represents a scrape job with its own targets and parameters.
One of the most common real-world examples is discovering and scraping Kubernetes pods that have been annotated for Prometheus.
Here's what that might look like:
```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        # Discover Kubernetes pods that have a Prometheus scrape annotation
        - job_name: "k8s-annotated-pods"
          kubernetes_sd_configs:
            - role: pod
          # Relabeling defines which pods to include and how to build their targets
          relabel_configs:
            - source_labels:
                [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
              action: keep
              regex: true
            - source_labels:
                [__meta_kubernetes_pod_annotation_prometheus_io_path]
              action: replace
              target_label: __metrics_path__
              regex: (.+)
            - source_labels:
                [
                  __address__,
                  __meta_kubernetes_pod_annotation_prometheus_io_port,
                ]
              action: replace
              target_label: __address__
              regex: ([^:]+)(?::\d+)?;(\d+)
              replacement: $$1:$$2
```
Note that the OpenTelemetry Collector uses `$` for environment variable substitution. If you use capture groups in relabeling rules (for example `$1`), you must escape them as `$$1`. Otherwise, the Collector will interpret `$1` as an environment variable.
metric_relabel_configs
While `relabel_configs` works on scrape targets, `metric_relabel_configs` operates on the scraped data itself. It lets you drop or modify metrics after they've been collected but before they're sent through the pipeline. This is useful for cleaning up noisy or high-cardinality metrics early.
```yaml
metric_relabel_configs:
  # Drop any metric that carries a non-empty 'internal_metric' label
  - source_labels: [internal_metric]
    regex: ".+" # match only when the label is present; the default (.*) would also match metrics without it
    action: drop

  # Keep only metrics that match a specific name pattern
  - source_labels: [__name__]
    regex: "(http_requests_total|rpc_latency_seconds.*)"
    action: keep
```
These two configuration sections give you full control over both what you scrape and how you handle the resulting data. Together, they form the core of most real-world Prometheus receiver setups.
Additional configuration options
The Prometheus receiver includes several other settings and feature gates that let you fine-tune its behavior for more specialized use cases. These options sit at the root of the `prometheus` receiver configuration, not inside the `config` block; a short example after the list below shows where they go.
- `trim_metric_suffixes`: When enabled, this removes suffixes like `_total`, `_sum`, `_count`, and common unit suffixes such as `_seconds` or `_bytes` from metric names. This can help standardize metrics across sources, but be careful since changing metric names can affect dashboards, queries, and alerts that depend on them.
- `use_start_time_metric`: If set to `true`, the receiver will look for a metric (by default `process_start_time_seconds`) and use its value to determine the start time for counters. This can be risky, since it assumes all counters were reset when that process started, which may not always be the case. Only use this option if you know your scraped targets behave predictably in that regard.
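For reference, here's a minimal sketch of how these root-level options sit alongside the `config` block; the scrape job shown is just a placeholder:

```yaml
receivers:
  prometheus:
    trim_metric_suffixes: true # strip _total, _seconds, and similar suffixes from metric names
    use_start_time_metric: true # derive counter start times from process_start_time_seconds
    config:
      scrape_configs:
        - job_name: example-app # placeholder job
          static_configs:
            - targets: ["example-app:9100"] # placeholder target
```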
Scaling scrapes in production with the Target Allocator
When you run multiple OpenTelemetry Collectors for high availability, each one will, by default, scrape the same targets. This means duplicate data, extra load on your services, and wasted resources. To handle scaling cleanly, the OpenTelemetry ecosystem provides a purpose-built solution known as the Target Allocator.
The Target Allocator isn’t part of the standalone Collector binary. It’s an optional component of the OpenTelemetry Operator for Kubernetes. When enabled, it coordinates how Prometheus targets are distributed among your Collector instances by separating service discovery from metric collection, so that each Collector only scrapes the targets it's assigned.
When the Target Allocator is enabled in the Operator's configuration, the Operator automatically deploys a new service and updates the Prometheus receiver inside each Collector to fetch its assigned targets through HTTP-based service discovery. Instead of every Collector running the same `scrape_configs`, each one queries the Target Allocator for its own unique list of endpoints.
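As a rough sketch, enabling it on the Operator's `OpenTelemetryCollector` resource looks something like the following; the field names follow the Operator's CRD, so verify them against your Operator version:

```yaml
# opentelemetrycollector.yaml (custom resource managed by the OpenTelemetry Operator)
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: otelcol
spec:
  mode: statefulset # the Target Allocator is typically paired with statefulset mode
  targetAllocator:
    enabled: true # the Operator deploys the Target Allocator alongside the Collectors
    allocationStrategy: consistent-hashing
  config:
    receivers:
      prometheus:
        config:
          scrape_configs: [] # jobs defined here are distributed across Collectors by the Target Allocator
    # ... processors, exporters, and pipelines ...
```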
A Collector configured with a Target Allocator looks like this:
```yaml
receivers:
  prometheus:
    target_allocator:
      endpoint: http://otelcol-targetallocator
      interval: 30s # How often to fetch new targets from the TA
      collector_id: ${POD_NAME} # Unique identifier for this Collector instance
```
Behind the scenes, the Operator replaces any static or file-based discovery settings (like `static_configs`) with an `http_sd_config` that points to the Target Allocator's service. This ensures that scraping is evenly distributed across your Collector pods, with no duplication.
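Conceptually, the rewritten scrape job that each Collector ends up running looks something like the sketch below; the exact URL and job name are generated by the Operator, so treat this as illustrative only:

```yaml
# Illustrative only: roughly what an Operator-generated job looks like after rewriting
scrape_configs:
  - job_name: otel-collector # placeholder job name
    http_sd_configs:
      - url: http://otelcol-targetallocator/jobs/otel-collector/targets?collector_id=$POD_NAME
```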
This design centralizes Prometheus service discovery and makes scaling much simpler. You can increase or decrease the number of Collectors in your deployment, and the Target Allocator will automatically rebalance the targets. It’s the recommended way to scale Prometheus scraping within Kubernetes environments managed by the OpenTelemetry Operator.
How Prometheus data is mapped to OTLP
The Prometheus receiver doesn't just pass data through unchanged. It converts metrics from the Prometheus data model into the richer, structured OTLP format. Understanding how this mapping works will help you make the most of your data once it reaches the rest of your pipeline.
- Labels become Attributes: Every Prometheus label is transformed into a key-value attribute on the corresponding OTLP metric data point.
- Metric types are translated:
  - Prometheus `counter` → OTLP `Sum` (cumulative and monotonic)
  - Prometheus `gauge` → OTLP `Gauge`
  - Prometheus `histogram` → OTLP `Histogram`
  - Prometheus `summary` → OTLP `Summary`
These mappings preserve meaning across systems, allowing your existing Prometheus metrics to integrate seamlessly with OpenTelemetry tools and backends.
Suppose you have this Prometheus metric exposed by the Node Exporter:
```promql
node_cpu_seconds_total{cpu="0", mode="system"} 15342.85
```
When scraped by the Prometheus receiver, this metric is converted to an OTLP `Sum` type with attributes preserved (as seen in the `debug` exporter):
```text
Metric #63
Descriptor:
     -> Name: node_cpu_seconds_total
     -> Description: Seconds the CPUs spent in each mode.
     -> Unit:
     -> DataType: Sum
     -> IsMonotonic: true
     -> AggregationTemporality: Cumulative
NumberDataPoints #0
Data point attributes:
     -> cpu: Str(0)
     -> mode: Str(system)
StartTimestamp: 2025-10-12 04:39:33.432 +0000 UTC
Timestamp: 2025-10-12 04:44:42.412 +0000 UTC
Value: 15342.85
```
If the Node Exporter exposes the same metric for multiple CPUs, each combination of labels (for example, `cpu="1", mode="user"`) becomes a separate OTLP data point with its own attributes.
Special mappings for Resource and Scope
The receiver also looks for certain metrics and labels that carry additional context about where the telemetry originated. These are used to populate Resource and Scope attributes in OTLP and enrich your data with metadata about the source and instrumentation.
1. `target_info` for Resource attributes
If a target exports a metric named `target_info` (often added automatically through service discovery), its labels are converted into Resource attributes and applied to all other metrics from that same target. After this mapping, the `target_info` metric itself is dropped.
For instance, a Prometheus metric like:
```promql
target_info{service_name="auth-api", service_version="1.2.0"}
```
will result in all metrics from that scrape including the resource attributes `service.name="auth-api"` and `service.version="1.2.0"`.
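In the `debug` exporter's output, those resource attributes show up roughly like this (trimmed to the relevant lines):

```text
Resource attributes:
     -> service.name: Str(auth-api)
     -> service.version: Str(1.2.0)
[...]
```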
2. `otel_scope_info` for Scope attributes
If metrics include labels such as `otel_scope_name` and `otel_scope_version`, the receiver uses them to build the Instrumentation Scope. This identifies which library or component produced the data. The `otel_scope_info` metric can also provide additional attributes for that scope.
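As an illustration, a scrape that exposes scope labels might look like this; the scope name, version, and extra attribute below are made up:

```text
# Hypothetical Prometheus exposition with OpenTelemetry scope labels
otel_scope_info{otel_scope_name="my.instrumentation.lib",otel_scope_version="0.3.0",custom_attr="abc"} 1
http_requests_total{otel_scope_name="my.instrumentation.lib",otel_scope_version="0.3.0",method="GET"} 42
```

Here, `http_requests_total` would be grouped under an Instrumentation Scope named `my.instrumentation.lib` at version `0.3.0`, with `custom_attr` attached as a scope attribute.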
This translation process helps unify Prometheus-style metrics within the broader OpenTelemetry model, giving you structured, context-rich data that’s easier to analyze and correlate with other telemetry signals.
Scraping native histograms
Prometheus native histograms are a newer and more efficient way to represent distributions. They offer better accuracy and performance for high-volume metrics.
To collect them with the Prometheus receiver, you need to enable the feature gate `receiver.prometheusreceiver.EnableNativeHistograms`. The receiver will then automatically convert these into OTLP Exponential Histograms, preserving their precision and structure.

Here's how to enable them:
```yaml
# In your receiver config
receivers:
  prometheus:
    config:
      global:
        # Enable support for native histograms
        scrape_protocols:
          [
            PrometheusProto,
            OpenMetricsText1.0.0,
            OpenMetricsText0.0.1,
            PrometheusText0.0.4,
          ]
      # ... scrape_configs ...
```
Start the Collector with the feature gate enabled:
```yaml
# docker-compose.yml
services:
  otelcol:
    command: [
      --config=/etc/otelcol-contrib/config.yaml,
      --feature-gates=receiver.prometheusreceiver.EnableNativeHistograms, # enable this
    ]
```
This feature allows the Collector to capture Prometheus’s most modern metric format and send it through the OpenTelemetry pipeline without losing fidelity.
Final thoughts
The Prometheus receiver is more than just a way to bring in metrics. It’s a full integration point that combines the power of Prometheus with the flexibility of OpenTelemetry. Once you understand how to configure it—from simple static scrapes to distributed setups using the Target Allocator—you can create a consistent and scalable metric pipeline.
A well-tuned configuration ensures that all your Prometheus data, no matter where it comes from, is standardized, enriched with context, and ready for meaningful analysis.
When your pipeline is stable and clean, the next step is to send your metrics to an OpenTelemetry-native platform like Dash0. Take full control of your observability data and start a free trial today.
