
Confluent Cloud

Dash0 integrates with Confluent Cloud to observe and manage Kafka deployments, giving you visibility into your streaming data pipelines.

Overview

Confluent Cloud Integration in Dash0

Kafka is a powerhouse, and Confluent can run it for you. If you are using their managed offering, you will still want insight into the behavior of Kafka and its clients. To that end, Confluent exposes metrics for its customers that you can view via Dash0.

Use Cases

  • Understand Kafka consumer lag.
  • Analyze topic throughput.
  • Inspect storage and usage.
  • See latency insights.

Setup

Overview

Confluent Cloud exposes metrics through a Prometheus endpoint that you can scrape. We recommend scraping and forwarding these metrics with an OpenTelemetry Collector.

Retrieving Credentials

To start, get hold of your metrics exporter API key and secret. The Confluent Cloud documentation explains how to access these. Once available, store them; in the next step, we assume they are exposed through environment variables called CONFLUENT_METRICS_EXPORTER_KEY and CONFLUENT_METRICS_EXPORTER_SECRET, respectively. We also recommend defining the name of the environment in an environment variable called ENVIRONMENT_NAME.

Collector Configuration

The following code snippet shows an OpenTelemetry Collector configuration file.
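A minimal sketch of such a configuration is shown below, assuming the Confluent Cloud Metrics API export endpoint and a hypothetical DASH0_AUTHORIZATION_TOKEN variable for the Dash0 exporter; the cluster ID, endpoint, and exporter settings are placeholders that should be checked against the official Confluent Cloud documentation and your Dash0 organization settings.

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: confluent-cloud
          scrape_interval: 60s
          scrape_timeout: 30s
          scheme: https
          # Confluent Cloud Metrics API export endpoint (Prometheus format);
          # verify the exact path and parameters in the Confluent documentation.
          metrics_path: /v2/metrics/cloud/export
          static_configs:
            - targets: ["api.telemetry.confluent.cloud"]
          params:
            # Placeholder: the ID of the Kafka cluster to scrape.
            "resource.kafka.id": ["lkc-xxxxx"]
          basic_auth:
            username: ${env:CONFLUENT_METRICS_EXPORTER_KEY}
            password: ${env:CONFLUENT_METRICS_EXPORTER_SECRET}

processors:
  resource:
    attributes:
      # Stamp all metrics with the environment name defined earlier.
      - key: deployment.environment
        value: ${env:ENVIRONMENT_NAME}
        action: upsert

exporters:
  otlp:
    # Placeholders: replace with the ingress endpoint and auth token of your Dash0 organization.
    endpoint: ingress.dash0.com:4317
    headers:
      Authorization: Bearer ${env:DASH0_AUTHORIZATION_TOKEN}

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: [resource]
      exporters: [otlp]
```

The resource processor adds the environment name to every metric so that data from different environments can be told apart in Dash0.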

The OpenTelemetry Collector pipeline visualized in OTelBin

Collector Deployment

Learn how to deploy the Collector in our OpenTelemetry Collector integration documentation.
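As a minimal sketch of one possible deployment, assuming a Docker Compose setup with the configuration above saved as config.yaml (the image tag, mount path, and DASH0_AUTHORIZATION_TOKEN variable are illustrative assumptions):

```yaml
# docker-compose.yaml: run the Collector with the configuration above.
services:
  otel-collector:
    # The contrib distribution includes the Prometheus receiver used above.
    image: otel/opentelemetry-collector-contrib:latest
    command: ["--config=/etc/otelcol/config.yaml"]
    volumes:
      - ./config.yaml:/etc/otelcol/config.yaml:ro
    environment:
      # Passed through from the host environment (see "Retrieving Credentials").
      - CONFLUENT_METRICS_EXPORTER_KEY
      - CONFLUENT_METRICS_EXPORTER_SECRET
      - ENVIRONMENT_NAME
      - DASH0_AUTHORIZATION_TOKEN
```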

References

  • Official Confluent Cloud documentation
  • OpenTelemetry Collector configuration in OTelBin

Dashboards

Kafka - Consumer Lag & Cluster Throughput and Traffic

[confluent] [kafka] [prometheus]

Kafka - Request and Response Patterns

[confluent] [kafka] [prometheus]