Last updated: July 11, 2025

Monitoring Heroku Applications with OpenTelemetry

Heroku makes it easy to deploy and run applications, handling much of the operational overhead. One of the more time-consuming aspects of running a system is setting up its monitoring. That’s why, at Dash0, we're excited to hear that Heroku's next-generation Fir platform natively integrates OpenTelemetry (OTel) for observability. The current Cedar platform does not have these facilities, but it is nevertheless possible to monitor your apps running on it with OpenTelemetry.

In this blog post, we explore how to deploy a Demo Spring Boot application with OpenTelemetry auto-instrumentation on Heroku's Cedar platform using Cloud Native Buildpacks. This guide is a variant of our previous Spring on Kubernetes instrumentation guide, tailored for Heroku and featuring hot new OpenTelemetry tips and tricks.

What is Heroku's Cedar Platform?

Heroku's Cedar platform provides the underlying runtime and execution environment for deploying applications on the Heroku platform. It abstracts away the operating system, web server, and process management so developers can package code and let Heroku handle how it runs in the cloud.

What is Spring Boot?

Spring Boot is a framework for building stand-alone, production-grade Spring applications with minimal configuration. It provides opinionated "starter" dependencies (e.g. spring-boot-starter-web, spring-boot-starter-data-jpa) that spare you manual wiring of common libraries, and it packages your app as an executable JAR with an embedded servlet container (Tomcat, Jetty, or Undertow). Health checks, metrics, and externalized configuration work out of the box, allowing you to focus on business logic rather than boilerplate setup.
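
If you want to experiment with a similar setup from scratch, a convenient way to generate a Spring Boot project with starter dependencies is the start.spring.io project generator. A quick sketch, assuming curl and unzip are available (the project name and dependency selection are illustrative):

sh
# Generate a minimal Maven-based Spring Boot project with the "web" and
# "actuator" starters, targeting Java 21, then unpack it locally
curl https://start.spring.io/starter.zip \
  -d type=maven-project \
  -d javaVersion=21 \
  -d dependencies=web,actuator \
  -d baseDir=heroku-demo \
  -o heroku-demo.zip
unzip heroku-demo.zip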

What are Cloud Native Buildpacks (CNBs)?

Writing Dockerfiles is not difficult. Ten lines are often enough to have your application up and running.

Writing good Dockerfiles, on the other hand, is pretty hard. It takes a lot of work to produce production-grade container images. There are many facilities one needs to operate applications smoothly, such as sensible memory settings for the Java Virtual Machine (JVM) and a modern, up-to-date JVM baked in every time you build the image. And then there is monitoring, like configuring an up-to-date OpenTelemetry Java Agent. Additionally, there are security and compliance concerns, such as maintaining a base image with current security patches and a bill of materials. And the list goes on.

Cloud Native Buildpacks, such as Paketo, are a composable way of creating production-grade container images for your applications, without you writing complex Dockerfiles manually.

With a few configurations that one seldom needs to update, running a single command (e.g. ./mvnw spring-boot:build-image) yields a production-ready container image that follows best practices for security, performance, and observability.
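
If you are not using Maven or Gradle, the same kind of build can also be driven with the standalone pack CLI. A minimal sketch, with an illustrative image name:

sh
# Build a container image from the current directory using the Paketo
# builder; the Java buildpack detects the Maven/Gradle project automatically
pack build my-app:latest \
  --builder paketobuildpacks/builder-jammy-base \
  --env BP_JVM_VERSION=21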

What Is OpenTelemetry (OTel)?

OpenTelemetry (OTel) is the CNCF open-source observability project that standardizes how applications collect and export telemetry data, including traces, metrics, and logs. It provides:

  • A unified API and auto-instrumentation agents that can capture typical telemetry (HTTP requests, database calls, JVM metrics, Spring framework internals, etc.) with little to no manual code changes
  • A modular Collector that can route telemetry to multiple backends (Jaeger, Prometheus, Zipkin, OTLP endpoints, etc.)
  • Built-in context propagation so that trace context flows across service boundaries automatically, correlating events end-to-end

By adding the OpenTelemetry Java agent as a buildpack layer, your Spring Boot application can be instrumented at runtime, sending data to any OTLP-compatible backend without altering its existing code.
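
To make the mechanics a bit more concrete, this is roughly what the buildpack automates for you. A hand-rolled sketch for illustration only (the JAR path and service name are placeholders), not something you need to do yourself when using the buildpack:

sh
# Download the OpenTelemetry Java agent and attach it to the JVM at startup;
# the agent is configured entirely via environment variables
curl -sSL -o opentelemetry-javaagent.jar \
  https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/latest/download/opentelemetry-javaagent.jar

OTEL_SERVICE_NAME=my-service \
OTEL_EXPORTER_OTLP_ENDPOINT=https://ingress.<REGION>.aws.dash0.com \
OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <YOUR_DASH0_API_TOKEN>" \
java -javaagent:./opentelemetry-javaagent.jar -jar target/app.jar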

If you want to learn more about OpenTelemetry, a good starting point is Dash0’s “What is OpenTelemetry” knowledge base.

Introducing the Demo Application

To demonstrate these concepts, we'll instrument a simple Spring Boot application and deploy it to Heroku. All the code referenced in this guide (along with instructions to try it yourself) is available in the Dash0 Heroku demo repository: dash0hq/heroku-demo.

Our goal is to use a Cloud Native Buildpack to inject the OpenTelemetry Java agent into the app's container image with minimal manual configuration. Unfortunately, Heroku's Cedar platform currently uses a legacy buildpack system that does not support the Paketo buildpacks out of the box. (Native CNB support is currently only available on the Fir platform.) To bridge this gap, we'll build the container image ourselves using Paketo buildpacks and then upload that image to Heroku's container registry for deployment.

Pro tip: If you're creating a new Heroku app for this, you can specify the container stack during the creation process. For example:

sh
heroku create <YOUR_APP_NAME> --stack container

This command sets your app to use the "container" stack, which allows you to deploy a Docker/OCI image instead of using Heroku's slug build system. (For an existing app, you can run heroku stack:set container -a <YOUR_APP_NAME> to switch to the container stack.)
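
You can verify at any time which stack an app is on:

sh
# Shows the app's stacks; the current one should be "container" for image deploys
heroku stack -a <YOUR_APP_NAME>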

Building the Container Image

Cloud Native Buildpacks come with a CLI tool called pack to build images from source using any available buildpacks. In our case, since we're focusing on Spring Boot, we can utilize the built-in support of the Spring Boot Maven plugin (a similar capability also exists for Gradle). With the proper configuration in our pom.xml, building a container image with all the necessary buildpacks is as easy as running:

sh
./mvnw package spring-boot:build-image

This command compiles the app and invokes the Spring Boot plugin to produce a container image. Next, let's look at the Maven configuration that makes this magic happen.

Configuring the Buildpacks in pom.xml

In our Maven pom, we configure the Spring Boot plugin's build-image goal to use Paketo buildpacks and include the OpenTelemetry agent. Key settings include specifying the Heroku registry as the target, choosing a builder image, and enabling the OTel buildpack. Here's a snippet from the pom.xml:

xml
<plugin>
  <groupId>org.springframework.boot</groupId>
  <artifactId>spring-boot-maven-plugin</artifactId>
  <configuration>
    <image>
      <!-- Tag the image for Heroku's registry -->
      <name>registry.heroku.com/<YOUR_APP_NAME>/web</name>
      <!-- Use Paketo base builder (Ubuntu base with CNB support) -->
      <builder>paketobuildpacks/builder-jammy-base</builder>
      <!-- Ensure the image is built for x86_64 (Heroku dynos run on AMD64) -->
      <imagePlatform>linux/amd64</imagePlatform>
      <cleanCache>true</cleanCache>
      <buildpacks>
        <!-- Use the Paketo Java buildpack (includes JVM, etc.) -->
        <buildpack>docker.io/paketobuildpacks/java</buildpack>
        <!-- Add the Paketo OpenTelemetry Java Agent buildpack (pinned for its experimental configuration file format 0.4 support) -->
        <buildpack>docker.io/paketobuildpacks/opentelemetry:2.12.0</buildpack>
      </buildpacks>
      <env>
        <!-- Use a Java 21 runtime -->
        <BP_JVM_VERSION>21</BP_JVM_VERSION>
        <!-- Enable the OTel agent buildpack -->
        <BP_OPENTELEMETRY_ENABLED>true</BP_OPENTELEMETRY_ENABLED>
        <!-- Include our OpenTelemetry config file in the image -->
        <BP_INCLUDE_FILES>src/main/resources/sdk-config.yaml</BP_INCLUDE_FILES>
        <!-- paketobuildpacks/environment-variables based overrides -->
        <!-- Enable the OpenTelemetry Java agent -->
        <BPE_DEFAULT_OTEL_JAVAAGENT_ENABLED>true</BPE_DEFAULT_OTEL_JAVAAGENT_ENABLED>
        <!-- Tell the OpenTelemetry Java agent to load the experimental configuration file at startup -->
        <BPE_OVERRIDE_OTEL_EXPERIMENTAL_CONFIG_FILE>/workspace/BOOT-INF/classes/sdk-config.yaml</BPE_OVERRIDE_OTEL_EXPERIMENTAL_CONFIG_FILE>
      </env>
    </image>
  </configuration>
</plugin>

This Maven configuration instructs the Spring Boot plugin to use Cloud Native Buildpacks to build the container image. Specifically, it will:

  1. Set up the base image with the correct Java runtime and a process to run an executable JAR (the preferred way to run Spring Boot apps).
  2. Add the OpenTelemetry Java agent to the image and configure the Java runtime to launch with the agent attached.
  3. Bundle the Spring Boot application JAR into the image so that it runs on container startup.

When you run the spring-boot:build-image goal, you'll see output logs from the CNB builder. Notably, the buildpack will contribute multiple layers to the container image. For example, you should see log lines like:

sh
[INFO] [creator] Reusing layer 'paketo-buildpacks/ca-certificates:helper'
[INFO] [creator] Reusing layer 'paketo-buildpacks/bellsoft-liberica:jre'
[INFO] [creator] Reusing layer 'paketo-buildpacks/executable-jar:classpath'
[INFO] [creator] Reusing layer 'paketo-buildpacks/spring-boot:web-application-type'
[INFO] [creator] Reusing layer 'paketo-buildpacks/opentelemetry:opentelemetry-java'
...

Each layer corresponds to a specific responsibility in the image. In fact, Cloud Native Buildpacks assemble the image as a stack of layers, where each buildpack contributes its piece. In our case, layers include:

  • ca-certificates - trusted CA certificates and a helper to load them at runtime
  • bellsoft-liberica - the Java runtime (JRE) itself
  • executable-jar - the classpath and entry point for running the executable JAR
  • spring-boot - Spring Boot-specific settings, such as the web application type
  • opentelemetry - the OpenTelemetry Java agent

All of the above (except the OTel layer) are bundled under the umbrella of the Paketo Java buildpack, which groups common Java-related buildpacks. The beauty of this approach is that the build process is reproducible and easy to update. When new versions of the JVM, the OpenTelemetry agent, or any buildpack improvements are released, you get them simply by rebuilding the image, with no manual Dockerfile edits required.

(For more details on how Cloud Native Buildpacks work, check out the official documentation on buildpacks.io.)
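
Before pushing anything to Heroku, you can optionally inspect and smoke-test the image locally. A quick sketch, assuming Docker and the pack CLI are installed (the agent is switched off for the local run so it does not try to export telemetry anywhere):

sh
# List the buildpacks and processes that went into the image
pack inspect registry.heroku.com/<YOUR_APP_NAME>/web

# Run the image locally; Spring Boot listens on port 8080 by default.
# OTEL_JAVAAGENT_ENABLED=false overrides the BPE_DEFAULT_* setting from the
# build so the agent stays disabled for this throwaway run.
docker run --rm -p 8080:8080 \
  -e OTEL_JAVAAGENT_ENABLED=false \
  registry.heroku.com/<YOUR_APP_NAME>/web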

Deploying the Instrumented App to Heroku

Now, we have a container image for our app, complete with the OpenTelemetry agent. Deploying it to Heroku is relatively straightforward. First, make sure you've logged in to the Heroku Container Registry:

sh
heroku container:login

Next, we need to push our image to Heroku. The Spring Boot build already tagged the image as registry.heroku.com/<YOUR_APP_NAME>/web in our pom.xml. If you haven't done so, be sure to replace <YOUR_APP_NAME> with your actual Heroku app name. We should also configure a few environment variables in Heroku before release:

sh
# Set required config vars on Heroku
heroku config:set \
  OTEL_EXPORTER_OTLP_ENDPOINT=https://ingress.<REGION>.aws.dash0.com \
  OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer <YOUR_DASH0_API_TOKEN>" \
  -a <YOUR_APP_NAME>

Pro tip: For ingestion-only scenarios, you can create an authentication token that is specifically scoped to permit only telemetry ingestion.

Let's break down what these settings do:

  • OTEL_EXPORTER_OTLP_ENDPOINT - This is the endpoint where the OTel agent sends telemetry (traces, metrics, logs) in OTLP format. In this case, we've pointed it to Dash0's ingestion endpoint for the appropriate region. (In your Dash0 account, you can find the correct OTLP endpoint URL in the settings.)
  • OTEL_EXPORTER_OTLP_HEADERS - Many OTLP endpoints (including Dash0's SaaS) require an authentication header, such as an API token. We use this variable to provide any needed headers. For Dash0, you would set it to something like Authorization=Bearer <YOUR_DASH0_API_TOKEN>, as shown above.
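
Before releasing, you can double-check that the config vars are in place:

sh
# Lists the app's config vars, including the two OTEL_* variables set above
heroku config -a <YOUR_APP_NAME>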

With these config vars in place, we can build, push and release our image to Heroku:

sh
# Build the application image
./mvnw package spring-boot:build-image
# Push the image to Heroku's registry
docker push registry.heroku.com/<YOUR_APP_NAME>/web
# Release the image to the Heroku app
heroku container:release web -a <YOUR_APP_NAME>

Once the release finishes, Heroku will spin up a dyno using our container. Our Spring Boot app should start as usual, but now it will have the OpenTelemetry Java agent running inside the JVM. The agent will begin capturing telemetry and sending it to the configured OTLP endpoint.
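
To confirm that the agent is actually attached, tail the app logs right after the release; the OpenTelemetry Java agent prints its version banner during JVM startup:

sh
# Stream the dyno logs; look for the otel.javaagent startup lines
heroku logs --tail -a <YOUR_APP_NAME>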

(If your app is already running, you might need to restart the dyno after setting config vars for the new settings to take effect. In our case, releasing the new image restarts the app anyway.)

The OpenTelemetry SDK Config File

So far, we've enabled the OpenTelemetry agent and pointed it to Dash0. We will get traces, metrics, and logs from our app. However, to make this telemetry truly useful, we need to be able to answer a few simple questions:

  • Which application is this telemetry coming from?
  • Which instance?
  • Which version?

And the list of necessary context goes on. At Dash0, we express this by saying (a lot) that telemetry without context is just data.

OpenTelemetry defines standardized semantic conventions for resources (and other data) to contextualize telemetry consistently. In the context of Heroku, OTel has defined a set of resource attributes for dyno metadata (currently in development status). For example, attributes like heroku.app.id, heroku.release.creation_timestamp, service.name, service.version, and service.instance.id are used to describe a Heroku app and release. Heroku provides many of these values to your running app via environment variables (when the Dyno Metadata labs feature is enabled). There is even a Heroku-specific resource detector in the OpenTelemetry Collector that can read those env vars and set the appropriate attributes. However, that does not help us here: we do not plan to run an OpenTelemetry Collector as a sidecar in the Heroku dyno, but rather have the OpenTelemetry Java agent report directly to Dash0. This also means that the Java agent will not automatically apply the Heroku resource conventions by default.

Fortunately, the OpenTelemetry Java agent features an experimental SDK configuration option that enables us to achieve the same result with a simple YAML file. We can define resource attributes in this configuration file using environment variable substitutions, and the agent will use them to configure the SDK. We've provided such a file (sdk-config.yaml) in our application's resources and included it in the image. Below is an excerpt showing how we map Heroku's env vars to standard OTel attributes:

yaml
sdk-config.yaml
# Configure resource attributes for all signals.
resource:
  attributes:
    - name: cloud.provider
      value: heroku
    - name: heroku.app.id
      value: ${HEROKU_APP_ID:-unknown-app-id}
    # - name: heroku.release.commit
    #   value: ${HEROKU_BUILD_COMMIT:-unknown-release-commit}
    #   (HEROKU_SLUG_COMMIT is deprecated per Heroku Dyno Metadata docs)
    - name: heroku.release.creation_timestamp
      value: ${HEROKU_RELEASE_CREATED_AT:-unknown-release-created-at}
    - name: service.name
      value: ${HEROKU_APP_NAME:-heroku-demo}
    - name: service.version
      value: ${HEROKU_RELEASE_VERSION:-v0}
    - name: service.instance.id
      value: ${HEROKU_DYNO_ID:-unknown-instance-id}

Note that most of the values are collected from the Heroku environment (such as ${HEROKU_APP_ID}), while cloud.provider is hard-coded to "heroku". We have also defined fallbacks for all values coming from the environment, because the OpenTelemetry Java agent reported errors when it could not resolve dynamic values referenced in the configuration file.

Enabling Heroku Dyno Metadata: The HEROKU_* environment variables are not available by default at runtime. Instead, you need to opt in through a Heroku Labs feature before deploying for the first time:

sh
heroku labs:enable runtime-dyno-metadata -a <YOUR_APP_NAME>
heroku labs:enable runtime-dyno-build-metadata -a <YOUR_APP_NAME>

The metadata environment variables will be available after the next deployment.
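
One way to spot-check that the metadata is present (assuming a one-off dyno can run a shell in your image) is:

sh
# Start a one-off dyno and print the Heroku-provided metadata variables
heroku run -a <YOUR_APP_NAME> 'env | grep HEROKU_'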

If you want to learn more about OpenTelemetry resource attributes and their significance, have a look at the Dash0 guide on OTel resource attributes.

Ready to Explore in Dash0

With the application running on Heroku and sending telemetry, we can head over to Dash0 to see the results. All the traces, metrics, and logs emitted by the OpenTelemetry Java agent are immediately visible in Dash0; no additional configuration is needed on the Dash0 side. In our case, we can see the JVM metrics (like memory usage and garbage collection) flowing in.

Perhaps most impressively, because we enabled OpenTelemetry's instrumentation for logs and spans, our application logs are automatically correlated with traces. This means that in Dash0, you can pick a trace and not only see the spans but also see the log entries that were emitted during that trace, complete with trace and span IDs attached. This capability is incredibly useful for debugging!

At this point, our Heroku app is emitting three types of telemetry: distributed traces, metrics, and logs, all of which are ingested by Dash0 and tied together with consistent context. We have full observability into the application's behaviour on the Heroku platform, using open standards and minimal overhead.

If you made it this far, congratulations on enabling OpenTelemetry on Heroku's Cedar stack! We've covered a lot, from buildpacks and OpenTelemetry auto-instrumentation to dyno metadata and SDK configuration files. The payoff is a modern observability setup on an "old school" platform with very little custom configuration. We hope this guide was helpful and lowers the friction for you to try out OTel on Heroku.

As always, we'd love to hear your feedback. If you found this guide useful and are interested in using Dash0 to monitor your Heroku deployments (or any other environment), feel free to reach out or start a free 14-day trial to explore Dash0's unified view of logs, metrics, and traces. Happy monitoring!

Authors
David Aimé Greven