OpenTelemetry has crossed an important threshold.
According to the CNCF Annual Cloud Native Survey, 49% of respondents report running OpenTelemetry in production, with another 26% actively evaluating it. That means three out of four surveyed organizations are either already using OpenTelemetry or seriously considering it as part of their cloud-native stack. OpenTelemetry is no longer an experimental integration or a niche technology for early adopters - it has become an integral part of how modern cloud-native infrastructure is instrumented and understood.
As adoption has grown, expectations have evolved with it.
From instrumentation to infrastructure
For many platform engineers, OpenTelemetry adoption is not primarily about adding observability features. It reflects a broader architectural shift toward open standards, vendor-neutral interfaces, and platforms designed to evolve over time. Treating OpenTelemetry - and particularly the OpenTelemetry Collector - as a shared integration layer allows teams to decouple telemetry production from backend choice and manage observability concerns centrally, rather than pushing complexity into every application.
Virtually all software we push to production is worth observing, and that observability is now increasingly built on top of OpenTelemetry, using instrumentation to collect telemetry.
In practice, this happens in two ways. Automatic instrumentation - via agents, SDK wrappers, or eBPF - helps teams get started quickly by capturing common signals without code changes. But it only gets you so far. The real value comes from native instrumentation, where a project intentionally models its spans, metrics, and logs with the right context, structure, and semantics.
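The difference between the two approaches can be sketched with a stdlib-only stand-in (not the real OpenTelemetry SDK): the attribute names in `native_span` follow OpenTelemetry's stable HTTP semantic conventions, while the function names and dict shape are purely illustrative.

```python
# Illustrative only: a stdlib stand-in for an instrumentation layer, contrasting
# a generically-named span with one modeled using semantic conventions.

def generic_span(method, path, status):
    # What minimal or automatic instrumentation often produces: a span name
    # with unbounded cardinality and ad-hoc attribute keys that downstream
    # tools cannot reliably interpret.
    return {"name": f"{method} {path}", "attrs": {"method": method, "code": status}}

def native_span(method, route, path, status):
    # Native instrumentation: the span name uses the route template (bounded
    # cardinality) and attributes follow OpenTelemetry's stable HTTP
    # semantic conventions.
    return {
        "name": f"{method} {route}",
        "attrs": {
            "http.request.method": method,
            "http.route": route,
            "url.path": path,
            "http.response.status_code": status,
        },
    }

span = native_span("GET", "/orders/{id}", "/orders/42", 200)
```

Note the span name: `GET /orders/{id}` groups all order lookups together, while the generic variant would produce a distinct name per order ID, which is exactly the kind of modeling decision only the project itself can make at the source.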
This is especially important for infrastructure components like ingress controllers, service meshes, API gateways, and workflow engines. These systems sit on the critical path of every request. If they only expose telemetry as an afterthought - or rely on downstream processing to “fix” it - they become blind spots in an otherwise correlated system.
The OpenTelemetry Collector plays an important role in routing and shaping telemetry, but it cannot reconstruct meaning that was never captured. If context, semantics, or relationships between signals are missing at the source, no amount of downstream processing will fully recover them.
This is a subtle but important change. It moves observability out of the realm of tooling preference and into the realm of platform architecture.
At the same time, teams are moving beyond the traditional model of the “three pillars.” Logs, metrics, and traces consumed in isolation each provide only a partial view of system behavior. As systems grow more complex, understanding increasingly depends on correlation across signals, not on any single data type. OpenTelemetry was designed for exactly this - through a shared data model, context propagation, and standardized semantic conventions.
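Context propagation, one of the mechanisms that makes cross-signal correlation possible, is concrete and specified: OpenTelemetry's default propagator carries trace identity in the W3C Trace Context `traceparent` header. A minimal parsing sketch (the header format is from the W3C specification; the function itself is illustrative, not an OpenTelemetry API):

```python
# Sketch: parsing the W3C Trace Context "traceparent" header that OpenTelemetry
# propagators use to carry trace identity across service boundaries.
# Format: version "-" trace-id (32 hex) "-" parent-id (16 hex) "-" flags (2 hex)

def parse_traceparent(header: str) -> dict:
    parts = header.split("-")
    if len(parts) != 4:
        raise ValueError("malformed traceparent")
    version, trace_id, parent_id, flags = parts
    if len(trace_id) != 32 or len(parent_id) != 16 or len(flags) != 2:
        raise ValueError("malformed traceparent")
    if trace_id == "0" * 32 or parent_id == "0" * 16:
        raise ValueError("all-zero ids are invalid per the spec")
    return {
        "version": version,
        "trace_id": trace_id,
        "parent_id": parent_id,
        "sampled": bool(int(flags, 16) & 0x01),  # sampled flag is bit 0
    }

ctx = parse_traceparent("00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01")
```

Every component on the request path has to read, honor, and re-emit this header for a trace to stay connected; one component that drops it splits the trace in two.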
But, as emphasized, this shift only works if telemetry is modeled consistently at the source. If signals are disconnected, semantics are inconsistent, or context is missing, you don’t get correlation - you just get three better silos.
Given that reality, the question facing the ecosystem today is no longer whether a CNCF project supports OpenTelemetry, but what that support actually looks like in practice - and whether it enables a coherent, system-level view or simply rebrands existing limitations.
The limits of a binary label
For years, describing OpenTelemetry support in binary terms was sufficient. Either a project exported traces via the OpenTelemetry Protocol (OTLP) or it did not. That level of clarity was useful when adoption was still limited and expectations were modest. Little attention was paid to the other signals (logs and metrics), the quality of the metadata on those spans, the overall shape of the resulting trace, or the other details that make or break the usefulness of telemetry when troubleshooting is afoot.
At today’s scale, however, evaluating how well a software package integrates with OpenTelemetry needs to be nuanced and grounded in practical considerations of how the resulting telemetry will be used.
In practice, OpenTelemetry support rarely matures evenly across signals. Tracing is often the first signal to receive serious attention and tends to be relatively robust, but there is almost always significant room for improvement. Logging almost always exists already, but it is often unstructured, seldom reliably correlated with traces, and almost never exported over OTLP. Metrics frequently remain Prometheus-native, even when traces and logs are exported using OpenTelemetry protocols. At the same time, resource modeling remains a kind of “Sahara desert”: consistent, well-defined attributes describing what is actually emitting telemetry are often missing, or are derived later in pipelines rather than modeled at the source.
Not all of these patterns represent failure - many reflect the ecosystem’s history and the need for backward compatibility. But some clearly do. Poorly structured traces, missing context propagation, or signals that cannot be correlated across components directly reduce the usefulness of observability data and increase operational cost.
The issue is not that variation exists. The issue is that a single phrase - “supports OpenTelemetry” - collapses that variation into a flat description that obscures meaningful differences.
Two projects can both claim OpenTelemetry support while presenting very different integration surfaces, operational behaviors, and downstream costs. This isn’t just theoretical - it shows up very clearly in practice. For example, two ingress controllers might both emit traces. One produces well-structured spans with consistent attributes and end-to-end context propagation. The other emits spans with generic names, missing attributes, and no reliable linkage to logs or metrics. Both technically “support OpenTelemetry”, but only one enables meaningful debugging without additional pipeline work.
At the adoption levels reflected in the CNCF survey, that ambiguity is no longer harmless. It shows up in architecture reviews, in pipeline complexity, and eventually in production incidents where assumptions about telemetry behavior quietly break down.
OpenTelemetry as a platform contract
When OpenTelemetry becomes part of how a platform operates and evolves over time, its role changes significantly.
It becomes an integration contract between services, infrastructure components, and the observability platform itself. The OpenTelemetry Collector becomes a telemetry control plane that centralizes decisions about routing, enrichment, sampling, and export.
This architectural model is powerful precisely because it allows teams to separate concerns. Application developers focus on emitting meaningful telemetry. Platform teams focus on shaping and governing that telemetry. Vendors can be evaluated or replaced without forcing code changes across the organization.
However, this promise depends on integration clarity.
If a project’s OpenTelemetry support is partial, undocumented, or inconsistent, complexity does not disappear. It moves downstream. Platform teams compensate with Collector pipelines filled with transforms, parsing rules, attribute normalization, and context reconstruction. These pipelines work, but they encode assumptions that are rarely explicit and often brittle.
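The shape of that compensation work can be sketched in a few lines (stdlib Python rather than actual Collector configuration; the legacy attribute keys and the mapping table are hypothetical examples, not taken from any real project):

```python
# Sketch of the downstream "fix-up" work pushed into pipelines when a source
# does not follow semantic conventions. The mapping table encodes assumptions
# about one specific emitter's attribute names; such tables are brittle because
# they silently stop matching when the upstream project renames a field.

LEGACY_TO_SEMCONV = {  # hypothetical legacy keys from one emitter
    "method": "http.request.method",
    "status": "http.response.status_code",
    "uri": "url.path",
}

def normalize_attributes(attrs: dict) -> dict:
    # Remap known legacy keys to semantic-convention names; pass the rest through.
    return {LEGACY_TO_SEMCONV.get(key, key): value for key, value in attrs.items()}

out = normalize_attributes({"method": "GET", "status": 503, "pod": "ingress-0"})
```

Multiply this by every non-conforming component and every signal, and the pipeline itself becomes a second, undocumented source of telemetry semantics.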
Over time, the observability stack becomes harder to reason about. The platform is technically standardized, but operationally bespoke.
OpenTelemetry support, in this sense, is about how well a project participates in a shared, platform-level observability model.
Modern distributed systems don’t fail along neat boundaries. A latency issue might surface in metrics, be traced across services, and only be explained by a log in a downstream dependency. That’s why correlation is no longer optional. OpenTelemetry’s value comes from connecting signals through shared context and consistent semantics - but when that consistency breaks, so does the ability to reason about the system. What used to be a minor inconvenience quickly turns into operational risk at scale.
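Mechanically, correlation is a join on shared context, which is why the correlating fields must exist at the source. A stdlib sketch with made-up records (the log shapes and trace ID are illustrative only):

```python
# Sketch: attaching log records to a trace by joining on a shared trace_id.
# A record emitted without trace context (the "silo" case) cannot be joined
# later, no matter how capable the backend is.

logs = [
    {"msg": "upstream timeout", "trace_id": "4bf92f3577b34da6a3ce929d0e0e4736"},
    {"msg": "connection reset"},  # emitted without trace context: unjoinable
]

def logs_for_trace(records: list, trace_id: str) -> list:
    return [r for r in records if r.get("trace_id") == trace_id]

matched = logs_for_trace(logs, "4bf92f3577b34da6a3ce929d0e0e4736")
```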
Structured telemetry and emerging analysis workflows
Observability workflows are evolving beyond dashboards and static alert thresholds. Whether it's automated analysis, assisted debugging, or AI-driven reasoning across signals, the requirements are the same: structured data, stable semantics, and reliable relationships between signals. OpenTelemetry provides the scaffolding for this, but only when conventions are consistently applied. From this perspective, OpenTelemetry maturity shapes not only ease of integration but also the long-term evolution of observability tooling itself.
Existing ecosystem efforts - and the remaining gap
The OpenTelemetry community is not ignoring these challenges.
Instrumentation Score provides rule-based validation of emitted telemetry, helping ensure correctness and completeness. The OpenTelemetry Ecosystem Explorer catalogs integrations and components, improving discoverability and ecosystem understanding. Both efforts are valuable and necessary.
They answer important questions: does the telemetry that is produced with OpenTelemetry conform to expectations, and does an integration exist at all?
What they do not capture is design intent and integration evolution: whether OpenTelemetry is treated as a primary interface or a secondary export path, how consistently semantic conventions are applied across signals, how stable telemetry behavior is across releases, which telemetry a project is expected to produce with OpenTelemetry, how different signals relate to one another, how “OpenTelemetry-native” the configuration and wiring of telemetry production is within the rest of the observability pipeline, and more.
Those questions are harder to encode in rules. They require language.
Introducing a descriptive maturity model
This is the motivation behind proposing an OpenTelemetry Support Maturity Model for CNCF projects.
The intent is not to create a certification program or a ranking system. It is to provide a shared vocabulary for describing how OpenTelemetry support evolves across multiple dimensions - and to guide projects toward better integrations, while making gaps visible and easier to improve over time.
Different projects will mature along different dimensions at different rates. That is expected. The purpose is not to declare one project superior to another, but to make integration characteristics visible and discussable. The model describes OpenTelemetry support across seven dimensions, each evaluated along a progression from basic instrumentation to fully OpenTelemetry-optimized:
- Integration Surface – how users connect the project to their observability pipelines
- Semantic Conventions – how consistently telemetry meaning aligns with OpenTelemetry conventions
- Resource Attributes & Configuration – how identity and configuration behave across environments
- Trace Modeling & Context Propagation – how traces are structured and how context flows
- Multi-Signal Observability – how traces, metrics, and logs work together in practice
- Audience & Signal Quality – who the telemetry is designed for and how usable it is by default
- Stability & Change Management – how telemetry evolves once users depend on it
The initial draft of the model has already been evaluated against multiple CNCF projects, with an early focus on ingress and gateway implementations where telemetry sits directly on the request path and naturally exercises all three stable signals. That work helped surface recurring patterns and informed the model's structure. The resulting project-level evaluations will be published separately to ground this discussion in concrete, real-world examples - and to invite critique.
A maturity model should not be accepted uncritically. It should be tested, debated, and refined.
Relation to CNCF project maturity and graduation
This timing also coincides with OpenTelemetry itself applying for CNCF graduation, a process that has prompted broader discussion about what maturity should signal to users. While those conversations are still ongoing, they point to a growing need for clearer ways to describe ecosystem integration and operational readiness.
OpenTelemetry support is not a formal graduation requirement, and this proposal does not suggest it should become one. However, as OpenTelemetry moves toward graduation and becomes even more foundational, the ability to describe how projects integrate with it - not just whether they do - becomes increasingly important.
A descriptive maturity model can provide useful context for those conversations without imposing new governance criteria.
A shared responsibility
This proposal is not about enforcing uniformity or issuing badges. It is about improving clarity at a point where OpenTelemetry adoption has become widespread enough that ambiguity around integration characteristics creates real friction. A simple “supports OpenTelemetry” label no longer provides sufficient information for platform teams to make informed architectural decisions.
Greater clarity benefits different parts of the ecosystem in different ways. If you maintain a project, it provides a structured way to describe integration intent, roadmap priorities, and trade-offs without reducing the conversation to pass-or-fail judgments. If you are a platform engineer, it offers a more nuanced lens for evaluating how projects will behave within a standardized OpenTelemetry-based architecture. And if you are an end user, it reduces surprise by making integration characteristics explicit rather than implicit.
A shared vocabulary for describing OpenTelemetry support helps align expectations across these groups. It surfaces trade-offs early, encourages intentional evolution, and ultimately strengthens trust in how projects participate in shared observability infrastructure.
Final thoughts
If you are a maintainer, this proposal invites collaboration rather than evaluation. It offers a structured way to articulate integration characteristics and roadmap decisions more clearly.
If you are a platform engineer, it offers a lens for assessing OpenTelemetry integration beyond binary claims before those decisions become architectural constraints.
If you are part of the CNCF or OpenTelemetry community, this is an opportunity to help shape how we talk about one of the most widely adopted observability standards in the ecosystem.
The ecosystem is ready to move beyond a binary “supports OpenTelemetry” label.
It is time to describe - and evolve - what that support actually means.
Join the discussion and help shape the model.