Last updated: March 6, 2026
Code RED Newsletter #26
If you were expecting a bold new observability manifesto, this isn’t that issue.
Instead, the past couple of weeks have been about tuning rather than transforming. Sampling knobs are being reconsidered. Collector resilience is being improved. Prometheus and OpenTelemetry are learning to coexist more peacefully. A new guide tries to lower the barrier for teams who still find OTel intimidating. And somewhere in the middle of all that, a very honest piece asks what happens when outages multiply faster than ownership.
Not flashy. But important. Let’s dig in.
In Focus: Calibration Over Hype
None of the pieces in this issue scream “major shift.” And that’s the point.
They’re about refinement. Smarter sampling. Clearer integration paths. More approachable guidance. Sharper thinking around responsibility.
It’s the kind of iteration that doesn’t trend - but quietly makes systems more stable.
OpenTelemetry Roadmap: Sampling Rates and Collector Improvements Ahead
At OTel Unplugged in Brussels, the conversation wasn’t about dramatic new features. It was about refinement.
This New Stack piece captures the themes that surfaced during the event: smarter sampling strategies, incremental Collector improvements, and a continued focus on operational resilience. Tail sampling, dynamic controls, and pipeline durability aren’t abstract ideas anymore - they’re practical priorities.
Nothing revolutionary. Just the steady work of making OpenTelemetry stable at scale. For teams running OTel in production, that’s exactly the kind of roadmap signal that matters.
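For context on what tail sampling looks like in practice today, here is a minimal sketch of the Collector's tail_sampling processor, the component these roadmap discussions center on. Policy names and thresholds are illustrative, not recommendations:

```yaml
# Collector config sketch: keep error traces and slow traces, drop the rest.
processors:
  tail_sampling:
    decision_wait: 10s          # buffer spans this long before deciding per trace
    policies:
      - name: keep-errors
        type: status_code
        status_code:
          status_codes: [ERROR]
      - name: keep-slow
        type: latency
        latency:
          threshold_ms: 500     # illustrative threshold
```

The operational pain points raised in Brussels (buffering cost, pipeline durability, dynamic control of these policies) all live in exactly this piece of configuration.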
Prometheus and OpenTelemetry Want To Play Nice
Another thread coming out of OTel Unplugged in Brussels was the evolving relationship between the Prometheus and OpenTelemetry projects.
For years, they’ve often been positioned as competing approaches - one deeply rooted in Kubernetes and pull-based metrics, the other aiming to standardize telemetry across signals.
This New Stack commentary captures a more pragmatic tone from the event: less replacement, more coexistence. Prometheus remains strong in what it does well. OpenTelemetry continues to unify traces, logs, and metrics through open standards. The goal isn’t consolidation - it’s interoperability.
The tension hasn’t entirely disappeared. But the conversation is very practical, with goodwill on both sides. Michele Mancioppi was in the room in Brussels and described the discussion as “very healthy and grounded”.
OpenTelemetry Declarative Configuration JSON Schema Hits 1.0.0

Most OpenTelemetry applications rely on environment variables for configuration - simple, but limiting for advanced use cases like data multiplexing without a Collector or fine-grained control over automatic instrumentation.
The OpenTelemetry Declarative Configuration project addresses this with a structured config model designed for richer setups. It works especially well on (virtual) hosts and in Kubernetes environments using mounted ConfigMaps. With the JSON Schema reaching v1.0.0 and the declarative configuration option formalized in the OpenTelemetry specification (see the spec section here), the model is moving from experimental to stable. Support across SDKs is evolving, with current language status tracked here.
If you’ve ever felt constrained by long chains of OTEL_* variables, this is worth a look.
Read more about the 1.0.0 release here
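To make the contrast with OTEL_* variables concrete, here is a rough sketch of what a declarative configuration file looks like. Key names follow the published 1.0 schema as I understand it, and the endpoint is illustrative; validate any real file against the schema, since SDK support is still evolving:

```yaml
# sdk-config.yaml — declarative configuration sketch (validate against the 1.0 schema)
file_format: "1.0"
tracer_provider:
  processors:
    - batch:
        exporter:
          otlp_http:
            endpoint: http://localhost:4318/v1/traces   # illustrative endpoint
```

SDKs that support the model are pointed at a file like this (for example via an environment variable such as OTEL_EXPERIMENTAL_CONFIG_FILE in some SDKs) instead of a chain of individual OTEL_* settings, which is what unlocks the richer setups the project describes.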
OpenTelemetry Project Publishes “Demystifying OpenTelemetry” Guide
The OpenTelemetry blog recently published a piece titled “Demystifying OpenTelemetry” aimed at making the project more approachable - particularly for teams outside Kubernetes-centric environments.
InfoQ picked up the story and provides a short overview of the guide and its intent. The focus isn’t on new features, but on clarity: explaining what OpenTelemetry is, where it fits, and how to get started without feeling overwhelmed.
It’s a reminder that documentation and framing matter just as much as APIs and SDKs.
One Hundred Outages and Nobody in Charge
This piece stands slightly apart from the others - and that’s precisely why it fits here.
“One Hundred Outages and Nobody in Charge” isn’t about tooling or standards. It’s about ownership. As systems grow more distributed, accountability doesn’t automatically scale with architecture. Incidents become frequent, responsibility becomes blurred, and without clear ownership, recurrence becomes almost inevitable.
Telemetry can provide visibility. But visibility alone doesn’t resolve systemic drift; it only makes the drift observable.
A sharp reminder that observability maturity is as much cultural as it is technical.
Beyond Kubernetes: Platform Engineering, Developer Experience and GenAI with Mauricio Salatino
In this Code RED Podcast episode, I’m joined by Mauricio Salatino for a conversation that stretches beyond Kubernetes YAML and into platform thinking.
We talk about platform engineering as a product, developer experience as a first-class concern, and where GenAI actually fits - beyond hype.
If OpenTelemetry is infrastructure, then platform engineering is the operating model around it. This episode connects those dots clearly.
Telemetry Drops: OpenTelemetry Injector with Michele Mancioppi
In the latest Telemetry Drops episode, Juraci sits down with Michele Mancioppi to unpack the OpenTelemetry Injector - what it is, how it works, and why it exists.
The conversation goes beyond the surface. They cover LD_PRELOAD mechanics, language runtime quirks, Kubernetes operators vs. virtual machine environments, and even why the Injector is written in Zig. It’s a pragmatic look at how to activate auto-instrumentation at scale - especially in environments where rebuilding container images to add OpenTelemetry isn’t an option.
If you’re operating heterogeneous systems and want instrumentation without friction, this one’s worth your time.
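The LD_PRELOAD mechanics they discuss boil down to the dynamic linker loading the Injector’s shared object before the application starts. A sketch of the two activation paths; the library path is illustrative and varies by package and distribution:

```shell
# Per-process activation: the linker loads the injector ahead of the app,
# which then bootstraps auto-instrumentation for the detected runtime.
# (library path is illustrative)
LD_PRELOAD=/usr/lib/opentelemetry/libotelinject.so ./my-service

# System-wide activation: every dynamically linked process on the host
# picks up the injector via the linker's preload list.
echo /usr/lib/opentelemetry/libotelinject.so | sudo tee -a /etc/ld.so.preload
```

This is also why it works without rebuilding container images: activation happens at process start, not at build time.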
Choice Cuts
A few additional ecosystem signals worth your attention.
What’s Up, OTel? It’s Us, Your Community Managers!
The OpenTelemetry community managers share updates, reflections, and a look at the human side of maintaining a fast-growing open standard.
Governance, contributor onboarding, roadmap transparency - it’s easy to forget how much coordination happens behind the scenes.
Standards don’t evolve automatically. People shape them.
The Lookout Agent: Your AI Assistant for Web Performance and User Experience
On the Dash0 side, we’ve added a new specialist to Agent0: the Lookout Agent.
Designed specifically for web performance and user experience, it understands Core Web Vitals, JavaScript errors, session behavior, and how frontend issues correlate with backend traces - all grounded in your actual telemetry data. Instead of navigating dashboards or crafting complex queries, you can ask questions in plain language and get contextual, data-backed answers.
It doesn’t replace dashboards. It complements them, reducing investigation time and making frontend performance analysis more conversational.
A small addition to Agent0 - but a focused one.
If there’s a thread running through this issue, it’s a simple one: refinement. Sampling strategies are getting sharper. Integration paths are becoming clearer. Guidance is improving. And the conversations around ownership are getting more explicit. None of it is dramatic. But it’s the kind of steady iteration that makes systems easier to run.
Observability maturity isn’t about collecting everything by default. It’s about collecting intentionally - and operating it responsibly once it’s in production. Stability is still the most underrated feature.
We’ll be back in two weeks with more from OpenTelemetry, platform engineering, and the ongoing effort to make systems observable without making them overwhelming.
Until then: sample wisely.
Kasper, out.
Hi, my name is Kasper!
I’m Kasper Borg Nissen, Principal Developer Advocate at Dash0. I’m passionate about observability and about bridging the gap to developers through platform engineering. I previously spent eight years as a platform engineer, I’m a former co-chair of KubeCon+CloudNativeCon, and I’m genuinely obsessed with all things cloud-native and open standards.