Last updated: May 7, 2026

Why Resource Equality?

Learn why and how Dash0 automatically merges telemetry from multiple sources into unified resource views by identifying when different signals describe the same observed system.

The same application or infrastructure component often emits telemetry from multiple sources: an OpenTelemetry SDK inside a container, a log agent on the host, or a metrics exporter such as kube-state-metrics. Each source describes the same observed system, but with a different set of resource attributes.

Note

It is difficult, and sometimes simply impossible, to make all your signals (metrics, traces, logs) agree completely on resource attributes. Dash0 solves this for you by automatically identifying when different telemetry sources describe the same resource and pooling resource attributes across them.

The Dash0 Intervention

Without intervention, an observability backend treats each unique signal as a separate resource. The result is resource fragmentation: your pod's spans appear on one resource, its logs on another, and its metrics on a third. You lose the unified view you need to troubleshoot effectively.

Resource equality is Dash0's solution to this problem. It's an expanding set of rules that Dash0 applies at ingestion time to determine when different signals describe the same observed system, so their telemetry can be correlated automatically.
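Conceptually, an equality rule picks an identity attribute and pools the attributes of every signal that shares the same value for it. The following is a minimal sketch of that idea, not Dash0's actual implementation; the attribute names follow OpenTelemetry semantic conventions, and the values are made up:

```python
# Illustrative sketch of resource-equality merging at ingestion time.
# The merge logic is hypothetical; attribute names follow OpenTelemetry
# semantic conventions.

def merge_by_identity(resources, identity_key):
    """Group resource-attribute dicts that share the same identity value
    and pool their attributes into one merged resource per group."""
    merged = {}
    for attrs in resources:
        identity = attrs.get(identity_key)
        if identity is None:
            continue  # no identity attribute: this rule does not apply
        merged.setdefault(identity, {}).update(attrs)
    return merged

# Spans from an OTel SDK and logs from a host agent, same pod:
sdk_spans = {"k8s.pod.uid": "abc-123", "service.name": "checkout",
             "k8s.deployment.name": "checkout"}
agent_logs = {"k8s.pod.uid": "abc-123", "container.name": "app"}

pods = merge_by_identity([sdk_spans, agent_logs], "k8s.pod.uid")
# Both signals now land on one resource carrying the union of attributes.
```

With a key like k8s.pod.uid, both signals resolve to a single resource that carries the SDK's rich metadata as well as the agent's container attributes.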

Tip

For an overview of resource equality concepts and platform-specific rules, see References.

Common Fragmentation Scenarios

This section walks through common scenarios where fragmentation occurs and how Dash0's resource equality rules handle them automatically.

Amazon EKS Pod Logs

When using Fluent Bit or Fluentd to collect Amazon EKS pod logs, the log agent reads from the host's /var/log/pods/<namespace>_<pod_name>_<pod_uid>/<container_name>/ path. Without special configuration, it can only set:

  • k8s.namespace.name
  • k8s.pod.name
  • k8s.pod.uid
  • container.name

Meanwhile, the OpenTelemetry SDK inside the same pod reports a much richer set of attributes: service name, deployment metadata, node information, and more. Despite describing the same pod, these two signals would produce two separate resources without resource equality.
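To see why the log agent's attribute set is so limited, consider what can be derived from the log path alone. The sketch below assumes the standard /var/log/pods layout; it is illustrative and not Fluent Bit's actual parser:

```python
# Sketch: attributes a log agent can infer purely from the kubelet's
# /var/log/pods/<namespace>_<pod_name>_<pod_uid>/<container_name>/ layout.
# Illustrative only.
import re

POD_LOG_PATH = re.compile(
    r"/var/log/pods/(?P<namespace>[^_]+)_(?P<pod>[^_]+)_(?P<uid>[^/]+)"
    r"/(?P<container>[^/]+)/"
)

def attributes_from_log_path(path):
    m = POD_LOG_PATH.match(path)
    if m is None:
        return {}
    return {
        "k8s.namespace.name": m.group("namespace"),
        "k8s.pod.name": m.group("pod"),
        "k8s.pod.uid": m.group("uid"),
        "container.name": m.group("container"),
    }

attrs = attributes_from_log_path(
    "/var/log/pods/shop_checkout-7d4f_1a2b-3c4d/app/0.log")
```

Everything else the SDK reports (service name, deployment metadata, node information) is simply not present in the path, which is why the two signals end up with such different attribute sets.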

How Dash0 Resolves This

The Kubernetes workload equality rules recognize that both sources share k8s.pod.uid and merge them into a single resource. For details on Kubernetes equality rules, see Kubernetes Resources.

Amazon ECS Tasks + CloudWatch Logs

ECS workloads can send logs to CloudWatch using the awslogs driver. The AWS Logs semantic conventions define aws.log.{group|stream}.{arns|names} resource attributes, which can be retrieved from Amazon ECS metadata endpoints.

However, logs fetched from CloudWatch have no metadata about what generated them. Without explicit mapping, there's no way to correlate CloudWatch logs with ECS task telemetry from an embedded SDK.

How Dash0 Resolves This

The ECS equality rule matches resources by aws.ecs.task.arn, bridging the gap between CloudWatch log metadata and SDK-reported attributes. For details on AWS equality rules, see AWS Resources.
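On the SDK side, the aws.ecs.task.arn attribute can be populated from the ECS task metadata document (the JSON served at ${ECS_CONTAINER_METADATA_URI_V4}/task). A hedged sketch, with a truncated example document:

```python
# Sketch: deriving the aws.ecs.task.arn resource attribute from the ECS
# task metadata document. The "TaskARN" field is part of the ECS task
# metadata endpoint v4 response; the function itself is illustrative.

def ecs_resource_attributes(task_metadata):
    arn = task_metadata.get("TaskARN")
    if arn is None:
        return {}
    return {"aws.ecs.task.arn": arn}

metadata = {  # truncated example of a task metadata document
    "Cluster": "prod",
    "TaskARN": "arn:aws:ecs:us-east-1:123456789012:task/prod/0123abcd",
}
attrs = ecs_resource_attributes(metadata)
```

Once both the SDK-reported resource and the CloudWatch-derived resource carry the same task ARN, the equality rule can merge them.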

Vercel Spans and Logs

Vercel spans (via the @vercel/otel package) include detailed resource attributes:

{
  "service": {
    "name": "my-app",
    "namespace": "my-app"
  },
  "cloud": {
    "provider": "Vercel",
    "platform": "Vercel Functions",
    "region": "iad1"
  },
  "vercel": {
    "environment": "production",
    "url": "my-app.vercel.app",
    "deployment_id": "dpl_abc123",
    "project_id": "prj_xyz789"
  },
  "faas": {
    "name": "function-identifier",
    "version": "$LATEST"
  }
}

Vercel log drains provide a different, sparser set of metadata. Without correlation, spans and logs from the same Vercel deployment appear as separate resources.

How Dash0 Resolves This

The Vercel equality rule matches resources by vercel.deployment_id (or vercel.sha) + vercel.project_id, correlating spans and logs from the same Vercel deployment regardless of other attribute differences. For details on Vercel equality rules, see Other Platforms.
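The Vercel rule is an example of a composite identity key. A minimal sketch, assuming the vercel.* attribute names shown above (the matching logic is illustrative, not Dash0's implementation):

```python
# Sketch of a composite identity key for Vercel resources: a deployment
# identifier (vercel.deployment_id, falling back to vercel.sha) combined
# with vercel.project_id. Illustrative only.

def vercel_identity(attrs):
    """Return the (deployment, project) pair used to decide equality,
    or None if the resource lacks a Vercel identity."""
    deployment = attrs.get("vercel.deployment_id") or attrs.get("vercel.sha")
    project = attrs.get("vercel.project_id")
    if deployment is None or project is None:
        return None
    return (deployment, project)

span_resource = {"vercel.deployment_id": "dpl_abc123",
                 "vercel.project_id": "prj_xyz789",
                 "service.name": "my-app"}
log_resource = {"vercel.deployment_id": "dpl_abc123",
                "vercel.project_id": "prj_xyz789"}

same = vercel_identity(span_resource) == vercel_identity(log_resource)
```

Because the identity ignores every attribute outside the composite key, the sparse log-drain metadata and the rich span metadata resolve to the same resource.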

Kubernetes Pod with Service Mesh Sidecar

Kubernetes pods running service mesh solutions, such as Istio or Linkerd, have multiple containers, each potentially reporting different resource attributes:

  • Main application container: Sets service.name=my-api, container.name=app, process.runtime.name=python, telemetry.sdk.name=opentelemetry
  • Envoy sidecar proxy: Sets service.name=envoy, container.name=istio-proxy, process.runtime.name=c++
  • Log collector sidecar: May only report basic Kubernetes metadata

Without resource equality, telemetry from these three containers would appear as three separate resources, making it difficult to get a unified view of pod health.

How Dash0 Resolves This

The Kubernetes workload equality rule matches all containers by k8s.pod.uid, allowing you to query telemetry from any container in the pod using any of their attributes. When multiple containers have conflicting attribute values (like different service.name values), you can filter by any value present across the merged resources. For details on Kubernetes equality rules, see Kubernetes Resources.
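Pooling attributes across containers implies that a merged resource can hold several values for the same key. One way to picture this (a hypothetical sketch, not Dash0's storage model) is to keep each attribute as a set of observed values and match a filter against any of them:

```python
# Sketch: pooled attributes as sets, so a merged pod resource matches a
# filter on any value observed across its containers. Illustrative only.
from collections import defaultdict

def pool_attributes(resources):
    pooled = defaultdict(set)
    for attrs in resources:
        for key, value in attrs.items():
            pooled[key].add(value)
    return pooled

def matches(pooled, key, value):
    return value in pooled.get(key, set())

app = {"k8s.pod.uid": "abc-123", "service.name": "my-api",
       "container.name": "app"}
sidecar = {"k8s.pod.uid": "abc-123", "service.name": "envoy",
           "container.name": "istio-proxy"}

pod = pool_attributes([app, sidecar])
# The merged pod matches a filter on either service name:
hit = matches(pod, "service.name", "my-api") and \
      matches(pod, "service.name", "envoy")
```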

AWS Lambda with CloudWatch Logs

AWS Lambda functions often send logs to CloudWatch using the default runtime logging. Meanwhile, an embedded OpenTelemetry SDK instruments the function to send traces and metrics. These two signals report very different attributes:

  • OTel SDK: Sets faas.name=order-processor, faas.instance=invocation-123, service.name=order-service, cloud.region=us-east-1, telemetry.sdk.*
  • CloudWatch logs: May include only aws.log.group.names=/aws/lambda/order-processor and aws.log.stream.names=2026/04/27/[$LATEST]abc, with limited resource context

Without explicit correlation, there's no automatic way to connect CloudWatch logs with SDK-reported traces and metrics.

How Dash0 Resolves This

The AWS Lambda equality rule uses faas.instance (the unique invocation ID) to correlate all telemetry from the same Lambda invocation, regardless of whether it comes from CloudWatch or an embedded SDK. Each Lambda invocation is treated as a distinct resource. For details on AWS Lambda equality rules, see AWS Resources.
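Correlating by invocation can be pictured as grouping every signal on its faas.instance value, whatever its origin. The signal shapes and values below are illustrative, not actual Dash0 or CloudWatch payloads:

```python
# Sketch: correlating Lambda telemetry by invocation, keyed on
# faas.instance. Signal shapes and values are made up for illustration.
from collections import defaultdict

def group_by_invocation(signals):
    by_invocation = defaultdict(list)
    for signal in signals:
        instance = signal["resource"].get("faas.instance")
        if instance is not None:
            by_invocation[instance].append(signal["body"])
    return by_invocation

signals = [
    {"resource": {"faas.instance": "invocation-123",
                  "service.name": "order-service"},
     "body": "span:checkout"},
    {"resource": {"faas.instance": "invocation-123",
                  "aws.log.group.names": "/aws/lambda/order-processor"},
     "body": "log:START RequestId"},
]

invocations = group_by_invocation(signals)
# The SDK span and the CloudWatch log line land on the same invocation.
```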

Multi-Container Pod with kube-state-metrics

Kubernetes deployments often use kube-state-metrics to expose cluster-level metrics about workload state. These metrics describe resources at the pod or deployment level but contain different attributes than the telemetry from applications running inside those pods:

  • Application in pod: Sets full resource attributes including service.name, k8s.pod.uid, k8s.deployment.name, container.name
  • kube-state-metrics: Reports aggregate metrics with k8s.deployment.name and k8s.namespace.name, but no k8s.pod.uid, since it describes the deployment rather than individual pods

This creates a challenge: how do you correlate deployment-level metrics with pod-level application telemetry?

How Dash0 Resolves This

The Kubernetes resource equality rules include separate rules for pods (by k8s.pod.uid) and deployments (by k8s.deployment.uid or k8s.deployment.name + k8s.namespace.name). kube-state-metrics data is matched at the deployment level, while application telemetry is matched at the pod level, allowing you to view both perspectives. For details on Kubernetes equality rules, see Kubernetes Resources.
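The two levels can be thought of as a precedence order over identity attributes: match as a pod when k8s.pod.uid is present, otherwise fall back to a deployment-level identity. A hypothetical sketch of that fallback (Dash0's actual rules are documented under Kubernetes Resources):

```python
# Sketch of identity precedence: pod identity first, then deployment
# identity by UID, then deployment identity by namespace + name.
# Illustrative only.

def kubernetes_identity(attrs):
    if "k8s.pod.uid" in attrs:
        return ("pod", attrs["k8s.pod.uid"])
    if "k8s.deployment.uid" in attrs:
        return ("deployment", attrs["k8s.deployment.uid"])
    if "k8s.deployment.name" in attrs and "k8s.namespace.name" in attrs:
        return ("deployment",
                attrs["k8s.namespace.name"] + "/"
                + attrs["k8s.deployment.name"])
    return None

app = {"k8s.pod.uid": "abc-123", "k8s.deployment.name": "checkout",
       "k8s.namespace.name": "shop"}
ksm = {"k8s.deployment.name": "checkout", "k8s.namespace.name": "shop"}

app_id = kubernetes_identity(app)  # matched at the pod level
ksm_id = kubernetes_identity(ksm)  # matched at the deployment level
```

The application resource resolves at the pod level while the kube-state-metrics resource resolves at the deployment level, giving you both perspectives without forcing them into one resource.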

Kubernetes Enrichment

Many deployments use the OpenTelemetry Collector with the k8sattributes processor to enrich telemetry with Kubernetes metadata. This can lead to fragmentation when some telemetry passes through the collector while other telemetry doesn't:

  • Via collector: Application sends spans with minimal attributes (service.name=api), collector enriches with k8s.pod.uid, k8s.namespace.name, k8s.deployment.name, k8s.node.name
  • Direct to backend: Host agent sends logs directly with only k8s.pod.uid, k8s.pod.name, container.name

The two paths produce signals with different attributes for the same observed system.

How Dash0 Resolves This

The Kubernetes workload equality rules prioritize k8s.pod.uid as the highest-precedence identifier. Both paths share this attribute, so Dash0 merges them into a single resource, combining the rich metadata from the collector with the direct logs from the host agent. For details on Kubernetes equality rules, see Kubernetes Resources.

Further Reading

For complete resource equality rules and reference material, explore the following resources.

  • About Resources: Overview of resources in Dash0
  • Explore Resources: Get started exploring resources in Dash0
  • References: Complete reference for equality, naming, and typing rules across all supported platforms