Last updated: March 3, 2026

Understand Resource Equality

In practice, the same application or infrastructure component often emits telemetry from multiple sources—an OpenTelemetry SDK inside a container, a log agent on the host, a metrics exporter like kube-state-metrics. Each source describes the same logical resource, but with different sets of resource attributes.

Why Resource Equality Matters

Without intervention, an observability backend treats each unique attribute dictionary as a separate resource. The result is resource fragmentation: your pod's spans appear on one resource, its logs on another, and its metrics on a third. You lose the unified view you need to troubleshoot effectively.

Resource equality is Dash0's solution to this problem. It is not a built-in OpenTelemetry mechanism—it's a set of rules Dash0 applies at ingestion time to determine when different attribute dictionaries describe the same logical resource, so their telemetry can be correlated automatically.

A Quick Example: Amazon EKS Pod Logs

When using Fluent Bit or Fluentd to collect Amazon EKS pod logs, the log agent accesses the host's /var/log/pods/<namespace>_<pod_name>_<pod_id>/<container_name>/ path. Without special configuration, it can only set:

  • k8s.namespace.name
  • k8s.pod.name
  • k8s.pod.uid
  • container.name

Meanwhile, the OpenTelemetry SDK inside the same pod reports a much richer set of attributes—service name, deployment metadata, node information, and more. Despite describing the same pod, these two attribute sets would produce two separate resources without resource equality.

Dash0's Kubernetes workload equality rules recognize that both sources share k8s.pod.uid and merge them into a single coalesced resource.

For more scenarios like this, see Recognize common resource fragmentation scenarios.


How the Rules Work

Resource equality uses a hierarchy of rules, evaluated in order of precedence:

  1. SemConv-based equalities — technology-specific rules using subsets of resource attributes based on OpenTelemetry semantic conventions
  2. Identity — exact attribute dictionary match (same keys, same typed values)

When two resources are matched by any rule, Dash0 merges them into a single coalesced resource. The resulting resource identifier is stored as the dash0.resource.id resource attribute.
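The precedence-ordered evaluation can be pictured as computing a grouping key per resource, trying more specific rules first and falling back to identity. The sketch below is illustrative (only two SemConv rules shown, function name hypothetical):

```python
def equality_key(resource: dict):
    """Return a grouping key for a resource, trying rules in order of
    precedence. Illustrative only: just two SemConv rules are shown."""
    # 1. SemConv-based: Kubernetes workload, by pod UID
    if "k8s.pod.uid" in resource:
        return ("k8s.pod.uid", resource["k8s.pod.uid"])
    # 1. SemConv-based: host
    if "host.id" in resource:
        return ("host.id", resource["host.id"])
    # 2. Identity fallback: the exact attribute dictionary itself
    return tuple(sorted(resource.items()))

# Two sources describing the same pod collapse onto a single key,
# even though their other attributes differ:
a = {"k8s.pod.uid": "uid-1", "container.name": "app"}
b = {"k8s.pod.uid": "uid-1", "service.name": "checkout"}
same_resource = equality_key(a) == equality_key(b)
```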

SemConv-based Equalities

Two resources are considered the same if any of the following attribute subsets match. Rules are listed in descending order of precedence.

Kubernetes Workload Equality

For applications running in pods:

By Pod UID:

  • k8s.pod.uid (unique across clusters)

By Pod Name + Workload UID:

  • k8s.pod.name AND one of: k8s.namespace.uid, k8s.deployment.uid, k8s.daemonset.uid, k8s.replicaset.uid, k8s.statefulset.uid, k8s.cronjob.uid, k8s.job.uid

By Pod Name + Workload Name + Namespace:

  • k8s.pod.name AND one of: k8s.deployment.name, k8s.daemonset.name, k8s.replicaset.name, k8s.statefulset.name, k8s.cronjob.name, k8s.job.name AND k8s.namespace.name
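The three workload rules above could be sketched as follows. This is an illustration of the matching order under the stated attribute sets, not Dash0's code; the function names are hypothetical.

```python
WORKLOAD_UID_KEYS = (
    "k8s.namespace.uid", "k8s.deployment.uid", "k8s.daemonset.uid",
    "k8s.replicaset.uid", "k8s.statefulset.uid", "k8s.cronjob.uid",
    "k8s.job.uid",
)
WORKLOAD_NAME_KEYS = (
    "k8s.deployment.name", "k8s.daemonset.name", "k8s.replicaset.name",
    "k8s.statefulset.name", "k8s.cronjob.name", "k8s.job.name",
)

def shares(a: dict, b: dict, key: str) -> bool:
    """True when both resources set `key` to the same non-null value."""
    return a.get(key) is not None and a.get(key) == b.get(key)

def same_workload(a: dict, b: dict) -> bool:
    # Rule 1: pod UID
    if shares(a, b, "k8s.pod.uid"):
        return True
    # Rules 2 and 3 both require a matching pod name
    if not shares(a, b, "k8s.pod.name"):
        return False
    # Rule 2: pod name + any matching workload/namespace UID
    if any(shares(a, b, k) for k in WORKLOAD_UID_KEYS):
        return True
    # Rule 3: pod name + workload name + namespace name
    return (shares(a, b, "k8s.namespace.name")
            and any(shares(a, b, k) for k in WORKLOAD_NAME_KEYS))
```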

Kubernetes Resource Equality

For aggregate metrics and events about Kubernetes resources (not pods):

Workload Schedulers:

  • k8s.daemonset.uid OR k8s.deployment.uid OR k8s.replicaset.uid OR k8s.statefulset.uid OR k8s.cronjob.uid OR k8s.job.uid

Namespaces:

  • k8s.namespace.uid OR k8s.namespace.name

Kubernetes Node Equality

When k8s.node.name or k8s.node.id is set AND no pod/workload attributes are present:

  • k8s.node.id OR k8s.node.name
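The guard condition ("no pod/workload attributes are present") could be checked like this; the prefix list and function name are hypothetical, chosen to match the workload attributes listed earlier:

```python
POD_WORKLOAD_PREFIXES = (
    "k8s.pod.", "k8s.deployment.", "k8s.daemonset.", "k8s.replicaset.",
    "k8s.statefulset.", "k8s.cronjob.", "k8s.job.",
)

def is_bare_node_resource(res: dict) -> bool:
    """True when the resource names a node but no pod or workload,
    so the node equality rule applies (illustrative check only)."""
    has_node = "k8s.node.id" in res or "k8s.node.name" in res
    has_pod_or_workload = any(
        key.startswith(POD_WORKLOAD_PREFIXES) for key in res
    )
    return has_node and not has_pod_or_workload
```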

Container Equality

For containers not running on Kubernetes (e.g., Docker Desktop):

  • container.id OR container.name

Host Equality

  • host.id OR host.name

Heroku Equality

  • heroku.app.id + service.instance.id

CI/CD Pipeline Equality

  • cicd.pipeline.name + cicd.pipeline.run.id

Vercel Equality

When cloud.provider == "Vercel":

  • cloud.region + faas.name

Amazon ECS Equality

  • ECS workloads: aws.ecs.task.arn
  • ECS clusters: aws.ecs.cluster.arn

Service Equality

As a fallback, resources can be identified by the service triplet:

  • service.namespace + service.name + service.instance.id

To use service-based resource equality, set these environment variables:

  OTEL_SERVICE_NAME=my-service
  OTEL_RESOURCE_ATTRIBUTES="service.instance.id=instance-123"

Note: Service equality has the lowest priority to avoid overriding more specific technology-based equalities.

Identity

Two resources are identical if their attribute dictionaries have equivalent sets of attribute keys (case-sensitive, order-insensitive) and, for each key, the associated values have the same type and are equivalent.
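The type-sensitivity of the identity rule is worth spelling out: two values that compare equal but have different types do not match. A minimal sketch (illustrative function name, not Dash0's code):

```python
def identical(a: dict, b: dict) -> bool:
    """Identity-rule sketch: same keys (case-sensitive, order-insensitive)
    and, per key, the same value type and an equivalent value."""
    if a.keys() != b.keys():
        return False
    return all(type(a[k]) is type(b[k]) and a[k] == b[k] for k in a)

# Plain equality would treat the int 8080 and the float 8080.0 as equal;
# the typed comparison does not:
same = identical({"port": 8080}, {"port": 8080})            # True
typed_mismatch = identical({"port": 8080}, {"port": 8080.0})  # False
```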


Coalesced Resources

When viewing resources in the Map or Resource pages, telemetry from multiple sources is correlated using the equality rules above. The result of merging equal resources is a coalesced resource.

Handling Attribute Conflicts

When merging resources, attribute conflicts are handled as follows:

  Scenario                  | Resource 1 | Resource 2 | Coalesced Result
  --------------------------|------------|------------|------------------
  Attribute missing in both | unset      | unset      | unset
  Present only in first     | value      | unset      | value
  Present only in second    | unset      | value      | value
  Same value in both        | value      | value      | value
  Different values          | value1     | value2     | value1 OR value2
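The table translates directly into a merge function. In this sketch (illustrative, not Dash0's implementation) a genuine conflict keeps both values, modelled here as a set to capture "value1 OR value2":

```python
def merge_attributes(r1: dict, r2: dict) -> dict:
    """Merge two attribute dictionaries following the conflict table:
    an unset side contributes nothing, agreement keeps the value, and a
    conflict keeps both values (modelled as a set)."""
    merged = {}
    for key in r1.keys() | r2.keys():
        v1, v2 = r1.get(key), r2.get(key)
        if v1 is None:
            merged[key] = v2          # present only in second
        elif v2 is None or v1 == v2:
            merged[key] = v1          # present only in first, or same value
        else:
            merged[key] = {v1, v2}    # different values: value1 OR value2
    return merged

result = merge_attributes(
    {"team": "A", "env": "prod"},
    {"team": "B", "region": "eu-west-1"},
)
```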

When querying coalesced resources:

  • Attribute existence: Matches if any underlying resource has the attribute
  • Attribute value: Matches if any underlying resource has that specific value
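These "matches if any underlying resource matches" semantics can be sketched as follows; the helper names are hypothetical, and a coalesced resource is modelled simply as the list of its underlying attribute dictionaries:

```python
def has_attribute(underlying: list, key: str) -> bool:
    """Existence query: true if ANY underlying resource sets the key."""
    return any(key in r for r in underlying)

def has_value(underlying: list, key: str, value) -> bool:
    """Value query: true if ANY underlying resource has that value."""
    return any(r.get(key) == value for r in underlying)

# Two sources merged into one coalesced resource:
sources = [{"team": "A", "env": "prod"}, {"team": "B"}]

# The same coalesced resource matches both `team = A` and `team = B`:
both_match = has_value(sources, "team", "A") and has_value(sources, "team", "B")
```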

For details, see Resolve Resource Attribute Conflicts.

Practical Examples

Finding resources that changed teams: Query for team = A AND team = B to find resources where the team attribute changed during the time range.

Multi-team ownership: A Kubernetes pod with a service mesh sidecar may have attributes from both the platform team (sidecar) and application team (main container). Both teams can find "their" resources using their respective attribute queries.


Best Practices

  1. Use consistent resource attributes across all telemetry sources for the same logical resource
  2. Leverage resource detectors in OpenTelemetry SDKs to automatically populate standard attributes
  3. Configure log agents to include Kubernetes metadata when collecting pod logs
  4. Use the K8sAttributes processor in the OpenTelemetry Collector to enrich telemetry with Kubernetes metadata
  5. Set service.instance.id explicitly when service-based equality is needed